---
skill_id: SKILL_001
version: 2.2
last_updated: 2025-12-28
type: reference
code_dependencies:
- optimization_engine/extractors/__init__.py
- optimization_engine/method_selector.py
- optimization_engine/utils/trial_manager.py
- optimization_engine/utils/dashboard_db.py
requires_skills:
- SKILL_000
---
# Atomizer Quick Reference Cheatsheet

**Version**: 2.2
**Updated**: 2025-12-28
**Purpose**: Rapid lookup for common operations. "I want X → Use Y"
---

## Task → Protocol Quick Lookup

| I want to... | Use Protocol | Key Command/Action |
|--------------|--------------|-------------------|
| Create a new optimization study | OP_01 | Place in `studies/{geometry_type}/`, generate config + runner + **README.md** |
| Run an optimization | OP_02 | `conda activate atomizer && python run_optimization.py` |
| Check optimization progress | OP_03 | Query `study.db` or check dashboard at `localhost:3000` |
| See best results | OP_04 | `optuna-dashboard sqlite:///study.db` or dashboard |
| Export neural training data | OP_05 | `python run_optimization.py --export-training` |
| Fix an error | OP_06 | Read error log → follow diagnostic tree |
| Add custom physics extractor | EXT_01 | Create in `optimization_engine/extractors/` |
| Add lifecycle hook | EXT_02 | Create in `optimization_engine/plugins/` |
| Generate physics insight | SYS_16 | `python -m optimization_engine.insights generate <study>` |
---

## Extractor Quick Reference

| Physics | Extractor | Function Call |
|---------|-----------|---------------|
| Max displacement | E1 | `extract_displacement(op2_file, subcase=1)` |
| Natural frequency | E2 | `extract_frequency(op2_file, subcase=1, mode_number=1)` |
| Von Mises stress | E3 | `extract_solid_stress(op2_file, subcase=1, element_type='cquad4')` |
| BDF mass | E4 | `extract_mass_from_bdf(bdf_file)` |
| CAD expression mass | E5 | `extract_mass_from_expression(prt_file, expression_name='p173')` |
| Field data | E6 | `FieldDataExtractor(field_file, result_column, aggregation)` |
| Stiffness (k=F/δ) | E7 | `StiffnessCalculator(...)` |
| Zernike WFE (standard) | E8 | `extract_zernike_from_op2(op2_file, bdf_file, subcase)` |
| Zernike relative | E9 | `extract_zernike_relative_rms(op2_file, bdf_file, target, ref)` |
| Zernike builder | E10 | `ZernikeObjectiveBuilder(op2_finder)` |
| Part mass + material | E11 | `extract_part_mass_material(prt_file)` → mass, volume, material |
| Zernike Analytic | E20 | `extract_zernike_analytic(op2_file, focal_length=5000.0)` |
| **Zernike OPD** | E22 | `extract_zernike_opd(op2_file)` ← **Most rigorous, RECOMMENDED** |

> **Mass extraction tip**: Always use E11 (geometry .prt) over E4 (BDF) for accuracy.
> pyNastran under-reports mass by ~7% on hex-dominant meshes with tet/pyramid fills.

**Full details**: See `SYS_12_EXTRACTOR_LIBRARY.md` or `modules/extractors-catalog.md`
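The ~7% discrepancy above is easy to catch with a cross-check before trusting either number. A minimal sketch of such a check (the helper names are illustrative, not part of the extractor library):

```python
def mass_discrepancy(bdf_mass_kg: float, cad_mass_kg: float) -> float:
    """Relative discrepancy of BDF-reported mass vs the CAD reference, as a fraction."""
    if cad_mass_kg <= 0:
        raise ValueError("CAD mass must be positive")
    return abs(bdf_mass_kg - cad_mass_kg) / cad_mass_kg

def check_mass_agreement(bdf_mass_kg: float, cad_mass_kg: float, tol: float = 0.05) -> bool:
    """True when the two mass sources agree within `tol` (default 5%)."""
    return mass_discrepancy(bdf_mass_kg, cad_mass_kg) <= tol
```

A >5% disagreement usually means the BDF under-report described above, so prefer the E11 value.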
---

## Protocol Selection Guide

### Single Objective Optimization
```
Question: Do you have ONE goal to minimize/maximize?
├─ Yes, simple problem (smooth, <10 params)
│  └─► Protocol 10 + CMA-ES or GP-BO sampler
│
├─ Yes, complex problem (noisy, many params)
│  └─► Protocol 10 + TPE sampler
│
└─ Not sure about problem characteristics?
   └─► Protocol 10 with adaptive characterization (default)
```

### Multi-Objective Optimization
```
Question: Do you have 2-3 competing goals?
├─ Yes (e.g., minimize mass AND minimize stress)
│  └─► Protocol 11 + NSGA-II sampler
│
└─ Pareto front needed?
   └─► Protocol 11 (returns best_trials, not best_trial)
```

### Neural Network Acceleration
```
Question: Do you need >50 trials OR a surrogate model?
├─ Yes
│  └─► Protocol 14 (configure surrogate_settings in config)
│
└─ Training data export needed?
   └─► OP_05_EXPORT_TRAINING_DATA.md
```
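The trees above can be condensed into a small selector. A sketch only — the function name is hypothetical, and the real system uses Protocol 10's adaptive characterization rather than a hard-coded rule:

```python
from typing import Optional

def recommend_sampler(n_objectives: int, n_params: int = 5,
                      smooth: Optional[bool] = None) -> str:
    """Mirror the decision trees above: multi-objective -> NSGA-II (P11);
    smooth, small single-objective -> CMA-ES (P10); noisy, large, or
    unknown -> TPE (P10 adaptive default)."""
    if n_objectives > 1:
        return "NSGAIISampler"   # Protocol 11, returns a Pareto front
    if smooth and n_params < 10:
        return "CMAESSampler"    # Protocol 10, smooth unimodal problems
    return "TPESampler"          # Protocol 10, robust default
```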
---

## Configuration Quick Reference

### optimization_config.json Structure
```json
{
  "study_name": "my_study",
  "design_variables": [
    {"name": "thickness", "min": 1.0, "max": 10.0, "unit": "mm"}
  ],
  "objectives": [
    {"name": "mass", "goal": "minimize", "unit": "kg"}
  ],
  "constraints": [
    {"name": "max_stress", "type": "<=", "threshold": 250, "unit": "MPa"}
  ],
  "optimization_settings": {
    "protocol": "protocol_10_single_objective",
    "sampler": "TPESampler",
    "n_trials": 50
  },
  "simulation": {
    "model_file": "model.prt",
    "sim_file": "model.sim",
    "solver": "nastran"
  }
}
```
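A config with a missing section or inverted bounds fails mid-run, so validating up front is cheaper. A minimal sketch against the structure above (`validate_config` is a hypothetical helper, not part of the engine):

```python
import json

REQUIRED_KEYS = {"study_name", "design_variables", "objectives", "optimization_settings"}

def validate_config(text: str) -> dict:
    """Parse optimization_config.json text and fail fast on missing
    sections or inverted design-variable bounds."""
    cfg = json.loads(text)
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise KeyError(f"config missing sections: {sorted(missing)}")
    for dv in cfg["design_variables"]:
        if dv["min"] >= dv["max"]:
            raise ValueError(f"variable {dv['name']!r}: min must be < max")
    return cfg
```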
### Sampler Quick Selection

| Sampler | Use When | Protocol |
|---------|----------|----------|
| `TPESampler` | Default, robust to noise | P10 |
| `CMAESSampler` | Smooth, unimodal problems | P10 |
| `GPSampler` | Expensive FEA, few trials | P10 |
| `NSGAIISampler` | Multi-objective (2-3 goals) | P11 |
| `RandomSampler` | Characterization phase only | P10 |
| **L-BFGS** | **Polish phase (after surrogate)** | **P14** |
### L-BFGS Gradient Optimization (NEW)

Exploits surrogate differentiability for **100-1000x faster** local refinement:

```python
from optimization_engine.gradient_optimizer import GradientOptimizer, run_lbfgs_polish

# Quick usage - polish from top FEA candidates
results = run_lbfgs_polish(study_dir, n_starts=20, n_iterations=100)

# Or with more control
optimizer = GradientOptimizer(surrogate, objective_weights=[5.0, 5.0, 1.0])
result = optimizer.optimize(starting_points=top_candidates, method='lbfgs')
```

**CLI usage**:
```bash
python -m optimization_engine.gradient_optimizer studies/my_study --n-starts 20

# Or per-study script (if available)
python run_lbfgs_polish.py --n-starts 20 --grid-then-grad
```

**When to use**: After training the surrogate, before final FEA validation.
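The polish step is essentially multi-start gradient descent on the differentiable surrogate. A self-contained sketch of the idea using `scipy.optimize.minimize`, with a toy quadratic standing in for the MLP (the real `GradientOptimizer` differs; `polish` and `surrogate` here are illustrative only):

```python
import numpy as np
from scipy.optimize import minimize

def surrogate(x: np.ndarray) -> np.ndarray:
    """Toy stand-in for the MLP surrogate: two smooth 'objectives'."""
    return np.array([np.sum((x - 1.0) ** 2), np.sum((x + 0.5) ** 2)])

def polish(starts: np.ndarray, weights=(5.0, 1.0), bounds=(-2.0, 2.0)):
    """Multi-start L-BFGS-B on the weighted-sum objective; keep the best run."""
    w = np.asarray(weights)
    objective = lambda x: float(w @ surrogate(x))
    best = None
    for x0 in starts:
        res = minimize(objective, x0, method="L-BFGS-B",
                       bounds=[bounds] * len(x0),
                       options={"maxiter": 100})
        if best is None or res.fun < best.fun:
            best = res
    return best
```

The weighted sum mirrors `objective_weights`: each start converges to a local optimum of the surrogate in a handful of gradient steps, which is where the 100-1000x speedup over derivative-free sampling comes from.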
---

## Study File Structure

```
studies/{study_name}/
├── 1_setup/
│   ├── model/                   # NX files (.prt, .sim, .fem)
│   └── optimization_config.json
├── 2_results/
│   ├── study.db                 # Optuna SQLite database
│   ├── optimizer_state.json     # Real-time state (P13)
│   └── trial_logs/
├── README.md                    # MANDATORY: Engineering blueprint
├── STUDY_REPORT.md              # MANDATORY: Results tracking
└── run_optimization.py          # Entrypoint script
```

---
## Common Commands

```bash
# Activate environment (ALWAYS FIRST)
conda activate atomizer

# Run optimization
python run_optimization.py --start

# Run with specific trial count
python run_optimization.py --start --trials 50

# Resume interrupted optimization
python run_optimization.py --start --resume

# Export training data for neural network
python run_optimization.py --export-training

# View results in Optuna dashboard
optuna-dashboard sqlite:///3_results/study.db

# Check study status
python -c "import optuna; s=optuna.load_study(study_name='my_study', storage='sqlite:///3_results/study.db'); print(f'Trials: {len(s.trials)}')"
```
### When to Use --resume

| Scenario | Command |
|----------|---------|
| **First run of NEW study** | `python run_optimization.py --start --trials 50` |
| **First run with SEEDING** (e.g., V15 from V14) | `python run_optimization.py --start --trials 50` |
| **Continue INTERRUPTED run** | `python run_optimization.py --start --resume` |
| **Add MORE trials to completed study** | `python run_optimization.py --start --trials 20 --resume` |

**Key insight**: `--resume` is for continuing an existing `study.db`, NOT for seeding from prior studies. Seeding happens automatically on first run when `source_studies` is configured.
---

## LAC (Learning Atomizer Core) Commands

```bash
# View LAC statistics
python knowledge_base/lac.py stats

# Generate full LAC report
python knowledge_base/lac.py report

# View pending protocol updates
python knowledge_base/lac.py pending

# Query insights for a context
python knowledge_base/lac.py insights "bracket mass optimization"
```

### Python API Quick Reference
```python
from knowledge_base.lac import get_lac
lac = get_lac()

# Query prior knowledge
insights = lac.get_relevant_insights("bracket mass")
similar = lac.query_similar_optimizations("bracket", ["mass"])
rec = lac.get_best_method_for("bracket", n_objectives=1)

# Record learning
lac.record_insight("success_pattern", "context", "insight", confidence=0.8)

# Record optimization outcome
lac.record_optimization_outcome(study_name="...", geometry_type="...", ...)
```
---

## Error Quick Fixes

| Error | Likely Cause | Quick Fix |
|-------|--------------|-----------|
| "No module named optuna" | Wrong environment | `conda activate atomizer` |
| "NX session timeout" | Model too complex | Increase `timeout` in config |
| "OP2 file not found" | Solve failed | Check NX log for errors |
| "No feasible solutions" | Constraints too tight | Relax constraint thresholds |
| "NSGA-II requires >1 objective" | Wrong protocol | Use P10 for single-objective |
| "Expression not found" | Wrong parameter name | Verify expression names in NX |
| **All trials identical results** | **Missing `*_i.prt`** | **Copy idealized part to study folder!** |

**Full troubleshooting**: See `OP_06_TROUBLESHOOT.md`
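The "all trials identical" failure mode in the last row is worth detecting automatically before burning a full trial budget. A minimal sketch (hypothetical helper, not part of the engine):

```python
def trials_look_frozen(objective_values, rel_tol: float = 1e-9) -> bool:
    """True when every completed trial returned (numerically) the same
    objective value -- the signature of a mesh that never updates."""
    vals = [v for v in objective_values if v is not None]
    if len(vals) < 2:
        return False  # not enough evidence yet
    lo, hi = min(vals), max(vals)
    return abs(hi - lo) <= rel_tol * max(abs(hi), abs(lo), 1.0)
```

Run it after the first few trials; a `True` here points straight at the missing `*_i.prt` issue described in the next section.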
---

## CRITICAL: NX FEM Mesh Update

**If all optimization trials produce identical results, the mesh is NOT updating!**

### Required Files for Mesh Updates
```
studies/{study}/1_setup/model/
├── Model.prt          # Geometry
├── Model_fem1_i.prt   # Idealized part ← MUST EXIST!
├── Model_fem1.fem     # FEM
└── Model_sim1.sim     # Simulation
```

### Why It Matters
The `*_i.prt` (idealized part) MUST be:
1. **Present** in the study folder
2. **Loaded** before `UpdateFemodel()` (already implemented in `solve_simulation.py`)

Without it, `UpdateFemodel()` runs but the mesh doesn't change!

---
## Privilege Levels

| Level | Can Create Studies | Can Add Extractors | Can Add Protocols |
|-------|-------------------|-------------------|------------------|
| user | ✓ | ✗ | ✗ |
| power_user | ✓ | ✓ | ✗ |
| admin | ✓ | ✓ | ✓ |

---
## Dashboard URLs

| Service | URL | Purpose |
|---------|-----|---------|
| Atomizer Dashboard | `http://localhost:3000` | Real-time optimization monitoring |
| Optuna Dashboard | `http://localhost:8080` | Trial history, parameter importance |
| API Backend | `http://localhost:5000` | REST API for dashboard |

---
## Protocol Numbers Reference

| # | Name | Purpose |
|---|------|---------|
| 10 | IMSO | Intelligent Multi-Strategy Optimization (adaptive) |
| 11 | Multi-Objective | NSGA-II for Pareto optimization |
| 12 | Extractor Library | Physics extraction catalog |
| 13 | Dashboard | Real-time tracking and visualization |
| 14 | Neural | Surrogate model acceleration |
| 15 | Method Selector | Recommends optimization strategy |
| 16 | Study Insights | Physics visualizations (Zernike, stress, modal) |

---
## Study Insights Quick Reference (SYS_16)

Generate physics-focused visualizations from FEA results.

### Available Insight Types

| Type | Purpose | Data Required |
|------|---------|---------------|
| `zernike_dashboard` | **RECOMMENDED: Unified WFE dashboard** | OP2 with displacements |
| `zernike_wfe` | WFE with Standard/OPD toggle | OP2 with displacements |
| `zernike_opd_comparison` | Compare Standard vs OPD methods | OP2 with displacements |
| `stress_field` | Von Mises stress contours | OP2 with stresses |
| `modal` | Mode shapes + frequencies | OP2 with eigenvectors |
| `thermal` | Temperature distribution | OP2 with temperatures |
| `design_space` | Parameter-objective landscape | study.db with 5+ trials |

### Zernike Method Comparison

| Method | Use | RMS Difference |
|--------|-----|----------------|
| **Standard (Z-only)** | Quick analysis | Baseline |
| **OPD (X,Y,Z)** ← RECOMMENDED | Any surface with lateral displacement | **+8-11% higher** (more accurate) |
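Both methods reduce to a least-squares projection of a sampled surface error onto Zernike terms. An illustrative numpy sketch fitting just piston, tip, and tilt — not the project's extractor, which handles full OPD and higher orders:

```python
import numpy as np

def fit_low_order_zernike(x, y, w):
    """Least-squares fit of piston + tip + tilt (Z1..Z3, unnormalized)
    to a surface error w sampled at unit-disk coordinates (x, y).
    Returns the coefficients and the residual RMS after removal."""
    A = np.column_stack([np.ones_like(x), x, y])  # Z1 = 1, Z2 = x, Z3 = y
    coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
    residual = w - A @ coeffs
    return coeffs, float(np.sqrt(np.mean(residual ** 2)))
```

Higher-order terms extend `A` with more columns; the OPD variant builds the sampled error from all three displacement components before fitting, which is where the +8-11% RMS difference comes from.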
### Commands
```bash
# List available insights for a study
python -m optimization_engine.insights list

# Generate all insights
python -m optimization_engine.insights generate studies/my_study

# Generate specific insight
python -m optimization_engine.insights generate studies/my_study --type zernike_wfe
```

### Python API
```python
from optimization_engine.insights import get_insight, list_available_insights
from pathlib import Path

study_path = Path("studies/my_study")

# Check what's available
available = list_available_insights(study_path)

# Generate Zernike WFE insight
insight = get_insight('zernike_wfe', study_path)
result = insight.generate()
print(result.html_path)  # Path to generated HTML
print(result.summary)    # Key metrics dict
```

**Output**: HTMLs saved to `{study}/3_insights/`
---

## CRITICAL: NXSolver Initialization Pattern

**NEVER pass the full config dict to NXSolver. Use named parameters:**

```python
# WRONG - causes TypeError
self.nx_solver = NXSolver(self.config)  # ❌

# CORRECT - use FEARunner pattern from V14/V15
nx_settings = self.config.get('nx_settings', {})
nx_install_dir = nx_settings.get('nx_install_path', 'C:\\Program Files\\Siemens\\NX2506')

# Extract version from path
import re
version_match = re.search(r'NX(\d+)', nx_install_dir)
nastran_version = version_match.group(1) if version_match else "2506"

self.nx_solver = NXSolver(
    master_model_dir=str(self.master_model_dir),  # Path to 1_setup/model
    nx_install_dir=nx_install_dir,
    nastran_version=nastran_version,
    timeout=nx_settings.get('simulation_timeout_s', 600),
    use_iteration_folders=True,
    study_name="my_study_name"
)
```
### FEARunner Class Pattern

Always wrap NXSolver in a `FEARunner` class for:
- Lazy initialization (setup on first use)
- Clean separation of NX setup from optimization logic
- Consistent error handling

```python
from typing import Dict, Optional

class FEARunner:
    def __init__(self, config: Dict):
        self.config = config
        self.nx_solver = None
        self.master_model_dir = SETUP_DIR / "model"  # SETUP_DIR: study's 1_setup path

    def setup(self):
        # Initialize NX and solver here
        ...

    def run_fea(self, params: Dict, trial_num: int) -> Optional[Dict]:
        if self.nx_solver is None:
            self.setup()
        # Run simulation...
```

**Reference implementations**:
- `studies/m1_mirror_adaptive_V14/run_optimization.py`
- `studies/m1_mirror_adaptive_V15/run_optimization.py`
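The lazy-init contract above can be exercised outside NX by substituting a dummy solver. A runnable sketch (`DummySolver` is a stand-in for illustration, not a class in the engine):

```python
from typing import Dict, Optional

class DummySolver:
    """Stand-in for NXSolver so the pattern is runnable without NX."""
    def __init__(self):
        self.calls = 0

    def solve(self, params: Dict) -> Dict:
        self.calls += 1
        return {"status": "ok", "params": params}

class FEARunner:
    """Lazy-init wrapper: the (expensive) solver is built on first use only."""
    def __init__(self, config: Dict):
        self.config = config
        self.solver: Optional[DummySolver] = None

    def setup(self) -> None:
        self.solver = DummySolver()  # NXSolver(...) in the real runner

    def run_fea(self, params: Dict, trial_num: int) -> Optional[Dict]:
        if self.solver is None:  # setup happens exactly once, on the first trial
            self.setup()
        return self.solver.solve(params)
```

Keeping `setup()` out of `__init__` means constructing the runner is cheap, and a study that fails before its first trial never launches an NX session at all.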
---

## Trial Management Utilities

### TrialManager - Unified Trial Folder + DB Management

```python
from optimization_engine.utils.trial_manager import TrialManager

tm = TrialManager(study_dir)

# Start new trial (creates folder, saves params)
trial = tm.new_trial(
    params={'rib_thickness': 10.5, 'mirror_face_thickness': 17.0},
    source="turbo",
    metadata={'turbo_batch': 1, 'predicted_ws': 42.0}
)
# Returns: {'trial_id': 47, 'trial_number': 47, 'folder_path': Path(...)}

# After FEA completes
tm.complete_trial(
    trial_number=trial['trial_number'],
    objectives={'wfe_40_20': 5.63, 'mass_kg': 118.67},
    weighted_sum=42.5,
    is_feasible=True
)

# Mark failed trial
tm.fail_trial(trial_number=47, error="NX solver timeout")
```
### DashboardDB - Optuna-Compatible Database

```python
from optimization_engine.utils.dashboard_db import DashboardDB, convert_custom_to_optuna

# Create new dashboard-compatible database
db = DashboardDB(db_path, study_name="my_study")

# Log a trial
trial_id = db.log_trial(
    params={'rib_thickness': 10.5},
    objectives={'wfe_40_20': 5.63, 'mass_kg': 118.67},
    weighted_sum=42.5,
    is_feasible=True,
    state="COMPLETE"
)

# Mark best trial
db.mark_best(trial_id)

# Get summary
summary = db.get_summary()

# Convert existing custom database to Optuna format
convert_custom_to_optuna(db_path, study_name)
```
### Trial Naming Convention

```
2_iterations/
├── trial_0001/   # Zero-padded, monotonically increasing
├── trial_0002/   # NEVER reset, NEVER overwritten
└── trial_0003/
```

**Key principles**:
- Trial numbers **NEVER reset** across study lifetime
- Surrogate predictions (5K per batch) are NOT logged as trials
- Only FEA-validated results become trials