docs: Complete M1 mirror optimization campaign V11-V15

## M1 Mirror Campaign Summary
- V11-V15 optimization campaign completed (~1,400 FEA evaluations)
- Best design: V14 Trial #725 with Weighted Sum = 121.72
- V15 NSGA-II confirmed V14 TPE found optimal solution
- Campaign improved from WS=129.33 (V11) to WS=121.72 (V14): -5.9%

## Key Results
- 40° tracking: 5.99 nm (target 4.0 nm)
- 60° tracking: 13.10 nm (target 10.0 nm)
- Manufacturing: 26.28 nm (target 20.0 nm)
- Targets are not achievable within the current design space
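The percentage and the target margins above can be sanity-checked with a few lines of Python; every number below is copied from this summary, nothing is recomputed from the studies themselves:

```python
# Quick arithmetic check of the figures in the summary above
best, baseline = 121.72, 129.33
improvement_pct = (best - baseline) / baseline * 100  # negative = better
print(f"improvement: {improvement_pct:.1f}%")  # -> improvement: -5.9%

# Achieved vs. target (nm) for the three objectives
results = {
    "40 deg tracking": (5.99, 4.0),
    "60 deg tracking": (13.10, 10.0),
    "manufacturing":   (26.28, 20.0),
}
for name, (achieved, target) in results.items():
    print(f"{name}: {achieved / target:.2f}x target")
```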

## Documentation Added
- V15 STUDY_REPORT.md: Detailed NSGA-II results analysis
- M1_MIRROR_CAMPAIGN_SUMMARY.md: Full V11-V15 campaign overview
- Updated CLAUDE.md, ATOMIZER_CONTEXT.md with NXSolver patterns
- Updated 01_CHEATSHEET.md with --resume guidance
- Updated OP_01_CREATE_STUDY.md with FEARunner template

## Studies Added
- m1_mirror_adaptive_V13: TPE validation (291 trials)
- m1_mirror_adaptive_V14: TPE intensive (785 trials, BEST)
- m1_mirror_adaptive_V15: NSGA-II exploration (126 new FEA)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Author: Antoine
Date: 2025-12-16 14:55:23 -05:00
Parent: d1261d62fd
Commit: 01a7d7d121
88 changed files with 2574 additions and 62 deletions


Use keyword matching to load appropriate context:
```bash
# Optimization workflow
python run_optimization.py --discover           # 1 trial - model introspection
python run_optimization.py --validate           # 1 trial - verify pipeline
python run_optimization.py --test               # Single-trial sanity check
python run_optimization.py --start --trials 50  # Run a full optimization
python run_optimization.py --start --resume     # Continue an interrupted run
# Neural acceleration
python run_nn_optimization.py --turbo --nn-trials 5000 # Fast NN exploration
python -m optimization_engine.method_selector config.json study.db  # Get recommended method
cd atomizer-dashboard && npm run dev # Start at http://localhost:3003
```
### When to Use --resume
| Scenario | Use --resume? |
|----------|---------------|
| First run of new study | NO |
| First run with seeding (e.g., V15 from V14) | NO - seeding is automatic |
| Continue interrupted run | YES |
| Add more trials to completed study | YES |
**Key**: `--resume` continues the existing `study.db`. Seeding from `source_studies` in the config happens automatically on the first run; don't confuse seeding with resuming!
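The runner's internals aren't shown in this hunk; as a hedged sketch of the decision table above (the helper name and return strings are hypothetical, only the `study.db` / seeding behaviour comes from the text):

```python
from pathlib import Path

def run_mode(study_db: Path, resume: bool) -> str:
    """Hypothetical sketch of the --resume decision table (not the real runner code)."""
    if not study_db.exists():
        # First run: study.db is created, and any source_studies seeding from
        # the config (e.g. V15 seeded from V14) is applied automatically here.
        return "fresh start (seeding applied automatically if configured)"
    if resume:
        # Interrupted run, or adding more trials to a completed study.
        return "resume existing study.db"
    return "refuse: study.db already exists, pass --resume"
```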
### Study Structure (100% standardized)
---
## CRITICAL: NXSolver Initialization Pattern
**NEVER pass full config dict to NXSolver.** This causes `TypeError: expected str, bytes or os.PathLike object, not dict`.
### WRONG
```python
self.nx_solver = NXSolver(self.config) # ❌ NEVER DO THIS
```
### CORRECT - FEARunner Pattern
Always wrap NXSolver in a `FEARunner` class with explicit parameters:
```python
from typing import Dict

# NXSolver and SETUP_DIR come from the project's optimization engine / study setup.

class FEARunner:
    def __init__(self, config: Dict):
        self.config = config
        self.nx_solver = None
        self.master_model_dir = SETUP_DIR / "model"

    def setup(self):
        import re
        nx_settings = self.config.get('nx_settings', {})
        nx_install_dir = nx_settings.get('nx_install_path', 'C:\\Program Files\\Siemens\\NX2506')
        # Derive the Nastran version from the install folder name, e.g. "NX2506" -> "2506"
        version_match = re.search(r'NX(\d+)', nx_install_dir)
        nastran_version = version_match.group(1) if version_match else "2506"
        self.nx_solver = NXSolver(
            master_model_dir=str(self.master_model_dir),
            nx_install_dir=nx_install_dir,
            nastran_version=nastran_version,
            timeout=nx_settings.get('simulation_timeout_s', 600),
            use_iteration_folders=True,
            study_name=self.config.get('study_name', 'my_study')
        )

    def run_fea(self, params, iter_num):
        # Lazy initialization: start NX only on the first FEA call
        if self.nx_solver is None:
            self.setup()
        # ... run simulation
```
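To make the version-parsing step easy to check in isolation, here is the same regex pulled out into a standalone function (the function name is mine, not part of the codebase):

```python
import re

def nastran_version_from_install_dir(nx_install_dir: str) -> str:
    # Same parsing as in FEARunner.setup(): "NX2506" -> "2506",
    # falling back to "2506" when no NX<digits> token is found.
    match = re.search(r'NX(\d+)', nx_install_dir)
    return match.group(1) if match else "2506"

print(nastran_version_from_install_dir(r"C:\Program Files\Siemens\NX2506"))  # -> 2506
```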
**Reference implementations**:
- `studies/m1_mirror_adaptive_V14/run_optimization.py`
- `studies/m1_mirror_adaptive_V15/run_optimization.py`
---
## Skill Registry (Phase 3 - Consolidated Skills)
All skills now have YAML frontmatter with metadata for versioning and dependency tracking.
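The exact frontmatter schema isn't shown in this diff; as an illustrative config sketch only (every field name below is hypothetical):

```yaml
---
# Hypothetical fields for illustration; the real skill schema may differ
name: example-skill
version: 2.1
updated: 2025-12-12
depends_on:
  - optimization_engine
---
```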
| Component | Version | Last Updated |
|-----------|---------|--------------|
| ATOMIZER_CONTEXT | 1.6 | 2025-12-12 |
| BaseOptimizationRunner | 1.0 | 2025-12-07 |
| GenericSurrogate | 1.0 | 2025-12-07 |
| Study State Detector | 1.0 | 2025-12-07 |
| Template Registry | 1.0 | 2025-12-07 |
| Extractor Library | 1.4 | 2025-12-12 |
| Method Selector | 2.1 | 2025-12-07 |
| Protocol System | 2.1 | 2025-12-12 |
| Skill System | 2.1 | 2025-12-12 |
| Auto-Doc Generator | 1.0 | 2025-12-07 |
| Subagent Commands | 1.0 | 2025-12-07 |
| FEARunner Pattern | 1.0 | 2025-12-12 |
---