docs: Complete M1 mirror optimization campaign V11-V15
## M1 Mirror Campaign Summary

- V11-V15 optimization campaign completed (~1,400 FEA evaluations)
- Best design: V14 Trial #725 with Weighted Sum = 121.72
- V15 NSGA-II confirmed V14 TPE found the optimal solution
- Campaign improved from WS=129.33 (V11) to WS=121.72 (V14): -5.9%

## Key Results

- 40° tracking: 5.99 nm (target 4.0 nm)
- 60° tracking: 13.10 nm (target 10.0 nm)
- Manufacturing: 26.28 nm (target 20.0 nm)
- Targets not achievable within the current design space

## Documentation Added

- V15 STUDY_REPORT.md: Detailed NSGA-II results analysis
- M1_MIRROR_CAMPAIGN_SUMMARY.md: Full V11-V15 campaign overview
- Updated CLAUDE.md, ATOMIZER_CONTEXT.md with NXSolver patterns
- Updated 01_CHEATSHEET.md with --resume guidance
- Updated OP_01_CREATE_STUDY.md with FEARunner template

## Studies Added

- m1_mirror_adaptive_V13: TPE validation (291 trials)
- m1_mirror_adaptive_V14: TPE intensive (785 trials, BEST)
- m1_mirror_adaptive_V15: NSGA-II exploration (126 new FEA)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -61,11 +61,9 @@ Use keyword matching to load appropriate context:
```bash
# Optimization workflow
python run_optimization.py --discover            # 1 trial - model introspection
python run_optimization.py --validate            # 1 trial - verify pipeline
python run_optimization.py --test                # 3 trials - quick sanity check
python run_optimization.py --run --trials 50     # Full optimization
python run_optimization.py --resume              # Continue existing study
python run_optimization.py --start --trials 50   # Run optimization
python run_optimization.py --start --resume      # Continue interrupted run
python run_optimization.py --test                # Single trial test

# Neural acceleration
python run_nn_optimization.py --turbo --nn-trials 5000   # Fast NN exploration
@@ -75,6 +73,17 @@ python -m optimization_engine.method_selector config.json study.db # Get recomm
cd atomizer-dashboard && npm run dev   # Start at http://localhost:3003
```

### When to Use --resume

| Scenario | Use --resume? |
|----------|---------------|
| First run of new study | NO |
| First run with seeding (e.g., V15 from V14) | NO - seeding is automatic |
| Continue interrupted run | YES |
| Add more trials to completed study | YES |

**Key**: `--resume` continues an existing `study.db`. Seeding from `source_studies` in config happens automatically on the first run - don't confuse seeding with resuming!
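The flag semantics in the table above can be sketched with a minimal `argparse` stub. This is a hypothetical reconstruction of the CLI surface only; the real `run_optimization.py` wires these flags into the study lifecycle:

```python
import argparse

def parse_cli(argv):
    """Parse the run-mode flags described in the table above (sketch only)."""
    parser = argparse.ArgumentParser(prog="run_optimization.py")
    parser.add_argument("--start", action="store_true", help="Run optimization")
    parser.add_argument("--resume", action="store_true", help="Continue existing study.db")
    parser.add_argument("--trials", type=int, default=50, help="Number of FEA trials")
    return parser.parse_args(argv)

# First run of a new study: no --resume (seeding, if configured, is automatic)
args = parse_cli(["--start", "--trials", "50"])
assert args.start and not args.resume

# Continuing an interrupted run: add --resume
args = parse_cli(["--start", "--resume"])
assert args.resume
```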
### Study Structure (100% standardized)

```
@@ -248,6 +257,54 @@ surrogate.run() # Handles --train, --turbo, --all

---

## CRITICAL: NXSolver Initialization Pattern

**NEVER pass the full config dict to NXSolver.** This causes `TypeError: expected str, bytes or os.PathLike object, not dict`.

### WRONG
```python
self.nx_solver = NXSolver(self.config)  # ❌ NEVER DO THIS
```

### CORRECT - FEARunner Pattern
Always wrap NXSolver in a `FEARunner` class with explicit parameters:

```python
class FEARunner:
    def __init__(self, config: Dict):
        self.config = config
        self.nx_solver = None
        self.master_model_dir = SETUP_DIR / "model"

    def setup(self):
        import re
        nx_settings = self.config.get('nx_settings', {})
        nx_install_dir = nx_settings.get('nx_install_path', 'C:\\Program Files\\Siemens\\NX2506')

        # Derive the Nastran version from the install path, e.g. "NX2506" -> "2506"
        version_match = re.search(r'NX(\d+)', nx_install_dir)
        nastran_version = version_match.group(1) if version_match else "2506"

        self.nx_solver = NXSolver(
            master_model_dir=str(self.master_model_dir),
            nx_install_dir=nx_install_dir,
            nastran_version=nastran_version,
            timeout=nx_settings.get('simulation_timeout_s', 600),
            use_iteration_folders=True,
            study_name=self.config.get('study_name', 'my_study')
        )

    def run_fea(self, params, iter_num):
        if self.nx_solver is None:
            self.setup()
        # ... run simulation
```

**Reference implementations**:
- `studies/m1_mirror_adaptive_V14/run_optimization.py`
- `studies/m1_mirror_adaptive_V15/run_optimization.py`
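The version-extraction step in `setup()` can be exercised in isolation. A small sketch, assuming install paths of the form `...\NX<digits>` (the helper name is illustrative, not part of the codebase):

```python
import re

def nastran_version_from_path(nx_install_dir: str, fallback: str = "2506") -> str:
    """Extract the digits after 'NX' in the install path, falling back if absent."""
    match = re.search(r'NX(\d+)', nx_install_dir)
    return match.group(1) if match else fallback

print(nastran_version_from_path('C:\\Program Files\\Siemens\\NX2506'))  # -> 2506
print(nastran_version_from_path('D:\\Apps\\Siemens'))                   # -> 2506 (fallback)
```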

---

## Skill Registry (Phase 3 - Consolidated Skills)

All skills now have YAML frontmatter with metadata for versioning and dependency tracking.
||||
@@ -354,17 +411,18 @@ python -m optimization_engine.auto_doc templates
| Component | Version | Last Updated |
|-----------|---------|--------------|
| ATOMIZER_CONTEXT | 1.5 | 2025-12-07 |
| ATOMIZER_CONTEXT | 1.6 | 2025-12-12 |
| BaseOptimizationRunner | 1.0 | 2025-12-07 |
| GenericSurrogate | 1.0 | 2025-12-07 |
| Study State Detector | 1.0 | 2025-12-07 |
| Template Registry | 1.0 | 2025-12-07 |
| Extractor Library | 1.3 | 2025-12-07 |
| Extractor Library | 1.4 | 2025-12-12 |
| Method Selector | 2.1 | 2025-12-07 |
| Protocol System | 2.0 | 2025-12-06 |
| Skill System | 2.0 | 2025-12-07 |
| Protocol System | 2.1 | 2025-12-12 |
| Skill System | 2.1 | 2025-12-12 |
| Auto-Doc Generator | 1.0 | 2025-12-07 |
| Subagent Commands | 1.0 | 2025-12-07 |
| FEARunner Pattern | 1.0 | 2025-12-12 |

---
@@ -157,24 +157,35 @@ studies/{study_name}/
conda activate atomizer

# Run optimization
python run_optimization.py
python run_optimization.py --start

# Run with specific trial count
python run_optimization.py --n-trials 100
python run_optimization.py --start --trials 50

# Resume interrupted optimization
python run_optimization.py --resume
python run_optimization.py --start --resume

# Export training data for neural network
python run_optimization.py --export-training

# View results in Optuna dashboard
optuna-dashboard sqlite:///2_results/study.db
optuna-dashboard sqlite:///3_results/study.db

# Check study status
python -c "import optuna; s=optuna.load_study('my_study', 'sqlite:///2_results/study.db'); print(f'Trials: {len(s.trials)}')"
python -c "import optuna; s=optuna.load_study('my_study', 'sqlite:///3_results/study.db'); print(f'Trials: {len(s.trials)}')"
```
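The trial-count check above can also be done without importing Optuna, by querying the `study.db` SQLite file directly. A sketch, assuming Optuna's default RDB schema (one row per trial in a `trials` table); the demo builds a throwaway database with the same table name rather than touching a real study:

```python
import sqlite3, tempfile, os

def count_trials(db_path: str) -> int:
    """Count rows in the 'trials' table of an Optuna study.db."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute("SELECT COUNT(*) FROM trials").fetchone()[0]

# Demo against a throwaway db that mimics the table name only
path = os.path.join(tempfile.mkdtemp(), "study.db")
with sqlite3.connect(path) as conn:
    conn.execute("CREATE TABLE trials (trial_id INTEGER PRIMARY KEY)")
    conn.executemany("INSERT INTO trials (trial_id) VALUES (?)",
                     [(i,) for i in range(3)])
print(count_trials(path))  # -> 3
```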

### When to Use --resume

| Scenario | Command |
|----------|---------|
| **First run of NEW study** | `python run_optimization.py --start --trials 50` |
| **First run with SEEDING** (e.g., V15 from V14) | `python run_optimization.py --start --trials 50` |
| **Continue INTERRUPTED run** | `python run_optimization.py --start --resume` |
| **Add MORE trials to completed study** | `python run_optimization.py --start --trials 20 --resume` |

**Key insight**: `--resume` is for continuing an existing `study.db`, NOT for seeding from prior studies. Seeding happens automatically on first run when `source_studies` is configured.
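Seeding is driven by configuration, not by a CLI flag. A hypothetical config fragment illustrating the idea (the key names `study_name` and `source_studies` appear in the text above; the path is invented for illustration):

```json
{
  "study_name": "m1_mirror_adaptive_V15",
  "source_studies": ["../m1_mirror_adaptive_V14/study.db"]
}
```

With this in place, the first `--start` run imports prior trials automatically; `--resume` is only needed for subsequent runs against the same `study.db`.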

---

## LAC (Learning Atomizer Core) Commands
@@ -276,6 +287,64 @@ Without it, `UpdateFemodel()` runs but the mesh doesn't change!
|---|------|---------|
| 10 | IMSO | Intelligent Multi-Strategy Optimization (adaptive) |
| 11 | Multi-Objective | NSGA-II for Pareto optimization |
| 12 | - | (Reserved) |
| 12 | Extractor Library | Physics extraction catalog |
| 13 | Dashboard | Real-time tracking and visualization |
| 14 | Neural | Surrogate model acceleration |
| 15 | Method Selector | Recommends optimization strategy |

---

## CRITICAL: NXSolver Initialization Pattern

**NEVER pass the full config dict to NXSolver. Use named parameters:**

```python
# WRONG - causes TypeError
self.nx_solver = NXSolver(self.config)  # ❌

# CORRECT - use FEARunner pattern from V14/V15
nx_settings = self.config.get('nx_settings', {})
nx_install_dir = nx_settings.get('nx_install_path', 'C:\\Program Files\\Siemens\\NX2506')

# Extract version from path
import re
version_match = re.search(r'NX(\d+)', nx_install_dir)
nastran_version = version_match.group(1) if version_match else "2506"

self.nx_solver = NXSolver(
    master_model_dir=str(self.master_model_dir),  # Path to 1_setup/model
    nx_install_dir=nx_install_dir,
    nastran_version=nastran_version,
    timeout=nx_settings.get('simulation_timeout_s', 600),
    use_iteration_folders=True,
    study_name="my_study_name"
)
```

### FEARunner Class Pattern

Always wrap NXSolver in a `FEARunner` class for:
- Lazy initialization (setup on first use)
- Clean separation of NX setup from optimization logic
- Consistent error handling

```python
class FEARunner:
    def __init__(self, config: Dict):
        self.config = config
        self.nx_solver = None
        self.master_model_dir = SETUP_DIR / "model"

    def setup(self):
        # Initialize NX and solver here
        ...

    def run_fea(self, params: Dict, trial_num: int) -> Optional[Dict]:
        if self.nx_solver is None:
            self.setup()
        # Run simulation...
```

**Reference implementations**:
- `studies/m1_mirror_adaptive_V14/run_optimization.py`
- `studies/m1_mirror_adaptive_V15/run_optimization.py`
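The lazy-initialization behavior the pattern relies on can be demonstrated generically. A sketch with a stand-in solver object, not the real NXSolver:

```python
class LazyRunner:
    """Defer expensive solver construction until the first run_fea() call."""
    def __init__(self):
        self.solver = None
        self.setup_calls = 0

    def setup(self):
        self.setup_calls += 1
        self.solver = object()  # stand-in for an expensive NXSolver init

    def run_fea(self, params):
        if self.solver is None:   # initialize exactly once, on first use
            self.setup()
        return {"params": params}

runner = LazyRunner()
runner.run_fea({"trial": 1})
runner.run_fea({"trial": 2})
print(runner.setup_calls)  # -> 1 (setup ran only on the first call)
```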