docs: Update documentation for v2.0 module reorganization

- Update feature_registry.json paths to new module locations (v0.3.0)
- Update cheatsheet with new import paths (v2.3)
- Mark migration plan as completed (v3.0)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
commit 820c34c39a (parent eabcc4c3ca)
Date: 2025-12-29 13:01:36 -05:00
3 changed files with 48 additions and 21 deletions
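For downstream code that has to keep working across the reorganization, the affected imports change only in their package path. A minimal compatibility shim, as a sketch: the module and symbol names are taken from the cheatsheet excerpt in this diff, while the try/except pattern itself is illustrative rather than part of the project.

```python
# Sketch: import shim for code that must run against both the pre-v2.0 (flat)
# and post-v2.0 (core/) layouts of optimization_engine. Symbol names are taken
# from the cheatsheet excerpt below; the shim itself is illustrative.
try:
    from optimization_engine.core.gradient_optimizer import GradientOptimizer, run_lbfgs_polish  # v2.0+ layout
except ImportError:
    from optimization_engine.gradient_optimizer import GradientOptimizer, run_lbfgs_polish  # pre-v2.0 layout
```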


@@ -1,11 +1,11 @@
 ---
 skill_id: SKILL_001
-version: 2.2
-last_updated: 2025-12-28
+version: 2.3
+last_updated: 2025-12-29
 type: reference
 code_dependencies:
 - optimization_engine/extractors/__init__.py
-- optimization_engine/method_selector.py
+- optimization_engine/core/method_selector.py
 - optimization_engine/utils/trial_manager.py
 - optimization_engine/utils/dashboard_db.py
 requires_skills:
@@ -14,8 +14,8 @@ requires_skills:
 # Atomizer Quick Reference Cheatsheet
-**Version**: 2.2
-**Updated**: 2025-12-28
+**Version**: 2.3
+**Updated**: 2025-12-29
 **Purpose**: Rapid lookup for common operations. "I want X → Use Y"
 ---
@@ -142,7 +142,7 @@ Question: Do you need >50 trials OR surrogate model?
 Exploits surrogate differentiability for **100-1000x faster** local refinement:
 ```python
-from optimization_engine.gradient_optimizer import GradientOptimizer, run_lbfgs_polish
+from optimization_engine.core.gradient_optimizer import GradientOptimizer, run_lbfgs_polish
 # Quick usage - polish from top FEA candidates
 results = run_lbfgs_polish(study_dir, n_starts=20, n_iterations=100)
@@ -154,7 +154,7 @@ result = optimizer.optimize(starting_points=top_candidates, method='lbfgs')
 **CLI usage**:
 ```bash
-python -m optimization_engine.gradient_optimizer studies/my_study --n-starts 20
+python -m optimization_engine.core.gradient_optimizer studies/my_study --n-starts 20
 # Or per-study script (if available)
 python run_lbfgs_polish.py --n-starts 20 --grid-then-grad
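
For context on the gradient-polish step shown in the excerpt above: the speedup comes from running L-BFGS against the cheap, differentiable surrogate rather than the FEA model. A minimal sketch with SciPy, using a toy quadratic as a stand-in for the project's surrogate (the actual GradientOptimizer API is only what the cheatsheet shows):

```python
# Minimal sketch of L-BFGS "polish" on a differentiable surrogate.
# The quadratic below is a stand-in for the trained surrogate; only the
# pattern (gradient-based local refinement of top candidates) is the point.
import numpy as np
from scipy.optimize import minimize

def surrogate(x):
    # Toy differentiable objective; real code would evaluate the surrogate model.
    return float(np.sum((x - 0.3) ** 2))

def surrogate_grad(x):
    # Analytic gradient of the toy objective.
    return 2.0 * (x - 0.3)

# Refine each top candidate from the global (FEA-backed) search.
top_candidates = [np.zeros(4), np.full(4, 1.0)]
polished = [
    minimize(surrogate, x0, jac=surrogate_grad, method="L-BFGS-B").x
    for x0 in top_candidates
]
print(polished)
```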