feat: Add Protocol 13 adaptive optimization, Plotly charts, and dashboard improvements

## Protocol 13: Adaptive Multi-Objective Optimization
- Iterative FEA + Neural Network surrogate workflow
- Initial FEA sampling, NN training, NN-accelerated search
- FEA validation of top NN predictions, retraining loop
- adaptive_state.json tracks iteration history and best values
- M1 mirror study (V11) with 103 FEA runs and 3000 NN trials

## Dashboard Visualization Enhancements
- Added Plotly.js interactive charts (parallel coords, Pareto, convergence)
- Lazy loading with React.lazy() for performance
- Code splitting: plotly.js-basic-dist (~1 MB vs ~3.5 MB for the full plotly.js bundle)
- Chart library toggle (Recharts default, Plotly on-demand)
- ExpandableChart component for full-screen modal views
- ConsoleOutput component for real-time log viewing

## Documentation
- Protocol 13 detailed documentation
- Dashboard visualization guide
- Plotly components README
- Updated run-optimization skill with Mode 5 (adaptive)

## Bug Fixes
- Fixed TypeScript errors in dashboard components
- Fixed Card component to accept ReactNode title
- Removed unused imports across components

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Commit 8cbdbcad78 (parent e74f1ccf36) by Antoine, 2025-12-04 07:41:54 -05:00
270 changed files with 15471 additions and 517 deletions


@@ -143,6 +143,96 @@ Always validate before:
- Real-time monitoring: **Dashboard at localhost:3000**
- Results analysis: **Both (you interpret, dashboard visualizes)**

## CRITICAL: Code Reuse Protocol (MUST FOLLOW)

### STOP! Before Writing ANY Code in run_optimization.py
**This is the #1 cause of code duplication. Run the checklist EVERY TIME you are about to write:**
- A function longer than 20 lines
- Any physics/math calculations (Zernike, RMS, stress, etc.)
- Any OP2/BDF parsing logic
- Any post-processing or extraction logic
**STOP and run this checklist:**
```
□ Did I check optimization_engine/extractors/__init__.py?
□ Did I grep for similar function names in optimization_engine/?
□ Does this functionality exist somewhere else in the codebase?
```
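The grep step in the checklist can be partially automated. A minimal sketch, assuming the `optimization_engine/extractors/` layout described in this document; the helper name `find_existing_functions` is hypothetical, not part of the repo:

```python
import re
from pathlib import Path

def find_existing_functions(root: str, keyword: str) -> list[str]:
    """Scan a package directory for function definitions whose name contains
    `keyword`, so duplicates are caught before any new code is written."""
    pattern = re.compile(
        rf"^\s*def\s+(\w*{re.escape(keyword)}\w*)\s*\(",
        re.MULTILINE | re.IGNORECASE,
    )
    hits = []
    for py_file in Path(root).rglob("*.py"):
        # errors="ignore" keeps the scan from dying on odd encodings
        for match in pattern.finditer(py_file.read_text(errors="ignore")):
            hits.append(f"{py_file}:{match.group(1)}")
    return hits

# Example: look for existing Zernike helpers before writing your own
# matches = find_existing_functions("optimization_engine/extractors", "zernike")
```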

### The 20-Line Rule
If you're writing a function longer than ~20 lines in `studies/*/run_optimization.py`:
1. **STOP** - This is a code smell
2. **SEARCH** - The functionality probably exists
3. **IMPORT** - Use the existing module
4. **Only if truly new** - Create in `optimization_engine/extractors/`, NOT in the study

### Available Extractors (ALWAYS CHECK FIRST)
| Module | Functions | Use For |
|--------|-----------|---------|
| **`extract_zernike.py`** | `ZernikeExtractor`, `extract_zernike_from_op2`, `extract_zernike_filtered_rms`, `extract_zernike_relative_rms` | Telescope mirror WFE analysis - Noll indexing, RMS calculations, multi-subcase |
| **`zernike_helpers.py`** | `create_zernike_objective`, `ZernikeObjectiveBuilder`, `extract_zernike_for_trial` | Zernike optimization integration |
| **`extract_displacement.py`** | `extract_displacement` | Max/min displacement from OP2 |
| **`extract_von_mises_stress.py`** | `extract_solid_stress` | Von Mises stress extraction |
| **`extract_frequency.py`** | `extract_frequency` | Natural frequencies from OP2 |
| **`extract_mass.py`** | `extract_mass_from_expression` | CAD mass property |
| **`op2_extractor.py`** | Generic OP2 result extraction | Low-level OP2 access |
| **`field_data_extractor.py`** | Field data for neural networks | Training data generation |

### Correct Pattern: Zernike Example
**❌ WRONG - What I did (and must NEVER do again):**
```python
# studies/m1_mirror/run_optimization.py
def noll_indices(j): # 30 lines
...
def zernike_radial(n, m, r): # 20 lines
...
def compute_zernike_coefficients(...): # 80 lines
...
def compute_rms_metrics(...): # 40 lines
...
# Total: 500+ lines of duplicated code
```
**✅ CORRECT - What I should have done:**
```python
# studies/m1_mirror/run_optimization.py
from optimization_engine.extractors import (
ZernikeExtractor,
extract_zernike_for_trial
)
# In objective function - 5 lines instead of 500
extractor = ZernikeExtractor(op2_file, bdf_file)
result = extractor.extract_relative(target_subcase="40", reference_subcase="20")
filtered_rms = result['relative_filtered_rms_nm']
```

### Creating New Extractors (Only When Truly Needed)
When functionality genuinely doesn't exist:
```
1. CREATE module in optimization_engine/extractors/new_feature.py
2. ADD exports to optimization_engine/extractors/__init__.py
3. UPDATE this table in CLAUDE.md
4. IMPORT in run_optimization.py (just the import, not the implementation)
```
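Steps 1–2 above amount to a thin module plus a re-export. A hypothetical skeleton — the module name `new_feature.py`, the function `extract_new_feature`, and the return fields are placeholders, and the OP2 parsing is stubbed out:

```python
# optimization_engine/extractors/new_feature.py  (hypothetical skeleton)

def extract_new_feature(op2_path: str, subcase: str = "1") -> dict:
    """Extract a new quantity from an OP2 file.

    Returns a plain dict so studies/*/run_optimization.py can consume the
    result without importing any parsing internals.
    """
    # A real implementation would open op2_path with an OP2 reader here;
    # stubbed so only the interface shape is shown.
    return {"subcase": subcase, "value": None, "source": op2_path}

# optimization_engine/extractors/__init__.py would then add:
# from .new_feature import extract_new_feature
```

The study then imports `extract_new_feature` (step 4) and never re-implements the parsing.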

### Why This Is Critical
| Embedding Code in Studies | Using Central Extractors |
|---------------------------|-------------------------|
| Bug fixes don't propagate | Fix once, applies everywhere |
| No unit tests | Tested in isolation |
| Hard to discover | Clear API in `__init__.py` |
| Copy-paste errors | Single source of truth |
| 500+ line studies | Clean, readable studies |

## Key Principles
1. **Conversation first** - Don't ask user to edit JSON manually
@@ -150,6 +240,8 @@ Always validate before:
3. **Explain decisions** - Say why you chose a sampler/protocol
4. **Sensible defaults** - User only specifies what they care about
5. **Progressive disclosure** - Start simple, add complexity when needed
6. **NEVER modify master files** - Always copy model files to study working directory before optimization. User's source files must remain untouched. If corruption occurs during iteration, working copy can be deleted and re-copied.
7. **ALWAYS reuse existing code** - Check `optimization_engine/extractors/` BEFORE writing any new post-processing logic. Never duplicate functionality that already exists.
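Principle 6 can be sketched as a copy-then-run guard. Paths and the helper name `stage_working_copy` are illustrative, not the repo's actual API:

```python
import shutil
from pathlib import Path

def stage_working_copy(master_model: str, study_dir: str) -> Path:
    """Copy the master model into the study working directory so the
    optimizer only ever touches the copy. If the copy is corrupted during
    iteration, delete it and re-stage from the untouched master."""
    work_dir = Path(study_dir) / "working"
    work_dir.mkdir(parents=True, exist_ok=True)
    working_copy = work_dir / Path(master_model).name
    shutil.copy2(master_model, working_copy)  # master file is never modified
    return working_copy
```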

## Current State Awareness