refactor: Major project cleanup and reorganization
## Removed Duplicate Directories

- Deleted old `dashboard/` (replaced by atomizer-dashboard)
- Deleted old `mcp_server/` Python tools (moved model_discovery to optimization_engine)
- Deleted `tests/mcp_server/` (obsolete tests)
- Deleted `launch_dashboard.bat` (old launcher)

## Consolidated Code

- Moved `mcp_server/tools/model_discovery.py` to `optimization_engine/model_discovery/`
- Updated import in `optimization_config_builder.py`
- Deleted stub `extract_mass.py` (use extract_mass_from_bdf instead)
- Deleted unused `intelligent_setup.py` and `hybrid_study_creator.py`
- Archived `result_extractors/` to `archive/deprecated/`

## Documentation Cleanup

- Deleted deprecated `docs/06_PROTOCOLS_DETAILED/` (14 files)
- Archived dated dev docs to `docs/08_ARCHIVE/sessions/`
- Archived old plans to `docs/08_ARCHIVE/plans/`
- Updated `docs/protocols/README.md` with SYS_15

## Skills Consolidation

- Archived redundant study creation skills to `.claude/skills/archive/`
- Kept `core/study-creation-core.md` as canonical

## Housekeeping

- Updated `.gitignore` to prevent `nul` and `_dat_run*.dat`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -1,329 +0,0 @@
# Assembly FEM Optimization Workflow

This document describes the multi-part assembly FEM workflow used when optimizing complex assemblies with `.afm` (Assembly FEM) files.

## CRITICAL: Working Copy Requirement

**NEVER run optimization directly on user's master model files.**

Before any optimization run, ALL model files must be copied to the study's working directory:

```
Source (NEVER MODIFY)                          Working Copy (optimization runs here)
────────────────────────────────────────────────────────────────────────────
C:/Users/.../M1-Gigabit/Latest/                studies/{study}/1_setup/model/
├── M1_Blank.prt                         →     ├── M1_Blank.prt
├── M1_Blank_fem1.fem                    →     ├── M1_Blank_fem1.fem
├── M1_Blank_fem1_i.prt                  →     ├── M1_Blank_fem1_i.prt
├── M1_Vertical_Support_Skeleton.prt     →     ├── M1_Vertical_Support_Skeleton.prt
├── ASSY_M1_assyfem1.afm                 →     ├── ASSY_M1_assyfem1.afm
└── ASSY_M1_assyfem1_sim1.sim            →     └── ASSY_M1_assyfem1_sim1.sim
```

**Why**: Optimization iteratively modifies expressions, meshes, and saves files. If corruption occurs during iteration (solver crash, bad parameter combo), the working copy can be deleted and re-copied. Master files remain safe.

**Files to Copy**:
- `*.prt` - All part files (geometry + idealized)
- `*.fem` - All FEM files
- `*.afm` - Assembly FEM files
- `*.sim` - Simulation files
- `*.exp` - Expression files (if any)
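The copy step needs nothing beyond the standard library. A minimal sketch (the function name and directory layout are illustrative, not the project's actual API):

```python
import shutil
from pathlib import Path

# File patterns the workflow copies (the list above)
MODEL_PATTERNS = ["*.prt", "*.fem", "*.afm", "*.sim", "*.exp"]

def copy_model_to_working_dir(source_dir, working_dir):
    """Copy all NX model files into the study's working directory."""
    source = Path(source_dir)
    work = Path(working_dir)
    work.mkdir(parents=True, exist_ok=True)
    copied = []
    for pattern in MODEL_PATTERNS:
        for f in source.glob(pattern):
            shutil.copy2(f, work / f.name)  # copy2 preserves timestamps
            copied.append(f.name)
    return copied
```

The master directory is only ever read, never written, which is the point of the working-copy rule.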
## Overview

Assembly FEMs have a more complex dependency chain than single-part simulations:

```
.prt (geometry) → _fem1.fem (component mesh) → .afm (assembly mesh) → .sim (solution)
```

Each level must be updated in sequence when design parameters change.

## When This Workflow Applies

This workflow is automatically triggered when:
- The working directory contains `.afm` files
- Multiple `.fem` files exist (component meshes)
- Multiple `.prt` files exist (component geometry)

Examples:
- M1 Mirror assembly (M1_Blank + M1_Vertical_Support_Skeleton)
- Multi-component mechanical assemblies
- Any NX assembly where components have separate FEM files

## The 4-Step Workflow

### Step 1: Update Expressions in Geometry Part (.prt)

```
Open M1_Blank.prt
├── Find and update design expressions
│   ├── whiffle_min = 42.5
│   ├── whiffle_outer_to_vertical = 75.0
│   └── inner_circular_rib_dia = 550.0
├── Rebuild geometry (DoUpdate)
└── Save part
```

The `.prt` file contains the parametric CAD model with expressions that drive dimensions. These expressions are updated with new design parameter values, then the geometry is rebuilt.

### Step 1b: Update ALL Linked Geometry Parts (CRITICAL!)

**⚠️ THIS STEP IS CRITICAL - SKIPPING IT CAUSES CORRUPT RESULTS ⚠️**

```
For each geometry part with linked expressions:
├── Open M1_Vertical_Support_Skeleton.prt
├── DoUpdate() - propagate linked expression changes
├── Geometry rebuilds to match M1_Blank
└── Save part
```

**Why this is critical:**
- M1_Vertical_Support_Skeleton has expressions linked to M1_Blank
- When M1_Blank geometry changes, the support skeleton MUST also update
- If not updated, FEM nodes will be at OLD positions → nodes not coincident → merge fails
- Result: "billion nm" RMS values (corrupt displacement data)

**Rule: YOU MUST UPDATE ALL GEOMETRY PARTS UNDER THE .sim FILE!**
- If there are 5 geometry parts, update all 5
- If there are 10 geometry parts, update all 10
- Unless explicitly told otherwise in the study config

### Step 2: Update Component FEM Files (.fem)

```
For each component FEM:
├── Open M1_Blank_fem1.fem
│   ├── UpdateFemodel() - regenerates mesh from updated geometry
│   └── Save FEM
├── Open M1_Vertical_Support_Skeleton_fem1.fem
│   ├── UpdateFemodel()
│   └── Save FEM
└── ... (repeat for all component FEMs)
```

Each component FEM is linked to its source geometry. `UpdateFemodel()` regenerates the mesh based on the updated geometry.

### Step 3: Update Assembly FEM (.afm)

```
Open ASSY_M1_assyfem1.afm
├── UpdateFemodel() - updates assembly mesh
├── Merge coincident nodes (at component interfaces)
├── Resolve labeling conflicts (duplicate node/element IDs)
└── Save AFM
```

The assembly FEM combines component meshes. This step:
- Reconnects meshes at shared interfaces
- Resolves numbering conflicts between component meshes
- Ensures mesh continuity for accurate analysis
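Inside NX the merge is performed by `DuplicateNodesCheckBuilder`; the sketch below only illustrates the underlying idea of tolerance-based coincident-node matching, in plain Python with hypothetical data structures:

```python
def merge_coincident_nodes(nodes_a, nodes_b, tol=0.01):
    """Pair nodes from two component meshes that lie within `tol` (mm).

    nodes_a / nodes_b: dicts of node_id -> (x, y, z).
    Returns a mapping node_id_b -> node_id_a for merged pairs.
    Brute-force O(n*m); real preprocessors use spatial hashing.
    """
    merged = {}
    for id_b, (xb, yb, zb) in nodes_b.items():
        for id_a, (xa, ya, za) in nodes_a.items():
            d2 = (xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
            if d2 <= tol ** 2:
                merged[id_b] = id_a
                break  # each interface node merges to one target
    return merged
```

If Step 1b was skipped, interface nodes sit farther apart than the tolerance, nothing merges, and the assembly behaves as disconnected bodies - the source of the "billion nm" RMS failure described below.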
### Step 4: Solve Simulation (.sim)

```
Open ASSY_M1_assyfem1_sim1.sim
├── Execute solve
│   ├── Foreground mode for all solutions
│   └── or Background mode for specific solution
└── Save simulation
```

The simulation file references the assembly FEM and contains the solution setup (loads, constraints, subcases).

## File Dependencies

```
M1 Mirror Example:

M1_Blank.prt ─────────────────────> M1_Blank_fem1.fem ─────────┐
     │                                   │                     │
     │ (expressions)                     │ (component mesh)    │
     ↓                                   ↓                     │
M1_Vertical_Support_Skeleton.prt ──> M1_..._Skeleton_fem1.fem ─┤
                                                               │
                                                               ↓
                          ASSY_M1_assyfem1.afm ──> ASSY_M1_assyfem1_sim1.sim
                             (assembly mesh)            (solution)
```

## API Functions Used

| Step | NX API Call | Purpose |
|------|-------------|---------|
| 1 | `OpenBase()` | Open .prt file |
| 1 | `ImportFromFile()` | Import expressions from .exp file (preferred) |
| 1 | `DoUpdate()` | Rebuild geometry |
| 2-3 | `UpdateFemodel()` | Regenerate mesh from geometry |
| 3 | `DuplicateNodesCheckBuilder` | Merge coincident nodes at interfaces |
| 3 | `MergeOccurrenceNodes = True` | Critical: enables cross-component merge |
| 4 | `SolveAllSolutions()` | Execute FEA (Foreground mode recommended) |

### Expression Update Method

The recommended approach uses expression file import:

```python
# Write expressions to .exp file
with open(exp_path, 'w') as f:
    for name, value in expressions.items():
        unit = get_unit_for_expression(name)
        f.write(f"[{unit}]{name}={value}\n")

# Import into part
modified, errors = workPart.Expressions.ImportFromFile(
    exp_path,
    NXOpen.ExpressionCollection.ImportMode.Replace
)
```

This is more reliable than `EditExpressionWithUnits()` for batch updates.

## Error Handling

Common issues and solutions:

### "Update undo happened"
- Geometry update failed due to constraint violations
- Check expression values are within valid ranges
- May need to adjust parameter bounds

### "This operation can only be done on the work part"
- Work part not properly set before operation
- Use `SetWork()` to make target part the work part

### Node merge warnings
- Manual intervention may be needed for complex interfaces
- Check mesh connectivity in NX after solve

### "Billion nm" RMS values
- Indicates node merging failed - coincident nodes not properly merged
- Check `MergeOccurrenceNodes = True` is set
- Verify tolerance (0.01 mm recommended)
- Run node merge after every FEM update, not just once

## Configuration

The workflow auto-detects assembly FEMs, but you can configure behavior:

```json
{
  "nx_settings": {
    "expression_part": "M1_Blank",    // Override auto-detection
    "component_fems": [               // Explicit list of FEMs to update
      "M1_Blank_fem1.fem",
      "M1_Vertical_Support_Skeleton_fem1.fem"
    ],
    "afm_file": "ASSY_M1_assyfem1.afm"
  }
}
```

## Implementation Reference

See `optimization_engine/solve_simulation.py` for the full implementation:

- `detect_assembly_fem()` - Detects if assembly workflow needed
- `update_expressions_in_part()` - Step 1 implementation
- `update_fem_part()` - Step 2 implementation
- `update_assembly_fem()` - Step 3 implementation
- `solve_simulation_file()` - Step 4 implementation
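The four steps chain strictly in order, and a failure at any step must abort the trial before the solve. A hedged sketch of that control flow (the step callables stand in for the real functions above; the driver signature is an assumption, not the engine's actual API):

```python
def run_assembly_trial(model_dir, expressions, steps):
    """Run Steps 1-4 in order, stopping at the first failure.

    `steps` is an ordered list of (name, callable) pairs so the real
    NX-backed functions can be injected; each callable returns True
    on success.
    """
    log = []
    for name, step in steps:
        ok = step(model_dir, expressions)
        log.append((name, ok))
        if not ok:
            break  # never solve on top of a failed geometry/mesh update
    return log
```

Injecting the steps keeps the sequencing testable without an NX session.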
## HEEDS-Style Iteration Folder Management (V9+)

For complex assemblies, each optimization trial uses a fresh copy of the master model:

```
study_name/
├── 1_setup/
│   └── model/                 # Master model files (NEVER MODIFY)
│       ├── ASSY_M1.prt
│       ├── ASSY_M1_assyfem1.afm
│       ├── ASSY_M1_assyfem1_sim1.sim
│       ├── M1_Blank.prt
│       ├── M1_Blank_fem1.fem
│       └── ...
├── 2_iterations/
│   ├── iter0/                 # Trial 0 working copy
│   │   ├── [all model files]
│   │   ├── params.exp         # Expression values for this trial
│   │   └── results/           # OP2, Zernike CSV, etc.
│   ├── iter1/                 # Trial 1 working copy
│   └── ...
└── 3_results/
    └── study.db               # Optuna database
```

### Why Fresh Copies Per Iteration?

1. **Corruption isolation**: If mesh regeneration fails mid-trial, only that iteration is affected
2. **Reproducibility**: Can re-run any trial by using its params.exp
3. **Debugging**: All intermediate files preserved for post-mortem analysis
4. **Parallelization**: Multiple NX sessions could run different iterations (future)
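A minimal sketch of preparing one iteration folder (the function name is hypothetical; the real logic lives in the optimization engine, and the `[mm]` unit prefix is assumed here for illustration):

```python
import shutil
from pathlib import Path

def prepare_iteration(study_dir, trial_number, params):
    """Create 2_iterations/iter{N} as a fresh copy of 1_setup/model.

    Writes params.exp alongside the model files so the trial can be
    re-run later from its own folder.
    """
    study = Path(study_dir)
    master = study / "1_setup" / "model"
    iter_dir = study / "2_iterations" / f"iter{trial_number}"
    shutil.copytree(master, iter_dir)   # fresh working copy; fails if it exists
    (iter_dir / "results").mkdir()
    exp_lines = [f"[mm]{name}={value}" for name, value in params.items()]
    (iter_dir / "params.exp").write_text("\n".join(exp_lines) + "\n")
    return iter_dir
```

Because `copytree` refuses to overwrite an existing folder, a crashed trial's folder must be deleted explicitly before a re-run, which keeps post-mortem evidence from being clobbered by accident.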
### Iteration Folder Contents

| File | Purpose |
|------|---------|
| `*.prt, *.fem, *.afm, *.sim` | Fresh copy of all NX model files |
| `params.exp` | Expression file with trial parameter values |
| `*-solution_1.op2` | Nastran results (after solve) |
| `results/zernike_trial_N.csv` | Extracted Zernike metrics |

### 0-Based Iteration Numbering

Iterations are numbered starting from 0 to match Optuna trial numbers:
- `iter0` = Optuna trial 0 = Dashboard shows trial 0
- `iter1` = Optuna trial 1 = Dashboard shows trial 1

This ensures cross-referencing between dashboard, database, and file system is straightforward.

## Multi-Subcase Solutions

For gravity analysis at multiple orientations, use subcases:

```
Simulation Setup in NX:
├── Subcase 1: 90 deg elevation (zenith/polishing)
├── Subcase 2: 20 deg elevation (low angle reference)
├── Subcase 3: 40 deg elevation
└── Subcase 4: 60 deg elevation
```

### Solving All Subcases

Use `solution_name=None` or `solve_all_subcases=True` to ensure all subcases are solved:

```json
"nx_settings": {
  "solution_name": "Solution 1",
  "solve_all_subcases": true
}
```

### Subcase ID Mapping

NX subcase IDs (1, 2, 3, 4) may not match the angle labels. Always define explicit mapping:

```json
"zernike_settings": {
  "subcases": ["1", "2", "3", "4"],
  "subcase_labels": {
    "1": "90deg",
    "2": "20deg",
    "3": "40deg",
    "4": "60deg"
  },
  "reference_subcase": "2"
}
```
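Applying the mapping is then a small dictionary lookup. A sketch (the function name and the per-subcase metric input are illustrative, not the extractor's actual interface):

```python
def label_subcases(zernike_settings, metric_by_subcase):
    """Relabel raw NX subcase IDs with angle labels and pick the reference.

    metric_by_subcase maps raw subcase IDs ("1".."4") to a scalar metric;
    zernike_settings is the config fragment shown above.
    """
    labels = zernike_settings["subcase_labels"]
    labelled = {labels[sid]: metric_by_subcase[sid]
                for sid in zernike_settings["subcases"]}
    ref_label = labels[zernike_settings["reference_subcase"]]
    return labelled, ref_label
```

Driving everything off the config (rather than hard-coding "subcase 1 is 90deg") is what makes the mapping safe when NX renumbers subcases.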
## Tips

1. **Start with baseline solve**: Before optimization, manually verify the full workflow completes in NX
2. **Check mesh quality**: Poor mesh quality after updates can cause solve failures
3. **Monitor memory**: Assembly FEMs with many components use significant memory
4. **Use Foreground mode**: For multi-subcase solutions, Foreground mode ensures all subcases complete
5. **Validate OP2 data**: Check for corrupt results (all zeros, unrealistic magnitudes) before processing
6. **Preserve user NX sessions**: NXSessionManager tracks PIDs to avoid closing user's NX instances
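Tip 5 can be automated with a cheap sanity check before post-processing. A sketch (the magnitude threshold is a per-model assumption, not a project constant):

```python
def displacements_look_valid(displacements_mm, max_expected_mm=10.0):
    """Quick sanity check on extracted displacement magnitudes (tip 5).

    Flags all-zero results and absurd magnitudes, e.g. the "billion nm"
    signature of a failed node merge.
    """
    if not displacements_mm or all(d == 0.0 for d in displacements_mm):
        return False  # solver produced no usable data
    if max(abs(d) for d in displacements_mm) > max_expected_mm:
        return False  # corrupt / non-physical magnitudes
    return True
```

Rejecting the trial here, rather than feeding garbage into the Zernike fit, keeps corrupt iterations out of the optimization history.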
@@ -1,228 +0,0 @@
# Atomizer Dashboard Visualization Guide

## Overview

The Atomizer Dashboard provides real-time visualization of optimization studies with interactive charts, trial history, and study management. It supports two chart libraries:

- **Recharts** (default): Fast, lightweight, good for real-time updates
- **Plotly**: Interactive with zoom, pan, export - better for analysis

## Starting the Dashboard

```bash
# Quick start (both backend and frontend)
python start_dashboard.py

# Or manually:
cd atomizer-dashboard/backend && python -m uvicorn main:app --port 8000
cd atomizer-dashboard/frontend && npm run dev
```

Access at: http://localhost:3003

## Chart Components

### 1. Pareto Front Plot

Visualizes the trade-off between objectives in multi-objective optimization.

**Features:**
- 2D scatter plot for 2 objectives
- 3D view for 3+ objectives (Plotly only)
- Color differentiation: FEA (blue), NN (orange), Pareto (green)
- Axis selector for choosing which objectives to display
- Hover tooltips with trial details

**Usage:**
- Click points to select trials
- Use axis dropdowns to switch objectives
- Toggle 2D/3D view (Plotly mode)

### 2. Parallel Coordinates Plot

Shows relationships between all design variables and objectives simultaneously.

**Features:**
- Each vertical axis represents a variable or objective
- Lines connect values for each trial
- Brush filtering: drag on any axis to filter
- Color coding by trial source (FEA/NN/Pareto)

**Usage:**
- Drag on axes to create filters
- Double-click to reset filters
- Hover for trial details

### 3. Convergence Plot

Tracks optimization progress over time.

**Features:**
- Scatter points for each trial's objective value
- Step line showing best-so-far
- Range slider for zooming (Plotly)
- FEA vs NN differentiation

**Metrics Displayed:**
- Best value achieved
- Current trial value
- Total trial count
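The best-so-far step line is just a running minimum (or maximum) over trial values. A self-contained sketch of the computation:

```python
def best_so_far(values, minimize=True):
    """Running best used for the convergence step line."""
    best = []
    current = None
    for v in values:
        if current is None or (v < current if minimize else v > current):
            current = v
        best.append(current)
    return best
```

Each plateau in the resulting series is a stretch of trials that failed to improve on the incumbent, which is what makes the step line a quick visual convergence check.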
### 4. Parameter Importance Chart

Shows which design variables most influence the objective.

**Features:**
- Horizontal bar chart of correlation coefficients
- Color coding: Red (positive), Green (negative)
- Sortable by importance or name
- Pearson correlation calculation

**Interpretation:**
- Positive correlation: Higher parameter → Higher objective
- Negative correlation: Higher parameter → Lower objective
- |r| > 0.5: Strong influence
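For reference, the statistic behind the chart is plain Pearson's r over one parameter column and the objective column; a self-contained sketch:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between a parameter column and the objective."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Note that Pearson's r only measures linear influence; a parameter with a strong quadratic effect on the objective can still show |r| near zero.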
### 5. Expandable Charts

All charts support full-screen modal view:

**Features:**
- Click expand icon to open modal
- Larger view for detailed analysis
- Maintains all interactivity
- Close with X or click outside

## Chart Library Toggle

Switch between Recharts and Plotly using the header buttons:

| Feature | Recharts | Plotly |
|---------|----------|--------|
| Load Speed | Fast | Slower (lazy loaded) |
| Interactivity | Basic | Advanced |
| Export | Screenshot | PNG/SVG native |
| 3D Support | No | Yes |
| Real-time Updates | Better | Good |

**Recommendation:**
- Use Recharts during active optimization (real-time)
- Switch to Plotly for post-optimization analysis

## Study Management

### Study Selection

- Left sidebar shows all available studies
- Click to select and load data
- Badge shows study status (running/completed)

### Metrics Cards

Top row displays key metrics:
- **Trials**: Total completed trials
- **Best Value**: Best objective achieved
- **Pruned**: Trials pruned by sampler

### Trial History

Bottom section shows trial details:
- Trial number and objective value
- Parameter values (expandable)
- Source indicator (FEA/NN)
- Sort by performance or chronological

## Report Viewer

Access generated study reports:

1. Click "View Report" button
2. Markdown rendered with syntax highlighting
3. Supports tables, code blocks, math

## Console Output

Real-time log viewer:

- Shows optimization progress
- Error messages highlighted
- Auto-scroll to latest
- Collapsible panel

## API Endpoints

The dashboard uses these REST endpoints:

```
GET /api/optimization/studies                 # List all studies
GET /api/optimization/studies/{id}/status     # Study status
GET /api/optimization/studies/{id}/history    # Trial history
GET /api/optimization/studies/{id}/metadata   # Study config
GET /api/optimization/studies/{id}/pareto     # Pareto front
GET /api/optimization/studies/{id}/report     # Markdown report
GET /api/optimization/studies/{id}/console    # Log output
```

## WebSocket Updates

Real-time updates via WebSocket:

```
ws://localhost:8000/api/ws/optimization/{study_id}
```

Events:
- `trial_completed`: New trial finished
- `trial_pruned`: Trial was pruned
- `new_best`: New best value found

## Performance Optimization

### For Large Studies (1000+ trials)

1. Use Recharts for real-time monitoring
2. Switch to Plotly for final analysis
3. Limit displayed trials in parallel coordinates

### Bundle Optimization

The dashboard uses:
- `plotly.js-basic-dist` (smaller bundle, ~1MB vs 3.5MB)
- Lazy loading for Plotly components
- Code splitting (vendor, recharts, plotly chunks)

## Troubleshooting

### Charts Not Loading

1. Check backend is running (port 8000)
2. Verify API proxy in vite.config.ts
3. Check browser console for errors

### Slow Performance

1. Switch to Recharts mode
2. Reduce trial history limit
3. Close unused browser tabs

### Missing Data

1. Verify study.db exists
2. Check study has completed trials
3. Refresh page after new trials

## Development

### Adding New Charts

1. Create component in `src/components/`
2. Add Plotly version in `src/components/plotly/`
3. Export from `src/components/plotly/index.ts`
4. Add to Dashboard.tsx with toggle logic

### Styling

Uses Tailwind CSS with dark theme:
- Background: `dark-800`, `dark-900`
- Text: `dark-100`, `dark-200`
- Accent: `primary-500`, `primary-600`
@@ -1,471 +0,0 @@
# LLM-Orchestrated Atomizer Workflow

## Core Philosophy

**Atomizer is LLM-first.** The user talks to Claude Code, describes what they want in natural language, and the LLM orchestrates everything:

- Interprets engineering intent
- Creates optimized configurations
- Sets up study structure
- Runs optimizations
- Generates reports
- Implements custom features

**The dashboard is for monitoring, not setup.**

---

## Architecture: Skills + Protocols + Validators

```
┌─────────────────────────────────────────────────────────────────────────┐
│                         USER (Natural Language)                         │
│  "I want to optimize this drone arm for weight while keeping it stiff"  │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────┐
│                      CLAUDE CODE (LLM Orchestrator)                     │
│                                                                         │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐    │
│  │    SKILLS    │ │  PROTOCOLS   │ │  VALIDATORS  │ │  KNOWLEDGE   │    │
│  │  (.claude/   │ │  (docs/06_)  │ │   (Python)   │ │   (docs/)    │    │
│  │  commands/)  │ │              │ │              │ │              │    │
│  └──────────────┘ └──────────────┘ └──────────────┘ └──────────────┘    │
│         │                │                │                │            │
│         └────────────────┴────────────────┴────────────────┘            │
│                                    │                                    │
│                         ORCHESTRATION LOGIC                             │
│                  (Intent → Plan → Execute → Validate)                   │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────┐
│                            ATOMIZER ENGINE                              │
│                                                                         │
│  ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐        │
│  │   Config    │ │   Runner    │ │ Extractors  │ │  Reports    │        │
│  │  Generator  │ │  (FEA/NN)   │ │  (OP2/CAD)  │ │  Generator  │        │
│  └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘        │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────┐
│                          OUTPUTS (User-Visible)                         │
│                                                                         │
│  • study/1_setup/optimization_config.json   (config)                    │
│  • study/2_results/study.db                 (optimization data)         │
│  • reports/                                 (visualizations)            │
│  • Dashboard at localhost:3000              (live monitoring)           │
└─────────────────────────────────────────────────────────────────────────┘
```

---

## The Three Pillars

### 1. SKILLS (What LLM Can Do)
Location: `.claude/skills/*.md`

Skills are **instruction sets** that tell Claude Code how to perform specific tasks with high rigor. They're like recipes that ensure consistency.

```
.claude/skills/
├── create-study.md          # Create new optimization study
├── analyze-model.md         # Analyze NX model for optimization
├── configure-surrogate.md   # Setup NN surrogate settings
├── generate-report.md       # Create performance reports
├── troubleshoot.md          # Debug common issues
└── extend-feature.md        # Add custom functionality
```

### 2. PROTOCOLS (How To Do It Right)
Location: `docs/06_PROTOCOLS_DETAILED/`

Protocols are **step-by-step procedures** that define the correct sequence for complex operations. They ensure rigor and reproducibility.

```
docs/06_PROTOCOLS_DETAILED/
├── PROTOCOL_01_STUDY_SETUP.md
├── PROTOCOL_02_MODEL_VALIDATION.md
├── PROTOCOL_03_OPTIMIZATION_RUN.md
├── PROTOCOL_11_MULTI_OBJECTIVE.md
├── PROTOCOL_12_HYBRID_SURROGATE.md
└── LLM_ORCHESTRATED_WORKFLOW.md (this file)
```

### 3. VALIDATORS (Verify It's Correct)
Location: `optimization_engine/validators/`

Validators are **Python modules** that check configurations, outputs, and state. They catch errors before they cause problems.

```python
# Example: optimization_engine/validators/config_validator.py
def validate_optimization_config(config: dict) -> ValidationResult:
    """Ensure config is valid before running."""
    errors = []
    warnings = []

    # Check required fields
    if 'design_variables' not in config:
        errors.append("Missing design_variables")

    # Check bounds make sense
    for var in config.get('design_variables', []):
        if var['bounds'][0] >= var['bounds'][1]:
            errors.append(f"{var['parameter']}: min >= max")

    return ValidationResult(errors, warnings)
```
---

## Master Skill: `/create-study`

This is the primary entry point. When the user says "I want to optimize X", this skill orchestrates everything.

### Skill File: `.claude/skills/create-study.md`

````markdown
# Create Study Skill

## Trigger
User wants to create a new optimization study.

## Required Information (Gather via conversation)

### 1. Model Information
- [ ] NX model file location (.prt)
- [ ] Simulation file (.sim)
- [ ] FEM file (.fem)
- [ ] Analysis types (static, modal, buckling, etc.)

### 2. Engineering Goals
- [ ] What to optimize (minimize mass, maximize stiffness, etc.)
- [ ] Target values (if any)
- [ ] Constraints (max stress, min frequency, etc.)
- [ ] Engineering context (what is this part for?)

### 3. Design Variables
- [ ] Which parameters can change
- [ ] Bounds for each (min/max)
- [ ] Integer vs continuous

### 4. Optimization Settings
- [ ] Number of trials
- [ ] Single vs multi-objective
- [ ] Enable NN surrogate? (recommend for >50 trials)

## Execution Steps

### Step 1: Analyze Model
Read the NX model to:
- Extract existing expressions (potential design variables)
- Identify geometry features
- Check simulation setup

### Step 2: Generate Configuration
Create optimization_config.json with:
- All gathered information
- Sensible defaults for missing info
- Appropriate protocol selection

### Step 3: Validate Configuration
Run config validator to check:
- All required fields present
- Bounds are reasonable
- Extraction methods are correct

### Step 4: Create Study Structure
```
studies/{study_name}/
├── 1_setup/
│   ├── model/                    # Copy of NX files
│   ├── optimization_config.json
│   └── workflow_config.json
├── 2_results/
└── README.md
```

### Step 5: Confirm with User
Show summary:
- Design variables and bounds
- Objectives and constraints
- Recommended settings
- Estimated time

### Step 6: Start Optimization (if confirmed)
Run the optimization with appropriate flags.

## Error Handling
- If model not found: Ask for correct path
- If expressions not found: List available ones
- If config invalid: Show errors, suggest fixes

## Output
- Valid study folder structure
- Validated optimization_config.json
- README with study description
- Optional: Start optimization run
````
|
||||
|
||||
---
|
||||
|
||||
## Conversation Flow Example
|
||||
|
||||
```
|
||||
USER: I want to optimize this bracket for weight. The model is at
|
||||
examples/Models/Bracket/bracket.prt
|
||||
|
||||
CLAUDE: I'll help you set up an optimization study for the bracket. Let me
|
||||
first analyze your model to understand its structure.
|
||||
|
||||
[Reads model, extracts expressions]
|
||||
|
||||
I found these parametric expressions in your model:
|
||||
- wall_thickness (current: 2mm)
|
||||
- fillet_radius (current: 3mm)
|
||||
- hole_diameter (current: 10mm)
|
||||
- rib_count (current: 4)
|
||||
|
||||
Which of these should be design variables for optimization?
|
||||
|
||||
USER: All of them. Wall thickness 1-4mm, fillet 2-8mm, hole 5-20mm,
|
||||
ribs 2-6.
|
||||
|
||||
CLAUDE: Got it. Now for the engineering requirements:
|
||||
|
||||
1. What's the loading condition? (I see a static analysis in your sim)
|
||||
2. Any stress limits? (Material yield strength?)
|
||||
3. Any displacement limits?
|
||||
4. Is there a target weight you're trying to achieve?
|
||||
|
||||
USER: It's holding a 50kg load. Aluminum 6061, so keep stress under 200 MPa.
|
||||
Max deflection 0.5mm. Want it as light as possible.
|
||||
|
||||
CLAUDE: Perfect. Here's your optimization setup:

**Study: bracket_weight_optimization**

**Design Variables:**

| Parameter | Min | Max | Type |
|----------------|-----|-----|---------|
| wall_thickness | 1 | 4 | float |
| fillet_radius | 2 | 8 | float |
| hole_diameter | 5 | 20 | float |
| rib_count | 2 | 6 | integer |

**Objective:** Minimize mass

**Constraints:**
- Max von Mises stress < 200 MPa
- Max displacement < 0.5 mm

**Settings:**
- Protocol: Single-objective constrained
- Trials: 50 (recommended for 4 variables)
- Sampler: TPE (good for constrained problems)

Should I create this study? I can also enable neural network
acceleration if you want more than 50 trials.

USER: Yes, create it. And enable the NN thing, let's do 200 trials.

CLAUDE: Creating study with NN surrogate enabled...

[Creates folder structure]
[Generates optimization_config.json]
[Validates configuration]
[Copies model files]

Study created at: studies/bracket_weight_optimization/

Configuration validated successfully:
✓ All design variables have valid bounds
✓ Extraction methods configured
✓ Surrogate settings enabled

Ready to start optimization. The workflow will:
1. Run 30 initial FEA trials
2. Generate 100 space-filling training points
3. Train the NN surrogate
4. Run 170 NN-accelerated trials with FEA validation

Start now? (You can monitor at localhost:3000)

USER: Yes, start it.

CLAUDE: Starting optimization...

[Runs optimization in background]

Optimization running (ID: bracket_weight_optimization)
- Monitor: http://localhost:3000
- Optuna Dashboard: http://localhost:8081

I'll notify you when Stage 1 (initial FEA) completes.
```

---
## Skill Library

### Core Skills (Must Have)

| Skill | Trigger | Purpose |
|-------|---------|---------|
| `/create-study` | "optimize", "new study" | Create an optimization from scratch |
| `/analyze-model` | "look at model", "what can I optimize" | Extract model info |
| `/run-optimization` | "start", "run" | Execute the optimization |
| `/check-status` | "how's it going", "progress" | Report on running studies |
| `/generate-report` | "report", "results" | Create visualizations |

### Advanced Skills (For Power Users)

| Skill | Trigger | Purpose |
|-------|---------|---------|
| `/configure-surrogate` | "neural network", "surrogate" | Set up NN acceleration |
| `/add-constraint` | "add constraint" | Modify an existing study |
| `/compare-studies` | "compare" | Cross-study analysis |
| `/export-results` | "export", "pareto" | Export optimal designs |
| `/troubleshoot` | "error", "failed" | Debug issues |

### Custom Skills (Project-Specific)

Users can create their own skills for recurring tasks:

```
.claude/skills/
├── my-bracket-setup.md    # Pre-configured bracket optimization
├── thermal-analysis.md    # Custom thermal workflow
└── batch-runner.md        # Run multiple studies
```
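The trigger phrases in the tables above map to skills with simple keyword matching; a minimal sketch (the `TRIGGERS` table and `match_skill` helper are illustrative, not part of the codebase):

```python
# Hypothetical trigger table mirroring a subset of the core-skills list above
TRIGGERS = {
    "/create-study": ("optimize", "new study"),
    "/check-status": ("how's it going", "progress"),
    "/generate-report": ("report", "results"),
}

def match_skill(message: str):
    """Return the first skill whose trigger phrase appears in the message, else None."""
    msg = message.lower()
    for skill, phrases in TRIGGERS.items():
        if any(phrase in msg for phrase in phrases):
            return skill
    return None
```

A real dispatcher would rank matches and confirm ambiguous requests with the user rather than taking the first hit.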

---
## Implementation Approach

### Phase 1: Foundation (Current)
- [x] Basic skill system (create-study.md exists)
- [x] Config validation
- [x] Manual protocol following
- [ ] **Formalize skill structure**
- [ ] **Create skill template**

### Phase 2: Skill Library
- [ ] Implement all core skills
- [ ] Add protocol references in skills
- [ ] Create skill chaining (one skill calls another)
- [ ] Add user confirmation checkpoints

### Phase 3: Validators
- [ ] Config validator (comprehensive)
- [ ] Model validator (check NX setup)
- [ ] Results validator (check outputs)
- [ ] State validator (check study health)

### Phase 4: Knowledge Integration
- [ ] Physics knowledge base queries
- [ ] Similar study lookup
- [ ] Transfer learning suggestions
- [ ] Best practices recommendations

---
## Skill Template

Every skill should follow this structure:

```markdown
# Skill Name

## Purpose
What this skill accomplishes.

## Triggers
Keywords/phrases that activate this skill.

## Prerequisites
What must be true before running.

## Information Gathering
Questions to ask the user (with defaults).

## Protocol Reference
Link to the detailed protocol in docs/06_PROTOCOLS_DETAILED/

## Execution Steps
1. Step one (with validation)
2. Step two (with validation)
3. ...

## Validation Checkpoints
- After step X, verify Y
- Before step Z, check W

## Error Handling
- Error type 1: Recovery action
- Error type 2: Recovery action

## User Confirmations
Points where user approval is needed.

## Outputs
What gets created/modified.

## Next Steps
What to suggest after completion.
```
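A lightweight check that a skill file actually contains the required sections can catch template drift early; a sketch (the section list and `validate_skill` are our illustration, not shipped code):

```python
import re

# Required "##" headings from the template above (subset chosen for illustration)
REQUIRED_SECTIONS = [
    "Purpose", "Triggers", "Prerequisites",
    "Execution Steps", "Error Handling", "Outputs",
]

def validate_skill(markdown_text: str):
    """Return the required section headings missing from a skill file."""
    headings = set(re.findall(r"^##\s+(.+?)\s*$", markdown_text, flags=re.MULTILINE))
    return [s for s in REQUIRED_SECTIONS if s not in headings]
```

Running this over `.claude/skills/` as part of config validation would flag incomplete skills before they are ever triggered.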

---

## Key Principles

### 1. Conversation > Configuration
Don't ask the user to edit JSON. Have a conversation, then generate the config.

### 2. Validation at Every Step
Never proceed with an invalid state. Check before, during, and after.

### 3. Sensible Defaults
Provide good defaults so the user only specifies what they care about.

### 4. Explain Decisions
When making choices (sampler, n_trials, etc.), explain why.

### 5. Graceful Degradation
If something fails, recover gracefully with a clear explanation.

### 6. Progressive Disclosure
Start simple; offer complexity only when needed.

---
## Integration with Dashboard

The dashboard complements LLM interaction:

| LLM Handles | Dashboard Handles |
|-------------|-------------------|
| Study setup | Live monitoring |
| Configuration | Progress visualization |
| Troubleshooting | Results exploration |
| Reports | Pareto front interaction |
| Custom features | Historical comparison |

**The LLM creates, the dashboard observes.**

---
## Next Steps

1. **Formalize Skill Structure**: Create a template that all skills follow
2. **Implement Core Skills**: Start with create-study and analyze-model
3. **Add Validators**: Python modules for each validation type
4. **Test Conversation Flows**: Verify natural interaction patterns
5. **Build Skill Chaining**: Allow skills to call other skills

---

*Document Version: 1.0*
*Created: 2025-11-25*
*Philosophy: Talk to the LLM, not the dashboard*
@@ -1,251 +0,0 @@
# NX Multi-Solution Solve Protocol

## Critical Finding: SolveAllSolutions API Required for Multi-Solution Models

**Date**: November 23, 2025
**Last Updated**: November 23, 2025
**Protocol**: Multi-Solution Nastran Solve
**Affected Models**: Any NX simulation with multiple solutions (e.g., static + modal, thermal + structural)

---

## Problem Statement

When an NX simulation contains multiple solutions (e.g., Solution 1 = Static Analysis, Solution 2 = Modal Analysis), calling `SolveChainOfSolutions()` in Background mode **does not wait for all solutions to complete** before returning control to Python. This causes:

1. **Missing OP2 files**: Only the first solution's OP2 file is generated
2. **Stale data**: Subsequent trials read old OP2 files left over from previous runs
3. **Identical results**: All trials show the same values for results from the missing solutions
4. **Silent failures**: No error is raised; the solve appears to complete, but the files are never written

### Example Scenario

**Drone Gimbal Arm Optimization**:
- Solution 1: Static analysis (stress, displacement)
- Solution 2: Modal analysis (frequency)

**Symptoms**:
- All 100 trials showed an **identical frequency** (27.476 Hz)
- Only `beam_sim1-solution_1.op2` was created
- `beam_sim1-solution_2.op2` was never regenerated after Trial 0
- Both `.dat` files were written correctly, but the solve did not wait for completion

---
## Root Cause

```python
# WRONG APPROACH (does not wait for completion)
psolutions1 = []
solution_idx = 1
while True:
    solution_obj_name = f"Solution[Solution {solution_idx}]"
    simSolution = simSimulation1.FindObject(solution_obj_name)
    if simSolution:
        psolutions1.append(simSolution)
        solution_idx += 1
    else:
        break

theCAESimSolveManager.SolveChainOfSolutions(
    psolutions1,
    NXOpen.CAE.SimSolution.SolveOption.Solve,
    NXOpen.CAE.SimSolution.SetupCheckOption.CompleteDeepCheckAndOutputErrors,
    NXOpen.CAE.SimSolution.SolveMode.Background  # ❌ Returns immediately!
)
```

**Issue**: Background mode runs asynchronously and returns control to Python before all solutions finish solving.

---
## Correct Solution

### For Solving All Solutions

Use the `SolveAllSolutions()` API with **Foreground mode**:

```python
# CORRECT APPROACH (waits for completion)
if solution_name:
    # Solve one specific solution in Background mode
    solution_obj_name = f"Solution[{solution_name}]"
    simSolution1 = simSimulation1.FindObject(solution_obj_name)
    psolutions1 = [simSolution1]

    numsolutionssolved1, numsolutionsfailed1, numsolutionsskipped1 = theCAESimSolveManager.SolveChainOfSolutions(
        psolutions1,
        NXOpen.CAE.SimSolution.SolveOption.Solve,
        NXOpen.CAE.SimSolution.SetupCheckOption.CompleteDeepCheckAndOutputErrors,
        NXOpen.CAE.SimSolution.SolveMode.Background
    )
else:
    # Solve ALL solutions using the SolveAllSolutions API (Foreground mode).
    # This ensures all solutions (static + modal, etc.) complete before returning.
    print("[JOURNAL] Solving all solutions using SolveAllSolutions API (Foreground mode)...")

    numsolutionssolved1, numsolutionsfailed1, numsolutionsskipped1 = theCAESimSolveManager.SolveAllSolutions(
        NXOpen.CAE.SimSolution.SolveOption.Solve,
        NXOpen.CAE.SimSolution.SetupCheckOption.CompleteCheckAndOutputErrors,
        NXOpen.CAE.SimSolution.SolveMode.Foreground,  # ✅ Blocks until complete
        False
    )
```
### Key Differences

| Aspect | SolveChainOfSolutions | SolveAllSolutions |
|--------|----------------------|-------------------|
| **Manual enumeration** | Required (loop through solutions) | Automatic (handles all solutions) |
| **Background mode behavior** | Returns immediately, async | N/A (Foreground recommended) |
| **Foreground mode behavior** | Blocks until complete | Blocks until complete ✅ |
| **Use case** | Specific solution selection | Solve all solutions |

---
## Implementation Location

**File**: `optimization_engine/solve_simulation.py`
**Lines**: 271-295

**When to use this protocol**:
- When `solution_name=None` is passed to `NXSolver.run_simulation()`
- Any simulation with multiple solutions that must all complete
- Multi-objective optimization requiring results from different analysis types

---
## Verification Steps

After implementing the fix, verify:

1. **Both .dat files are written** (one per solution)
   ```
   beam_sim1-solution_1.dat   # Static analysis
   beam_sim1-solution_2.dat   # Modal analysis
   ```

2. **Both .op2 files are created** with updated timestamps
   ```
   beam_sim1-solution_1.op2   # Contains stress, displacement
   beam_sim1-solution_2.op2   # Contains eigenvalues, mode shapes
   ```

3. **Results are unique per trial**: check that frequency values vary across trials

4. **The journal log shows**:
   ```
   [JOURNAL] Solving all solutions using SolveAllSolutions API (Foreground mode)...
   [JOURNAL] Solve completed!
   [JOURNAL] Solutions solved: 2
   ```
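The timestamp check in step 2 can be automated so a trial fails fast instead of silently reading stale results; a sketch (the `assert_fresh_op2` helper is ours, not part of the codebase):

```python
from pathlib import Path

def assert_fresh_op2(op2_paths, solve_start):
    """Raise if any expected OP2 file is missing or predates the solve start time."""
    bad = [str(p) for p in map(Path, op2_paths)
           if not p.exists() or p.stat().st_mtime < solve_start]
    if bad:
        raise RuntimeError(f"Stale or missing OP2 files: {bad}")
```

Call it with `time.time()` captured just before the solve; a stale `solution_2` file then fails the trial loudly rather than feeding old frequencies into the optimizer.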
---
## Solution Monitor Window Control (November 24, 2025)

### Problem: Monitor Window Pile-Up

When running optimization studies with many trials, NX opens a solution monitor window for each trial. These windows:
- Superpose on top of each other
- Cannot be easily closed programmatically
- Cause usability issues during long optimization runs
- Slow down the optimization process

### Solution: Automatic Monitor Disabling

The solution monitor is now automatically disabled when solving multiple solutions (when `solution_name=None`).

**Implementation**: `optimization_engine/solve_simulation.py`, lines 271-295

```python
# CRITICAL: Disable the solution monitor when solving multiple solutions.
# This prevents NX from opening monitor windows that superpose and cause usability issues.
if not solution_name:
    print("[JOURNAL] Disabling solution monitor for all solutions to prevent window pile-up...")
    try:
        # Walk through all solutions in the simulation
        solutions_disabled = 0
        solution_num = 1
        while True:
            try:
                solution_obj_name = f"Solution[Solution {solution_num}]"
                simSolution = simSimulation1.FindObject(solution_obj_name)
                if simSolution:
                    propertyTable = simSolution.SolverOptionsPropertyTable
                    propertyTable.SetBooleanPropertyValue("solution monitor", False)
                    solutions_disabled += 1
                    solution_num += 1
                else:
                    break
            except Exception:
                break  # No more solutions
        print(f"[JOURNAL] Solution monitor disabled for {solutions_disabled} solution(s)")
    except Exception as e:
        print(f"[JOURNAL] WARNING: Could not disable solution monitor: {e}")
        print("[JOURNAL] Continuing with solve anyway...")
```

**When this activates**:
- Automatically when `solution_name=None` (solve-all-solutions mode)
- For any study with multiple trials (the typical optimization scenario)
- No user configuration required

**User-recorded journal**: `nx_journals/user_generated_journals/journal_monitor_window_off.py`

---
## Related Issues Fixed

1. **All trials showing identical frequency**: fixed by ensuring the modal solution runs
2. **Only one data point in the dashboard**: fixed by all trials succeeding
3. **Parallel coordinates with NaN**: fixed by having complete data from all solutions
4. **Solution monitor windows piling up**: fixed by automatically disabling the monitor for multi-solution runs

---

## References

- **User's example**: `nx_journals/user_generated_journals/journal_solve_all_solution.py` (line 27)
- **NX Open documentation**: `SimSolveManager.SolveAllSolutions()` method
- **Implementation**: `optimization_engine/solve_simulation.py`

---
## Best Practices

1. **Always use Foreground mode** when solving all solutions
2. **Verify that OP2 timestamps change** to ensure fresh solves
3. **Check solve counts** in the journal output to confirm all solutions ran
4. **Test with 5 trials** before running large optimizations
5. **Monitor unique frequency values** as a smoke test for multi-solution models

---

## Example Use Cases

### ✅ Correct Usage

```python
# Multi-objective optimization with static + modal
result = nx_solver.run_simulation(
    sim_file=sim_file,
    working_dir=model_dir,
    expression_updates=design_vars,
    solution_name=None  # Solve ALL solutions
)
```

### ❌ Incorrect Usage (Don't Do This)

```python
# Running solutions separately is inefficient and error-prone
result1 = nx_solver.run_simulation(..., solution_name="Solution 1")  # Static
result2 = nx_solver.run_simulation(..., solution_name="Solution 2")  # Modal
# This doubles the solve time and requires managing two result objects
```

---

**Status**: ✅ Implemented and Verified
**Impact**: Critical for all multi-solution optimization workflows
@@ -1,278 +0,0 @@
# Protocol 13: Adaptive Multi-Objective Optimization

## Overview

Protocol 13 implements an adaptive multi-objective optimization strategy that combines:
- **FEA (Finite Element Analysis)** for ground-truth simulations
- **Neural network surrogates** for rapid exploration
- **Iterative refinement** with periodic retraining

This protocol is ideal for expensive simulations where each FEA run takes significant time (minutes to hours) but you need to explore a large design space efficiently.

## When to Use Protocol 13

| Scenario | Recommended |
|----------|-------------|
| FEA takes > 5 minutes per run | Yes |
| Need to explore > 100 designs | Yes |
| Multi-objective optimization (2-4 objectives) | Yes |
| Single objective, fast FEA (< 1 min) | No, use Protocol 10/11 |
| Highly nonlinear response surfaces | Yes, with more FEA samples |
## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                 Adaptive Optimization Loop                  │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ Iteration 1:                                                │
│ ┌──────────────┐    ┌──────────────┐    ┌──────────────┐    │
│ │ Initial FEA  │ -> │ Train NN     │ -> │ NN Search    │    │
│ │ (50-100)     │    │ Surrogate    │    │ (1000 trials)│    │
│ └──────────────┘    └──────────────┘    └──────────────┘    │
│                                                │            │
│                                                v            │
│ Iteration 2+:                                               │
│ ┌──────────────┐    ┌──────────────┐    ┌──────────────┐    │
│ │ Validate Top │ -> │ Retrain NN   │ -> │ NN Search    │    │
│ │ NN with FEA  │    │ with new data│    │ (1000 trials)│    │
│ └──────────────┘    └──────────────┘    └──────────────┘    │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```
## Configuration

### optimization_config.json

```json
{
  "study_name": "my_adaptive_study",
  "protocol": 13,

  "adaptive_settings": {
    "enabled": true,
    "initial_fea_trials": 50,
    "nn_trials_per_iteration": 1000,
    "fea_validation_per_iteration": 5,
    "max_iterations": 10,
    "convergence_threshold": 0.01,
    "retrain_epochs": 100
  },

  "objectives": [
    {
      "name": "thermal_40_vs_20",
      "direction": "minimize",
      "weight": 1.0
    },
    {
      "name": "thermal_60_vs_20",
      "direction": "minimize",
      "weight": 0.5
    },
    {
      "name": "manufacturability",
      "direction": "minimize",
      "weight": 0.3
    }
  ],

  "design_variables": [
    {
      "name": "rib_thickness",
      "expression_name": "rib_thickness",
      "min": 5.0,
      "max": 15.0,
      "baseline": 10.0
    }
  ],

  "surrogate_settings": {
    "enabled": true,
    "model_type": "neural_network",
    "hidden_layers": [128, 64, 32],
    "learning_rate": 0.001,
    "batch_size": 32
  }
}
```
### Key Parameters

| Parameter | Description | Recommended |
|-----------|-------------|-------------|
| `initial_fea_trials` | FEA runs before the first NN training | 50-100 |
| `nn_trials_per_iteration` | NN-predicted trials per iteration | 500-2000 |
| `fea_validation_per_iteration` | Top NN trials validated with FEA | 3-10 |
| `max_iterations` | Maximum adaptive iterations | 5-20 |
| `convergence_threshold` | Stop if improvement < threshold | 0.01 (1%) |
## Workflow

### Phase 1: Initial FEA Sampling

```python
# Generates space-filling Latin Hypercube samples
# Runs FEA on each sample
# Stores results in the Optuna database with source='FEA'
```

### Phase 2: Neural Network Training

```python
# Extracts all FEA trials from the database
# Normalizes inputs (design variables) to [0, 1]
# Trains a multi-output neural network
# Validates on a held-out set (20%)
```

### Phase 3: NN-Accelerated Search

```python
# Uses the trained NN as the objective function
# Runs NSGA-II with 1000+ trials (fast, ~ms per trial)
# Identifies Pareto-optimal candidates
# Stores predictions with source='NN'
```

### Phase 4: FEA Validation

```python
# Selects the top N NN predictions
# Runs actual FEA on these candidates
# Updates the database with ground truth
# Checks for improvement
```

### Phase 5: Iteration

```python
# If improved: retrain the NN with the new FEA data
# If converged: stop and report the best design
# Otherwise: continue to the next iteration
```
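The five phases above can be compressed into a single driver loop. A minimal, dependency-free sketch with the FEA solver, surrogate trainer, and surrogate search injected as callables (all names and the toy problem are ours, not the production driver):

```python
def adaptive_loop(run_fea, train_nn, nn_search, initial, max_iterations=10, tol=0.01):
    """Protocol 13 skeleton: FEA sampling, surrogate training, surrogate
    search, FEA validation, and a relative-improvement convergence check."""
    values = run_fea(initial)                   # Phase 1: initial FEA samples
    best = min(values)
    for _ in range(max_iterations):
        model = train_nn(values)                # Phase 2: (re)train the surrogate
        candidates = nn_search(model)           # Phase 3: cheap surrogate search
        values += run_fea(candidates)           # Phase 4: validate candidates with FEA
        new_best = min(values)
        if best - new_best < tol * abs(best):   # Phase 5: converged?
            return new_best
        best = new_best
    return best

# Toy 1-D problem: minimize (x - 3)^2; the "surrogate" here is just a stand-in
fea = lambda xs: [(x - 3.0) ** 2 for x in xs]
train = lambda vals: sum(vals) / len(vals)      # placeholder "model"
search = lambda model: [3.0]                    # pretend the surrogate found the optimum
result = adaptive_loop(fea, train, search, initial=[0.0, 1.0, 5.0])
```

The real implementation additionally tags each trial's `source` ('FEA' or 'NN') in the Optuna database and persists the loop state to `adaptive_state.json` between iterations.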
## Output Files

```
studies/my_study/
├── 3_results/
│   ├── study.db                # Optuna database (all trials)
│   ├── adaptive_state.json     # Current iteration state
│   ├── surrogate_model.pt      # Trained neural network
│   ├── training_history.json   # NN training metrics
│   └── STUDY_REPORT.md         # Generated summary report
```
### adaptive_state.json

```json
{
  "iteration": 3,
  "total_fea_count": 103,
  "total_nn_count": 3000,
  "best_weighted": 1.456,
  "best_params": {
    "rib_thickness": 8.5,
    "...": "..."
  },
  "history": [
    {"iteration": 1, "fea_count": 50, "nn_count": 1000, "improved": true},
    {"iteration": 2, "fea_count": 55, "nn_count": 2000, "improved": true},
    {"iteration": 3, "fea_count": 103, "nn_count": 3000, "improved": false}
  ]
}
```
## Dashboard Integration

Protocol 13 studies display in the Atomizer dashboard with:

- **FEA vs NN differentiation**: blue circles for FEA, orange crosses for NN
- **Pareto front highlighting**: green markers for Pareto-optimal solutions
- **Convergence plot**: shows optimization progress with a best-so-far line
- **Parallel coordinates**: filter and explore the design space
- **Parameter importance**: correlation-based sensitivity analysis
## Best Practices

### 1. Initial Sampling Strategy

Use Latin Hypercube Sampling (LHS) for the initial FEA trials to ensure good coverage. Optuna's built-in samplers do not include an LHS sampler, so one option is to pre-generate the points with `scipy.stats.qmc.LatinHypercube` and enqueue them:

```python
from scipy.stats import qmc

lhs = qmc.LatinHypercube(d=n_design_vars, seed=42)
unit_samples = lhs.random(n=50)  # points in [0, 1]^d; scale to variable bounds before enqueueing
```
### 2. Neural Network Architecture

For most problems, start with:
- 2-3 hidden layers
- 64-128 neurons per layer
- ReLU activation
- Adam optimizer with lr=0.001

### 3. Validation Strategy

Always validate top NN predictions with FEA before trusting them:
- NN predictions can be wrong in unexplored regions
- FEA validation provides ground truth
- More FEA means a more accurate NN (a trade-off against time)

### 4. Convergence Criteria

Stop when:
- There is no improvement for 2-3 consecutive iterations
- The FEA budget limit is reached
- Objective improvement falls below the 1% threshold
## Example: M1 Mirror Optimization

```bash
# Start adaptive optimization
cd studies/m1_mirror_adaptive_V11
python run_optimization.py --start

# Monitor progress
python run_optimization.py --status

# Generate report
python generate_report.py
```

Results after 3 iterations:
- 103 FEA trials
- 3000 NN trials
- Best thermal 40°C vs 20°C: 5.99 nm RMS
- Best thermal 60°C vs 20°C: 14.02 nm RMS
## Troubleshooting

### NN Predictions Don't Match FEA

- Increase the initial FEA samples
- Add more hidden layers
- Check for outliers in the training data
- Ensure proper normalization

### Optimization Not Converging

- Increase NN trials per iteration
- Check the objective function implementation
- Verify the design variable bounds
- Consider adding constraints

### Memory Issues

- Reduce `nn_trials_per_iteration`
- Use batch processing for large datasets
- Clear the trial cache periodically
## Related Documentation

- [Protocol 11: Multi-Objective NSGA-II](./PROTOCOL_11_MULTI_OBJECTIVE.md)
- [Protocol 12: Hybrid FEA/NN](./PROTOCOL_12_HYBRID.md)
- [Neural Surrogate Training](../07_DEVELOPMENT/NEURAL_SURROGATE.md)
- [Zernike Extractor](./ZERNIKE_EXTRACTOR.md)
@@ -1,403 +0,0 @@
# Zernike Coefficient Extractor

## Overview

The Zernike extractor module provides complete wavefront error (WFE) analysis for telescope mirror optimization. It extracts Zernike polynomial coefficients from FEA displacement results and computes the RMS metrics used as optimization objectives.

**Location**: `optimization_engine/extractors/extract_zernike.py`

---
## Mathematical Background

### What are Zernike Polynomials?

Zernike polynomials are a set of orthogonal functions defined on the unit disk. They are the standard basis for describing optical aberrations because:

1. **Orthogonality**: Each mode is independent (no cross-talk)
2. **Physical meaning**: Each mode corresponds to a recognizable aberration
3. **RMS property**: Total RMS² = sum of the individual coefficients²

### Noll Indexing Convention

We use the Noll indexing scheme (standard in optics):

| Noll j | n | m  | Name              | Physical Meaning |
|--------|---|----|-------------------|------------------|
| 1      | 0 | 0  | Piston            | Constant offset (ignored) |
| 2      | 1 | 1  | Tilt Y            | Pointing error - correctable |
| 3      | 1 | -1 | Tilt X            | Pointing error - correctable |
| 4      | 2 | 0  | Defocus           | Focus error - correctable |
| 5      | 2 | -2 | Astigmatism 45°   | 3rd order aberration |
| 6      | 2 | 2  | Astigmatism 0°    | 3rd order aberration |
| 7      | 3 | -1 | Coma X            | 3rd order aberration |
| 8      | 3 | 1  | Coma Y            | 3rd order aberration |
| 9      | 3 | -3 | Trefoil X         | Triangular aberration |
| 10     | 3 | 3  | Trefoil Y         | Triangular aberration |
| 11     | 4 | 0  | Primary Spherical | 4th order spherical |
| 12-50  | ... | ... | Higher orders   | Higher-order aberrations |
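The j to (n, m) mapping in the table can be computed rather than hard-coded. A sketch of the standard Noll conversion (the helper name is ours, not part of the module):

```python
def noll_to_nm(j):
    """Map a 1-based Noll index j to radial order n and azimuthal frequency m."""
    n, k = 0, j - 1
    while k > n:          # walk down the rows of the Zernike pyramid
        n += 1
        k -= n
    # Sign alternates with j; magnitude grows with position inside the row
    m = (-1) ** j * ((n % 2) + 2 * ((k + (n + 1) % 2) // 2))
    return n, m
```

Checking it against the table: `noll_to_nm(4)` gives `(2, 0)` (defocus) and `noll_to_nm(9)` gives `(3, -3)` (trefoil X).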
### Zernike Polynomial Formula

Each Zernike polynomial Z_j(r, θ) is computed as:

```
Z_j(r, θ) = R_n^m(r) × { cos(m·θ)    if m ≥ 0
                        { sin(|m|·θ)  if m < 0

where R_n^m(r) = Σ(s=0 to (n-|m|)/2) [(-1)^s × (n-s)! / (s! × ((n+|m|)/2-s)! × ((n-|m|)/2-s)!)] × r^(n-2s)
```
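The formula above translates directly into code. A minimal unnormalized implementation, useful for checking individual modes (the function names are ours; the production fitter lives in `extract_zernike.py`):

```python
import math

def radial(n, m, r):
    """Radial polynomial R_n^m(r) from the factorial series above."""
    m = abs(m)
    return sum(
        (-1) ** s * math.factorial(n - s)
        / (math.factorial(s)
           * math.factorial((n + m) // 2 - s)
           * math.factorial((n - m) // 2 - s))
        * r ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )

def zernike(n, m, r, theta):
    """Unnormalized Zernike polynomial: cosine branch for m >= 0, sine for m < 0."""
    angular = math.cos(m * theta) if m >= 0 else math.sin(-m * theta)
    return radial(n, m, r) * angular
```

For example, defocus (n=2, m=0) evaluates to 2r² - 1: it is -1 at the center of the disk and +1 at the edge.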
### Wavefront Error Conversion

FEA gives surface displacement in mm. We convert it to wavefront error in nm:

```
WFE = 2 × displacement × 10⁶   [nm]
      ↑                  ↑
  optical reflection   mm → nm
```

The factor of 2 accounts for the optical path difference when light reflects off a surface.
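As a one-line helper (the name is ours, for illustration), the conversion is:

```python
def displacement_mm_to_wfe_nm(disp_mm):
    """Surface displacement (mm) to reflected wavefront error (nm):
    factor 2 for the reflection, 1e6 for the mm -> nm unit change."""
    return 2.0 * disp_mm * 1.0e6
```

So a 1 nm surface displacement (1e-6 mm) produces 2 nm of wavefront error.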
---
## Module Structure

### Files

| File | Purpose |
|------|---------|
| `extract_zernike.py` | Core extraction: Zernike fitting, RMS computation, OP2 parsing |
| `zernike_helpers.py` | High-level helpers for optimization integration |
| `extract_zernike_surface.py` | Surface-based extraction (alternative method) |

### Key Classes

#### `ZernikeExtractor`

Main class for Zernike analysis:
```python
from optimization_engine.extractors import ZernikeExtractor

extractor = ZernikeExtractor(
    op2_path="results/model-solution_1.op2",
    bdf_path="results/model.dat",   # Optional, auto-detected
    displacement_unit="mm",         # Unit in the OP2 file
    n_modes=50,                     # Number of Zernike modes
    filter_orders=4                 # Modes to filter (J1-J4)
)

# Extract a single subcase
result = extractor.extract_subcase("20")
print(f"Filtered RMS: {result['filtered_rms_nm']:.2f} nm")

# Extract relative metrics (target vs reference)
relative = extractor.extract_relative(
    target_subcase="40",
    reference_subcase="20"
)
print(f"Relative RMS (40 vs 20): {relative['relative_filtered_rms_nm']:.2f} nm")

# Extract all subcases
all_results = extractor.extract_all_subcases(reference_subcase="20")
```
---

## RMS Metrics Explained

### Global RMS

Raw RMS of the entire wavefront error surface:

```
global_rms = sqrt(mean(WFE²))
```

### Filtered RMS (J1-J4 removed)

RMS after removing the correctable aberrations (piston, tip, tilt, defocus):

```python
# Subtract the low-order contribution
WFE_filtered = WFE - Σ(j=1 to 4) c_j × Z_j(r, θ)
filtered_rms = sqrt(mean(WFE_filtered²))
```

**This is typically the primary optimization objective** because:
- Piston (J1): does not affect imaging
- Tip/Tilt (J2-J3): corrected by telescope pointing
- Defocus (J4): corrected by the focus mechanism

### Optician Workload (J1-J3 removed)

RMS for manufacturing assessment; it keeps defocus because correcting defocus requires material removal:

```python
# Subtract only piston and tilt
WFE_j1to3 = WFE - Σ(j=1 to 3) c_j × Z_j(r, θ)
rms_filter_j1to3 = sqrt(mean(WFE_j1to3²))
```

### Relative RMS Between Subcases

Measures gravity-induced deformation relative to a reference orientation:

```python
# Compute the difference surface
ΔWFE = WFE_target - WFE_reference

# Fit Zernike modes to the difference
Δc = zernike_fit(ΔWFE)

# Filter and compute RMS
relative_filtered_rms = sqrt(Σ(j=5 to 50) Δc_j²)
```

---
## Coefficient-Based vs Surface-Based RMS

Due to Zernike orthogonality, these two methods are mathematically equivalent:

### Method 1: Coefficient-Based (Fast)
```python
# From the coefficients directly
filtered_rms = sqrt(Σ(j=5 to 50) c_j²)
```

### Method 2: Surface-Based (More accurate for irregular meshes)
```python
# Reconstruct and subtract the low-order surface
WFE_low = Σ(j=1 to 4) c_j × Z_j(r, θ)
WFE_filtered = WFE - WFE_low
filtered_rms = sqrt(mean(WFE_filtered²))
```

The module uses the surface-based method for maximum accuracy with FEA meshes.
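Orthogonality is what makes Method 1 legitimate: each coefficient contributes independently to the RMS, so filtering a mode is just dropping its term from the sum. A toy check with made-up coefficients:

```python
import math

# Hypothetical Noll coefficients in nm: defocus (J4) plus two astigmatism terms
coeffs = {4: 12.0, 5: 3.0, 6: 4.0}

# Method 1 applied directly: RMS of the modes that survive the J1-J4 filter
filtered_rms = math.sqrt(sum(c * c for j, c in coeffs.items() if j >= 5))
# A 3-4-5 triangle: the result is 5.0 nm, with the 12 nm of defocus excluded
```

On a perfectly regular unit-disk grid the surface-based method would return the same 5.0 nm; the two only diverge when the mesh breaks the discrete orthogonality of the sampled modes.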
---
## Usage in Optimization

### Simple: Single Objective

```python
from optimization_engine.extractors import extract_zernike_filtered_rms

def objective(trial):
    # ... run simulation ...

    rms = extract_zernike_filtered_rms(
        op2_file=sim_dir / "model-solution_1.op2",
        subcase="20"
    )
    return rms
```
### Multi-Subcase: Weighted Sum

```python
from optimization_engine.extractors import ZernikeExtractor

def objective(trial):
    # ... run simulation ...

    extractor = ZernikeExtractor(op2_path)

    # Extract relative metrics
    rel_40_20 = extractor.extract_relative("3", "2")['relative_filtered_rms_nm']
    rel_60_20 = extractor.extract_relative("4", "2")['relative_filtered_rms_nm']
    mfg_90 = extractor.extract_relative("1", "2")['relative_rms_filter_j1to3']

    # Weighted objective (weights sum to 11, hence the normalization)
    weighted = (
        5.0 * (rel_40_20 / 4.0) +    # Target: 4 nm
        5.0 * (rel_60_20 / 10.0) +   # Target: 10 nm
        1.0 * (mfg_90 / 20.0)        # Target: 20 nm
    ) / 11.0

    return weighted
```
### Using Helper Classes

```python
from optimization_engine.extractors.zernike_helpers import ZernikeObjectiveBuilder

builder = ZernikeObjectiveBuilder(
    op2_finder=lambda: sim_dir / "model-solution_1.op2"
)

builder.add_relative_objective("3", "2", weight=5.0)  # 40 vs 20
builder.add_relative_objective("4", "2", weight=5.0)  # 60 vs 20
builder.add_relative_objective("1", "2",
                               metric="relative_rms_filter_j1to3",
                               weight=1.0)  # 90 vs 20

objective = builder.build_weighted_sum()
value = objective()  # Returns the combined metric
```
---
|
||||
|
||||
## Output Dictionary Reference
|
||||
|
||||
### `extract_subcase()` Returns:
|
||||
|
||||
| Key | Type | Description |
|
||||
|-----|------|-------------|
|
||||
| `subcase` | str | Subcase identifier |
|
||||
| `global_rms_nm` | float | Global RMS WFE (nm) |
|
||||
| `filtered_rms_nm` | float | Filtered RMS (J1-J4 removed) |
|
||||
| `rms_filter_j1to3` | float | J1-J3 filtered RMS (keeps defocus) |
|
||||
| `n_nodes` | int | Number of nodes analyzed |
|
||||
| `defocus_nm` | float | Defocus magnitude (J4) |
|
||||
| `astigmatism_rms_nm` | float | Combined astigmatism (J5+J6) |
|
||||
| `coma_rms_nm` | float | Combined coma (J7+J8) |
|
||||
| `trefoil_rms_nm` | float | Combined trefoil (J9+J10) |
|
||||
| `spherical_nm` | float | Primary spherical (J11) |
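The paired aberration entries in this table combine the two Noll modes of each pair. A minimal sketch, assuming the pairs combine in quadrature and that `c` is the length-50 array of Noll coefficients (both are assumptions, not confirmed internals of the extractor):

```python
import numpy as np

def combined_aberrations(c):
    """Combine paired Noll modes in quadrature; c[0] corresponds to J1."""
    return {
        'defocus_nm': abs(c[3]),                      # J4
        'astigmatism_rms_nm': np.hypot(c[4], c[5]),   # J5 + J6
        'coma_rms_nm': np.hypot(c[6], c[7]),          # J7 + J8
        'trefoil_rms_nm': np.hypot(c[8], c[9]),       # J9 + J10
        'spherical_nm': abs(c[10]),                   # J11
    }
```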

### `extract_relative()` Returns:

| Key | Type | Description |
|-----|------|-------------|
| `target_subcase` | str | Target subcase |
| `reference_subcase` | str | Reference subcase |
| `relative_global_rms_nm` | float | Global RMS of difference |
| `relative_filtered_rms_nm` | float | Filtered RMS of difference |
| `relative_rms_filter_j1to3` | float | J1-J3 filtered RMS of difference |
| `relative_defocus_nm` | float | Defocus change |
| `relative_astigmatism_rms_nm` | float | Astigmatism change |
| ... | ... | (all aberrations with `relative_` prefix) |

---

## Subcase Mapping

NX Nastran subcases map to gravity orientations:

| Subcase ID | Elevation | Purpose |
|------------|-----------|---------|
| 1 | 90° (zenith) | Polishing/manufacturing orientation |
| 2 | 20° | Reference (low elevation) |
| 3 | 40° | Mid-range tracking |
| 4 | 60° | High-range tracking |

The **20° orientation is typically used as reference** because:
- It represents typical low-elevation observing
- Polishing is done at 90°, so we measure change from a tracking position

---

## Saving Zernike Coefficients for Surrogate Training

For neural network training, save all 200 coefficients (50 modes × 4 subcases):

```python
import pandas as pd

from optimization_engine.extractors import ZernikeExtractor

extractor = ZernikeExtractor(op2_path)

# Extract once per subcase (not once per mode, which would trigger
# 200 redundant extraction passes)
coefficients = {}
for subcase, label in [('1', '90deg'), ('2', '20deg'),
                       ('3', '40deg'), ('4', '60deg')]:
    result = extractor.extract_subcase(subcase, include_coefficients=True)
    coefficients[label] = result['coefficients']

# One row per Noll index, one column per orientation
rows = []
for j in range(1, 51):
    row = {'noll_index': j}
    for label, coeffs in coefficients.items():
        row[f'{label}_nm'] = coeffs[j - 1]
    rows.append(row)

df = pd.DataFrame(rows)
df.to_csv(f"zernike_coefficients_trial_{trial_num}.csv", index=False)
```

### CSV Format

| noll_index | 90deg_nm | 20deg_nm | 40deg_nm | 60deg_nm |
|------------|----------|----------|----------|----------|
| 1 | 0.05 | 0.03 | 0.04 | 0.04 |
| 2 | -1.23 | -0.98 | -1.05 | -1.12 |
| ... | ... | ... | ... | ... |
| 50 | 0.02 | 0.01 | 0.02 | 0.02 |

**Note**: These are ABSOLUTE coefficients in nm, not relative RMS values.

---

## Error Handling

### Common Issues

1. **"No displacement data found in OP2"**
   - Check that the solve completed successfully
   - Verify the OP2 file isn't corrupted or incomplete

2. **"Subcase 'X' not found"**
   - List available subcases: `print(extractor.displacements.keys())`
   - Check subcase numbering in the NX simulation setup

3. **"No valid points inside unit disk"**
   - Mirror surface nodes may not be properly identified
   - Check BDF node coordinates

4. **pyNastran version warning**
   - `nx version='2506.5' is not supported`: this is just a warning; extraction still works

---

## Dependencies

```
# Required
pyNastran >= 1.3.4   # OP2/BDF parsing
numpy >= 1.20        # Numerical computations

# Optional (for visualization)
matplotlib           # Plotting Zernike surfaces
```

---

## References

1. **Noll, R. J. (1976)**. "Zernike polynomials and atmospheric turbulence."
   *Journal of the Optical Society of America*, 66(3), 207-211.

2. **Born, M. & Wolf, E. (1999)**. *Principles of Optics* (7th ed.).
   Cambridge University Press. Chapter 9: Aberrations.

3. **Wyant, J. C. & Creath, K. (1992)**. "Basic Wavefront Aberration Theory
   for Optical Metrology." *Applied Optics and Optical Engineering*, Vol. XI.

---

## Module Exports

```python
from optimization_engine.extractors import (
    # Main class
    ZernikeExtractor,

    # Convenience functions
    extract_zernike_from_op2,
    extract_zernike_filtered_rms,
    extract_zernike_relative_rms,

    # Helpers for optimization
    create_zernike_objective,
    create_relative_zernike_objective,
    ZernikeObjectiveBuilder,

    # Low-level utilities
    compute_zernike_coefficients,
    compute_rms_metrics,
    noll_indices,
    zernike_noll,
    zernike_name,
)
```
@@ -1,356 +0,0 @@
# Zernike Mirror Optimization Protocol

## Overview

This document captures the learnings from the M1 mirror Zernike optimization studies (V1-V9), including the Assembly FEM (AFEM) workflow, subcase handling, and wavefront error metrics.

## Assembly FEM (AFEM) Structure

### NX File Organization

A typical telescope mirror assembly in NX consists of:

```
ASSY_M1.prt                              # Master assembly part
ASSY_M1_assyfem1.afm                     # Assembly FEM container
ASSY_M1_assyfem1_sim1.sim                # Simulation file (this is what we solve)
M1_Blank.prt                             # Mirror blank part
M1_Blank_fem1.fem                        # Mirror blank mesh
M1_Blank_fem1_i.prt                      # Idealized geometry for FEM
M1_Vertical_Support_Skeleton.prt         # Support structure part
M1_Vertical_Support_Skeleton_fem1.fem
M1_Vertical_Support_Skeleton_fem1_i.prt
```

### Key Relationships

1. **Assembly Part (.prt)** - Contains the CAD geometry and expressions (design parameters)
2. **Assembly FEM (.afm)** - Links component FEMs together, defines connections
3. **Simulation (.sim)** - Contains solutions, loads, boundary conditions, subcases
4. **Component FEMs (.fem)** - Individual meshes that get assembled

### Expression Propagation

Expressions defined in the master `.prt` propagate through the assembly:
- Modify an expression in `ASSY_M1.prt`
- The AFEM updates mesh connections automatically
- Solve via the `.sim` file

## Multi-Subcase Analysis

### Telescope Gravity Orientations

For telescope mirrors, we analyze multiple gravity orientations (subcases):

| Subcase | Elevation Angle | Purpose |
|---------|-----------------|---------|
| 1 | 90 deg (zenith) | Polishing orientation - manufacturing reference |
| 2 | 20 deg | Low elevation - reference for relative metrics |
| 3 | 40 deg | Mid-low elevation |
| 4 | 60 deg | Mid-high elevation |

### Subcase Mapping

**Important**: NX subcase numbers don't always match angle labels!

```json
"subcase_labels": {
    "1": "90deg",  // Subcase 1 = 90 degrees
    "2": "20deg",  // Subcase 2 = 20 degrees (reference)
    "3": "40deg",  // Subcase 3 = 40 degrees
    "4": "60deg"   // Subcase 4 = 60 degrees
}
```

Always verify the subcase-to-angle mapping by checking the NX simulation setup.

## Zernike Wavefront Error Analysis

### Optical Convention

To convert mirror surface deformation to wavefront error:
```
WFE = 2 * surface_displacement   (reflection doubles the path difference)
```

Unit conversion:
```python
NM_PER_MM = 1e6  # 1 mm displacement = 1e6 nm WFE contribution
wfe_nm = 2.0 * displacement_mm * NM_PER_MM
```

### Zernike Polynomial Indexing

We use **Noll indexing** (standard in optics):

| J | Name | (n,m) | Correctable? |
|---|------|-------|--------------|
| 1 | Piston | (0,0) | Yes - alignment |
| 2 | Tilt X | (1,-1) | Yes - alignment |
| 3 | Tilt Y | (1,1) | Yes - alignment |
| 4 | Defocus | (2,0) | Yes - focus adjustment |
| 5 | Astigmatism 45 | (2,-2) | Partially |
| 6 | Astigmatism 0 | (2,2) | Partially |
| 7 | Coma X | (3,-1) | No |
| 8 | Coma Y | (3,1) | No |
| 9 | Trefoil X | (3,-3) | No |
| 10 | Trefoil Y | (3,3) | No |
| 11 | Spherical | (4,0) | No |

### RMS Metrics

| Metric | Filter | Use Case |
|--------|--------|----------|
| `global_rms_nm` | None | Total surface error |
| `filtered_rms_nm` | J1-J4 removed | Uncorrectable error (optimization target) |
| `rms_filter_j1to3` | J1-J3 removed | Optician workload (keeps defocus) |

### Relative Metrics

For gravity-induced deformation, we compute relative WFE:
```
WFE_relative = WFE_target_orientation - WFE_reference_orientation
```

This removes the static (manufacturing) shape and isolates gravity effects.

Example: `rel_filtered_rms_40_vs_20` = filtered RMS at 40 deg relative to the 20 deg reference
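Because a Zernike fit is linear, a relative metric can also be computed directly from per-orientation coefficient arrays: the coefficients of the difference surface are the difference of the coefficients. A minimal sketch (the length-50 Noll coefficient arrays and the orthonormal-basis identity between coefficient quadrature sum and surface RMS are assumptions, not the extractor's internals):

```python
import numpy as np

def relative_filtered_rms(c_target, c_reference, filter_orders=4):
    """Filtered RMS of the difference surface: modes J1..J{filter_orders}
    are removed, the remaining coefficients combine in quadrature."""
    dc = np.asarray(c_target) - np.asarray(c_reference)
    return float(np.sqrt(np.sum(dc[filter_orders:] ** 2)))
```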

## Optimization Objectives

### Typical M1 Mirror Objectives

```json
"objectives": [
    {
        "name": "rel_filtered_rms_40_vs_20",
        "description": "Gravity-induced WFE at 40 deg vs 20 deg reference",
        "direction": "minimize",
        "weight": 5.0,
        "target": 4.0,
        "units": "nm"
    },
    {
        "name": "rel_filtered_rms_60_vs_20",
        "description": "Gravity-induced WFE at 60 deg vs 20 deg reference",
        "direction": "minimize",
        "weight": 5.0,
        "target": 10.0,
        "units": "nm"
    },
    {
        "name": "mfg_90_optician_workload",
        "description": "Polishing effort at zenith (J1-J3 filtered)",
        "direction": "minimize",
        "weight": 1.0,
        "target": 20.0,
        "units": "nm"
    }
]
```

### Weighted Sum Formulation

```python
weighted_objective = sum(weight_i * (value_i / target_i)) / sum(weight_i)
```

Targets normalize different metrics to comparable scales.
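As a runnable sketch of this formula (a hypothetical helper, not part of the engine):

```python
def weighted_objective(values, weights, targets):
    """Normalize each metric by its target, apply weights, and average."""
    total = sum(w * (v / t) for v, w, t in zip(values, weights, targets))
    return total / sum(weights)
```

With every metric exactly at its target, the combined score is 1.0 regardless of the weights, which makes "score below 1" a convenient shorthand for "all targets met on average".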

## Design Variables

### Typical Mirror Support Parameters

| Parameter | Description | Typical Range |
|-----------|-------------|---------------|
| `whiffle_min` | Whiffle tree minimum dimension | 35-55 mm |
| `whiffle_outer_to_vertical` | Whiffle arm angle | 68-80 deg |
| `whiffle_triangle_closeness` | Triangle geometry | 50-65 mm |
| `inner_circular_rib_dia` | Rib diameter | 480-620 mm |
| `lateral_inner_angle` | Lateral support angle | 25-28.5 deg |
| `blank_backface_angle` | Mirror blank geometry | 3.5-5.0 deg |

### Expression File Format (params.exp)

```
[mm]whiffle_min=42.49
[Degrees]whiffle_outer_to_vertical=79.41
[mm]inner_circular_rib_dia=582.48
```
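A trial's expression file can be generated from a parameter dict in this format; a minimal sketch (the helper name and the per-parameter unit lookup are assumptions based on the table above):

```python
# Hypothetical helper: write an NX expression file for one trial
UNITS = {
    'whiffle_min': 'mm',
    'whiffle_outer_to_vertical': 'Degrees',
    'inner_circular_rib_dia': 'mm',
}

def write_params_exp(params, path):
    """Emit one '[unit]name=value' line per design variable."""
    lines = [f"[{UNITS[name]}]{name}={value:.2f}" for name, value in params.items()]
    with open(path, 'w') as f:
        f.write('\n'.join(lines) + '\n')
```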

## Iteration Folder Structure (V9)

```
study_name/
├── 1_setup/
│   ├── model/                    # Master NX files (NEVER modify)
│   └── optimization_config.json
├── 2_iterations/
│   ├── iter0/                    # Trial 0 (0-based to match Optuna)
│   │   ├── [all NX files]        # Fresh copy from master
│   │   ├── params.exp            # Expression updates for this trial
│   │   └── results/              # Processed outputs
│   ├── iter1/
│   └── ...
└── 3_results/
    └── study.db                  # Optuna database
```

### Why 0-Based Iteration Folders?

Optuna uses 0-based trial numbers. Using `iter{trial.number}` ensures:
- Dashboard shows Trial 0 -> corresponds to folder iter0
- No confusion when cross-referencing results
- Consistent indexing throughout the system
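Inside an objective function this convention reduces to one line; a sketch (the helper name and `study_dir` argument are assumptions):

```python
from pathlib import Path

def trial_dir(study_dir, trial_number):
    """Folder name matches Optuna's 0-based trial number exactly."""
    return Path(study_dir) / "2_iterations" / f"iter{trial_number}"
```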

## Lessons Learned

### 1. TPE Sampler Seed Issue

**Problem**: When resuming a study, re-initializing TPESampler with a fixed seed causes the sampler to restart its random sequence, generating duplicate parameters.

**Solution**: Only set the seed for NEW studies:
```python
if is_new_study:
    sampler = TPESampler(seed=42, ...)
else:
    sampler = TPESampler(...)  # No seed for resume
```

### 2. Code Reuse Protocol

**Problem**: Embedding 500+ lines of Zernike code in `run_optimization.py` violates the DRY principle.

**Solution**: Use the centralized extractors:
```python
from optimization_engine.extractors import ZernikeExtractor

extractor = ZernikeExtractor(op2_file)
result = extractor.extract_relative("3", "2")
rms = result['relative_filtered_rms_nm']
```

### 3. Subcase Numbering

**Problem**: NX subcase numbers (1,2,3,4) don't match angle labels (20,40,60,90).

**Solution**: Use an explicit mapping in the config and translate:
```python
subcase_labels = {"1": "90deg", "2": "20deg", "3": "40deg", "4": "60deg"}
label_to_subcase = {v: k for k, v in subcase_labels.items()}
```

### 4. OP2 Data Validation

**Problem**: Corrupt OP2 files can have all-zero or unrealistic displacement values.

**Solution**: Validate before processing:
```python
unique_values = len(np.unique(disp_z))
if unique_values < 10:
    raise RuntimeError("CORRUPT OP2: insufficient unique values")

if np.abs(disp_z).max() > 1e6:
    raise RuntimeError("CORRUPT OP2: unrealistic displacement magnitude")
```

### 5. Reference Subcase for Relative Metrics

**Problem**: Which orientation to use as reference?

**Solution**: Use the lowest operational elevation (typically 20 deg) as the reference. This makes higher elevations show positive relative WFE as gravity effects increase.

## ZernikeExtractor API Reference

### Basic Usage

```python
from optimization_engine.extractors import ZernikeExtractor

# Create extractor
extractor = ZernikeExtractor(
    op2_path="path/to/results.op2",
    bdf_path=None,            # Auto-detect from same folder
    displacement_unit="mm",
    n_modes=50,
    filter_orders=4
)

# Single subcase
result = extractor.extract_subcase("2")
# Returns: global_rms_nm, filtered_rms_nm, rms_filter_j1to3, aberrations...

# Relative between subcases
rel = extractor.extract_relative(target_subcase="3", reference_subcase="2")
# Returns: relative_filtered_rms_nm, relative_rms_filter_j1to3, ...

# All subcases with relative metrics
all_results = extractor.extract_all_subcases(reference_subcase="2")
```

### Available Metrics

| Method | Returns |
|--------|---------|
| `extract_subcase()` | global_rms_nm, filtered_rms_nm, rms_filter_j1to3, defocus_nm, astigmatism_rms_nm, coma_rms_nm, trefoil_rms_nm, spherical_nm |
| `extract_relative()` | relative_global_rms_nm, relative_filtered_rms_nm, relative_rms_filter_j1to3, relative aberrations |
| `extract_all_subcases()` | Dict of all subcases with both absolute and relative metrics |

## Configuration Template

```json
{
    "study_name": "m1_mirror_optimization",

    "design_variables": [
        {
            "name": "whiffle_min",
            "expression_name": "whiffle_min",
            "min": 35.0,
            "max": 55.0,
            "baseline": 40.55,
            "units": "mm",
            "enabled": true
        }
    ],

    "objectives": [
        {
            "name": "rel_filtered_rms_40_vs_20",
            "extractor": "zernike_relative",
            "extractor_config": {
                "target_subcase": "3",
                "reference_subcase": "2",
                "metric": "relative_filtered_rms_nm"
            },
            "direction": "minimize",
            "weight": 5.0,
            "target": 4.0
        }
    ],

    "zernike_settings": {
        "n_modes": 50,
        "filter_low_orders": 4,
        "displacement_unit": "mm",
        "subcases": ["1", "2", "3", "4"],
        "subcase_labels": {"1": "90deg", "2": "20deg", "3": "40deg", "4": "60deg"},
        "reference_subcase": "2"
    },

    "optimization_settings": {
        "sampler": "TPE",
        "seed": 42,
        "n_startup_trials": 15
    }
}
```

## Version History

| Version | Key Changes |
|---------|-------------|
| V1-V6 | Initial development, various folder structures |
| V7 | HEEDS-style iteration folders, fresh model copies |
| V8 | Autonomous NX session management, but had embedded Zernike code |
| V9 | Clean ZernikeExtractor integration, fixed sampler seed, 0-based folders |
@@ -1,385 +0,0 @@
# Protocol 10: Intelligent Multi-Strategy Optimization (IMSO)

**Status**: Active
**Version**: 2.0 (Adaptive Two-Study Architecture)
**Last Updated**: 2025-11-20

## Overview

Protocol 10 implements intelligent, adaptive optimization that automatically:
1. Characterizes the optimization landscape
2. Selects the best optimization algorithm
3. Executes optimization with the ideal strategy

**Key Innovation**: Adaptive characterization phase that intelligently determines when enough landscape exploration has been done, then seamlessly transitions to the optimal algorithm.

## Architecture

### Two-Study Approach

Protocol 10 uses a **two-study architecture** to overcome Optuna's fixed-sampler limitation:

```
┌─────────────────────────────────────────────────────────────┐
│ PROTOCOL 10: INTELLIGENT MULTI-STRATEGY OPTIMIZATION        │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│ PHASE 1: ADAPTIVE CHARACTERIZATION STUDY                    │
│ ─────────────────────────────────────────────────────────   │
│ Sampler: Random/Sobol (unbiased exploration)                │
│ Trials: 10-30 (adapts to problem complexity)                │
│                                                             │
│ Every 5 trials:                                             │
│   → Analyze landscape metrics                               │
│   → Check metric convergence                                │
│   → Calculate characterization confidence                   │
│   → Decide if ready to stop                                 │
│                                                             │
│ Stop when:                                                  │
│   ✓ Confidence ≥ 85%                                        │
│   ✓ OR max trials reached (30)                              │
│                                                             │
│ Simple problems (smooth, unimodal):                         │
│   Stop at ~10-15 trials                                     │
│                                                             │
│ Complex problems (multimodal, rugged):                      │
│   Continue to ~20-30 trials                                 │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│ TRANSITION: LANDSCAPE ANALYSIS & STRATEGY SELECTION         │
│ ─────────────────────────────────────────────────────────   │
│ Analyze final landscape:                                    │
│   - Smoothness (0-1)                                        │
│   - Multimodality (clusters of good solutions)              │
│   - Parameter correlation                                   │
│   - Noise level                                             │
│                                                             │
│ Classify landscape:                                         │
│   → smooth_unimodal                                         │
│   → smooth_multimodal                                       │
│   → rugged_unimodal                                         │
│   → rugged_multimodal                                       │
│   → noisy                                                   │
│                                                             │
│ Recommend strategy:                                         │
│   smooth_unimodal   → GP-BO (best) or CMA-ES                │
│   smooth_multimodal → GP-BO                                 │
│   rugged_multimodal → TPE                                   │
│   rugged_unimodal   → TPE or CMA-ES                         │
│   noisy             → TPE (most robust)                     │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│ PHASE 2: OPTIMIZATION STUDY                                 │
│ ─────────────────────────────────────────────────────────   │
│ Sampler: Recommended from Phase 1                           │
│ Warm Start: Initialize from best characterization point     │
│ Trials: User-specified (default 50)                         │
│                                                             │
│ Optimizes efficiently using:                                │
│   - Right algorithm for the landscape                       │
│   - Knowledge from characterization phase                   │
│   - Focused exploitation around promising regions           │
└─────────────────────────────────────────────────────────────┘
```

## Core Components

### 1. Adaptive Characterization (`adaptive_characterization.py`)

**Purpose**: Intelligently determine when enough landscape exploration has been done.

**Key Features**:
- Progressive landscape analysis (every 5 trials starting at trial 10)
- Metric convergence detection
- Complexity-aware sample adequacy
- Parameter space coverage assessment
- Confidence scoring (combines all factors)

**Confidence Calculation** (weighted sum):
```python
confidence = (
    0.40 * metric_stability_score +    # Are metrics converging?
    0.30 * parameter_coverage_score +  # Explored enough space?
    0.20 * sample_adequacy_score +     # Enough samples for complexity?
    0.10 * landscape_clarity_score     # Clear classification?
)
```

**Stopping Criteria**:
- **Minimum trials**: 10 (always gather baseline data)
- **Maximum trials**: 30 (prevent over-characterization)
- **Confidence threshold**: 85% (high confidence in landscape understanding)
- **Check interval**: Every 5 trials
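Combined, the confidence formula and stopping criteria can be sketched as a single decision function (a hypothetical helper mirroring the weights and thresholds above, not the module's actual API):

```python
def should_stop_characterization(n_trials, stability, coverage, adequacy, clarity,
                                 min_trials=10, max_trials=30, threshold=0.85):
    """Return (stop, confidence): stop once confident in the landscape,
    bounded below by min_trials and above by max_trials."""
    confidence = (0.40 * stability + 0.30 * coverage +
                  0.20 * adequacy + 0.10 * clarity)
    if n_trials < min_trials:
        return False, confidence          # always gather baseline data
    if n_trials >= max_trials:
        return True, confidence           # prevent over-characterization
    return confidence >= threshold, confidence
```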

**Adaptive Behavior**:
```python
# Simple problem (smooth, unimodal, low noise):
if smoothness > 0.6 and unimodal and noise < 0.3:
    required_samples = 10 + dimensionality
    # Stops at ~10-15 trials

# Complex problem (multimodal with N modes):
if multimodal and n_modes > 2:
    required_samples = 10 + 5 * n_modes + 2 * dimensionality
    # Continues to ~20-30 trials
```

### 2. Landscape Analyzer (`landscape_analyzer.py`)

**Purpose**: Characterize the optimization landscape from trial history.

**Metrics Computed**:

1. **Smoothness** (0-1):
   - Method: Spearman correlation between parameter distance and objective difference
   - High smoothness (>0.6): Nearby points have similar objectives (good for CMA-ES, GP-BO)
   - Low smoothness (<0.4): Rugged landscape (good for TPE)

2. **Multimodality** (boolean + n_modes):
   - Method: DBSCAN clustering on good trials (bottom 30%)
   - Detects multiple distinct regions of good solutions

3. **Parameter Correlation**:
   - Method: Spearman correlation between each parameter and the objective
   - Identifies which parameters strongly affect the objective

4. **Noise Level** (0-1):
   - Method: Local consistency check (nearby points should give similar outputs)
   - **Important**: Wide exploration range ≠ noise
   - Only true noise (simulation instability) is detected

**Landscape Classification**:
```python
'smooth_unimodal'    # Single smooth bowl → GP-BO or CMA-ES
'smooth_multimodal'  # Multiple smooth regions → GP-BO
'rugged_unimodal'    # Single rugged region → TPE or CMA-ES
'rugged_multimodal'  # Multiple rugged regions → TPE
'noisy'              # High noise level → TPE (robust)
```

### 3. Strategy Selector (`strategy_selector.py`)

**Purpose**: Recommend the best optimization algorithm based on the landscape.

**Algorithm Recommendations**:

| Landscape Type | Primary Strategy | Fallback | Rationale |
|----------------|------------------|----------|-----------|
| smooth_unimodal | GP-BO | CMA-ES | GP surrogate models smoothness explicitly |
| smooth_multimodal | GP-BO | TPE | GP handles multiple modes well |
| rugged_unimodal | TPE | CMA-ES | TPE robust to ruggedness |
| rugged_multimodal | TPE | - | TPE excellent for complex landscapes |
| noisy | TPE | - | TPE most robust to noise |
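The table reduces to a small lookup with a fallback path; a minimal sketch (the function name and the `gpbo_available` flag are assumptions, mirroring the GP-BO library dependency noted under Limitations):

```python
# Hypothetical selector mirroring the recommendation table above
RECOMMENDATIONS = {
    'smooth_unimodal':   ('GP-BO', 'CMA-ES'),
    'smooth_multimodal': ('GP-BO', 'TPE'),
    'rugged_unimodal':   ('TPE', 'CMA-ES'),
    'rugged_multimodal': ('TPE', None),
    'noisy':             ('TPE', None),
}

def select_strategy(landscape_type, gpbo_available=True):
    """Primary strategy for the landscape; fall back when GP-BO is unavailable."""
    primary, fallback = RECOMMENDATIONS[landscape_type]
    if primary == 'GP-BO' and not gpbo_available:
        return fallback
    return primary
```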

**Algorithm Characteristics**:

**GP-BO (Gaussian Process Bayesian Optimization)**:
- ✅ Best for: Smooth, expensive functions (like FEA)
- ✅ Explicit surrogate model (Gaussian Process)
- ✅ Models smoothness + uncertainty
- ✅ Acquisition function balances exploration/exploitation
- ❌ Less effective: Highly rugged landscapes

**CMA-ES (Covariance Matrix Adaptation Evolution Strategy)**:
- ✅ Best for: Smooth unimodal problems
- ✅ Fast convergence to a local optimum
- ✅ Adapts its search distribution to the landscape
- ❌ Can get stuck in local minima
- ❌ No explicit surrogate model

**TPE (Tree-structured Parzen Estimator)**:
- ✅ Best for: Multimodal, rugged, or noisy problems
- ✅ Robust to noise and discontinuities
- ✅ Good global exploration
- ❌ Slower convergence than GP-BO/CMA-ES on smooth problems

### 4. Intelligent Optimizer (`intelligent_optimizer.py`)

**Purpose**: Orchestrate the entire Protocol 10 workflow.

**Workflow**:
```
1. Create characterization study (Random/Sobol sampler)
2. Run adaptive characterization with stopping criterion
3. Analyze final landscape
4. Select optimal strategy
5. Create optimization study with recommended sampler
6. Warm-start from best characterization point
7. Run optimization
8. Generate intelligence report
```

## Usage

### Basic Usage

```python
from optimization_engine.intelligent_optimizer import IntelligentOptimizer

# Create optimizer
optimizer = IntelligentOptimizer(
    study_name="my_optimization",
    study_dir=results_dir,
    config=optimization_config,
    verbose=True
)

# Define design variables
design_vars = {
    'parameter1': (lower_bound, upper_bound),
    'parameter2': (lower_bound, upper_bound)
}

# Run Protocol 10
results = optimizer.optimize(
    objective_function=my_objective,
    design_variables=design_vars,
    n_trials=50,  # For optimization phase
    target_value=target,
    tolerance=0.1
)
```

### Configuration

Add to `optimization_config.json`:

```json
{
    "intelligent_optimization": {
        "enabled": true,
        "characterization": {
            "min_trials": 10,
            "max_trials": 30,
            "confidence_threshold": 0.85,
            "check_interval": 5
        },
        "landscape_analysis": {
            "min_trials_for_analysis": 10
        },
        "strategy_selection": {
            "allow_cmaes": true,
            "allow_gpbo": true,
            "allow_tpe": true
        }
    },
    "trials": {
        "n_trials": 50
    }
}
```

## Intelligence Report

Protocol 10 generates comprehensive reports tracking:

1. **Characterization Phase**:
   - Metric evolution (smoothness, multimodality, noise)
   - Confidence progression
   - Stopping decision details

2. **Landscape Analysis**:
   - Final landscape classification
   - Parameter correlations
   - Objective statistics

3. **Strategy Selection**:
   - Recommended algorithm
   - Decision rationale
   - Alternative strategies considered

4. **Optimization Performance**:
   - Best solution found
   - Convergence history
   - Algorithm effectiveness

## Benefits

### Efficiency
- **Simple problems**: Stops characterization early (~10-15 trials)
- **Complex problems**: Extends characterization for adequate coverage (~20-30 trials)
- **Right algorithm**: Uses the optimal strategy for the landscape type

### Robustness
- **Adaptive**: Adjusts to problem complexity automatically
- **Confidence-based**: Only stops when confident in landscape understanding
- **Fallback strategies**: Handles edge cases gracefully

### Transparency
- **Detailed reports**: Explains all decisions
- **Metric tracking**: Full history of landscape analysis
- **Reproducibility**: All decisions logged to JSON

## Example: Circular Plate Frequency Tuning

**Problem**: Tune circular plate dimensions to achieve a 115 Hz first natural frequency

**Protocol 10 Behavior**:

```
PHASE 1: CHARACTERIZATION (Trials 1-14)
  Trial  5: Landscape = smooth_unimodal (preliminary)
  Trial 10: Landscape = smooth_unimodal (confidence 72%)
  Trial 14: Landscape = smooth_unimodal (confidence 87%)

  → CHARACTERIZATION COMPLETE
  → Confidence threshold met (87% ≥ 85%)
  → Recommended Strategy: GP-BO

PHASE 2: OPTIMIZATION (Trials 15-64)
  Sampler: GP-BO (warm-started from best characterization point)
  Trial 15: 0.325 Hz error (baseline from characterization)
  Trial 23: 0.142 Hz error
  Trial 31: 0.089 Hz error
  Trial 42: 0.047 Hz error
  Trial 56: 0.012 Hz error  ← TARGET ACHIEVED!

  → Total Trials: 56 (14 characterization + 42 optimization)
  → Best Frequency: 115.012 Hz (error 0.012 Hz)
```

**Comparison** (without Protocol 10):
- TPE alone: ~95 trials to achieve the target
- Random search: ~150+ trials
- **Protocol 10: 56 trials** (41% reduction vs TPE)

## Limitations and Future Work

### Current Limitations

1. **Optuna Constraint**: Cannot change the sampler mid-study (necessitates the two-study approach)
2. **GP-BO Integration**: Requires an external GP-BO library (e.g., BoTorch, scikit-optimize)
3. **Warm Start**: Not all samplers support warm-starting equally well

### Future Enhancements

1. **Multi-Fidelity**: Extend to support cheap/expensive function evaluations
2. **Constraint Handling**: Better support for constrained optimization
3. **Transfer Learning**: Use knowledge from previous similar problems
4. **Active Learning**: More sophisticated characterization sampling

## References

- Landscape Analysis: Mersmann et al. "Exploratory Landscape Analysis" (2011)
- CMA-ES: Hansen & Ostermeier "Completely Derandomized Self-Adaptation" (2001)
- GP-BO: Snoek et al. "Practical Bayesian Optimization" (2012)
- TPE: Bergstra et al. "Algorithms for Hyper-Parameter Optimization" (2011)

## Version History

### Version 2.0 (2025-11-20)
- ✅ Added adaptive characterization with intelligent stopping
- ✅ Implemented two-study architecture (overcomes Optuna limitation)
- ✅ Fixed noise detection algorithm (local consistency instead of global CV)
- ✅ Added GP-BO as primary recommendation for smooth problems
- ✅ Comprehensive intelligence reporting

### Version 1.0 (2025-11-19)
- Initial implementation with dynamic strategy switching
- Discovered Optuna sampler limitation
- Single-study architecture (non-functional)
@@ -1,346 +0,0 @@
# Protocol 10 v2.0 - Bug Fixes

**Date**: November 20, 2025
**Version**: 2.1 (Post-Test Improvements)
**Status**: ✅ Fixed and Ready for Retesting

## Summary

After testing Protocol 10 v2.0 on the circular plate problem, we identified three issues that reduced optimization efficiency. All have been fixed.

## Test Results (Before Fixes)

**Study**: circular_plate_protocol10_v2_test
**Total trials**: 50 (40 successful, 10 pruned)
**Best result**: 0.94 Hz error (Trial #49)
**Target**: 0.1 Hz tolerance ❌ Not achieved

**Issues Found**:
1. Wrong algorithm selected (TPE instead of GP-BO)
2. False multimodality detection
3. High pruning rate (20% failures)

---

## Fix #1: Strategy Selector - Use Characterization Trial Count

### Problem

The strategy selector used the **total trial count** (including pruned trials) instead of the **characterization trial count**.

**Impact**: Characterization completed after 26 successful trials, but optimization started at trial #35 (because trials 0-34 included 9 pruned trials). The condition `trials_completed < 30` was therefore FALSE (35 > 30), so GP-BO was never selected.

**Wrong behavior**:
```python
# Characterization: 26 successful trials (trials 0-34 total)
# trials_completed = 35 at start of optimization
if trials_completed < 30:  # FALSE! (35 > 30)
    return 'gp_bo'  # Not reached
else:
    return 'tpe'  # Selected instead
```

### Solution

Use the characterization trial count from the landscape analysis, not the total trial count:

**File**: [optimization_engine/strategy_selector.py:70-72](../optimization_engine/strategy_selector.py#L70-L72)

```python
# Use the characterization trial count for strategy decisions (not total trials)
# This prevents premature algorithm selection when many trials were pruned
char_trials = landscape.get('total_trials', trials_completed)

# Decision tree for strategy selection
strategy, details = self._apply_decision_tree(
    ...
    trials_completed=char_trials  # Use characterization trials, not total
)
```

**Result**: Now correctly selects GP-BO when characterization completes at ~26 trials.

---

## Fix #2: Improve Multimodality Detection

### Problem

The landscape analyzer detected **2 modes** when the problem was actually **unimodal**.

**Evidence from test**:
- Smoothness = 0.67 (high smoothness)
- Noise = 0.15 (low noise)
- 2 modes detected → classified as "smooth_multimodal"

**Why this happened**: The circular plate has two parameter combinations that achieve similar frequencies:
- Small diameter + thick plate (~67 mm, ~7 mm)
- Medium diameter + medium plate (~83 mm, ~6.5 mm)

But these aren't separate "modes" - they're part of a **smooth continuous manifold**.

### Solution

Add a heuristic to detect false multimodality arising from smooth continuous surfaces:

**File**: [optimization_engine/landscape_analyzer.py:285-292](../optimization_engine/landscape_analyzer.py#L285-L292)

```python
# IMPROVEMENT: Detect false multimodality from smooth continuous manifolds
# If only 2 modes are detected with high smoothness and low noise,
# it's likely a continuous smooth surface, not true multimodality
if multimodal and n_modes == 2 and smoothness > 0.6 and noise < 0.2:
    if self.verbose:
        print(f"[LANDSCAPE] Reclassifying: 2 modes with smoothness={smoothness:.2f}, noise={noise:.2f}")
        print("[LANDSCAPE] This appears to be a smooth continuous manifold, not true multimodality")
    multimodal = False  # Override: treat as unimodal
```

**Updated call site**:
```python
# Pass n_modes to the classification function
landscape_type = self._classify_landscape(smoothness, multimodal, noise_level, n_modes)
```

**Result**: The circular plate will now be classified as "smooth_unimodal" → CMA-ES or GP-BO selected.

---

## Fix #3: Simulation Validation

### Problem

20% of the trials failed with OP2 extraction errors:
```
OP2 EXTRACTION FAILED: There was a Nastran FATAL Error. Check the F06.
last table=b'EQEXIN'; post=-1 version='nx'
```

**Root cause**: Extreme parameter values causing:
- Poor mesh quality (very thin or very thick plates)
- Numerical instability (extreme aspect ratios)
- Solver convergence issues

### Solution

Created a validation module that checks parameters before each simulation:

**New file**: [optimization_engine/simulation_validator.py](../optimization_engine/simulation_validator.py)

**Features**:
1. **Hard limits**: Reject invalid parameters (outside bounds)
2. **Soft limits**: Warn about risky parameters (may cause issues)
3. **Aspect ratio checks**: Validate the diameter/thickness ratio
4. **Model-specific rules**: Different rules for different FEA models
5. **Correction suggestions**: Clamp parameters to safe ranges

**Usage example**:
```python
import optuna

from optimization_engine.simulation_validator import SimulationValidator

validator = SimulationValidator(model_type='circular_plate', verbose=True)

# Before running the simulation
is_valid, warnings = validator.validate(design_variables)

if not is_valid:
    print(f"Invalid parameters: {warnings}")
    raise optuna.TrialPruned()  # Skip this trial

# Optional: auto-correct risky parameters
if warnings:
    design_variables = validator.suggest_corrections(design_variables)
```

**Validation rules for the circular plate**:
```python
{
    'inner_diameter': {
        'min': 50.0, 'max': 150.0,            # Hard limits
        'soft_min': 55.0, 'soft_max': 145.0,  # Recommended range
        'reason': 'Extreme diameters may cause meshing failures'
    },
    'plate_thickness': {
        'min': 2.0, 'max': 10.0,
        'soft_min': 2.5, 'soft_max': 9.5,
        'reason': 'Extreme thickness may cause poor element aspect ratios'
    },
    'aspect_ratio': {
        'min': 5.0, 'max': 50.0,  # diameter/thickness
        'reason': 'Poor aspect ratio can cause solver convergence issues'
    }
}
```
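
To make the aspect-ratio rule concrete, here is a minimal sketch of the check it implies; the helper name `check_aspect_ratio` is illustrative, not part of the validator's actual API:

```python
def check_aspect_ratio(inner_diameter: float, plate_thickness: float,
                       min_ratio: float = 5.0, max_ratio: float = 50.0):
    """Return (is_valid, ratio) for a diameter/thickness pair."""
    ratio = inner_diameter / plate_thickness
    return (min_ratio <= ratio <= max_ratio), ratio
```

For example, the near-optimal point (~67 mm, ~7 mm) gives a ratio of about 9.6, well inside the safe band, while an extreme corner such as (150 mm, 2 mm) gives 75 and would be rejected before the solver ever runs.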

**Result**: Prevents ~15-20% of failures by rejecting extreme parameters early.

---

## Integration Example

Here's how to use all fixes together in a new study:

```python
import optuna

from optimization_engine.intelligent_optimizer import IntelligentOptimizer
from optimization_engine.simulation_validator import SimulationValidator
from optimization_engine.nx_updater import NXParameterUpdater
from optimization_engine.nx_solver import NXSolver

# Initialize
validator = SimulationValidator(model_type='circular_plate')
updater = NXParameterUpdater(prt_file)
solver = NXSolver()

def objective(trial):
    # Sample parameters
    inner_diameter = trial.suggest_float('inner_diameter', 50, 150)
    plate_thickness = trial.suggest_float('plate_thickness', 2, 10)

    params = {
        'inner_diameter': inner_diameter,
        'plate_thickness': plate_thickness
    }

    # FIX #3: Validate before simulation
    is_valid, warnings = validator.validate(params)
    if not is_valid:
        print("  Invalid parameters - skipping trial")
        raise optuna.TrialPruned()

    # Run simulation
    updater.update_expressions(params)
    result = solver.run_simulation(sim_file, solution_name="Solution_Normal_Modes")

    if not result['success']:
        raise optuna.TrialPruned()

    # Extract and return objective
    frequency = extract_first_frequency(result['op2_file'])
    return abs(frequency - target_frequency)

# Create optimizer with fixes
optimizer = IntelligentOptimizer(
    study_name="circular_plate_with_fixes",
    study_dir=results_dir,
    config={
        "intelligent_optimization": {
            "enabled": True,
            "characterization": {
                "min_trials": 10,
                "max_trials": 30,
                "confidence_threshold": 0.85,
                "check_interval": 5
            }
        }
    },
    verbose=True
)

# Run optimization
# FIX #1 & #2 are applied automatically in the strategy selector and landscape analyzer
results = optimizer.optimize(
    objective_function=objective,
    design_variables={'inner_diameter': (50, 150), 'plate_thickness': (2, 10)},
    n_trials=50
)
```

---

## Expected Improvements

### With All Fixes Applied:

| Metric | Before Fixes | After Fixes | Improvement |
|--------|--------------|-------------|-------------|
| Algorithm selected | TPE | GP-BO → CMA-ES | ✅ Better |
| Landscape classification | smooth_multimodal | smooth_unimodal | ✅ Correct |
| Pruning rate | 20% (10/50) | ~5% (2-3/50) | ✅ 75% reduction |
| Total successful trials | 40 | ~47-48 | ✅ +18% |
| Expected best error | 0.94 Hz | **<0.1 Hz** | ✅ Target achieved |
| Trials to convergence | 50+ | ~35-40 | ✅ 20-30% faster |

### Algorithm Performance Comparison:

**TPE** (used before the fixes):
- Good for: multimodal, robust, general-purpose
- Convergence: slower on smooth problems
- Result: 0.94 Hz in 50 trials

**GP-BO → CMA-ES** (used after the fixes):
- Good for: smooth landscapes, sample-efficient
- Convergence: faster local refinement
- Expected: 0.05-0.1 Hz in 35-40 trials

---

## Testing Plan

### Retest Protocol 10 v2.1:

1. **Delete the old study**:
   ```bash
   rm -rf studies/circular_plate_protocol10_v2_test
   ```

2. **Create a new study** with the same config:
   ```bash
   python create_protocol10_v2_test_study.py
   ```

3. **Run the optimization**:
   ```bash
   cd studies/circular_plate_protocol10_v2_test
   python run_optimization.py
   ```

4. **Verify the fixes**:
   - Check `intelligence_report.json`: should recommend GP-BO, not TPE
   - Check `characterization_progress.json`: should show the "smooth_unimodal" reclassification
   - Check the pruned trial count: should be ≤3 (down from 10)
   - Check the final result: should achieve <0.1 Hz error

---

## Files Modified

1. ✅ [optimization_engine/strategy_selector.py](../optimization_engine/strategy_selector.py#L70-L82)
   - Fixed: Use characterization trial count for decisions

2. ✅ [optimization_engine/landscape_analyzer.py](../optimization_engine/landscape_analyzer.py#L77)
   - Fixed: Pass n_modes to `_classify_landscape()`

3. ✅ [optimization_engine/landscape_analyzer.py](../optimization_engine/landscape_analyzer.py#L285-L292)
   - Fixed: Detect false multimodality from smooth manifolds

4. ✅ [optimization_engine/simulation_validator.py](../optimization_engine/simulation_validator.py) (NEW)
   - Added: Parameter validation before simulations

5. ✅ [docs/PROTOCOL_10_V2_FIXES.md](PROTOCOL_10_V2_FIXES.md) (NEW - this file)
   - Added: Complete documentation of the fixes

---

## Version History

### Version 2.1 (2025-11-20)
- Fixed strategy selector timing logic
- Improved multimodality detection
- Added simulation parameter validation
- Reduced pruning rate from 20% to ~5%

### Version 2.0 (2025-11-20)
- Adaptive characterization implemented
- Two-study architecture
- GP-BO/CMA-ES/TPE support

### Version 1.0 (2025-11-17)
- Initial Protocol 10 implementation
- Fixed characterization trials (15)
- Basic strategy selection

---

**Status**: ✅ All fixes implemented and ready for retesting
**Next step**: Run the retest to validate improvements
**Expected outcome**: Achieve the 0.1 Hz tolerance in ~35-40 trials

@@ -1,359 +0,0 @@
# Protocol 10 v2.0 Implementation Summary

**Date**: November 20, 2025
**Version**: 2.0 - Adaptive Two-Study Architecture
**Status**: ✅ Complete and Ready for Testing

## What Was Implemented

### 1. Adaptive Characterization Module

**File**: [`optimization_engine/adaptive_characterization.py`](../optimization_engine/adaptive_characterization.py)

**Purpose**: Intelligently determines when enough landscape exploration has been done during the characterization phase.

**Key Features**:
- Progressive landscape analysis (every 5 trials, starting at trial 10)
- Metric convergence detection (smoothness, multimodality, noise stability)
- Complexity-aware sample adequacy (simple problems need fewer trials)
- Parameter space coverage assessment
- Confidence scoring (weighted combination of all factors)

**Adaptive Behavior**:
```python
# Simple problem (smooth, unimodal):
required_samples = 10 + dimensionality
# Stops at ~10-15 trials

# Complex problem (multimodal with N modes):
required_samples = 10 + 5 * n_modes + 2 * dimensionality
# Continues to ~20-30 trials
```

**Confidence Calculation**:
```python
confidence = (
    0.40 * metric_stability_score +    # Are metrics converging?
    0.30 * parameter_coverage_score +  # Explored enough space?
    0.20 * sample_adequacy_score +     # Enough samples for complexity?
    0.10 * landscape_clarity_score     # Clear classification?
)
```

**Stopping Criteria**:
- **Minimum trials**: 10 (always gather baseline data)
- **Maximum trials**: 30 (prevent over-characterization)
- **Confidence threshold**: 85% (high confidence required)
- **Check interval**: every 5 trials
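
The four criteria above combine into a simple rule. A sketch, with illustrative names rather than the module's actual API:

```python
def should_stop_characterization(n_trials: int, confidence: float,
                                 min_trials: int = 10, max_trials: int = 30,
                                 confidence_threshold: float = 0.85) -> bool:
    """Sketch of the stopping rule implied by the criteria above."""
    if n_trials < min_trials:
        return False   # always gather baseline data first
    if n_trials >= max_trials:
        return True    # hard cap: prevent over-characterization
    return confidence >= confidence_threshold
```

In practice the check only runs every `check_interval` trials, so the loop evaluates this rule after each batch rather than after every single trial.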

### 2. Updated Intelligent Optimizer

**File**: [`optimization_engine/intelligent_optimizer.py`](../optimization_engine/intelligent_optimizer.py)

**Changes**:
- Integrated `CharacterizationStoppingCriterion` into the optimization workflow
- Replaced the fixed characterization trial count with an adaptive loop
- Added characterization summary reporting

**New Workflow**:
```python
# Stage 1: Adaptive Characterization
stopping_criterion = CharacterizationStoppingCriterion(...)

while not stopping_criterion.should_stop(study):
    study.optimize(objective, n_trials=check_interval)  # Run a batch
    landscape = analyzer.analyze(study)                 # Analyze
    stopping_criterion.update(landscape, n_trials)      # Update confidence

# Stage 2: Strategy Selection (based on the final landscape)
strategy = selector.recommend_strategy(landscape)

# Stage 3: Optimization (with the recommended strategy)
optimization_study = create_study(recommended_sampler)
optimization_study.optimize(objective, n_trials=remaining)
```

### 3. Comprehensive Documentation

**File**: [`docs/PROTOCOL_10_IMSO.md`](PROTOCOL_10_IMSO.md)

**Contents**:
- Complete Protocol 10 architecture explanation
- Two-study approach rationale
- Adaptive characterization details
- Algorithm recommendations (GP-BO, CMA-ES, TPE)
- Usage examples
- Expected performance (41% reduction vs TPE alone)
- Comparison with Version 1.0

**File**: [`docs/INDEX.md`](INDEX.md) - Updated

**Changes**:
- Added Protocol 10 to the Architecture & Design section
- Added to the Key Files reference table
- Positioned as an advanced optimization technique

### 4. Test Script

**File**: [`test_adaptive_characterization.py`](../test_adaptive_characterization.py)

**Purpose**: Validate that adaptive characterization behaves correctly for different problem types.

**Tests**:
1. **Simple Smooth Quadratic**: Expected ~10-15 trials
2. **Complex Multimodal (Rastrigin)**: Expected ~15-30 trials

**How to Run**:
```bash
python test_adaptive_characterization.py
```

## Configuration

### Old Config (v1.0):
```json
{
  "intelligent_optimization": {
    "enabled": true,
    "characterization_trials": 15,  // Fixed!
    "min_analysis_trials": 10,
    "stagnation_window": 10,
    "min_improvement_threshold": 0.001
  }
}
```

### New Config (v2.0):
```json
{
  "intelligent_optimization": {
    "enabled": true,
    "characterization": {
      "min_trials": 10,
      "max_trials": 30,
      "confidence_threshold": 0.85,
      "check_interval": 5
    },
    "landscape_analysis": {
      "min_trials_for_analysis": 10
    },
    "strategy_selection": {
      "allow_cmaes": true,
      "allow_gpbo": true,
      "allow_tpe": true
    }
  },
  "trials": {
    "n_trials": 50  // For the optimization phase
  }
}
```

## Intelligence Added

### Problem: How to determine the characterization trial count?

**Old Approach (v1.0)**:
- Fixed 15 trials for all problems
- Wasteful for simple problems (which only need ~10 trials)
- Insufficient for complex problems (which may need ~25 trials)

**New Approach (v2.0) - Adaptive Intelligence**:

1. **Metric Stability Detection**:
   ```python
   # Track smoothness over the last 3 analyses
   smoothness_values = [0.72, 0.68, 0.71]  # Converging!
   smoothness_std = 0.017                  # Low variance = stable
   if smoothness_std < 0.05:
       metric_stable = True  # Confident in the measurement
   ```

2. **Complexity-Aware Sample Adequacy**:
   ```python
   if multimodal and n_modes > 2:
       # Complex: need to sample multiple regions
       required = 10 + 5 * n_modes + 2 * dims
   elif smooth and unimodal:
       # Simple: quick convergence expected
       required = 10 + dims
   ```

3. **Parameter Coverage Assessment**:
   ```python
   # Check whether enough of each parameter's range has been explored
   for param in params:
       coverage = (explored_max - explored_min) / (bound_max - bound_min)
       # Need at least 50% coverage for confidence
   ```

4. **Landscape Clarity**:
   ```python
   # A clear classification = confident stopping
   if smoothness > 0.7 or smoothness < 0.3:  # Very smooth or very rugged
       clarity_high = True
   if noise < 0.3 or noise > 0.7:            # Clearly low or clearly high noise
       clarity_high = True
   ```

### Result: Self-Adapting Characterization

**Simple Problem Example** (circular plate frequency tuning):
```
Trial 5:  Landscape = smooth_unimodal (preliminary)
Trial 10: Landscape = smooth_unimodal (confidence 72%)
  - Smoothness stable (0.71 ± 0.02)
  - Unimodal confirmed
  - Coverage adequate (60%)

Trial 15: Landscape = smooth_unimodal (confidence 87%)
  - All metrics converged
  - Clear classification

STOP: Confidence threshold met (87% ≥ 85%)
Total characterization trials: 14
```

**Complex Problem Example** (multimodal with 4 modes):
```
Trial 10: Landscape = multimodal (preliminary, 3 modes)
Trial 15: Landscape = multimodal (confidence 58%, 4 modes detected)
  - Multimodality still evolving
  - Need more coverage

Trial 20: Landscape = rugged_multimodal (confidence 71%, 4 modes)
  - Classification stable
  - Coverage improving (55%)

Trial 25: Landscape = rugged_multimodal (confidence 86%, 4 modes)
  - All metrics converged
  - Adequate coverage (62%)

STOP: Confidence threshold met (86% ≥ 85%)
Total characterization trials: 26
```

## Benefits

### Efficiency
- ✅ **Simple problems**: Stop early (~10-15 trials) → 33% reduction
- ✅ **Complex problems**: Extend as needed (~20-30 trials) → adequate coverage
- ✅ **No wasted trials**: Only characterize as much as necessary

### Robustness
- ✅ **Adaptive**: Adjusts to problem complexity automatically
- ✅ **Confidence-based**: Only stops when metrics are stable
- ✅ **Bounded**: Min 10, max 30 trials (safety limits)

### Transparency
- ✅ **Detailed reports**: Explains all stopping decisions
- ✅ **Metric tracking**: Full history of convergence
- ✅ **Reproducibility**: All logged to JSON

## Example Usage

```python
from pathlib import Path

from optimization_engine.intelligent_optimizer import IntelligentOptimizer

# Create optimizer with adaptive characterization config
config = {
    "intelligent_optimization": {
        "enabled": True,
        "characterization": {
            "min_trials": 10,
            "max_trials": 30,
            "confidence_threshold": 0.85,
            "check_interval": 5
        }
    },
    "trials": {
        "n_trials": 50  # For the optimization phase after characterization
    }
}

optimizer = IntelligentOptimizer(
    study_name="my_optimization",
    study_dir=Path("results"),
    config=config,
    verbose=True
)

# Define design variables
design_vars = {
    'parameter1': (lower1, upper1),
    'parameter2': (lower2, upper2)
}

# Run Protocol 10 with adaptive characterization
results = optimizer.optimize(
    objective_function=my_objective,
    design_variables=design_vars,
    n_trials=50,  # Only for the optimization phase
    target_value=115.0,
    tolerance=0.1
)

# Characterization stops automatically after 10-30 trials;
# optimization then uses the recommended algorithm for the remaining trials
```

## Testing Recommendations

1. **Unit Test**: Run `test_adaptive_characterization.py`
   - Validates adaptive behavior on toy problems
   - Expected: the simple problem stops early, the complex problem continues

2. **Integration Test**: Run the existing circular plate study
   - Should stop characterization at ~12-15 trials (smooth unimodal)
   - Compare with the fixed 15-trial approach (should be similar or better)

3. **Stress Test**: Create a highly multimodal FEA problem
   - Should extend characterization to ~25-30 trials
   - Verify adequate coverage of multiple modes

## Next Steps

1. **Test on a Real FEA Problem**: Use the circular plate frequency tuning study
2. **Validate Stopping Decisions**: Review the characterization logs
3. **Benchmark Performance**: Compare v2.0 vs v1.0 trial efficiency
4. **GP-BO Integration**: Add Gaussian Process Bayesian Optimization support
5. **Two-Study Implementation**: Complete the transition to the new optimization study

## Version Comparison

| Feature | v1.0 | v2.0 |
|---------|------|------|
| Characterization trials | Fixed (15) | Adaptive (10-30) |
| Problem complexity aware | ❌ No | ✅ Yes |
| Metric convergence detection | ❌ No | ✅ Yes |
| Confidence scoring | ❌ No | ✅ Yes |
| Simple problem efficiency | 15 trials | ~12 trials (20% reduction) |
| Complex problem adequacy | 15 trials (may be insufficient) | ~25 trials (adequate) |
| Transparency | Basic logs | Comprehensive reports |
| Algorithm recommendation | TPE/CMA-ES | GP-BO/CMA-ES/TPE |

## Files Modified

1. ✅ `optimization_engine/adaptive_characterization.py` (NEW)
2. ✅ `optimization_engine/intelligent_optimizer.py` (UPDATED)
3. ✅ `docs/PROTOCOL_10_IMSO.md` (NEW)
4. ✅ `docs/INDEX.md` (UPDATED)
5. ✅ `test_adaptive_characterization.py` (NEW)
6. ✅ `docs/PROTOCOL_10_V2_IMPLEMENTATION.md` (NEW - this file)

## Success Criteria

✅ Adaptive characterization module implemented
✅ Integration with intelligent optimizer complete
✅ Comprehensive documentation written
✅ Test script created
✅ Configuration updated
✅ All code compiles without errors

**Status**: READY FOR TESTING ✅

---

**Last Updated**: November 20, 2025
**Implementation Time**: ~2 hours
**Lines of Code Added**: ~600 lines (module + docs + tests)

@@ -1,142 +0,0 @@
# Fix Summary: Protocol 11 - Multi-Objective Support

**Date:** 2025-11-21
**Issue:** IntelligentOptimizer crashes on multi-objective optimization studies
**Status:** ✅ FIXED

## Root Cause

The IntelligentOptimizer (Protocol 10) was hardcoded for single-objective optimization only. When used with multi-objective studies:

1. **Trials executed successfully** - all simulations ran and data was saved to `study.db`
2. **Crash during result compilation** - failed when accessing `study.best_trial`/`best_params`/`best_value`
3. **No tracking files generated** - the intelligent_optimizer folder remained empty
4. **Silent failure** - the error was only visible in console output, not in the results

## Files Modified

### 1. `optimization_engine/intelligent_optimizer.py`

**Changes:**
- Added a `self.directions` attribute to store the study type
- Modified `_compile_results()` to handle both single- and multi-objective studies (lines 327-370)
- Modified `_run_fallback_optimization()` to handle both cases (lines 372-413)
- Modified `_print_final_summary()` to format multi-objective values correctly (lines 427-445)
- Added a Protocol 11 initialization message (lines 116-119)

**Key Fix:**
```python
def _compile_results(self) -> Dict[str, Any]:
    is_multi_objective = len(self.study.directions) > 1

    if is_multi_objective:
        best_trials = self.study.best_trials  # Pareto front
        representative_trial = best_trials[0] if best_trials else None
        # ...
    else:
        best_params = self.study.best_params  # Single-objective API
        # ...
```

### 2. `optimization_engine/landscape_analyzer.py`

**Changes:**
- Modified `print_landscape_report()` to handle `None` input (lines 346-354)
- Added a check for multi-objective studies

**Key Fix:**
```python
def print_landscape_report(landscape: Dict, verbose: bool = True):
    # Handle None (multi-objective studies)
    if landscape is None:
        print("\n  [LANDSCAPE ANALYSIS] Skipped for multi-objective optimization")
        return
```

### 3. `optimization_engine/strategy_selector.py`

**Changes:**
- Modified `recommend_strategy()` to handle a `None` landscape (lines 58-61)
- Added a `None` check before calling `.get()` on the landscape dict

**Key Fix:**
```python
def recommend_strategy(...):
    # Handle None landscape (multi-objective optimization)
    if landscape is None or not landscape.get('ready', False):
        return self._recommend_random_exploration(trials_completed)
```

### 4. `studies/bracket_stiffness_optimization/run_optimization.py`

**Changes:**
- Fixed the landscape_analysis `None` check in results printing (line 251)

**Key Fix:**
```python
if 'landscape_analysis' in results and results['landscape_analysis'] is not None:
    print(f"  Landscape Type: {results['landscape_analysis'].get('landscape_type', 'N/A')}")
```

### 5. `atomizer-dashboard/frontend/src/pages/Dashboard.tsx`

**Changes:**
- Removed hardcoded "Hz" units from objective values and metrics
- Made the dashboard generic for all optimization types

**Specific edits:**
- Line 204: Removed " Hz" from the Best Value metric
- Line 209: Removed " Hz" from the Avg Objective metric
- Line 242: Changed the Y-axis label from "Objective (Hz)" to "Objective"
- Line 298: Removed " Hz" from the parameter space tooltip
- Line 341: Removed " Hz" from the trial feed objective display
- Line 43: Removed " Hz" from the new-best alert message

### 6. `docs/PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md`

**Created:** Comprehensive documentation explaining:
- The problem and root cause
- The solution pattern
- Implementation checklist
- Testing protocol
- Files that need review

## Testing

Tested with the bracket_stiffness_optimization study:
- **Objectives:** Maximize stiffness, minimize mass
- **Directions:** `["minimize", "minimize"]` (multi-objective)
- **Expected:** Complete successfully with all tracking files

## Results

❌ **Before Fix:**
- study.db created ✓
- intelligent_optimizer/ EMPTY ✗
- optimization_summary.json MISSING ✗
- RuntimeError in console ✗

✅ **After Fix:**
- study.db created ✓
- intelligent_optimizer/ populated ✓
- optimization_summary.json created ✓
- No errors ✓
- Protocol 11 message displayed ✓

## Lessons Learned

1. **Always test both single- and multi-objective cases**
2. **Check for `None` before calling `.get()` on dict-like objects**
3. **Multi-objective support must be baked into the design, not added later**
4. **Silent failures are dangerous - always validate that output files exist**

## Future Work

- [ ] Review the files listed in the Protocol 11 documentation for similar issues
- [ ] Add unit tests for multi-objective support in all optimizers
- [ ] Create a helper function `get_best_solution(study)` covering both cases
- [ ] Add validation checks in study creation to warn about configuration issues

## Conclusion

Protocol 11 is now **MANDATORY** for all optimization components. Any code that accesses `study.best_trial`, `study.best_params`, or `study.best_value` MUST first check whether the study is multi-objective and handle it appropriately.
|
||||
@@ -1,177 +0,0 @@
|
||||
# Protocol 11: Multi-Objective Optimization Support

**Status:** MANDATORY
**Applies To:** ALL optimization studies
**Last Updated:** 2025-11-21

## Overview

ALL optimization engines in Atomizer MUST support both single-objective and multi-objective optimization without requiring code changes. This is a **critical requirement** that prevents runtime failures.

## The Problem

Previously, IntelligentOptimizer (Protocol 10) only supported single-objective optimization. When used with multi-objective studies, it would:
1. Successfully run all trials
2. Save trials to the Optuna database (`study.db`)
3. **CRASH** when trying to compile results, causing:
   - No intelligent optimizer tracking files (confidence_history.json, strategy_transitions.json)
   - No optimization_summary.json
   - No final reports
   - Silent failures that are hard to debug

## The Root Cause

Optuna has different APIs for single vs. multi-objective studies:

### Single-Objective
```python
study.best_trial   # Returns single Trial object
study.best_params  # Returns dict of parameters
study.best_value   # Returns float
```

### Multi-Objective
```python
study.best_trials  # Returns LIST of Pareto-optimal trials
study.best_params  # ❌ RAISES RuntimeError
study.best_value   # ❌ RAISES RuntimeError
study.best_trial   # ❌ RAISES RuntimeError
```

## The Solution

### 1. Always Check Study Type

```python
is_multi_objective = len(study.directions) > 1
```

### 2. Use Conditional Access Patterns

```python
if is_multi_objective:
    best_trials = study.best_trials
    if best_trials:
        # Select representative trial (e.g., first Pareto solution)
        representative_trial = best_trials[0]
        best_params = representative_trial.params
        best_value = representative_trial.values  # Tuple
        best_trial_num = representative_trial.number
    else:
        best_params = {}
        best_value = None
        best_trial_num = None
else:
    # Single-objective: safe to use standard API
    best_params = study.best_params
    best_value = study.best_value
    best_trial_num = study.best_trial.number
```

### 3. Return Rich Metadata

Always include in results:
```python
{
    'best_params': best_params,
    'best_value': best_value,  # float or tuple
    'best_trial': best_trial_num,
    'is_multi_objective': is_multi_objective,
    'pareto_front_size': len(study.best_trials) if is_multi_objective else 1,
    # ... other fields
}
```

## Implementation Checklist

When creating or modifying any optimization component:

- [ ] **Study Creation**: Support `directions` parameter
  ```python
  if directions:
      study = optuna.create_study(directions=directions, ...)
  else:
      study = optuna.create_study(direction='minimize', ...)
  ```
- [ ] **Result Compilation**: Check `len(study.directions) > 1`
- [ ] **Best Trial Access**: Use conditional logic (single vs. multi)
- [ ] **Logging**: Print Pareto front size for multi-objective
- [ ] **Reports**: Handle tuple objectives in visualization
- [ ] **Testing**: Test with BOTH single and multi-objective cases

## Files Fixed

- ✅ `optimization_engine/intelligent_optimizer.py`
  - `_compile_results()` method
  - `_run_fallback_optimization()` method

## Files That Need Review

Check these files for similar issues:

- [ ] `optimization_engine/study_continuation.py` (lines 96, 259-260)
- [ ] `optimization_engine/hybrid_study_creator.py` (line 468)
- [ ] `optimization_engine/intelligent_setup.py` (line 606)
- [ ] `optimization_engine/llm_optimization_runner.py` (line 384)

## Testing Protocol

Before marking any optimization study as complete:

1. **Single-Objective Test**
   ```python
   directions=None  # or ['minimize']
   # Should complete without errors
   ```

2. **Multi-Objective Test**
   ```python
   directions=['minimize', 'minimize']
   # Should complete without errors
   # Should generate ALL tracking files
   ```

3. **Verify Outputs**
   - `2_results/study.db` exists
   - `2_results/intelligent_optimizer/` has tracking files
   - `2_results/optimization_summary.json` exists
   - No RuntimeError in logs
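The output checks above can be scripted; the sketch below is illustrative (the `verify_study_outputs` helper is hypothetical, but the paths are the ones listed above):

```python
from pathlib import Path

def verify_study_outputs(study_dir):
    """Return a pass/fail map for the study-output checks listed above (sketch)."""
    root = Path(study_dir) / "2_results"
    tracking = root / "intelligent_optimizer"
    return {
        "study.db": (root / "study.db").exists(),
        "tracking_files": any(tracking.glob("*.json")) if tracking.is_dir() else False,
        "summary": (root / "optimization_summary.json").exists(),
    }
```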

## Design Principle

**"Write Once, Run Anywhere"**

Any optimization component should:
1. Accept both single and multi-objective problems
2. Automatically detect the study type
3. Handle result compilation appropriately
4. Never raise RuntimeError due to API misuse

## Example: Bracket Study

The bracket_stiffness_optimization study is multi-objective:
- Objective 1: Maximize stiffness (minimize -stiffness)
- Objective 2: Minimize mass
- Constraint: mass ≤ 0.2 kg

This study exposed the bug because:
```python
directions = ["minimize", "minimize"]  # Multi-objective
```

After the fix, it should:
- Run all 50 trials successfully
- Generate a Pareto front with multiple solutions
- Save all intelligent optimizer tracking files
- Create complete reports with tuple objectives

## Future Work

- Add explicit validation in `IntelligentOptimizer.__init__()` to warn about common mistakes
- Create helper function `get_best_solution(study)` that handles both cases
- Add unit tests for multi-objective support in all optimizers

---

**Remember:** Multi-objective support is NOT optional. It's a core requirement for production-ready optimization engines.

@@ -1,386 +0,0 @@

# Protocol 13: Real-Time Dashboard Tracking

**Status**: ✅ COMPLETED (Enhanced December 2025)
**Date**: November 21, 2025 (Last Updated: December 3, 2025)
**Priority**: P1 (Critical)

## Overview

Protocol 13 implements a comprehensive real-time web dashboard for monitoring multi-objective optimization studies. It provides live visualization of optimizer state, Pareto fronts, parallel coordinates, and trial history.

## Architecture

### Backend Components

#### 1. Real-Time Tracking System
**File**: `optimization_engine/realtime_tracking.py`

- **Per-Trial JSON Writes**: Writes `optimizer_state.json` after every trial completion
- **Optimizer State Tracking**: Captures current phase, strategy, trial progress
- **Multi-Objective Support**: Tracks study directions and Pareto front status

```python
def create_realtime_callback(tracking_dir, optimizer_ref, verbose=False):
    """Creates Optuna callback for per-trial JSON writes"""
    # Writes to: {study_dir}/2_results/intelligent_optimizer/optimizer_state.json
```

**Data Structure**:
```json
{
  "timestamp": "2025-11-21T15:27:28.828930",
  "trial_number": 29,
  "total_trials": 50,
  "current_phase": "adaptive_optimization",
  "current_strategy": "GP_UCB",
  "is_multi_objective": true,
  "study_directions": ["maximize", "minimize"]
}
```

#### 2. REST API Endpoints
**File**: `atomizer-dashboard/backend/api/routes/optimization.py`

**New Protocol 13 Endpoints**:

1. **GET `/api/optimization/studies/{study_id}/metadata`**
   - Returns objectives, design variables, constraints with units
   - Implements unit inference from descriptions
   - Supports Protocol 11 multi-objective format

2. **GET `/api/optimization/studies/{study_id}/optimizer-state`**
   - Returns real-time optimizer state from JSON
   - Shows current phase and strategy
   - Updates every trial

3. **GET `/api/optimization/studies/{study_id}/pareto-front`**
   - Returns Pareto-optimal solutions for multi-objective studies
   - Uses Optuna's `study.best_trials` API
   - Includes constraint satisfaction status

**Unit Inference Function**:
```python
def _infer_objective_unit(objective: Dict) -> str:
    """Infer unit from objective name and description"""
    # Pattern matching: frequency→Hz, stiffness→N/mm, mass→kg
    # Regex extraction: "(N/mm)" from description
```
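The listing above shows only the function's docstring and comments. A minimal sketch of the described behavior — parenthesized-unit extraction first, then keyword fallback — might look like this (the function name and exact patterns here are illustrative, not the dashboard's actual code):

```python
import re

def infer_objective_unit(objective: dict) -> str:
    """Guess a display unit from an objective's name/description (sketch)."""
    description = objective.get("description", "")
    # Regex extraction: a parenthesized unit such as "(N/mm)" wins outright
    match = re.search(r"\(([A-Za-z0-9/%^·]+)\)", description)
    if match:
        return match.group(1)
    # Keyword fallback: frequency→Hz, stiffness→N/mm, mass→kg
    text = (objective.get("name", "") + " " + description).lower()
    for keyword, unit in [("frequen", "Hz"), ("stiff", "N/mm"), ("mass", "kg")]:
        if keyword in text:
            return unit
    return ""
```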

### Frontend Components

#### 1. OptimizerPanel Component
**File**: `atomizer-dashboard/frontend/src/components/OptimizerPanel.tsx`

**Features**:
- Real-time phase display (Characterization, Exploration, Exploitation, Adaptive)
- Current strategy indicator (TPE, GP, NSGA-II, etc.)
- Progress bar with trial count
- Multi-objective study detection
- Auto-refresh every 2 seconds

**Visual Design**:
```
┌─────────────────────────────────┐
│ Intelligent Optimizer Status    │
├─────────────────────────────────┤
│ Phase: [Adaptive Optimization]  │
│ Strategy: [GP_UCB]              │
│ Progress: [████████░░] 29/50    │
│ Multi-Objective: ✓              │
└─────────────────────────────────┘
```

#### 2. ParetoPlot Component
**File**: `atomizer-dashboard/frontend/src/components/ParetoPlot.tsx`

**Features**:
- Scatter plot of Pareto-optimal solutions
- Pareto front line connecting optimal points
- **3 Normalization Modes**:
  - **Raw**: Original engineering values
  - **Min-Max**: Scales to [0, 1] for equal comparison
  - **Z-Score**: Standardizes to mean=0, std=1
- Tooltip shows raw values regardless of normalization
- Color-coded feasibility (green=feasible, red=infeasible)
- Dynamic axis labels with units

**Normalization Math**:
```typescript
// Min-Max: (x - min) / (max - min) → [0, 1]
// Z-Score: (x - mean) / std → standardized
```
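Written out in Python for clarity, the same two transforms look like this (a sketch; the component implements them in TypeScript):

```python
def min_max(values):
    """Scale values into [0, 1]; a constant series maps to all zeros."""
    lo, hi = min(values), max(values)
    span = hi - lo
    return [(v - lo) / span if span else 0.0 for v in values]

def z_score(values):
    """Standardize values to mean 0 and (population) std 1."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std if std else 0.0 for v in values]
```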

#### 3. ParallelCoordinatesPlot Component
**File**: `atomizer-dashboard/frontend/src/components/ParallelCoordinatesPlot.tsx`

**Features**:
- High-dimensional visualization (objectives + design variables)
- Interactive trial selection (click to toggle, hover to highlight)
- Normalized [0, 1] axes for all dimensions
- Color coding: green (feasible), red (infeasible), yellow (selected)
- Opacity management: non-selected lines fade to 10% when a selection is active
- Clear selection button

**Visualization Structure**:
```
Stiffness    Mass    support_angle    tip_thickness
    |          |           |               |
    |    ╱─────╲           ╱               |
    |   ╱       ╲─────────╱                |
    |  ╱                   ╲               |
```

#### 4. ConvergencePlot Component (NEW - December 2025)
**File**: `atomizer-dashboard/frontend/src/components/ConvergencePlot.tsx`

**Features**:
- Dual-line visualization: trial values + running best
- Area fill gradient under trial curve
- Statistics header: Best value, Improvement %, 90% convergence trial
- Summary footer: First value, Mean, Std Dev, Total trials
- Step-after interpolation for running best line
- Reference line at best value
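The "running best" series the plot draws is just a cumulative best-so-far over the trial values; a sketch, assuming a minimization objective by default:

```python
from itertools import accumulate

def running_best(trial_values, direction="minimize"):
    """Best-so-far value after each trial (sketch)."""
    op = min if direction == "minimize" else max
    return list(accumulate(trial_values, op))
```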

#### 5. ParameterImportanceChart Component (NEW - December 2025)
**File**: `atomizer-dashboard/frontend/src/components/ParameterImportanceChart.tsx`

**Features**:
- Pearson correlation between parameters and objectives
- Horizontal bar chart sorted by absolute importance
- Color coding: Green (negative correlation), Red (positive correlation)
- Tooltip with percentage and raw correlation coefficient
- Requires minimum 3 trials for statistical analysis
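The importance metric described above is plain Pearson's r between each parameter column and an objective column; a minimal Python sketch of that computation (not the component's actual code — the trial record shape used here is hypothetical):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def parameter_importance(trials, objective_key):
    """Map each parameter name to |r| against the objective (needs >= 3 trials)."""
    if len(trials) < 3:
        return {}
    ys = [t[objective_key] for t in trials]
    return {name: abs(pearson_r([t["params"][name] for t in trials], ys))
            for name in trials[0]["params"]}
```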

#### 6. StudyReportViewer Component (NEW - December 2025)
**File**: `atomizer-dashboard/frontend/src/components/StudyReportViewer.tsx`

**Features**:
- Full-screen modal for viewing STUDY_REPORT.md
- KaTeX math equation rendering (`$...$` inline, `$$...$$` block)
- GitHub-flavored markdown (tables, code blocks, task lists)
- Custom dark theme styling for all markdown elements
- Refresh button for live updates
- External link to open in system editor

#### 7. Dashboard Integration
**File**: `atomizer-dashboard/frontend/src/pages/Dashboard.tsx`

**Layout Structure**:
```
┌──────────────────────────────────────────────────┐
│ Study Selection                                  │
├──────────────────────────────────────────────────┤
│ Metrics Grid (Best, Avg, Trials, Pruned)         │
├──────────────────────────────────────────────────┤
│ [OptimizerPanel]        [ParetoPlot]             │
├──────────────────────────────────────────────────┤
│ [ParallelCoordinatesPlot - Full Width]           │
├──────────────────────────────────────────────────┤
│ [Convergence]           [Parameter Space]        │
├──────────────────────────────────────────────────┤
│ [Recent Trials Table]                            │
└──────────────────────────────────────────────────┘
```

**Dynamic Units**:
- `getParamLabel()` helper function looks up units from metadata
- Applied to Parameter Space chart axes
- Format: `"support_angle (degrees)"`, `"tip_thickness (mm)"`

## Integration with Existing Protocols

### Protocol 10: Intelligent Optimizer
- Real-time callback integrated into `IntelligentOptimizer.optimize()`
- Tracks phase transitions (characterization → adaptive optimization)
- Reports strategy changes
- Location: `optimization_engine/intelligent_optimizer.py:117-121`

### Protocol 11: Multi-Objective Support
- Pareto front endpoint checks `len(study.directions) > 1`
- Dashboard conditionally renders Pareto plots
- Handles both single and multi-objective studies gracefully
- Uses Optuna's `study.best_trials` for Pareto front

### Protocol 12: Unified Extraction Library
- Extractors provide objective values for dashboard visualization
- Units defined in extractor classes flow to dashboard
- Consistent data format across all studies

## Data Flow

```
Trial Completion (Optuna)
        ↓
Realtime Callback (optimization_engine/realtime_tracking.py)
        ↓
Write optimizer_state.json
        ↓
Backend API /optimizer-state endpoint
        ↓
Frontend OptimizerPanel (2s polling)
        ↓
User sees live updates
```

## Testing

### Tested With
- **Study**: `bracket_stiffness_optimization_V2`
- **Trials**: 50 (30 completed in testing)
- **Objectives**: 2 (stiffness maximize, mass minimize)
- **Design Variables**: 2 (support_angle, tip_thickness)
- **Pareto Solutions**: 20 identified
- **Dashboard Port**: 3001 (frontend) + 8000 (backend)

### Verified Features
✅ Real-time optimizer state updates
✅ Pareto front visualization with line
✅ Normalization toggle (Raw, Min-Max, Z-Score)
✅ Parallel coordinates with selection
✅ Dynamic units from config
✅ Multi-objective detection
✅ Constraint satisfaction coloring

## File Structure

```
atomizer-dashboard/
├── backend/
│   └── api/
│       └── routes/
│           └── optimization.py (Protocol 13 endpoints)
└── frontend/
    └── src/
        ├── components/
        │   ├── OptimizerPanel.tsx (NEW)
        │   ├── ParetoPlot.tsx (NEW)
        │   └── ParallelCoordinatesPlot.tsx (NEW)
        └── pages/
            └── Dashboard.tsx (updated with Protocol 13)

optimization_engine/
├── realtime_tracking.py (NEW - per-trial JSON writes)
└── intelligent_optimizer.py (updated with realtime callback)

studies/
└── {study_name}/
    └── 2_results/
        └── intelligent_optimizer/
            └── optimizer_state.json (written every trial)
```

## Configuration

### Backend Setup
```bash
cd atomizer-dashboard/backend
python -m uvicorn api.main:app --reload --port 8000
```

### Frontend Setup
```bash
cd atomizer-dashboard/frontend
npm run dev  # Runs on port 3001
```

### Study Requirements
- Must use Protocol 10 (IntelligentOptimizer)
- Must have `optimization_config.json` with objectives and design_variables
- Real-time tracking enabled by default in IntelligentOptimizer

## Usage

1. **Start Dashboard**:
   ```bash
   # Terminal 1: Backend
   cd atomizer-dashboard/backend
   python -m uvicorn api.main:app --reload --port 8000

   # Terminal 2: Frontend
   cd atomizer-dashboard/frontend
   npm run dev
   ```

2. **Start Optimization**:
   ```bash
   cd studies/my_study
   python run_optimization.py --trials 50
   ```

3. **View Dashboard**:
   - Open browser to `http://localhost:3001`
   - Select study from dropdown
   - Watch real-time updates every trial

4. **Interact with Plots**:
   - Toggle normalization on Pareto plot
   - Click lines in parallel coordinates to select trials
   - Hover for detailed trial information

## Performance

- **Backend**: ~10ms per endpoint (SQLite queries cached)
- **Frontend**: 2s polling interval (configurable)
- **Real-time writes**: <5ms per trial (JSON serialization)
- **Dashboard load time**: <500ms initial render

## December 2025 Enhancements

### Completed
- [x] **ConvergencePlot**: Enhanced with running best, statistics panel, gradient fill
- [x] **ParameterImportanceChart**: Pearson correlation analysis with color-coded bars
- [x] **StudyReportViewer**: Full markdown rendering with KaTeX math equation support
- [x] **Pruning endpoint**: Now queries Optuna SQLite directly instead of JSON file
- [x] **Report endpoint**: New `/studies/{id}/report` endpoint for STUDY_REPORT.md
- [x] **Chart data fix**: Proper `values` array transformation for single/multi-objective

### API Endpoint Additions (December 2025)

4. **GET `/api/optimization/studies/{study_id}/pruning`** (Enhanced)
   - Now queries the Optuna database directly for PRUNED trials
   - Returns params, timing, and pruning cause for each trial
   - Fallback to legacy JSON file if database unavailable

5. **GET `/api/optimization/studies/{study_id}/report`** (NEW)
   - Returns STUDY_REPORT.md content as JSON
   - Searches in 2_results/, 3_results/, and study root
   - Returns 404 if no report found

## Future Enhancements (P3)

- [ ] WebSocket support for instant updates (currently polling)
- [ ] Export Pareto front as CSV/JSON
- [ ] 3D Pareto plot for 3+ objectives
- [ ] Strategy performance comparison charts
- [ ] Historical phase duration analysis
- [ ] Mobile-responsive design

## Troubleshooting

### Dashboard shows "No Pareto front data yet"
- Study must have multiple objectives
- At least 2 trials must complete
- Check `/api/optimization/studies/{id}/pareto-front` endpoint

### OptimizerPanel shows "Not available"
- Study must use IntelligentOptimizer (Protocol 10)
- Check `2_results/intelligent_optimizer/optimizer_state.json` exists
- Verify realtime_callback is registered in the optimize() call

### Units not showing
- Add a `unit` field to objectives in `optimization_config.json`
- Or ensure the description contains a unit pattern: "(N/mm)", "Hz", etc.
- The backend will infer from common patterns

## Related Documentation

- [Protocol 10: Intelligent Optimizer](PROTOCOL_10_V2_IMPLEMENTATION.md)
- [Protocol 11: Multi-Objective Support](PROTOCOL_10_IMSO.md)
- [Protocol 12: Unified Extraction](HOW_TO_EXTEND_OPTIMIZATION.md)
- [Dashboard React Implementation](DASHBOARD_REACT_IMPLEMENTATION.md)

---

**Implementation Complete**: All P1 and P2 features delivered
**Ready for Production**: Yes
**Tested**: Yes (50-trial multi-objective study)

@@ -1,425 +0,0 @@

# Implementation Guide: Protocol 13 - Real-Time Tracking

**Date:** 2025-11-21
**Status:** 🚧 IN PROGRESS
**Priority:** P0 - CRITICAL

## What's Done ✅

1. **Created [`realtime_tracking.py`](../optimization_engine/realtime_tracking.py)**
   - `RealtimeTrackingCallback` class
   - Writes JSON files after EVERY trial (atomic writes)
   - Files: optimizer_state.json, strategy_history.json, trial_log.json, landscape_snapshot.json, confidence_history.json
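An "atomic write" here typically means writing to a temporary file in the same directory and renaming it over the target, so a dashboard reader polling the file never sees a half-written JSON. A sketch of the pattern (not the module's exact code):

```python
import json
import os
import tempfile

def atomic_write_json(path, payload):
    """Write JSON so concurrent readers never observe a partial file (sketch)."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(payload, f, indent=2)
        os.replace(tmp_path, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)  # clean up the temp file on failure
        raise
```

The temp file must live on the same filesystem as the target, which is why it is created in the target's own directory.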

2. **Fixed Multi-Objective Strategy (Protocol 12)**
   - Modified [`strategy_selector.py`](../optimization_engine/strategy_selector.py)
   - Added `_recommend_multiobjective_strategy()` method
   - Multi-objective: Random (8 trials) → TPE with multivariate

## What's Needed ⚠️

### Step 1: Integrate Callback into IntelligentOptimizer

**File:** [`optimization_engine/intelligent_optimizer.py`](../optimization_engine/intelligent_optimizer.py)

**Line 48 - Add import:**
```python
from optimization_engine.adaptive_characterization import CharacterizationStoppingCriterion
from optimization_engine.realtime_tracking import create_realtime_callback  # ADD THIS
```

**Line ~90 in `__init__()` - Create callback:**
```python
def __init__(self, study_name: str, study_dir: Path, config: Dict, verbose: bool = True):
    # ... existing init code ...

    # Create realtime tracking callback (Protocol 13)
    self.realtime_callback = create_realtime_callback(
        tracking_dir=self.tracking_dir,
        optimizer_ref=self,
        verbose=self.verbose
    )
```

**Find ALL `study.optimize()` calls and add the callback:**

Search for: `self.study.optimize(`

Replace pattern:
```python
# BEFORE:
self.study.optimize(objective_function, n_trials=check_interval)

# AFTER:
self.study.optimize(
    objective_function,
    n_trials=check_interval,
    callbacks=[self.realtime_callback]
)
```

**Locations to fix (approximate line numbers):**
- Line ~190: Characterization phase
- Line ~230: Optimization phase (multiple locations)
- Line ~260: Refinement phase
- Line ~380: Fallback optimization

**CRITICAL:** EVERY `study.optimize()` call must include `callbacks=[self.realtime_callback]`

### Step 2: Test Realtime Tracking

```bash
# Clear old results
cd studies/bracket_stiffness_optimization_V2
del /Q 2_results\study.db
rd /S /Q 2_results\intelligent_optimizer

# Run with new code
python -B run_optimization.py --trials 10

# Verify files appear IMMEDIATELY after each trial
dir 2_results\intelligent_optimizer
# Should see:
# - optimizer_state.json
# - strategy_history.json
# - trial_log.json
# - landscape_snapshot.json
# - confidence_history.json

# Check file updates in real-time
python -c "import json; print(json.load(open('2_results/intelligent_optimizer/trial_log.json'))[-1])"
```

---

## Dashboard Implementation Plan

### Backend API Endpoints (Python/FastAPI)

**File:** [`atomizer-dashboard/backend/api/routes/optimization.py`](../atomizer-dashboard/backend/api/routes/optimization.py)

**Add new endpoints:**

```python
@router.get("/studies/{study_id}/metadata")
async def get_study_metadata(study_id: str):
    """Read optimization_config.json for objectives, design vars, units."""
    study_dir = find_study_dir(study_id)
    config_file = study_dir / "optimization_config.json"

    with open(config_file) as f:
        config = json.load(f)

    return {
        "objectives": config["objectives"],
        "design_variables": config["design_variables"],
        "constraints": config.get("constraints", []),
        "study_name": config["study_name"]
    }

@router.get("/studies/{study_id}/optimizer-state")
async def get_optimizer_state(study_id: str):
    """Read realtime optimizer state from intelligent_optimizer/."""
    study_dir = find_study_dir(study_id)
    state_file = study_dir / "2_results/intelligent_optimizer/optimizer_state.json"

    if not state_file.exists():
        return {"available": False}

    with open(state_file) as f:
        state = json.load(f)

    return {"available": True, **state}

@router.get("/studies/{study_id}/pareto-front")
async def get_pareto_front(study_id: str):
    """Get Pareto-optimal solutions for multi-objective studies."""
    study_dir = find_study_dir(study_id)
    db_path = study_dir / "2_results/study.db"

    storage = optuna.storages.RDBStorage(f"sqlite:///{db_path}")
    study = optuna.load_study(study_name=study_id, storage=storage)

    if len(study.directions) == 1:
        return {"is_multi_objective": False}

    pareto_trials = study.best_trials

    return {
        "is_multi_objective": True,
        "pareto_front": [
            {
                "trial_number": t.number,
                "values": t.values,
                "params": t.params,
                "user_attrs": dict(t.user_attrs)
            }
            for t in pareto_trials
        ]
    }
```

### Frontend Components (React/TypeScript)

**1. Optimizer Panel Component**

**File:** `atomizer-dashboard/frontend/src/components/OptimizerPanel.tsx` (CREATE NEW)

```typescript
import { useEffect, useState } from 'react';
import { Card } from './Card';

interface OptimizerState {
  available: boolean;
  current_phase?: string;
  current_strategy?: string;
  trial_number?: number;
  total_trials?: number;
  latest_recommendation?: {
    strategy: string;
    confidence: number;
    reasoning: string;
  };
}

export function OptimizerPanel({ studyId }: { studyId: string }) {
  const [state, setState] = useState<OptimizerState | null>(null);

  useEffect(() => {
    const fetchState = async () => {
      const res = await fetch(`/api/optimization/studies/${studyId}/optimizer-state`);
      const data = await res.json();
      setState(data);
    };

    fetchState();
    const interval = setInterval(fetchState, 1000); // Update every second
    return () => clearInterval(interval);
  }, [studyId]);

  if (!state?.available) {
    return null;
  }

  return (
    <Card title="Intelligent Optimizer Status">
      <div className="space-y-4">
        {/* Phase */}
        <div>
          <div className="text-sm text-dark-300">Phase</div>
          <div className="text-lg font-semibold text-primary-400">
            {state.current_phase || 'Unknown'}
          </div>
        </div>

        {/* Strategy */}
        <div>
          <div className="text-sm text-dark-300">Current Strategy</div>
          <div className="text-lg font-semibold text-blue-400">
            {state.current_strategy?.toUpperCase() || 'Unknown'}
          </div>
        </div>

        {/* Progress */}
        <div>
          <div className="text-sm text-dark-300">Progress</div>
          <div className="text-lg">
            {state.trial_number} / {state.total_trials} trials
          </div>
          <div className="w-full bg-dark-500 rounded-full h-2 mt-2">
            <div
              className="bg-primary-400 h-2 rounded-full transition-all"
              style={{
                width: `${((state.trial_number || 0) / (state.total_trials || 1)) * 100}%`
              }}
            />
          </div>
        </div>

        {/* Confidence */}
        {state.latest_recommendation && (
          <div>
            <div className="text-sm text-dark-300">Confidence</div>
            <div className="flex items-center gap-2">
              <div className="flex-1 bg-dark-500 rounded-full h-2">
                <div
                  className="bg-green-400 h-2 rounded-full transition-all"
                  style={{
                    width: `${state.latest_recommendation.confidence * 100}%`
                  }}
                />
              </div>
              <span className="text-sm font-mono">
                {(state.latest_recommendation.confidence * 100).toFixed(0)}%
              </span>
            </div>
          </div>
        )}

        {/* Reasoning */}
        {state.latest_recommendation && (
          <div>
            <div className="text-sm text-dark-300">Reasoning</div>
            <div className="text-sm text-dark-100 mt-1">
              {state.latest_recommendation.reasoning}
            </div>
          </div>
        )}
      </div>
    </Card>
  );
}
```

**2. Pareto Front Plot**
|
||||
|
||||
**File:** `atomizer-dashboard/frontend/src/components/ParetoPlot.tsx` (CREATE NEW)
|
||||
|
||||
```typescript
import { ScatterChart, Scatter, XAxis, YAxis, CartesianGrid, Tooltip, Cell, ResponsiveContainer } from 'recharts';

interface ParetoData {
  trial_number: number;
  values: [number, number];
  params: Record<string, number>;
  constraint_satisfied?: boolean;
}

export function ParetoPlot({ paretoData, objectives }: {
  paretoData: ParetoData[];
  objectives: Array<{ name: string; unit?: string }>;
}) {
  if (paretoData.length === 0) {
    return (
      <div className="h-64 flex items-center justify-center text-dark-300">
        No Pareto front data yet
      </div>
    );
  }

  const data = paretoData.map(trial => ({
    x: trial.values[0],
    y: trial.values[1],
    trial_number: trial.trial_number,
    feasible: trial.constraint_satisfied !== false
  }));

  return (
    <ResponsiveContainer width="100%" height={400}>
      <ScatterChart>
        <CartesianGrid strokeDasharray="3 3" stroke="#334155" />
        <XAxis
          type="number"
          dataKey="x"
          name={objectives[0]?.name || 'Objective 1'}
          stroke="#94a3b8"
          label={{
            value: `${objectives[0]?.name || 'Objective 1'} ${objectives[0]?.unit || ''}`.trim(),
            position: 'insideBottom',
            offset: -5,
            fill: '#94a3b8'
          }}
        />
        <YAxis
          type="number"
          dataKey="y"
          name={objectives[1]?.name || 'Objective 2'}
          stroke="#94a3b8"
          label={{
            value: `${objectives[1]?.name || 'Objective 2'} ${objectives[1]?.unit || ''}`.trim(),
            angle: -90,
            position: 'insideLeft',
            fill: '#94a3b8'
          }}
        />
        <Tooltip
          contentStyle={{ backgroundColor: '#1e293b', border: 'none', borderRadius: '8px' }}
          labelStyle={{ color: '#e2e8f0' }}
        />
        <Scatter name="Pareto Front" data={data}>
          {data.map((entry, index) => (
            <Cell
              key={`cell-${index}`}
              fill={entry.feasible ? '#10b981' : '#ef4444'}
              r={entry.feasible ? 6 : 4}
            />
          ))}
        </Scatter>
      </ScatterChart>
    </ResponsiveContainer>
  );
}
```
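Before the backend exists, the component can be smoke-checked with hand-built props matching the `ParetoData` shape. The sample trial values and parameter names below are purely illustrative:

```typescript
// Hypothetical sample props for ParetoPlot; all numbers and parameter
// names here are made up for illustration only.
const sampleParetoData = [
  { trial_number: 3, values: [12.4, 0.82] as [number, number], params: { thickness: 2.5 }, constraint_satisfied: true },
  { trial_number: 7, values: [10.1, 0.95] as [number, number], params: { thickness: 3.0 }, constraint_satisfied: false },
];

const sampleObjectives = [
  { name: 'Mass', unit: '(kg)' },
  { name: 'Displacement', unit: '(mm)' },
];

// Rendered as: <ParetoPlot paretoData={sampleParetoData} objectives={sampleObjectives} />
```

With these props, the first point would render green (feasible) and the second red (constraint violated).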

**3. Update Dashboard.tsx**

**File:** [`atomizer-dashboard/frontend/src/pages/Dashboard.tsx`](../atomizer-dashboard/frontend/src/pages/Dashboard.tsx)

Add imports at top:
```typescript
import { OptimizerPanel } from '../components/OptimizerPanel';
import { ParetoPlot } from '../components/ParetoPlot';
```

Add new state (annotated so TypeScript doesn't narrow the initial values to `null`/`never[]`):

```typescript
const [studyMetadata, setStudyMetadata] = useState<{ objectives: Array<{ name: string; unit?: string }> } | null>(null);
const [paretoFront, setParetoFront] = useState<any[]>([]);
```

Fetch metadata when a study is selected:

```typescript
useEffect(() => {
  if (selectedStudyId) {
    fetch(`/api/optimization/studies/${selectedStudyId}/metadata`)
      .then(res => res.json())
      .then(setStudyMetadata);

    fetch(`/api/optimization/studies/${selectedStudyId}/pareto-front`)
      .then(res => res.json())
      .then(data => {
        if (data.is_multi_objective) {
          setParetoFront(data.pareto_front);
        }
      });
  }
}, [selectedStudyId]);
```
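The backend endpoints are not implemented yet, so the response shapes below are a sketch inferred from how the frontend consumes the data (`objectives`, `is_multi_objective`, `pareto_front`), not a confirmed contract:

```typescript
// Assumed response shapes for the metadata and pareto-front endpoints.
// These are inferred from the frontend code above — a sketch, not a spec.
interface MetadataResponse {
  objectives: Array<{ name: string; unit?: string }>;
}

interface ParetoFrontResponse {
  is_multi_objective: boolean;
  pareto_front: Array<{
    trial_number: number;
    values: [number, number];
    params: Record<string, number>;
    constraint_satisfied?: boolean;
  }>;
}

// Illustrative payload matching ParetoFrontResponse (values are made up):
const examplePareto: ParetoFrontResponse = {
  is_multi_objective: true,
  pareto_front: [
    { trial_number: 5, values: [11.2, 0.91], params: { rib_height: 4.0 }, constraint_satisfied: true },
  ],
};
```

Pinning these shapes down early keeps the backend work (Next Steps, item 2) aligned with what the dashboard already expects.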

Add components to layout:

```typescript
{/* Add after metrics grid */}
<div className="grid grid-cols-2 gap-6 mb-6">
  <OptimizerPanel studyId={selectedStudyId} />
  {paretoFront.length > 0 && (
    <Card title="Pareto Front">
      <ParetoPlot
        paretoData={paretoFront}
        objectives={studyMetadata?.objectives || []}
      />
    </Card>
  )}
</div>
```

---

## Testing Checklist

- [ ] Realtime callback writes files after EVERY trial
- [ ] optimizer_state.json updates in real-time
- [ ] Dashboard shows optimizer panel with live updates
- [ ] Pareto front appears for multi-objective studies
- [ ] Units are dynamic (read from config)
- [ ] Multi-objective strategy switches from random → TPE after 8 trials
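The last checklist item boils down to a threshold rule that can be sketched as a tiny pure function. The names here are hypothetical — the real switch lives in the optimizer, not the dashboard:

```typescript
// Sketch of the intended sampler-switch rule: random sampling seeds the
// study for the first 8 trials, then TPE takes over. Function and
// parameter names are illustrative, not the actual implementation.
type Sampler = 'random' | 'tpe';

function selectSampler(completedTrials: number, nStartupTrials: number = 8): Sampler {
  return completedTrials < nStartupTrials ? 'random' : 'tpe';
}
```

For example, `selectSampler(7)` returns `'random'` while `selectSampler(8)` returns `'tpe'`, which is the boundary the checklist item verifies.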

---

## Next Steps

1. Integrate callback into IntelligentOptimizer (steps above)
2. Implement backend API endpoints
3. Create frontend components
4. Test end-to-end with bracket study
5. Document as Protocol 13

---

**File:** `docs/protocols/README.md` (adds SYS_15)

```diff
@@ -64,6 +64,7 @@ Core technical specifications:
 - **SYS_12**: Extractor Library
 - **SYS_13**: Real-Time Dashboard Tracking
 - **SYS_14**: Neural Network Acceleration
+- **SYS_15**: Method Selector

 ### Layer 4: Extensions (`extensions/`)
 Guides for extending Atomizer:
@@ -140,6 +141,7 @@ LOAD_WITH: [{dependencies}]
 | 12 | Extractors | [System](system/SYS_12_EXTRACTOR_LIBRARY.md) |
 | 13 | Dashboard | [System](system/SYS_13_DASHBOARD_TRACKING.md) |
 | 14 | Neural | [System](system/SYS_14_NEURAL_ACCELERATION.md) |
+| 15 | Method Selector | [System](system/SYS_15_METHOD_SELECTOR.md) |
```

---