feat: Major update with validators, skills, dashboard, and docs reorganization
- Add validation framework (config, model, results, study validators)
- Add Claude Code skills (create-study, run-optimization, generate-report, troubleshoot, analyze-model)
- Add Atomizer Dashboard (React frontend + FastAPI backend)
- Reorganize docs into structured directories (00-09)
- Add neural surrogate modules and training infrastructure
- Add multi-objective optimization support

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
docs/08_ARCHIVE/historical/CRITICAL_ISSUES_ROADMAP.md (new file, 236 lines)
# CRITICAL ISSUES - IMMEDIATE ACTION REQUIRED

**Date:** 2025-11-21
**Status:** 🚨 BLOCKING PRODUCTION USE

## Issue 1: Real-Time Tracking Files - **MANDATORY EVERY ITERATION**

### Current State ❌
- Intelligent optimizer only writes tracking files at the END of optimization
- Dashboard cannot show real-time progress
- No visibility into optimizer state during execution

### Required Behavior ✅
```
AFTER EVERY SINGLE TRIAL:
1. Write optimizer_state.json (current strategy, confidence, phase)
2. Write strategy_history.json (append new recommendation)
3. Write landscape_snapshot.json (current analysis, if available)
4. Write trial_log.json (append trial result with timestamp)
```

### Implementation Plan
1. Create a `RealtimeCallback` class that triggers after each trial
2. Hook it into `study.optimize(..., callbacks=[realtime_callback])`
3. Write incremental JSON files to the `intelligent_optimizer/` folder
4. Use atomic writes (write to a temp file, then rename)

### Files to Modify
- `optimization_engine/intelligent_optimizer.py` - add the callback system
- New file: `optimization_engine/realtime_tracking.py` - callback implementation

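The plan above can be sketched as follows. This is a minimal sketch assuming Optuna's `callback(study, frozen_trial)` signature; only two of the tracking files are shown, and the JSON field names are assumptions, not the project's actual schema:

```python
import json
import os
import tempfile
from datetime import datetime, timezone

TRACK_DIR = "intelligent_optimizer"

def _atomic_write_json(path, payload):
    """Write JSON to a temp file in the target directory, then rename.
    os.replace is atomic on both POSIX and Windows."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".", suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(payload, f, indent=2)
        os.replace(tmp, path)
    finally:
        if os.path.exists(tmp):  # only left behind if the write failed
            os.remove(tmp)

def realtime_callback(study, trial):
    """Optuna-style callback: runs after EVERY trial via
    study.optimize(objective, callbacks=[realtime_callback])."""
    os.makedirs(TRACK_DIR, exist_ok=True)
    now = datetime.now(timezone.utc).isoformat()
    _atomic_write_json(os.path.join(TRACK_DIR, "optimizer_state.json"),
                       {"trial_number": trial.number, "updated_at": now})
    # Append-only trial log: load, append, rewrite atomically.
    log_path = os.path.join(TRACK_DIR, "trial_log.json")
    log = []
    if os.path.exists(log_path):
        with open(log_path) as f:
            log = json.load(f)
    log.append({"trial": trial.number, "values": trial.values,
                "params": trial.params, "timestamp": now})
    _atomic_write_json(log_path, log)
```

The temp-file-plus-rename pattern guarantees the dashboard never reads a half-written JSON file, even if a trial completes mid-read.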
---

## Issue 2: Dashboard - Complete Overhaul Required

### Current Problems ❌
1. **No Pareto front plot** for multi-objective studies
2. **No parallel coordinates** for high-dimensional visualization
3. **Units hardcoded/wrong** - should be read from optimization_config.json
4. **Convergence plot reported backwards** - the X-axis should be (and already is) trial number; the user report needs investigation
5. **No objective normalization** - raw values make comparison difficult
6. **Missing intelligent optimizer panel** - no real-time strategy display
7. **Poor UX** - not professional looking

### Required Features ✅

#### A. Intelligent Optimizer Panel (NEW)
```
<OptimizerPanel>
- Current Phase: "Characterization" | "Optimization" | "Refinement"
- Current Strategy: "TPE" | "CMA-ES" | "Random" | "GP-BO"
- Confidence: 0.95 (progress bar)
- Trials in Phase: 15/30
- Strategy Transitions: timeline view
- Landscape Type: "Smooth Unimodal" | "Rugged Multi-modal" | etc.
</OptimizerPanel>
```


#### B. Pareto Front Plot (Multi-Objective)
```
<ParetoPlot objectives={study.objectives}>
- 2D scatter: objective1 vs objective2
- Color by constraint satisfaction
- Interactive: click to see design variables
- Dominance regions shaded
</ParetoPlot>
```
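The non-dominated set behind such a plot can be computed server-side. A minimal sketch, assuming every objective is minimized (maximized objectives would be negated first):

```python
def pareto_front(points):
    """Indices of non-dominated points; all objectives assumed minimized.
    q dominates p if q is no worse in every objective and strictly better
    in at least one. O(n^2), which is fine for typical trial counts."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

The endpoint would return these indices so `<ParetoPlot>` can color front members differently from dominated trials.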

#### C. Parallel Coordinates (Multi-Objective)
```
<ParallelCoordinates>
- One axis per design variable + objectives
- Lines colored by Pareto front membership
- Interactive brushing to filter solutions
</ParallelCoordinates>
```

#### D. Dynamic Units & Metadata
```typescript
// Read from optimization_config.json
interface StudyMetadata {
  objectives: Array<{name: string, type: 'minimize' | 'maximize', unit?: string}>
  design_variables: Array<{name: string, unit?: string, min: number, max: number}>
  constraints: Array<{name: string, type: string, value: number}>
}
```

#### E. Normalized Objectives
```
// Option 1: Min-Max normalization (0-1 scale)
normalized = (value - min) / (max - min)

// Option 2: Z-score normalization
normalized = (value - mean) / stddev
```
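Both options as plain functions, with a guard for degenerate data (constant objective columns) that the formulas above omit; a sketch, not the dashboard's actual implementation:

```python
def min_max(values):
    """Scale to [0, 1]; constant columns map to 0.0 to avoid divide-by-zero."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def z_score(values):
    """Standardize to mean 0, population stddev 1."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    if std == 0:
        return [0.0] * n
    return [(v - mean) / std for v in values]
```

Min-max keeps all objectives on a shared 0-1 axis (good for parallel coordinates); z-score is more robust when a few trials produce extreme outliers.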

### Implementation Plan
1. **Backend:** Add `/api/studies/{id}/metadata` endpoint (reads the config)
2. **Backend:** Add `/api/studies/{id}/optimizer-state` endpoint (reads the real-time JSON)
3. **Frontend:** Create `<OptimizerPanel>` component
4. **Frontend:** Create `<ParetoPlot>` component (use Recharts)
5. **Frontend:** Create `<ParallelCoordinates>` component (use D3.js or Plotly)
6. **Frontend:** Refactor `Dashboard.tsx` with the new layout

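Both backend endpoints reduce to reading JSON off disk. A minimal sketch of the helpers a FastAPI route would wrap; the `studies/<id>/...` layout and file locations are inferred from this document and may differ in the real repo:

```python
import json
from pathlib import Path

STUDIES_ROOT = Path("studies")  # assumed root folder for all studies

def read_study_metadata(study_id, root=STUDIES_ROOT):
    """Backs GET /api/studies/{id}/metadata: objectives, variables, units."""
    return json.loads((root / study_id / "optimization_config.json").read_text())

def read_optimizer_state(study_id, root=STUDIES_ROOT):
    """Backs GET /api/studies/{id}/optimizer-state: latest per-trial state
    written by the realtime callback."""
    path = (root / study_id / "2_results" / "intelligent_optimizer"
            / "optimizer_state.json")
    return json.loads(path.read_text())
```

Keeping the endpoints as thin file readers means the optimizer process and the dashboard backend never share state except through the atomic JSON files.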

---

## Issue 3: Multi-Objective Strategy Selection (FIXED ✅)

**Status:** Completed - Protocol 12 implemented
- Multi-objective now uses: Random (8 trials) → TPE with multivariate sampling
- No longer stuck on random sampling for the entire optimization

---

## Issue 4: Missing Tracking Files in V2 Study

### Root Cause
The V2 study ran with OLD code (before Protocol 12), so all 30 trials used the random strategy.

### Solution
Re-run the V2 study with the fixed optimizer:

```bat
cd studies\bracket_stiffness_optimization_V2
REM Clear old results
del /Q 2_results\study.db
rd /S /Q 2_results\intelligent_optimizer
REM Run with new code
python run_optimization.py --trials 50
```


---

## Priority Order

### P0 - CRITICAL (Do Immediately)
1. ✅ Fix multi-objective strategy selector (DONE - Protocol 12)
2. 🚧 Implement per-trial tracking callback
3. 🚧 Add intelligent optimizer panel to dashboard
4. 🚧 Add Pareto front plot

### P1 - HIGH (Do Today)
5. Add parallel coordinates plot
6. Implement dynamic units (read from config)
7. Add objective normalization toggle

### P2 - MEDIUM (Do This Week)
8. Improve dashboard UX/layout
9. Add hypervolume indicator for multi-objective
10. Create optimization report generator

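Item 9's hypervolume indicator is simple in the two-objective case. A sketch for minimize/minimize with a fixed reference point; for three or more objectives a library implementation (e.g. pymoo's) would be used instead:

```python
def hypervolume_2d(front, ref):
    """Hypervolume for two minimized objectives w.r.t. reference point `ref`:
    the area dominated by the front inside the box below `ref`. Points are
    assumed to lie within that box; dominated or unsorted points are fine
    and simply contribute nothing."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front):       # ascending in objective 1
        if y < prev_y:               # strict improvement in objective 2
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv
```

Plotted per trial, this gives a single monotone "convergence" curve for multi-objective runs, which the current convergence plot cannot provide.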
---

## Testing Protocol

After implementing each fix:

1. **Per-Trial Tracking Test**
```bash
# Run an optimization and check that files appear immediately
python run_optimization.py --trials 10
# Verify: intelligent_optimizer/*.json files update after EVERY trial
```

2. **Dashboard Test**
```bash
# Start backend + frontend, then navigate to http://localhost:3001
# Verify: all panels update in real time
# Verify: the Pareto front appears for multi-objective studies
# Verify: units match optimization_config.json
```

3. **Multi-Objective Test**
```bash
# Re-run bracket_stiffness_optimization_V2
# Verify: strategy switches from random → TPE after 8 trials
# Verify: tracking files are generated every trial
# Verify: the Pareto front has 10+ solutions
```

---

## Code Architecture

### Realtime Tracking System
```
intelligent_optimizer/
├── optimizer_state.json        # Updated every trial
├── strategy_history.json       # Append-only log
├── landscape_snapshot.json     # Updated when the landscape is analyzed
├── trial_log.json              # Append-only, with timestamps
├── confidence_history.json     # Confidence over time
└── strategy_transitions.json   # When/why the strategy changed
```


### Dashboard Data Flow
```
Trial Complete
      ↓
Optuna Callback
      ↓
Write JSON Files (atomic)
      ↓
Backend API detects file change
      ↓
WebSocket broadcast to frontend
      ↓
Dashboard components update
```
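The "Backend API detects file change" step can be prototyped with plain mtime polling. A stdlib-only sketch; a production version might use watchdog or inotify instead, and `on_change` would push over the WebSocket rather than just record the path:

```python
import os
import time

def watch_files(paths, on_change, poll_interval=0.5, max_polls=None):
    """Poll file mtimes; call on_change(path) whenever one advances.
    max_polls=None loops forever (as a background thread would)."""
    mtimes = {p: None for p in paths}
    polls = 0
    while max_polls is None or polls < max_polls:
        for p in paths:
            try:
                m = os.stat(p).st_mtime_ns
            except FileNotFoundError:
                continue  # file not written yet by the optimizer
            if mtimes[p] is not None and m != mtimes[p]:
                on_change(p)  # here: broadcast to WebSocket clients
            mtimes[p] = m
        time.sleep(poll_interval)
        polls += 1
```

Because the callback writes via atomic rename, a poll can never observe a partially written file, so the watcher needs no locking.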

---

## Estimated Effort

- **Per-Trial Tracking:** 2-3 hours
- **Dashboard Overhaul:** 6-8 hours
  - Optimizer Panel: 1 hour
  - Pareto Plot: 2 hours
  - Parallel Coordinates: 2 hours
  - Dynamic Units: 1 hour
  - Layout/UX: 2 hours

**Total:** 8-11 hours for a production-ready system


---

## Success Criteria

✅ **After implementation:**
1. User can see the optimizer strategy change in real time
2. The intelligent_optimizer folder updates EVERY trial (not batched)
3. Dashboard shows the Pareto front for multi-objective studies
4. Dashboard units are dynamic (read from config)
5. Dashboard is professional quality (comparable to Optuna Dashboard or Weights & Biases)
6. No hardcoded assumptions (Hz, single-objective, etc.)