- Restructure docs/ folder (remove numeric prefixes):
  - 04_USER_GUIDES -> guides/
  - 05_API_REFERENCE -> api/
  - 06_PHYSICS -> physics/
  - 07_DEVELOPMENT -> development/
  - 08_ARCHIVE -> archive/
  - 09_DIAGRAMS -> diagrams/
- Replace tagline 'Talk, don't click' with 'LLM-driven optimization framework' in 9 files
- Create comprehensive docs/GETTING_STARTED.md:
  - Prerequisites and quick setup
  - Project structure overview
  - First study tutorial (Claude or manual)
  - Dashboard usage guide
  - Neural acceleration introduction
- Rewrite docs/00_INDEX.md with correct paths and modern structure
- Archive obsolete files:
  - 01_PROTOCOLS.md -> archive/historical/01_PROTOCOLS_legacy.md
  - 03_GETTING_STARTED.md -> archive/historical/
  - ATOMIZER_PODCAST_BRIEFING.md -> archive/marketing/
- Update timestamps to 2026-01-20 across all key files
- Update .gitignore to exclude docs/generated/
- Version bump: ATOMIZER_CONTEXT v1.8 -> v2.0
CRITICAL ISSUES - IMMEDIATE ACTION REQUIRED
Date: 2025-11-21
Status: 🚨 BLOCKING PRODUCTION USE
Issue 1: Real-Time Tracking Files - MANDATORY EVERY ITERATION
Current State ❌
- Intelligent optimizer only writes tracking files at END of optimization
- Dashboard cannot show real-time progress
- No visibility into optimizer state during execution
Required Behavior ✅
AFTER EVERY SINGLE TRIAL:
1. Write optimizer_state.json (current strategy, confidence, phase)
2. Write strategy_history.json (append new recommendation)
3. Write landscape_snapshot.json (current analysis if available)
4. Write trial_log.json (append trial result with timestamp)
Implementation Plan
- Create `RealtimeCallback` class that triggers after each trial
- Hook into `study.optimize(..., callbacks=[realtime_callback])`
- Write incremental JSON files to the `intelligent_optimizer/` folder
- Writes must be atomic (temp file + rename)
Files to Modify
- `optimization_engine/intelligent_optimizer.py` - add callback system
- New file: `optimization_engine/realtime_tracking.py` - callback implementation
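The plan above can be sketched as a small callback plus an atomic-write helper. This is a minimal sketch, assuming Optuna's `callbacks=[...]` hook (each callback is called as `cb(study, trial)` after every trial); the file names follow the list in Issue 1, and the payload fields are illustrative only:

```python
import json
import os
import tempfile
import time


def atomic_write_json(path: str, data) -> None:
    """Write JSON atomically: dump to a temp file in the same directory,
    then rename over the target (os.replace is atomic on POSIX and Windows)."""
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f, indent=2)
        os.replace(tmp, path)
    except BaseException:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise


class RealtimeCallback:
    """Per-trial tracking: write state and append to the trial log
    after EVERY trial, not at the end of the optimization."""

    def __init__(self, out_dir: str):
        self.out_dir = out_dir
        os.makedirs(out_dir, exist_ok=True)

    def __call__(self, study, trial) -> None:
        # Snapshot of current optimizer state (fields here are illustrative).
        atomic_write_json(
            os.path.join(self.out_dir, "optimizer_state.json"),
            {"n_trials": len(study.trials), "updated_at": time.time()},
        )
        # Append-only trial log with timestamps.
        log_path = os.path.join(self.out_dir, "trial_log.json")
        log = []
        if os.path.exists(log_path):
            with open(log_path) as f:
                log = json.load(f)
        log.append({"number": trial.number, "values": trial.values,
                    "timestamp": time.time()})
        atomic_write_json(log_path, log)
```

Usage would be `study.optimize(objective, n_trials=50, callbacks=[RealtimeCallback("intelligent_optimizer")])`.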
Issue 2: Dashboard - Complete Overhaul Required
Current Problems ❌
- No Pareto front plot for multi-objective
- No parallel coordinates for high-dimensional visualization
- Units hardcoded/wrong - should read from optimization_config.json
- Convergence plot reported as backwards - the X-axis should be (and already is) trial number; investigate the user-reported issue
- No objective normalization - raw values make comparison difficult
- Missing intelligent optimizer panel - no real-time strategy display
- Poor UX - not professional looking
Required Features ✅
A. Intelligent Optimizer Panel (NEW)
<OptimizerPanel>
- Current Phase: "Characterization" | "Optimization" | "Refinement"
- Current Strategy: "TPE" | "CMA-ES" | "Random" | "GP-BO"
- Confidence: 0.95 (progress bar)
- Trials in Phase: 15/30
- Strategy Transitions: Timeline view
- Landscape Type: "Smooth Unimodal" | "Rugged Multi-modal" | etc.
</OptimizerPanel>
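The panel above implies a schema for `optimizer_state.json`. One possible shape is sketched below; every field name is an assumption derived from the panel spec, not an existing format:

```json
{
  "phase": "Optimization",
  "strategy": "TPE",
  "confidence": 0.95,
  "trials_in_phase": 15,
  "phase_trial_budget": 30,
  "landscape_type": "Smooth Unimodal"
}
```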
B. Pareto Front Plot (Multi-Objective)
<ParetoPlot objectives={study.objectives}>
- 2D scatter: objective1 vs objective2
- Color by constraint satisfaction
- Interactive: click to see design variables
- Dominance regions shaded
</ParetoPlot>
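Pareto-front membership (needed for the coloring and shaded dominance regions) comes from a plain non-dominated filter. A sketch, assuming all objectives are expressed as minimization (maximization objectives would be negated first):

```python
from typing import List, Sequence


def pareto_front(points: List[Sequence[float]]) -> List[int]:
    """Return indices of non-dominated points (all objectives minimized).

    Point q dominates point p if q is no worse in every objective
    and strictly better in at least one.
    """
    front = []
    for i, p in enumerate(points):
        dominated = False
        for j, q in enumerate(points):
            if j == i:
                continue
            if all(qk <= pk for qk, pk in zip(q, p)) and \
               any(qk < pk for qk, pk in zip(q, p)):
                dominated = True
                break
        if not dominated:
            front.append(i)
    return front
```

The O(n²) scan is fine for typical study sizes (tens to hundreds of trials).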
C. Parallel Coordinates (Multi-Objective)
<ParallelCoordinates>
- One axis per design variable + objectives
- Lines colored by Pareto front membership
- Interactive brushing to filter solutions
</ParallelCoordinates>
D. Dynamic Units & Metadata
// Read from optimization_config.json
interface StudyMetadata {
objectives: Array<{name: string, type: 'minimize'|'maximize', unit?: string}>
design_variables: Array<{name: string, unit?: string, min: number, max: number}>
constraints: Array<{name: string, type: string, value: number}>
}
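On the backend side, the same metadata can be loaded from `optimization_config.json` into typed objects. A sketch mirroring the interface above; the exact key names in the real config file are assumptions:

```python
import json
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Objective:
    name: str
    type: str            # 'minimize' | 'maximize'
    unit: Optional[str] = None


def load_objectives(config_path: str) -> List[Objective]:
    """Read objective metadata (names, directions, units) from the study config."""
    with open(config_path) as f:
        cfg = json.load(f)
    return [Objective(**obj) for obj in cfg.get("objectives", [])]
```

The dashboard can then label axes with `f"{obj.name} [{obj.unit}]"` instead of hardcoding units.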
E. Normalized Objectives
// Option 1: Min-Max normalization (0-1 scale)
normalized = (value - min) / (max - min)
// Option 2: Z-score normalization
normalized = (value - mean) / stddev
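Both normalization options translate directly to code. A minimal sketch (note the degenerate case where all values are equal, and that `statistics.stdev` is the sample standard deviation):

```python
from statistics import mean, stdev
from typing import List


def min_max(values: List[float]) -> List[float]:
    """Option 1: scale values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # degenerate: all values identical
    return [(v - lo) / (hi - lo) for v in values]


def z_score(values: List[float]) -> List[float]:
    """Option 2: center on the mean, scale by sample standard deviation."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]
```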
Implementation Plan
- Backend: Add `/api/studies/{id}/metadata` endpoint (read config)
- Backend: Add `/api/studies/{id}/optimizer-state` endpoint (read real-time JSON)
- Frontend: Create `<OptimizerPanel>` component
- Frontend: Create `<ParetoPlot>` component (use Recharts)
- Frontend: Create `<ParallelCoordinates>` component (use D3.js or Plotly)
- Frontend: Refactor `Dashboard.tsx` with new layout
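The optimizer-state endpoint largely reduces to locating and reading a per-study JSON file. A framework-agnostic sketch of that core; the `studies/<id>/2_results/intelligent_optimizer/` layout is an assumption based on the re-run commands in Issue 4:

```python
import json
from pathlib import Path
from typing import Optional

# Assumed layout: studies/<id>/2_results/intelligent_optimizer/optimizer_state.json
STUDIES_ROOT = Path("studies")


def read_optimizer_state(study_id: str,
                         root: Path = STUDIES_ROOT) -> Optional[dict]:
    """Return the latest optimizer state for a study, or None if the
    tracking file has not been written yet (the API layer maps None to 404)."""
    path = (root / study_id / "2_results" /
            "intelligent_optimizer" / "optimizer_state.json")
    if not path.exists():
        return None
    return json.loads(path.read_text())
```

Because the tracking files are written atomically, the endpoint never observes a half-written JSON document.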
Issue 3: Multi-Objective Strategy Selection (FIXED ✅)
Status: Completed - Protocol 12 implemented
- Multi-objective now uses: Random (8 trials) → TPE with multivariate
- No longer stuck on random for entire optimization
Issue 4: Missing Tracking Files in V2 Study
Root Cause
V2 study ran with OLD code (before Protocol 12). All 30 trials used random strategy.
Solution
Re-run V2 study with fixed optimizer:
```
cd studies/bracket_stiffness_optimization_V2
REM Clear old results
del /Q 2_results\study.db
rd /S /Q 2_results\intelligent_optimizer
REM Run with new code
python run_optimization.py --trials 50
```
Priority Order
P0 - CRITICAL (Do Immediately)
- ✅ Fix multi-objective strategy selector (DONE - Protocol 12)
- 🚧 Implement per-trial tracking callback
- 🚧 Add intelligent optimizer panel to dashboard
- 🚧 Add Pareto front plot
P1 - HIGH (Do Today)
- Add parallel coordinates plot
- Implement dynamic units (read from config)
- Add objective normalization toggle
P2 - MEDIUM (Do This Week)
- Improve dashboard UX/layout
- Add hypervolume indicator for multi-objective
- Create optimization report generator
Testing Protocol
After implementing each fix:
1. Per-Trial Tracking Test

```
# Run optimization and check files appear immediately
python run_optimization.py --trials 10
# Verify: intelligent_optimizer/*.json files update EVERY trial
```

2. Dashboard Test

```
# Start backend + frontend
# Navigate to http://localhost:3001
# Verify: All panels update in real-time
# Verify: Pareto front appears for multi-objective
# Verify: Units match optimization_config.json
```

3. Multi-Objective Test

```
# Re-run bracket_stiffness_optimization_V2
# Verify: Strategy switches from random → TPE after 8 trials
# Verify: Tracking files generated every trial
# Verify: Pareto front has 10+ solutions
```
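The per-trial tracking check can be automated with a tiny helper that compares the append-only log against the expected trial count. The `number` field is an assumption about the log schema:

```python
from typing import List


def tracking_is_complete(trial_log: List[dict], n_trials: int) -> bool:
    """True if the append-only trial log covers every trial number
    0..n_trials-1, i.e. tracking fired after EVERY trial."""
    seen = {entry["number"] for entry in trial_log}
    return seen == set(range(n_trials))
```

After a 10-trial run, load `intelligent_optimizer/trial_log.json` and assert `tracking_is_complete(log, 10)`.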
Code Architecture
Realtime Tracking System
intelligent_optimizer/
├── optimizer_state.json # Updated every trial
├── strategy_history.json # Append-only log
├── landscape_snapshots.json # Updated when landscape analyzed
├── trial_log.json # Append-only with timestamps
├── confidence_history.json # Confidence over time
└── strategy_transitions.json # When/why strategy changed
Dashboard Data Flow
Trial Complete
↓
Optuna Callback
↓
Write JSON Files (atomic)
↓
Backend API detects file change
↓
WebSocket broadcast to frontend
↓
Dashboard components update
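The "Backend API detects file change" step can be as simple as an mtime poll run on an interval (a file-watcher library such as watchdog would also work). A minimal sketch of one polling pass; the `on_change` callable stands in for the WebSocket broadcast:

```python
import os
from typing import Callable, Dict, Iterable


def poll_changes(paths: Iterable[str],
                 mtimes: Dict[str, float],
                 on_change: Callable[[str], None]) -> None:
    """One polling pass: compare stored mtimes with the current ones
    and fire on_change(path) for every file that was (re)written."""
    for path in paths:
        try:
            current = os.path.getmtime(path)
        except OSError:
            continue  # file not written yet; skip until it appears
        if mtimes.get(path) != current:
            mtimes[path] = current
            on_change(path)  # e.g. broadcast the fresh JSON over WebSocket
```

The backend would call this in a loop (or from a timer) over the six tracking files, keeping one shared `mtimes` dict between passes.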
Estimated Effort
- Per-Trial Tracking: 2-3 hours
- Dashboard Overhaul: 6-8 hours
  - Optimizer Panel: 1 hour
  - Pareto Plot: 2 hours
  - Parallel Coordinates: 2 hours
  - Dynamic Units: 1 hour
  - Layout/UX: 2 hours
Total: 8-11 hours for production-ready system
Success Criteria
✅ After implementation:
- User can see optimizer strategy change in real-time
- Intelligent optimizer folder updates EVERY trial (not batched)
- Dashboard shows Pareto front for multi-objective studies
- Dashboard units are dynamic (read from config)
- Dashboard is professional quality (like Optuna Dashboard or Weights & Biases)
- No hardcoded assumptions (Hz, single-objective, etc.)