# Atomizer Dashboard Improvement Plan - Complete Implementation Guide

**Created:** 2026-01-06
**Status:** Planning - Awaiting Approval
**Estimated Total Effort:** 35-45 hours
**Priority:** High

---

## Table of Contents

1. [Executive Summary](#1-executive-summary)
2. [Phase 0: Quick Wins (P0)](#2-phase-0-quick-wins-p0)
3. [Phase 1: Control Panel Overhaul (P1)](#3-phase-1-control-panel-overhaul-p1)
4. [Phase 2: Visualization Improvements (P2)](#4-phase-2-visualization-improvements-p2)
5. [Phase 3: Analysis & Reporting (P3)](#5-phase-3-analysis--reporting-p3)
6. [Testing & Validation](#6-testing--validation)
7. [Appendix: Technical Specifications](#7-appendix-technical-specifications)

---

## 1. Executive Summary

### Problems Identified

| # | Issue | Severity | User Impact |
|---|-------|----------|-------------|
| 1 | Optuna dashboard button broken | Critical | Cannot access Optuna visualization |
| 2 | ETA not displaying | High | No idea when optimization will finish |
| 3 | No running/pause status indicator | High | Unclear if optimization is active |
| 4 | No pause/stop controls | High | Cannot control running optimization |
| 5 | Parallel coords ugly (white, confusing FEA/NN) | Medium | Poor visualization experience |
| 6 | Convergence plot hides improvements | Medium | Can't see optimization progress clearly |
| 7 | Optimizer state static/outdated | Medium | Stale information about current phase |
| 8 | Analysis tab components broken | Medium | Multiple features non-functional |
| 9 | Report generation fails | Medium | Cannot generate study reports |

### Implementation Phases

```
Phase 0 (P0) - Quick Wins       [~2 hours]   ████░░░░░░
Phase 1 (P1) - Controls         [~12 hours]  ████████░░
Phase 2 (P2) - Visualizations   [~15 hours]  ██████████
Phase 3 (P3) - Analysis/Reports [~10 hours]  ██████░░░░
```

### Success Criteria

- [ ] All buttons functional (Optuna, Start, Stop, Pause, Resume)
- [ ] ETA displays accurately with rate calculation
- [ ] Clear visual status indicator
(Running/Paused/Stopped/Completed)
- [ ] Clean dark-themed parallel coordinates (no Plotly white)
- [ ] Log-scale convergence plot option
- [ ] Dynamic optimizer state from running script
- [ ] All Analysis tabs working
- [ ] Report generation produces valid markdown

---

## 2. Phase 0: Quick Wins (P0)

**Estimated Time:** 2 hours
**Dependencies:** None
**Risk:** Low

### 2.1 Fix Optuna Dashboard Button

#### Root Cause Analysis

```
Error: {"detail":"Failed to launch Optuna dashboard: [WinError 2] The system cannot find the file specified"}
```

The `optuna-dashboard` CLI package is not installed in the atomizer conda environment.

#### Solution A: Install Package (Recommended)

**Step 1: Install optuna-dashboard**

```bash
# Open terminal and run:
conda activate atomizer
pip install optuna-dashboard

# Verify installation
optuna-dashboard --help
```

**Step 2: Verify the backend can find it**

```bash
# Test the command directly
optuna-dashboard sqlite:///C:/Users/antoi/Atomizer/studies/M1_Mirror/m1_mirror_cost_reduction_V13/3_results/study.db --port 8081
```

**Step 3: Test via API**

```bash
curl -X POST http://127.0.0.1:8003/api/optimization/studies/m1_mirror_cost_reduction_V13/optuna-dashboard
```

#### Solution B: Add Graceful Fallback (Additional)

**File:** `atomizer-dashboard/frontend/src/components/dashboard/ControlPanel.tsx`

Add an availability check and a helpful error message:

```tsx
// Add state for optuna availability
const [optunaAvailable, setOptunaAvailable] = useState<boolean | null>(null);

// Check on mount
useEffect(() => {
  const checkOptuna = async () => {
    try {
      const response = await fetch('/api/optimization/optuna-status');
      const data = await response.json();
      setOptunaAvailable(data.available);
    } catch {
      setOptunaAvailable(false);
    }
  };
  checkOptuna();
}, []);

// In render, show install hint if not available
{optunaAvailable === false && (
  <div className="mt-1 text-xs text-yellow-400">Install: <code>pip install optuna-dashboard</code></div>
)} ``` **File:** `atomizer-dashboard/backend/api/routes/optimization.py` Add status check endpoint: ```python @router.get("/optuna-status") async def check_optuna_status(): """Check if optuna-dashboard CLI is available""" import shutil optuna_path = shutil.which("optuna-dashboard") return { "available": optuna_path is not None, "path": optuna_path, "install_command": "pip install optuna-dashboard" } ``` #### Testing Checklist - [ ] `optuna-dashboard --help` works in terminal - [ ] Button click launches dashboard - [ ] Dashboard opens in new browser tab - [ ] Dashboard shows study data correctly --- ### 2.2 Fix ETA Calculation and Display #### Root Cause Analysis The backend `/api/optimization/studies/{id}/process` endpoint doesn't return timing information needed for ETA calculation. #### Solution: Add Timing Tracking **Step 1: Modify Backend Process Status** **File:** `atomizer-dashboard/backend/api/routes/optimization.py` Find the `get_process_status` function and enhance it: ```python @router.get("/studies/{study_id}/process") async def get_process_status(study_id: str): """Get the process status for a study's optimization run with ETA""" try: study_dir = resolve_study_path(study_id) results_dir = get_results_dir(study_dir) # Get process running status (existing code) is_running = is_optimization_running(study_id) # Load config for total trials config_file = study_dir / "optimization_config.json" total_trials = 100 # default if config_file.exists(): config = json.loads(config_file.read_text()) total_trials = config.get("optimization_settings", {}).get("n_trials", 100) # Query database for trial timing db_path = results_dir / "study.db" completed_trials = 0 avg_time_per_trial = None eta_seconds = None eta_formatted = None rate_per_hour = None recent_trials = [] if db_path.exists(): conn = sqlite3.connect(str(db_path)) cursor = conn.cursor() # Get completed trial count cursor.execute(""" SELECT COUNT(*) FROM trials WHERE state = 'COMPLETE' """) completed_trials = 
cursor.fetchone()[0] # Get recent trial timestamps for ETA calculation cursor.execute(""" SELECT datetime_start, datetime_complete FROM trials WHERE state = 'COMPLETE' AND datetime_start IS NOT NULL AND datetime_complete IS NOT NULL ORDER BY trial_id DESC LIMIT 20 """) rows = cursor.fetchall() if len(rows) >= 2: # Calculate average time per trial from recent trials durations = [] for start, end in rows: try: start_dt = datetime.fromisoformat(start.replace('Z', '+00:00')) end_dt = datetime.fromisoformat(end.replace('Z', '+00:00')) duration = (end_dt - start_dt).total_seconds() if 0 < duration < 7200: # Sanity check: 0-2 hours durations.append(duration) except: continue if durations: avg_time_per_trial = sum(durations) / len(durations) remaining_trials = max(0, total_trials - completed_trials) eta_seconds = avg_time_per_trial * remaining_trials # Format ETA if eta_seconds < 60: eta_formatted = f"{int(eta_seconds)}s" elif eta_seconds < 3600: eta_formatted = f"{int(eta_seconds // 60)}m" else: hours = int(eta_seconds // 3600) minutes = int((eta_seconds % 3600) // 60) eta_formatted = f"{hours}h {minutes}m" # Calculate rate rate_per_hour = 3600 / avg_time_per_trial if avg_time_per_trial > 0 else 0 conn.close() return { "study_id": study_id, "is_running": is_running, "completed_trials": completed_trials, "total_trials": total_trials, "progress_percent": round((completed_trials / total_trials) * 100, 1) if total_trials > 0 else 0, "avg_time_per_trial_seconds": round(avg_time_per_trial, 1) if avg_time_per_trial else None, "eta_seconds": round(eta_seconds) if eta_seconds else None, "eta_formatted": eta_formatted, "rate_per_hour": round(rate_per_hour, 2) if rate_per_hour else None } except HTTPException: raise except Exception as e: return { "study_id": study_id, "is_running": False, "error": str(e) } ``` **Step 2: Update Frontend Display** **File:** `atomizer-dashboard/frontend/src/components/dashboard/ControlPanel.tsx` Add ETA display in the horizontal layout: ```tsx {/* ETA 
and Rate Display */}
{processStatus && processStatus.is_running && (
  <div className="flex items-center gap-4 text-xs text-dark-300">
    {processStatus.eta_formatted && (
      <span className="flex items-center gap-1">
        <Clock className="h-3 w-3" />
        ETA: {processStatus.eta_formatted}
      </span>
    )}
    {processStatus.rate_per_hour && (
      <span className="flex items-center gap-1">
        <TrendingUp className="h-3 w-3" />
        Rate: {processStatus.rate_per_hour}/hr
      </span>
    )}
  </div>
)} ``` Add imports at top: ```tsx import { Clock, TrendingUp } from 'lucide-react'; ``` **Step 3: Update API Client Types** **File:** `atomizer-dashboard/frontend/src/api/client.ts` Update ProcessStatus interface: ```typescript export interface ProcessStatus { study_id: string; is_running: boolean; completed_trials?: number; total_trials?: number; progress_percent?: number; avg_time_per_trial_seconds?: number; eta_seconds?: number; eta_formatted?: string; rate_per_hour?: number; pid?: number; start_time?: string; iteration?: number; fea_count?: number; nn_count?: number; } ``` #### Testing Checklist - [ ] API returns eta_formatted when optimization running - [ ] ETA updates as trials complete - [ ] Rate per hour is reasonable (matches actual) - [ ] Display shows "--" when no data available --- ## 3. Phase 1: Control Panel Overhaul (P1) **Estimated Time:** 12 hours **Dependencies:** Phase 0 **Risk:** Medium ### 3.1 Status Indicator Component #### Design Specification ``` ┌─────────────────────────────────────────────────────────────────────────┐ │ ● RUNNING Trial 47/100 (47%) │ ETA: 2h 15m │ Rate: 3.2/hr │ ├─────────────────────────────────────────────────────────────────────────┤ │ [■ STOP] [⏸ PAUSE] [⚙ Settings] [📊 Optuna] [📋 Report] │ └─────────────────────────────────────────────────────────────────────────┘ Status States: ● RUNNING - Green pulsing dot, green text ⏸ PAUSED - Yellow dot, yellow text ○ STOPPED - Gray dot, gray text ✓ COMPLETE - Cyan dot, cyan text ``` #### Implementation **File:** `atomizer-dashboard/frontend/src/components/dashboard/StatusBadge.tsx` (NEW) ```tsx import React from 'react'; import { CheckCircle, PauseCircle, StopCircle, PlayCircle } from 'lucide-react'; export type OptimizationStatus = 'running' | 'paused' | 'stopped' | 'completed' | 'error'; interface StatusBadgeProps { status: OptimizationStatus; showLabel?: boolean; size?: 'sm' | 'md' | 'lg'; } const statusConfig = { running: { color: 'green', label: 'Running', icon: PlayCircle, 
dotClass: 'bg-green-500 animate-pulse', textClass: 'text-green-400', bgClass: 'bg-green-500/10 border-green-500/30', }, paused: { color: 'yellow', label: 'Paused', icon: PauseCircle, dotClass: 'bg-yellow-500', textClass: 'text-yellow-400', bgClass: 'bg-yellow-500/10 border-yellow-500/30', }, stopped: { color: 'gray', label: 'Stopped', icon: StopCircle, dotClass: 'bg-dark-500', textClass: 'text-dark-400', bgClass: 'bg-dark-700 border-dark-600', }, completed: { color: 'cyan', label: 'Completed', icon: CheckCircle, dotClass: 'bg-primary-500', textClass: 'text-primary-400', bgClass: 'bg-primary-500/10 border-primary-500/30', }, error: { color: 'red', label: 'Error', icon: StopCircle, dotClass: 'bg-red-500', textClass: 'text-red-400', bgClass: 'bg-red-500/10 border-red-500/30', }, }; const sizeConfig = { sm: { dot: 'w-2 h-2', text: 'text-xs', padding: 'px-2 py-0.5', icon: 'w-3 h-3' }, md: { dot: 'w-3 h-3', text: 'text-sm', padding: 'px-3 py-1', icon: 'w-4 h-4' }, lg: { dot: 'w-4 h-4', text: 'text-base', padding: 'px-4 py-2', icon: 'w-5 h-5' }, }; export function StatusBadge({ status, showLabel = true, size = 'md' }: StatusBadgeProps) { const config = statusConfig[status]; const sizes = sizeConfig[size]; return (
    <span className={`inline-flex items-center gap-2 rounded-full border ${config.bgClass} ${sizes.padding}`}>
      <span className={`rounded-full ${config.dotClass} ${sizes.dot}`} />
      {showLabel && (
        <span className={`font-medium ${config.textClass} ${sizes.text}`}>{config.label}</span>
      )}
    </span>
); } export function getStatusFromProcess( isRunning: boolean, isPaused: boolean, isCompleted: boolean, hasError: boolean ): OptimizationStatus { if (hasError) return 'error'; if (isCompleted) return 'completed'; if (isPaused) return 'paused'; if (isRunning) return 'running'; return 'stopped'; } ``` ### 3.2 Pause/Resume Functionality #### Backend Implementation **File:** `atomizer-dashboard/backend/api/routes/optimization.py` Add new endpoints: ```python import psutil import signal # Track paused state (in-memory, could also use file) _paused_studies: Dict[str, bool] = {} @router.post("/studies/{study_id}/pause") async def pause_optimization(study_id: str): """ Pause the running optimization process. Uses SIGSTOP on Unix or process.suspend() on Windows. """ try: study_dir = resolve_study_path(study_id) # Find the optimization process proc = find_optimization_process(study_id, study_dir) if not proc: raise HTTPException(status_code=404, detail="No running optimization found") # Suspend the process and all children process = psutil.Process(proc.pid) children = process.children(recursive=True) # Suspend children first, then parent for child in children: try: child.suspend() except psutil.NoSuchProcess: pass process.suspend() _paused_studies[study_id] = True return { "success": True, "message": f"Optimization paused (PID: {proc.pid})", "pid": proc.pid } except HTTPException: raise except Exception as e: raise HTTPException(status_code=500, detail=f"Failed to pause: {str(e)}") @router.post("/studies/{study_id}/resume") async def resume_optimization(study_id: str): """ Resume a paused optimization process. Uses SIGCONT on Unix or process.resume() on Windows. 
""" try: study_dir = resolve_study_path(study_id) # Find the optimization process proc = find_optimization_process(study_id, study_dir) if not proc: raise HTTPException(status_code=404, detail="No paused optimization found") # Resume the process and all children process = psutil.Process(proc.pid) children = process.children(recursive=True) # Resume parent first, then children process.resume() for child in children: try: child.resume() except psutil.NoSuchProcess: pass _paused_studies[study_id] = False return { "success": True, "message": f"Optimization resumed (PID: {proc.pid})", "pid": proc.pid } except HTTPException: raise except Exception as e: raise HTTPException(status_code=500, detail=f"Failed to resume: {str(e)}") @router.get("/studies/{study_id}/is-paused") async def is_optimization_paused(study_id: str): """Check if optimization is currently paused""" return { "study_id": study_id, "is_paused": _paused_studies.get(study_id, False) } def find_optimization_process(study_id: str, study_dir: Path): """Find the running optimization process for a study""" for proc in psutil.process_iter(['pid', 'name', 'cmdline', 'cwd']): try: cmdline = proc.info.get('cmdline') or [] cmdline_str = ' '.join(cmdline) if cmdline else '' if 'python' in cmdline_str.lower() and 'run_optimization' in cmdline_str: if study_id in cmdline_str or str(study_dir) in cmdline_str: return proc except (psutil.NoSuchProcess, psutil.AccessDenied): continue return None ``` #### Frontend Implementation **File:** `atomizer-dashboard/frontend/src/api/client.ts` Add new methods: ```typescript async pauseOptimization(studyId: string): Promise<{ success: boolean; message: string }> { const response = await fetch(`${API_BASE}/optimization/studies/${studyId}/pause`, { method: 'POST', }); if (!response.ok) { const error = await response.json(); throw new Error(error.detail || 'Failed to pause optimization'); } return response.json(); } async resumeOptimization(studyId: string): Promise<{ success: boolean; 
message: string }> { const response = await fetch(`${API_BASE}/optimization/studies/${studyId}/resume`, { method: 'POST', }); if (!response.ok) { const error = await response.json(); throw new Error(error.detail || 'Failed to resume optimization'); } return response.json(); } async isOptimizationPaused(studyId: string): Promise<{ is_paused: boolean }> { const response = await fetch(`${API_BASE}/optimization/studies/${studyId}/is-paused`); return response.json(); } ``` **File:** `atomizer-dashboard/frontend/src/components/dashboard/ControlPanel.tsx` Update to include pause/resume buttons: ```tsx // Add state const [isPaused, setIsPaused] = useState(false); // Add handlers const handlePause = async () => { if (!selectedStudy) return; setActionInProgress('pause'); setError(null); try { await apiClient.pauseOptimization(selectedStudy.id); setIsPaused(true); await fetchProcessStatus(); } catch (err: any) { setError(err.message || 'Failed to pause optimization'); } finally { setActionInProgress(null); } }; const handleResume = async () => { if (!selectedStudy) return; setActionInProgress('resume'); setError(null); try { await apiClient.resumeOptimization(selectedStudy.id); setIsPaused(false); await fetchProcessStatus(); } catch (err: any) { setError(err.message || 'Failed to resume optimization'); } finally { setActionInProgress(null); } }; // Update button rendering {isRunning && !isPaused && ( )} {isPaused && ( )} ``` ### 3.3 Dynamic Optimizer State #### Design: Dashboard State File The optimization script writes a JSON file that the dashboard reads: **File structure:** `{study_dir}/3_results/dashboard_state.json` ```json { "updated_at": "2026-01-06T12:34:56.789Z", "phase": { "name": "exploitation", "display_name": "Exploitation", "description": "Focusing optimization on promising regions of the design space", "progress": 0.65, "started_at": "2026-01-06T11:00:00Z" }, "sampler": { "name": "TPESampler", "display_name": "TPE (Bayesian)", "description": "Tree-structured 
Parzen Estimator uses Bayesian optimization with kernel density estimation to model promising and unpromising regions", "parameters": { "n_startup_trials": 20, "multivariate": true, "constant_liar": true } }, "objectives": [ { "name": "weighted_sum", "display_name": "Weighted Sum", "direction": "minimize", "weight": 1.0, "current_best": 175.87, "initial_value": 450.23, "improvement_percent": 60.9, "unit": "" }, { "name": "wfe_40_20", "display_name": "WFE @ 40-20C", "direction": "minimize", "weight": 10.0, "current_best": 5.63, "initial_value": 15.2, "improvement_percent": 63.0, "unit": "nm RMS" }, { "name": "mass_kg", "display_name": "Mass", "direction": "minimize", "weight": 1.0, "current_best": 118.67, "initial_value": 145.0, "improvement_percent": 18.2, "unit": "kg" } ], "plan": { "total_phases": 4, "current_phase_index": 1, "phases": [ {"name": "exploration", "display_name": "Exploration", "trials": "1-20"}, {"name": "exploitation", "display_name": "Exploitation", "trials": "21-60"}, {"name": "refinement", "display_name": "Refinement", "trials": "61-90"}, {"name": "convergence", "display_name": "Convergence", "trials": "91-100"} ] }, "strategy": { "current": "Focused sampling around best solution", "next_action": "Evaluating 5 candidates near Pareto front" }, "performance": { "fea_count": 45, "nn_count": 0, "avg_fea_time_seconds": 127.5, "total_runtime_seconds": 5737 } } ``` #### Backend Endpoint **File:** `atomizer-dashboard/backend/api/routes/optimization.py` ```python @router.get("/studies/{study_id}/dashboard-state") async def get_dashboard_state(study_id: str): """ Get dynamic optimizer state from dashboard_state.json or compute from DB. The optimization script writes this file periodically. 
""" try: study_dir = resolve_study_path(study_id) results_dir = get_results_dir(study_dir) # Try to read dynamic state file first state_file = results_dir / "dashboard_state.json" if state_file.exists(): state = json.loads(state_file.read_text()) # Add freshness indicator try: updated = datetime.fromisoformat(state.get("updated_at", "").replace('Z', '+00:00')) age_seconds = (datetime.now(updated.tzinfo) - updated).total_seconds() state["_age_seconds"] = age_seconds state["_is_stale"] = age_seconds > 60 # Stale if >1 minute old except: state["_is_stale"] = True return state # Fallback: compute state from database return await compute_state_from_db(study_id, study_dir, results_dir) except HTTPException: raise except Exception as e: raise HTTPException(status_code=500, detail=f"Failed to get dashboard state: {str(e)}") async def compute_state_from_db(study_id: str, study_dir: Path, results_dir: Path): """Compute optimizer state from database when no state file exists""" config_file = study_dir / "optimization_config.json" config = {} if config_file.exists(): config = json.loads(config_file.read_text()) db_path = results_dir / "study.db" completed = 0 total = config.get("optimization_settings", {}).get("n_trials", 100) if db_path.exists(): conn = sqlite3.connect(str(db_path)) cursor = conn.cursor() cursor.execute("SELECT COUNT(*) FROM trials WHERE state = 'COMPLETE'") completed = cursor.fetchone()[0] conn.close() # Determine phase based on progress progress = completed / total if total > 0 else 0 if progress < 0.2: phase_name, phase_desc = "exploration", "Initial sampling to understand design space" elif progress < 0.6: phase_name, phase_desc = "exploitation", "Focusing on promising regions" elif progress < 0.9: phase_name, phase_desc = "refinement", "Fine-tuning best solutions" else: phase_name, phase_desc = "convergence", "Final convergence check" # Get sampler info sampler = config.get("optimization_settings", {}).get("sampler", "TPESampler") sampler_descriptions = 
{ "TPESampler": "Tree-structured Parzen Estimator - Bayesian optimization", "NSGAIISampler": "Non-dominated Sorting Genetic Algorithm II - Multi-objective evolutionary", "CmaEsSampler": "Covariance Matrix Adaptation Evolution Strategy - Continuous optimization", "RandomSampler": "Random sampling - Baseline/exploration" } # Get objectives objectives = [] for obj in config.get("objectives", []): objectives.append({ "name": obj.get("name", "objective"), "display_name": obj.get("name", "Objective").replace("_", " ").title(), "direction": obj.get("direction", "minimize"), "weight": obj.get("weight", 1.0), "unit": obj.get("unit", "") }) return { "updated_at": datetime.now().isoformat(), "_is_computed": True, "phase": { "name": phase_name, "display_name": phase_name.title(), "description": phase_desc, "progress": progress }, "sampler": { "name": sampler, "display_name": sampler.replace("Sampler", ""), "description": sampler_descriptions.get(sampler, "Optimization sampler") }, "objectives": objectives, "plan": { "total_phases": 4, "current_phase_index": ["exploration", "exploitation", "refinement", "convergence"].index(phase_name), "phases": [ {"name": "exploration", "display_name": "Exploration"}, {"name": "exploitation", "display_name": "Exploitation"}, {"name": "refinement", "display_name": "Refinement"}, {"name": "convergence", "display_name": "Convergence"} ] } } ``` #### Frontend Component Update **File:** `atomizer-dashboard/frontend/src/components/tracker/OptimizerStatePanel.tsx` Complete rewrite for dynamic state: ```tsx import React, { useEffect, useState } from 'react'; import { Target, Layers, TrendingUp, Brain, Database, RefreshCw } from 'lucide-react'; import { apiClient } from '../../api/client'; interface DashboardState { updated_at: string; _is_stale?: boolean; _is_computed?: boolean; phase: { name: string; display_name: string; description: string; progress: number; }; sampler: { name: string; display_name: string; description: string; parameters?: 
Record; }; objectives: Array<{ name: string; display_name: string; direction: string; weight?: number; current_best?: number; initial_value?: number; improvement_percent?: number; unit?: string; }>; plan?: { total_phases: number; current_phase_index: number; phases: Array<{ name: string; display_name: string }>; }; strategy?: { current: string; next_action?: string; }; performance?: { fea_count: number; nn_count: number; avg_fea_time_seconds?: number; }; } interface OptimizerStatePanelProps { studyId: string | null; refreshInterval?: number; } export function OptimizerStatePanel({ studyId, refreshInterval = 5000 }: OptimizerStatePanelProps) { const [state, setState] = useState(null); const [loading, setLoading] = useState(true); const [error, setError] = useState(null); const fetchState = async () => { if (!studyId) return; try { const response = await fetch(`/api/optimization/studies/${studyId}/dashboard-state`); if (!response.ok) throw new Error('Failed to fetch state'); const data = await response.json(); setState(data); setError(null); } catch (err: any) { setError(err.message); } finally { setLoading(false); } }; useEffect(() => { fetchState(); const interval = setInterval(fetchState, refreshInterval); return () => clearInterval(interval); }, [studyId, refreshInterval]); if (!studyId) { return (
      <div className="p-4 text-sm text-dark-400">Select a study to view optimizer state</div>
    );
  }

  if (loading && !state) {
    return (
      <div className="p-4 text-sm text-dark-400 animate-pulse">Loading...</div>
    );
  }

  if (error || !state) {
    return (
      <div className="p-4 text-sm text-red-400">{error || 'Failed to load state'}</div>
    );
  }

  const phaseColors: Record<string, string> = {
    exploration: 'text-blue-400 bg-blue-500/10 border-blue-500/30',
    exploitation: 'text-yellow-400 bg-yellow-500/10 border-yellow-500/30',
    refinement: 'text-purple-400 bg-purple-500/10 border-purple-500/30',
    convergence: 'text-green-400 bg-green-500/10 border-green-500/30'
  };

  return (
    <div className="space-y-4">
      {/* Stale indicator */}
      {state._is_stale && (
        <div className="text-xs text-yellow-400">State may be outdated</div>
      )}

      {/* Sampler Card */}
      <div className="rounded-lg border border-dark-600 p-3">
        <div className="flex items-center gap-2 font-medium text-dark-200">
          <Brain className="h-4 w-4" />
          {state.sampler.display_name}
        </div>
        <p className="mt-1 text-xs text-dark-400">{state.sampler.description}</p>
        {state.sampler.parameters && (
          <div className="mt-2 flex flex-wrap gap-2 text-xs text-dark-500">
            {Object.entries(state.sampler.parameters).slice(0, 3).map(([key, val]) => (
              <span key={key}>{key}: {String(val)}</span>
            ))}
          </div>
        )}
      </div>

      {/* Phase Progress */}
      <div className="rounded-lg border border-dark-600 p-3">
        <div className={`inline-flex items-center gap-2 rounded border px-2 py-0.5 text-sm ${phaseColors[state.phase.name] ?? ''}`}>
          <Layers className="h-4 w-4" />
          Phase {state.phase.display_name}
        </div>
        {/* Phase progress bar */}
        <div className="mt-2 h-1.5 rounded bg-dark-700">
          <div className="h-1.5 rounded bg-primary-500" style={{ width: `${state.phase.progress * 100}%` }} />
        </div>
        <p className="mt-1 text-xs text-dark-400">{state.phase.description}</p>

        {/* Phase timeline */}
        {state.plan && (
          <div className="mt-2 flex gap-2 text-xs">
            {state.plan.phases.map((phase, idx) => (
              <span
                key={phase.name}
                className={idx === state.plan?.current_phase_index ? 'text-primary-400' : 'text-dark-500'}
              >
                {phase.display_name}
              </span>
            ))}
          </div>
        )}
      </div>

      {/* Objectives */}
      <div className="rounded-lg border border-dark-600 p-3">
        <div className="flex items-center gap-2 font-medium text-dark-200">
          <Target className="h-4 w-4" />
          Objectives
          {state.objectives.length > 1 && (
            <span className="text-xs text-primary-400">Multi-Objective</span>
          )}
        </div>
        {state.objectives.map((obj) => (
          <div key={obj.name} className="mt-2 flex items-center justify-between text-sm">
            <span className="text-dark-300">
              {obj.direction === 'minimize' ? '↓' : '↑'} {obj.display_name}
            </span>
            <span className="flex items-center gap-2">
              {obj.current_best !== undefined && (
                <span className="text-dark-100">
                  {obj.current_best.toFixed(2)} {obj.unit && <span className="text-dark-400">{obj.unit}</span>}
                </span>
              )}
              {obj.improvement_percent !== undefined && (
                <span className="text-xs text-green-400">↓{obj.improvement_percent.toFixed(1)}%</span>
              )}
            </span>
          </div>
        ))}
      </div>

      {/* Performance Stats */}
      {state.performance && (
        <div className="flex gap-6 rounded-lg border border-dark-600 p-3">
          <div className="text-center">
            <div className="text-lg font-semibold text-dark-100">{state.performance.fea_count}</div>
            <div className="text-xs text-dark-400">FEA Runs</div>
          </div>
          {state.performance.nn_count > 0 && (
            <div className="text-center">
              <div className="text-lg font-semibold text-dark-100">{state.performance.nn_count}</div>
              <div className="text-xs text-dark-400">NN Predictions</div>
            </div>
          )}
        </div>
      )}

      {/* Current Strategy */}
      {state.strategy && (
        <div className="rounded-lg border border-dark-600 p-3">
          <div className="flex items-center gap-2 font-medium text-dark-200">
            <TrendingUp className="h-4 w-4" />
            Current Strategy
          </div>
          <p className="mt-1 text-xs text-dark-300">{state.strategy.current}</p>
          {state.strategy.next_action && (
            <p className="mt-1 text-xs text-dark-400">Next: {state.strategy.next_action}</p>
          )}
        </div>
      )}
    </div>
); } ``` --- ## 4. Phase 2: Visualization Improvements (P2) **Estimated Time:** 15 hours **Dependencies:** Phase 0, Phase 1 (for integration) **Risk:** Medium ### 4.1 Parallel Coordinates Overhaul #### Decision: Nivo vs Custom D3 | Criteria | Nivo | Custom D3 | |----------|------|-----------| | Development time | 4 hours | 12 hours | | Dark theme | Built-in | Manual | | Brushing | Limited | Full control | | Bundle size | +25kb | +0kb (D3 already used) | | Maintenance | Low | Medium | **Recommendation:** Start with Nivo, add custom brushing layer if needed. #### Implementation Plan **Step 1: Install Nivo** ```bash cd atomizer-dashboard/frontend npm install @nivo/parallel-coordinates @nivo/core ``` **Step 2: Create New Component** **File:** `atomizer-dashboard/frontend/src/components/charts/ParallelCoordinatesChart.tsx` ```tsx import React, { useMemo, useState } from 'react'; import { ResponsiveParallelCoordinates } from '@nivo/parallel-coordinates'; interface Trial { trial_number: number; values: number[]; params: Record; constraint_satisfied?: boolean; source?: string; } interface ParallelCoordinatesChartProps { trials: Trial[]; paramNames: string[]; objectiveNames: string[]; paretoTrials?: Set; height?: number; maxTrials?: number; } // Atomaste-aligned dark theme const darkTheme = { background: 'transparent', textColor: '#94a3b8', fontSize: 11, axis: { domain: { line: { stroke: '#334155', strokeWidth: 2, }, }, ticks: { line: { stroke: '#1e293b', strokeWidth: 1, }, text: { fill: '#64748b', fontSize: 10, }, }, legend: { text: { fill: '#94a3b8', fontSize: 11, fontWeight: 500, }, }, }, }; export function ParallelCoordinatesChart({ trials, paramNames, objectiveNames, paretoTrials = new Set(), height = 400, maxTrials = 300 }: ParallelCoordinatesChartProps) { const [hoveredTrial, setHoveredTrial] = useState(null); // Process and limit trials const processedData = useMemo(() => { // Filter feasible trials and limit count const feasible = trials .filter(t => 
t.constraint_satisfied !== false) .slice(-maxTrials); // Build data array for Nivo return feasible.map(trial => { const dataPoint: Record = { trial_number: trial.trial_number, _isPareto: paretoTrials.has(trial.trial_number), }; // Add parameters paramNames.forEach(name => { dataPoint[name] = trial.params[name] ?? 0; }); // Add objectives objectiveNames.forEach((name, idx) => { dataPoint[name] = trial.values[idx] ?? 0; }); return dataPoint; }); }, [trials, paramNames, objectiveNames, paretoTrials, maxTrials]); // Build variables (axes) configuration const variables = useMemo(() => { const vars: any[] = []; // Parameters first paramNames.forEach(name => { const values = processedData.map(d => d[name]).filter(v => v !== undefined); vars.push({ key: name, type: 'linear' as const, min: Math.min(...values) * 0.95, max: Math.max(...values) * 1.05, legend: name.replace(/_/g, ' '), legendPosition: 'start' as const, legendOffset: -15, }); }); // Then objectives objectiveNames.forEach(name => { const values = processedData.map(d => d[name]).filter(v => v !== undefined); vars.push({ key: name, type: 'linear' as const, min: Math.min(...values) * 0.95, max: Math.max(...values) * 1.05, legend: name.replace(/_/g, ' '), legendPosition: 'start' as const, legendOffset: -15, }); }); return vars; }, [processedData, paramNames, objectiveNames]); // Color function - Pareto in cyan, rest in blue gradient const getColor = (datum: any) => { if (datum._isPareto) return '#00d4e6'; // Atomaste cyan // Gradient based on first objective (assuming minimize) const objValues = processedData.map(d => d[objectiveNames[0]]); const minObj = Math.min(...objValues); const maxObj = Math.max(...objValues); const normalized = (datum[objectiveNames[0]] - minObj) / (maxObj - minObj); // Blue gradient: good (dark blue) to bad (light gray) if (normalized < 0.25) return '#3b82f6'; // Blue - top 25% if (normalized < 0.5) return '#60a5fa'; // Light blue if (normalized < 0.75) return '#94a3b8'; // Gray return 
'#475569'; // Dark gray - worst
  };

  if (processedData.length === 0) {
    return (
      <div className="flex items-center justify-center text-sm text-dark-400" style={{ height }}>
        No feasible trials to display
      </div>
    );
  }

  return (
    <div style={{ height }}>
      {/* Chart; props assume the current @nivo/parallel-coordinates API -- verify against the installed version */}
      <ResponsiveParallelCoordinates
        data={processedData}
        variables={variables}
        theme={darkTheme as any}
        colors={getColor}
        margin={{ top: 40, right: 40, bottom: 20, left: 40 }}
      />
      {/* Legend */}
      <div className="mt-2 flex gap-4 text-xs">
        <span className="text-primary-400">■ Pareto Optimal</span>
        <span className="text-blue-400">■ Top 25%</span>
        <span className="text-dark-400">■ Other</span>
      </div>
    </div>
); } ``` **Step 3: Update Dashboard to use new component** **File:** `atomizer-dashboard/frontend/src/pages/Dashboard.tsx` Replace the old parallel coordinates import: ```tsx // Remove import { ParallelCoordinatesPlot } from '../components/ParallelCoordinatesPlot'; // Add import { ParallelCoordinatesChart } from '../components/charts/ParallelCoordinatesChart'; ``` Update usage: ```tsx t.trial_number))} height={400} maxTrials={300} /> ``` **Step 4: Remove old Plotly component** Delete or archive: - `atomizer-dashboard/frontend/src/components/plotly/PlotlyParallelCoordinates.tsx` - `atomizer-dashboard/frontend/src/components/ParallelCoordinatesPlot.tsx` ### 4.2 Convergence Plot Log Scale #### Implementation **File:** `atomizer-dashboard/frontend/src/components/ConvergencePlot.tsx` Add log scale toggle: ```tsx // Add to component state const [useLogScale, setUseLogScale] = useState(false); // Add transformation function const transformValue = (value: number): number => { if (!useLogScale) return value; // Handle negative values for minimization if (value <= 0) return value; return Math.log10(value); }; // Add inverse for display const formatValue = (value: number): string => { if (!useLogScale) return value.toFixed(4); // Show original value in tooltip return Math.pow(10, value).toFixed(4); }; // Update data processing const chartData = useMemo(() => { return processedData.map(d => ({ ...d, value: transformValue(d.value), best: transformValue(d.best), })); }, [processedData, useLogScale]); // Add toggle button in render
<button
  onClick={() => setUseLogScale(!useLogScale)}
  className="rounded border border-dark-600 px-2 py-1 text-xs text-dark-300 hover:text-dark-100"
>
  {useLogScale ? 'Logarithmic' : 'Linear'}
</button>
``` Also add for PlotlyConvergencePlot: **File:** `atomizer-dashboard/frontend/src/components/plotly/PlotlyConvergencePlot.tsx` ```tsx // Add state const [logScale, setLogScale] = useState(false); // Update layout const layout = { ...existingLayout, yaxis: { ...existingLayout.yaxis, type: logScale ? 'log' : 'linear', title: logScale ? `${objectiveName} (log scale)` : objectiveName, }, }; // Add toggle ``` --- ## 5. Phase 3: Analysis & Reporting (P3) **Estimated Time:** 10 hours **Dependencies:** None (can run in parallel with P2) **Risk:** Low ### 5.1 Analysis Tab Fixes #### Testing Matrix | Tab | Component | Test | Status | |-----|-----------|------|--------| | Overview | Statistics grid | Load with 100+ trials | | | Overview | Convergence plot | Shows best-so-far line | | | Parameters | Importance chart | Shows ranking bars | | | Parameters | Parallel coords | Axes render correctly | | | Pareto | 2D/3D plot | Toggle works | | | Pareto | Pareto table | Shows top solutions | | | Correlations | Heatmap | All cells colored | | | Correlations | Top correlations | Table populated | | | Constraints | Feasibility chart | Line chart renders | | | Constraints | Infeasible list | Shows violations | | | Surrogate | Quality chart | FEA vs NN comparison | | | Runs | Comparison | Multiple runs overlay | | #### Common Fixes Needed **Fix 1: Add loading states to all tabs** ```tsx // Wrap each tab content {loading ? (
  <div>Loading {tabName}...</div>
) : error ? (
  <div>{error}</div>
) : (
  // Tab content
)}
```

**Fix 2: Memoize expensive calculations**

```tsx
// Wrap correlation calculation
const correlationData = useMemo(() => {
  if (trials.length < 10) return null;
  return calculateCorrelations(trials, paramNames, objectiveNames);
}, [trials, paramNames, objectiveNames]);
```

**Fix 3: Add empty state handlers**

```tsx
{trials.length === 0 ? (
  <div>
    <p>No trials yet</p>
    <p>Run some optimization trials to see analysis</p>
  </div>
) : (
  // Content
)}
```

### 5.2 Report Generation Protocol

#### Create Protocol Document

**File:** `docs/protocols/operations/OP_08_GENERATE_REPORT.md`

````markdown
# OP_08: Generate Study Report

## Overview
Generate a comprehensive markdown report for an optimization study.

## Trigger
- Dashboard "Generate Report" button
- CLI: `atomizer report `
- Claude Code: "generate report for {study}"

## Prerequisites
- Study must have completed trials
- study.db must exist with trial data
- optimization_config.json must be present

## Process

### Step 1: Gather Data

```python
# Load configuration
config = load_json(study_dir / "optimization_config.json")

# Query database
db = sqlite3.connect(study_dir / "3_results/study.db")
trials = query_all_trials(db)
best_trial = get_best_trial(db)

# Calculate metrics
convergence_trial = find_90_percent_improvement(trials)
feasibility_rate = count_feasible(trials) / len(trials)
```

### Step 2: Generate Sections

#### Executive Summary
- Total trials completed
- Best objective value achieved
- Improvement percentage from initial
- Key design changes

#### Results Table

| Metric | Initial | Final | Change |
|--------|---------|-------|--------|
| objective_1 | X | Y | Z% |

#### Best Solution
- Trial number
- All design variable values
- All objective values
- Constraint satisfaction

#### Convergence Analysis
- Phase breakdown
- Convergence trial identification
- Exploration vs exploitation ratio

#### Recommendations
- Potential further improvements
- Sensitivity observations
- Next steps

### Step 3: Write Report

```python
report_path = study_dir / "STUDY_REPORT.md"
report_path.write_text(markdown_content)
```

## Output
- `STUDY_REPORT.md` in study root directory
- Returns markdown content to API caller

## Template
See: `optimization_engine/reporting/templates/study_report.md`
````

#### Implement Backend

**File:** `atomizer-dashboard/backend/api/routes/optimization.py`

Enhance the generate-report endpoint:

```python
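# NOTE: the code below assumes stdlib `json`, `sqlite3`, and `datetime` plus
# FastAPI's `router`/`HTTPException` are already imported at the top of
# optimization.py. The two path helpers it calls are not defined anywhere in
# this plan; a minimal sketch, assuming studies live under the layout shown in
# Phase 0 (<root>/<project>/<study>/3_results/study.db) - the names, root
# location, and validation rules here are assumptions, not existing code:
from pathlib import Path

STUDIES_ROOT = Path("C:/Users/antoi/Atomizer/studies")  # assumed root

def resolve_study_path(study_id: str) -> Path:
    """Find a study directory one level below a project folder."""
    for project_dir in STUDIES_ROOT.iterdir():
        candidate = project_dir / study_id
        if candidate.is_dir():
            return candidate
    raise FileNotFoundError(f"Study not found: {study_id}")

def get_results_dir(study_dir: Path) -> Path:
    """Results conventionally live under 3_results/ (matches the study.db path above)."""
    return study_dir / "3_results"
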
@router.post("/studies/{study_id}/generate-report")
async def generate_report(study_id: str, format: str = "markdown"):
    """
    Generate comprehensive study report.

    Args:
        study_id: Study identifier
        format: Output format (markdown, html, json)

    Returns:
        Generated report content and file path
    """
    try:
        study_dir = resolve_study_path(study_id)
        results_dir = get_results_dir(study_dir)

        # Load configuration
        config_file = study_dir / "optimization_config.json"
        if not config_file.exists():
            raise HTTPException(status_code=404, detail="No optimization config found")
        config = json.loads(config_file.read_text())

        # Load trial data from database
        db_path = results_dir / "study.db"
        if not db_path.exists():
            raise HTTPException(status_code=404, detail="No study database found")

        conn = sqlite3.connect(str(db_path))
        conn.row_factory = sqlite3.Row
        cursor = conn.cursor()

        # Get all completed trials
        # ("values" is a reserved word in SQLite, so the alias must be quoted)
        cursor.execute("""
            SELECT t.trial_id, t.number,
                   GROUP_CONCAT(tv.value) AS "values",
                   GROUP_CONCAT(tp.param_name || '=' || tp.param_value) AS params
            FROM trials t
            LEFT JOIN trial_values tv ON t.trial_id = tv.trial_id
            LEFT JOIN trial_params tp ON t.trial_id = tp.trial_id
            WHERE t.state = 'COMPLETE'
            GROUP BY t.trial_id
            ORDER BY t.number
        """)
        trials = cursor.fetchall()

        # Get best trial (assumes a minimization objective)
        cursor.execute("""
            SELECT t.trial_id, t.number, MIN(tv.value) AS best_value
            FROM trials t
            JOIN trial_values tv ON t.trial_id = tv.trial_id
            WHERE t.state = 'COMPLETE'
            GROUP BY t.trial_id
            ORDER BY best_value
            LIMIT 1
        """)
        best = cursor.fetchone()
        conn.close()

        # Generate report content
        report = generate_markdown_report(
            study_id=study_id,
            config=config,
            trials=trials,
            best_trial=best,
        )

        # Save to file
        report_path = study_dir / "STUDY_REPORT.md"
        report_path.write_text(report)

        return {
            "success": True,
            "content": report,
            "path": str(report_path),
            "format": format,
            "generated_at": datetime.now().isoformat(),
        }

    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to generate report: {e}")


def generate_markdown_report(study_id: str, config: dict, trials: list, best_trial) -> str:
    """Generate markdown report content."""
    # Extract info
    objectives = config.get("objectives", [])
    design_vars = config.get("design_variables", [])
    n_trials = len(trials)

    # Calculate metrics
    if trials and best_trial:
        first_value = float(trials[0]["values"].split(",")[0]) if trials[0]["values"] else 0
        best_value = best_trial["best_value"]
        improvement = ((first_value - best_value) / first_value * 100) if first_value != 0 else 0
    else:
        first_value = best_value = improvement = 0

    # Build report
    report = f"""# {study_id.replace('_', ' ').title()} - Optimization Report

**Generated:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
**Status:** {'Completed' if n_trials >= config.get('optimization_settings', {}).get('n_trials', 100) else 'In Progress'}

---

## Executive Summary

This optimization study completed **{n_trials} trials** and achieved a **{improvement:.1f}%** improvement in the primary objective.
| Metric | Value |
|--------|-------|
| Total Trials | {n_trials} |
| Best Value | {best_value:.4f} |
| Initial Value | {first_value:.4f} |
| Improvement | {improvement:.1f}% |

---

## Objectives

| Name | Direction | Weight |
|------|-----------|--------|
"""

    for obj in objectives:
        report += f"| {obj.get('name', 'N/A')} | {obj.get('direction', 'minimize')} | {obj.get('weight', 1.0)} |\n"

    report += f"""
---

## Design Variables

| Name | Min | Max | Best Value |
|------|-----|-----|------------|
"""

    # Parse best params
    best_params = {}
    if best_trial and best_trial['params']:
        for pair in best_trial['params'].split(','):
            if '=' in pair:
                k, v = pair.split('=', 1)
                best_params[k] = float(v)

    for dv in design_vars:
        name = dv.get('name', 'N/A')
        min_val = dv.get('bounds', [0, 1])[0]
        max_val = dv.get('bounds', [0, 1])[1]
        best_val = best_params.get(name, 'N/A')
        if isinstance(best_val, float):
            best_val = f"{best_val:.4f}"
        report += f"| {name} | {min_val} | {max_val} | {best_val} |\n"

    report += f"""
---

## Best Solution

**Trial #{best_trial['number'] if best_trial else 'N/A'}** achieved the optimal result.

---

## Recommendations

1. Consider extending the optimization if convergence is not yet achieved
2. Validate the best solution with high-fidelity FEA
3. Perform sensitivity analysis around the optimal design point

---

*Generated by Atomizer Dashboard*
"""
    return report
```

---

## 6. Testing & Validation

### 6.1 Test Plan

#### Unit Tests

| Component | Test Case | Expected |
|-----------|-----------|----------|
| StatusBadge | Render all states | Correct colors/icons |
| ETA calculation | 20 trials with timestamps | Accurate ETA |
| Pause/Resume | Running process | State toggles |
| Log scale | Large value range | Compressed view |

#### Integration Tests

| Flow | Steps | Expected |
|------|-------|----------|
| Start optimization | Click Start → Watch status | Shows Running |
| Pause optimization | Click Pause while running | Shows Paused, process suspended |
| Resume optimization | Click Resume while paused | Shows Running, process resumes |
| Stop optimization | Click Stop | Shows Stopped, process killed |
| Optuna dashboard | Click Optuna button | New tab opens with Optuna |
| Generate report | Click Generate Report | STUDY_REPORT.md created |

#### Manual Testing Checklist

```
Phase 0:
[ ] pip install optuna-dashboard succeeds
[ ] Optuna button launches dashboard
[ ] ETA shows reasonable time
[ ] Rate per hour matches reality

Phase 1:
[ ] Status badge shows correct state
[ ] Pause button suspends process
[ ] Resume button resumes process
[ ] Stop button kills process
[ ] Optimizer state updates dynamically

Phase 2:
[ ] Parallel coords renders in dark theme
[ ] Pareto solutions highlighted in cyan
[ ] Log scale toggle works
[ ] Convergence improvements visible

Phase 3:
[ ] All Analysis tabs load
[ ] No console errors
[ ] Report generates successfully
[ ] Report contains accurate data
```

### 6.2 Rollback Plan

If issues arise:

1. **Git revert**: Each phase is a separate commit
2. **Feature flags**: Add `ENABLE_NEW_CONTROLS=false` env var
3. **Component fallback**: Keep old components renamed with `_legacy` suffix

---

## 7. Appendix: Technical Specifications

### 7.1 Package Dependencies

**Frontend (package.json additions):**

```json
{
  "dependencies": {
    "@nivo/parallel-coordinates": "^0.84.0",
    "@nivo/core": "^0.84.0"
  }
}
```

**Backend (pip):**

```
optuna-dashboard>=0.14.0
```

### 7.2 API Endpoints Summary

| Method | Endpoint | Description | Phase |
|--------|----------|-------------|-------|
| GET | `/api/optimization/optuna-status` | Check if optuna-dashboard installed | P0 |
| GET | `/api/optimization/studies/{id}/process` | Enhanced with ETA | P0 |
| POST | `/api/optimization/studies/{id}/pause` | Pause optimization | P1 |
| POST | `/api/optimization/studies/{id}/resume` | Resume optimization | P1 |
| GET | `/api/optimization/studies/{id}/is-paused` | Check pause state | P1 |
| GET | `/api/optimization/studies/{id}/dashboard-state` | Dynamic optimizer state | P1 |
| POST | `/api/optimization/studies/{id}/generate-report` | Enhanced report gen | P3 |

### 7.3 File Changes Summary

| File | Action | Phase |
|------|--------|-------|
| `backend/api/routes/optimization.py` | Modify | P0, P1, P3 |
| `frontend/src/api/client.ts` | Modify | P0, P1 |
| `frontend/src/components/dashboard/ControlPanel.tsx` | Modify | P0, P1 |
| `frontend/src/components/dashboard/StatusBadge.tsx` | Create | P1 |
| `frontend/src/components/tracker/OptimizerStatePanel.tsx` | Rewrite | P1 |
| `frontend/src/components/charts/ParallelCoordinatesChart.tsx` | Create | P2 |
| `frontend/src/components/ConvergencePlot.tsx` | Modify | P2 |
| `frontend/src/components/plotly/PlotlyConvergencePlot.tsx` | Modify | P2 |
| `frontend/src/pages/Analysis.tsx` | Modify | P3 |
| `docs/protocols/operations/OP_08_GENERATE_REPORT.md` | Create | P3 |

### 7.4 Color Palette (Atomaste Theme)

```css
/* Primary */
--atomaste-cyan: #00d4e6;
--atomaste-cyan-light: #34d399;
--atomaste-cyan-dark: #0891b2;

/* Status */
--status-running: #22c55e;
--status-paused: #eab308;
--status-stopped: #6b7280;
--status-error: #ef4444;
--status-complete: #00d4e6;

/* Visualization */
--chart-pareto: #00d4e6;
--chart-good: #3b82f6;
--chart-neutral: #64748b;
--chart-bad: #475569;
--chart-infeasible: #ef4444;

/* Backgrounds */
--bg-primary: #0a0f1a;
--bg-card: #1e293b;
--bg-input: #334155;

/* Text */
--text-primary: #f1f5f9;
--text-secondary: #94a3b8;
--text-muted: #64748b;
```

---

## Approval

- [ ] **User Review**: Plan reviewed and approved
- [ ] **Technical Review**: Implementation approach validated
- [ ] **Resource Allocation**: Time allocated for each phase

---

**Ready to begin implementation upon approval.**
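One implementation detail worth agreeing on during review: the §6.1 unit-test row "ETA calculation | 20 trials with timestamps" and the `/process` endpoint in §7.2 both presuppose a trial-rate calculation that this plan never specifies. A minimal sketch of one reasonable approach - the function name, signature, and 20-trial window are assumptions, not existing dashboard code:

```python
from datetime import datetime, timedelta
from typing import List, Optional, Tuple

def estimate_eta(completion_times: List[datetime], n_total: int,
                 window: int = 20) -> Optional[Tuple[float, datetime]]:
    """Return (trials_per_hour, estimated_finish) from completed-trial timestamps.

    The rate is taken over the most recent `window` trials so it tracks phase
    changes (e.g. slow FEA trials vs fast NN trials) instead of the lifetime
    average.
    """
    if len(completion_times) < 2:
        return None  # not enough data for a rate
    recent = completion_times[-window:]
    seconds_per_trial = (recent[-1] - recent[0]).total_seconds() / (len(recent) - 1)
    if seconds_per_trial <= 0:
        return None
    rate_per_hour = 3600.0 / seconds_per_trial
    remaining = max(n_total - len(completion_times), 0)
    eta = recent[-1] + timedelta(seconds=seconds_per_trial * remaining)
    return rate_per_hour, eta
```

With trials completing every 60 seconds and 20 of 30 done, this yields 60 trials/hour and an ETA 10 minutes after the last completion, which is the behavior the "Rate per hour matches reality" checklist item should verify against.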