
Atomizer Dashboard Improvement Plan - Complete Implementation Guide

Created: 2026-01-06
Status: Planning - Awaiting Approval
Estimated Total Effort: 35-45 hours
Priority: High


Table of Contents

  1. Executive Summary
  2. Phase 0: Quick Wins (P0)
  3. Phase 1: Control Panel Overhaul (P1)
  4. Phase 2: Visualization Improvements (P2)
  5. Phase 3: Analysis & Reporting (P3)
  6. Testing & Validation
  7. Appendix: Technical Specifications

1. Executive Summary

Problems Identified

#   Issue                                          Severity   User Impact
1   Optuna dashboard button broken                 Critical   Cannot access Optuna visualization
2   ETA not displaying                             High       No idea when optimization will finish
3   No running/paused status indicator             High       Unclear whether optimization is active
4   No pause/stop controls                         High       Cannot control a running optimization
5   Parallel coords ugly (white, FEA/NN unclear)   Medium     Poor visualization experience
6   Convergence plot hides improvements            Medium     Can't see optimization progress clearly
7   Optimizer state static/outdated                Medium     Stale information about the current phase
8   Analysis tab components broken                 Medium     Multiple features non-functional
9   Report generation fails                        Medium     Cannot generate study reports

Implementation Phases

Phase 0 (P0) - Quick Wins         [~2 hours]   ████░░░░░░
Phase 1 (P1) - Controls           [~12 hours]  ████████░░
Phase 2 (P2) - Visualizations     [~15 hours]  ██████████
Phase 3 (P3) - Analysis/Reports   [~10 hours]  ██████░░░░

Success Criteria

  • All buttons functional (Optuna, Start, Stop, Pause, Resume)
  • ETA displays accurately with rate calculation
  • Clear visual status indicator (Running/Paused/Stopped/Completed)
  • Clean dark-themed parallel coordinates (no Plotly white)
  • Log-scale convergence plot option
  • Dynamic optimizer state from running script
  • All Analysis tabs working
  • Report generation produces valid markdown

2. Phase 0: Quick Wins (P0)

Estimated Time: 2 hours
Dependencies: None
Risk: Low

2.1 Fix Optuna Dashboard Button

Root Cause Analysis

Error: {"detail":"Failed to launch Optuna dashboard: [WinError 2] The system cannot find the file specified"}

The optuna-dashboard CLI package is not installed in the atomizer conda environment.

Step 1: Install optuna-dashboard

# Open terminal and run:
conda activate atomizer
pip install optuna-dashboard

# Verify installation
optuna-dashboard --help

Step 2: Verify backend can find it

# Test the command directly
optuna-dashboard sqlite:///C:/Users/antoi/Atomizer/studies/M1_Mirror/m1_mirror_cost_reduction_V13/3_results/study.db --port 8081

Step 3: Test via API

curl -X POST http://127.0.0.1:8003/api/optimization/studies/m1_mirror_cost_reduction_V13/optuna-dashboard

Solution B: Add Graceful Fallback (in addition to the install steps above)

File: atomizer-dashboard/frontend/src/components/dashboard/ControlPanel.tsx

Add availability check and helpful error message:

// Add state for optuna availability
const [optunaAvailable, setOptunaAvailable] = useState<boolean | null>(null);

// Check on mount
useEffect(() => {
  const checkOptuna = async () => {
    try {
      const response = await fetch('/api/optimization/optuna-status');
      const data = await response.json();
      setOptunaAvailable(data.available);
    } catch {
      setOptunaAvailable(false);
    }
  };
  checkOptuna();
}, []);

// In render, show install hint if not available
{optunaAvailable === false && (
  <div className="text-xs text-yellow-400 mt-1">
    Install: pip install optuna-dashboard
  </div>
)}

File: atomizer-dashboard/backend/api/routes/optimization.py

Add status check endpoint:

@router.get("/optuna-status")
async def check_optuna_status():
    """Check if optuna-dashboard CLI is available"""
    import shutil
    optuna_path = shutil.which("optuna-dashboard")
    return {
        "available": optuna_path is not None,
        "path": optuna_path,
        "install_command": "pip install optuna-dashboard"
    }

Testing Checklist

  • optuna-dashboard --help works in terminal
  • Button click launches dashboard
  • Dashboard opens in new browser tab
  • Dashboard shows study data correctly

2.2 Fix ETA Calculation and Display

Root Cause Analysis

The backend /api/optimization/studies/{id}/process endpoint doesn't return timing information needed for ETA calculation.

Solution: Add Timing Tracking

Step 1: Modify Backend Process Status

File: atomizer-dashboard/backend/api/routes/optimization.py

Find the get_process_status function and enhance it:

# Requires at module top: import json, sqlite3; from datetime import datetime
@router.get("/studies/{study_id}/process")
async def get_process_status(study_id: str):
    """Get the process status for a study's optimization run with ETA"""
    try:
        study_dir = resolve_study_path(study_id)
        results_dir = get_results_dir(study_dir)

        # Get process running status (existing code)
        is_running = is_optimization_running(study_id)

        # Load config for total trials
        config_file = study_dir / "optimization_config.json"
        total_trials = 100  # default
        if config_file.exists():
            config = json.loads(config_file.read_text())
            total_trials = config.get("optimization_settings", {}).get("n_trials", 100)

        # Query database for trial timing
        db_path = results_dir / "study.db"
        completed_trials = 0
        avg_time_per_trial = None
        eta_seconds = None
        eta_formatted = None
        rate_per_hour = None
        recent_trials = []

        if db_path.exists():
            conn = sqlite3.connect(str(db_path))
            cursor = conn.cursor()

            # Get completed trial count
            cursor.execute("""
                SELECT COUNT(*) FROM trials
                WHERE state = 'COMPLETE'
            """)
            completed_trials = cursor.fetchone()[0]

            # Get recent trial timestamps for ETA calculation
            cursor.execute("""
                SELECT datetime_start, datetime_complete
                FROM trials
                WHERE state = 'COMPLETE'
                  AND datetime_start IS NOT NULL
                  AND datetime_complete IS NOT NULL
                ORDER BY trial_id DESC
                LIMIT 20
            """)
            rows = cursor.fetchall()

            if len(rows) >= 2:
                # Calculate average time per trial from recent trials
                durations = []
                for start, end in rows:
                    try:
                        start_dt = datetime.fromisoformat(start.replace('Z', '+00:00'))
                        end_dt = datetime.fromisoformat(end.replace('Z', '+00:00'))
                        duration = (end_dt - start_dt).total_seconds()
                        if 0 < duration < 7200:  # Sanity check: 0-2 hours
                            durations.append(duration)
                    except (ValueError, AttributeError):
                        continue

                if durations:
                    avg_time_per_trial = sum(durations) / len(durations)
                    remaining_trials = max(0, total_trials - completed_trials)
                    eta_seconds = avg_time_per_trial * remaining_trials

                    # Format ETA
                    if eta_seconds < 60:
                        eta_formatted = f"{int(eta_seconds)}s"
                    elif eta_seconds < 3600:
                        eta_formatted = f"{int(eta_seconds // 60)}m"
                    else:
                        hours = int(eta_seconds // 3600)
                        minutes = int((eta_seconds % 3600) // 60)
                        eta_formatted = f"{hours}h {minutes}m"

                    # Calculate rate
                    rate_per_hour = 3600 / avg_time_per_trial if avg_time_per_trial > 0 else 0

            conn.close()

        return {
            "study_id": study_id,
            "is_running": is_running,
            "completed_trials": completed_trials,
            "total_trials": total_trials,
            "progress_percent": round((completed_trials / total_trials) * 100, 1) if total_trials > 0 else 0,
            "avg_time_per_trial_seconds": round(avg_time_per_trial, 1) if avg_time_per_trial else None,
            "eta_seconds": round(eta_seconds) if eta_seconds else None,
            "eta_formatted": eta_formatted,
            "rate_per_hour": round(rate_per_hour, 2) if rate_per_hour else None
        }

    except HTTPException:
        raise
    except Exception as e:
        return {
            "study_id": study_id,
            "is_running": False,
            "error": str(e)
        }

Step 2: Update Frontend Display

File: atomizer-dashboard/frontend/src/components/dashboard/ControlPanel.tsx

Add ETA display in the horizontal layout:

{/* ETA and Rate Display */}
{processStatus && processStatus.is_running && (
  <div className="flex items-center gap-4 text-sm border-l border-dark-600 pl-4">
    {processStatus.eta_formatted && (
      <div className="flex items-center gap-1">
        <Clock className="w-4 h-4 text-dark-400" />
        <span className="text-dark-400">ETA:</span>
        <span className="text-white font-mono">{processStatus.eta_formatted}</span>
      </div>
    )}
    {processStatus.rate_per_hour != null && (
      <div className="flex items-center gap-1">
        <TrendingUp className="w-4 h-4 text-dark-400" />
        <span className="text-dark-400">Rate:</span>
        <span className="text-white font-mono">{processStatus.rate_per_hour}/hr</span>
      </div>
    )}
  </div>
)}

Add imports at top:

import { Clock, TrendingUp } from 'lucide-react';

Step 3: Update API Client Types

File: atomizer-dashboard/frontend/src/api/client.ts

Update ProcessStatus interface:

export interface ProcessStatus {
  study_id: string;
  is_running: boolean;
  completed_trials?: number;
  total_trials?: number;
  progress_percent?: number;
  avg_time_per_trial_seconds?: number;
  eta_seconds?: number;
  eta_formatted?: string;
  rate_per_hour?: number;
  pid?: number;
  start_time?: string;
  iteration?: number;
  fea_count?: number;
  nn_count?: number;
}

Testing Checklist

  • API returns eta_formatted when optimization running
  • ETA updates as trials complete
  • Rate per hour is reasonable (matches actual)
  • Display shows "--" when no data available
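The ETA logic above (average recent durations, project over remaining trials, format as s/m/h) can be factored into small pure helpers so it is unit-testable apart from the endpoint. A minimal sketch; `format_eta` and `estimate_eta` are illustrative names, not existing functions in the codebase:

```python
def format_eta(eta_seconds: float) -> str:
    """Format a remaining-time estimate as seconds, minutes, or 'Xh Ym'."""
    if eta_seconds < 60:
        return f"{int(eta_seconds)}s"
    if eta_seconds < 3600:
        return f"{int(eta_seconds // 60)}m"
    hours = int(eta_seconds // 3600)
    minutes = int((eta_seconds % 3600) // 60)
    return f"{hours}h {minutes}m"


def estimate_eta(durations: list, completed: int, total: int):
    """Project average recent trial duration over the remaining trials.

    Returns None when there is no timing data yet.
    """
    if not durations:
        return None
    avg = sum(durations) / len(durations)
    return avg * max(0, total - completed)
```

For example, `estimate_eta([100.0, 120.0], 47, 100)` yields 5830.0 seconds, which `format_eta` renders as "1h 37m".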

3. Phase 1: Control Panel Overhaul (P1)

Estimated Time: 12 hours
Dependencies: Phase 0
Risk: Medium

3.1 Status Indicator Component

Design Specification

┌─────────────────────────────────────────────────────────────────────────┐
│  ● RUNNING   Trial 47/100 (47%)   │   ETA: 2h 15m   │   Rate: 3.2/hr   │
├─────────────────────────────────────────────────────────────────────────┤
│   [■ STOP]   [⏸ PAUSE]   [⚙ Settings]   [📊 Optuna]   [📋 Report]      │
└─────────────────────────────────────────────────────────────────────────┘

Status States:
  ● RUNNING  - Green pulsing dot, green text
  ⏸ PAUSED   - Yellow dot, yellow text
  ○ STOPPED  - Gray dot, gray text
  ✓ COMPLETE - Cyan dot, cyan text

Implementation

File: atomizer-dashboard/frontend/src/components/dashboard/StatusBadge.tsx (NEW)

import React from 'react';
import { CheckCircle, PauseCircle, StopCircle, PlayCircle } from 'lucide-react';

export type OptimizationStatus = 'running' | 'paused' | 'stopped' | 'completed' | 'error';

interface StatusBadgeProps {
  status: OptimizationStatus;
  showLabel?: boolean;
  size?: 'sm' | 'md' | 'lg';
}

const statusConfig = {
  running: {
    color: 'green',
    label: 'Running',
    icon: PlayCircle,
    dotClass: 'bg-green-500 animate-pulse',
    textClass: 'text-green-400',
    bgClass: 'bg-green-500/10 border-green-500/30',
  },
  paused: {
    color: 'yellow',
    label: 'Paused',
    icon: PauseCircle,
    dotClass: 'bg-yellow-500',
    textClass: 'text-yellow-400',
    bgClass: 'bg-yellow-500/10 border-yellow-500/30',
  },
  stopped: {
    color: 'gray',
    label: 'Stopped',
    icon: StopCircle,
    dotClass: 'bg-dark-500',
    textClass: 'text-dark-400',
    bgClass: 'bg-dark-700 border-dark-600',
  },
  completed: {
    color: 'cyan',
    label: 'Completed',
    icon: CheckCircle,
    dotClass: 'bg-primary-500',
    textClass: 'text-primary-400',
    bgClass: 'bg-primary-500/10 border-primary-500/30',
  },
  error: {
    color: 'red',
    label: 'Error',
    icon: StopCircle,
    dotClass: 'bg-red-500',
    textClass: 'text-red-400',
    bgClass: 'bg-red-500/10 border-red-500/30',
  },
};

const sizeConfig = {
  sm: { dot: 'w-2 h-2', text: 'text-xs', padding: 'px-2 py-0.5', icon: 'w-3 h-3' },
  md: { dot: 'w-3 h-3', text: 'text-sm', padding: 'px-3 py-1', icon: 'w-4 h-4' },
  lg: { dot: 'w-4 h-4', text: 'text-base', padding: 'px-4 py-2', icon: 'w-5 h-5' },
};

export function StatusBadge({ status, showLabel = true, size = 'md' }: StatusBadgeProps) {
  const config = statusConfig[status];
  const sizes = sizeConfig[size];

  return (
    <div
      className={`inline-flex items-center gap-2 rounded-full border ${config.bgClass} ${sizes.padding}`}
    >
      <div className={`rounded-full ${config.dotClass} ${sizes.dot}`} />
      {showLabel && (
        <span className={`font-semibold uppercase tracking-wide ${config.textClass} ${sizes.text}`}>
          {config.label}
        </span>
      )}
    </div>
  );
}

export function getStatusFromProcess(
  isRunning: boolean,
  isPaused: boolean,
  isCompleted: boolean,
  hasError: boolean
): OptimizationStatus {
  if (hasError) return 'error';
  if (isCompleted) return 'completed';
  if (isPaused) return 'paused';
  if (isRunning) return 'running';
  return 'stopped';
}

3.2 Pause/Resume Functionality

Backend Implementation

File: atomizer-dashboard/backend/api/routes/optimization.py

Add new endpoints:

import psutil
from typing import Dict

# Track paused state (in-memory; could also be persisted to a file)
_paused_studies: Dict[str, bool] = {}

@router.post("/studies/{study_id}/pause")
async def pause_optimization(study_id: str):
    """
    Pause the running optimization process.
    Uses SIGSTOP on Unix or process.suspend() on Windows.
    """
    try:
        study_dir = resolve_study_path(study_id)

        # Find the optimization process
        proc = find_optimization_process(study_id, study_dir)
        if not proc:
            raise HTTPException(status_code=404, detail="No running optimization found")

        # Suspend the process and all children
        process = psutil.Process(proc.pid)
        children = process.children(recursive=True)

        # Suspend children first, then parent
        for child in children:
            try:
                child.suspend()
            except psutil.NoSuchProcess:
                pass

        process.suspend()
        _paused_studies[study_id] = True

        return {
            "success": True,
            "message": f"Optimization paused (PID: {proc.pid})",
            "pid": proc.pid
        }

    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to pause: {str(e)}")


@router.post("/studies/{study_id}/resume")
async def resume_optimization(study_id: str):
    """
    Resume a paused optimization process.
    Uses SIGCONT on Unix or process.resume() on Windows.
    """
    try:
        study_dir = resolve_study_path(study_id)

        # Find the optimization process
        proc = find_optimization_process(study_id, study_dir)
        if not proc:
            raise HTTPException(status_code=404, detail="No paused optimization found")

        # Resume the process and all children
        process = psutil.Process(proc.pid)
        children = process.children(recursive=True)

        # Resume parent first, then children
        process.resume()
        for child in children:
            try:
                child.resume()
            except psutil.NoSuchProcess:
                pass

        _paused_studies[study_id] = False

        return {
            "success": True,
            "message": f"Optimization resumed (PID: {proc.pid})",
            "pid": proc.pid
        }

    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to resume: {str(e)}")


@router.get("/studies/{study_id}/is-paused")
async def is_optimization_paused(study_id: str):
    """Check if optimization is currently paused"""
    return {
        "study_id": study_id,
        "is_paused": _paused_studies.get(study_id, False)
    }


def find_optimization_process(study_id: str, study_dir: Path):
    """Find the running optimization process for a study"""
    for proc in psutil.process_iter(['pid', 'name', 'cmdline', 'cwd']):
        try:
            cmdline = proc.info.get('cmdline') or []
            cmdline_str = ' '.join(cmdline) if cmdline else ''

            if 'python' in cmdline_str.lower() and 'run_optimization' in cmdline_str:
                if study_id in cmdline_str or str(study_dir) in cmdline_str:
                    return proc
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return None
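The command-line matching rule inside `find_optimization_process` can be pulled out as a pure function so it can be tested without live processes. A sketch under that assumption; `matches_study` is a hypothetical helper name:

```python
def matches_study(cmdline: list, study_id: str, study_dir: str) -> bool:
    """True if a process command line looks like this study's optimizer run.

    Mirrors the heuristic in find_optimization_process: a Python process
    running run_optimization that mentions either the study ID or its path.
    """
    cmdline_str = ' '.join(cmdline)
    if 'python' not in cmdline_str.lower():
        return False
    if 'run_optimization' not in cmdline_str:
        return False
    return study_id in cmdline_str or study_dir in cmdline_str
```

The endpoint code would then call this with `proc.info.get('cmdline') or []` for each candidate process, keeping the psutil iteration and the matching logic separate.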

Frontend Implementation

File: atomizer-dashboard/frontend/src/api/client.ts

Add new methods:

async pauseOptimization(studyId: string): Promise<{ success: boolean; message: string }> {
  const response = await fetch(`${API_BASE}/optimization/studies/${studyId}/pause`, {
    method: 'POST',
  });
  if (!response.ok) {
    const error = await response.json();
    throw new Error(error.detail || 'Failed to pause optimization');
  }
  return response.json();
}

async resumeOptimization(studyId: string): Promise<{ success: boolean; message: string }> {
  const response = await fetch(`${API_BASE}/optimization/studies/${studyId}/resume`, {
    method: 'POST',
  });
  if (!response.ok) {
    const error = await response.json();
    throw new Error(error.detail || 'Failed to resume optimization');
  }
  return response.json();
}

async isOptimizationPaused(studyId: string): Promise<{ is_paused: boolean }> {
  const response = await fetch(`${API_BASE}/optimization/studies/${studyId}/is-paused`);
  if (!response.ok) {
    throw new Error('Failed to check pause status');
  }
  return response.json();
}

File: atomizer-dashboard/frontend/src/components/dashboard/ControlPanel.tsx

Update to include pause/resume buttons:

// Add state
const [isPaused, setIsPaused] = useState(false);

// Add handlers
const handlePause = async () => {
  if (!selectedStudy) return;
  setActionInProgress('pause');
  setError(null);

  try {
    await apiClient.pauseOptimization(selectedStudy.id);
    setIsPaused(true);
    await fetchProcessStatus();
  } catch (err: any) {
    setError(err.message || 'Failed to pause optimization');
  } finally {
    setActionInProgress(null);
  }
};

const handleResume = async () => {
  if (!selectedStudy) return;
  setActionInProgress('resume');
  setError(null);

  try {
    await apiClient.resumeOptimization(selectedStudy.id);
    setIsPaused(false);
    await fetchProcessStatus();
  } catch (err: any) {
    setError(err.message || 'Failed to resume optimization');
  } finally {
    setActionInProgress(null);
  }
};

// Update button rendering
{isRunning && !isPaused && (
  <button
    onClick={handlePause}
    disabled={actionInProgress !== null}
    className="flex items-center gap-2 px-3 py-1.5 bg-yellow-600 hover:bg-yellow-500
               disabled:opacity-50 text-white rounded-lg text-sm font-medium"
  >
    {actionInProgress === 'pause' ? (
      <Loader2 className="w-4 h-4 animate-spin" />
    ) : (
      <Pause className="w-4 h-4" />
    )}
    Pause
  </button>
)}

{isPaused && (
  <button
    onClick={handleResume}
    disabled={actionInProgress !== null}
    className="flex items-center gap-2 px-3 py-1.5 bg-green-600 hover:bg-green-500
               disabled:opacity-50 text-white rounded-lg text-sm font-medium"
  >
    {actionInProgress === 'resume' ? (
      <Loader2 className="w-4 h-4 animate-spin" />
    ) : (
      <Play className="w-4 h-4" />
    )}
    Resume
  </button>
)}

3.3 Dynamic Optimizer State

Design: Dashboard State File

The optimization script writes a JSON file that the dashboard reads:

File structure: {study_dir}/3_results/dashboard_state.json

{
  "updated_at": "2026-01-06T12:34:56.789Z",
  "phase": {
    "name": "exploitation",
    "display_name": "Exploitation",
    "description": "Focusing optimization on promising regions of the design space",
    "progress": 0.65,
    "started_at": "2026-01-06T11:00:00Z"
  },
  "sampler": {
    "name": "TPESampler",
    "display_name": "TPE (Bayesian)",
    "description": "Tree-structured Parzen Estimator uses Bayesian optimization with kernel density estimation to model promising and unpromising regions",
    "parameters": {
      "n_startup_trials": 20,
      "multivariate": true,
      "constant_liar": true
    }
  },
  "objectives": [
    {
      "name": "weighted_sum",
      "display_name": "Weighted Sum",
      "direction": "minimize",
      "weight": 1.0,
      "current_best": 175.87,
      "initial_value": 450.23,
      "improvement_percent": 60.9,
      "unit": ""
    },
    {
      "name": "wfe_40_20",
      "display_name": "WFE @ 40-20C",
      "direction": "minimize",
      "weight": 10.0,
      "current_best": 5.63,
      "initial_value": 15.2,
      "improvement_percent": 63.0,
      "unit": "nm RMS"
    },
    {
      "name": "mass_kg",
      "display_name": "Mass",
      "direction": "minimize",
      "weight": 1.0,
      "current_best": 118.67,
      "initial_value": 145.0,
      "improvement_percent": 18.2,
      "unit": "kg"
    }
  ],
  "plan": {
    "total_phases": 4,
    "current_phase_index": 1,
    "phases": [
      {"name": "exploration", "display_name": "Exploration", "trials": "1-20"},
      {"name": "exploitation", "display_name": "Exploitation", "trials": "21-60"},
      {"name": "refinement", "display_name": "Refinement", "trials": "61-90"},
      {"name": "convergence", "display_name": "Convergence", "trials": "91-100"}
    ]
  },
  "strategy": {
    "current": "Focused sampling around best solution",
    "next_action": "Evaluating 5 candidates near Pareto front"
  },
  "performance": {
    "fea_count": 45,
    "nn_count": 0,
    "avg_fea_time_seconds": 127.5,
    "total_runtime_seconds": 5737
  }
}
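On the writer side, the optimization script should update this file atomically so the dashboard never reads a half-written JSON document. A minimal sketch of how that could look; `write_dashboard_state` is a hypothetical helper, not existing code:

```python
import json
import os
import tempfile
from datetime import datetime, timezone
from pathlib import Path


def write_dashboard_state(results_dir: Path, state: dict) -> Path:
    """Atomically write dashboard_state.json with a fresh updated_at stamp."""
    stamp = datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")
    state = {**state, "updated_at": stamp}
    target = results_dir / "dashboard_state.json"
    # Write to a temp file in the same directory, then rename into place:
    # os.replace is atomic on the same filesystem, so readers always see
    # either the old file or the new one, never a partial write.
    fd, tmp = tempfile.mkstemp(dir=results_dir, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f, indent=2)
    os.replace(tmp, target)
    return target
```

Calling this every few trials (or at phase transitions) keeps the `_is_stale` check in the backend endpoint meaningful without flooding the disk.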

Backend Endpoint

File: atomizer-dashboard/backend/api/routes/optimization.py

@router.get("/studies/{study_id}/dashboard-state")
async def get_dashboard_state(study_id: str):
    """
    Get dynamic optimizer state from dashboard_state.json or compute from DB.
    The optimization script writes this file periodically.
    """
    try:
        study_dir = resolve_study_path(study_id)
        results_dir = get_results_dir(study_dir)

        # Try to read dynamic state file first
        state_file = results_dir / "dashboard_state.json"
        if state_file.exists():
            state = json.loads(state_file.read_text())
            # Add freshness indicator
            try:
                updated = datetime.fromisoformat(state.get("updated_at", "").replace('Z', '+00:00'))
                age_seconds = (datetime.now(updated.tzinfo) - updated).total_seconds()
                state["_age_seconds"] = age_seconds
                state["_is_stale"] = age_seconds > 60  # Stale if >1 minute old
            except (ValueError, AttributeError):
                state["_is_stale"] = True
            return state

        # Fallback: compute state from database
        return await compute_state_from_db(study_id, study_dir, results_dir)

    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to get dashboard state: {str(e)}")


async def compute_state_from_db(study_id: str, study_dir: Path, results_dir: Path):
    """Compute optimizer state from database when no state file exists"""

    config_file = study_dir / "optimization_config.json"
    config = {}
    if config_file.exists():
        config = json.loads(config_file.read_text())

    db_path = results_dir / "study.db"
    completed = 0
    total = config.get("optimization_settings", {}).get("n_trials", 100)

    if db_path.exists():
        conn = sqlite3.connect(str(db_path))
        cursor = conn.cursor()
        cursor.execute("SELECT COUNT(*) FROM trials WHERE state = 'COMPLETE'")
        completed = cursor.fetchone()[0]
        conn.close()

    # Determine phase based on progress
    progress = completed / total if total > 0 else 0
    if progress < 0.2:
        phase_name, phase_desc = "exploration", "Initial sampling to understand design space"
    elif progress < 0.6:
        phase_name, phase_desc = "exploitation", "Focusing on promising regions"
    elif progress < 0.9:
        phase_name, phase_desc = "refinement", "Fine-tuning best solutions"
    else:
        phase_name, phase_desc = "convergence", "Final convergence check"

    # Get sampler info
    sampler = config.get("optimization_settings", {}).get("sampler", "TPESampler")
    sampler_descriptions = {
        "TPESampler": "Tree-structured Parzen Estimator - Bayesian optimization",
        "NSGAIISampler": "Non-dominated Sorting Genetic Algorithm II - Multi-objective evolutionary",
        "CmaEsSampler": "Covariance Matrix Adaptation Evolution Strategy - Continuous optimization",
        "RandomSampler": "Random sampling - Baseline/exploration"
    }

    # Get objectives
    objectives = []
    for obj in config.get("objectives", []):
        objectives.append({
            "name": obj.get("name", "objective"),
            "display_name": obj.get("name", "Objective").replace("_", " ").title(),
            "direction": obj.get("direction", "minimize"),
            "weight": obj.get("weight", 1.0),
            "unit": obj.get("unit", "")
        })

    return {
        "updated_at": datetime.now().isoformat(),
        "_is_computed": True,
        "phase": {
            "name": phase_name,
            "display_name": phase_name.title(),
            "description": phase_desc,
            "progress": progress
        },
        "sampler": {
            "name": sampler,
            "display_name": sampler.replace("Sampler", ""),
            "description": sampler_descriptions.get(sampler, "Optimization sampler")
        },
        "objectives": objectives,
        "plan": {
            "total_phases": 4,
            "current_phase_index": ["exploration", "exploitation", "refinement", "convergence"].index(phase_name),
            "phases": [
                {"name": "exploration", "display_name": "Exploration"},
                {"name": "exploitation", "display_name": "Exploitation"},
                {"name": "refinement", "display_name": "Refinement"},
                {"name": "convergence", "display_name": "Convergence"}
            ]
        }
    }
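The progress-to-phase thresholds used in `compute_state_from_db` can also live in a small pure function, which makes the phase boundaries easy to test and adjust. A sketch; `phase_for_progress` is an illustrative name:

```python
def phase_for_progress(progress: float) -> str:
    """Map fractional trial progress to the fallback phase name.

    Thresholds match compute_state_from_db: exploration below 20%,
    exploitation below 60%, refinement below 90%, then convergence.
    """
    if progress < 0.2:
        return "exploration"
    if progress < 0.6:
        return "exploitation"
    if progress < 0.9:
        return "refinement"
    return "convergence"
```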

Frontend Component Update

File: atomizer-dashboard/frontend/src/components/tracker/OptimizerStatePanel.tsx

Complete rewrite for dynamic state:

import React, { useEffect, useState } from 'react';
import { Target, Layers, TrendingUp, Brain, Database, RefreshCw } from 'lucide-react';
import { apiClient } from '../../api/client';

interface DashboardState {
  updated_at: string;
  _is_stale?: boolean;
  _is_computed?: boolean;
  phase: {
    name: string;
    display_name: string;
    description: string;
    progress: number;
  };
  sampler: {
    name: string;
    display_name: string;
    description: string;
    parameters?: Record<string, any>;
  };
  objectives: Array<{
    name: string;
    display_name: string;
    direction: string;
    weight?: number;
    current_best?: number;
    initial_value?: number;
    improvement_percent?: number;
    unit?: string;
  }>;
  plan?: {
    total_phases: number;
    current_phase_index: number;
    phases: Array<{ name: string; display_name: string }>;
  };
  strategy?: {
    current: string;
    next_action?: string;
  };
  performance?: {
    fea_count: number;
    nn_count: number;
    avg_fea_time_seconds?: number;
  };
}

interface OptimizerStatePanelProps {
  studyId: string | null;
  refreshInterval?: number;
}

export function OptimizerStatePanel({ studyId, refreshInterval = 5000 }: OptimizerStatePanelProps) {
  const [state, setState] = useState<DashboardState | null>(null);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState<string | null>(null);

  const fetchState = async () => {
    if (!studyId) return;

    try {
      const response = await fetch(`/api/optimization/studies/${studyId}/dashboard-state`);
      if (!response.ok) throw new Error('Failed to fetch state');
      const data = await response.json();
      setState(data);
      setError(null);
    } catch (err: any) {
      setError(err.message);
    } finally {
      setLoading(false);
    }
  };

  useEffect(() => {
    fetchState();
    const interval = setInterval(fetchState, refreshInterval);
    return () => clearInterval(interval);
  }, [studyId, refreshInterval]);

  if (!studyId) {
    return (
      <div className="p-4 text-center text-dark-400">
        Select a study to view optimizer state
      </div>
    );
  }

  if (loading && !state) {
    return (
      <div className="p-4 flex items-center justify-center">
        <RefreshCw className="w-5 h-5 animate-spin text-primary-400" />
      </div>
    );
  }

  if (error || !state) {
    return (
      <div className="p-4 text-center text-red-400 text-sm">
        {error || 'Failed to load state'}
      </div>
    );
  }

  const phaseColors: Record<string, string> = {
    exploration: 'text-blue-400 bg-blue-500/10 border-blue-500/30',
    exploitation: 'text-yellow-400 bg-yellow-500/10 border-yellow-500/30',
    refinement: 'text-purple-400 bg-purple-500/10 border-purple-500/30',
    convergence: 'text-green-400 bg-green-500/10 border-green-500/30'
  };

  return (
    <div className="space-y-4">
      {/* Stale indicator */}
      {state._is_stale && (
        <div className="text-xs text-yellow-400 flex items-center gap-1">
          <RefreshCw className="w-3 h-3" />
          State may be outdated
        </div>
      )}

      {/* Sampler Card */}
      <div className="p-3 bg-dark-750 rounded-lg border border-dark-600">
        <div className="flex items-center gap-2 mb-2">
          <Target className="w-4 h-4 text-primary-400" />
          <span className="font-medium text-white">{state.sampler.display_name}</span>
        </div>
        <p className="text-xs text-dark-400 leading-relaxed">
          {state.sampler.description}
        </p>
        {state.sampler.parameters && (
          <div className="mt-2 flex flex-wrap gap-2">
            {Object.entries(state.sampler.parameters).slice(0, 3).map(([key, val]) => (
              <span key={key} className="text-xs px-2 py-0.5 bg-dark-600 rounded text-dark-300">
                {key}: {String(val)}
              </span>
            ))}
          </div>
        )}
      </div>

      {/* Phase Progress */}
      <div className="p-3 bg-dark-750 rounded-lg border border-dark-600">
        <div className="flex justify-between items-center mb-2">
          <span className="text-dark-400 text-sm">Phase</span>
          <span className={`px-2 py-0.5 rounded text-xs font-medium border ${phaseColors[state.phase.name] || 'text-dark-300'}`}>
            {state.phase.display_name}
          </span>
        </div>

        {/* Phase progress bar */}
        <div className="h-2 bg-dark-600 rounded-full overflow-hidden mb-2">
          <div
            className="h-full bg-gradient-to-r from-primary-600 to-primary-400 transition-all duration-500"
            style={{ width: `${state.phase.progress * 100}%` }}
          />
        </div>

        <p className="text-xs text-dark-400">{state.phase.description}</p>

        {/* Phase timeline */}
        {state.plan && (
          <div className="mt-3 flex gap-1">
            {state.plan.phases.map((phase, idx) => (
              <div
                key={phase.name}
                className={`flex-1 h-1 rounded ${
                  idx < state.plan!.current_phase_index
                    ? 'bg-primary-500'
                    : idx === state.plan!.current_phase_index
                    ? 'bg-primary-400 animate-pulse'
                    : 'bg-dark-600'
                }`}
                title={phase.display_name}
              />
            ))}
          </div>
        )}
      </div>

      {/* Objectives */}
      <div className="p-3 bg-dark-750 rounded-lg border border-dark-600">
        <div className="flex items-center gap-2 mb-3">
          <Layers className="w-4 h-4 text-primary-400" />
          <span className="font-medium text-white">Objectives</span>
          {state.objectives.length > 1 && (
            <span className="text-xs px-1.5 py-0.5 bg-purple-500/20 text-purple-400 rounded">
              Multi-Objective
            </span>
          )}
        </div>

        <div className="space-y-2">
          {state.objectives.map((obj) => (
            <div key={obj.name} className="flex items-center justify-between">
              <div className="flex items-center gap-2">
                <span className={`text-xs px-1.5 py-0.5 rounded ${
                  obj.direction === 'minimize' ? 'bg-blue-500/20 text-blue-400' : 'bg-green-500/20 text-green-400'
                }`}>
                  {obj.direction === 'minimize' ? '↓' : '↑'}
                </span>
                <span className="text-dark-300 text-sm">{obj.display_name}</span>
              </div>
              <div className="text-right">
                {obj.current_best !== undefined && (
                  <span className="font-mono text-primary-400">
                    {obj.current_best.toFixed(2)}
                    {obj.unit && <span className="text-dark-500 ml-1">{obj.unit}</span>}
                  </span>
                )}
                {obj.improvement_percent !== undefined && (
                  <span className="text-xs text-green-400 ml-2">
                    {obj.improvement_percent.toFixed(1)}%
                  </span>
                )}
              </div>
            </div>
          ))}
        </div>
      </div>

      {/* Performance Stats */}
      {state.performance && (
        <div className="grid grid-cols-2 gap-2">
          <div className="p-2 bg-dark-750 rounded-lg border border-dark-600 text-center">
            <Database className="w-4 h-4 text-blue-400 mx-auto mb-1" />
            <div className="text-lg font-mono text-white">{state.performance.fea_count}</div>
            <div className="text-xs text-dark-400">FEA Runs</div>
          </div>
          {state.performance.nn_count > 0 && (
            <div className="p-2 bg-dark-750 rounded-lg border border-dark-600 text-center">
              <Brain className="w-4 h-4 text-purple-400 mx-auto mb-1" />
              <div className="text-lg font-mono text-white">{state.performance.nn_count}</div>
              <div className="text-xs text-dark-400">NN Predictions</div>
            </div>
          )}
        </div>
      )}

      {/* Current Strategy */}
      {state.strategy && (
        <div className="p-3 bg-dark-700 rounded-lg border border-dark-600">
          <div className="text-xs text-dark-400 mb-1">Current Strategy</div>
          <div className="text-sm text-dark-200">{state.strategy.current}</div>
          {state.strategy.next_action && (
            <div className="text-xs text-primary-400 mt-1">
              Next: {state.strategy.next_action}
            </div>
          )}
        </div>
      )}
    </div>
  );
}

## 4. Phase 2: Visualization Improvements (P2)

**Estimated Time:** 15 hours
**Dependencies:** Phase 0, Phase 1 (for integration)
**Risk:** Medium

### 4.1 Parallel Coordinates Overhaul

#### Decision: Nivo vs Custom D3

| Criteria | Nivo | Custom D3 |
|----------|------|-----------|
| Development time | 4 hours | 12 hours |
| Dark theme | Built-in | Manual |
| Brushing | Limited | Full control |
| Bundle size | +25kb | +0kb (D3 already used) |
| Maintenance | Low | Medium |

**Recommendation:** Start with Nivo, add a custom brushing layer if needed.

#### Implementation Plan

**Step 1: Install Nivo**

```bash
cd atomizer-dashboard/frontend
npm install @nivo/parallel-coordinates @nivo/core
```

**Step 2: Create New Component**

**File:** `atomizer-dashboard/frontend/src/components/charts/ParallelCoordinatesChart.tsx`

```tsx
import React, { useMemo, useState } from 'react';
import { ResponsiveParallelCoordinates } from '@nivo/parallel-coordinates';

interface Trial {
  trial_number: number;
  values: number[];
  params: Record<string, number>;
  constraint_satisfied?: boolean;
  source?: string;
}

interface ParallelCoordinatesChartProps {
  trials: Trial[];
  paramNames: string[];
  objectiveNames: string[];
  paretoTrials?: Set<number>;
  height?: number;
  maxTrials?: number;
}

// Atomaste-aligned dark theme
const darkTheme = {
  background: 'transparent',
  textColor: '#94a3b8',
  fontSize: 11,
  axis: {
    domain: {
      line: {
        stroke: '#334155',
        strokeWidth: 2,
      },
    },
    ticks: {
      line: {
        stroke: '#1e293b',
        strokeWidth: 1,
      },
      text: {
        fill: '#64748b',
        fontSize: 10,
      },
    },
    legend: {
      text: {
        fill: '#94a3b8',
        fontSize: 11,
        fontWeight: 500,
      },
    },
  },
};

export function ParallelCoordinatesChart({
  trials,
  paramNames,
  objectiveNames,
  paretoTrials = new Set(),
  height = 400,
  maxTrials = 300
}: ParallelCoordinatesChartProps) {
  const [hoveredTrial, setHoveredTrial] = useState<number | null>(null);

  // Process and limit trials
  const processedData = useMemo(() => {
    // Filter feasible trials and limit count
    const feasible = trials
      .filter(t => t.constraint_satisfied !== false)
      .slice(-maxTrials);

    // Build data array for Nivo
    return feasible.map(trial => {
      const dataPoint: Record<string, any> = {
        trial_number: trial.trial_number,
        _isPareto: paretoTrials.has(trial.trial_number),
      };

      // Add parameters
      paramNames.forEach(name => {
        dataPoint[name] = trial.params[name] ?? 0;
      });

      // Add objectives
      objectiveNames.forEach((name, idx) => {
        dataPoint[name] = trial.values[idx] ?? 0;
      });

      return dataPoint;
    });
  }, [trials, paramNames, objectiveNames, paretoTrials, maxTrials]);

  // Build variables (axes) configuration
  const variables = useMemo(() => {
    const vars: any[] = [];

    // Parameters first
    paramNames.forEach(name => {
      const values = processedData.map(d => d[name]).filter(v => v !== undefined);
      vars.push({
        key: name,
        type: 'linear' as const,
        min: Math.min(...values) * 0.95,
        max: Math.max(...values) * 1.05,
        legend: name.replace(/_/g, ' '),
        legendPosition: 'start' as const,
        legendOffset: -15,
      });
    });

    // Then objectives
    objectiveNames.forEach(name => {
      const values = processedData.map(d => d[name]).filter(v => v !== undefined);
      vars.push({
        key: name,
        type: 'linear' as const,
        min: Math.min(...values) * 0.95,
        max: Math.max(...values) * 1.05,
        legend: name.replace(/_/g, ' '),
        legendPosition: 'start' as const,
        legendOffset: -15,
      });
    });

    return vars;
  }, [processedData, paramNames, objectiveNames]);

  // Color function - Pareto in cyan, rest in blue gradient
  const getColor = (datum: any) => {
    if (datum._isPareto) return '#00d4e6'; // Atomaste cyan

    // Gradient based on first objective (assuming minimize)
    const objValues = processedData.map(d => d[objectiveNames[0]]);
    const minObj = Math.min(...objValues);
    const maxObj = Math.max(...objValues);
    const normalized = (datum[objectiveNames[0]] - minObj) / (maxObj - minObj);

    // Blue gradient: good (dark blue) to bad (light gray)
    if (normalized < 0.25) return '#3b82f6'; // Blue - top 25%
    if (normalized < 0.5) return '#60a5fa';  // Light blue
    if (normalized < 0.75) return '#94a3b8'; // Gray
    return '#475569'; // Dark gray - worst
  };

  if (processedData.length === 0) {
    return (
      <div className="flex items-center justify-center h-64 text-dark-400">
        No feasible trials to display
      </div>
    );
  }

  return (
    <div style={{ height }}>
      <ResponsiveParallelCoordinates
        data={processedData}
        variables={variables}
        theme={darkTheme}
        margin={{ top: 50, right: 120, bottom: 30, left: 60 }}
        lineOpacity={0.4}
        colors={getColor}
        strokeWidth={2}
        animate={false}
      />

      {/* Legend */}
      <div className="flex items-center justify-center gap-6 mt-2 text-xs">
        <div className="flex items-center gap-2">
          <div className="w-4 h-0.5 bg-[#00d4e6]" />
          <span className="text-dark-400">Pareto Optimal</span>
        </div>
        <div className="flex items-center gap-2">
          <div className="w-4 h-0.5 bg-[#3b82f6]" />
          <span className="text-dark-400">Top 25%</span>
        </div>
        <div className="flex items-center gap-2">
          <div className="w-4 h-0.5 bg-[#475569]" />
          <span className="text-dark-400">Other</span>
        </div>
      </div>
    </div>
  );
}
```

**Step 3: Update Dashboard to use new component**

**File:** `atomizer-dashboard/frontend/src/pages/Dashboard.tsx`

Replace the old parallel coordinates import:

```tsx
// Remove
import { ParallelCoordinatesPlot } from '../components/ParallelCoordinatesPlot';

// Add
import { ParallelCoordinatesChart } from '../components/charts/ParallelCoordinatesChart';
```

Update usage:

```tsx
<ParallelCoordinatesChart
  trials={displayedTrials}
  paramNames={paramNames}
  objectiveNames={objectiveNames}
  paretoTrials={new Set(paretoFront.map(t => t.trial_number))}
  height={400}
  maxTrials={300}
/>
```
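The `paretoTrials` prop above is a `Set` of trial numbers; the plan does not show how that set is built. A minimal non-dominated-filter sketch (Python for brevity; assumes all objectives are minimized and `trials` is a list of `(trial_number, values)` pairs — illustrative names, not the dashboard's actual API):

```python
def pareto_front(trials):
    """Return the set of trial numbers not dominated by any other trial.

    trials: list of (trial_number, values), where values is a list of
    objective values, all to be minimized.
    """
    front = set()
    for num_a, vals_a in trials:
        dominated = False
        for num_b, vals_b in trials:
            if num_b == num_a:
                continue
            # b dominates a: no worse in every objective, strictly better in one
            if all(b <= a for a, b in zip(vals_a, vals_b)) and any(
                b < a for a, b in zip(vals_a, vals_b)
            ):
                dominated = True
                break
        if not dominated:
            front.add(num_a)
    return front
```

This O(n²) scan is fine for a few hundred displayed trials; the frontend only needs the resulting set of trial numbers.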

**Step 4: Remove old Plotly component**

Delete or archive:

- `atomizer-dashboard/frontend/src/components/plotly/PlotlyParallelCoordinates.tsx`
- `atomizer-dashboard/frontend/src/components/ParallelCoordinatesPlot.tsx`

### 4.2 Convergence Plot Log Scale

#### Implementation

**File:** `atomizer-dashboard/frontend/src/components/ConvergencePlot.tsx`

Add log scale toggle:

```tsx
// Add to component state
const [useLogScale, setUseLogScale] = useState(false);

// Add transformation function
const transformValue = (value: number): number => {
  if (!useLogScale) return value;
  // Handle negative values for minimization
  if (value <= 0) return value;
  return Math.log10(value);
};

// Add inverse for display
const formatValue = (value: number): string => {
  if (!useLogScale) return value.toFixed(4);
  // Show original value in tooltip
  return Math.pow(10, value).toFixed(4);
};

// Update data processing
const chartData = useMemo(() => {
  return processedData.map(d => ({
    ...d,
    value: transformValue(d.value),
    best: transformValue(d.best),
  }));
}, [processedData, useLogScale]);

// Add toggle button in render
<div className="flex items-center gap-2 mb-2">
  <button
    onClick={() => setUseLogScale(!useLogScale)}
    className={`px-2 py-1 text-xs rounded ${
      useLogScale
        ? 'bg-primary-500 text-white'
        : 'bg-dark-600 text-dark-300 hover:bg-dark-500'
    }`}
  >
    Log Scale
  </button>
  <span className="text-xs text-dark-400">
    {useLogScale ? 'Logarithmic' : 'Linear'}
  </span>
</div>
```

Also add for PlotlyConvergencePlot:

**File:** `atomizer-dashboard/frontend/src/components/plotly/PlotlyConvergencePlot.tsx`

```tsx
// Add state
const [logScale, setLogScale] = useState(false);

// Update layout
const layout = {
  ...existingLayout,
  yaxis: {
    ...existingLayout.yaxis,
    type: logScale ? 'log' : 'linear',
    title: logScale ? `${objectiveName} (log scale)` : objectiveName,
  },
};

// Add toggle
<button
  onClick={() => setLogScale(!logScale)}
  className="absolute top-2 right-2 px-2 py-1 text-xs bg-dark-700 hover:bg-dark-600 rounded"
>
  {logScale ? 'Linear' : 'Log'}
</button>
```
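Both toggles rest on the same base-10 transform; a quick Python sketch of the transform and the inverse used for tooltip display, with the same positive-value guard as the snippets above:

```python
import math


def to_log(value: float) -> float:
    """Map a value onto a log10 axis; non-positive values pass through
    unchanged (the same guard used in the frontend snippet)."""
    if value <= 0:
        return value
    return math.log10(value)


def from_log(value: float) -> float:
    """Inverse transform, used to show the original value in tooltips."""
    return 10 ** value
```

The round trip `from_log(to_log(x))` recovers `x` for positive inputs, which is what lets the tooltip show true objective values while the axis stays compressed.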

## 5. Phase 3: Analysis & Reporting (P3)

**Estimated Time:** 10 hours
**Dependencies:** None (can run in parallel with P2)
**Risk:** Low

### 5.1 Analysis Tab Fixes

#### Testing Matrix

| Tab | Component | Test | Status |
|-----|-----------|------|--------|
| Overview | Statistics grid | Load with 100+ trials | |
| Overview | Convergence plot | Shows best-so-far line | |
| Parameters | Importance chart | Shows ranking bars | |
| Parameters | Parallel coords | Axes render correctly | |
| Pareto | 2D/3D plot | Toggle works | |
| Pareto | Pareto table | Shows top solutions | |
| Correlations | Heatmap | All cells colored | |
| Correlations | Top correlations | Table populated | |
| Constraints | Feasibility chart | Line chart renders | |
| Constraints | Infeasible list | Shows violations | |
| Surrogate | Quality chart | FEA vs NN comparison | |
| Runs | Comparison | Multiple runs overlay | |

#### Common Fixes Needed

**Fix 1: Add loading states to all tabs**

```tsx
// Wrap each tab content
{loading ? (
  <div className="flex items-center justify-center h-64">
    <RefreshCw className="w-6 h-6 animate-spin text-primary-400" />
    <span className="ml-2 text-dark-400">Loading {tabName}...</span>
  </div>
) : error ? (
  <div className="text-center text-red-400 py-8">
    <AlertTriangle className="w-8 h-8 mx-auto mb-2" />
    <p>{error}</p>
  </div>
) : (
  // Tab content
)}
```

**Fix 2: Memoize expensive calculations**

```tsx
// Wrap correlation calculation
const correlationData = useMemo(() => {
  if (trials.length < 10) return null;
  return calculateCorrelations(trials, paramNames, objectiveNames);
}, [trials, paramNames, objectiveNames]);
```

**Fix 3: Add empty state handlers**

```tsx
{trials.length === 0 ? (
  <div className="text-center text-dark-400 py-12">
    <Database className="w-12 h-12 mx-auto mb-4 opacity-30" />
    <p className="text-lg">No trials yet</p>
    <p className="text-sm">Run some optimization trials to see analysis</p>
  </div>
) : (
  // Content
)}
```

### 5.2 Report Generation Protocol

#### Create Protocol Document

**File:** `docs/protocols/operations/OP_08_GENERATE_REPORT.md`

# OP_08: Generate Study Report

## Overview
Generate a comprehensive markdown report for an optimization study.

## Trigger
- Dashboard "Generate Report" button
- CLI: `atomizer report <study_name>`
- Claude Code: "generate report for {study}"

## Prerequisites
- Study must have completed trials
- study.db must exist with trial data
- optimization_config.json must be present

## Process

### Step 1: Gather Data
```python
# Load configuration
config = load_json(study_dir / "optimization_config.json")

# Query database
db = sqlite3.connect(study_dir / "3_results/study.db")
trials = query_all_trials(db)
best_trial = get_best_trial(db)

# Calculate metrics
convergence_trial = find_90_percent_improvement(trials)
feasibility_rate = count_feasible(trials) / len(trials)
```
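`find_90_percent_improvement` and `count_feasible` are helpers only named in the snippet above; one possible sketch (assuming minimization and plain dicts with `value`/`feasible` keys — not the actual engine API):

```python
def find_90_percent_improvement(trials):
    """Return the index of the first trial whose best-so-far value reaches
    90% of the total improvement from the first value to the overall best."""
    values = [t["value"] for t in trials]
    first, best = values[0], min(values)
    # For minimization, 90% of the improvement means dropping below this threshold
    threshold = first - 0.9 * (first - best)
    best_so_far = float("inf")
    for i, v in enumerate(values):
        best_so_far = min(best_so_far, v)
        if best_so_far <= threshold:
            return i
    return len(values) - 1


def count_feasible(trials):
    """Count trials whose constraints were satisfied (default: feasible)."""
    return sum(1 for t in trials if t.get("feasible", True))
```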

### Step 2: Generate Sections

#### Executive Summary

- Total trials completed
- Best objective value achieved
- Improvement percentage from initial
- Key design changes

#### Results Table

| Metric | Initial | Final | Change |
|--------|---------|-------|--------|
| objective_1 | X | Y | Z% |

#### Best Solution

- Trial number
- All design variable values
- All objective values
- Constraint satisfaction

#### Convergence Analysis

- Phase breakdown
- Convergence trial identification
- Exploration vs exploitation ratio

#### Recommendations

- Potential further improvements
- Sensitivity observations
- Next steps

### Step 3: Write Report

```python
report_path = study_dir / "STUDY_REPORT.md"
report_path.write_text(markdown_content)
```

## Output

- `STUDY_REPORT.md` in study root directory
- Returns markdown content to API caller

## Template

See: `optimization_engine/reporting/templates/study_report.md`


#### Implement Backend

**File:** `atomizer-dashboard/backend/api/routes/optimization.py`

Enhance the generate-report endpoint:

```python
@router.post("/studies/{study_id}/generate-report")
async def generate_report(study_id: str, format: str = "markdown"):
    """
    Generate comprehensive study report.

    Args:
        study_id: Study identifier
        format: Output format (markdown, html, json)

    Returns:
        Generated report content and file path
    """
    try:
        study_dir = resolve_study_path(study_id)
        results_dir = get_results_dir(study_dir)

        # Load configuration
        config_file = study_dir / "optimization_config.json"
        if not config_file.exists():
            raise HTTPException(status_code=404, detail="No optimization config found")

        config = json.loads(config_file.read_text())

        # Load trial data from database
        db_path = results_dir / "study.db"
        if not db_path.exists():
            raise HTTPException(status_code=404, detail="No study database found")

        conn = sqlite3.connect(str(db_path))
        conn.row_factory = sqlite3.Row
        cursor = conn.cursor()

        # Get all completed trials
        cursor.execute("""
            SELECT t.trial_id, t.number,
                   GROUP_CONCAT(tv.value) AS "values",
                   GROUP_CONCAT(tp.param_name || '=' || tp.param_value) as params
            FROM trials t
            LEFT JOIN trial_values tv ON t.trial_id = tv.trial_id
            LEFT JOIN trial_params tp ON t.trial_id = tp.trial_id
            WHERE t.state = 'COMPLETE'
            GROUP BY t.trial_id
            ORDER BY t.number
        """)
        trials = cursor.fetchall()

        # Get best trial
        cursor.execute("""
            SELECT t.trial_id, t.number, MIN(tv.value) as best_value
            FROM trials t
            JOIN trial_values tv ON t.trial_id = tv.trial_id
            WHERE t.state = 'COMPLETE'
            GROUP BY t.trial_id
            ORDER BY best_value
            LIMIT 1
        """)
        best = cursor.fetchone()

        conn.close()

        # Generate report content
        report = generate_markdown_report(
            study_id=study_id,
            config=config,
            trials=trials,
            best_trial=best
        )

        # Save to file
        report_path = study_dir / "STUDY_REPORT.md"
        report_path.write_text(report)

        return {
            "success": True,
            "content": report,
            "path": str(report_path),
            "format": format,
            "generated_at": datetime.now().isoformat()
        }

    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to generate report: {str(e)}")


def generate_markdown_report(study_id: str, config: dict, trials: list, best_trial) -> str:
    """Generate markdown report content"""

    # Extract info
    objectives = config.get("objectives", [])
    design_vars = config.get("design_variables", [])
    n_trials = len(trials)

    # Calculate metrics
    if trials and best_trial:
        first_value = float(trials[0]['values'].split(',')[0]) if trials[0]['values'] else 0
        best_value = best_trial['best_value']
        improvement = ((first_value - best_value) / first_value * 100) if first_value != 0 else 0
    else:
        first_value = best_value = improvement = 0

    # Build report
    report = f"""# {study_id.replace('_', ' ').title()} - Optimization Report

**Generated:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
**Status:** {'Completed' if n_trials >= config.get('optimization_settings', {}).get('n_trials', 100) else 'In Progress'}

---

## Executive Summary

This optimization study completed **{n_trials} trials** and achieved a **{improvement:.1f}%** improvement in the primary objective.

| Metric | Value |
|--------|-------|
| Total Trials | {n_trials} |
| Best Value | {best_value:.4f} |
| Initial Value | {first_value:.4f} |
| Improvement | {improvement:.1f}% |

---

## Objectives

| Name | Direction | Weight |
|------|-----------|--------|
"""

    for obj in objectives:
        report += f"| {obj.get('name', 'N/A')} | {obj.get('direction', 'minimize')} | {obj.get('weight', 1.0)} |\n"

    report += f"""
---

## Design Variables

| Name | Min | Max | Best Value |
|------|-----|-----|------------|
"""

    # Parse best params
    best_params = {}
    if best_trial and best_trial['params']:
        for pair in best_trial['params'].split(','):
            if '=' in pair:
                k, v = pair.split('=', 1)
                best_params[k] = float(v)

    for dv in design_vars:
        name = dv.get('name', 'N/A')
        min_val = dv.get('bounds', [0, 1])[0]
        max_val = dv.get('bounds', [0, 1])[1]
        best_val = best_params.get(name, 'N/A')
        if isinstance(best_val, float):
            best_val = f"{best_val:.4f}"
        report += f"| {name} | {min_val} | {max_val} | {best_val} |\n"

    report += f"""
---

## Best Solution

**Trial #{best_trial['number'] if best_trial else 'N/A'}** achieved the optimal result.

---

## Recommendations

1. Consider extending the optimization if convergence is not yet achieved
2. Validate the best solution with high-fidelity FEA
3. Perform sensitivity analysis around the optimal design point

---

*Generated by Atomizer Dashboard*
"""

    return report
```
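The `GROUP_CONCAT` query above can be exercised against an in-memory SQLite database; a sketch with a minimal stand-in schema (the real `study.db` is managed by Optuna and may differ — note that the `values` alias must be quoted, since `VALUES` is a reserved word in SQLite):

```python
import sqlite3

# Minimal stand-in schema mirroring the snippet's table/column names
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE trials (trial_id INTEGER PRIMARY KEY, number INTEGER, state TEXT);
    CREATE TABLE trial_values (trial_id INTEGER, value REAL);
    CREATE TABLE trial_params (trial_id INTEGER, param_name TEXT, param_value REAL);
    INSERT INTO trials VALUES (1, 0, 'COMPLETE'), (2, 1, 'COMPLETE');
    INSERT INTO trial_values VALUES (1, 12.5), (2, 9.75);
    INSERT INTO trial_params VALUES (1, 'width', 0.4), (2, 'width', 0.6);
""")

# Same shape as the endpoint's query; "values" quoted to avoid the keyword clash
cur.execute("""
    SELECT t.trial_id, t.number,
           GROUP_CONCAT(tv.value) AS "values",
           GROUP_CONCAT(tp.param_name || '=' || tp.param_value) AS params
    FROM trials t
    LEFT JOIN trial_values tv ON t.trial_id = tv.trial_id
    LEFT JOIN trial_params tp ON t.trial_id = tp.trial_id
    WHERE t.state = 'COMPLETE'
    GROUP BY t.trial_id
    ORDER BY t.number
""")
rows = cur.fetchall()
conn.close()
```

With `row_factory = sqlite3.Row`, each row supports `row["values"]` and `row["params"]` exactly as `generate_markdown_report` expects.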

## 6. Testing & Validation

### 6.1 Test Plan

#### Unit Tests

| Component | Test Case | Expected |
|-----------|-----------|----------|
| StatusBadge | Render all states | Correct colors/icons |
| ETA calculation | 20 trials with timestamps | Accurate ETA |
| Pause/Resume | Running process | State toggles |
| Log scale | Large value range | Compressed view |
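The "ETA calculation" case above assumes a rate estimator over recent trial timestamps; a minimal sketch of one such estimator (illustrative only — the backend's actual implementation may differ):

```python
from datetime import datetime, timedelta


def estimate_eta(completed_times, total_trials, window=20):
    """Estimate the completion time from recent trial completion timestamps.

    completed_times: sorted list of datetimes, one per completed trial.
    Returns None until at least two trials have finished.
    """
    if len(completed_times) < 2:
        return None
    recent = completed_times[-window:]
    span = (recent[-1] - recent[0]).total_seconds()
    rate = (len(recent) - 1) / span  # trials per second over the window
    remaining = total_trials - len(completed_times)
    return recent[-1] + timedelta(seconds=remaining / rate)
```

Using only the last `window` timestamps keeps the estimate responsive when trial cost drifts (e.g. as the NN surrogate takes over more evaluations).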

#### Integration Tests

| Flow | Steps | Expected |
|------|-------|----------|
| Start optimization | Click Start → Watch status | Shows Running |
| Pause optimization | Click Pause while running | Shows Paused, process suspended |
| Resume optimization | Click Resume while paused | Shows Running, process resumes |
| Stop optimization | Click Stop | Shows Stopped, process killed |
| Optuna dashboard | Click Optuna button | New tab opens with Optuna |
| Generate report | Click Generate Report | STUDY_REPORT.md created |
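The pause/resume flows above assume the backend suspends the worker process rather than killing it; on POSIX this can be sketched with `SIGSTOP`/`SIGCONT` (`pause_study`/`resume_study` are hypothetical names, and this mechanism is not available on Windows):

```python
import os
import signal


def pause_study(pid: int) -> None:
    """Suspend the optimization worker process (POSIX only)."""
    os.kill(pid, signal.SIGSTOP)


def resume_study(pid: int) -> None:
    """Resume a previously suspended worker process."""
    os.kill(pid, signal.SIGCONT)
```

Suspension preserves all in-memory optimizer state, which is why Pause/Resume is cheaper and safer than Stop followed by a restart from the study database.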

#### Manual Testing Checklist

**Phase 0:**
- [ ] `pip install optuna-dashboard` succeeds
- [ ] Optuna button launches dashboard
- [ ] ETA shows reasonable time
- [ ] Rate per hour matches reality

**Phase 1:**
- [ ] Status badge shows correct state
- [ ] Pause button suspends process
- [ ] Resume button resumes process
- [ ] Stop button kills process
- [ ] Optimizer state updates dynamically

**Phase 2:**
- [ ] Parallel coords renders in dark theme
- [ ] Pareto solutions highlighted in cyan
- [ ] Log scale toggle works
- [ ] Convergence improvements visible

**Phase 3:**
- [ ] All Analysis tabs load
- [ ] No console errors
- [ ] Report generates successfully
- [ ] Report contains accurate data

### 6.2 Rollback Plan

If issues arise:

1. **Git revert:** Each phase is a separate commit
2. **Feature flags:** Add `ENABLE_NEW_CONTROLS=false` env var
3. **Component fallback:** Keep old components renamed with `_legacy` suffix

## 7. Appendix: Technical Specifications

### 7.1 Package Dependencies

**Frontend** (`package.json` additions):

```json
{
  "dependencies": {
    "@nivo/parallel-coordinates": "^0.84.0",
    "@nivo/core": "^0.84.0"
  }
}
```

**Backend** (pip):

```
optuna-dashboard>=0.14.0
```

### 7.2 API Endpoints Summary

| Method | Endpoint | Description | Phase |
|--------|----------|-------------|-------|
| GET | `/api/optimization/optuna-status` | Check if optuna-dashboard installed | P0 |
| GET | `/api/optimization/studies/{id}/process` | Enhanced with ETA | P0 |
| POST | `/api/optimization/studies/{id}/pause` | Pause optimization | P1 |
| POST | `/api/optimization/studies/{id}/resume` | Resume optimization | P1 |
| GET | `/api/optimization/studies/{id}/is-paused` | Check pause state | P1 |
| GET | `/api/optimization/studies/{id}/dashboard-state` | Dynamic optimizer state | P1 |
| POST | `/api/optimization/studies/{id}/generate-report` | Enhanced report gen | P3 |

### 7.3 File Changes Summary

| File | Action | Phase |
|------|--------|-------|
| `backend/api/routes/optimization.py` | Modify | P0, P1, P3 |
| `frontend/src/api/client.ts` | Modify | P0, P1 |
| `frontend/src/components/dashboard/ControlPanel.tsx` | Modify | P0, P1 |
| `frontend/src/components/dashboard/StatusBadge.tsx` | Create | P1 |
| `frontend/src/components/tracker/OptimizerStatePanel.tsx` | Rewrite | P1 |
| `frontend/src/components/charts/ParallelCoordinatesChart.tsx` | Create | P2 |
| `frontend/src/components/ConvergencePlot.tsx` | Modify | P2 |
| `frontend/src/components/plotly/PlotlyConvergencePlot.tsx` | Modify | P2 |
| `frontend/src/pages/Analysis.tsx` | Modify | P3 |
| `docs/protocols/operations/OP_08_GENERATE_REPORT.md` | Create | P3 |

### 7.4 Color Palette (Atomaste Theme)

```css
/* Primary */
--atomaste-cyan: #00d4e6;
--atomaste-cyan-light: #34d399;
--atomaste-cyan-dark: #0891b2;

/* Status */
--status-running: #22c55e;
--status-paused: #eab308;
--status-stopped: #6b7280;
--status-error: #ef4444;
--status-complete: #00d4e6;

/* Visualization */
--chart-pareto: #00d4e6;
--chart-good: #3b82f6;
--chart-neutral: #64748b;
--chart-bad: #475569;
--chart-infeasible: #ef4444;

/* Backgrounds */
--bg-primary: #0a0f1a;
--bg-card: #1e293b;
--bg-input: #334155;

/* Text */
--text-primary: #f1f5f9;
--text-secondary: #94a3b8;
--text-muted: #64748b;
```

## Approval

- [ ] **User Review:** Plan reviewed and approved
- [ ] **Technical Review:** Implementation approach validated
- [ ] **Resource Allocation:** Time allocated for each phase

Ready to begin implementation upon approval.