Atomizer Dashboard - Master Plan
Version: 1.0 | Date: November 21, 2025 | Status: Planning Phase
Executive Summary
A modern, real-time web dashboard for Atomizer that provides:
- Study Configurator - Interactive UI + LLM chat interface for study setup
- Live Dashboard - Real-time optimization monitoring with charts/graphs
- Results Viewer - Rich markdown report display with interactive visualizations
Architecture Overview
Tech Stack Recommendation
Backend
- FastAPI - Modern Python web framework
- Native async support for real-time updates
- Automatic OpenAPI documentation
- WebSocket support for live streaming
- Easy integration with existing Python codebase
Frontend
- React - Component-based UI framework
- Vite - Fast development and build tool
- TailwindCSS - Utility-first styling
- Recharts - React charting library
- React Markdown - Markdown rendering with code highlighting
- Socket.IO (or native WebSocket) - Real-time communication
State Management
- React Query (TanStack Query) - Server state management
- Automatic caching and refetching
- Real-time updates
- Optimistic updates
Database (Optional Enhancement)
- SQLite (already using via Optuna) - Study metadata
- File-based JSON for real-time data (current approach works well)
Application Structure
atomizer-dashboard/
├── backend/
│ ├── api/
│ │ ├── main.py # FastAPI app entry
│ │ ├── routes/
│ │ │ ├── studies.py # Study CRUD operations
│ │ │ ├── optimization.py # Start/stop/monitor optimization
│ │ │ ├── llm.py # LLM chat interface
│ │ │ └── reports.py # Report generation/viewing
│ │ ├── websocket/
│ │ │ └── optimization_stream.py # Real-time optimization updates
│ │ └── services/
│ │ ├── study_service.py # Study management logic
│ │ ├── optimization_service.py # Optimization runner
│ │ └── llm_service.py # LLM integration
│ └── requirements.txt
│
├── frontend/
│ ├── src/
│ │ ├── pages/
│ │ │ ├── Configurator.tsx # Study configuration page
│ │ │ ├── Dashboard.tsx # Live optimization dashboard
│ │ │ └── Results.tsx # Results viewer
│ │ ├── components/
│ │ │ ├── StudyForm.tsx # Manual study configuration
│ │ │ ├── LLMChat.tsx # Chat interface with Claude
│ │ │ ├── LiveCharts.tsx # Real-time optimization charts
│ │ │ ├── MarkdownReport.tsx # Markdown report renderer
│ │ │ └── ParameterTable.tsx # Design variables table
│ │ ├── hooks/
│ │ │ ├── useStudies.ts # Study data fetching
│ │ │ ├── useOptimization.ts # Optimization control
│ │ │ └── useWebSocket.ts # WebSocket connection
│ │ └── App.tsx
│ └── package.json
│
└── docs/
└── DASHBOARD_MASTER_PLAN.md (this file)
Page 1: Study Configurator
Purpose
Create and configure new optimization studies through:
- Manual form-based configuration
- LLM-assisted natural language setup (future)
Layout
┌─────────────────────────────────────────────────────────────┐
│ Atomizer - Study Configurator [Home] [Help]│
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────┬─────────────────────────────────┐ │
│ │ Study Setup │ LLM Assistant (Future) │ │
│ │ │ ┌───────────────────────────┐ │ │
│ │ Study Name: │ │ Chat with Claude Code │ │ │
│ │ [____________] │ │ │ │ │
│ │ │ │ > "Create a study to │ │ │
│ │ Model Files: │ │ tune circular plate │ │ │
│ │ [Browse .prt] │ │ to 115 Hz" │ │ │
│ │ [Browse .sim] │ │ │ │ │
│ │ │ │ Claude: "I'll configure │ │ │
│ │ Design Variables: │ │ the study for you..." │ │ │
│ │ + Add Variable │ │ │ │ │
│ │ • diameter │ │ [Type message...] │ │ │
│ │ [50-150] mm │ └───────────────────────────┘ │ │
│ │ • thickness │ │ │
│ │ [2-10] mm │ Generated Configuration: │ │
│ │ │ ┌───────────────────────────┐ │ │
│ │ Optimization Goal: │ │ • Study: freq_tuning │ │ │
│ │ [Minimize ▼] │ │ • Target: 115.0 Hz │ │ │
│ │ │ │ • Variables: 2 │ │ │
│ │ Target Value: │ │ • Trials: 50 │ │ │
│ │ [115.0] Hz │ │ │ │ │
│ │ Tolerance: [0.1] │ │ [Apply Configuration] │ │ │
│ │ │ └───────────────────────────┘ │ │
│ │ [Advanced Options] │ │ │
│ │ │ │ │
│ │ [Create Study] │ │ │
│ └─────────────────────┴─────────────────────────────────┘ │
│ │
│ Recent Studies │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ • circular_plate_frequency_tuning [View] [Resume] │ │
│ │ • beam_deflection_minimization [View] [Resume] │ │
│ └───────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
Features
Manual Configuration
- Study metadata: Name, description, tags
- Model upload: .prt, .sim, .fem files (drag-and-drop)
- Design variables:
- Add/remove parameters
- Set bounds (min, max, step)
- Units specification
- Objective function:
- Goal type (minimize, maximize, target)
- Target value + tolerance
- Multi-objective support (future)
- Optimization settings:
- Number of trials
- Sampler selection (TPE, CMA-ES, Random)
- Early stopping rules
- Validation rules: Optional constraints
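The settings above would ultimately serialize into the study's configuration files. As an illustration only (field names here are hypothetical; the actual Atomizer schema may differ), an `optimization_config.json` for the circular-plate example could look like:

```json
{
  "study_name": "circular_plate_frequency_tuning",
  "objective": {
    "goal": "target",
    "target_value": 115.0,
    "tolerance": 0.1,
    "units": "Hz"
  },
  "design_variables": [
    { "name": "diameter",  "low": 50.0, "high": 150.0, "units": "mm" },
    { "name": "thickness", "low": 2.0,  "high": 10.0,  "units": "mm" }
  ],
  "optimization": {
    "n_trials": 50,
    "sampler": "TPE"
  }
}
```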
LLM Assistant (Future Phase)
- Chat interface: Embedded terminal-like chat with Claude Code
- Natural language study configuration
- Example: "Create a study to tune the first natural frequency of a circular plate to exactly 115 Hz"
- Real-time configuration generation:
- LLM parses intent
- Generates workflow_config.json and optimization_config.json
- Shows preview of generated config
- User can review and approve
- Iterative refinement:
- User: "Change target to 120 Hz"
- User: "Add thickness constraint < 8mm"
- Context awareness: LLM has access to:
- Uploaded model files
- Available extractors
- Previous studies
- PROTOCOL.md guidelines
API Endpoints
# backend/api/routes/studies.py
@router.post("/studies")
async def create_study(study_config: StudyConfig):
    """Create new study from configuration"""

@router.get("/studies")
async def list_studies():
    """List all studies with metadata"""

@router.get("/studies/{study_id}")
async def get_study(study_id: str):
    """Get study details"""

@router.put("/studies/{study_id}")
async def update_study(study_id: str, config: StudyConfig):
    """Update study configuration"""

@router.delete("/studies/{study_id}")
async def delete_study(study_id: str):
    """Delete study"""

# backend/api/routes/llm.py (Future)
@router.post("/llm/chat")
async def chat_with_llm(message: str, context: dict):
    """Send message to Claude Code, get response + generated config"""

@router.post("/llm/apply-config")
async def apply_llm_config(study_id: str, generated_config: dict):
    """Apply LLM-generated configuration to study"""
Page 2: Live Optimization Dashboard
Purpose
Monitor running optimizations in real-time with interactive visualizations.
Layout
┌─────────────────────────────────────────────────────────────┐
│ Atomizer - Live Dashboard [Configurator] [Help]│
├─────────────────────────────────────────────────────────────┤
│ Study: circular_plate_frequency_tuning [Stop] [Pause]│
│ Status: RUNNING Progress: 23/50 trials (46%) │
│ Best: 0.185 Hz Time: 15m 32s ETA: 18m │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────┬───────────────────────────┐ │
│ │ Convergence Plot │ Parameter Space │ │
│ │ │ │ │
│ │ Objective (Hz) │ thickness │ │
│ │ ↑ │ ↑ │ │
│ │ 5 │ • │ 10│ │ │
│ │ │ • │ │ • • │ │
│ │ 3 │ • • │ 8│ • ⭐ • │ │
│ │ │ • • │ │ • • • • │ │
│ │ 1 │ ••• │ 6│ • • │ │
│ │ │ •⭐ │ │ • │ │
│ │ 0 └─────────────────→ Trial │ 4│ │ │
│ │ 0 10 20 30 │ └──────────────→ │ │
│ │ │ 50 100 150 │ │
│ │ Target: 115.0 Hz ±0.1 │ diameter │ │
│ │ Current Best: 115.185 Hz │ ⭐ = Best trial │ │
│ └─────────────────────────────┴───────────────────────────┘ │
│ │
│ ┌─────────────────────────────┬───────────────────────────┐ │
│ │ Recent Trials │ System Stats │ │
│ │ │ │ │
│ │ #23 0.234 Hz ✓ │ CPU: 45% │ │
│ │ #22 1.456 Hz ✓ │ Memory: 2.1 GB │ │
│ │ #21 0.876 Hz ✓ │ NX Sessions: 1 │ │
│ │ #20 0.185 Hz ⭐ NEW BEST │ Solver Queue: 0 │ │
│ │ #19 2.345 Hz ✓ │ │ │
│ │ #18 PRUNED ✗ │ Pruned: 3 (13%) │ │
│ │ │ Success: 20 (87%) │ │
│ └─────────────────────────────┴───────────────────────────┘ │
│ │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ Strategy Performance (Protocol 10) │ │
│ │ │ │
│ │ Phase: EXPLOITATION (CMA-ES) │ │
│ │ Transition at Trial #15 (confidence: 72%) │ │
│ │ │ │
│ │ TPE (Trials 1-15): Best = 0.485 Hz │ │
│ │ CMA-ES (Trials 16+): Best = 0.185 Hz │ │
│ │ │ │
│ └───────────────────────────────────────────────────────┘ │
│ │
│ [View Full Report] [Download Data] [Clone Study] │
└─────────────────────────────────────────────────────────────┘
Features
Real-Time Updates (WebSocket)
- Trial completion: Instant notification when trial finishes
- Best value updates: Highlight new best trials
- Progress tracking: Current trial number, elapsed time, ETA
- Status changes: Running → Paused → Completed
Interactive Charts
- Convergence Plot
  - X-axis: Trial number
  - Y-axis: Objective value
  - Target line (if applicable)
  - Best value trajectory
  - Hover: Show trial details
- Parameter Space Visualization
  - 2D scatter plot (for 2D problems)
  - 3D scatter plot (for 3D problems, using Three.js)
  - High-D: Parallel coordinates plot
  - Color-coded by objective value
  - Click trial → Show details popup
- Parameter Importance (Protocol 9)
  - Bar chart from Optuna's fANOVA
  - Shows which parameters matter most
  - Updates after characterization phase
- Strategy Performance (Protocol 10)
  - Timeline showing strategy switches
  - Performance comparison table
  - Confidence metrics over time
Trial Table
- Recent 10 trials (scrollable to see all)
- Columns: Trial #, Objective, Parameters, Status, Time
- Click row → Expand details:
- Full parameter values
- Simulation time
- Solver logs (if failed)
- Pruning reason (if pruned)
Control Panel
- Stop: Gracefully stop optimization
- Pause: Pause after current trial
- Resume: Continue optimization
- Clone: Create new study with same config
Pruning Diagnostics
- Real-time pruning alerts
- Pruning breakdown (validation, simulation, OP2)
- False positive detection warnings
- Link to detailed pruning log
API Endpoints
# backend/api/routes/optimization.py
@router.post("/studies/{study_id}/start")
async def start_optimization(study_id: str):
    """Start optimization (spawns background process)"""

@router.post("/studies/{study_id}/stop")
async def stop_optimization(study_id: str):
    """Stop optimization gracefully"""

@router.post("/studies/{study_id}/pause")
async def pause_optimization(study_id: str):
    """Pause after current trial"""

@router.get("/studies/{study_id}/status")
async def get_status(study_id: str):
    """Get current optimization status"""

@router.get("/studies/{study_id}/history")
async def get_history(study_id: str, limit: int = 100):
    """Get trial history (reads optimization_history_incremental.json)"""

# backend/api/websocket/optimization_stream.py
@router.websocket("/ws/optimization/{study_id}")
async def optimization_stream(websocket: WebSocket, study_id: str):
    """
    WebSocket endpoint for real-time updates.

    Watches:
    - optimization_history_incremental.json (file watcher)
    - pruning_history.json
    - study.db (Optuna trial completion events)

    Sends:
    - trial_completed: { trial_number, objective, params, status }
    - new_best: { trial_number, objective }
    - status_change: { status: "running" | "paused" | "completed" }
    - progress_update: { current, total, eta }
    """
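Behind this endpoint the server has to fan each event out to every client subscribed to a given study. One way to sketch that, independent of any web framework (the class and method names here are illustrative, not existing Atomizer code):

```python
import asyncio
from collections import defaultdict

class StudyBroadcaster:
    """Fans optimization events out to all subscribers of a study."""

    def __init__(self):
        self._queues = defaultdict(set)  # study_id -> set of subscriber queues

    def subscribe(self, study_id):
        q = asyncio.Queue()
        self._queues[study_id].add(q)
        return q

    def unsubscribe(self, study_id, q):
        self._queues[study_id].discard(q)

    async def publish(self, study_id, event):
        # Copy the set so subscribers can leave while we iterate.
        for q in list(self._queues[study_id]):
            await q.put(event)

async def demo():
    bus = StudyBroadcaster()
    q = bus.subscribe("circular_plate_frequency_tuning")
    await bus.publish(
        "circular_plate_frequency_tuning",
        {"type": "new_best", "data": {"trial_number": 20, "objective": 0.185}},
    )
    return await q.get()

event = asyncio.run(demo())
print(event["type"])  # → new_best
```

In the actual endpoint, each accepted WebSocket would subscribe, then loop over its queue and forward events with `websocket.send_json`.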
Page 3: Results Viewer
Purpose
Display completed optimization reports with rich markdown rendering and interactive visualizations.
Layout
┌─────────────────────────────────────────────────────────────┐
│ Atomizer - Results [Dashboard] [Configurator]│
├─────────────────────────────────────────────────────────────┤
│ Study: circular_plate_frequency_tuning │
│ Status: COMPLETED Trials: 50/50 Time: 35m 12s │
│ Best: 0.185 Hz (Trial #45) Target: 115.0 Hz │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌────────────────┬──────────────────────────────────────┐ │
│ │ Navigation │ Report Content │ │
│ │ │ │ │
│ │ • Summary │ # Optimization Report │ │
│ │ • Best Result │ **Study**: circular_plate_... │ │
│ │ • All Trials │ │ │
│ │ • Convergence │ ## Achieved Performance │ │
│ │ • Parameters │ - **First Frequency**: 115.185 Hz │ │
│ │ • Strategy │ - Target: 115.000 Hz │ │
│ │ • Pruning │ - Error: 0.185 Hz (0.16%) │ │
│ │ • Downloads │ │ │
│ │ │ ## Design Parameters │ │
│ │ [Live View] │ - **Inner Diameter**: 94.07 mm │ │
│ │ [Refresh] │ - **Plate Thickness**: 6.14 mm │ │
│ │ │ │ │
│ │ │ ## Convergence Plot │ │
│ │ │ [Interactive Chart Embedded] │ │
│ │ │ │ │
│ │ │ ## Top 10 Trials │ │
│ │ │ | Rank | Trial | Frequency | ... │ │
│ │ │ |------|-------|-----------|------- │ │
│ │ │ | 1 | #45 | 115.185 | ... │ │
│ │ │ │ │
│ └────────────────┴──────────────────────────────────────┘ │
│ │
│ Actions: │
│ [Download Report (MD)] [Download Data (JSON)] [Download │
│ Charts (PNG)] [Clone Study] [Continue Optimization] │
└─────────────────────────────────────────────────────────────┘
Features
Markdown Report Rendering
- Rich formatting: Headings, tables, lists, code blocks
- Syntax highlighting: For code snippets (using highlight.js)
- LaTeX support (future): For mathematical equations
- Auto-linking: File references → clickable links
Embedded Interactive Charts
- Static images replaced with live charts:
- Convergence plot (Recharts)
- Design space scatter (Recharts or Plotly)
- Parameter importance (Recharts)
- Optuna visualizations (converted to Plotly/Recharts)
- Hover tooltips: Show trial details on hover
- Zoom/pan: Interactive exploration
- Toggle series: Show/hide data series
Navigation Sidebar
- Auto-generated TOC: From markdown headings
- Smooth scrolling: Click heading → scroll to section
- Active section highlighting: Current visible section
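The auto-generated TOC amounts to scanning the report's headings while skipping anything inside fenced code blocks. A sketch in Python (the real implementation would more likely live client-side via react-markdown plugins):

```python
import re

def extract_toc(markdown: str):
    """Build a (level, title, anchor) list from ATX headings, skipping fenced code."""
    toc, in_fence = [], False
    for line in markdown.splitlines():
        if line.startswith("```"):
            in_fence = not in_fence
            continue
        m = re.match(r"^(#{1,6})\s+(.*)", line)
        if m and not in_fence:
            title = m.group(2).strip()
            # GitHub-style anchor: lowercase, strip punctuation, hyphenate spaces.
            anchor = re.sub(r"[^a-z0-9 -]", "", title.lower()).replace(" ", "-")
            toc.append((len(m.group(1)), title, anchor))
    return toc

report = "# Optimization Report\n\n## Achieved Performance\n## Design Parameters\n"
print(extract_toc(report)[1])  # → (2, 'Achieved Performance', 'achieved-performance')
```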
Live Report Mode
- Watch for changes: File watcher on OPTIMIZATION_REPORT.md
- Auto-refresh: When report is regenerated
- Notification: "Report updated - click to reload"
Data Downloads
- Markdown report: Raw .md file
- Trial data: JSON export of optimization_history_incremental.json
- Charts: High-res PNG/SVG exports
- Full study: Zip archive of entire study folder
API Endpoints
# backend/api/routes/reports.py
@router.get("/studies/{study_id}/report")
async def get_report(study_id: str):
    """Get markdown report content (reads 3_reports/OPTIMIZATION_REPORT.md)"""

@router.get("/studies/{study_id}/report/charts/{chart_name}")
async def get_chart(study_id: str, chart_name: str):
    """Get chart image (PNG/SVG)"""

@router.get("/studies/{study_id}/download")
async def download_study(study_id: str, format: str = "json"):
    """Download study data (JSON, CSV, or ZIP)"""

@router.post("/studies/{study_id}/report/regenerate")
async def regenerate_report(study_id: str):
    """Regenerate report from current data"""
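For the ZIP download path, the whole study folder can be packed in memory with the standard library. A minimal sketch (the helper name and folder layout are illustrative):

```python
import io
import zipfile
from pathlib import Path

def zip_study(study_dir: Path) -> bytes:
    """Pack every file under the study folder into an in-memory ZIP archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in study_dir.rglob("*"):
            if path.is_file():
                # Store paths relative to the study root so the archive unpacks cleanly.
                zf.write(path, path.relative_to(study_dir))
    return buf.getvalue()
```

FastAPI would then return the bytes in a `Response` with `media_type="application/zip"` and a `Content-Disposition` header naming the study.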
Implementation Phases
Phase 1: Backend Foundation (Week 1)
Goal: Create FastAPI backend with basic study management
Tasks:
- Set up FastAPI project structure
- Implement study CRUD endpoints
- Create optimization control endpoints (start/stop/status)
- Add file upload handling
- Integrate with existing Atomizer modules
- Write API documentation (Swagger)
Files to Create:
- backend/api/main.py
- backend/api/routes/studies.py
- backend/api/routes/optimization.py
- backend/api/services/study_service.py
- backend/requirements.txt
Deliverable: Working REST API for study management
Phase 2: Frontend Shell (Week 2)
Goal: Create React app with routing and basic UI
Tasks:
- Set up Vite + React + TypeScript project
- Configure TailwindCSS
- Create page routing (Configurator, Dashboard, Results)
- Build basic layout components (Header, Sidebar, Footer)
- Implement study list view
- Connect to backend API (React Query setup)
Files to Create:
- frontend/src/App.tsx
- frontend/src/pages/*.tsx
- frontend/src/components/Layout.tsx
- frontend/src/hooks/useStudies.ts
- frontend/package.json
Deliverable: Navigable UI shell with API integration
Phase 3: Study Configurator Page (Week 3)
Goal: Functional study creation interface
Tasks:
- Build study configuration form
- Add file upload (drag-and-drop)
- Design variable management (add/remove)
- Optimization settings panel
- Form validation
- Study creation workflow
- Recent studies list
Files to Create:
- frontend/src/pages/Configurator.tsx
- frontend/src/components/StudyForm.tsx
- frontend/src/components/FileUpload.tsx
- frontend/src/components/VariableEditor.tsx
Deliverable: Working study creation form
Phase 4: Real-Time Dashboard (Week 4-5)
Goal: Live optimization monitoring
Tasks:
- Implement WebSocket connection
- Build real-time charts (Recharts):
- Convergence plot
- Parameter space scatter
- Parameter importance
- Create trial table with auto-update
- Add control panel (start/stop/pause)
- System stats display
- Pruning diagnostics integration
- File watcher for optimization_history_incremental.json
Files to Create:
- frontend/src/pages/Dashboard.tsx
- frontend/src/components/LiveCharts.tsx
- frontend/src/components/TrialTable.tsx
- frontend/src/hooks/useWebSocket.ts
- backend/api/websocket/optimization_stream.py
Deliverable: Real-time optimization dashboard
Phase 5: Results Viewer (Week 6)
Goal: Rich markdown report display
Tasks:
- Markdown rendering (react-markdown)
- Code syntax highlighting
- Embedded interactive charts
- Navigation sidebar (auto-generated TOC)
- Live report mode (file watcher)
- Data download endpoints
- Chart export functionality
Files to Create:
- frontend/src/pages/Results.tsx
- frontend/src/components/MarkdownReport.tsx
- frontend/src/components/ReportNavigation.tsx
- backend/api/routes/reports.py
Deliverable: Complete results viewer
Phase 6: LLM Integration (Future - Week 7-8)
Goal: Chat-based study configuration
Tasks:
- Backend LLM integration:
- Claude API client
- Context management (uploaded files, PROTOCOL.md)
- Configuration generation from natural language
- Frontend chat interface:
- Chat UI component
- Message streaming
- Configuration preview
- Apply/reject buttons
- Iterative refinement workflow
Files to Create:
- backend/api/routes/llm.py
- backend/api/services/llm_service.py
- frontend/src/components/LLMChat.tsx
Deliverable: LLM-assisted study configuration
Phase 7: Polish & Deployment (Week 9)
Goal: Production-ready deployment
Tasks:
- Error handling and loading states
- Responsive design (mobile-friendly)
- Performance optimization
- Security (CORS, authentication future)
- Docker containerization
- Deployment documentation
- User guide
Deliverables:
- Docker compose setup
- Deployment guide
- User documentation
Technical Specifications
WebSocket Protocol
Client → Server
{
  "action": "subscribe",
  "study_id": "circular_plate_frequency_tuning"
}
Server → Client Events
// Trial completed
{
  "type": "trial_completed",
  "data": {
    "trial_number": 23,
    "objective": 0.234,
    "params": { "diameter": 94.5, "thickness": 6.2 },
    "status": "success",
    "timestamp": "2025-11-21T10:30:45"
  }
}

// New best trial
{
  "type": "new_best",
  "data": {
    "trial_number": 20,
    "objective": 0.185,
    "params": { "diameter": 94.07, "thickness": 6.14 }
  }
}

// Progress update
{
  "type": "progress",
  "data": {
    "current": 23,
    "total": 50,
    "elapsed_seconds": 932,
    "eta_seconds": 1080,
    "status": "running"
  }
}

// Status change
{
  "type": "status_change",
  "data": {
    "status": "completed",
    "reason": "Target achieved"
  }
}
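The eta_seconds field can be derived from elapsed time and the trial counts, assuming roughly uniform trial durations (a simplification; real trials vary with mesh size and solver convergence):

```python
def estimate_eta(current: int, total: int, elapsed_seconds: float) -> float:
    """Remaining time, assuming the average per-trial cost so far holds."""
    if current == 0:
        return float("inf")  # no data yet
    per_trial = elapsed_seconds / current
    return per_trial * (total - current)

print(round(estimate_eta(23, 50, 932)))  # → 1094 (close to the 1080 s in the progress event)
```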
File Watching Strategy
Use watchdog (Python) to monitor JSON files:
- optimization_history_incremental.json - Trial updates
- pruning_history.json - Pruning events
- OPTIMIZATION_REPORT.md - Report regeneration
# backend/api/services/file_watcher.py
import asyncio
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class OptimizationWatcher(FileSystemEventHandler):
    def __init__(self, study_id: str, loop: asyncio.AbstractEventLoop):
        self.study_id = study_id
        self.loop = loop  # the running FastAPI event loop

    def on_modified(self, event):
        if event.src_path.endswith('optimization_history_incremental.json'):
            # watchdog callbacks run on a worker thread, so the async broadcast
            # must be scheduled onto the event loop rather than awaited here.
            new_trial_data = ...  # read the updated file
            asyncio.run_coroutine_threadsafe(
                broadcast_update(self.study_id, new_trial_data), self.loop)
Security Considerations
Authentication (Future Phase)
- JWT tokens: Secure API access
- Session management: User login/logout
- Role-based access: Admin vs. read-only users
File Upload Security
- File type validation: Only .prt, .sim, .fem allowed
- Size limits: Max 100 MB per file
- Virus scanning (future): ClamAV integration
- Sandboxed storage: Isolated study folders
API Rate Limiting
- Per-endpoint limits: Prevent abuse
- WebSocket connection limits: Max 10 concurrent per study
Performance Optimization
Backend
- Async I/O: All file operations async
- Caching: Redis for study metadata (future)
- Pagination: Large trial lists paginated
- Compression: Gzip responses
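Pagination over the trial history can stay simple, since the incremental JSON is already loaded as a list (a sketch; the response shape is illustrative):

```python
def paginate(trials: list, page: int = 1, per_page: int = 100) -> dict:
    """Return one page of trials plus paging metadata for the client."""
    start = (page - 1) * per_page
    return {
        "items": trials[start:start + per_page],
        "page": page,
        "total": len(trials),
        "pages": -(-len(trials) // per_page),  # ceiling division
    }

trials = [{"trial_number": i} for i in range(250)]
print(paginate(trials, page=3)["items"][0])  # → {'trial_number': 200}
```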
Frontend
- Code splitting: Route-based chunks
- Lazy loading: Charts load on demand
- Virtual scrolling: Large trial tables
- Image optimization: Lazy load chart images
- Service worker (future): Offline support
Deployment Options
Option 1: Local Development Server
# Start backend
cd backend
python -m uvicorn api.main:app --reload
# Start frontend
cd frontend
npm run dev
Option 2: Docker Compose (Production)
# docker-compose.yml
version: '3.8'
services:
  backend:
    build: ./backend
    ports:
      - "8000:8000"
    volumes:
      - ./studies:/app/studies
    environment:
      - NX_PATH=/usr/local/nx2412
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - frontend
      - backend
Option 3: Cloud Deployment (Future)
- Backend: AWS Lambda / Google Cloud Run
- Frontend: Vercel / Netlify
- Database: AWS RDS / Google Cloud SQL
- File storage: AWS S3 / Google Cloud Storage
Future Enhancements
Advanced Features
- Multi-user collaboration: Shared studies, comments
- Study comparison: Side-by-side comparison of studies
- Experiment tracking: MLflow integration
- Version control: Git-like versioning for studies
- Automated reporting: Scheduled report generation
- Email notifications: Optimization complete alerts
- Mobile app: React Native companion app
Integrations
- CAD viewers: Embed 3D model viewer (Three.js)
- Simulation previews: Show mesh/results in browser
- Cloud solvers: Run Nastran in cloud
- Jupyter notebooks: Embedded analysis notebooks
- CI/CD: Automated testing for optimization workflows
Success Metrics
User Experience
- Study creation time: < 5 minutes (manual), < 2 minutes (LLM)
- Dashboard refresh rate: < 1 second latency
- Report load time: < 2 seconds
System Performance
- WebSocket latency: < 100ms
- API response time: < 200ms (p95)
- Concurrent users: Support 10+ simultaneous optimizations
Dependencies
Backend
fastapi>=0.104.0
uvicorn[standard]>=0.24.0
python-multipart>=0.0.6
watchdog>=3.0.0
optuna>=3.4.0
pynastran>=1.4.0
python-socketio>=5.10.0
aiofiles>=23.2.1
Frontend
react>=18.2.0
react-router-dom>=6.18.0
@tanstack/react-query>=5.8.0
recharts>=2.10.0
react-markdown>=9.0.0
socket.io-client>=4.7.0
tailwindcss>=3.3.5
Next Steps
- Review this plan: Discuss architecture, tech stack, priorities
- Prototype Phase 1: Build minimal FastAPI backend
- Design mockups: High-fidelity UI designs (Figma)
- Set up development environment: Create project structure
- Begin Phase 1 implementation: Backend foundation
Confirmed Decisions ✅:
- Architecture: REST + WebSocket
- Deployment: Self-hosted (local/Docker)
- Authentication: Future phase
- Design: Desktop-first
- Implementation Priority: Live Dashboard → Study Configurator → Results Viewer
Status: ✅ Approved - Implementation starting with Phase 4 (Live Dashboard)