From ea437d360e19b36d10bdf8377f13be17c74149cb Mon Sep 17 00:00:00 2001
From: Anto01
Date: Tue, 20 Jan 2026 10:03:45 -0500
Subject: [PATCH] docs: Major documentation overhaul - restructure folders,
 update tagline, add Getting Started guide

- Restructure docs/ folder (remove numeric prefixes):
  - 04_USER_GUIDES -> guides/
  - 05_API_REFERENCE -> api/
  - 06_PHYSICS -> physics/
  - 07_DEVELOPMENT -> development/
  - 08_ARCHIVE -> archive/
  - 09_DIAGRAMS -> diagrams/
- Replace tagline 'Talk, don't click' with 'LLM-driven optimization framework' in 9 files
- Create comprehensive docs/GETTING_STARTED.md:
  - Prerequisites and quick setup
  - Project structure overview
  - First study tutorial (Claude or manual)
  - Dashboard usage guide
  - Neural acceleration introduction
- Rewrite docs/00_INDEX.md with correct paths and modern structure
- Archive obsolete files:
  - 01_PROTOCOLS.md -> archive/historical/01_PROTOCOLS_legacy.md
  - 03_GETTING_STARTED.md -> archive/historical/
  - ATOMIZER_PODCAST_BRIEFING.md -> archive/marketing/
- Update timestamps to 2026-01-20 across all key files
- Update .gitignore to exclude docs/generated/
- Version bump: ATOMIZER_CONTEXT v1.8 -> v2.0
---
 .claude/ATOMIZER_CONTEXT.md | 7 +-
 .claude/skills/00_BOOTSTRAP.md | 2 +-
 .../archive/00_BOOTSTRAP_V2.0_archived.md | 2 +-
 .claude/skills/modules/insights-catalog.md | 4 +-
 .gitignore | 3 +
 CLAUDE.md | 8 +-
 README.md | 16 +-
 docs/00_INDEX.md | 445 ++---
 docs/{02_ARCHITECTURE.md => ARCHITECTURE.md} | 0
 docs/GETTING_STARTED.md | 401 ++++
 .../GNN_ARCHITECTURE.md | 0
 .../NXOPEN_INTELLISENSE_SETUP.md | 0
 .../NXOPEN_RESOURCES.md | 0
 .../NX_FILE_STRUCTURE_PROTOCOL.md | 0
 .../nx_integration.md | 0
 .../system_configuration.md | 0
 .../historical/01_PROTOCOLS_legacy.md} | 4 +-
 .../historical/03_GETTING_STARTED_legacy.md} | 0
 .../historical/ARCHITECTURE_REFACTOR_NOV17.md | 0
 .../historical/BRACKET_STUDY_ISSUES_LOG.md | 0
 .../historical/CRITICAL_ISSUES_ROADMAP.md | 0
 .../FEATURE_REGISTRY_ARCHITECTURE.md | 0
 .../historical/FIX_VALIDATOR_PRUNING.md | 0
 .../historical/GOOD_MORNING_NOV18.md | 0
 .../historical/INDEX_OLD.md | 0
 .../historical/LESSONS_LEARNED.md | 0
 ...OPEN_DOCUMENTATION_INTEGRATION_STRATEGY.md | 0
 .../historical/NX_EXPRESSION_IMPORT_SYSTEM.md | 0
 .../historical/OPTIMIZATION_WORKFLOW.md | Bin
 .../historical/OPTUNA_DASHBOARD.md | 0
 .../PROTOCOL_10_IMPLEMENTATION_SUMMARY.md | 0
 .../historical/PRUNING_DIAGNOSTICS.md | 0
 .../historical/QUICK_CONFIG_REFERENCE.md | 0
 .../historical/STUDY_CONTINUATION_STANDARD.md | 0
 .../historical/STUDY_ORGANIZATION.md | 0
 .../historical/TODAY_PLAN_NOV18.md | 0
 .../marketing}/ATOMIZER_PODCAST_BRIEFING.md | 0
 .../PHASE_2_5_INTELLIGENT_GAP_DETECTION.md | 0
 .../PHASE_2_7_LLM_INTEGRATION.md | 0
 .../PHASE_3_2_INTEGRATION_PLAN.md | 0
 .../PHASE_3_2_INTEGRATION_STATUS.md | 0
 .../phase_documents/PHASE_3_2_NEXT_STEPS.md | 0
 .../PHASE_3_3_VISUALIZATION_AND_CLEANUP.md | 0
 .../plans/DASHBOARD_IMPROVEMENT_PLAN.md | 0
 .../plans/backend_integration_plan.md | 0
 .../plans/dashboard_enhancement_plan.md | 0
 .../SESSION_SUMMARY_NOV20.md | 0
 .../SESSION_SUMMARY_PHASE_2_5_TO_2_7.md | 0
 .../SESSION_SUMMARY_PHASE_2_8.md | 0
 .../SESSION_SUMMARY_PHASE_2_9.md | 0
 .../SESSION_SUMMARY_PHASE_3.md | 0
 .../SESSION_SUMMARY_PHASE_3_1.md | 0
 .../ATOMIZER_STATE_ASSESSMENT_NOV25.md | 0
 .../sessions/Phase_1_2_Implementation_Plan.md | 0
 .../sessions/Phase_1_3_Implementation_Plan.md | 0
 .../sessions/Today_Todo.md | 0
 .../sessions/dashboard_initial_prompt.md | 0
 .../ATOMIZER_ARCHITECTURE_OVERVIEW.md | 2 +-
 .../ATOMIZER_CLAUDE_CODE_INSTRUCTIONS.md | 0
 .../ATOMIZER_DASHBOARD_GAP_ANALYSIS.md | 0
 .../ATOMIZER_FIELD_INTEGRATION_PLAN.md | 0
 .../ATOMIZER_USER_GUIDE.md | 0
 .../DEVELOPMENT_GUIDANCE.md | 0
 .../DEVELOPMENT_ROADMAP.md | 0
 .../LOGGING_MIGRATION_GUIDE.md | 0
 .../NASTRAN_VISUALIZATION_RESEARCH.md | 0
 .../NN_SURROGATE_AUTOMATION_PLAN.md | 0
 .../Philosophy.md | 0
 docs/{09_DIAGRAMS => diagrams}/00_INDEX.md | 0
 .../architecture_overview.md | 0
 .../protocol_workflows.md | 0
 ...TOMIZER_FIELD_NEURAL_OPTIMIZATION_GUIDE.md | 0
 docs/{04_USER_GUIDES => guides}/CANVAS.md | 0
 docs/{04_USER_GUIDES => guides}/DASHBOARD.md | 0
 .../DASHBOARD_IMPLEMENTATION_STATUS.md | 0
 .../DASHBOARD_MASTER_PLAN.md | 0
 .../DASHBOARD_REACT_IMPLEMENTATION.md | 0
 .../DASHBOARD_SESSION_SUMMARY.md | 0
 .../NEURAL_FEATURES_COMPLETE.md | 0
 .../NEURAL_WORKFLOW_TUTORIAL.md | 0
 .../PHYSICS_LOSS_GUIDE.md | 0
 .../TRAINING_DATA_EXPORT_GUIDE.md | 0
 .../{04_USER_GUIDES => guides}/hybrid_mode.md | 0
 docs/{06_PHYSICS => physics}/00_INDEX.md | 0
 .../ZERNIKE_FUNDAMENTALS.md | 0
 .../ZERNIKE_OPD_METHOD.md | 0
 .../ATOMIZER_CONTEXT_ENGINEERING_PLAN.md | 4 +-
 ...TUDY_INTERVIEW_MODE_IMPLEMENTATION_PLAN.md | 2 +-
 docs/plans/CLAUDE_CANVAS_INTEGRATION_V2.md | 737 +++++++
 docs/plans/CLAUDE_CANVAS_PROJECT.md | 1239 ++++++++++++
 .../DASHBOARD_CLAUDE_CODE_INTEGRATION.md | 693 +++++++
 docs/plans/SAAS_ATOMIZER_ROADMAP.md | 863 +++++++++
 docs/{ => plans}/TODO_NXOPEN_MCP_SETUP.md | 0
 .../UNIFIED_CONFIGURATION_ARCHITECTURE.md | 1697 +++++++++++++++++
 docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md | 495 +++++
 docs/plans/UNIFIED_CONFIG_QUICKSTART.md | 92 +
 docs/plans/UNIFIED_CONFIG_RALPH_PROMPT.md | 286 +++
 docs/reference/DEEP_INVESTIGATION_PROMPT.md | 197 ++
 .../EXECUTION_PLAN_GENERATOR_PROMPT.md | 322 ++++
 docs/reviews/ARCHITECTURE_REVIEW.md | 730 +++++++
 docs/reviews/ATOMIZER_ARCHITECTURE.svg | 519 +++++
 docs/reviews/COMPONENT_RELATIONSHIPS.md | 529 +++++
 studies/M1_Mirror/README.md | 8 +-
 103 files changed, 8980 insertions(+), 327 deletions(-)
 rename docs/{02_ARCHITECTURE.md => ARCHITECTURE.md} (100%)
 create mode 100644 docs/GETTING_STARTED.md
 rename docs/{05_API_REFERENCE => api}/GNN_ARCHITECTURE.md (100%)
 rename docs/{05_API_REFERENCE => api}/NXOPEN_INTELLISENSE_SETUP.md (100%)
 rename docs/{05_API_REFERENCE => api}/NXOPEN_RESOURCES.md (100%)
 rename docs/{05_API_REFERENCE => api}/NX_FILE_STRUCTURE_PROTOCOL.md (100%)
 rename docs/{05_API_REFERENCE => api}/nx_integration.md (100%)
 rename docs/{05_API_REFERENCE => api}/system_configuration.md (100%)
 rename docs/{01_PROTOCOLS.md => archive/historical/01_PROTOCOLS_legacy.md} (99%)
 rename docs/{03_GETTING_STARTED.md => archive/historical/03_GETTING_STARTED_legacy.md} (100%)
 rename docs/{08_ARCHIVE => archive}/historical/ARCHITECTURE_REFACTOR_NOV17.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/BRACKET_STUDY_ISSUES_LOG.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/CRITICAL_ISSUES_ROADMAP.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/FEATURE_REGISTRY_ARCHITECTURE.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/FIX_VALIDATOR_PRUNING.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/GOOD_MORNING_NOV18.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/INDEX_OLD.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/LESSONS_LEARNED.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/NXOPEN_DOCUMENTATION_INTEGRATION_STRATEGY.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/NX_EXPRESSION_IMPORT_SYSTEM.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/OPTIMIZATION_WORKFLOW.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/OPTUNA_DASHBOARD.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/PROTOCOL_10_IMPLEMENTATION_SUMMARY.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/PRUNING_DIAGNOSTICS.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/QUICK_CONFIG_REFERENCE.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/STUDY_CONTINUATION_STANDARD.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/STUDY_ORGANIZATION.md (100%)
 rename docs/{08_ARCHIVE => archive}/historical/TODAY_PLAN_NOV18.md (100%)
 rename docs/{ => archive/marketing}/ATOMIZER_PODCAST_BRIEFING.md (100%)
 rename docs/{08_ARCHIVE => archive}/phase_documents/PHASE_2_5_INTELLIGENT_GAP_DETECTION.md (100%)
 rename docs/{08_ARCHIVE => archive}/phase_documents/PHASE_2_7_LLM_INTEGRATION.md (100%)
 rename docs/{08_ARCHIVE => archive}/phase_documents/PHASE_3_2_INTEGRATION_PLAN.md (100%)
 rename docs/{08_ARCHIVE => archive}/phase_documents/PHASE_3_2_INTEGRATION_STATUS.md (100%)
 rename docs/{08_ARCHIVE => archive}/phase_documents/PHASE_3_2_NEXT_STEPS.md (100%)
 rename docs/{08_ARCHIVE => archive}/phase_documents/PHASE_3_3_VISUALIZATION_AND_CLEANUP.md (100%)
 rename docs/{08_ARCHIVE => archive}/plans/DASHBOARD_IMPROVEMENT_PLAN.md (100%)
 rename docs/{08_ARCHIVE => archive}/plans/backend_integration_plan.md (100%)
 rename docs/{08_ARCHIVE => archive}/plans/dashboard_enhancement_plan.md (100%)
 rename docs/{08_ARCHIVE => archive}/session_summaries/SESSION_SUMMARY_NOV20.md (100%)
 rename docs/{08_ARCHIVE => archive}/session_summaries/SESSION_SUMMARY_PHASE_2_5_TO_2_7.md (100%)
 rename docs/{08_ARCHIVE => archive}/session_summaries/SESSION_SUMMARY_PHASE_2_8.md (100%)
 rename docs/{08_ARCHIVE => archive}/session_summaries/SESSION_SUMMARY_PHASE_2_9.md (100%)
 rename docs/{08_ARCHIVE => archive}/session_summaries/SESSION_SUMMARY_PHASE_3.md (100%)
 rename docs/{08_ARCHIVE => archive}/session_summaries/SESSION_SUMMARY_PHASE_3_1.md (100%)
 rename docs/{08_ARCHIVE => archive}/sessions/ATOMIZER_STATE_ASSESSMENT_NOV25.md (100%)
 rename docs/{08_ARCHIVE => archive}/sessions/Phase_1_2_Implementation_Plan.md (100%)
 rename docs/{08_ARCHIVE => archive}/sessions/Phase_1_3_Implementation_Plan.md (100%)
 rename docs/{08_ARCHIVE => archive}/sessions/Today_Todo.md (100%)
 rename docs/{08_ARCHIVE => archive}/sessions/dashboard_initial_prompt.md (100%)
 rename docs/{07_DEVELOPMENT => development}/ATOMIZER_ARCHITECTURE_OVERVIEW.md (99%)
 rename docs/{07_DEVELOPMENT => development}/ATOMIZER_CLAUDE_CODE_INSTRUCTIONS.md (100%)
 rename docs/{07_DEVELOPMENT => development}/ATOMIZER_DASHBOARD_GAP_ANALYSIS.md (100%)
 rename docs/{07_DEVELOPMENT => development}/ATOMIZER_FIELD_INTEGRATION_PLAN.md (100%)
 rename docs/{07_DEVELOPMENT => development}/ATOMIZER_USER_GUIDE.md (100%)
 rename docs/{07_DEVELOPMENT => development}/DEVELOPMENT_GUIDANCE.md (100%)
 rename docs/{07_DEVELOPMENT => development}/DEVELOPMENT_ROADMAP.md (100%)
 rename docs/{07_DEVELOPMENT => development}/LOGGING_MIGRATION_GUIDE.md (100%)
 rename docs/{07_DEVELOPMENT => development}/NASTRAN_VISUALIZATION_RESEARCH.md (100%)
 rename docs/{07_DEVELOPMENT => development}/NN_SURROGATE_AUTOMATION_PLAN.md (100%)
 rename docs/{07_DEVELOPMENT => development}/Philosophy.md (100%)
 rename docs/{09_DIAGRAMS => diagrams}/00_INDEX.md (100%)
 rename docs/{09_DIAGRAMS => diagrams}/architecture_overview.md (100%)
 rename docs/{09_DIAGRAMS => diagrams}/protocol_workflows.md (100%)
 rename docs/{04_USER_GUIDES => guides}/ATOMIZER_FIELD_NEURAL_OPTIMIZATION_GUIDE.md (100%)
 rename docs/{04_USER_GUIDES => guides}/CANVAS.md (100%)
 rename docs/{04_USER_GUIDES => guides}/DASHBOARD.md (100%)
 rename docs/{04_USER_GUIDES => guides}/DASHBOARD_IMPLEMENTATION_STATUS.md (100%)
 rename docs/{04_USER_GUIDES => guides}/DASHBOARD_MASTER_PLAN.md (100%)
 rename docs/{04_USER_GUIDES => guides}/DASHBOARD_REACT_IMPLEMENTATION.md (100%)
 rename docs/{04_USER_GUIDES => guides}/DASHBOARD_SESSION_SUMMARY.md (100%)
 rename docs/{04_USER_GUIDES => guides}/NEURAL_FEATURES_COMPLETE.md (100%)
 rename docs/{04_USER_GUIDES => guides}/NEURAL_WORKFLOW_TUTORIAL.md (100%)
 rename docs/{04_USER_GUIDES => guides}/PHYSICS_LOSS_GUIDE.md (100%)
 rename docs/{04_USER_GUIDES => guides}/TRAINING_DATA_EXPORT_GUIDE.md (100%)
 rename docs/{04_USER_GUIDES => guides}/hybrid_mode.md (100%)
 rename docs/{06_PHYSICS => physics}/00_INDEX.md (100%)
 rename docs/{06_PHYSICS => physics}/ZERNIKE_FUNDAMENTALS.md (100%)
 rename docs/{06_PHYSICS => physics}/ZERNIKE_OPD_METHOD.md (100%)
 create mode 100644 docs/plans/CLAUDE_CANVAS_INTEGRATION_V2.md
 create mode 100644 docs/plans/CLAUDE_CANVAS_PROJECT.md
 create mode 100644 docs/plans/DASHBOARD_CLAUDE_CODE_INTEGRATION.md
 create mode 100644 docs/plans/SAAS_ATOMIZER_ROADMAP.md
 rename docs/{ => plans}/TODO_NXOPEN_MCP_SETUP.md (100%)
 create mode 100644 docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md
 create mode 100644 docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md
 create mode 100644 docs/plans/UNIFIED_CONFIG_QUICKSTART.md
 create mode 100644 docs/plans/UNIFIED_CONFIG_RALPH_PROMPT.md
 create mode 100644 docs/reference/DEEP_INVESTIGATION_PROMPT.md
 create mode 100644 docs/reference/EXECUTION_PLAN_GENERATOR_PROMPT.md
 create mode 100644 docs/reviews/ARCHITECTURE_REVIEW.md
 create mode 100644 docs/reviews/ATOMIZER_ARCHITECTURE.svg
 create mode 100644 docs/reviews/COMPONENT_RELATIONSHIPS.md

diff --git a/.claude/ATOMIZER_CONTEXT.md b/.claude/ATOMIZER_CONTEXT.md
index d131ba85..cbbe3e1d 100644
--- a/.claude/ATOMIZER_CONTEXT.md
+++ b/.claude/ATOMIZER_CONTEXT.md
@@ -10,7 +10,7 @@ Load this FIRST on every new session, then route to specific protocols.
 
 **Atomizer** is an LLM-first FEA (Finite Element Analysis) optimization framework. Users describe optimization problems in natural language, and Claude orchestrates the entire workflow: model introspection, config generation, optimization execution, and results analysis.
 
-**Philosophy**: Talk, don't click. Engineers describe what they want; AI handles the rest.
+**Philosophy**: LLM-driven optimization. Engineers describe what they want; AI handles the rest.
 ---
 
@@ -501,7 +501,8 @@ The `DashboardDB` class creates Optuna-compatible schema for dashboard integrati
 | Component | Version | Last Updated |
 |-----------|---------|--------------|
-| ATOMIZER_CONTEXT | 1.8 | 2025-12-28 |
+| ATOMIZER_CONTEXT | 2.0 | 2026-01-20 |
+| Documentation Structure | 2.0 | 2026-01-20 |
 | BaseOptimizationRunner | 1.0 | 2025-12-07 |
 | GenericSurrogate | 1.0 | 2025-12-07 |
 | Study State Detector | 1.0 | 2025-12-07 |
@@ -520,4 +521,4 @@ The `DashboardDB` class creates Optuna-compatible schema for dashboard integrati
 
 ---
 
-*Atomizer: Where engineers talk, AI optimizes.*
+*Atomizer: LLM-driven structural optimization for engineering.*
diff --git a/.claude/skills/00_BOOTSTRAP.md b/.claude/skills/00_BOOTSTRAP.md
index d5fe157b..5517c666 100644
--- a/.claude/skills/00_BOOTSTRAP.md
+++ b/.claude/skills/00_BOOTSTRAP.md
@@ -24,7 +24,7 @@ requires_skills: []
 
 **Your Identity**: You are **Atomizer Claude** - a domain expert in FEA, optimization algorithms, and the Atomizer codebase. Not a generic assistant.
 
-**Core Philosophy**: "Talk, don't click." Users describe what they want; you configure and execute.
+**Core Philosophy**: LLM-driven optimization. Users describe what they want; you configure and execute.
 
 **NEW in v3.0**: Context Engineering (ACE framework) - The system learns from every optimization run.
diff --git a/.claude/skills/archive/00_BOOTSTRAP_V2.0_archived.md b/.claude/skills/archive/00_BOOTSTRAP_V2.0_archived.md
index a17e96d0..b716ee83 100644
--- a/.claude/skills/archive/00_BOOTSTRAP_V2.0_archived.md
+++ b/.claude/skills/archive/00_BOOTSTRAP_V2.0_archived.md
@@ -21,7 +21,7 @@ requires_skills: []
 
 **Your Identity**: You are **Atomizer Claude** - a domain expert in FEA, optimization algorithms, and the Atomizer codebase. Not a generic assistant.
 
-**Core Philosophy**: "Talk, don't click." Users describe what they want; you configure and execute.
+**Core Philosophy**: LLM-driven optimization. Users describe what they want; you configure and execute.
 
 ---
diff --git a/.claude/skills/modules/insights-catalog.md b/.claude/skills/modules/insights-catalog.md
index d6ab081b..d6dabbe7 100644
--- a/.claude/skills/modules/insights-catalog.md
+++ b/.claude/skills/modules/insights-catalog.md
@@ -112,7 +112,7 @@ python -m optimization_engine.insights generate studies/my_mirror --type zernike
 | 1-10 µm | **RECOMMENDED**: Use OPD method |
 | < 1 µm | Both methods equivalent |
 
-**Related Documentation**: [ZERNIKE_OPD_METHOD.md](../../../docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md)
+**Related Documentation**: [ZERNIKE_OPD_METHOD.md](../../../docs/physics/ZERNIKE_OPD_METHOD.md)
 
 ---
 
@@ -284,6 +284,6 @@ python -m optimization_engine.insights recommend studies/my_study
 ## Related Documentation
 
 - **Protocol Specification**: `docs/protocols/system/SYS_17_STUDY_INSIGHTS.md`
-- **OPD Method Physics**: `docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md`
+- **OPD Method Physics**: `docs/physics/ZERNIKE_OPD_METHOD.md`
 - **Zernike Integration**: `docs/ZERNIKE_INTEGRATION.md`
 - **Extractor Catalog**: `.claude/skills/modules/extractors-catalog.md`
diff --git a/.gitignore b/.gitignore
index 7d84ef0f..ed39cf62 100644
--- a/.gitignore
+++ b/.gitignore
@@ -109,3 +109,6 @@ _dat_run*.dat
 # Claude session temp files
 .claude-mcp-*.json
 .claude-prompt-*.md
+
+# Auto-generated documentation (regenerate with: python -m optimization_engine.auto_doc all)
+docs/generated/
diff --git a/CLAUDE.md b/CLAUDE.md
index c10a4e73..5445ecc4 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -68,7 +68,7 @@ This file provides:
 
 ## Core Philosophy
 
-**Talk, don't click.** Users describe what they want in plain language. You interpret, configure, execute, and explain.
+**LLM-driven optimization framework.** Users describe what they want in plain language. You interpret, configure, execute, and explain.
 ## Context Loading Layers
@@ -207,9 +207,9 @@ Atomizer/
 
 | Feature | Documentation |
 |---------|--------------|
-| **Canvas Builder** | `docs/04_USER_GUIDES/CANVAS.md` |
-| **Dashboard Overview** | `docs/04_USER_GUIDES/DASHBOARD.md` |
-| **Implementation Status** | `docs/04_USER_GUIDES/DASHBOARD_IMPLEMENTATION_STATUS.md` |
+| **Canvas Builder** | `docs/guides/CANVAS.md` |
+| **Dashboard Overview** | `docs/guides/DASHBOARD.md` |
+| **Implementation Status** | `docs/guides/DASHBOARD_IMPLEMENTATION_STATUS.md` |
 
 **Canvas V3.1 Features (AtomizerSpec v2.0):**
 - **AtomizerSpec v2.0**: Unified JSON configuration format
diff --git a/README.md b/README.md
index 5c6cf1fc..def0f4af 100644
--- a/README.md
+++ b/README.md
@@ -1,11 +1,11 @@
 # Atomizer
 
-> **Talk, don't click.** AI-native structural optimization for Siemens NX with neural network acceleration.
+> **LLM-driven structural optimization framework** for Siemens NX with neural network acceleration.
 
 [![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
 [![NX 2506+](https://img.shields.io/badge/NX-2506+-orange.svg)](https://www.plm.automation.siemens.com/global/en/products/nx/)
 [![License](https://img.shields.io/badge/license-Proprietary-red.svg)](LICENSE)
-[![Neural](https://img.shields.io/badge/neural-GNN%20powered-purple.svg)](docs/06_PHYSICS/)
+[![Neural](https://img.shields.io/badge/neural-GNN%20powered-purple.svg)](docs/physics/)
 
 ---
 
@@ -280,7 +280,7 @@ Atomizer/
 ├── studies/              # Optimization studies by geometry
 ├── docs/                 # Documentation
 │   ├── protocols/        # Protocol specifications
-│   └── 06_PHYSICS/       # Physics domain docs
+│   └── physics/          # Physics domain docs
 ├── knowledge_base/       # LAC persistent learning
 │   └── lac/              # Session insights, failures, patterns
 └── nx_journals/          # NX Open automation scripts
@@ -306,12 +306,12 @@ Atomizer/
 | [CLAUDE.md](CLAUDE.md) | System instructions for Claude |
 | [.claude/ATOMIZER_CONTEXT.md](.claude/ATOMIZER_CONTEXT.md) | Session context loader |
 | [docs/protocols/](docs/protocols/) | Protocol specifications |
-| [docs/06_PHYSICS/](docs/06_PHYSICS/) | Physics domain documentation |
+| [docs/physics/](docs/physics/) | Physics domain documentation |
 
 ### Physics Documentation
 
-- [ZERNIKE_FUNDAMENTALS.md](docs/06_PHYSICS/ZERNIKE_FUNDAMENTALS.md) - Zernike polynomial basics
-- [ZERNIKE_OPD_METHOD.md](docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md) - OPD method for lateral displacement
+- [ZERNIKE_FUNDAMENTALS.md](docs/physics/ZERNIKE_FUNDAMENTALS.md) - Zernike polynomial basics
+- [ZERNIKE_OPD_METHOD.md](docs/physics/ZERNIKE_OPD_METHOD.md) - OPD method for lateral displacement
 
 ---
 
@@ -356,8 +356,8 @@ Python and dependencies are pre-configured. Do not install additional packages.
 
 ## License
 
-Proprietary - Atomaste 2025
+Proprietary - Atomaste 2026
 
 ---
 
-*Atomizer: Where engineers talk, AI optimizes.*
+*Atomizer: LLM-driven structural optimization for engineering.*
diff --git a/docs/00_INDEX.md b/docs/00_INDEX.md
index 1266ef44..a987c121 100644
--- a/docs/00_INDEX.md
+++ b/docs/00_INDEX.md
@@ -1,9 +1,7 @@
 # Atomizer Documentation Index
 
-**Welcome to the Atomizer documentation!** This index provides a structured navigation hub for all documentation resources.
-
-**Last Updated**: 2025-11-25
-**Project Version**: 0.95.0 (95% complete - Neural Integration Complete!)
+**Last Updated**: 2026-01-20
+**Project Version**: 1.0.0 (AtomizerSpec v2.0 - Full LLM Integration)
 
 ---
 
@@ -11,361 +9,212 @@
 
 New to Atomizer? Start here:
 
-1. **[README.md](../README.md)** - Project overview, philosophy, and quick start guide
-2. **[Getting Started Tutorial](HOW_TO_EXTEND_OPTIMIZATION.md)** - Create your first optimization study
-3. **[Neural Features Guide](NEURAL_FEATURES_COMPLETE.md)** - Neural network acceleration (NEW!)
-4. **[Example Studies](../studies/)** - Working examples (UAV arm with neural, bracket)
+1. **[README.md](../README.md)** - Project overview and philosophy
+2. **[Getting Started](GETTING_STARTED.md)** - Installation, first study, dashboard
+3. **[Protocol System](protocols/README.md)** - How Atomizer is organized
+4. **[Example Studies](../studies/)** - Working examples
 
 ---
 
 ## Documentation Structure
 
-### 🧠 Neural Network Acceleration (NEW!)
+### Core Documentation
 
-**Core Neural Documentation**:
-- **[NEURAL_FEATURES_COMPLETE.md](NEURAL_FEATURES_COMPLETE.md)** - Complete guide to all neural features
-- **[NEURAL_WORKFLOW_TUTORIAL.md](NEURAL_WORKFLOW_TUTORIAL.md)** - Step-by-step: data → training → optimization
-- **[GNN_ARCHITECTURE.md](GNN_ARCHITECTURE.md)** - Technical deep-dive into GNN models
-- **[PHYSICS_LOSS_GUIDE.md](PHYSICS_LOSS_GUIDE.md)** - Loss function selection guide
+| Document | Purpose |
+|----------|---------|
+| **[GETTING_STARTED.md](GETTING_STARTED.md)** | Setup, first study, dashboard basics |
+| **[ARCHITECTURE.md](ARCHITECTURE.md)** | System architecture, hooks, data flow |
+| **[protocols/README.md](protocols/README.md)** | Protocol Operating System overview |
 
-**Integration Documentation**:
-- **[ATOMIZER_FIELD_INTEGRATION_PLAN.md](ATOMIZER_FIELD_INTEGRATION_PLAN.md)** - Integration roadmap (COMPLETE)
-- **[ATOMIZER_FIELD_NEURAL_OPTIMIZATION_GUIDE.md](ATOMIZER_FIELD_NEURAL_OPTIMIZATION_GUIDE.md)** - API reference
+### Protocol System
 
-**Quick Commands**:
-```bash
-# Run neural-accelerated optimization
-python run_optimization.py --trials 5000 --use-neural
+The Protocol Operating System (POS) provides structured workflows:
 
-# Train new model
-cd atomizer-field && python train_parametric.py --epochs 200
+```
+protocols/
+├── README.md                  # Protocol system overview
+├── operations/                # How-to guides (OP_01-08)
+│   ├── OP_01_CREATE_STUDY.md
+│   ├── OP_02_RUN_OPTIMIZATION.md
+│   ├── OP_03_MONITOR_PROGRESS.md
+│   ├── OP_04_ANALYZE_RESULTS.md
+│   ├── OP_05_EXPORT_TRAINING_DATA.md
+│   ├── OP_06_TROUBLESHOOT.md
+│   ├── OP_07_DISK_OPTIMIZATION.md
+│   └── OP_08_GENERATE_REPORT.md
+├── system/                    # Technical specifications (SYS_10-18)
+│   ├── SYS_10_IMSO.md         # Intelligent optimization
+│   ├── SYS_11_MULTI_OBJECTIVE.md
+│   ├── SYS_12_EXTRACTOR_LIBRARY.md
+│   ├── SYS_13_DASHBOARD_TRACKING.md
+│   ├── SYS_14_NEURAL_ACCELERATION.md
+│   ├── SYS_15_METHOD_SELECTOR.md
+│   ├── SYS_16_SELF_AWARE_TURBO.md
+│   ├── SYS_17_STUDY_INSIGHTS.md
+│   └── SYS_18_CONTEXT_ENGINEERING.md
+└── extensions/                # Extensibility (EXT_01-04)
+    ├── EXT_01_CREATE_EXTRACTOR.md
+    ├── EXT_02_CREATE_HOOK.md
+    ├── EXT_03_CREATE_PROTOCOL.md
+    └── EXT_04_CREATE_SKILL.md
 ```
 
-### 📋 01. Core Specifications
+### User Guides
 
-**[PROTOCOLS.md](PROTOCOLS.md)** - Master protocol specifications (ALL PROTOCOLS IN ONE PLACE)
-- Protocol 10: Intelligent Multi-Strategy Optimization (IMSO)
-- Protocol 11: Multi-Objective Support (MANDATORY for all components)
-- Protocol 13: Real-Time Dashboard Tracking
+Located in `guides/`:
 
-**Individual Protocol Documents** (detailed specifications):
-- [PROTOCOL_10_IMSO.md](PROTOCOL_10_IMSO.md) - Adaptive characterization, landscape analysis
-- [PROTOCOL_10_V2_IMPLEMENTATION.md](PROTOCOL_10_V2_IMPLEMENTATION.md) - Implementation summary
-- [PROTOCOL_10_V2_FIXES.md](PROTOCOL_10_V2_FIXES.md) - Bug fixes and improvements
-- [PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md](PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md) - Multi-objective requirements
-- [FIX_SUMMARY_PROTOCOL_11.md](FIX_SUMMARY_PROTOCOL_11.md) - Protocol 11 bug fixes
-- [PROTOCOL_13_DASHBOARD.md](PROTOCOL_13_DASHBOARD.md) - Dashboard implementation complete spec
+| Guide | Purpose |
+|-------|---------|
+| **[CANVAS.md](guides/CANVAS.md)** | Visual study builder (AtomizerSpec v2.0) |
+| **[DASHBOARD.md](guides/DASHBOARD.md)** | Dashboard overview and features |
+| **[NEURAL_FEATURES_COMPLETE.md](guides/NEURAL_FEATURES_COMPLETE.md)** | Neural acceleration guide |
+| **[NEURAL_WORKFLOW_TUTORIAL.md](guides/NEURAL_WORKFLOW_TUTORIAL.md)** | Data → Training → Optimization |
+| **[hybrid_mode.md](guides/hybrid_mode.md)** | Hybrid FEA/NN optimization |
+| **[TRAINING_DATA_EXPORT_GUIDE.md](guides/TRAINING_DATA_EXPORT_GUIDE.md)** | Exporting data for neural training |
 
-### 🏗️ 02. Architecture & Design
+### API Reference
 
-**Visual Architecture** (🆕 Comprehensive Diagrams):
-- [**Architecture Overview**](09_DIAGRAMS/architecture_overview.md) - Complete system architecture with Mermaid diagrams
-  - High-level system architecture
-  - Component interactions
-  - Data flow diagrams
-  - Philosophy and design principles
-  - Technology stack
-- [**Protocol Workflows**](09_DIAGRAMS/protocol_workflows.md) - Detailed protocol execution flows
-  - Protocol 10: IMSO workflow
-  - Protocol 11: Multi-objective decision trees
-  - Protocol 13: Real-time tracking
-  - LLM-assisted workflow (Hybrid Mode)
-  - All protocols integrated
+Located in `api/`:
 
-**System Architecture**:
-- [HOOK_ARCHITECTURE.md](HOOK_ARCHITECTURE.md) - Plugin system and lifecycle hooks
-- [NX_SESSION_MANAGEMENT.md](NX_SESSION_MANAGEMENT.md) - NX Nastran integration details
-- [SYSTEM_CONFIGURATION.md](SYSTEM_CONFIGURATION.md) - Configuration format and options
+| Document | Purpose |
+|----------|---------|
+| **[system_configuration.md](api/system_configuration.md)** | Configuration format reference |
+| **[nx_integration.md](api/nx_integration.md)** | NX Open API integration |
+| **[GNN_ARCHITECTURE.md](api/GNN_ARCHITECTURE.md)** | Graph Neural Network details |
+| **[NXOPEN_RESOURCES.md](api/NXOPEN_RESOURCES.md)** | NX Open documentation resources |
 
-**Extractors & Data Flow**:
-- [HOW_TO_EXTEND_OPTIMIZATION.md](HOW_TO_EXTEND_OPTIMIZATION.md) - Unified extractor library (Protocol 12)
+### Physics Documentation
 
-### 📊 03. Dashboard
+Located in `physics/`:
 
-**Dashboard Documentation**:
-- [DASHBOARD_MASTER_PLAN.md](DASHBOARD_MASTER_PLAN.md) - Complete 3-page architecture blueprint
-- [DASHBOARD_REACT_IMPLEMENTATION.md](DASHBOARD_REACT_IMPLEMENTATION.md) - React frontend implementation guide
-- [DASHBOARD_IMPLEMENTATION_STATUS.md](DASHBOARD_IMPLEMENTATION_STATUS.md) - Current progress and testing
-- [DASHBOARD_SESSION_SUMMARY.md](DASHBOARD_SESSION_SUMMARY.md) - Features and usage summary
+| Document | Purpose |
+|----------|---------|
+| **[ZERNIKE_FUNDAMENTALS.md](physics/ZERNIKE_FUNDAMENTALS.md)** | Zernike polynomial basics |
+| **[ZERNIKE_OPD_METHOD.md](physics/ZERNIKE_OPD_METHOD.md)** | OPD method for mirror optimization |
 
-**Quick Commands**:
-```bash
-# Start backend (port 8000)
-cd atomizer-dashboard/backend && python -m uvicorn api.main:app --reload --port 8000
+### Development
 
-# Start frontend (port 3001)
-cd atomizer-dashboard/frontend && npm run dev
-```
+Located in `development/`:
 
-### 🔧 04. Development
+| Document | Purpose |
+|----------|---------|
+| **[DEVELOPMENT_GUIDANCE.md](development/DEVELOPMENT_GUIDANCE.md)** | Development guidelines |
+| **[DEVELOPMENT_ROADMAP.md](development/DEVELOPMENT_ROADMAP.md)** | Future plans |
+| **[Philosophy.md](development/Philosophy.md)** | Design philosophy |
 
-**For Contributors**:
-- [../DEVELOPMENT.md](../DEVELOPMENT.md) - Development guide, workflow, testing
-- [TODAY_PLAN_NOV18.md](TODAY_PLAN_NOV18.md) - Daily development planning example
-- [LESSONS_LEARNED.md](LESSONS_LEARNED.md) - Lessons from development sessions
+### Diagrams
 
-**Phase Planning**:
-- [PHASE_3_1_COMPLETION_SUMMARY.md](PHASE_3_1_COMPLETION_SUMMARY.md) - Phase 3.1 completion
-- [PHASE_3_2_INTEGRATION_PLAN.md](PHASE_3_2_INTEGRATION_PLAN.md) - Current phase plan
+Located in `diagrams/`:
 
-### 📖 05. User Guides
-
-**Creating & Running Studies**:
-- [HOW_TO_EXTEND_OPTIMIZATION.md](HOW_TO_EXTEND_OPTIMIZATION.md) - Complete guide
-  - Creating custom extractors
-  - Defining objectives
-  - Setting up design variables
-  - Configuring constraints
-
-**Using the Dashboard**:
-- Start with [DASHBOARD_SESSION_SUMMARY.md](DASHBOARD_SESSION_SUMMARY.md)
-- See [DASHBOARD_MASTER_PLAN.md](DASHBOARD_MASTER_PLAN.md) for full capabilities
-
-**Multi-Objective Optimization**:
-- Read [PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md](PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md)
-- Check example: `studies/bracket_stiffness_optimization_V3/`
-
-### 🔬 06. Advanced Topics
-
-**Intelligent Optimization (Protocol 10)**:
-- [PROTOCOL_10_IMSO.md](PROTOCOL_10_IMSO.md) - How it works
-- [PROTOCOL_10_V2_IMPLEMENTATION.md](PROTOCOL_10_V2_IMPLEMENTATION.md) - Implementation details
-- [PROTOCOL_10_V2_FIXES.md](PROTOCOL_10_V2_FIXES.md) - Bug fixes and improvements
-
-**LLM Integration** (Hybrid Mode):
-- [HYBRID_MODE_GUIDE.md](HYBRID_MODE_GUIDE.md) - Using LLM-assisted workflows
-
-**NX Integration**:
-- [NX_SESSION_MANAGEMENT.md](NX_SESSION_MANAGEMENT.md) - Session handling, solving, extraction
-- [NASTRAN_VISUALIZATION_RESEARCH.md](NASTRAN_VISUALIZATION_RESEARCH.md) - Visualizing OP2/BDF results with pyNastran + PyVista
-
-### 📚 07. Session Summaries & Historical
-
-**Recent Sessions** (Nov 2025):
-- [GOOD_MORNING_NOV18.md](GOOD_MORNING_NOV18.md) - Morning summary Nov 18
-- [DASHBOARD_SESSION_SUMMARY.md](DASHBOARD_SESSION_SUMMARY.md) - Dashboard completion
-- [PROTOCOL_13_DASHBOARD.md](PROTOCOL_13_DASHBOARD.md) - Protocol 13 summary
-
-**Historical Documents** (archived for reference):
-- Various session summaries in docs/ folder
-- Phase completion documents
-
-### 🎨 09. Visual Diagrams
-
-**Comprehensive Visual Documentation**:
-- [**Diagram Index**](09_DIAGRAMS/00_INDEX.md) - All visual documentation hub
-- [**Architecture Overview**](09_DIAGRAMS/architecture_overview.md) - System architecture diagrams
-- [**Protocol Workflows**](09_DIAGRAMS/protocol_workflows.md) - Protocol execution flows
-
-**Viewing Diagrams**:
-- Render automatically in GitHub and VS Code (with Markdown Preview Mermaid extension)
-- Copy to https://mermaid.live/ for online viewing
-- Supported by MkDocs, Docusaurus, and most documentation generators
+| Document | Purpose |
+|----------|---------|
+| **[architecture_overview.md](diagrams/architecture_overview.md)** | System architecture diagrams |
+| **[protocol_workflows.md](diagrams/protocol_workflows.md)** | Protocol execution flows |
 
 ---
 
-## Documentation by Role
+## Documentation by Task
 
-### For New Users
+### Creating Studies
 
-Start here for a guided learning path:
+1. **[GETTING_STARTED.md](GETTING_STARTED.md)** - First study tutorial
+2. **[OP_01_CREATE_STUDY.md](protocols/operations/OP_01_CREATE_STUDY.md)** - Detailed creation protocol
+3. **[guides/CANVAS.md](guides/CANVAS.md)** - Visual study builder
 
-1. Read [../README.md](../README.md) - Understand the project
-2. Review [PROTOCOLS.md](PROTOCOLS.md) - Learn about the architecture
-3. Try [HOW_TO_EXTEND_OPTIMIZATION.md](HOW_TO_EXTEND_OPTIMIZATION.md) - Build your first study
-4. Explore dashboard with [DASHBOARD_SESSION_SUMMARY.md](DASHBOARD_SESSION_SUMMARY.md)
+### Running Optimizations
 
-### For Developers
+1. **[OP_02_RUN_OPTIMIZATION.md](protocols/operations/OP_02_RUN_OPTIMIZATION.md)** - Run protocol
+2. **[SYS_10_IMSO.md](protocols/system/SYS_10_IMSO.md)** - Intelligent optimization
+3. **[SYS_15_METHOD_SELECTOR.md](protocols/system/SYS_15_METHOD_SELECTOR.md)** - Method selection
 
-Contributing to Atomizer:
+### Neural Acceleration
 
-1. [../DEVELOPMENT.md](../DEVELOPMENT.md) - Development workflow and guidelines
-2. [PROTOCOLS.md](PROTOCOLS.md) - Understand protocol-based architecture
-3. [HOOK_ARCHITECTURE.md](HOOK_ARCHITECTURE.md) - Plugin system internals
-4. [HOW_TO_EXTEND_OPTIMIZATION.md](HOW_TO_EXTEND_OPTIMIZATION.md) - Extractor library details
+1. **[guides/NEURAL_FEATURES_COMPLETE.md](guides/NEURAL_FEATURES_COMPLETE.md)** - Overview
+2. **[SYS_14_NEURAL_ACCELERATION.md](protocols/system/SYS_14_NEURAL_ACCELERATION.md)** - Technical spec
+3. **[guides/NEURAL_WORKFLOW_TUTORIAL.md](guides/NEURAL_WORKFLOW_TUTORIAL.md)** - Step-by-step
 
-### For Researchers
+### Analyzing Results
 
-Using Atomizer for research:
+1. **[OP_04_ANALYZE_RESULTS.md](protocols/operations/OP_04_ANALYZE_RESULTS.md)** - Analysis protocol
+2. **[SYS_17_STUDY_INSIGHTS.md](protocols/system/SYS_17_STUDY_INSIGHTS.md)** - Physics visualizations
 
-1. [PROTOCOL_10_IMSO.md](PROTOCOL_10_IMSO.md) - Intelligent optimization algorithms
-2. [PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md](PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md) - Multi-objective capabilities
-3. [DASHBOARD_MASTER_PLAN.md](DASHBOARD_MASTER_PLAN.md) - Visualization and analysis tools
-4. Example studies in `studies/` folder
+### Troubleshooting
+
+1. **[OP_06_TROUBLESHOOT.md](protocols/operations/OP_06_TROUBLESHOOT.md)** - Troubleshooting guide
+2. **[GETTING_STARTED.md#troubleshooting](GETTING_STARTED.md#troubleshooting)** - Common issues
 
 ---
 
-## Protocol Quick Reference
+## Quick Reference
 
-| Protocol | Name | Status | Priority | Version |
-|----------|------|--------|----------|---------|
-| **10** | Intelligent Multi-Strategy Optimization | ✅ Complete | P0 | v2.1 |
-| **11** | Multi-Objective Support | ✅ Complete | P0 (MANDATORY) | v1.0 |
-| **13** | Real-Time Dashboard Tracking | ✅ Complete | P1 | v1.0 |
-| **Neural** | GNN Acceleration (AtomizerField) | ✅ Complete | P0 | v1.0 |
+### Protocol Summary
 
-**See [PROTOCOLS.md](PROTOCOLS.md) for complete specifications.**
+| Protocol | Name | Purpose |
+|----------|------|---------|
+| **OP_01** | Create Study | Study creation workflow |
+| **OP_02** | Run Optimization | Execution workflow |
+| **OP_03** | Monitor Progress | Real-time monitoring |
+| **OP_04** | Analyze Results | Results analysis |
+| **OP_06** | Troubleshoot | Debugging issues |
+| **SYS_10** | IMSO | Intelligent optimization |
+| **SYS_12** | Extractors | Physics extraction library |
+| **SYS_14** | Neural | Neural network acceleration |
 
-## Neural Features Quick Reference
-
-| Feature | Status | Performance |
-|---------|--------|-------------|
-| **Parametric GNN** | ✅ Production | 4.5ms inference, 2,200x speedup |
-| **Field Predictor GNN** | ✅ Production | 50ms inference, full field output |
-| **Physics-Informed Loss** | ✅ Production | <5% prediction error |
-| **Hybrid Optimization** | ✅ Production | 97% NN usage rate |
-| **Uncertainty Quantification** | ✅ Production | Ensemble-based confidence |
-| **Training Pipeline** | ✅ Production | BDF/OP2 → GNN → Deploy |
-
-**See [NEURAL_FEATURES_COMPLETE.md](NEURAL_FEATURES_COMPLETE.md) for details.**
-
----
-
-## Common Tasks
-
-### Running an Optimization
+### Essential Commands
 
 ```bash
-# Navigate to study
-cd studies/my_study
+# Activate environment
+conda activate atomizer
 
 # Run optimization
-python run_optimization.py --trials 50
+python run_optimization.py --start --trials 50 -# View in dashboard -# Open http://localhost:3001 and select study +# Resume interrupted run +python run_optimization.py --start --resume + +# Start dashboard +python launch_dashboard.py + +# Neural turbo mode +python run_nn_optimization.py --turbo --nn-trials 5000 ``` -### Creating a New Study +### Key Extractors -```bash -# Use template (recommended) -python create_study.py --name my_study --model path/to/model.prt +| ID | Physics | Function | +|----|---------|----------| +| E1 | Displacement | `extract_displacement()` | +| E2 | Frequency | `extract_frequency()` | +| E3 | Stress | `extract_solid_stress()` | +| E4 | Mass (BDF) | `extract_mass_from_bdf()` | +| E5 | Mass (CAD) | `extract_mass_from_expression()` | +| E22 | Zernike OPD | `extract_zernike_opd()` | -# Or manually -mkdir -p studies/my_study/1_setup/model -# Copy model files -# Edit optimization_config.json -# Create run_optimization.py -``` - -### Checking Protocol 10 Intelligence Reports - -```bash -# View characterization progress -cat studies/my_study/2_results/intelligent_optimizer/characterization_progress.json - -# View final intelligence report -cat studies/my_study/2_results/intelligent_optimizer/intelligence_report.json - -# View strategy transitions -cat studies/my_study/2_results/intelligent_optimizer/strategy_transitions.json -``` +Full catalog: [SYS_12_EXTRACTOR_LIBRARY.md](protocols/system/SYS_12_EXTRACTOR_LIBRARY.md) --- -## File Organization +## Archive -``` -Atomizer/ -├── README.md # Project overview -├── DEVELOPMENT.md # Development guide -├── docs/ -│ ├── 00_INDEX.md # THIS FILE - Documentation hub -│ ├── PROTOCOLS.md # Master protocol specifications -│ ├── PROTOCOL_10_*.md # Protocol 10 detailed docs -│ ├── PROTOCOL_11_*.md # Protocol 11 detailed docs -│ ├── PROTOCOL_13_*.md # Protocol 13 detailed docs -│ ├── DASHBOARD_*.md # Dashboard documentation -│ ├── HOOK_ARCHITECTURE.md # Plugin system -│ ├── NX_SESSION_MANAGEMENT.md # NX 
integration -│ ├── HOW_TO_EXTEND_OPTIMIZATION.md # User guide -│ └── [session summaries] # Historical documents -├── optimization_engine/ # Core optimization code -├── atomizer-dashboard/ # Dashboard frontend & backend -├── studies/ # Optimization studies -└── examples/ # Example models -``` +Historical documents are preserved in `archive/`: + +- `archive/historical/` - Legacy documents, old protocols +- `archive/marketing/` - Briefings, presentations +- `archive/session_summaries/` - Past development sessions --- -## Getting Help +## LLM Resources -### Documentation Issues +For Claude/AI integration: -- **Missing information?** Check [PROTOCOLS.md](PROTOCOLS.md) for comprehensive specs -- **Protocol questions?** See individual protocol docs (PROTOCOL_XX_*.md) -- **Dashboard issues?** Check [DASHBOARD_IMPLEMENTATION_STATUS.md](DASHBOARD_IMPLEMENTATION_STATUS.md) - -### Technical Issues - -- **NX integration problems?** See [NX_SESSION_MANAGEMENT.md](NX_SESSION_MANAGEMENT.md) -- **Multi-objective errors?** Check [PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md](PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md) -- **Protocol 10 not working?** See [PROTOCOL_10_V2_FIXES.md](PROTOCOL_10_V2_FIXES.md) - -### Community - -- **GitHub Issues**: https://github.com/yourusername/Atomizer/issues -- **Discussions**: https://github.com/yourusername/Atomizer/discussions -- **Email**: your.email@example.com +| Resource | Purpose | +|----------|---------| +| **[../CLAUDE.md](../CLAUDE.md)** | System instructions | +| **[../.claude/ATOMIZER_CONTEXT.md](../.claude/ATOMIZER_CONTEXT.md)** | Session context | +| **[../.claude/skills/](../.claude/skills/)** | Skill modules | --- -## Document Conventions - -### Naming System - -Documentation files use numbered prefixes for organization: -- `00_*` - Index and navigation files -- `01_*` - Core specifications (protocols) -- `02_*` - Architecture documentation -- `03_*` - User guides -- Individual protocol docs use descriptive names (PROTOCOL_XX_NAME.md) - -### 
Status Indicators - -- ✅ Complete - Fully implemented and tested -- 🔨 In Progress - Active development -- 📋 Planned - Design phase -- ⏳ Pending - Not yet started - -### Version Format - -- **Major.Minor.Patch** (e.g., v2.1.0) -- **Major**: Breaking changes or architectural redesign -- **Minor**: New features, backward compatible -- **Patch**: Bug fixes - ---- - -## Contributing to Documentation - -### Updating Documentation - -1. Keep [00_INDEX.md](00_INDEX.md) (this file) up to date with new docs -2. Update [PROTOCOLS.md](PROTOCOLS.md) when adding/modifying protocols -3. Maintain [../DEVELOPMENT.md](../DEVELOPMENT.md) with current status -4. Add session summaries for major development sessions - -### Documentation Style - -- Use clear, concise language -- Include code examples -- Add diagrams for complex concepts -- Follow Markdown best practices -- Keep table of contents updated - -### Review Process - -1. Create pull request with documentation changes -2. Ensure cross-references are valid -3. Update index files (this file, PROTOCOLS.md) -4. Check for broken links - ---- - -**Last Updated**: 2025-11-21 -**Maintained By**: Atomizer Development Team -**Next Review**: When new protocols or major features are added - -For questions about this documentation structure, open an issue on GitHub. +**Last Updated**: 2026-01-20 +**Maintained By**: Antoine / Atomaste diff --git a/docs/02_ARCHITECTURE.md b/docs/ARCHITECTURE.md similarity index 100% rename from docs/02_ARCHITECTURE.md rename to docs/ARCHITECTURE.md diff --git a/docs/GETTING_STARTED.md b/docs/GETTING_STARTED.md new file mode 100644 index 00000000..53ac1cef --- /dev/null +++ b/docs/GETTING_STARTED.md @@ -0,0 +1,401 @@ +# Getting Started with Atomizer + +**Last Updated**: 2026-01-20 + +This guide walks you through setting up Atomizer and running your first optimization study. + +--- + +## Table of Contents + +1. [Prerequisites](#prerequisites) +2. [Quick Setup](#quick-setup) +3. 
[Project Structure](#project-structure) +4. [Your First Optimization Study](#your-first-optimization-study) +5. [Using the Dashboard](#using-the-dashboard) +6. [Neural Acceleration (Optional)](#neural-acceleration-optional) +7. [Next Steps](#next-steps) + +--- + +## Prerequisites + +### Required Software + +| Software | Version | Purpose | +|----------|---------|---------| +| **Siemens NX** | 2506+ | CAD/FEA modeling | +| **NX Nastran** | Included with NX | FEA solver | +| **Python** | 3.10+ | Core engine | +| **Anaconda** | Latest | Environment management | +| **Git** | Latest | Version control | + +### Hardware Recommendations + +- **RAM**: 16GB minimum, 32GB recommended +- **Storage**: SSD with 50GB+ free space (FEA files are large) +- **CPU**: Multi-core for parallel FEA runs + +--- + +## Quick Setup + +### 1. Clone the Repository + +```bash +git clone http://192.168.86.50:3000/Antoine/Atomizer.git +cd Atomizer +``` + +### 2. Activate the Conda Environment + +The `atomizer` environment is pre-configured with all dependencies. + +```bash +conda activate atomizer +``` + +**Important**: Always use this environment. Do not install additional packages. + +### 3. Verify Installation + +```bash +# Check Python path +python --version # Should show Python 3.10+ + +# Verify core imports +python -c "from optimization_engine.core.runner import OptimizationRunner; print('OK')" +``` + +### 4. Configure NX Path (if needed) + +The default NX installation path is `C:\Program Files\Siemens\NX2506\`. 
If yours differs, update it in your study's `optimization_config.json`: + +```json +{ + "nx_settings": { + "nx_install_path": "C:\\Program Files\\Siemens\\NX2506" + } +} +``` + +--- + +## Project Structure + +``` +Atomizer/ +├── CLAUDE.md # AI assistant instructions +├── README.md # Project overview +├── .claude/ # LLM configuration +│ ├── ATOMIZER_CONTEXT.md # Session context +│ └── skills/ # Claude skill modules +├── optimization_engine/ # Core Python package +│ ├── core/ # Optimization runners +│ ├── extractors/ # Physics extraction (20+) +│ ├── nx/ # NX/Nastran integration +│ ├── gnn/ # Neural network surrogates +│ └── study/ # Study management +├── atomizer-dashboard/ # Web dashboard +│ ├── backend/ # FastAPI server +│ └── frontend/ # React UI +├── studies/ # Your optimization studies +│ ├── M1_Mirror/ # Mirror studies +│ ├── Simple_Bracket/ # Bracket studies +│ └── ... +├── docs/ # Documentation +│ ├── protocols/ # Protocol Operating System +│ ├── guides/ # User guides +│ └── physics/ # Physics documentation +└── knowledge_base/ # Learning system (LAC) +``` + +--- + +## Your First Optimization Study + +### Option A: Using Claude (Recommended) + +The easiest way to create a study is through natural language: + +``` +You: "Create a new study to optimize my bracket for minimum mass + with stress under 200 MPa. The model is at C:\Models\bracket.prt" + +Claude: [Analyzes model, creates study, generates configuration] +``` + +### Option B: Manual Creation + +#### Step 1: Create Study Directory + +```bash +# Create study under appropriate geometry type +mkdir -p studies/Simple_Bracket/my_first_study/1_setup/model +mkdir -p studies/Simple_Bracket/my_first_study/2_iterations +mkdir -p studies/Simple_Bracket/my_first_study/3_results +``` + +#### Step 2: Copy NX Files + +Copy your NX model files to the study: +- `Model.prt` - Geometry part +- `Model_fem1.fem` - FEM file +- `Model_sim1.sim` - Simulation file +- `Model_fem1_i.prt` - Idealized part (IMPORTANT!) 
+ +```bash +cp /path/to/your/model/* studies/Simple_Bracket/my_first_study/1_setup/model/ +``` + +#### Step 3: Create Configuration + +Create `optimization_config.json` in your study root: + +```json +{ + "study_name": "my_first_study", + "description": "Bracket mass optimization with stress constraint", + + "design_variables": [ + { + "name": "thickness", + "expression_name": "web_thickness", + "min": 2.0, + "max": 10.0, + "initial": 5.0 + } + ], + + "objectives": [ + { + "name": "mass", + "type": "minimize", + "extractor": "extract_mass_from_bdf" + } + ], + + "constraints": [ + { + "name": "max_stress", + "type": "less_than", + "value": 200.0, + "extractor": "extract_solid_stress", + "extractor_args": {"element_type": "ctetra"} + } + ], + + "optimization": { + "method": "TPE", + "n_trials": 50 + }, + + "nx_settings": { + "nx_install_path": "C:\\Program Files\\Siemens\\NX2506", + "simulation_timeout_s": 600 + } +} +``` + +#### Step 4: Run the Optimization + +```bash +cd studies/Simple_Bracket/my_first_study +python run_optimization.py --start --trials 50 +``` + +### Understanding the Output + +During optimization: +- **Trial folders** are created in `2_iterations/` (trial_0001, trial_0002, ...) 
+- **Results** are logged to `3_results/study.db` (Optuna database) +- **Progress** is printed to console and logged to `optimization.log` + +``` +Trial 15/50: mass=2.34 kg, stress=185.2 MPa [FEASIBLE] + Best so far: mass=2.12 kg (trial #12) +``` + +--- + +## Using the Dashboard + +### Starting the Dashboard + +```bash +# From project root +python launch_dashboard.py +``` + +This starts: +- **Backend**: FastAPI at http://localhost:8000 +- **Frontend**: React at http://localhost:3003 + +### Dashboard Features + +| Tab | Purpose | +|-----|---------| +| **Home** | Study selection, creation | +| **Canvas** | Visual study builder (AtomizerSpec v2.0) | +| **Dashboard** | Real-time monitoring, convergence plots | +| **Analysis** | Pareto fronts, parallel coordinates | +| **Insights** | Physics visualizations (Zernike, stress fields) | + +### Canvas Builder + +The Canvas provides a visual, node-based interface: + +1. **Add Model Node** - Select your .sim file +2. **Add Design Variables** - Link to NX expressions +3. **Add Extractors** - Choose physics to extract +4. **Add Objectives** - Define what to optimize +5. **Connect Nodes** - Create the optimization flow +6. **Execute** - Generate and run the study + +--- + +## Neural Acceleration (Optional) + +For studies with 50+ completed FEA trials, you can train a neural surrogate for 2000x+ speedup. + +### When to Use Neural Acceleration + +| Scenario | Use Neural? 
| +|----------|-------------| +| < 30 trials needed | No - FEA is fine | +| 30-100 trials | Maybe - depends on FEA time | +| > 100 trials | Yes - significant time savings | +| Exploratory optimization | Yes - explore more designs | + +### Training a Surrogate + +```bash +cd studies/M1_Mirror/m1_mirror_adaptive_V15 + +# Train on existing FEA data +python -m optimization_engine.gnn.train_zernike_gnn V15 --epochs 200 +``` + +### Running Turbo Mode + +```bash +# Run 5000 GNN predictions, validate top candidates with FEA +python run_nn_optimization.py --turbo --nn-trials 5000 +``` + +### Performance Comparison + +| Method | Time per Evaluation | 100 Trials | +|--------|---------------------|------------| +| FEA only | 10-30 minutes | 17-50 hours | +| GNN Turbo | 4.5 milliseconds | ~30 seconds | + +--- + +## Next Steps + +### Learn the Protocol System + +Atomizer uses a layered protocol system: + +| Layer | Location | Purpose | +|-------|----------|---------| +| **Operations** | `docs/protocols/operations/` | How to create, run, analyze | +| **System** | `docs/protocols/system/` | Technical specifications | +| **Extensions** | `docs/protocols/extensions/` | How to extend Atomizer | + +Key protocols to read: +- **OP_01**: Creating studies +- **OP_02**: Running optimizations +- **SYS_12**: Available extractors +- **SYS_14**: Neural acceleration + +### Explore Available Extractors + +Atomizer includes 20+ physics extractors: + +| Category | Examples | +|----------|----------| +| **Mechanical** | Displacement, stress, strain energy | +| **Modal** | Frequency, mode shapes | +| **Thermal** | Temperature, heat flux | +| **Mass** | BDF mass, CAD mass | +| **Optical** | Zernike wavefront error | + +Full catalog: `docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md` + +### Use Claude for Complex Tasks + +For complex optimizations, describe your goals naturally: + +``` +"Set up a multi-objective optimization for my UAV arm: + - Minimize mass + - Maximize first natural frequency + - 
Keep stress under 150 MPa + Use NSGA-II with 100 trials" +``` + +Claude will: +1. Analyze your model +2. Suggest appropriate extractors +3. Configure the optimization +4. Generate all necessary files + +--- + +## Troubleshooting + +### Common Issues + +| Issue | Solution | +|-------|----------| +| "NX not found" | Check `nx_install_path` in config | +| "Mesh not updating" | Ensure `*_i.prt` (idealized part) is copied | +| "Solver timeout" | Increase `simulation_timeout_s` | +| "Import error" | Verify `conda activate atomizer` | + +### Getting Help + +1. Check `docs/protocols/operations/OP_06_TROUBLESHOOT.md` +2. Ask Claude: "Why is my optimization failing?" +3. Review `3_results/optimization.log` + +--- + +## Quick Reference + +### Essential Commands + +```bash +# Activate environment +conda activate atomizer + +# Run optimization +python run_optimization.py --start --trials 50 + +# Resume interrupted run +python run_optimization.py --start --resume + +# Test single trial +python run_optimization.py --test + +# Start dashboard +python launch_dashboard.py + +# Check study status +python -c "from optimization_engine.study.state import get_study_status; print(get_study_status('.'))" +``` + +### Key Files + +| File | Purpose | +|------|---------| +| `optimization_config.json` | Study configuration | +| `atomizer_spec.json` | AtomizerSpec v2.0 (Canvas) | +| `run_optimization.py` | FEA optimization script | +| `3_results/study.db` | Optuna database | + +--- + +*Ready to optimize? 
Start with a simple study, then explore advanced features like neural acceleration and multi-objective optimization.* diff --git a/docs/05_API_REFERENCE/GNN_ARCHITECTURE.md b/docs/api/GNN_ARCHITECTURE.md similarity index 100% rename from docs/05_API_REFERENCE/GNN_ARCHITECTURE.md rename to docs/api/GNN_ARCHITECTURE.md diff --git a/docs/05_API_REFERENCE/NXOPEN_INTELLISENSE_SETUP.md b/docs/api/NXOPEN_INTELLISENSE_SETUP.md similarity index 100% rename from docs/05_API_REFERENCE/NXOPEN_INTELLISENSE_SETUP.md rename to docs/api/NXOPEN_INTELLISENSE_SETUP.md diff --git a/docs/05_API_REFERENCE/NXOPEN_RESOURCES.md b/docs/api/NXOPEN_RESOURCES.md similarity index 100% rename from docs/05_API_REFERENCE/NXOPEN_RESOURCES.md rename to docs/api/NXOPEN_RESOURCES.md diff --git a/docs/05_API_REFERENCE/NX_FILE_STRUCTURE_PROTOCOL.md b/docs/api/NX_FILE_STRUCTURE_PROTOCOL.md similarity index 100% rename from docs/05_API_REFERENCE/NX_FILE_STRUCTURE_PROTOCOL.md rename to docs/api/NX_FILE_STRUCTURE_PROTOCOL.md diff --git a/docs/05_API_REFERENCE/nx_integration.md b/docs/api/nx_integration.md similarity index 100% rename from docs/05_API_REFERENCE/nx_integration.md rename to docs/api/nx_integration.md diff --git a/docs/05_API_REFERENCE/system_configuration.md b/docs/api/system_configuration.md similarity index 100% rename from docs/05_API_REFERENCE/system_configuration.md rename to docs/api/system_configuration.md diff --git a/docs/01_PROTOCOLS.md b/docs/archive/historical/01_PROTOCOLS_legacy.md similarity index 99% rename from docs/01_PROTOCOLS.md rename to docs/archive/historical/01_PROTOCOLS_legacy.md index 1abb7271..6cf9bf73 100644 --- a/docs/01_PROTOCOLS.md +++ b/docs/archive/historical/01_PROTOCOLS_legacy.md @@ -1,7 +1,7 @@ # Atomizer Protocol Specifications -**Last Updated**: 2025-11-21 -**Status**: Active +**Last Updated**: 2026-01-20 +**Status**: ARCHIVED - See docs/protocols/ for current protocol documentation **Applies To**: All Atomizer optimization systems --- diff --git 
a/docs/03_GETTING_STARTED.md b/docs/archive/historical/03_GETTING_STARTED_legacy.md similarity index 100% rename from docs/03_GETTING_STARTED.md rename to docs/archive/historical/03_GETTING_STARTED_legacy.md diff --git a/docs/08_ARCHIVE/historical/ARCHITECTURE_REFACTOR_NOV17.md b/docs/archive/historical/ARCHITECTURE_REFACTOR_NOV17.md similarity index 100% rename from docs/08_ARCHIVE/historical/ARCHITECTURE_REFACTOR_NOV17.md rename to docs/archive/historical/ARCHITECTURE_REFACTOR_NOV17.md diff --git a/docs/08_ARCHIVE/historical/BRACKET_STUDY_ISSUES_LOG.md b/docs/archive/historical/BRACKET_STUDY_ISSUES_LOG.md similarity index 100% rename from docs/08_ARCHIVE/historical/BRACKET_STUDY_ISSUES_LOG.md rename to docs/archive/historical/BRACKET_STUDY_ISSUES_LOG.md diff --git a/docs/08_ARCHIVE/historical/CRITICAL_ISSUES_ROADMAP.md b/docs/archive/historical/CRITICAL_ISSUES_ROADMAP.md similarity index 100% rename from docs/08_ARCHIVE/historical/CRITICAL_ISSUES_ROADMAP.md rename to docs/archive/historical/CRITICAL_ISSUES_ROADMAP.md diff --git a/docs/08_ARCHIVE/historical/FEATURE_REGISTRY_ARCHITECTURE.md b/docs/archive/historical/FEATURE_REGISTRY_ARCHITECTURE.md similarity index 100% rename from docs/08_ARCHIVE/historical/FEATURE_REGISTRY_ARCHITECTURE.md rename to docs/archive/historical/FEATURE_REGISTRY_ARCHITECTURE.md diff --git a/docs/08_ARCHIVE/historical/FIX_VALIDATOR_PRUNING.md b/docs/archive/historical/FIX_VALIDATOR_PRUNING.md similarity index 100% rename from docs/08_ARCHIVE/historical/FIX_VALIDATOR_PRUNING.md rename to docs/archive/historical/FIX_VALIDATOR_PRUNING.md diff --git a/docs/08_ARCHIVE/historical/GOOD_MORNING_NOV18.md b/docs/archive/historical/GOOD_MORNING_NOV18.md similarity index 100% rename from docs/08_ARCHIVE/historical/GOOD_MORNING_NOV18.md rename to docs/archive/historical/GOOD_MORNING_NOV18.md diff --git a/docs/08_ARCHIVE/historical/INDEX_OLD.md b/docs/archive/historical/INDEX_OLD.md similarity index 100% rename from 
docs/08_ARCHIVE/historical/INDEX_OLD.md rename to docs/archive/historical/INDEX_OLD.md diff --git a/docs/08_ARCHIVE/historical/LESSONS_LEARNED.md b/docs/archive/historical/LESSONS_LEARNED.md similarity index 100% rename from docs/08_ARCHIVE/historical/LESSONS_LEARNED.md rename to docs/archive/historical/LESSONS_LEARNED.md diff --git a/docs/08_ARCHIVE/historical/NXOPEN_DOCUMENTATION_INTEGRATION_STRATEGY.md b/docs/archive/historical/NXOPEN_DOCUMENTATION_INTEGRATION_STRATEGY.md similarity index 100% rename from docs/08_ARCHIVE/historical/NXOPEN_DOCUMENTATION_INTEGRATION_STRATEGY.md rename to docs/archive/historical/NXOPEN_DOCUMENTATION_INTEGRATION_STRATEGY.md diff --git a/docs/08_ARCHIVE/historical/NX_EXPRESSION_IMPORT_SYSTEM.md b/docs/archive/historical/NX_EXPRESSION_IMPORT_SYSTEM.md similarity index 100% rename from docs/08_ARCHIVE/historical/NX_EXPRESSION_IMPORT_SYSTEM.md rename to docs/archive/historical/NX_EXPRESSION_IMPORT_SYSTEM.md diff --git a/docs/08_ARCHIVE/historical/OPTIMIZATION_WORKFLOW.md b/docs/archive/historical/OPTIMIZATION_WORKFLOW.md similarity index 100% rename from docs/08_ARCHIVE/historical/OPTIMIZATION_WORKFLOW.md rename to docs/archive/historical/OPTIMIZATION_WORKFLOW.md diff --git a/docs/08_ARCHIVE/historical/OPTUNA_DASHBOARD.md b/docs/archive/historical/OPTUNA_DASHBOARD.md similarity index 100% rename from docs/08_ARCHIVE/historical/OPTUNA_DASHBOARD.md rename to docs/archive/historical/OPTUNA_DASHBOARD.md diff --git a/docs/08_ARCHIVE/historical/PROTOCOL_10_IMPLEMENTATION_SUMMARY.md b/docs/archive/historical/PROTOCOL_10_IMPLEMENTATION_SUMMARY.md similarity index 100% rename from docs/08_ARCHIVE/historical/PROTOCOL_10_IMPLEMENTATION_SUMMARY.md rename to docs/archive/historical/PROTOCOL_10_IMPLEMENTATION_SUMMARY.md diff --git a/docs/08_ARCHIVE/historical/PRUNING_DIAGNOSTICS.md b/docs/archive/historical/PRUNING_DIAGNOSTICS.md similarity index 100% rename from docs/08_ARCHIVE/historical/PRUNING_DIAGNOSTICS.md rename to 
docs/archive/historical/PRUNING_DIAGNOSTICS.md diff --git a/docs/08_ARCHIVE/historical/QUICK_CONFIG_REFERENCE.md b/docs/archive/historical/QUICK_CONFIG_REFERENCE.md similarity index 100% rename from docs/08_ARCHIVE/historical/QUICK_CONFIG_REFERENCE.md rename to docs/archive/historical/QUICK_CONFIG_REFERENCE.md diff --git a/docs/08_ARCHIVE/historical/STUDY_CONTINUATION_STANDARD.md b/docs/archive/historical/STUDY_CONTINUATION_STANDARD.md similarity index 100% rename from docs/08_ARCHIVE/historical/STUDY_CONTINUATION_STANDARD.md rename to docs/archive/historical/STUDY_CONTINUATION_STANDARD.md diff --git a/docs/08_ARCHIVE/historical/STUDY_ORGANIZATION.md b/docs/archive/historical/STUDY_ORGANIZATION.md similarity index 100% rename from docs/08_ARCHIVE/historical/STUDY_ORGANIZATION.md rename to docs/archive/historical/STUDY_ORGANIZATION.md diff --git a/docs/08_ARCHIVE/historical/TODAY_PLAN_NOV18.md b/docs/archive/historical/TODAY_PLAN_NOV18.md similarity index 100% rename from docs/08_ARCHIVE/historical/TODAY_PLAN_NOV18.md rename to docs/archive/historical/TODAY_PLAN_NOV18.md diff --git a/docs/ATOMIZER_PODCAST_BRIEFING.md b/docs/archive/marketing/ATOMIZER_PODCAST_BRIEFING.md similarity index 100% rename from docs/ATOMIZER_PODCAST_BRIEFING.md rename to docs/archive/marketing/ATOMIZER_PODCAST_BRIEFING.md diff --git a/docs/08_ARCHIVE/phase_documents/PHASE_2_5_INTELLIGENT_GAP_DETECTION.md b/docs/archive/phase_documents/PHASE_2_5_INTELLIGENT_GAP_DETECTION.md similarity index 100% rename from docs/08_ARCHIVE/phase_documents/PHASE_2_5_INTELLIGENT_GAP_DETECTION.md rename to docs/archive/phase_documents/PHASE_2_5_INTELLIGENT_GAP_DETECTION.md diff --git a/docs/08_ARCHIVE/phase_documents/PHASE_2_7_LLM_INTEGRATION.md b/docs/archive/phase_documents/PHASE_2_7_LLM_INTEGRATION.md similarity index 100% rename from docs/08_ARCHIVE/phase_documents/PHASE_2_7_LLM_INTEGRATION.md rename to docs/archive/phase_documents/PHASE_2_7_LLM_INTEGRATION.md diff --git 
a/docs/08_ARCHIVE/phase_documents/PHASE_3_2_INTEGRATION_PLAN.md b/docs/archive/phase_documents/PHASE_3_2_INTEGRATION_PLAN.md similarity index 100% rename from docs/08_ARCHIVE/phase_documents/PHASE_3_2_INTEGRATION_PLAN.md rename to docs/archive/phase_documents/PHASE_3_2_INTEGRATION_PLAN.md diff --git a/docs/08_ARCHIVE/phase_documents/PHASE_3_2_INTEGRATION_STATUS.md b/docs/archive/phase_documents/PHASE_3_2_INTEGRATION_STATUS.md similarity index 100% rename from docs/08_ARCHIVE/phase_documents/PHASE_3_2_INTEGRATION_STATUS.md rename to docs/archive/phase_documents/PHASE_3_2_INTEGRATION_STATUS.md diff --git a/docs/08_ARCHIVE/phase_documents/PHASE_3_2_NEXT_STEPS.md b/docs/archive/phase_documents/PHASE_3_2_NEXT_STEPS.md similarity index 100% rename from docs/08_ARCHIVE/phase_documents/PHASE_3_2_NEXT_STEPS.md rename to docs/archive/phase_documents/PHASE_3_2_NEXT_STEPS.md diff --git a/docs/08_ARCHIVE/phase_documents/PHASE_3_3_VISUALIZATION_AND_CLEANUP.md b/docs/archive/phase_documents/PHASE_3_3_VISUALIZATION_AND_CLEANUP.md similarity index 100% rename from docs/08_ARCHIVE/phase_documents/PHASE_3_3_VISUALIZATION_AND_CLEANUP.md rename to docs/archive/phase_documents/PHASE_3_3_VISUALIZATION_AND_CLEANUP.md diff --git a/docs/08_ARCHIVE/plans/DASHBOARD_IMPROVEMENT_PLAN.md b/docs/archive/plans/DASHBOARD_IMPROVEMENT_PLAN.md similarity index 100% rename from docs/08_ARCHIVE/plans/DASHBOARD_IMPROVEMENT_PLAN.md rename to docs/archive/plans/DASHBOARD_IMPROVEMENT_PLAN.md diff --git a/docs/08_ARCHIVE/plans/backend_integration_plan.md b/docs/archive/plans/backend_integration_plan.md similarity index 100% rename from docs/08_ARCHIVE/plans/backend_integration_plan.md rename to docs/archive/plans/backend_integration_plan.md diff --git a/docs/08_ARCHIVE/plans/dashboard_enhancement_plan.md b/docs/archive/plans/dashboard_enhancement_plan.md similarity index 100% rename from docs/08_ARCHIVE/plans/dashboard_enhancement_plan.md rename to docs/archive/plans/dashboard_enhancement_plan.md diff --git 
a/docs/08_ARCHIVE/session_summaries/SESSION_SUMMARY_NOV20.md b/docs/archive/session_summaries/SESSION_SUMMARY_NOV20.md similarity index 100% rename from docs/08_ARCHIVE/session_summaries/SESSION_SUMMARY_NOV20.md rename to docs/archive/session_summaries/SESSION_SUMMARY_NOV20.md diff --git a/docs/08_ARCHIVE/session_summaries/SESSION_SUMMARY_PHASE_2_5_TO_2_7.md b/docs/archive/session_summaries/SESSION_SUMMARY_PHASE_2_5_TO_2_7.md similarity index 100% rename from docs/08_ARCHIVE/session_summaries/SESSION_SUMMARY_PHASE_2_5_TO_2_7.md rename to docs/archive/session_summaries/SESSION_SUMMARY_PHASE_2_5_TO_2_7.md diff --git a/docs/08_ARCHIVE/session_summaries/SESSION_SUMMARY_PHASE_2_8.md b/docs/archive/session_summaries/SESSION_SUMMARY_PHASE_2_8.md similarity index 100% rename from docs/08_ARCHIVE/session_summaries/SESSION_SUMMARY_PHASE_2_8.md rename to docs/archive/session_summaries/SESSION_SUMMARY_PHASE_2_8.md diff --git a/docs/08_ARCHIVE/session_summaries/SESSION_SUMMARY_PHASE_2_9.md b/docs/archive/session_summaries/SESSION_SUMMARY_PHASE_2_9.md similarity index 100% rename from docs/08_ARCHIVE/session_summaries/SESSION_SUMMARY_PHASE_2_9.md rename to docs/archive/session_summaries/SESSION_SUMMARY_PHASE_2_9.md diff --git a/docs/08_ARCHIVE/session_summaries/SESSION_SUMMARY_PHASE_3.md b/docs/archive/session_summaries/SESSION_SUMMARY_PHASE_3.md similarity index 100% rename from docs/08_ARCHIVE/session_summaries/SESSION_SUMMARY_PHASE_3.md rename to docs/archive/session_summaries/SESSION_SUMMARY_PHASE_3.md diff --git a/docs/08_ARCHIVE/session_summaries/SESSION_SUMMARY_PHASE_3_1.md b/docs/archive/session_summaries/SESSION_SUMMARY_PHASE_3_1.md similarity index 100% rename from docs/08_ARCHIVE/session_summaries/SESSION_SUMMARY_PHASE_3_1.md rename to docs/archive/session_summaries/SESSION_SUMMARY_PHASE_3_1.md diff --git a/docs/08_ARCHIVE/sessions/ATOMIZER_STATE_ASSESSMENT_NOV25.md b/docs/archive/sessions/ATOMIZER_STATE_ASSESSMENT_NOV25.md similarity index 100% rename from 
docs/08_ARCHIVE/sessions/ATOMIZER_STATE_ASSESSMENT_NOV25.md rename to docs/archive/sessions/ATOMIZER_STATE_ASSESSMENT_NOV25.md diff --git a/docs/08_ARCHIVE/sessions/Phase_1_2_Implementation_Plan.md b/docs/archive/sessions/Phase_1_2_Implementation_Plan.md similarity index 100% rename from docs/08_ARCHIVE/sessions/Phase_1_2_Implementation_Plan.md rename to docs/archive/sessions/Phase_1_2_Implementation_Plan.md diff --git a/docs/08_ARCHIVE/sessions/Phase_1_3_Implementation_Plan.md b/docs/archive/sessions/Phase_1_3_Implementation_Plan.md similarity index 100% rename from docs/08_ARCHIVE/sessions/Phase_1_3_Implementation_Plan.md rename to docs/archive/sessions/Phase_1_3_Implementation_Plan.md diff --git a/docs/08_ARCHIVE/sessions/Today_Todo.md b/docs/archive/sessions/Today_Todo.md similarity index 100% rename from docs/08_ARCHIVE/sessions/Today_Todo.md rename to docs/archive/sessions/Today_Todo.md diff --git a/docs/08_ARCHIVE/sessions/dashboard_initial_prompt.md b/docs/archive/sessions/dashboard_initial_prompt.md similarity index 100% rename from docs/08_ARCHIVE/sessions/dashboard_initial_prompt.md rename to docs/archive/sessions/dashboard_initial_prompt.md diff --git a/docs/07_DEVELOPMENT/ATOMIZER_ARCHITECTURE_OVERVIEW.md b/docs/development/ATOMIZER_ARCHITECTURE_OVERVIEW.md similarity index 99% rename from docs/07_DEVELOPMENT/ATOMIZER_ARCHITECTURE_OVERVIEW.md rename to docs/development/ATOMIZER_ARCHITECTURE_OVERVIEW.md index ec851d83..feb7d04a 100644 --- a/docs/07_DEVELOPMENT/ATOMIZER_ARCHITECTURE_OVERVIEW.md +++ b/docs/development/ATOMIZER_ARCHITECTURE_OVERVIEW.md @@ -43,7 +43,7 @@ graph LR style Atomizer fill:#e1f5fe ``` -**Core Philosophy**: "Talk, don't click." +**Core Philosophy**: LLM-driven FEA optimization. 
--- diff --git a/docs/07_DEVELOPMENT/ATOMIZER_CLAUDE_CODE_INSTRUCTIONS.md b/docs/development/ATOMIZER_CLAUDE_CODE_INSTRUCTIONS.md similarity index 100% rename from docs/07_DEVELOPMENT/ATOMIZER_CLAUDE_CODE_INSTRUCTIONS.md rename to docs/development/ATOMIZER_CLAUDE_CODE_INSTRUCTIONS.md diff --git a/docs/07_DEVELOPMENT/ATOMIZER_DASHBOARD_GAP_ANALYSIS.md b/docs/development/ATOMIZER_DASHBOARD_GAP_ANALYSIS.md similarity index 100% rename from docs/07_DEVELOPMENT/ATOMIZER_DASHBOARD_GAP_ANALYSIS.md rename to docs/development/ATOMIZER_DASHBOARD_GAP_ANALYSIS.md diff --git a/docs/07_DEVELOPMENT/ATOMIZER_FIELD_INTEGRATION_PLAN.md b/docs/development/ATOMIZER_FIELD_INTEGRATION_PLAN.md similarity index 100% rename from docs/07_DEVELOPMENT/ATOMIZER_FIELD_INTEGRATION_PLAN.md rename to docs/development/ATOMIZER_FIELD_INTEGRATION_PLAN.md diff --git a/docs/07_DEVELOPMENT/ATOMIZER_USER_GUIDE.md b/docs/development/ATOMIZER_USER_GUIDE.md similarity index 100% rename from docs/07_DEVELOPMENT/ATOMIZER_USER_GUIDE.md rename to docs/development/ATOMIZER_USER_GUIDE.md diff --git a/docs/07_DEVELOPMENT/DEVELOPMENT_GUIDANCE.md b/docs/development/DEVELOPMENT_GUIDANCE.md similarity index 100% rename from docs/07_DEVELOPMENT/DEVELOPMENT_GUIDANCE.md rename to docs/development/DEVELOPMENT_GUIDANCE.md diff --git a/docs/07_DEVELOPMENT/DEVELOPMENT_ROADMAP.md b/docs/development/DEVELOPMENT_ROADMAP.md similarity index 100% rename from docs/07_DEVELOPMENT/DEVELOPMENT_ROADMAP.md rename to docs/development/DEVELOPMENT_ROADMAP.md diff --git a/docs/07_DEVELOPMENT/LOGGING_MIGRATION_GUIDE.md b/docs/development/LOGGING_MIGRATION_GUIDE.md similarity index 100% rename from docs/07_DEVELOPMENT/LOGGING_MIGRATION_GUIDE.md rename to docs/development/LOGGING_MIGRATION_GUIDE.md diff --git a/docs/07_DEVELOPMENT/NASTRAN_VISUALIZATION_RESEARCH.md b/docs/development/NASTRAN_VISUALIZATION_RESEARCH.md similarity index 100% rename from docs/07_DEVELOPMENT/NASTRAN_VISUALIZATION_RESEARCH.md rename to 
docs/development/NASTRAN_VISUALIZATION_RESEARCH.md diff --git a/docs/07_DEVELOPMENT/NN_SURROGATE_AUTOMATION_PLAN.md b/docs/development/NN_SURROGATE_AUTOMATION_PLAN.md similarity index 100% rename from docs/07_DEVELOPMENT/NN_SURROGATE_AUTOMATION_PLAN.md rename to docs/development/NN_SURROGATE_AUTOMATION_PLAN.md diff --git a/docs/07_DEVELOPMENT/Philosophy.md b/docs/development/Philosophy.md similarity index 100% rename from docs/07_DEVELOPMENT/Philosophy.md rename to docs/development/Philosophy.md diff --git a/docs/09_DIAGRAMS/00_INDEX.md b/docs/diagrams/00_INDEX.md similarity index 100% rename from docs/09_DIAGRAMS/00_INDEX.md rename to docs/diagrams/00_INDEX.md diff --git a/docs/09_DIAGRAMS/architecture_overview.md b/docs/diagrams/architecture_overview.md similarity index 100% rename from docs/09_DIAGRAMS/architecture_overview.md rename to docs/diagrams/architecture_overview.md diff --git a/docs/09_DIAGRAMS/protocol_workflows.md b/docs/diagrams/protocol_workflows.md similarity index 100% rename from docs/09_DIAGRAMS/protocol_workflows.md rename to docs/diagrams/protocol_workflows.md diff --git a/docs/04_USER_GUIDES/ATOMIZER_FIELD_NEURAL_OPTIMIZATION_GUIDE.md b/docs/guides/ATOMIZER_FIELD_NEURAL_OPTIMIZATION_GUIDE.md similarity index 100% rename from docs/04_USER_GUIDES/ATOMIZER_FIELD_NEURAL_OPTIMIZATION_GUIDE.md rename to docs/guides/ATOMIZER_FIELD_NEURAL_OPTIMIZATION_GUIDE.md diff --git a/docs/04_USER_GUIDES/CANVAS.md b/docs/guides/CANVAS.md similarity index 100% rename from docs/04_USER_GUIDES/CANVAS.md rename to docs/guides/CANVAS.md diff --git a/docs/04_USER_GUIDES/DASHBOARD.md b/docs/guides/DASHBOARD.md similarity index 100% rename from docs/04_USER_GUIDES/DASHBOARD.md rename to docs/guides/DASHBOARD.md diff --git a/docs/04_USER_GUIDES/DASHBOARD_IMPLEMENTATION_STATUS.md b/docs/guides/DASHBOARD_IMPLEMENTATION_STATUS.md similarity index 100% rename from docs/04_USER_GUIDES/DASHBOARD_IMPLEMENTATION_STATUS.md rename to docs/guides/DASHBOARD_IMPLEMENTATION_STATUS.md 
diff --git a/docs/04_USER_GUIDES/DASHBOARD_MASTER_PLAN.md b/docs/guides/DASHBOARD_MASTER_PLAN.md similarity index 100% rename from docs/04_USER_GUIDES/DASHBOARD_MASTER_PLAN.md rename to docs/guides/DASHBOARD_MASTER_PLAN.md diff --git a/docs/04_USER_GUIDES/DASHBOARD_REACT_IMPLEMENTATION.md b/docs/guides/DASHBOARD_REACT_IMPLEMENTATION.md similarity index 100% rename from docs/04_USER_GUIDES/DASHBOARD_REACT_IMPLEMENTATION.md rename to docs/guides/DASHBOARD_REACT_IMPLEMENTATION.md diff --git a/docs/04_USER_GUIDES/DASHBOARD_SESSION_SUMMARY.md b/docs/guides/DASHBOARD_SESSION_SUMMARY.md similarity index 100% rename from docs/04_USER_GUIDES/DASHBOARD_SESSION_SUMMARY.md rename to docs/guides/DASHBOARD_SESSION_SUMMARY.md diff --git a/docs/04_USER_GUIDES/NEURAL_FEATURES_COMPLETE.md b/docs/guides/NEURAL_FEATURES_COMPLETE.md similarity index 100% rename from docs/04_USER_GUIDES/NEURAL_FEATURES_COMPLETE.md rename to docs/guides/NEURAL_FEATURES_COMPLETE.md diff --git a/docs/04_USER_GUIDES/NEURAL_WORKFLOW_TUTORIAL.md b/docs/guides/NEURAL_WORKFLOW_TUTORIAL.md similarity index 100% rename from docs/04_USER_GUIDES/NEURAL_WORKFLOW_TUTORIAL.md rename to docs/guides/NEURAL_WORKFLOW_TUTORIAL.md diff --git a/docs/04_USER_GUIDES/PHYSICS_LOSS_GUIDE.md b/docs/guides/PHYSICS_LOSS_GUIDE.md similarity index 100% rename from docs/04_USER_GUIDES/PHYSICS_LOSS_GUIDE.md rename to docs/guides/PHYSICS_LOSS_GUIDE.md diff --git a/docs/04_USER_GUIDES/TRAINING_DATA_EXPORT_GUIDE.md b/docs/guides/TRAINING_DATA_EXPORT_GUIDE.md similarity index 100% rename from docs/04_USER_GUIDES/TRAINING_DATA_EXPORT_GUIDE.md rename to docs/guides/TRAINING_DATA_EXPORT_GUIDE.md diff --git a/docs/04_USER_GUIDES/hybrid_mode.md b/docs/guides/hybrid_mode.md similarity index 100% rename from docs/04_USER_GUIDES/hybrid_mode.md rename to docs/guides/hybrid_mode.md diff --git a/docs/06_PHYSICS/00_INDEX.md b/docs/physics/00_INDEX.md similarity index 100% rename from docs/06_PHYSICS/00_INDEX.md rename to docs/physics/00_INDEX.md diff 
--git a/docs/06_PHYSICS/ZERNIKE_FUNDAMENTALS.md b/docs/physics/ZERNIKE_FUNDAMENTALS.md similarity index 100% rename from docs/06_PHYSICS/ZERNIKE_FUNDAMENTALS.md rename to docs/physics/ZERNIKE_FUNDAMENTALS.md diff --git a/docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md b/docs/physics/ZERNIKE_OPD_METHOD.md similarity index 100% rename from docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md rename to docs/physics/ZERNIKE_OPD_METHOD.md diff --git a/docs/plans/ATOMIZER_CONTEXT_ENGINEERING_PLAN.md b/docs/plans/ATOMIZER_CONTEXT_ENGINEERING_PLAN.md index 02b33c4b..ba384485 100644 --- a/docs/plans/ATOMIZER_CONTEXT_ENGINEERING_PLAN.md +++ b/docs/plans/ATOMIZER_CONTEXT_ENGINEERING_PLAN.md @@ -750,7 +750,7 @@ The system prompt is structured to maximize KV-cache hits: ``` [SECTION 1: STABLE - Never changes] - Atomizer identity and capabilities -- Core principles (talk don't click) +- Core principles (LLM-driven optimization) - Tool schemas and definitions - Base protocol routing table @@ -780,7 +780,7 @@ You are assisting with **Atomizer**, an LLM-first FEA optimization framework. - Neural acceleration (600-1000x speedup) ## Principles -1. Talk, don't click - users describe goals in plain language +1. LLM-driven - users describe goals in plain language 2. Never modify master models - work on copies 3. Always validate before running 4. 
Document everything diff --git a/docs/plans/ATOMIZER_STUDY_INTERVIEW_MODE_IMPLEMENTATION_PLAN.md b/docs/plans/ATOMIZER_STUDY_INTERVIEW_MODE_IMPLEMENTATION_PLAN.md index 69c00937..474155cc 100644 --- a/docs/plans/ATOMIZER_STUDY_INTERVIEW_MODE_IMPLEMENTATION_PLAN.md +++ b/docs/plans/ATOMIZER_STUDY_INTERVIEW_MODE_IMPLEMENTATION_PLAN.md @@ -50,7 +50,7 @@ Implement an **Interview Mode** that systematically gathers engineering requirem | Principle | Implementation | |-----------|----------------| -| **Talk, don't click** | Natural conversation, not forms | +| **Conversational interface** | Natural conversation, not forms | | **Intelligence first** | Auto-detect what's possible, ask about intent | | **No assumptions** | Ask instead of guessing on critical decisions | | **Adaptive depth** | Simple studies = fewer questions | diff --git a/docs/plans/CLAUDE_CANVAS_INTEGRATION_V2.md b/docs/plans/CLAUDE_CANVAS_INTEGRATION_V2.md new file mode 100644 index 00000000..f654ecd6 --- /dev/null +++ b/docs/plans/CLAUDE_CANVAS_INTEGRATION_V2.md @@ -0,0 +1,737 @@ +# Claude + Canvas Integration V2 + +## The Vision + +**Side-by-side LLM + Canvas** where: +1. **Claude talks → Canvas updates in real-time** (user sees nodes appear/change) +2. **User tweaks Canvas → Claude sees changes** (bi-directional sync) +3. **Full Claude Code-level power** through the dashboard chat +4. 
**Interview-driven study creation** entirely through chat + +The user can: +- Describe what they want in natural language +- Watch the canvas build itself +- Make quick manual tweaks +- Continue the conversation with Claude seeing their changes +- Have Claude execute protocols, create files, run optimizations + +--- + +## Current State vs Target + +### What We Have Now + +``` +┌──────────────────┐ ┌──────────────────┐ +│ Chat Panel │ │ Canvas │ +│ (Power Mode) │ │ (SpecRenderer) │ +├──────────────────┤ ├──────────────────┤ +│ - Anthropic API │ │ - Loads spec │ +│ - Write tools │ │ - User edits │ +│ - spec_modified │--->│ - Auto-refresh │ +│ events │ │ on event │ +└──────────────────┘ └──────────────────┘ + │ │ + │ No real-time │ + │ canvas state │ + │ in Claude context │ + └──────────────────────┘ +``` + +**Gaps:** +1. Claude doesn't see current canvas state in real-time +2. No interview engine for guided study creation +3. Limited tool set (no file ops, no protocol execution) +4. No streaming for tool calls +5. Mode switching requires reconnection + +### What We Want + +``` +┌───────────────────────────────────────────────────────────────────┐ +│ ATOMIZER DASHBOARD │ +├────────────────────────────┬──────────────────────────────────────┤ +│ │ │ +│ CHAT PANEL │ CANVAS │ +│ (Atomizer Assistant) │ (SpecRenderer) │ +│ │ │ +│ ┌──────────────────────┐ │ ┌────────────────────────────────┐ │ +│ │ "Create a bracket │ │ │ │ │ +│ │ optimization with │ │ │ [DV: thickness] │ │ +│ │ mass and stiffness" │ │ │ │ │ │ +│ └──────────────────────┘ │ │ ▼ │ │ +│ │ │ │ [Model Node] │ │ +│ ▼ │ │ │ │ │ +│ ┌──────────────────────┐ │ │ ▼ │ │ +│ │ 🔧 Adding thickness │ │ │ [Ext: mass]──>[Obj: min] │ │ +│ │ 🔧 Adding mass ext │◄─┼──┤ [Ext: disp]──>[Obj: min] │ │ +│ │ 🔧 Adding objective │ │ │ │ │ +│ │ │ │ │ (nodes appear in real-time) │ │ +│ │ ✓ Study configured! 
│ │ │ │ │ +│ └──────────────────────┘ │ └────────────────────────────────┘ │ +│ │ │ +│ ┌──────────────────────┐ │ User can click any node to edit │ +│ │ Claude sees the │ │ Claude sees user's edits │ +│ │ canvas state and │◄─┼──────────────────────────────────────│ +│ │ user's manual edits │ │ │ +│ └──────────────────────┘ │ │ +└────────────────────────────┴──────────────────────────────────────┘ +``` + +--- + +## Architecture + +### 1. WebSocket Hub (Bi-directional Sync) + +``` + ┌─────────────────────┐ + │ WebSocket Hub │ + │ (Single Connection)│ + └─────────┬───────────┘ + │ + ┌────────────────────┼────────────────────┐ + │ │ │ + ▼ ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ +│ Chat Panel │ │ Canvas │ │ Spec Store │ +│ │ │ │ │ │ +│ - Send messages │ │ - User edits │ │ - Single source │ +│ - Receive text │ │ - Node add/del │ │ of truth │ +│ - See tool calls│ │ - Edge changes │ │ - Validates │ +└─────────────────┘ └─────────────────┘ └─────────────────┘ + +Message Types: + Client → Server: + { type: "message", content: "..." } # Chat message + { type: "canvas_edit", patch: {...} } # User made canvas change + { type: "set_study", study_id: "..." } # Switch study + { type: "ping" } # Heartbeat + + Server → Client: + { type: "text", content: "...", done: false } # Streaming text + { type: "tool_start", tool: "...", input: {...} } + { type: "tool_result", tool: "...", result: "..." } + { type: "spec_updated", spec: {...} } # Full spec after change + { type: "canvas_patch", patch: {...} } # Incremental update + { type: "done" } # Response complete + { type: "pong" } # Heartbeat response +``` + +### 2. 
Enhanced Claude Agent + +The `AtomizerClaudeAgent` needs to be more like **Claude Code**: + +```python +class AtomizerClaudeAgent: + """Full-power Claude agent with Claude Code-like capabilities""" + + def __init__(self, study_id: Optional[str] = None): + self.client = anthropic.Anthropic() + self.study_id = study_id + self.spec_store = SpecStore(study_id) # Real-time spec access + self.interview_state = None # For guided creation + self.tools = self._define_full_tools() + + async def chat_stream( + self, + message: str, + conversation: List[Dict], + canvas_state: Optional[Dict] = None # Current canvas from frontend + ) -> AsyncGenerator[Dict, None]: + """Stream responses with tool calls""" + + # Build context with current canvas state + system = self._build_system_prompt(canvas_state) + + # Stream the response + with self.client.messages.stream( + model="claude-sonnet-4-20250514", + max_tokens=8192, + system=system, + messages=conversation + [{"role": "user", "content": message}], + tools=self.tools + ) as stream: + for event in stream: + if event.type == "content_block_delta": + if event.delta.type == "text_delta": + yield {"type": "text", "content": event.delta.text} + + elif event.type == "content_block_start": + if event.content_block.type == "tool_use": + yield { + "type": "tool_start", + "tool": event.content_block.name, + "input": {} # Will be completed + } + + # Handle tool calls after stream + response = stream.get_final_message() + for block in response.content: + if block.type == "tool_use": + result = await self._execute_tool(block.name, block.input) + yield { + "type": "tool_result", + "tool": block.name, + "result": result["result"], + "spec_changed": result.get("spec_changed", False) + } + + # If spec changed, send the updated spec + if result.get("spec_changed"): + yield { + "type": "spec_updated", + "spec": self.spec_store.get_dict() + } +``` + +### 3. 
Full Tool Set + +Claude needs more tools to match Claude Code power: + +```python +FULL_TOOLS = [ + # === READ TOOLS === + "read_study_config", # Read atomizer_spec.json + "query_trials", # Query optimization database + "list_studies", # List available studies + "read_file", # Read any file in study + "list_files", # List files in study directory + "read_nx_expressions", # Get NX model expressions + + # === WRITE TOOLS (Spec Modification) === + "add_design_variable", # Add DV to spec + "add_extractor", # Add extractor (built-in or custom) + "add_objective", # Add objective + "add_constraint", # Add constraint + "update_spec_field", # Update any spec field by path + "remove_node", # Remove any node by ID + "update_canvas_layout", # Reposition nodes for better layout + + # === STUDY MANAGEMENT === + "create_study", # Create new study directory + spec + "clone_study", # Clone existing study + "validate_spec", # Validate current spec + "migrate_config", # Migrate legacy config to spec v2 + + # === OPTIMIZATION CONTROL === + "start_optimization", # Start optimization run + "stop_optimization", # Stop running optimization + "get_optimization_status",# Check if running, trial count + + # === FILE OPERATIONS === + "write_file", # Write file to study directory + "create_directory", # Create directory in study + + # === NX INTEGRATION === + "introspect_model", # Get model info (expressions, features) + "suggest_design_vars", # AI-suggest design variables from model + + # === INTERVIEW/GUIDED CREATION === + "start_interview", # Begin guided study creation + "process_answer", # Process user's interview answer + "get_interview_state", # Get current interview progress +] +``` + +### 4. Interview Engine Integration + +The interview happens **through chat**, not a separate UI: + +```python +class InterviewEngine: + """Guided study creation through conversation""" + + PHASES = [ + ("model", "Let's set up your model. 
What's the path to your NX simulation file?"), + ("objectives", "What do you want to optimize? (e.g., minimize mass, minimize displacement)"), + ("design_vars", "Which parameters can I vary? I can suggest some based on your model."), + ("constraints", "Any constraints to respect? (e.g., max stress, min frequency)"), + ("method", "I recommend {method} for this problem. Should I configure it?"), + ("review", "Here's the complete configuration. Ready to create the study?"), + ] + + def __init__(self, spec_store: SpecStore): + self.spec_store = spec_store + self.current_phase = 0 + self.collected_data = {} + + def get_current_question(self) -> str: + phase_name, question = self.PHASES[self.current_phase] + # Customize question based on collected data + if phase_name == "method": + method = self._recommend_method() + question = question.format(method=method) + return question + + def process_answer(self, answer: str) -> Dict: + """Process answer and build spec incrementally""" + phase_name, _ = self.PHASES[self.current_phase] + + # Extract structured data from answer + extracted = self._extract_for_phase(phase_name, answer) + self.collected_data[phase_name] = extracted + + # Update spec with extracted data + spec_update = self._apply_to_spec(phase_name, extracted) + + # Advance to next phase + self.current_phase += 1 + + return { + "phase": phase_name, + "extracted": extracted, + "spec_update": spec_update, + "next_question": self.get_current_question() if self.current_phase < len(self.PHASES) else None, + "complete": self.current_phase >= len(self.PHASES) + } +``` + +Claude uses the interview through tools: + +```python +async def _tool_start_interview(self, params: Dict) -> str: + """Start guided study creation""" + self.interview_state = InterviewEngine(self.spec_store) + return { + "status": "started", + "first_question": self.interview_state.get_current_question() + } + +async def _tool_process_answer(self, params: Dict) -> str: + """Process user's answer in 
interview""" + if not self.interview_state: + return {"error": "No interview in progress"} + + result = self.interview_state.process_answer(params["answer"]) + + if result["spec_update"]: + # Spec was updated - this will trigger canvas update + return { + "status": "updated", + "spec_changed": True, + "next_question": result["next_question"], + "complete": result["complete"] + } + + return result +``` + +--- + +## Frontend Implementation + +### 1. Unified WebSocket Hook + +```typescript +// hooks/useAtomizerSocket.ts +export function useAtomizerSocket(studyId: string | undefined) { + const [spec, setSpec] = useState(null); + const [messages, setMessages] = useState([]); + const [isThinking, setIsThinking] = useState(false); + const [currentTool, setCurrentTool] = useState(null); + + const ws = useRef(null); + + // Single WebSocket connection for everything + useEffect(() => { + const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:'; + const host = import.meta.env.DEV ? 'localhost:8001' : window.location.host; + ws.current = new WebSocket(`${protocol}//${host}/api/atomizer/ws`); + + ws.current.onmessage = (event) => { + const data = JSON.parse(event.data); + + switch (data.type) { + case 'text': + // Streaming text from Claude + setMessages(prev => { + const last = prev[prev.length - 1]; + if (last?.role === 'assistant' && !last.complete) { + return [...prev.slice(0, -1), { + ...last, + content: last.content + data.content + }]; + } + return [...prev, { + id: Date.now().toString(), + role: 'assistant', + content: data.content, + complete: false + }]; + }); + break; + + case 'tool_start': + setCurrentTool(data.tool); + // Add tool indicator to chat + setMessages(prev => [...prev, { + id: Date.now().toString(), + role: 'tool', + tool: data.tool, + status: 'running' + }]); + break; + + case 'tool_result': + setCurrentTool(null); + // Update tool message with result + setMessages(prev => prev.map(m => + m.role === 'tool' && m.tool === data.tool && 
m.status === 'running' + ? { ...m, status: 'complete', result: data.result } + : m + )); + break; + + case 'spec_updated': + // Canvas gets the new spec - this is the magic! + setSpec(data.spec); + break; + + case 'done': + setIsThinking(false); + // Mark last message as complete + setMessages(prev => prev.map((m, i) => + i === prev.length - 1 ? { ...m, complete: true } : m + )); + break; + } + }; + + // Set study context + if (studyId) { + ws.current.onopen = () => { + ws.current?.send(JSON.stringify({ + type: 'set_study', + study_id: studyId + })); + }; + } + + return () => ws.current?.close(); + }, [studyId]); + + // Send message + const sendMessage = useCallback((content: string) => { + if (!ws.current) return; + + setIsThinking(true); + setMessages(prev => [...prev, { + id: Date.now().toString(), + role: 'user', + content + }]); + + ws.current.send(JSON.stringify({ + type: 'message', + content + })); + }, []); + + // Notify Claude about canvas edits + const notifyCanvasEdit = useCallback((patch: any) => { + ws.current?.send(JSON.stringify({ + type: 'canvas_edit', + patch + })); + }, []); + + return { + spec, + messages, + isThinking, + currentTool, + sendMessage, + notifyCanvasEdit + }; +} +``` + +### 2. Integrated Canvas View + +```typescript +// pages/CanvasView.tsx (revised) +export function CanvasView() { + const { '*': studyId } = useParams(); + + // Single hook manages everything + const { + spec, + messages, + isThinking, + currentTool, + sendMessage, + notifyCanvasEdit + } = useAtomizerSocket(studyId); + + // When user edits canvas, notify Claude + const handleSpecChange = useCallback((newSpec: AtomizerSpec) => { + // This is called by SpecRenderer when user makes edits + notifyCanvasEdit({ + type: 'spec_replace', + spec: newSpec + }); + }, [notifyCanvasEdit]); + + return ( +
+    <div className="canvas-view">
+      {/* Canvas - receives spec from WebSocket */}
+      <SpecRenderer spec={spec} onChange={handleSpecChange} />
+
+      {/* Chat Panel */}
+      <ChatPanel
+        messages={messages}
+        isThinking={isThinking}
+        currentTool={currentTool}
+        onSend={sendMessage}
+      />
+    </div>
+  );
+}
+```
+
+### 3. Visual Feedback for Tool Calls
+
+When Claude calls a tool, the canvas shows visual feedback:
+
+```typescript
+// components/canvas/SpecRenderer.tsx
+function SpecRenderer({ spec, highlightNode, onChange }) {
+  // When a tool is targeting a node, highlight it
+  const getNodeStyle = (nodeId: string) => {
+    if (highlightNode === nodeId) {
+      return {
+        boxShadow: '0 0 0 3px #f59e0b',  // Amber glow
+        animation: 'pulse 1s infinite'
+      };
+    }
+    return {};
+  };
+
+  // When new nodes are added, animate them
+  const [newNodes, setNewNodes] = useState<Set<string>>(new Set());
+  const prevIds = useRef<Set<string>>(new Set());
+
+  useEffect(() => {
+    if (spec) {
+      const currentIds = new Set([
+        ...spec.design_variables.map(d => d.id),
+        ...spec.extractors.map(e => e.id),
+        ...spec.objectives.map(o => o.id),
+        ...spec.constraints.map(c => c.id)
+      ]);
+
+      // Find truly new nodes
+      const added = [...currentIds].filter(id => !prevIds.current.has(id));
+      if (added.length > 0) {
+        setNewNodes(new Set(added));
+        setTimeout(() => setNewNodes(new Set()), 1000);  // Clear animation
+      }
+      prevIds.current = currentIds;
+    }
+  }, [spec]);
+
+  // Render with animations (nodes are derived from spec elsewhere in the component)
+  return (
+    <ReactFlow nodes={nodes.map(n => ({
+      ...n,
+      style: {
+        ...getNodeStyle(n.id),
+        ...(newNodes.has(n.id) ? { animation: 'slideIn 0.5s ease-out' } : {})
+      }
+    }))} />
+  );
+}
+```
+
+---
+
+## User Experience Flow
+
+### Flow 1: Create Study Through Chat
+
+```
+User: "Create a bracket optimization. I want to minimize mass while keeping
+       stress below 200 MPa. The thickness can vary from 2mm to 10mm."
+
+Claude: [Internal: Parse intent, no interview needed - user gave full spec]
+
+        🔧 Creating study "bracket_optimization"...
+        🔧 Adding design variable: thickness [2mm - 10mm]
+        🔧 Adding extractor: mass (BDF mass extraction)
+        🔧 Adding extractor: max_stress (von Mises stress)
+        🔧 Adding objective: minimize mass
+        🔧 Adding constraint: stress <= 200 MPa
+
+        ✓ Study created!
I've configured: + - 1 design variable (thickness) + - 2 extractors (mass, stress) + - 1 objective (minimize mass) + - 1 constraint (max stress) + + The canvas shows your optimization setup. You can: + - Click any node to adjust settings + - Ask me to add more variables or constraints + - Say "run" to start optimization + +[Canvas shows: DV→Model→Extractors→Objectives/Constraints, all connected] +``` + +### Flow 2: Interview-Guided Creation + +``` +User: "Help me set up a new optimization" + +Claude: Let's set up your optimization step by step. + + First, what's the path to your NX simulation file? + (You can drag & drop or type the path) + +User: "studies/M1_Mirror/model.sim" + +Claude: 🔧 Loading model... + [tool: introspect_model] + + Found your model! It has 15 expressions. I noticed these look like + good design variables: + - rib_thickness (current: 8mm) + - mirror_depth (current: 50mm) + - support_angle (current: 45°) + + What are you trying to optimize? + - Minimize mass? + - Minimize wavefront error (WFE)? + - Both? (I can set up multi-objective) + +[Canvas shows: Model node appears] + +User: "minimize WFE and mass" + +Claude: 🔧 Adding Zernike WFE extractor + 🔧 Adding mass extractor + 🔧 Adding objectives (NSGA-II multi-objective) + + Got it - multi-objective optimization for WFE and mass. + + Which parameters should I vary? + I suggest: rib_thickness, mirror_depth, support_angle + +[Canvas shows: Model→WFE Extractor→Obj, Model→Mass Extractor→Obj] + +User: "Yes, use those, but keep support_angle fixed" + +Claude: 🔧 Adding design variable: rib_thickness [4mm - 16mm] + 🔧 Adding design variable: mirror_depth [30mm - 80mm] + + Perfect. Any constraints? + - Maximum mass? + - Maximum stress? + - Minimum frequency? + +[Canvas shows: 2 DVs connected to Model] + +User: "max stress 150 MPa" + +Claude: 🔧 Adding stress extractor + 🔧 Adding constraint: stress <= 150 MPa + + ✓ Configuration complete! Ready to optimize? 
+ +[Canvas shows: Full optimization flow] +``` + +### Flow 3: User Edits Canvas, Claude Sees + +``` +[User clicks thickness DV node, changes max from 10mm to 15mm] + +[WebSocket sends: { type: "canvas_edit", patch: { path: "design_variables.0.bounds.max", value: 15 } }] + +Claude: [Receives canvas edit notification in context] + +User: "Is this range reasonable?" + +Claude: I see you changed thickness max to 15mm. That's quite thick for a + bracket - typical ranges are 2-10mm. + + With 15mm max, you might see: + - Higher mass designs explored + - Potentially better stiffness + - Longer optimization time (larger search space) + + Want me to adjust or keep it? +``` + +--- + +## Implementation Plan + +### Phase 1: Unified WebSocket (1 week) + +1. **Backend**: Create `/api/atomizer/ws` endpoint + - Single connection for chat + spec sync + - Streaming response support + - Canvas edit notifications + +2. **Frontend**: Create `useAtomizerSocket` hook + - Replaces `useChat` + `useSpecWebSocket` + - Single source of truth for spec state + +3. **Integration**: Wire SpecRenderer to socket + - Receive spec updates from Claude's tools + - Send edit notifications back + +### Phase 2: Enhanced Tools (1 week) + +1. Add remaining write tools +2. Implement `introspect_model` for NX expression discovery +3. Add `create_study` for new study creation +4. Add file operation tools + +### Phase 3: Interview Engine (1 week) + +1. Implement `InterviewEngine` class +2. Add interview tools to Claude +3. Test guided creation flow +4. Add smart defaults and recommendations + +### Phase 4: Polish (1 week) + +1. Visual feedback for tool calls +2. Node highlight during modification +3. Animation for new nodes +4. Error recovery and reconnection +5. Performance optimization + +--- + +## Success Metrics + +1. **Creation Time**: User can create complete study in <3 minutes through chat +2. **Edit Latency**: Canvas updates within 200ms of Claude's tool call +3. 
**Sync Reliability**: 100% of user edits reflected in Claude's context +4. **Interview Success**: 90% of studies created through interview are valid + +--- + +## Key Differences from Current Implementation + +| Current | Target | +|---------|--------| +| Separate chat/canvas WebSockets | Single unified WebSocket | +| Claude doesn't see canvas state | Real-time canvas state in context | +| Manual spec refresh | Automatic spec push on changes | +| No interview engine | Guided creation through chat | +| Limited tools | Full Claude Code-like tool set | +| Mode switching breaks connection | Seamless power mode | + +--- + +*This is the architecture that makes Atomizer truly powerful - where Claude and Canvas work together as one system.* diff --git a/docs/plans/CLAUDE_CANVAS_PROJECT.md b/docs/plans/CLAUDE_CANVAS_PROJECT.md new file mode 100644 index 00000000..372a7924 --- /dev/null +++ b/docs/plans/CLAUDE_CANVAS_PROJECT.md @@ -0,0 +1,1239 @@ +# Claude + Canvas Integration Project + +## Project Overview + +**Project Name**: Unified Claude + Canvas Integration +**Goal**: Transform Atomizer's dashboard into a bi-directional Claude + Canvas experience where Claude and the user co-edit the same `atomizer_spec.json` in real-time. + +**Core Principle**: `atomizer_spec.json` is the single source of truth. Both Claude (via tools) and the user (via canvas UI) read and write to it, with changes instantly reflected on both sides. 
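The single-source-of-truth principle can be sketched as a minimal store: every write, whether from a Claude tool or a canvas edit, goes through one `update` call that persists the spec and fans the full document out to all subscribers. This is an illustrative sketch under assumed names; the real stores (`useSpecStore.ts` on the frontend, the agent's spec access on the backend) differ in detail.

```python
import json
from pathlib import Path
from typing import Any, Callable, Dict, List


class SpecStore:
    """Minimal single-source-of-truth store for atomizer_spec.json.

    All writers call update(); all readers (Claude agent, canvas socket)
    subscribe and receive the full spec after every change.
    """

    def __init__(self, path: Path):
        self.path = path
        self.listeners: List[Callable[[Dict[str, Any]], None]] = []
        self.spec: Dict[str, Any] = (
            json.loads(path.read_text()) if path.exists() else {}
        )

    def subscribe(self, listener: Callable[[Dict[str, Any]], None]) -> None:
        self.listeners.append(listener)

    def update(self, patch: Dict[str, Any]) -> Dict[str, Any]:
        # Apply the change, persist to disk, then fan out the full spec
        self.spec.update(patch)
        self.path.write_text(json.dumps(self.spec, indent=2))
        for listener in self.listeners:
            listener(self.spec)
        return self.spec
```

Because both sides observe the same store, a tool call by Claude and a manual canvas edit take the identical path: mutate, persist, broadcast.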
+ +--- + +## Current State + +### What Exists (✅ Working) + +| Component | File | Status | +|-----------|------|--------| +| Power Mode WebSocket | `backend/api/routes/claude.py` | ✅ `/ws/power` endpoint | +| Write Tools | `backend/api/services/claude_agent.py` | ✅ 6 tools implemented | +| `spec_modified` Events | `backend/api/routes/claude.py:480-487` | ✅ Sent on tool use | +| Canvas Reload | `frontend/src/hooks/useChat.ts:247-262` | ✅ Triggers `reloadSpec()` | +| Spec Store | `frontend/src/hooks/useSpecStore.ts` | ✅ Manages spec state | +| Spec Renderer | `frontend/src/components/canvas/SpecRenderer.tsx` | ✅ Renders spec as nodes | + +### What's Missing (❌ To Build) + +| Feature | Priority | Effort | +|---------|----------|--------| +| Canvas state in Claude's context | P0 | Easy | +| Full spec in `spec_updated` payload | P0 | Easy | +| User edits notify Claude | P0 | Easy | +| Streaming responses | P1 | Medium | +| `create_study` tool | P1 | Medium | +| Interview Engine | P2 | Medium | +| Tool call UI indicators | P2 | Easy | +| Node animations | P3 | Easy | + +--- + +## Architecture + +### Target Data Flow + +``` +┌─────────────────────────────────────────────────────────────────────┐ +│ atomizer_spec.json │ +│ (Single Source of Truth) │ +└───────────────────────────────┬─────────────────────────────────────┘ + │ + ┌───────────┴───────────┐ + │ │ + ▼ ▼ +┌───────────────────────────┐ ┌───────────────────────────┐ +│ Claude Agent │ │ Canvas UI │ +│ (AtomizerClaudeAgent) │ │ (SpecRenderer) │ +├───────────────────────────┤ ├───────────────────────────┤ +│ • Reads spec for context │ │ • Renders spec as nodes │ +│ • Writes via tools │ │ • User edits nodes │ +│ • Sees user's edits │ │ • Receives Claude's edits │ +└───────────────────────────┘ └───────────────────────────┘ + │ │ + │ WebSocket │ + │ (bi-directional) │ + │ │ + └───────────┬───────────┘ + │ + ┌───────────┴───────────┐ + │ /api/claude/ws/power │ + │ (Enhanced Endpoint) │ + └───────────────────────┘ +``` + 
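The endpoint above multiplexes chat messages, canvas edits, study switching, and heartbeats over a single connection, so the server's first job is routing each frame by its `type` field. A hedged sketch of that routing (the `make_dispatcher` helper and its handlers are illustrative, not the actual code in `backend/api/routes/claude.py`):

```python
import json
from typing import Any, Callable, Dict

Handler = Callable[[Dict[str, Any]], Dict[str, Any]]


def make_dispatcher(handlers: Dict[str, Handler]) -> Callable[[str], Dict[str, Any]]:
    """Route a raw JSON WebSocket frame to the handler registered for its 'type'."""
    def dispatch(raw: str) -> Dict[str, Any]:
        msg = json.loads(raw)
        handler = handlers.get(msg.get("type"))
        if handler is None:
            return {"type": "error", "message": f"unknown message type: {msg.get('type')}"}
        return handler(msg)
    return dispatch


# Handlers for the client -> server message types used in this plan
dispatch = make_dispatcher({
    "ping": lambda m: {"type": "pong"},
    "set_study": lambda m: {"type": "ack", "study_id": m["study_id"]},
    "canvas_edit": lambda m: {"type": "canvas_edit_received", "acknowledged": True},
})
```

Keeping the routing table explicit makes it cheap to add new frame types (e.g. streaming control) without touching the connection lifecycle code.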
+### Message Protocol (Enhanced) + +**Client → Server:** +```typescript +// Chat message +{ type: "message", content: "Add a thickness variable 2-10mm" } + +// User edited canvas (NEW) +{ type: "canvas_edit", spec: { /* full spec */ } } + +// Switch study +{ type: "set_study", study_id: "bracket_v1" } + +// Heartbeat +{ type: "ping" } +``` + +**Server → Client:** +```typescript +// Streaming text (NEW - replaces single "text" message) +{ type: "text_delta", content: "Adding" } +{ type: "text_delta", content: " thickness" } +{ type: "text_delta", content: " variable..." } + +// Tool started (NEW) +{ type: "tool_start", tool: "add_design_variable", input: {...} } + +// Tool completed +{ type: "tool_result", tool: "add_design_variable", result: "✓ Added..." } + +// Spec updated - NOW INCLUDES FULL SPEC (CHANGED) +{ type: "spec_updated", spec: { /* full atomizer_spec.json */ } } + +// Response complete +{ type: "done" } + +// Error +{ type: "error", message: "..." } + +// Heartbeat response +{ type: "pong" } +``` + +--- + +## Implementation Tasks + +### Phase 1: Core Bi-directional Sync (P0) + +#### Task 1.1: Add Canvas State to Claude's Context +**File**: `backend/api/services/claude_agent.py` + +**Current** (`_build_system_prompt`): +```python +if self.study_id and self.study_dir and self.study_dir.exists(): + context = self._get_study_context() + base_prompt += f"\n## Current Study: {self.study_id}\n{context}\n" +``` + +**Change**: Add method to format current spec as context, call it from WebSocket handler. + +```python +def set_canvas_state(self, spec: Dict[str, Any]) -> None: + """Update the current canvas state for context""" + self.canvas_state = spec + +def _format_canvas_context(self) -> str: + """Format current canvas state for Claude's system prompt""" + if not self.canvas_state: + return "" + + spec = self.canvas_state + lines = ["\n## Current Canvas State\n"] + lines.append("The user can see this canvas. 
When you modify it, they see changes in real-time.\n") + + # Design Variables + dvs = spec.get('design_variables', []) + if dvs: + lines.append(f"**Design Variables ({len(dvs)}):**") + for dv in dvs: + bounds = dv.get('bounds', {}) + lines.append(f" - `{dv.get('id')}`: {dv.get('name')} [{bounds.get('min')}, {bounds.get('max')}]") + + # Extractors + exts = spec.get('extractors', []) + if exts: + lines.append(f"\n**Extractors ({len(exts)}):**") + for ext in exts: + lines.append(f" - `{ext.get('id')}`: {ext.get('name')} ({ext.get('type')})") + + # Objectives + objs = spec.get('objectives', []) + if objs: + lines.append(f"\n**Objectives ({len(objs)}):**") + for obj in objs: + lines.append(f" - `{obj.get('id')}`: {obj.get('name')} ({obj.get('direction')})") + + # Constraints + cons = spec.get('constraints', []) + if cons: + lines.append(f"\n**Constraints ({len(cons)}):**") + for con in cons: + lines.append(f" - `{con.get('id')}`: {con.get('name')} {con.get('operator')} {con.get('threshold')}") + + # Model + model = spec.get('model', {}) + if model.get('sim', {}).get('path'): + lines.append(f"\n**Model**: {model['sim']['path']}") + + return "\n".join(lines) +``` + +**Update `_build_system_prompt`**: +```python +def _build_system_prompt(self) -> str: + base_prompt = """...""" # existing prompt + + # Add study context + if self.study_id and self.study_dir and self.study_dir.exists(): + context = self._get_study_context() + base_prompt += f"\n## Current Study: {self.study_id}\n{context}\n" + + # Add canvas state (NEW) + canvas_context = self._format_canvas_context() + if canvas_context: + base_prompt += canvas_context + + return base_prompt +``` + +--- + +#### Task 1.2: Send Full Spec in `spec_updated` +**File**: `backend/api/routes/claude.py` + +**Current** (line 483-487): +```python +await websocket.send_json({ + "type": "spec_modified", + "tool": tool_call["tool"], + "changes": tool_call["result_preview"], +}) +``` + +**Change**: Send full spec instead of just changes. 
+ +```python +# After any write tool completes, send full spec +if tool_call["tool"] in ["add_design_variable", "add_extractor", + "add_objective", "add_constraint", + "update_spec_field", "remove_node"]: + # Load the updated spec + spec = agent.load_current_spec() + await websocket.send_json({ + "type": "spec_updated", + "tool": tool_call["tool"], + "spec": spec, # Full spec! + }) +``` + +**Add to `AtomizerClaudeAgent`**: +```python +def load_current_spec(self) -> Optional[Dict[str, Any]]: + """Load the current atomizer_spec.json""" + if not self.study_dir: + return None + spec_path = self.study_dir / "atomizer_spec.json" + if not spec_path.exists(): + return None + with open(spec_path, 'r', encoding='utf-8') as f: + return json.load(f) +``` + +--- + +#### Task 1.3: Handle User Canvas Edits +**File**: `backend/api/routes/claude.py` + +**Add to `power_mode_websocket`** (after line 524): +```python +elif data.get("type") == "canvas_edit": + # User made a manual edit to the canvas + spec = data.get("spec") + if spec: + # Update agent's canvas state so Claude sees the change + agent.set_canvas_state(spec) + # Optionally save to file if the frontend already saved + # (or let frontend handle saving) + await websocket.send_json({ + "type": "canvas_edit_received", + "acknowledged": True + }) +``` + +--- + +#### Task 1.4: Frontend - Send Canvas Edits to Claude +**File**: `frontend/src/hooks/useChat.ts` + +**Add to state/hook**: +```typescript +// Add to sendMessage or create new function +const notifyCanvasEdit = useCallback((spec: AtomizerSpec) => { + if (wsRef.current?.readyState === WebSocket.OPEN) { + wsRef.current.send(JSON.stringify({ + type: 'canvas_edit', + spec: spec + })); + } +}, []); +``` + +**Return from hook**: +```typescript +return { + // ... 
existing + notifyCanvasEdit, // NEW +}; +``` + +--- + +#### Task 1.5: Frontend - Use Full Spec from `spec_updated` +**File**: `frontend/src/hooks/useChat.ts` + +**Current** (line 247-262): +```typescript +case 'spec_modified': + console.log('[useChat] Spec was modified by assistant:', data.tool, data.changes); + if (onCanvasModification) { + onCanvasModification({ + action: 'add_node', + data: { _refresh: true, tool: data.tool, changes: data.changes }, + }); + } + break; +``` + +**Change**: Use the spec directly instead of triggering reload. + +```typescript +case 'spec_updated': + console.log('[useChat] Spec updated by assistant:', data.tool); + // Directly update spec store instead of triggering HTTP reload + if (data.spec && onSpecUpdated) { + onSpecUpdated(data.spec); + } + break; +``` + +**Add callback to hook options**: +```typescript +interface UseChatOptions { + // ... existing + onSpecUpdated?: (spec: AtomizerSpec) => void; // NEW +} +``` + +--- + +#### Task 1.6: Wire Canvas to Use Direct Spec Updates +**File**: `frontend/src/pages/CanvasView.tsx` + +**Current** (line 57-64): +```typescript +onCanvasModification: chatPowerMode ? (modification) => { + console.log('Canvas modification from Claude:', modification); + showNotification(`Claude: ${modification.action}...`); + reloadSpec(); +} : undefined, +``` + +**Change**: Use `onSpecUpdated` callback. + +```typescript +const { setSpec } = useSpecStore(); + +// In useChat options: +onSpecUpdated: chatPowerMode ? 
(spec) => { + console.log('Spec updated by Claude'); + setSpec(spec); // Direct update, no HTTP reload + showNotification('Canvas updated by Claude'); +} : undefined, +``` + +--- + +### Phase 2: Streaming Responses (P1) + +#### Task 2.1: Implement Streaming in Claude Agent +**File**: `backend/api/services/claude_agent.py` + +**Add new method**: +```python +async def chat_stream( + self, + message: str, + conversation_history: List[Dict[str, Any]] +) -> AsyncGenerator[Dict[str, Any], None]: + """ + Stream chat response with tool calls. + Yields events: text_delta, tool_start, tool_result, done + """ + # Rebuild system prompt with current canvas state + self.system_prompt = self._build_system_prompt() + + messages = conversation_history + [{"role": "user", "content": message}] + + # Use streaming API + with self.client.messages.stream( + model="claude-sonnet-4-20250514", + max_tokens=4096, + system=self.system_prompt, + messages=messages, + tools=self.tools + ) as stream: + for event in stream: + if event.type == "content_block_delta": + if hasattr(event.delta, "text"): + yield {"type": "text_delta", "content": event.delta.text} + + # Get final response for tool handling + response = stream.get_final_message() + + # Process tool calls + for block in response.content: + if block.type == "tool_use": + yield {"type": "tool_start", "tool": block.name, "input": block.input} + + # Execute tool + result = self._execute_tool_sync(block.name, block.input) + + yield {"type": "tool_result", "tool": block.name, "result": result} + + # If spec changed, yield updated spec + if block.name in ["add_design_variable", "add_extractor", + "add_objective", "add_constraint", + "update_spec_field", "remove_node"]: + spec = self.load_current_spec() + if spec: + yield {"type": "spec_updated", "spec": spec} + + yield {"type": "done"} +``` + +--- + +#### Task 2.2: Use Streaming in WebSocket Handler +**File**: `backend/api/routes/claude.py` + +**Change `power_mode_websocket`** (line 464-501): 
+```python
+if data.get("type") == "message":
+    content = data.get("content", "")
+    if not content:
+        continue
+
+    try:
+        # Stream the response
+        async for event in agent.chat_stream(content, conversation_history):
+            await websocket.send_json(event)
+
+        # Update conversation history
+        # (need to track this differently with streaming)
+
+    except Exception as e:
+        import traceback
+        traceback.print_exc()
+        await websocket.send_json({
+            "type": "error",
+            "message": str(e),
+        })
+```
+
+---
+
+#### Task 2.3: Frontend - Handle Streaming Text
+**File**: `frontend/src/hooks/useChat.ts`
+
+**Add state for streaming** (keep a ref next to the state so the WebSocket `onmessage` closure never reads a stale value):
+```typescript
+const streamingTextRef = useRef('');
+const [streamingText, setStreamingText] = useState('');
+```
+
+**Handle `text_delta`**:
+```typescript
+case 'text_delta':
+  streamingTextRef.current += data.content;
+  setStreamingText(streamingTextRef.current); // re-render the partial message
+  break;
+
+case 'done': {
+  // Finalize the streaming message. Read the ref, not `streamingText`:
+  // the onmessage handler's closure may hold a stale copy of the state.
+  const finalText = streamingTextRef.current;
+  if (finalText) {
+    setState(prev => ({
+      ...prev,
+      messages: [...prev.messages, {
+        id: Date.now().toString(),
+        role: 'assistant',
+        content: finalText
+      }]
+    }));
+    streamingTextRef.current = '';
+    setStreamingText('');
+  }
+  setState(prev => ({ ...prev, isThinking: false }));
+  break;
+}
+```
+
+---
+
+### Phase 3: Study Creation (P1)
+
+#### Task 3.1: Add `create_study` Tool
+**File**: `backend/api/services/claude_agent.py`
+
+**Add to `_define_tools()`**:
+```python
+{
+    "name": "create_study",
+    "description": "Create a new optimization study with directory structure and atomizer_spec.json. Use this when the user wants to start a new optimization from scratch.",
+    "input_schema": {
+        "type": "object",
+        "properties": {
+            "name": {
+                "type": "string",
+                "description": "Study name in snake_case (e.g., 'bracket_mass_v1')"
+            },
+            "category": {
+                "type": "string",
+                "description": "Category folder (e.g., 'Simple_Bracket', 'M1_Mirror'). Optional."
+            },
+            "model_path": {
+                "type": "string",
+                "description": "Path to NX simulation file (.sim). Optional, can be set later."
+ }, + "description": { + "type": "string", + "description": "Brief description of the optimization goal" + } + }, + "required": ["name"] + } +}, +``` + +**Add implementation**: +```python +def _tool_create_study(self, params: Dict[str, Any]) -> str: + """Create a new study with directory structure and atomizer_spec.json""" + study_name = params['name'] + category = params.get('category', '') + model_path = params.get('model_path', '') + description = params.get('description', '') + + # Build study path + if category: + study_dir = STUDIES_DIR / category / study_name + else: + study_dir = STUDIES_DIR / study_name + + # Check if exists + if study_dir.exists(): + return f"✗ Study '{study_name}' already exists at {study_dir}" + + # Create directory structure + study_dir.mkdir(parents=True, exist_ok=True) + (study_dir / "1_setup").mkdir(exist_ok=True) + (study_dir / "2_iterations").mkdir(exist_ok=True) + (study_dir / "3_results").mkdir(exist_ok=True) + + # Create initial spec + spec = { + "meta": { + "version": "2.0", + "study_name": study_name, + "description": description, + "created_at": datetime.now().isoformat(), + "created_by": "claude_agent" + }, + "model": { + "sim": { + "path": model_path, + "solver": "nastran" + } + }, + "design_variables": [], + "extractors": [], + "objectives": [], + "constraints": [], + "optimization": { + "algorithm": {"type": "TPE"}, + "budget": {"max_trials": 100} + }, + "canvas": { + "edges": [], + "layout_version": "2.0" + } + } + + # Save spec + spec_path = study_dir / "atomizer_spec.json" + with open(spec_path, 'w', encoding='utf-8') as f: + json.dump(spec, f, indent=2) + + # Update agent context + self.study_id = f"{category}/{study_name}" if category else study_name + self.study_dir = study_dir + self.canvas_state = spec + + return f"✓ Created study '{study_name}' at {study_dir}\n\nThe canvas is now showing this empty study. You can start adding design variables, extractors, and objectives." 
+``` + +**Add to tool dispatcher**: +```python +elif tool_name == "create_study": + return self._tool_create_study(tool_input) +``` + +--- + +### Phase 4: Interview Engine (P2) + +#### Task 4.1: Create Interview Engine Class +**File**: `backend/api/services/interview_engine.py` (NEW) + +```python +""" +Interview Engine for guided study creation. + +Walks the user through creating an optimization study step-by-step, +building the atomizer_spec.json incrementally. +""" + +from typing import Dict, List, Optional, Any +from dataclasses import dataclass, field +from enum import Enum +import re + + +class InterviewPhase(Enum): + WELCOME = "welcome" + MODEL = "model" + OBJECTIVES = "objectives" + DESIGN_VARS = "design_vars" + CONSTRAINTS = "constraints" + METHOD = "method" + REVIEW = "review" + COMPLETE = "complete" + + +@dataclass +class InterviewState: + phase: InterviewPhase = InterviewPhase.WELCOME + collected: Dict[str, Any] = field(default_factory=dict) + spec: Dict[str, Any] = field(default_factory=dict) + model_expressions: List[Dict] = field(default_factory=list) + + +class InterviewEngine: + """Guided study creation through conversation""" + + PHASE_QUESTIONS = { + InterviewPhase.WELCOME: "What kind of optimization do you want to set up? 
(e.g., minimize mass of a bracket, reduce wavefront error of a mirror)", + InterviewPhase.MODEL: "What's the path to your NX simulation file (.sim)?\n(You can type the path or I can help you find it)", + InterviewPhase.OBJECTIVES: "What do you want to optimize?\n\nCommon objectives:\n- Minimize mass/weight\n- Minimize displacement (maximize stiffness)\n- Minimize stress\n- Minimize wavefront error (WFE)\n\nYou can have multiple objectives (multi-objective optimization).", + InterviewPhase.DESIGN_VARS: "Which parameters should vary during optimization?\n\n{suggestions}", + InterviewPhase.CONSTRAINTS: "Any constraints to respect?\n\nExamples:\n- Maximum stress ≤ 200 MPa\n- Minimum frequency ≥ 50 Hz\n- Maximum mass ≤ 5 kg\n\n(Say 'none' if no constraints)", + InterviewPhase.METHOD: "Based on your setup, I recommend **{method}**.\n\nReason: {reason}\n\nShould I use this method?", + InterviewPhase.REVIEW: "Here's your configuration:\n\n{summary}\n\nReady to create the study? (yes/no)", + } + + def __init__(self): + self.state = InterviewState() + self._init_spec() + + def _init_spec(self): + """Initialize empty spec structure""" + self.state.spec = { + "meta": {"version": "2.0"}, + "model": {"sim": {"path": "", "solver": "nastran"}}, + "design_variables": [], + "extractors": [], + "objectives": [], + "constraints": [], + "optimization": { + "algorithm": {"type": "TPE"}, + "budget": {"max_trials": 100} + }, + "canvas": {"edges": [], "layout_version": "2.0"} + } + + def get_current_question(self) -> str: + """Get the question for the current phase""" + question = self.PHASE_QUESTIONS.get(self.state.phase, "") + + # Dynamic substitutions + if self.state.phase == InterviewPhase.DESIGN_VARS: + suggestions = self._format_dv_suggestions() + question = question.format(suggestions=suggestions) + elif self.state.phase == InterviewPhase.METHOD: + method, reason = self._recommend_method() + question = question.format(method=method, reason=reason) + elif self.state.phase == 
InterviewPhase.REVIEW: + summary = self._format_summary() + question = question.format(summary=summary) + + return question + + def process_answer(self, answer: str) -> Dict[str, Any]: + """Process user's answer and advance interview""" + phase = self.state.phase + result = { + "phase": phase.value, + "spec_changes": [], + "next_phase": None, + "question": None, + "complete": False, + "error": None + } + + try: + if phase == InterviewPhase.WELCOME: + self._process_welcome(answer) + elif phase == InterviewPhase.MODEL: + self._process_model(answer, result) + elif phase == InterviewPhase.OBJECTIVES: + self._process_objectives(answer, result) + elif phase == InterviewPhase.DESIGN_VARS: + self._process_design_vars(answer, result) + elif phase == InterviewPhase.CONSTRAINTS: + self._process_constraints(answer, result) + elif phase == InterviewPhase.METHOD: + self._process_method(answer, result) + elif phase == InterviewPhase.REVIEW: + self._process_review(answer, result) + + # Advance to next phase + self._advance_phase() + + if self.state.phase == InterviewPhase.COMPLETE: + result["complete"] = True + else: + result["next_phase"] = self.state.phase.value + result["question"] = self.get_current_question() + + except Exception as e: + result["error"] = str(e) + + return result + + def _advance_phase(self): + """Move to next phase""" + phases = list(InterviewPhase) + current_idx = phases.index(self.state.phase) + if current_idx < len(phases) - 1: + self.state.phase = phases[current_idx + 1] + + def _process_welcome(self, answer: str): + """Extract optimization type from welcome""" + self.state.collected["goal"] = answer + + # Try to infer study name + words = answer.lower().split() + if "bracket" in words: + self.state.collected["geometry_type"] = "bracket" + elif "mirror" in words: + self.state.collected["geometry_type"] = "mirror" + elif "beam" in words: + self.state.collected["geometry_type"] = "beam" + + def _process_model(self, answer: str, result: Dict): + """Extract 
model path""" + # Extract path from answer + path = answer.strip().strip('"').strip("'") + self.state.spec["model"]["sim"]["path"] = path + self.state.collected["model_path"] = path + result["spec_changes"].append(f"Set model path: {path}") + + def _process_objectives(self, answer: str, result: Dict): + """Extract objectives from natural language""" + answer_lower = answer.lower() + objectives = [] + extractors = [] + + # Mass/weight + if any(w in answer_lower for w in ["mass", "weight", "light"]): + extractors.append({ + "id": "ext_mass", + "name": "Mass", + "type": "bdf_mass", + "enabled": True + }) + objectives.append({ + "id": "obj_mass", + "name": "Mass", + "direction": "minimize", + "source": {"extractor_id": "ext_mass", "output_key": "mass"}, + "enabled": True + }) + result["spec_changes"].append("Added objective: minimize mass") + + # Displacement/stiffness + if any(w in answer_lower for w in ["displacement", "stiff", "deflection"]): + extractors.append({ + "id": "ext_disp", + "name": "Max Displacement", + "type": "displacement", + "config": {"node_set": "all", "direction": "magnitude"}, + "enabled": True + }) + objectives.append({ + "id": "obj_disp", + "name": "Max Displacement", + "direction": "minimize", + "source": {"extractor_id": "ext_disp", "output_key": "max_displacement"}, + "enabled": True + }) + result["spec_changes"].append("Added objective: minimize displacement") + + # Stress + if "stress" in answer_lower and "constraint" not in answer_lower: + extractors.append({ + "id": "ext_stress", + "name": "Max Stress", + "type": "stress", + "config": {"stress_type": "von_mises"}, + "enabled": True + }) + objectives.append({ + "id": "obj_stress", + "name": "Max Stress", + "direction": "minimize", + "source": {"extractor_id": "ext_stress", "output_key": "max_stress"}, + "enabled": True + }) + result["spec_changes"].append("Added objective: minimize stress") + + # WFE/wavefront + if any(w in answer_lower for w in ["wfe", "wavefront", "optical", 
"zernike"]): + extractors.append({ + "id": "ext_wfe", + "name": "Wavefront Error", + "type": "zernike", + "config": {"terms": [4, 5, 6, 7, 8, 9, 10, 11]}, + "enabled": True + }) + objectives.append({ + "id": "obj_wfe", + "name": "WFE RMS", + "direction": "minimize", + "source": {"extractor_id": "ext_wfe", "output_key": "wfe_rms"}, + "enabled": True + }) + result["spec_changes"].append("Added objective: minimize wavefront error") + + self.state.spec["extractors"].extend(extractors) + self.state.spec["objectives"].extend(objectives) + self.state.collected["objectives"] = [o["name"] for o in objectives] + + def _process_design_vars(self, answer: str, result: Dict): + """Extract design variables""" + answer_lower = answer.lower() + dvs = [] + + # Parse patterns like "thickness 2-10mm" or "thickness from 2 to 10" + # Pattern: name [range] + patterns = [ + r'(\w+)\s+(\d+(?:\.\d+)?)\s*[-–to]+\s*(\d+(?:\.\d+)?)\s*(mm|deg|°)?', + r'(\w+)\s+\[(\d+(?:\.\d+)?),?\s*(\d+(?:\.\d+)?)\]', + ] + + for pattern in patterns: + matches = re.findall(pattern, answer_lower) + for match in matches: + name = match[0] + min_val = float(match[1]) + max_val = float(match[2]) + unit = match[3] if len(match) > 3 else "" + + dv = { + "id": f"dv_{name}", + "name": name.replace("_", " ").title(), + "expression_name": name, + "type": "continuous", + "bounds": {"min": min_val, "max": max_val}, + "baseline": (min_val + max_val) / 2, + "enabled": True + } + if unit: + dv["units"] = unit + dvs.append(dv) + result["spec_changes"].append(f"Added design variable: {name} [{min_val}, {max_val}]") + + # If no pattern matched, try to use suggestions + if not dvs and self.state.model_expressions: + # User might have said "yes" or named expressions without ranges + for expr in self.state.model_expressions[:3]: # Use top 3 + if expr["name"].lower() in answer_lower or "yes" in answer_lower or "all" in answer_lower: + val = expr["value"] + dv = { + "id": f"dv_{expr['name']}", + "name": expr["name"], + 
"expression_name": expr["name"], + "type": "continuous", + "bounds": {"min": val * 0.5, "max": val * 1.5}, + "baseline": val, + "enabled": True + } + dvs.append(dv) + result["spec_changes"].append(f"Added design variable: {expr['name']} [{val*0.5:.2f}, {val*1.5:.2f}]") + + self.state.spec["design_variables"].extend(dvs) + self.state.collected["design_vars"] = [dv["name"] for dv in dvs] + + def _process_constraints(self, answer: str, result: Dict): + """Extract constraints""" + answer_lower = answer.lower() + + if answer_lower in ["none", "no", "skip", "n/a"]: + return + + constraints = [] + + # Pattern: "stress < 200" or "stress <= 200 MPa" + stress_match = re.search(r'stress\s*([<>=≤≥]+)\s*(\d+(?:\.\d+)?)', answer_lower) + if stress_match: + op = stress_match.group(1).replace("≤", "<=").replace("≥", ">=") + val = float(stress_match.group(2)) + + # Add extractor if not exists + if not any(e["type"] == "stress" for e in self.state.spec["extractors"]): + self.state.spec["extractors"].append({ + "id": "ext_stress_con", + "name": "Stress for Constraint", + "type": "stress", + "config": {"stress_type": "von_mises"}, + "enabled": True + }) + + constraints.append({ + "id": "con_stress", + "name": "Max Stress", + "operator": op if op in ["<=", ">=", "<", ">", "=="] else "<=", + "threshold": val, + "source": {"extractor_id": "ext_stress_con", "output_key": "max_stress"}, + "enabled": True + }) + result["spec_changes"].append(f"Added constraint: stress {op} {val}") + + # Similar patterns for frequency, mass, displacement... 
+ + self.state.spec["constraints"].extend(constraints) + + def _process_method(self, answer: str, result: Dict): + """Confirm or change optimization method""" + answer_lower = answer.lower() + + if any(w in answer_lower for w in ["yes", "ok", "sure", "good", "proceed"]): + # Keep recommended method + pass + elif "nsga" in answer_lower: + self.state.spec["optimization"]["algorithm"]["type"] = "NSGA-II" + elif "tpe" in answer_lower: + self.state.spec["optimization"]["algorithm"]["type"] = "TPE" + elif "cma" in answer_lower: + self.state.spec["optimization"]["algorithm"]["type"] = "CMA-ES" + + result["spec_changes"].append(f"Set method: {self.state.spec['optimization']['algorithm']['type']}") + + def _process_review(self, answer: str, result: Dict): + """Confirm or revise""" + answer_lower = answer.lower() + + if any(w in answer_lower for w in ["yes", "ok", "create", "proceed", "looks good"]): + result["complete"] = True + else: + # User wants changes - stay in review or go back + result["error"] = "What would you like to change?" + + def _recommend_method(self) -> tuple: + """Recommend optimization method based on problem""" + n_obj = len(self.state.spec["objectives"]) + n_dv = len(self.state.spec["design_variables"]) + + if n_obj > 1: + return "NSGA-II", f"You have {n_obj} objectives, which requires multi-objective optimization" + elif n_dv > 10: + return "CMA-ES", f"With {n_dv} design variables, CMA-ES handles high dimensions well" + else: + return "TPE", "TPE (Bayesian optimization) is efficient for single-objective problems" + + def _format_dv_suggestions(self) -> str: + """Format design variable suggestions""" + if self.state.model_expressions: + lines = ["I found these expressions in your model:"] + for expr in self.state.model_expressions[:5]: + lines.append(f" - {expr['name']} = {expr['value']}") + lines.append("\nWhich ones should vary? 
(or describe your own)") + return "\n".join(lines) + return "Describe the parameters and their ranges (e.g., 'thickness 2-10mm, width 5-20mm')" + + def _format_summary(self) -> str: + """Format configuration summary""" + spec = self.state.spec + lines = [] + + lines.append(f"**Model**: {spec['model']['sim']['path'] or 'Not set'}") + + lines.append(f"\n**Design Variables ({len(spec['design_variables'])}):**") + for dv in spec["design_variables"]: + b = dv["bounds"] + lines.append(f" - {dv['name']}: [{b['min']}, {b['max']}]") + + lines.append(f"\n**Objectives ({len(spec['objectives'])}):**") + for obj in spec["objectives"]: + lines.append(f" - {obj['direction']} {obj['name']}") + + lines.append(f"\n**Constraints ({len(spec['constraints'])}):**") + if spec["constraints"]: + for con in spec["constraints"]: + lines.append(f" - {con['name']} {con['operator']} {con['threshold']}") + else: + lines.append(" - None") + + lines.append(f"\n**Method**: {spec['optimization']['algorithm']['type']}") + + return "\n".join(lines) + + def get_spec(self) -> Dict[str, Any]: + """Get the built spec""" + return self.state.spec + + def set_model_expressions(self, expressions: List[Dict]): + """Set model expressions for DV suggestions""" + self.state.model_expressions = expressions +``` + +--- + +#### Task 4.2: Add Interview Tools to Claude Agent +**File**: `backend/api/services/claude_agent.py` + +**Add tools**: +```python +{ + "name": "start_interview", + "description": "Start a guided interview to create a new optimization study. Use this when the user wants help setting up an optimization but hasn't provided full details.", + "input_schema": { + "type": "object", + "properties": {}, + "required": [] + } +}, +{ + "name": "interview_answer", + "description": "Process the user's answer during an interview. 
Extract relevant information and advance the interview.", + "input_schema": { + "type": "object", + "properties": { + "answer": { + "type": "string", + "description": "The user's answer to the current interview question" + } + }, + "required": ["answer"] + } +}, +``` + +**Add state and implementations**: +```python +def __init__(self, study_id: Optional[str] = None): + # ... existing + self.interview: Optional[InterviewEngine] = None + +def _tool_start_interview(self, params: Dict[str, Any]) -> str: + """Start guided study creation""" + from api.services.interview_engine import InterviewEngine + self.interview = InterviewEngine() + question = self.interview.get_current_question() + return f"Let's set up your optimization step by step.\n\n{question}" + +def _tool_interview_answer(self, params: Dict[str, Any]) -> str: + """Process interview answer""" + if not self.interview: + return "No interview in progress. Use start_interview first." + + result = self.interview.process_answer(params["answer"]) + + response_parts = [] + + # Show what was extracted + if result["spec_changes"]: + response_parts.append("**Updated:**") + for change in result["spec_changes"]: + response_parts.append(f" ✓ {change}") + + if result["error"]: + response_parts.append(f"\n{result['error']}") + elif result["complete"]: + # Create the study + spec = self.interview.get_spec() + # ... 
create study directory and save spec
+            response_parts.append("\n✓ **Interview complete!** Creating your study...")
+            self.canvas_state = spec
+        elif result["question"]:
+            response_parts.append(f"\n{result['question']}")
+
+        return "\n".join(response_parts)
+```
+
+---
+
+### Phase 5: UI Polish (P2/P3)
+
+#### Task 5.1: Tool Call Indicators
+**File**: `frontend/src/components/chat/ToolIndicator.tsx` (NEW)
+
+```typescript
+import { Loader2, Check, Variable, Cpu, Target, Lock, FolderPlus, Search, Wrench, type LucideIcon } from 'lucide-react';
+
+interface ToolIndicatorProps {
+  tool: string;
+  status: 'running' | 'complete';
+  result?: string;
+}
+
+const TOOL_ICONS: Record<string, LucideIcon> = {
+  add_design_variable: Variable,
+  add_extractor: Cpu,
+  add_objective: Target,
+  add_constraint: Lock,
+  create_study: FolderPlus,
+  introspect_model: Search,
+};
+
+const TOOL_LABELS: Record<string, string> = {
+  add_design_variable: 'Adding design variable',
+  add_extractor: 'Adding extractor',
+  add_objective: 'Adding objective',
+  add_constraint: 'Adding constraint',
+  create_study: 'Creating study',
+  update_spec_field: 'Updating configuration',
+  remove_node: 'Removing node',
+};
+
+export function ToolIndicator({ tool, status, result }: ToolIndicatorProps) {
+  const Icon = TOOL_ICONS[tool] || Wrench;
+  const label = TOOL_LABELS[tool] || tool;
+
+  return (
+    <div className="tool-indicator">
+      {status === 'running' ? (
+        <Loader2 className="icon spin" />
+      ) : (
+        <Check className="icon done" />
+      )}
+      <Icon className="icon" />
+      <span>{label}</span>
+      {status === 'complete' && result && (
+        <span className="tool-result">{result}</span>
+      )}
+    </div>
+  );
+}
+```
+
+---
+
+#### Task 5.2: Display Tool Calls in Chat
+**File**: `frontend/src/components/chat/ChatMessage.tsx`
+
+**Add tool call rendering**:
+```typescript
+import { ToolIndicator } from './ToolIndicator';
+
+// In message rendering:
+{message.toolCalls?.map((tc, idx) => (
+  <ToolIndicator key={idx} tool={tc.tool} status={tc.status} result={tc.result} />
+))}
+```
+
+---
+
+## File Summary
+
+### Files to Modify
+
+| File | Changes |
+|------|---------|
+| `backend/api/services/claude_agent.py` | Add canvas context, streaming, create_study, interview tools |
+| `backend/api/routes/claude.py` | Send full spec, handle canvas_edit, use streaming |
+| `frontend/src/hooks/useChat.ts` | Add notifyCanvasEdit, handle streaming, onSpecUpdated |
+| `frontend/src/pages/CanvasView.tsx` | Wire onSpecUpdated, pass notifyCanvasEdit |
+| `frontend/src/components/canvas/SpecRenderer.tsx` | Call notifyCanvasEdit on user edits |
+
+### Files to Create
+
+| File | Purpose |
+|------|---------|
+| `backend/api/services/interview_engine.py` | Guided study creation |
+| `frontend/src/components/chat/ToolIndicator.tsx` | Tool call UI |
+
+---
+
+## Testing Checklist
+
+### Phase 1 Tests
+- [ ] Claude mentions current canvas state in response
+- [ ] Canvas updates without HTTP reload when Claude modifies spec
+- [ ] User edits canvas → ask Claude about it → Claude knows the change
+
+### Phase 2 Tests
+- [ ] Text streams in as Claude types
+- [ ] Tool calls show "running" then "complete" status
+
+### Phase 3 Tests
+- [ ] "Create a new study called bracket_v2" creates directory + spec
+- [ ] Canvas shows new empty study
+
+### Phase 4 Tests
+- [ ] "Help me set up an optimization" starts interview
+- [ ] Each answer updates canvas incrementally
+- [ ] Interview completes with valid spec
+
+---
+
+## Ralph Loop Execution Notes
+
+1. **Start with Phase 1** - It's the foundation for everything else
+2. **Test each task individually** before moving to the next
+3. **The key insight**: `atomizer_spec.json` is the single source of truth
+4. 
**Don't break existing functionality** - Power mode should still work + +### Quick Wins (Do First) +1. Task 1.1 - Add canvas state to Claude context (easy, high value) +2. Task 1.2 - Send full spec in response (easy, removes HTTP reload) +3. Task 1.5 - Frontend use spec directly (easy, faster updates) + +### Recommended Order +``` +1.1 → 1.2 → 1.5 → 1.6 → 1.3 → 1.4 → 2.1 → 2.2 → 2.3 → 3.1 → 4.1 → 4.2 → 5.1 → 5.2 +``` + +--- + +## Success Criteria + +The project is complete when: + +1. **Bi-directional sync works**: Claude modifies → Canvas updates instantly. User edits → Claude sees in next message. + +2. **Streaming works**: Text appears as Claude types, tool calls show in real-time. + +3. **Study creation works**: User can say "create bracket optimization with mass objective" and see it built on canvas. + +4. **Interview works**: User can say "help me set up an optimization" and be guided through the process. + +5. **No HTTP reloads**: Canvas updates purely through WebSocket. diff --git a/docs/plans/DASHBOARD_CLAUDE_CODE_INTEGRATION.md b/docs/plans/DASHBOARD_CLAUDE_CODE_INTEGRATION.md new file mode 100644 index 00000000..f5a32e82 --- /dev/null +++ b/docs/plans/DASHBOARD_CLAUDE_CODE_INTEGRATION.md @@ -0,0 +1,693 @@ +# Dashboard Claude Code Integration Plan + +**Date**: January 16, 2026 +**Status**: 🟢 IMPLEMENTED +**Priority**: CRITICAL +**Implemented**: January 16, 2026 + +--- + +## Problem Statement + +The dashboard chat assistant is **fundamentally underpowered** compared to Claude Code CLI. Users expect the same level of intelligence, proactivity, and capability when interacting with the dashboard as they get from the terminal. + +### Current Experience (Terminal - Claude Code CLI) +``` +User: "Add 10 new design variables to the M1 mirror study" + +Claude Code: +1. Reads optimization_config.json +2. Understands the current structure +3. Adds 10 variables with intelligent defaults +4. ACTUALLY MODIFIES the file +5. Shows the diff +6. 
Can immediately run/test +``` + +### Current Experience (Dashboard Chat) +``` +User: "Add 10 new design variables" + +Dashboard Chat: +1. Calls MCP tool canvas_add_node +2. Returns JSON instruction +3. Frontend SHOULD apply it but doesn't +4. Nothing visible happens +5. User frustrated +``` + +--- + +## Root Cause Analysis + +### Issue 1: MCP Tools Don't Actually Modify Anything + +The current MCP tools (`canvas_add_node`, etc.) just return instructions like: +```json +{ + "success": true, + "modification": { + "action": "add_node", + "nodeType": "designVar", + "data": {...} + } +} +``` + +The **frontend is supposed to receive and apply these**, but: +- WebSocket message handling may not process tool results +- No automatic application of modifications +- User sees "success" message but nothing changes + +### Issue 2: Claude API vs Claude Code CLI + +| Capability | Claude API (Dashboard) | Claude Code CLI (Terminal) | +|------------|------------------------|---------------------------| +| Read files | Via MCP tool | Native | +| Write files | Via MCP tool (limited) | Native | +| Run commands | Via MCP tool (limited) | Native | +| Edit in place | NO | YES | +| Git operations | NO | YES | +| Multi-step reasoning | Limited | Full | +| Tool chaining | Awkward | Natural | +| Context window | 200k | Unlimited (summarization) | + +### Issue 3: Model Capability Gap + +Dashboard uses Claude API (likely Sonnet or Haiku for cost). Terminal uses **Opus 4.5** with full Claude Code capabilities. + +--- + +## Proposed Solution: Claude Code CLI Backend + +Instead of MCP tools calling Python scripts, **spawn actual Claude Code CLI sessions** in the backend that have full power. 
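
The backend's core job in this design is relaying CLI output to the frontend as the child process produces it. A minimal asyncio sketch of incremental line streaming, under the assumption that line-by-line forwarding is acceptable granularity; the command here is a stand-in for the actual `claude` invocation, and `stream_lines` is an illustrative helper, not part of the codebase:

```python
import asyncio
import sys
from typing import AsyncGenerator


async def stream_lines(*argv: str) -> AsyncGenerator[str, None]:
    """Spawn a subprocess and yield each stdout line as it arrives."""
    proc = await asyncio.create_subprocess_exec(
        *argv,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,  # fold stderr into the stream
    )
    assert proc.stdout is not None
    while True:
        line = await proc.stdout.readline()
        if not line:
            break  # EOF: child closed stdout
        yield line.decode()
    await proc.wait()


async def collect() -> list:
    # Stand-in command; the real session manager would spawn `claude` here
    cmd = [sys.executable, "-c", "print('hello'); print('world')"]
    return [chunk async for chunk in stream_lines(*cmd)]


chunks = asyncio.run(collect())
# chunks == ["hello\n", "world\n"]
```

Each yielded chunk maps naturally onto one `{"type": "text", ...}` WebSocket message, which is what gives the dashboard its real-time feel.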
+ +### Architecture + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ DASHBOARD FRONTEND │ +├─────────────────────────────────────────────────────────────────┤ +│ Canvas Builder │ Chat Panel │ Study Views │ Results │ +└────────┬────────────────┬─────────────────────────────────────┬─┘ + │ │ │ + │ WebSocket │ REST API │ + ▼ ▼ │ +┌─────────────────────────────────────────────────────────────────┐ +│ BACKEND (FastAPI) │ +├─────────────────────────────────────────────────────────────────┤ +│ │ +│ ┌─────────────────────────────────────────────────────────┐ │ +│ │ CLAUDE CODE SESSION MANAGER │ │ +│ │ │ │ +│ │ - Spawns claude CLI processes │ │ +│ │ - Maintains conversation context │ │ +│ │ - Streams output back to frontend │ │ +│ │ - Has FULL Atomizer codebase access │ │ +│ │ - Uses Opus 4.5 model │ │ +│ │ - Can edit files, run commands, modify studies │ │ +│ │ │ │ +│ └─────────────────────────────────────────────────────────┘ │ +│ │ │ +│ ▼ │ +│ ┌─────────────────────────────────────────────────────────┐ │ +│ │ ATOMIZER CODEBASE │ │ +│ │ │ │ +│ │ studies/ optimization_engine/ │ │ +│ │ M1_Mirror/ extractors/ │ │ +│ │ optimization_config.json runner.py │ │ +│ │ run_optimization.py ... │ │ +│ │ │ │ +│ └─────────────────────────────────────────────────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────┘ +``` + +### Key Changes + +1. **Backend spawns Claude Code CLI** instead of calling Claude API +2. **Full file system access** - Claude can read/write any file +3. **Full command execution** - Run Python, git, npm, etc. +4. **Opus 4.5 model** - Same intelligence as terminal +5. **Stream output** - Real-time feedback to user +6. 
**Canvas sync** - After Claude modifies files, canvas reloads from config + +--- + +## Implementation Plan + +### Phase 1: Claude Code CLI Session Manager + +**File**: `atomizer-dashboard/backend/api/services/claude_code_session.py` + +```python +""" +Claude Code CLI Session Manager + +Spawns actual Claude Code CLI processes with full Atomizer access. +This gives dashboard users the same power as terminal users. +""" + +import asyncio +import json +import os +import subprocess +from pathlib import Path +from typing import AsyncGenerator, Dict, List, Optional + +ATOMIZER_ROOT = Path(__file__).parent.parent.parent.parent.parent + +class ClaudeCodeSession: + """ + Manages a Claude Code CLI session. + + Unlike MCP tools, this spawns the actual claude CLI which has: + - Full file system access + - Full command execution + - Opus 4.5 model + - All Claude Code capabilities + """ + + def __init__(self, session_id: str, study_id: Optional[str] = None): + self.session_id = session_id + self.study_id = study_id + self.canvas_state: Optional[Dict] = None # Current canvas state from frontend + self.working_dir = ATOMIZER_ROOT + if study_id: + study_path = ATOMIZER_ROOT / "studies" / study_id + if study_path.exists(): + self.working_dir = study_path + + def set_canvas_state(self, canvas_state: Dict): + """Update canvas state from frontend""" + self.canvas_state = canvas_state + + async def send_message(self, message: str) -> AsyncGenerator[str, None]: + """ + Send message to Claude Code CLI and stream response. 
+
+        Uses claude CLI with:
+        - --print for non-interactive output
+        - --dangerously-skip-permissions for full access (controlled environment)
+        - Runs from Atomizer root to get CLAUDE.md context automatically
+        - Study-specific context injected into prompt
+        """
+        # Build study-specific context
+        study_context = self._build_study_context() if self.study_id else ""
+
+        # The user's message with study context prepended
+        full_message = f"""## Current Study Context
+{study_context}
+
+## User Request
+{message}
+
+Remember: You have FULL power to edit files. Make the actual changes, don't just describe them."""
+
+        # Write the prompt to a file for inspection/debugging. Note that
+        # `claude -p` takes the prompt text itself, not a file path, so the
+        # text is passed to the CLI directly below.
+        prompt_file = ATOMIZER_ROOT / f".claude-prompt-{self.session_id}.md"
+        prompt_file.write_text(full_message)
+
+        try:
+            # Spawn claude CLI from ATOMIZER_ROOT so it picks up CLAUDE.md
+            # This gives it full Atomizer context automatically
+            process = await asyncio.create_subprocess_exec(
+                "claude",
+                "--print",
+                "--dangerously-skip-permissions",  # Full access in controlled env
+                full_message,  # Prompt text as positional argument
+                stdout=asyncio.subprocess.PIPE,
+                stderr=asyncio.subprocess.PIPE,
+                cwd=str(ATOMIZER_ROOT),  # Run from root to get CLAUDE.md
+                env={
+                    **os.environ,
+                    "ATOMIZER_STUDY": self.study_id or "",
+                    "ATOMIZER_STUDY_PATH": str(self.working_dir),
+                },
+            )
+
+            # communicate() waits for the full response; true incremental
+            # streaming would read stdout line by line instead
+            stdout, stderr = await process.communicate()
+
+            if stdout:
+                yield stdout.decode()
+            if stderr and process.returncode != 0:
+                yield f"\n[Error]: {stderr.decode()}"
+
+        finally:
+            # Clean up prompt file
+            if prompt_file.exists():
+                prompt_file.unlink()
+
+    def _build_system_prompt(self) -> str:
+        """Build Atomizer-aware system prompt with full context"""
+
+        # Load CLAUDE.md for Atomizer system instructions
+        claude_md_path = ATOMIZER_ROOT / "CLAUDE.md"
+        claude_md_content = ""
+        if claude_md_path.exists():
+            claude_md_content = claude_md_path.read_text()
+
+        # Load study-specific context
+        study_context = ""
+        if self.study_id:
+            study_context = self._build_study_context()
+
+        # CLAUDE.md is sliced to 8000 chars to keep the prompt bounded
+        prompt = f"""# Atomizer Dashboard Assistant
+
+You are running as the Atomizer Dashboard Assistant with FULL Claude Code CLI capabilities.
+You have the same power as a terminal Claude Code session.
+
+## Atomizer System Instructions
+{claude_md_content[:8000]}
+
+## Your Capabilities
+
+You can and MUST:
+- Read and EDIT any file in the codebase
+- Modify optimization_config.json directly
+- Update run_optimization.py
+- Run Python scripts
+- Execute git commands
+- Create new studies
+- Modify existing studies
+
+When the user asks to add design variables, objectives, or other config changes:
+1. Read the current config file
+2. Make the actual modifications using the Edit tool
+3. Save the file
+4. Report what you changed with a diff
+
+DO NOT just return instructions - ACTUALLY MAKE THE CHANGES.
+
+## Current Context
+
+**Atomizer Root**: {ATOMIZER_ROOT}
+**Working Directory**: {self.working_dir}
+
+{study_context}
+
+## Important Paths
+
+- Studies: {ATOMIZER_ROOT / 'studies'}
+- Extractors: {ATOMIZER_ROOT / 'optimization_engine' / 'extractors'}
+- Protocols: {ATOMIZER_ROOT / 'docs' / 'protocols'}
+
+## After Making Changes
+
+After modifying any study files:
+1. Confirm the changes were saved
+2. Show the relevant diff
+3.
The dashboard canvas will auto-refresh to reflect your changes +""" + return prompt + + def _build_study_context(self) -> str: + """Build detailed context for the active study""" + context = f"## Active Study: {self.study_id}\n\n" + + # Find and read optimization_config.json + config_path = self.working_dir / "1_setup" / "optimization_config.json" + if not config_path.exists(): + config_path = self.working_dir / "optimization_config.json" + + if config_path.exists(): + import json + try: + config = json.loads(config_path.read_text()) + context += f"**Config File**: `{config_path}`\n\n" + + # Design variables summary + dvs = config.get("design_variables", []) + if dvs: + context += "### Design Variables\n\n" + context += "| Name | Min | Max | Baseline | Unit |\n" + context += "|------|-----|-----|----------|------|\n" + for dv in dvs[:15]: + name = dv.get("name", dv.get("expression_name", "?")) + min_v = dv.get("min", dv.get("lower", "?")) + max_v = dv.get("max", dv.get("upper", "?")) + baseline = dv.get("baseline", "-") + unit = dv.get("units", dv.get("unit", "-")) + context += f"| {name} | {min_v} | {max_v} | {baseline} | {unit} |\n" + if len(dvs) > 15: + context += f"\n*... 
and {len(dvs) - 15} more*\n" + context += "\n" + + # Objectives + objs = config.get("objectives", []) + if objs: + context += "### Objectives\n\n" + for obj in objs: + name = obj.get("name", "?") + direction = obj.get("direction", "minimize") + weight = obj.get("weight", 1) + context += f"- **{name}**: {direction} (weight: {weight})\n" + context += "\n" + + # Extraction method (for Zernike) + ext_method = config.get("extraction_method", {}) + if ext_method: + context += "### Extraction Method\n\n" + context += f"- Type: {ext_method.get('type', '?')}\n" + context += f"- Class: {ext_method.get('class', '?')}\n" + if ext_method.get("inner_radius"): + context += f"- Inner Radius: {ext_method.get('inner_radius')}\n" + context += "\n" + + # Zernike settings + zernike = config.get("zernike_settings", {}) + if zernike: + context += "### Zernike Settings\n\n" + context += f"- Modes: {zernike.get('n_modes', '?')}\n" + context += f"- Filter Low Orders: {zernike.get('filter_low_orders', '?')}\n" + context += f"- Subcases: {zernike.get('subcases', [])}\n" + context += "\n" + + # Algorithm + method = config.get("method", config.get("optimization", {}).get("sampler", "TPE")) + max_trials = config.get("max_trials", config.get("optimization", {}).get("n_trials", 100)) + context += f"### Algorithm\n\n" + context += f"- Method: {method}\n" + context += f"- Max Trials: {max_trials}\n\n" + + except Exception as e: + context += f"*Error reading config: {e}*\n\n" + else: + context += "*No optimization_config.json found*\n\n" + + # Check for run_optimization.py + run_opt_path = self.working_dir / "run_optimization.py" + if run_opt_path.exists(): + context += f"**Run Script**: `{run_opt_path}` (exists)\n\n" + + # Check results + db_path = self.working_dir / "3_results" / "study.db" + if db_path.exists(): + context += "**Results Database**: exists\n" + # Could query trial count here + else: + context += "**Results Database**: not found (no optimization run yet)\n" + + return context +``` + 
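
The context builder above boils down to reading `optimization_config.json` and rendering its contents as markdown for the prompt. A condensed, runnable sketch of that summarization step; `summarize_config` is an illustrative helper, and the table layout mirrors (but simplifies) `_build_study_context`:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory


def summarize_config(config: dict) -> str:
    """Render the design variables of a config as a markdown table."""
    lines = ["### Design Variables", "", "| Name | Min | Max |", "|------|-----|-----|"]
    for dv in config.get("design_variables", []):
        lines.append(
            f"| {dv.get('name', '?')} | {dv.get('min', '?')} | {dv.get('max', '?')} |"
        )
    return "\n".join(lines)


with TemporaryDirectory() as tmp:
    # Write a minimal config, then round-trip it the way the session does
    cfg_path = Path(tmp) / "optimization_config.json"
    cfg_path.write_text(
        json.dumps({"design_variables": [{"name": "thickness", "min": 2, "max": 10}]})
    )
    context = summarize_config(json.loads(cfg_path.read_text()))

# context contains the row "| thickness | 2 | 10 |"
```

Keeping this summarization pure (config dict in, markdown out) makes it easy to unit-test independently of the CLI session plumbing.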
+### Phase 2: WebSocket Handler for Claude Code
+
+**File**: `atomizer-dashboard/backend/api/routes/claude_code.py`
+
+```python
+"""
+Claude Code WebSocket Routes
+
+Provides WebSocket endpoint that connects to actual Claude Code CLI.
+"""
+
+from typing import Dict
+
+from fastapi import APIRouter, WebSocket, WebSocketDisconnect
+from api.services.claude_code_session import ClaudeCodeSession
+import uuid
+
+router = APIRouter()
+
+# Active sessions
+sessions: Dict[str, ClaudeCodeSession] = {}
+
+@router.websocket("/ws/{study_id}")
+async def claude_code_websocket(websocket: WebSocket, study_id: str):
+    """
+    WebSocket for full Claude Code CLI access.
+
+    This gives dashboard users the SAME power as terminal users.
+    """
+    await websocket.accept()
+
+    session_id = str(uuid.uuid4())[:8]
+    session = ClaudeCodeSession(session_id, study_id)
+    sessions[session_id] = session
+
+    try:
+        while True:
+            data = await websocket.receive_json()
+
+            if data.get("type") == "message":
+                content = data.get("content", "")
+
+                # Stream response from Claude Code CLI
+                async for chunk in session.send_message(content):
+                    await websocket.send_json({
+                        "type": "text",
+                        "content": chunk,
+                    })
+
+                await websocket.send_json({"type": "done"})
+
+                # After response, trigger canvas refresh
+                await websocket.send_json({
+                    "type": "refresh_canvas",
+                    "study_id": study_id,
+                })
+
+    except WebSocketDisconnect:
+        sessions.pop(session_id, None)
+```
+
+### Phase 3: Frontend - Use Claude Code Endpoint
+
+**File**: `atomizer-dashboard/frontend/src/hooks/useClaudeCode.ts`
+
+```typescript
+/**
+ * Hook for Claude Code CLI integration
+ *
+ * Connects to backend that spawns actual Claude Code CLI processes.
+ * This gives full power: file editing, command execution, etc.
+ */ + +export function useClaudeCode(studyId?: string) { + const [messages, setMessages] = useState([]); + const [isThinking, setIsThinking] = useState(false); + const wsRef = useRef(null); + + // Reload canvas after Claude makes changes + const { loadFromConfig } = useCanvasStore(); + + useEffect(() => { + // Connect to Claude Code WebSocket + const ws = new WebSocket(`ws://${location.host}/api/claude-code/ws/${studyId || ''}`); + + ws.onmessage = (event) => { + const data = JSON.parse(event.data); + + if (data.type === 'text') { + // Stream Claude's response + appendToLastMessage(data.content); + } + else if (data.type === 'done') { + setIsThinking(false); + } + else if (data.type === 'refresh_canvas') { + // Claude made file changes - reload canvas from config + reloadCanvasFromStudy(data.study_id); + } + }; + + wsRef.current = ws; + return () => ws.close(); + }, [studyId]); + + const sendMessage = async (content: string) => { + setIsThinking(true); + addMessage({ role: 'user', content }); + addMessage({ role: 'assistant', content: '', isStreaming: true }); + + wsRef.current?.send(JSON.stringify({ + type: 'message', + content, + })); + }; + + return { messages, isThinking, sendMessage }; +} +``` + +### Phase 4: Canvas Auto-Refresh + +When Claude modifies `optimization_config.json`, the canvas should automatically reload: + +```typescript +// In AtomizerCanvas.tsx or useCanvasChat.ts + +const reloadCanvasFromStudy = async (studyId: string) => { + // Fetch fresh config from backend + const response = await fetch(`/api/studies/${studyId}/config`); + const config = await response.json(); + + // Reload canvas + loadFromConfig(config); + + // Notify user + showNotification('Canvas updated with Claude\'s changes'); +}; +``` + +### Phase 5: Smart Prompting for Canvas Context + +When user sends a message from canvas view, include canvas state: + +```typescript +const sendCanvasMessage = (userMessage: string) => { + const canvasContext = generateCanvasMarkdown(); + + 
const enrichedMessage = ` +## Current Canvas State +${canvasContext} + +## User Request +${userMessage} + +When making changes, modify the actual optimization_config.json file. +After changes, the canvas will auto-refresh. +`; + + sendMessage(enrichedMessage); +}; +``` + +--- + +## Expected Behavior After Implementation + +### Example 1: Add Design Variables +``` +User: "Add 10 new design variables for hole diameters, range 5-25mm" + +Claude Code (in dashboard): +1. Reads studies/M1_Mirror/.../optimization_config.json +2. Adds 10 entries to design_variables array: + - hole_diameter_1: [5, 25] mm + - hole_diameter_2: [5, 25] mm + - ... (10 total) +3. WRITES the file +4. Reports: "Added 10 design variables to optimization_config.json" +5. Frontend receives "refresh_canvas" signal +6. Canvas reloads and shows 10 new nodes +7. User sees actual changes +``` + +### Example 2: Modify Optimization +``` +User: "Change the algorithm to CMA-ES with 500 trials and add a stress constraint < 200 MPa" + +Claude Code (in dashboard): +1. Reads config +2. Changes method: "TPE" -> "CMA-ES" +3. Changes max_trials: 100 -> 500 +4. Adds constraint: {name: "stress_limit", operator: "<=", value: 200, unit: "MPa"} +5. WRITES the file +6. Reports changes +7. Canvas refreshes with updated algorithm node and new constraint node +``` + +### Example 3: Complex Multi-File Changes +``` +User: "Add a new Zernike extractor for the secondary mirror and connect it to a new objective" + +Claude Code (in dashboard): +1. Reads config +2. Adds extractor to extractors array +3. Adds objective connected to extractor +4. If needed, modifies run_optimization.py to import new extractor +5. WRITES all modified files +6. 
Canvas refreshes with new extractor and objective nodes, properly connected +``` + +--- + +## Implementation Checklist + +### Phase 1: Backend Claude Code Session +- [x] Create `claude_code_session.py` with session manager +- [x] Implement `send_message()` with CLI spawning +- [x] Build Atomizer-aware system prompt +- [x] Handle study context (working directory) +- [x] Stream output properly + +### Phase 2: WebSocket Route +- [x] Create `/api/claude-code/ws/{study_id}` endpoint +- [x] Handle message routing +- [x] Implement `refresh_canvas` signal +- [x] Session cleanup on disconnect + +### Phase 3: Frontend Hook +- [x] Create `useClaudeCode.ts` hook +- [x] Connect to Claude Code WebSocket +- [x] Handle streaming responses +- [x] Handle canvas refresh signals + +### Phase 4: Canvas Auto-Refresh +- [x] Add `reloadCanvasFromStudy()` function +- [x] Wire refresh signal to canvas store +- [x] Add loading state during refresh +- [x] Show notification on refresh (system message) + +### Phase 5: Chat Panel Integration +- [x] Update ChatPanel to use `useClaudeCode` +- [x] Include canvas context in messages +- [x] Add "Claude Code" indicator in UI (mode toggle) +- [x] Show when Claude is editing files + +### Phase 6: Testing +- [ ] Test adding design variables +- [ ] Test modifying objectives +- [ ] Test complex multi-file changes +- [ ] Test canvas refresh after changes +- [ ] Test error handling + +## Implementation Notes + +### Files Created/Modified + +**Backend:** +- `atomizer-dashboard/backend/api/services/claude_code_session.py` - New session manager +- `atomizer-dashboard/backend/api/routes/claude_code.py` - New WebSocket routes +- `atomizer-dashboard/backend/api/main.py` - Added claude_code router + +**Frontend:** +- `atomizer-dashboard/frontend/src/hooks/useClaudeCode.ts` - New hook for Claude Code CLI +- `atomizer-dashboard/frontend/src/components/canvas/AtomizerCanvas.tsx` - Added mode toggle +- `atomizer-dashboard/frontend/src/components/chat/ChatMessage.tsx` - 
Added system message support + +--- + +## Security Considerations + +Claude Code CLI with `--dangerously-skip-permissions` has full system access. Mitigations: + +1. **Sandboxed environment**: Dashboard runs on user's machine, not public server +2. **Study-scoped working directory**: Claude starts in study folder +3. **Audit logging**: Log all file modifications +4. **User confirmation**: Option to require approval for destructive operations + +--- + +## Cost Considerations + +Using Opus 4.5 via Claude Code CLI is more expensive than Sonnet API. Options: + +1. **Default to Sonnet, upgrade on request**: "Use full power mode" button +2. **Per-session token budget**: Warn user when approaching limit +3. **Cache common operations**: Pre-generate responses for common tasks + +--- + +## Success Criteria + +1. **Parity with terminal**: Dashboard chat can do everything Claude Code CLI can +2. **Real modifications**: Files actually change, not just instructions +3. **Canvas sync**: Canvas reflects file changes immediately +4. **Intelligent defaults**: Claude makes smart choices without asking clarifying questions +5. **Proactive behavior**: Claude anticipates needs and handles edge cases + +--- + +*This document to be implemented by Claude Code CLI* diff --git a/docs/plans/SAAS_ATOMIZER_ROADMAP.md b/docs/plans/SAAS_ATOMIZER_ROADMAP.md new file mode 100644 index 00000000..6e3c4927 --- /dev/null +++ b/docs/plans/SAAS_ATOMIZER_ROADMAP.md @@ -0,0 +1,863 @@ +# SaaS-Level Atomizer Roadmap (Revised) + +## Executive Summary + +This roadmap transforms Atomizer into a **SaaS-grade, LLM-assisted structural optimization platform** with the core innovation being **side-by-side Claude + Canvas** integration where: + +1. **Claude talks → Canvas updates in real-time** (user sees nodes appear/change) +2. **User tweaks Canvas → Claude sees changes** (bi-directional sync) +3. **Full Claude Code-level power** through the dashboard chat +4. 
**Interview-driven study creation** entirely through conversation + +**Vision**: An engineer opens Atomizer, describes their optimization goal, watches the canvas build itself, makes quick tweaks, and starts optimization—all through natural conversation with full visual feedback. + +--- + +## The Core Innovation: Unified Claude + Canvas + +The power of Atomizer is the **side-by-side experience**: + +``` +┌───────────────────────────────────────────────────────────────────┐ +│ ATOMIZER DASHBOARD │ +├────────────────────────────┬──────────────────────────────────────┤ +│ CHAT PANEL │ CANVAS │ +│ (Atomizer Assistant) │ (SpecRenderer) │ +│ │ │ +│ "Create a bracket │ [DV: thickness] │ +│ optimization with │ │ │ +│ mass and stiffness" │ ▼ │ +│ │ │ [Model Node] │ +│ ▼ │ │ │ +│ 🔧 Adding thickness │ ▼ │ +│ 🔧 Adding mass ext ◄───┼──►[Ext: mass]──>[Obj: min mass] │ +│ 🔧 Adding stiffness ◄───┼──►[Ext: disp]──>[Obj: min disp] │ +│ ✓ Study configured! │ │ +│ │ (nodes appear in real-time) │ +│ User clicks a node ───────┼──► Claude sees the edit │ +│ │ │ +└────────────────────────────┴──────────────────────────────────────┘ +``` + +--- + +## Current State vs Target + +### What We Have + +| Component | Status | Notes | +|-----------|--------|-------| +| Power Mode WebSocket | ✅ Implemented | `/ws/power` endpoint with write tools | +| Write Tools | ✅ Implemented | add_design_variable, add_extractor, etc. 
| +| spec_modified Events | ✅ Implemented | Frontend receives notifications | +| Canvas Auto-reload | ✅ Implemented | Triggers on spec_modified | +| Streaming Responses | ❌ Missing | Currently waits for full response | +| Canvas State → Claude | ❌ Missing | Claude doesn't see current canvas | +| Interview Engine | ❌ Missing | No guided creation | +| Unified WebSocket | ❌ Missing | Separate connections for chat/spec | + +### Target Architecture + +``` + ┌─────────────────────┐ + │ Unified WebSocket │ + │ /api/atomizer/ws │ + └─────────┬───────────┘ + │ + Bi-directional Sync + │ + ┌────────────────────┼────────────────────┐ + │ │ │ + ▼ ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ +│ Chat Panel │ │ Canvas │ │ Spec Store │ +│ │ │ │ │ │ +│ Send messages │ │ User edits → │ │ Single source │ +│ Stream text │ │ Notify Claude │ │ of truth │ +│ See tool calls │ │ Receive updates │ │ Validates all │ +└─────────────────┘ └─────────────────┘ └─────────────────┘ +``` + +--- + +## Phase 1: Unified WebSocket Hub (1-2 weeks) + +### Goal: Single connection for chat + canvas + spec sync + +### 1.1 Backend: Unified WebSocket Endpoint + +```python +# atomizer-dashboard/backend/api/routes/atomizer_ws.py + +@router.websocket("/api/atomizer/ws") +async def atomizer_websocket(websocket: WebSocket): + """ + Unified WebSocket for Atomizer Dashboard. + + Handles: + - Chat messages with streaming responses + - Spec modifications with real-time canvas updates + - Canvas edit notifications from user + - Study context switching + """ + await websocket.accept() + + agent = AtomizerClaudeAgent() + conversation: List[Dict] = [] + current_spec: Optional[Dict] = None + + try: + while True: + data = await websocket.receive_json() + + if data["type"] == "message": + # Chat message - stream response + async for event in agent.chat_stream( + message=data["content"], + conversation=conversation, + canvas_state=current_spec # Claude sees current canvas! 
+ ): + await websocket.send_json(event) + + # If spec changed, update our local copy + if event.get("type") == "spec_updated": + current_spec = event["spec"] + + elif data["type"] == "canvas_edit": + # User made a manual edit - update spec and tell Claude + current_spec = apply_patch(current_spec, data["patch"]) + # Next message to Claude will include updated spec + + elif data["type"] == "set_study": + # Switch study context + agent.set_study(data["study_id"]) + current_spec = agent.load_spec() + await websocket.send_json({ + "type": "spec_updated", + "spec": current_spec + }) + + except WebSocketDisconnect: + pass +``` + +### 1.2 Enhanced Claude Agent with Streaming + +```python +class AtomizerClaudeAgent: + """Full-power Claude agent with Claude Code-like capabilities""" + + async def chat_stream( + self, + message: str, + conversation: List[Dict], + canvas_state: Optional[Dict] = None + ) -> AsyncGenerator[Dict, None]: + """Stream responses with tool calls""" + + # Build system prompt with current canvas state + system = self._build_system_prompt() + if canvas_state: + system += self._format_canvas_context(canvas_state) + + messages = conversation + [{"role": "user", "content": message}] + + # Use streaming API + with self.client.messages.stream( + model="claude-sonnet-4-20250514", + max_tokens=8192, + system=system, + messages=messages, + tools=self.tools + ) as stream: + current_text = "" + + for event in stream: + if event.type == "content_block_delta": + if hasattr(event.delta, "text"): + current_text += event.delta.text + yield {"type": "text", "content": event.delta.text, "done": False} + + # Get final message for tool calls + response = stream.get_final_message() + + # Process tool calls + for block in response.content: + if block.type == "tool_use": + yield {"type": "tool_start", "tool": block.name, "input": block.input} + + result = await self._execute_tool(block.name, block.input) + + yield {"type": "tool_result", "tool": block.name, "result": 
result["preview"]} + + if result.get("spec_changed"): + yield {"type": "spec_updated", "spec": self.spec_store.get()} + + yield {"type": "done"} + + def _format_canvas_context(self, spec: Dict) -> str: + """Format current canvas state for Claude's context""" + lines = ["\n## Current Canvas State\n"] + + if spec.get("design_variables"): + lines.append(f"**Design Variables ({len(spec['design_variables'])}):**") + for dv in spec["design_variables"]: + lines.append(f" - {dv['name']}: [{dv['bounds']['min']}, {dv['bounds']['max']}]") + + if spec.get("extractors"): + lines.append(f"\n**Extractors ({len(spec['extractors'])}):**") + for ext in spec["extractors"]: + lines.append(f" - {ext['name']} ({ext['type']})") + + if spec.get("objectives"): + lines.append(f"\n**Objectives ({len(spec['objectives'])}):**") + for obj in spec["objectives"]: + lines.append(f" - {obj['name']} ({obj['direction']})") + + if spec.get("constraints"): + lines.append(f"\n**Constraints ({len(spec['constraints'])}):**") + for con in spec["constraints"]: + lines.append(f" - {con['name']} {con['operator']} {con['threshold']}") + + lines.append("\nThe user can see this canvas. 
When you modify it, they see changes in real-time.") + + return "\n".join(lines) +``` + +### 1.3 Frontend: Unified Hook + +```typescript +// hooks/useAtomizerSocket.ts +export function useAtomizerSocket(studyId?: string) { + const [spec, setSpec] = useState(null); + const [messages, setMessages] = useState([]); + const [isThinking, setIsThinking] = useState(false); + const [streamingText, setStreamingText] = useState(""); + const [activeTool, setActiveTool] = useState(null); + + const ws = useRef(null); + + useEffect(() => { + const url = `ws://localhost:8001/api/atomizer/ws`; + ws.current = new WebSocket(url); + + ws.current.onopen = () => { + if (studyId) { + ws.current?.send(JSON.stringify({ type: "set_study", study_id: studyId })); + } + }; + + ws.current.onmessage = (event) => { + const data = JSON.parse(event.data); + + switch (data.type) { + case "text": + setStreamingText(prev => prev + data.content); + break; + + case "tool_start": + setActiveTool(data.tool); + break; + + case "tool_result": + setActiveTool(null); + break; + + case "spec_updated": + setSpec(data.spec); // Canvas updates automatically! 
+ break; + + case "done": + // Finalize message + setMessages(prev => [...prev, { + role: "assistant", + content: streamingText + }]); + setStreamingText(""); + setIsThinking(false); + break; + } + }; + + return () => ws.current?.close(); + }, [studyId]); + + const sendMessage = useCallback((content: string) => { + setIsThinking(true); + setMessages(prev => [...prev, { role: "user", content }]); + ws.current?.send(JSON.stringify({ type: "message", content })); + }, []); + + const notifyCanvasEdit = useCallback((path: string, value: any) => { + ws.current?.send(JSON.stringify({ + type: "canvas_edit", + patch: { path, value } + })); + }, []); + + return { + spec, + messages, + streamingText, + isThinking, + activeTool, + sendMessage, + notifyCanvasEdit + }; +} +``` + +--- + +## Phase 2: Full Tool Set (1-2 weeks) + +### Goal: Claude Code-level power through dashboard + +### 2.1 Tool Categories + +```python +ATOMIZER_TOOLS = { + # === SPEC MODIFICATION (Already Implemented) === + "add_design_variable": "Add a design variable to the optimization", + "add_extractor": "Add a physics extractor (mass, stress, displacement, custom)", + "add_objective": "Add an optimization objective", + "add_constraint": "Add a constraint", + "update_spec_field": "Update any field in the spec by JSON path", + "remove_node": "Remove a node from the spec", + + # === READ/QUERY === + "read_study_config": "Read the full atomizer_spec.json", + "query_trials": "Query optimization trial data", + "list_studies": "List all available studies", + "get_optimization_status": "Check if optimization is running", + + # === STUDY MANAGEMENT (New) === + "create_study": "Create a new study directory with atomizer_spec.json", + "clone_study": "Clone an existing study as a starting point", + "validate_spec": "Validate the current spec for errors", + + # === NX INTEGRATION (New) === + "introspect_model": "Analyze NX model for expressions and features", + "suggest_design_vars": "AI-suggest design variables from 
model", + "list_model_expressions": "List all expressions in the NX model", + + # === OPTIMIZATION CONTROL (New) === + "start_optimization": "Start the optimization run", + "stop_optimization": "Stop a running optimization", + + # === INTERVIEW (New) === + "start_interview": "Begin guided study creation interview", + "get_interview_progress": "Get current interview state", +} +``` + +### 2.2 Create Study Tool + +```python +async def _tool_create_study(self, params: Dict) -> Dict: + """Create a new study with atomizer_spec.json""" + study_name = params["name"] + study_dir = STUDIES_DIR / study_name + + # Create directory structure + study_dir.mkdir(parents=True, exist_ok=True) + (study_dir / "1_setup").mkdir(exist_ok=True) + (study_dir / "2_iterations").mkdir(exist_ok=True) + (study_dir / "3_results").mkdir(exist_ok=True) + + # Create initial spec + spec = { + "meta": { + "version": "2.0", + "study_name": study_name, + "created_at": datetime.now().isoformat(), + "created_by": "claude_agent" + }, + "model": { + "sim": {"path": params.get("model_path", ""), "solver": "nastran"} + }, + "design_variables": [], + "extractors": [], + "objectives": [], + "constraints": [], + "optimization": { + "algorithm": {"type": "TPE"}, + "budget": {"max_trials": 100} + }, + "canvas": {"edges": [], "layout_version": "2.0"} + } + + # Save spec + spec_path = study_dir / "atomizer_spec.json" + with open(spec_path, "w") as f: + json.dump(spec, f, indent=2) + + # Update agent's study context + self.study_id = study_name + self.study_dir = study_dir + + return { + "preview": f"✓ Created study '{study_name}' at {study_dir}", + "spec_changed": True, + "study_id": study_name + } +``` + +### 2.3 NX Introspection Tool + +```python +async def _tool_introspect_model(self, params: Dict) -> Dict: + """Analyze NX model for design variable candidates""" + model_path = params.get("model_path") or self._find_model_path() + + if not model_path or not Path(model_path).exists(): + return {"preview": "✗ 
Model file not found", "spec_changed": False} + + # Use NX session to get expressions + expressions = await self._get_nx_expressions(model_path) + + # Classify expressions as potential DVs + candidates = [] + for expr in expressions: + score = self._score_dv_candidate(expr) + if score > 0.5: + candidates.append({ + "name": expr["name"], + "value": expr["value"], + "formula": expr.get("formula", ""), + "score": score, + "suggested_bounds": self._suggest_bounds(expr) + }) + + # Sort by score + candidates.sort(key=lambda x: x["score"], reverse=True) + + return { + "preview": f"Found {len(expressions)} expressions, {len(candidates)} are DV candidates", + "expressions": expressions, + "candidates": candidates[:10], # Top 10 + "spec_changed": False + } + +def _score_dv_candidate(self, expr: Dict) -> float: + """Score expression as design variable candidate""" + score = 0.0 + name = expr["name"].lower() + + # Geometric parameters score high + if any(kw in name for kw in ["thickness", "width", "height", "radius", "diameter", "depth"]): + score += 0.4 + + # Numeric with reasonable value + if isinstance(expr["value"], (int, float)) and expr["value"] > 0: + score += 0.2 + + # Not a formula (pure number) + if not expr.get("formula") or expr["formula"] == str(expr["value"]): + score += 0.2 + + # Common design parameter names + if any(kw in name for kw in ["rib", "web", "flange", "support", "angle"]): + score += 0.2 + + return min(score, 1.0) +``` + +--- + +## Phase 3: Interview Engine (1-2 weeks) + +### Goal: Guided study creation through conversation + +### 3.1 Interview Engine + +```python +class InterviewEngine: + """Guided study creation through conversation""" + + PHASES = [ + ("welcome", "What kind of optimization do you want to set up?"), + ("model", "What's the path to your NX simulation file (.sim)?"), + ("objectives", "What do you want to optimize? (e.g., minimize mass, minimize displacement)"), + ("design_vars", "Which parameters should vary? 
I can suggest some from your model."), + ("constraints", "Any constraints to respect? (e.g., max stress ≤ 200 MPa)"), + ("method", "I recommend {method} for this. Sound good?"), + ("review", "Here's your configuration. Ready to create the study?"), + ] + + def __init__(self): + self.phase_index = 0 + self.collected = {} + self.spec_builder = SpecBuilder() + + def get_current_question(self) -> str: + phase, question = self.PHASES[self.phase_index] + + # Dynamic question customization + if phase == "method": + method = self._recommend_method() + question = question.format(method=method) + elif phase == "design_vars" and self.collected.get("model_expressions"): + candidates = self.collected["model_expressions"][:5] + question += f"\n\nI found these candidates: {', '.join(c['name'] for c in candidates)}" + + return question + + def process_answer(self, answer: str) -> Dict: + """Process user's answer and advance interview""" + phase, _ = self.PHASES[self.phase_index] + + # Extract structured data based on phase + extracted = self._extract_for_phase(phase, answer) + self.collected[phase] = extracted + + # Build spec incrementally + spec_changes = self.spec_builder.apply(phase, extracted) + + # Advance + self.phase_index += 1 + complete = self.phase_index >= len(self.PHASES) + + return { + "phase_completed": phase, + "extracted": extracted, + "spec_changes": spec_changes, + "next_question": None if complete else self.get_current_question(), + "complete": complete, + "spec": self.spec_builder.get_spec() if complete else None + } + + def _extract_for_phase(self, phase: str, answer: str) -> Dict: + """Extract structured data from natural language answer""" + if phase == "model": + # Extract file path + return {"path": self._extract_path(answer)} + + elif phase == "objectives": + # Extract objectives + objectives = [] + if "mass" in answer.lower() or "weight" in answer.lower(): + direction = "minimize" if "minimize" in answer.lower() or "reduce" in answer.lower() else 
"minimize" + objectives.append({"name": "mass", "direction": direction}) + if "displacement" in answer.lower() or "stiff" in answer.lower(): + objectives.append({"name": "max_displacement", "direction": "minimize"}) + if "stress" in answer.lower(): + objectives.append({"name": "max_stress", "direction": "minimize"}) + if "wfe" in answer.lower() or "wavefront" in answer.lower(): + objectives.append({"name": "wfe", "direction": "minimize"}) + return {"objectives": objectives} + + elif phase == "constraints": + # Extract constraints + constraints = [] + import re + # Pattern: "stress < 200 MPa" or "max stress <= 200" + stress_match = re.search(r'stress[^0-9]*([<>=]+)\s*(\d+)', answer.lower()) + if stress_match: + constraints.append({ + "name": "max_stress", + "operator": stress_match.group(1), + "threshold": float(stress_match.group(2)) + }) + return {"constraints": constraints} + + return {"raw": answer} + + def _recommend_method(self) -> str: + """Recommend optimization method based on collected info""" + objectives = self.collected.get("objectives", {}).get("objectives", []) + if len(objectives) > 1: + return "NSGA-II (multi-objective)" + return "TPE (Bayesian optimization)" +``` + +### 3.2 Interview Tool Integration + +```python +async def _tool_start_interview(self, params: Dict) -> Dict: + """Start guided study creation""" + self.interview = InterviewEngine() + question = self.interview.get_current_question() + + return { + "preview": f"Starting interview.\n\n{question}", + "interview_started": True, + "spec_changed": False + } + +async def _tool_interview_answer(self, params: Dict) -> Dict: + """Process interview answer""" + if not self.interview: + return {"preview": "No interview in progress", "spec_changed": False} + + result = self.interview.process_answer(params["answer"]) + + response = f"Got it: {result['phase_completed']}\n\n" + + if result["spec_changes"]: + response += "Updated configuration:\n" + for change in result["spec_changes"]: + response += f" 
✓ {change}\n"
+
+    if result["next_question"]:
+        response += f"\n{result['next_question']}"
+    elif result["complete"]:
+        response += "\n✓ Interview complete! Creating study..."
+        # Auto-create the study
+        self.spec_store.set(result["spec"])
+
+    return {
+        "preview": response,
+        "spec_changed": result["complete"],
+        "complete": result["complete"]
+    }
+```
+
+---
+
+## Phase 4: Visual Polish (1 week)
+
+### Goal: Beautiful, responsive canvas updates
+
+### 4.1 Tool Call Visualization
+
+```typescript
+// components/chat/ToolCallIndicator.tsx
+// NOTE: icon components below are illustrative (lucide-react style);
+// substitute the project's actual icon set.
+function ToolCallIndicator({ tool, status }: { tool: string; status: 'running' | 'complete' }) {
+  const icons: Record<string, JSX.Element> = {
+    add_design_variable: <SlidersHorizontal className="h-4 w-4" />,
+    add_extractor: <Gauge className="h-4 w-4" />,
+    add_objective: <Target className="h-4 w-4" />,
+    add_constraint: <ShieldAlert className="h-4 w-4" />,
+    create_study: <FolderPlus className="h-4 w-4" />,
+    introspect_model: <Search className="h-4 w-4" />,
+  };
+
+  return (
+    <div className="flex items-center gap-2 text-sm text-muted-foreground">
+      {status === 'running' ? (
+        <Loader2 className="h-4 w-4 animate-spin" />
+      ) : (
+        <Check className="h-4 w-4 text-green-600" />
+      )}
+      {icons[tool] || <Wrench className="h-4 w-4" />}
+      <span className="font-medium">
+        {formatToolName(tool)}
+      </span>
+    </div>
+  );
+}
+```
+
+### 4.2 Canvas Node Animation
+
+```typescript
+// components/canvas/AnimatedNode.tsx
+// NOTE: wrapper assumed to be framer-motion's motion.div; animation values are illustrative.
+function AnimatedNode({ data, isNew, isHighlighted }) {
+  return (
+    <motion.div
+      initial={isNew ? { opacity: 0, scale: 0.85 } : false}
+      animate={{
+        opacity: 1,
+        scale: 1,
+        boxShadow: isHighlighted ? '0 0 0 2px #22c55e' : 'none'
+      }}
+      transition={{ duration: 0.3, ease: 'easeOut' }}
+    >
+      {/* Node content */}
+    </motion.div>
+  );
+}
+```
+
+### 4.3 Connection Line Animation
+
+```typescript
+// Draw animated line when Claude adds an edge
+// NOTE: path construction and the draw-in CSS class are illustrative sketches.
+function AnimatedEdge({ source, target, isNew }) {
+  return (
+    <path
+      d={buildBezierPath(source, target)}
+      className={isNew ? 'edge-draw-in' : ''}
+      stroke="#64748b"
+      strokeWidth={2}
+      fill="none"
+    />
+  );
+}
+```
+
+---
+
+## User Experience Flows
+
+### Flow 1: Quick Creation (Experienced User)
+
+```
+User: "Create bracket optimization, minimize mass, thickness 2-10mm, max stress 200 MPa"
+
+Claude: [Parses complete intent]
+        🔧 Creating study "bracket_optimization"
+        🔧 Adding design variable: thickness [2mm - 10mm]
+        🔧 Adding extractor: mass
+        🔧 Adding extractor: max_stress
+        🔧 Adding objective: minimize mass
+        🔧 Adding constraint: stress ≤ 200 MPa
+
+        ✓ Created! Canvas shows your setup. Click any node to adjust.
+
+[Canvas animates: DV appears → Model → Extractors → Objectives/Constraints]
+[Total time: ~5 seconds]
+```
+
+### Flow 2: Guided Interview (New User)
+
+```
+User: "Help me set up an optimization"
+
+Claude: What kind of optimization do you want to set up?
+        (e.g., bracket stiffness, mirror WFE, beam stress)
+
+User: "Mirror surface quality optimization"
+
+Claude: Got it - mirror optimization!
+
+        What's the path to your NX simulation file?
+        (You can drag & drop or paste the path)
+
+User: "studies/M1_Mirror/model.sim"
+
+Claude: 🔧 Loading model...
+
+        Found your model with 15 expressions. These look like good design variables:
+        - rib_thickness (8mm)
+        - mirror_depth (50mm)
+        - support_angle (45°)
+
+        What are you trying to optimize?
+
+[Canvas: Model node appears]
+
+User: "Minimize wavefront error and mass"
+
+Claude: 🔧 Adding Zernike WFE extractor
+        🔧 Adding mass extractor
+        🔧 Adding objectives (multi-objective)
+
+        Multi-objective optimization! I recommend NSGA-II.
+
+        Which parameters should vary?
+ +[Canvas: Extractors and objectives appear, connected to model] + +User: "Use rib_thickness and mirror_depth, 50% range on each" + +Claude: 🔧 Adding rib_thickness [4mm - 12mm] + 🔧 Adding mirror_depth [25mm - 75mm] + + Any constraints? + +[Canvas: DVs appear, connected to model] + +User: "Keep stress under 150 MPa" + +Claude: 🔧 Adding stress extractor + 🔧 Adding constraint: stress ≤ 150 MPa + + ✓ Configuration complete! + + Your setup: + - 2 design variables + - 3 extractors (WFE, mass, stress) + - 2 objectives (multi-objective) + - 1 constraint + + Ready to run? Say "start" or adjust the canvas first. + +[Canvas: Full optimization flow visible] +``` + +### Flow 3: User Edits, Claude Adapts + +``` +[User clicks rib_thickness node, changes max from 12mm to 20mm] + +User: "Is this a good range?" + +Claude: I see you expanded rib_thickness to [4mm - 20mm]. + + That's a wide range! Considerations: + - Thicker ribs = more mass, more stiffness + - 20mm might exceed typical manufacturing limits + - Wider range = more trials to explore + + If you're exploring, this is fine. For production, + I'd suggest [4mm - 15mm]. + + Want me to adjust, or keep your change? 
+``` + +--- + +## Implementation Priority + +### Week 1-2: Unified WebSocket +- [ ] Create `/api/atomizer/ws` endpoint +- [ ] Implement streaming in `AtomizerClaudeAgent` +- [ ] Create `useAtomizerSocket` hook +- [ ] Wire canvas to receive spec updates +- [ ] Add canvas edit notifications + +### Week 3-4: Tools & Interview +- [ ] Add `create_study` tool +- [ ] Add `introspect_model` tool +- [ ] Implement `InterviewEngine` +- [ ] Add interview tools +- [ ] Test guided creation flow + +### Week 5: Polish +- [ ] Tool call indicators in chat +- [ ] Node appear/highlight animations +- [ ] Edge draw animations +- [ ] Error recovery & reconnection +- [ ] Performance optimization + +--- + +## Success Metrics + +| Metric | Target | +|--------|--------| +| Study creation time (experienced) | < 30 seconds | +| Study creation time (interview) | < 3 minutes | +| Canvas update latency | < 200ms | +| User edit → Claude context | < 100ms | +| Interview completion rate | > 90% | + +--- + +## Key Files to Modify + +### Backend +- `atomizer-dashboard/backend/api/routes/atomizer_ws.py` (new) +- `atomizer-dashboard/backend/api/services/claude_agent.py` (enhance) +- `atomizer-dashboard/backend/api/services/interview_engine.py` (new) + +### Frontend +- `atomizer-dashboard/frontend/src/hooks/useAtomizerSocket.ts` (new) +- `atomizer-dashboard/frontend/src/pages/CanvasView.tsx` (update) +- `atomizer-dashboard/frontend/src/components/canvas/SpecRenderer.tsx` (update) +- `atomizer-dashboard/frontend/src/components/chat/ToolCallIndicator.tsx` (new) + +--- + +*This architecture makes Atomizer uniquely powerful: natural language + visual feedback + full control, all in one seamless experience.* diff --git a/docs/TODO_NXOPEN_MCP_SETUP.md b/docs/plans/TODO_NXOPEN_MCP_SETUP.md similarity index 100% rename from docs/TODO_NXOPEN_MCP_SETUP.md rename to docs/plans/TODO_NXOPEN_MCP_SETUP.md diff --git a/docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md b/docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md 
new file mode 100644 index 00000000..c8fa30a3 --- /dev/null +++ b/docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md @@ -0,0 +1,1697 @@ +# Atomizer Unified Configuration Architecture +## Master Design Document + +**Version**: 1.0 +**Date**: January 2026 +**Status**: Proposed Architecture +**Author**: Atomizer Architecture Review + +--- + +## Executive Summary + +This document presents a comprehensive analysis of Atomizer's current configuration architecture and proposes a **Unified Configuration System** that establishes a single source of truth for optimization studies. The new architecture enables: + +1. **Single Source of Truth**: One canonical JSON schema (`AtomizerSpec`) used everywhere +2. **Bidirectional Sync**: Canvas ↔ Config ↔ Backend with zero data loss +3. **Intelligent Manipulation**: Claude can dynamically modify configs, add custom functions, and validate changes +4. **Real-time Collaboration**: WebSocket-driven updates between Dashboard, Canvas, and Claude Assistant + +--- + +## Table of Contents + +1. [Current State Analysis](#1-current-state-analysis) +2. [Problem Statement](#2-problem-statement) +3. [Proposed Architecture](#3-proposed-architecture) +4. [AtomizerSpec Schema](#4-atomizerspec-schema) +5. [Component Architecture](#5-component-architecture) +6. [Intelligent Assistant Integration](#6-intelligent-assistant-integration) +7. [API Design](#7-api-design) +8. [Migration Strategy](#8-migration-strategy) +9. [Implementation Roadmap](#9-implementation-roadmap) + +--- + +## 1. 
Current State Analysis + +### 1.1 Architecture Overview + +The current system has **four distinct configuration representations** that don't fully align: + +``` +┌─────────────────────────────────────────────────────────────────────────────┐ +│ CURRENT ARCHITECTURE (FRAGMENTED) │ +└─────────────────────────────────────────────────────────────────────────────┘ + + Canvas UI Backend Optimization + ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ + │ ReactFlow │ │ FastAPI │ │ Python Engine │ + │ Nodes/Edges │ │ Routes │ │ ConfigManager │ + │ │ │ │ │ │ + │ ┌──────────┐ │ │ ┌──────────┐ │ │ ┌──────────┐ │ + │ │ Canvas │──┼──LOSSY──┼─▶│ Intent │──┼──LOSSY──┼─▶│ Config │ │ + │ │ State │ │ │ │ JSON │ │ │ │ .json │ │ + │ └──────────┘ │ │ └──────────┘ │ │ └──────────┘ │ + │ ▲ │ │ │ │ │ │ │ + │ │ LOSSY │ │ ▼ │ │ ▼ │ + │ │ │ │ ┌──────────┐ │ │ ┌──────────┐ │ + │ Load from │◀─LOSSY──┼──│ Claude │ │ │ │ Optuna │ │ + │ config │ │ │ Assistant│ │ │ │ Database │ │ + └────────────────┘ └────────────────┘ └────────────────┘ +``` + +### 1.2 Current Config Formats (4+ Variants) + +| Variant | Used By | Key Differences | +|---------|---------|-----------------| +| **Mirror/Zernike** | m1_mirror studies | `extractor_config`, `zernike_settings`, subcases | +| **Drone/Structural** | drone_gimbal | `extraction.{action, domain, params}`, `bounds[]` | +| **Canvas Intent** | Canvas UI | Simplified, loses metadata | +| **Legacy Schema** | optimization_config_schema.json | `parameter`, `bounds[]`, `goal` | + +### 1.3 Field Naming Chaos + +| Concept | Canvas Intent | Mirror Config | Drone Config | Schema | +|---------|---------------|---------------|--------------|--------| +| DV bounds | `min`, `max` | `min`, `max` | `bounds: [min, max]` | `bounds: [min, max]` | +| DV name | `name` | `expression_name` | `parameter` | `parameter` | +| Objective dir | `direction` | `direction` | `goal` | `goal` | +| Extractor | `extractor: "E5"` | `extractor_config: {}` | `extraction: {}` | `extraction: {}` 
| +| Trials | `max_trials` | `n_trials` | `n_trials` | `n_trials` | + +### 1.4 Data Flow Problems + +``` +PROBLEM 1: Information Loss During Canvas → Config Conversion +═══════════════════════════════════════════════════════════════ + +Canvas Node: Intent: Config: +┌─────────────────────┐ ┌─────────────────┐ ┌─────────────────┐ +│ Extractor E8 │ │ extractors: [{ │ │ objectives: [{ │ +│ - nModes: 40 │───────▶│ id: "E8", │───────▶│ extractor: │ +│ - innerRadius: 58 │ LOST │ name: "..." │ LOST │ "E8" │ +│ - subcases: [1,2,3] │───────▶│ }] │───────▶│ }] │ +│ - filterLowOrders: 6│ │ │ │ │ +└─────────────────────┘ └─────────────────┘ └─────────────────┘ + │ │ + │ MISSING: │ MISSING: + │ - nModes │ - All config + │ - innerRadius │ - zernike_settings + │ - subcases │ - extraction_method + │ - filterLowOrders │ + + +PROBLEM 2: Config → Canvas Loading is Incomplete +═══════════════════════════════════════════════════════════════ + +Real Config (m1_mirror): Canvas Display: +┌───────────────────────────────┐ ┌─────────────────────────┐ +│ design_variables: [ │ │ 11 DesignVar nodes │ ✓ +│ { name, min, max, ... 
} │ │ │ +│ ] │ │ │ +│ │ │ │ +│ objectives: [ │ │ 3 Objective nodes │ ✓ +│ { extractor_config: { │ │ (but extractor_config │ +│ target_subcase: "3", │ │ NOT displayed) │ ✗ +│ metric: "rel_filtered_rms"│ │ │ +│ }} │ │ │ +│ ] │ │ │ +│ │ │ │ +│ hard_constraints: [ │ │ Converted to objectives │ ✗ +│ { name, limit, penalty } │ │ (loses constraint type) │ +│ ] │ │ │ +│ │ │ │ +│ sat_settings: { │ │ NOT LOADED AT ALL │ ✗ +│ n_ensemble_models: 10, │ │ │ +│ hidden_dims: [256, 128] │ │ │ +│ } │ │ │ +└───────────────────────────────┘ └─────────────────────────┘ + + +PROBLEM 3: No Bidirectional Sync +═══════════════════════════════════════════════════════════════ + +User edits config.json manually + │ + ▼ +Canvas doesn't know about change + │ + ▼ +User refreshes canvas → loses their canvas-only edits + │ + ▼ +Conflict between what config says and what canvas shows +``` + +### 1.5 Missing Backend Endpoints + +| Required Endpoint | Status | Impact | +|-------------------|--------|--------| +| `POST /studies/create-from-intent` | Missing | Canvas can't create studies | +| `POST /studies/{id}/validate-config` | Missing | No schema validation | +| `PUT /studies/{id}/config` (atomic) | Partial | No conflict detection | +| `POST /canvas/apply-modification` | Missing | Claude can't modify canvas | +| `WebSocket /studies/{id}/sync` | Missing | No real-time updates | + +--- + +## 2. Problem Statement + +### 2.1 Core Issues + +1. **No Single Source of Truth**: Multiple config formats mean the same study looks different depending on where you view it + +2. **Lossy Conversions**: Converting Canvas → Intent → Config loses information; converting Config → Canvas is incomplete + +3. **No Real-time Sync**: Changes in one place (config file, canvas, Claude) don't propagate to others + +4. **Claude Can't Modify Canvas Directly**: MCP tools return JSON but nothing applies changes to the frontend + +5. 
**No Intelligent Extensibility**: User can't ask Claude to "add a custom extraction function" because there's no plugin system + +### 2.2 User Pain Points + +| Scenario | Current Experience | Desired Experience | +|----------|-------------------|-------------------| +| "Load my existing study into canvas" | Partial load, loses extractor settings | Complete bidirectional load | +| "Claude, change the mass objective weight to 5" | Claude returns JSON, user must manually apply | Canvas updates in real-time | +| "I need a custom Zernike RMS calculation" | Must edit Python code directly | Claude adds function, canvas shows new node | +| "Show me what the config looks like" | Multiple representations exist | One canonical view everywhere | + +--- + +## 3. Proposed Architecture + +### 3.1 Unified Architecture Overview + +``` +┌─────────────────────────────────────────────────────────────────────────────┐ +│ PROPOSED ARCHITECTURE (UNIFIED) │ +└─────────────────────────────────────────────────────────────────────────────┘ + + ┌─────────────────────┐ + │ AtomizerSpec │ + │ (Single Source │ + │ of Truth) │ + │ │ + │ atomizer_spec.json │ + └──────────┬──────────┘ + │ + ┌──────────────────────────┼──────────────────────────┐ + │ │ │ + ▼ ▼ ▼ + ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ + │ Canvas UI │ │ Backend API │ │ Claude Assistant│ + │ │ │ │ │ │ + │ ReactFlow + │◀─────▶│ FastAPI + │◀─────▶│ MCP Tools + │ + │ SpecRenderer │ WS │ SpecManager │ WS │ SpecModifier │ + │ │ │ │ │ │ + └─────────────────┘ └─────────────────┘ └─────────────────┘ + │ │ │ + │ WebSocket Real-time Sync Bus │ + └──────────────────────────┴──────────────────────────┘ + │ + ▼ + ┌─────────────────────┐ + │ Optimization │ + │ Engine │ + │ │ + │ Uses AtomizerSpec │ + │ directly │ + └─────────────────────┘ +``` + +### 3.2 Key Principles + +1. **AtomizerSpec is THE config**: One JSON schema, versioned, validated everywhere +2. 
**No Intermediate Formats**: Canvas renders AtomizerSpec directly; no "Intent" conversion +3. **Changes Flow Through Backend**: All modifications go through SpecManager +4. **WebSocket Sync**: All clients receive real-time updates +5. **Extensible Functions**: Custom extractors/functions defined IN the spec + +### 3.3 Data Flow (New) + +``` + AtomizerSpec (atomizer_spec.json) + │ + │ Read by all + ┌─────────────────────────┼─────────────────────────┐ + │ │ │ + ▼ ▼ ▼ + Canvas Renderer Backend Loader Claude Reader + │ │ │ + │ User edits node │ │ "Change weight" + │ │ │ + ▼ ▼ ▼ + PATCH /spec/nodes/{id} (validates) MCP: spec_modify + │ │ │ + └─────────────────────────┼─────────────────────────┘ + │ + ▼ + SpecManager + ┌─────────────────────────┐ + │ 1. Validate change │ + │ 2. Apply to spec │ + │ 3. Persist to disk │ + │ 4. Broadcast via WS │ + └─────────────────────────┘ + │ + │ WebSocket: spec_updated + ┌─────────────────────────┼─────────────────────────┐ + │ │ │ + ▼ ▼ ▼ + Canvas re-renders Other clients Claude sees new state +``` + +--- + +## 4. 
AtomizerSpec Schema + +### 4.1 Schema Design Philosophy + +- **Self-describing**: Schema version and capabilities in the file +- **Complete**: ALL information needed for optimization, canvas, and reporting +- **Extensible**: Custom functions, extractors, and plugins declared in-spec +- **Validated**: JSON Schema validation at every boundary + +### 4.2 AtomizerSpec v2.0 Schema + +```json +{ + "$schema": "https://atomizer.io/schemas/atomizer_spec_v2.json", + "$id": "atomizer_spec", + + "meta": { + "version": "2.0", + "created": "2026-01-17T12:00:00Z", + "modified": "2026-01-17T14:30:00Z", + "created_by": "canvas", + "modified_by": "claude", + "study_name": "m1_mirror_optimization_v15", + "description": "Multi-objective mirror optimization with Zernike WFE", + "tags": ["mirror", "zernike", "multi-objective"] + }, + + "model": { + "nx_part": { + "path": "C:/Studies/m1_mirror/model/M1_Mirror.prt", + "hash": "sha256:abc123...", + "idealized_part": "M1_Mirror_fem1_i.prt" + }, + "fem": { + "path": "model/M1_Mirror_fem1.fem", + "element_count": 45000, + "node_count": 12000 + }, + "sim": { + "path": "model/M1_Mirror_sim1.sim", + "solver": "nastran", + "solution_type": "SOL101", + "subcases": [ + { "id": 1, "name": "Gravity 0deg", "type": "static" }, + { "id": 2, "name": "Gravity 20deg", "type": "static" }, + { "id": 3, "name": "Gravity 40deg", "type": "static" } + ] + } + }, + + "design_variables": [ + { + "id": "dv_001", + "name": "Lateral Rib Count", + "expression_name": "n_lateral_ribs", + "type": "integer", + "bounds": { "min": 6, "max": 12 }, + "baseline": 9, + "units": "count", + "enabled": true, + "description": "Number of lateral support ribs", + "canvas_position": { "x": 50, "y": 100 } + }, + { + "id": "dv_002", + "name": "Facesheet Thickness", + "expression_name": "facesheet_t", + "type": "continuous", + "bounds": { "min": 2.0, "max": 8.0 }, + "baseline": 4.5, + "units": "mm", + "enabled": true, + "step": 0.1, + "canvas_position": { "x": 50, "y": 200 } + } + ], + + 
"extractors": [ + { + "id": "ext_001", + "name": "Zernike WFE Extractor", + "type": "zernike_opd", + "builtin": true, + "config": { + "inner_radius_mm": 58.0, + "outer_radius_mm": 330.0, + "n_modes": 40, + "filter_low_orders": 6, + "displacement_unit": "mm", + "reference_subcase": 1 + }, + "outputs": [ + { "name": "wfe_rms_20deg", "subcase": 2, "metric": "filtered_rms_nm" }, + { "name": "wfe_rms_40deg", "subcase": 3, "metric": "filtered_rms_nm" }, + { "name": "wfe_p2v_40deg", "subcase": 3, "metric": "pv_nm" } + ], + "canvas_position": { "x": 740, "y": 100 } + }, + { + "id": "ext_002", + "name": "Mass Extractor", + "type": "mass", + "builtin": true, + "config": { + "source": "expression", + "expression_name": "total_mass_kg" + }, + "outputs": [ + { "name": "mass_kg", "metric": "total" } + ], + "canvas_position": { "x": 740, "y": 300 } + }, + { + "id": "ext_003", + "name": "Custom Manufacturability Score", + "type": "custom_function", + "builtin": false, + "function": { + "name": "calc_manufacturability", + "module": "custom_extractors.manufacturability", + "signature": "(design_vars: dict, fem_results: dict) -> float", + "source_code": "def calc_manufacturability(design_vars, fem_results):\n # Custom logic here\n rib_count = design_vars['n_lateral_ribs']\n thickness = design_vars['facesheet_t']\n score = 100 - (rib_count * 5) - (10 - thickness) * 3\n return max(0, min(100, score))" + }, + "outputs": [ + { "name": "mfg_score", "metric": "score" } + ], + "canvas_position": { "x": 740, "y": 500 } + } + ], + + "objectives": [ + { + "id": "obj_001", + "name": "Minimize WFE at 40deg", + "direction": "minimize", + "weight": 5.0, + "source": { + "extractor_id": "ext_001", + "output_name": "wfe_rms_40deg" + }, + "target": 5.0, + "units": "nm", + "canvas_position": { "x": 1020, "y": 100 } + }, + { + "id": "obj_002", + "name": "Minimize Mass", + "direction": "minimize", + "weight": 1.0, + "source": { + "extractor_id": "ext_002", + "output_name": "mass_kg" + }, + "target": 
100.0, + "units": "kg", + "canvas_position": { "x": 1020, "y": 200 } + } + ], + + "constraints": [ + { + "id": "con_001", + "name": "Max Mass Limit", + "type": "hard", + "operator": "<=", + "threshold": 120.0, + "source": { + "extractor_id": "ext_002", + "output_name": "mass_kg" + }, + "penalty_config": { + "method": "quadratic", + "weight": 1000.0, + "margin": 5.0 + }, + "canvas_position": { "x": 1020, "y": 400 } + }, + { + "id": "con_002", + "name": "Minimum Manufacturability", + "type": "soft", + "operator": ">=", + "threshold": 60.0, + "source": { + "extractor_id": "ext_003", + "output_name": "mfg_score" + }, + "penalty_config": { + "method": "linear", + "weight": 10.0 + }, + "canvas_position": { "x": 1020, "y": 500 } + } + ], + + "optimization": { + "algorithm": { + "type": "NSGA-II", + "config": { + "population_size": 50, + "n_generations": 100, + "mutation_prob": null, + "crossover_prob": 0.9, + "seed": 42 + } + }, + "budget": { + "max_trials": 500, + "max_time_hours": 48, + "convergence_patience": 50 + }, + "surrogate": { + "enabled": true, + "type": "ensemble", + "config": { + "n_models": 10, + "architecture": [256, 128, 64], + "train_every_n_trials": 20, + "min_training_samples": 30, + "acquisition_candidates": 10000, + "fea_validations_per_round": 5 + } + }, + "canvas_position": { "x": 1300, "y": 150 } + }, + + "workflow": { + "stages": [ + { + "id": "stage_exploration", + "name": "Design Space Exploration", + "algorithm": "RandomSearch", + "trials": 30, + "purpose": "Build initial training data" + }, + { + "id": "stage_optimization", + "name": "Surrogate-Assisted Optimization", + "algorithm": "SAT_v3", + "trials": 470, + "purpose": "Efficient optimization with neural acceleration" + } + ], + "transitions": [ + { + "from": "stage_exploration", + "to": "stage_optimization", + "condition": "trial_count >= 30" + } + ] + }, + + "reporting": { + "auto_report": true, + "report_triggers": ["new_pareto", "every_50_trials", "convergence"], + "insights": [ + { + 
"type": "zernike_visualization", + "for_trials": "pareto_front", + "config": { "include_html": true } + }, + { + "type": "convergence_plot", + "config": { "show_pareto_evolution": true } + } + ] + }, + + "canvas": { + "layout_version": "2.0", + "viewport": { "x": 0, "y": 0, "zoom": 1.0 }, + "edges": [ + { "source": "dv_001", "target": "model" }, + { "source": "dv_002", "target": "model" }, + { "source": "model", "target": "solver" }, + { "source": "solver", "target": "ext_001" }, + { "source": "solver", "target": "ext_002" }, + { "source": "ext_001", "target": "obj_001" }, + { "source": "ext_002", "target": "obj_002" }, + { "source": "ext_002", "target": "con_001" }, + { "source": "obj_001", "target": "optimization" }, + { "source": "obj_002", "target": "optimization" }, + { "source": "con_001", "target": "optimization" } + ], + "groups": [ + { "id": "grp_inputs", "name": "Design Inputs", "node_ids": ["dv_001", "dv_002"] }, + { "id": "grp_physics", "name": "Physics Extraction", "node_ids": ["ext_001", "ext_002"] } + ] + } +} +``` + +### 4.3 Schema Versioning + +| Version | Status | Key Changes | +|---------|--------|-------------| +| 1.0 | Legacy | Original `optimization_config.json` | +| 2.0 | **Proposed** | Unified spec with canvas, custom functions, workflow | +| 2.1 | Future | Multi-fidelity, parallel execution | + +### 4.4 JSON Schema Definition + +```json +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "$id": "https://atomizer.io/schemas/atomizer_spec_v2.json", + "title": "AtomizerSpec v2.0", + "type": "object", + "required": ["meta", "model", "design_variables", "extractors", "objectives", "optimization"], + + "properties": { + "meta": { + "type": "object", + "required": ["version", "study_name"], + "properties": { + "version": { "type": "string", "pattern": "^2\\.\\d+$" }, + "study_name": { "type": "string", "pattern": "^[a-z0-9_]+$" }, + "created": { "type": "string", "format": "date-time" }, + "modified": { "type": "string", "format": 
"date-time" }, + "created_by": { "enum": ["canvas", "claude", "api", "migration"] }, + "modified_by": { "type": "string" }, + "description": { "type": "string" }, + "tags": { "type": "array", "items": { "type": "string" } } + } + }, + + "design_variables": { + "type": "array", + "minItems": 1, + "items": { + "$ref": "#/definitions/design_variable" + } + }, + + "extractors": { + "type": "array", + "minItems": 1, + "items": { + "$ref": "#/definitions/extractor" + } + }, + + "objectives": { + "type": "array", + "minItems": 1, + "items": { + "$ref": "#/definitions/objective" + } + } + }, + + "definitions": { + "design_variable": { + "type": "object", + "required": ["id", "name", "expression_name", "type", "bounds"], + "properties": { + "id": { "type": "string", "pattern": "^dv_\\d{3}$" }, + "name": { "type": "string" }, + "expression_name": { "type": "string" }, + "type": { "enum": ["continuous", "integer", "categorical"] }, + "bounds": { + "type": "object", + "required": ["min", "max"], + "properties": { + "min": { "type": "number" }, + "max": { "type": "number" } + } + }, + "baseline": { "type": "number" }, + "units": { "type": "string" }, + "enabled": { "type": "boolean", "default": true }, + "canvas_position": { "$ref": "#/definitions/position" } + } + }, + + "extractor": { + "type": "object", + "required": ["id", "name", "type", "outputs"], + "properties": { + "id": { "type": "string", "pattern": "^ext_\\d{3}$" }, + "name": { "type": "string" }, + "type": { "type": "string" }, + "builtin": { "type": "boolean" }, + "config": { "type": "object" }, + "function": { + "type": "object", + "properties": { + "name": { "type": "string" }, + "module": { "type": "string" }, + "source_code": { "type": "string" } + } + }, + "outputs": { + "type": "array", + "items": { + "type": "object", + "required": ["name"], + "properties": { + "name": { "type": "string" }, + "metric": { "type": "string" }, + "subcase": { "type": "integer" } + } + } + } + } + }, + + "objective": { + "type": 
"object", + "required": ["id", "name", "direction", "source"], + "properties": { + "id": { "type": "string", "pattern": "^obj_\\d{3}$" }, + "name": { "type": "string" }, + "direction": { "enum": ["minimize", "maximize"] }, + "weight": { "type": "number", "minimum": 0 }, + "source": { + "type": "object", + "required": ["extractor_id", "output_name"], + "properties": { + "extractor_id": { "type": "string" }, + "output_name": { "type": "string" } + } + }, + "target": { "type": "number" }, + "units": { "type": "string" } + } + }, + + "position": { + "type": "object", + "properties": { + "x": { "type": "number" }, + "y": { "type": "number" } + } + } + } +} +``` + +--- + +## 5. Component Architecture + +### 5.1 Backend: SpecManager Service + +```python +# atomizer-dashboard/backend/api/services/spec_manager.py + +from pathlib import Path +from datetime import datetime +import json +import hashlib +from typing import Optional, Dict, Any, List +from pydantic import BaseModel +import jsonschema + +class SpecManager: + """ + Central service for managing AtomizerSpec. + All modifications go through this service. 
+ """ + + def __init__(self, study_path: Path): + self.study_path = study_path + self.spec_path = study_path / "atomizer_spec.json" + self.schema = self._load_schema() + self._subscribers: List[WebSocketConnection] = [] + + def load(self) -> Dict[str, Any]: + """Load and validate the spec.""" + with open(self.spec_path) as f: + spec = json.load(f) + self._validate(spec) + return spec + + def save(self, spec: Dict[str, Any], modified_by: str = "api"): + """Save spec with validation and broadcast.""" + # Update metadata + spec["meta"]["modified"] = datetime.utcnow().isoformat() + "Z" + spec["meta"]["modified_by"] = modified_by + + # Validate + self._validate(spec) + + # Compute hash for conflict detection + spec_hash = self._compute_hash(spec) + + # Atomic write + temp_path = self.spec_path.with_suffix(".tmp") + with open(temp_path, "w") as f: + json.dump(spec, f, indent=2) + temp_path.replace(self.spec_path) + + # Broadcast to all subscribers + self._broadcast({ + "type": "spec_updated", + "hash": spec_hash, + "modified_by": modified_by, + "timestamp": spec["meta"]["modified"] + }) + + return spec_hash + + def patch(self, path: str, value: Any, modified_by: str = "api") -> Dict[str, Any]: + """ + Apply a JSON Patch-style modification. 
+ path: JSONPath like "design_variables[0].bounds.max" + """ + spec = self.load() + self._apply_patch(spec, path, value) + self.save(spec, modified_by) + return spec + + def add_node(self, node_type: str, data: Dict[str, Any], modified_by: str = "canvas"): + """Add a new node (design var, extractor, objective, etc.)""" + spec = self.load() + + # Generate ID + node_id = self._generate_id(node_type, spec) + data["id"] = node_id + + # Add canvas position if not provided + if "canvas_position" not in data: + data["canvas_position"] = self._auto_position(node_type, spec) + + # Add to appropriate section + section = self._get_section_for_type(node_type) + spec[section].append(data) + + self.save(spec, modified_by) + return node_id + + def remove_node(self, node_id: str, modified_by: str = "canvas"): + """Remove a node and all edges referencing it.""" + spec = self.load() + + # Find and remove node + for section in ["design_variables", "extractors", "objectives", "constraints"]: + spec[section] = [n for n in spec.get(section, []) if n.get("id") != node_id] + + # Remove edges referencing this node + if "canvas" in spec and "edges" in spec["canvas"]: + spec["canvas"]["edges"] = [ + e for e in spec["canvas"]["edges"] + if e["source"] != node_id and e["target"] != node_id + ] + + self.save(spec, modified_by) + + def add_custom_function(self, name: str, code: str, outputs: List[str], modified_by: str = "claude"): + """ + Add a custom extractor function. + Claude can call this to add new physics extraction logic. 
+ """ + spec = self.load() + + # Validate Python syntax + compile(code, f"", "exec") + + extractor = { + "id": self._generate_id("ext", spec), + "name": name, + "type": "custom_function", + "builtin": False, + "function": { + "name": name, + "module": "custom_extractors.dynamic", + "source_code": code + }, + "outputs": [{"name": o, "metric": "custom"} for o in outputs] + } + + spec["extractors"].append(extractor) + self.save(spec, modified_by) + + return extractor["id"] + + def validate_and_report(self) -> Dict[str, Any]: + """Run full validation and return detailed report.""" + spec = self.load() + + report = { + "valid": True, + "errors": [], + "warnings": [], + "summary": {} + } + + # Schema validation + try: + self._validate(spec) + except jsonschema.ValidationError as e: + report["valid"] = False + report["errors"].append({ + "type": "schema", + "path": list(e.absolute_path), + "message": e.message + }) + + # Semantic validation + self._validate_semantic(spec, report) + + # Summary + report["summary"] = { + "design_variables": len(spec.get("design_variables", [])), + "extractors": len(spec.get("extractors", [])), + "objectives": len(spec.get("objectives", [])), + "constraints": len(spec.get("constraints", [])), + "custom_functions": sum(1 for e in spec.get("extractors", []) if not e.get("builtin", True)) + } + + return report + + def subscribe(self, ws: WebSocketConnection): + """Subscribe to spec changes.""" + self._subscribers.append(ws) + + def unsubscribe(self, ws: WebSocketConnection): + """Unsubscribe from spec changes.""" + self._subscribers.remove(ws) + + def _broadcast(self, message: Dict[str, Any]): + """Broadcast to all subscribers.""" + for ws in self._subscribers: + ws.send_json(message) +``` + +### 5.2 Frontend: SpecRenderer Component + +```typescript +// atomizer-dashboard/frontend/src/components/canvas/SpecRenderer.tsx + +import React, { useCallback, useEffect } from 'react'; +import ReactFlow, { Node, Edge, useNodesState, useEdgesState } from 
'reactflow'; +import { useSpecStore } from '../../hooks/useSpecStore'; +import { useWebSocket } from '../../hooks/useWebSocket'; + +interface AtomizerSpec { + meta: SpecMeta; + design_variables: DesignVariable[]; + extractors: Extractor[]; + objectives: Objective[]; + constraints: Constraint[]; + optimization: OptimizationConfig; + canvas: CanvasConfig; +} + +export function SpecRenderer({ studyId }: { studyId: string }) { + const { spec, setSpec, updateNode, addNode, removeNode } = useSpecStore(); + const { subscribe, send } = useWebSocket(`/api/studies/${studyId}/sync`); + + // Convert spec to ReactFlow nodes + const [nodes, setNodes, onNodesChange] = useNodesState( + specToNodes(spec) + ); + + // Convert spec edges to ReactFlow edges + const [edges, setEdges, onEdgesChange] = useEdgesState( + spec?.canvas?.edges?.map(edgeToReactFlow) || [] + ); + + // Subscribe to real-time updates + useEffect(() => { + const unsubscribe = subscribe('spec_updated', (data) => { + // Reload spec from server + fetchSpec(studyId).then(setSpec); + }); + + return unsubscribe; + }, [studyId]); + + // When spec changes, update nodes + useEffect(() => { + if (spec) { + setNodes(specToNodes(spec)); + setEdges(spec.canvas?.edges?.map(edgeToReactFlow) || []); + } + }, [spec]); + + // Handle node property changes + const onNodeDataChange = useCallback((nodeId: string, data: Partial) => { + // Send PATCH to backend + send({ + type: 'patch_node', + node_id: nodeId, + data + }); + }, [send]); + + // Handle node position changes (for canvas layout) + const onNodeDragStop = useCallback((event: React.MouseEvent, node: Node) => { + send({ + type: 'update_position', + node_id: node.id, + position: node.position + }); + }, [send]); + + return ( + + {/* Toolbars, panels, etc. 
*/} + + ); +} + +// Convert AtomizerSpec to ReactFlow nodes +function specToNodes(spec: AtomizerSpec | null): Node[] { + if (!spec) return []; + + const nodes: Node[] = []; + + // Model node (synthetic) + nodes.push({ + id: 'model', + type: 'model', + position: { x: 280, y: 50 }, + data: { + label: spec.meta.study_name, + filePath: spec.model.sim.path, + configured: true + } + }); + + // Solver node (synthetic) + nodes.push({ + id: 'solver', + type: 'solver', + position: { x: 510, y: 50 }, + data: { + label: spec.model.sim.solution_type, + solverType: spec.model.sim.solution_type, + configured: true + } + }); + + // Design variables + for (const dv of spec.design_variables) { + nodes.push({ + id: dv.id, + type: 'designVar', + position: dv.canvas_position || { x: 50, y: nodes.length * 100 }, + data: { + label: dv.name, + expressionName: dv.expression_name, + minValue: dv.bounds.min, + maxValue: dv.bounds.max, + baseline: dv.baseline, + unit: dv.units, + enabled: dv.enabled, + configured: true + } + }); + } + + // Extractors + for (const ext of spec.extractors) { + nodes.push({ + id: ext.id, + type: 'extractor', + position: ext.canvas_position || { x: 740, y: nodes.length * 100 }, + data: { + label: ext.name, + extractorId: ext.id, + extractorType: ext.type, + builtin: ext.builtin, + config: ext.config, + outputs: ext.outputs, + hasCustomCode: !ext.builtin, + configured: true + } + }); + } + + // Objectives + for (const obj of spec.objectives) { + nodes.push({ + id: obj.id, + type: 'objective', + position: obj.canvas_position || { x: 1020, y: nodes.length * 100 }, + data: { + label: obj.name, + name: obj.name, + direction: obj.direction, + weight: obj.weight, + target: obj.target, + units: obj.units, + source: obj.source, + configured: true + } + }); + } + + // Constraints + for (const con of spec.constraints || []) { + nodes.push({ + id: con.id, + type: 'constraint', + position: con.canvas_position || { x: 1020, y: nodes.length * 100 }, + data: { + label: con.name, 
+ name: con.name, + operator: con.operator, + threshold: con.threshold, + constraintType: con.type, + source: con.source, + configured: true + } + }); + } + + // Optimization node + nodes.push({ + id: 'optimization', + type: 'algorithm', + position: spec.optimization.canvas_position || { x: 1300, y: 150 }, + data: { + label: spec.optimization.algorithm.type, + method: spec.optimization.algorithm.type, + maxTrials: spec.optimization.budget.max_trials, + config: spec.optimization.algorithm.config, + surrogate: spec.optimization.surrogate, + configured: true + } + }); + + return nodes; +} +``` + +### 5.3 MCP Tools for Claude + +```typescript +// mcp-server/atomizer-tools/src/tools/spec_tools.ts + +export const specTools: AtomizerTool[] = [ + { + definition: { + name: "spec_get", + description: "Get the current AtomizerSpec for a study. Returns the complete configuration.", + inputSchema: { + type: "object", + properties: { + study_name: { type: "string", description: "Study name" } + }, + required: ["study_name"] + } + }, + handler: async (args) => { + const spec = await specManager.load(args.study_name); + return { content: [{ type: "text", text: JSON.stringify(spec, null, 2) }] }; + } + }, + + { + definition: { + name: "spec_modify", + description: "Modify part of an AtomizerSpec. 
Changes are validated and broadcast to all clients.",
+      inputSchema: {
+        type: "object",
+        properties: {
+          study_name: { type: "string" },
+          modifications: {
+            type: "array",
+            items: {
+              type: "object",
+              properties: {
+                operation: { enum: ["set", "add", "remove"] },
+                path: { type: "string", description: "JSONPath to the field" },
+                value: { description: "New value (for set/add)" }
+              }
+            }
+          }
+        },
+        required: ["study_name", "modifications"]
+      }
+    },
+    handler: async (args) => {
+      const results: string[] = [];
+      for (const mod of args.modifications) {
+        switch (mod.operation) {
+          case "set":
+            await specManager.patch(mod.path, mod.value, "claude");
+            results.push(`Set ${mod.path}`);
+            break;
+          case "add": {
+            const id = await specManager.addNode(mod.path, mod.value, "claude");
+            results.push(`Added ${id} at ${mod.path}`);
+            break;
+          }
+          case "remove":
+            await specManager.removeNode(mod.path, "claude");
+            results.push(`Removed ${mod.path}`);
+            break;
+        }
+      }
+      return {
+        content: [{
+          type: "text",
+          text: JSON.stringify({ success: true, applied: results }, null, 2)
+        }]
+      };
+    }
+  },
+
+  {
+    definition: {
+      name: "spec_add_custom_extractor",
+      description: "Add a custom Python function as an extractor.
The function will be available in the optimization workflow.", + inputSchema: { + type: "object", + properties: { + study_name: { type: "string" }, + function_name: { type: "string", description: "Name of the function" }, + description: { type: "string" }, + code: { type: "string", description: "Python source code" }, + outputs: { + type: "array", + items: { type: "string" }, + description: "List of output names this function produces" + }, + dependencies: { + type: "array", + items: { type: "string" }, + description: "Python packages required (must be installed)" + } + }, + required: ["study_name", "function_name", "code", "outputs"] + } + }, + handler: async (args) => { + // Validate code syntax + const validation = await validatePythonCode(args.code); + if (!validation.valid) { + return { + content: [{ type: "text", text: JSON.stringify({ error: validation.error }) }], + isError: true + }; + } + + const extractorId = await specManager.addCustomFunction( + args.function_name, + args.code, + args.outputs, + "claude" + ); + + return { + content: [{ + type: "text", + text: JSON.stringify({ + success: true, + extractor_id: extractorId, + message: `Custom extractor "${args.function_name}" added. Canvas will update automatically.`, + outputs: args.outputs + }, null, 2) + }] + }; + } + }, + + { + definition: { + name: "spec_validate", + description: "Validate an AtomizerSpec and return detailed report.", + inputSchema: { + type: "object", + properties: { + study_name: { type: "string" } + }, + required: ["study_name"] + } + }, + handler: async (args) => { + const report = await specManager.validateAndReport(args.study_name); + return { content: [{ type: "text", text: JSON.stringify(report, null, 2) }] }; + } + }, + + { + definition: { + name: "spec_create_from_description", + description: "Create a new AtomizerSpec from a natural language description. 
Claude interprets the description and generates a complete spec.", + inputSchema: { + type: "object", + properties: { + study_name: { type: "string" }, + description: { type: "string", description: "Natural language description of the optimization" }, + model_path: { type: "string", description: "Path to the NX model" } + }, + required: ["study_name", "description", "model_path"] + } + }, + handler: async (args) => { + // This would invoke the StudyCreator with Claude interpretation + // ... + } + } +]; +``` + +--- + +## 6. Intelligent Assistant Integration + +### 6.1 Claude Capabilities + +With the unified spec, Claude can: + +| Capability | Example User Request | Claude Action | +|------------|---------------------|---------------| +| **Modify Parameters** | "Set mass weight to 3" | `spec_modify` on `objectives[1].weight` | +| **Add Custom Logic** | "Add a manufacturing cost function based on rib count" | `spec_add_custom_extractor` with Python code | +| **Restructure Workflow** | "Change from single-objective to multi-objective" | Modify `optimization.algorithm.type` and restructure objectives | +| **Validate Setup** | "Check if my config is valid" | `spec_validate` and explain issues | +| **Explain Canvas** | "What does this optimization do?" | Read spec, generate natural language summary | + +### 6.2 Example Interaction + +``` +User: "Claude, I want to add a manufacturability score that penalizes designs with more than 10 ribs" + +Claude: I'll add a custom extractor for manufacturability scoring. Let me create the function: + +[spec_add_custom_extractor] +function_name: "manufacturability_score" +code: """ +def manufacturability_score(design_vars, fem_results): + ''' + Calculate manufacturability score based on rib count. + Higher score = easier to manufacture. 
+ ''' + rib_count = design_vars.get('n_lateral_ribs', 0) + + if rib_count <= 8: + score = 100 # Easy + elif rib_count <= 10: + score = 80 - (rib_count - 8) * 10 # Moderate + else: + score = 60 - (rib_count - 10) * 15 # Difficult, penalized + + return max(0, score) +""" +outputs: ["mfg_score"] + +Done! I've added a manufacturability extractor that: +- Returns 100 for 8 or fewer ribs (easy to manufacture) +- Returns 60-80 for 9-10 ribs (moderate) +- Returns <60 for 11+ ribs (penalized) + +The canvas has been updated with a new "Manufacturability Score" node. + +Would you like me to: +1. Add this as a soft constraint (mfg_score >= 60)? +2. Add it as an objective to maximize? +``` + +### 6.3 Real-time Canvas Updates + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ Real-time Update Flow │ +└─────────────────────────────────────────────────────────────────┘ + +User types: "Change mass weight to 5" + │ + ▼ +┌─────────────────┐ +│ Claude receives │ +│ message via WS │ +└────────┬────────┘ + │ + ▼ +┌─────────────────┐ +│ Claude calls │ +│ spec_modify │ +│ path: objectives│ +│ [1].weight = 5 │ +└────────┬────────┘ + │ + ▼ +┌─────────────────┐ +│ SpecManager │ +│ - Validates │ +│ - Saves spec │ +│ - Broadcasts │ +└────────┬────────┘ + │ + ├────────────────────────────────────┐ + │ │ + ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ +│ Canvas receives │ │ Claude confirms │ +│ spec_updated │ │ "Weight updated │ +│ WebSocket msg │ │ to 5" │ +└────────┬────────┘ └─────────────────┘ + │ + ▼ +┌─────────────────┐ +│ Canvas re- │ +│ renders node │ +│ with new weight │ +│ "Mass (w=5)" │ +└─────────────────┘ +``` + +--- + +## 7. 
API Design + +### 7.1 REST Endpoints + +```yaml +# OpenAPI 3.0 specification (partial) + +paths: + /api/studies/{study_id}/spec: + get: + summary: Get AtomizerSpec + responses: + 200: + content: + application/json: + schema: + $ref: '#/components/schemas/AtomizerSpec' + + put: + summary: Replace entire AtomizerSpec + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/AtomizerSpec' + + patch: + summary: Partial update to spec + requestBody: + content: + application/json: + schema: + type: object + properties: + path: + type: string + description: JSONPath to field + value: + description: New value + modified_by: + type: string + + /api/studies/{study_id}/spec/nodes: + post: + summary: Add a new node + requestBody: + content: + application/json: + schema: + type: object + properties: + type: + enum: [designVar, extractor, objective, constraint] + data: + type: object + + /api/studies/{study_id}/spec/nodes/{node_id}: + patch: + summary: Update node properties + delete: + summary: Remove node + + /api/studies/{study_id}/spec/validate: + post: + summary: Validate spec and return report + + /api/studies/{study_id}/spec/custom-functions: + post: + summary: Add custom extractor function + requestBody: + content: + application/json: + schema: + type: object + required: [name, code, outputs] + properties: + name: + type: string + code: + type: string + outputs: + type: array + items: + type: string +``` + +### 7.2 WebSocket Protocol + +```typescript +// WebSocket Messages + +// Client -> Server +interface ClientMessage { + type: 'subscribe' | 'patch_node' | 'add_node' | 'remove_node' | 'update_position'; + study_id: string; + // ... type-specific fields +} + +// Server -> Client +interface ServerMessage { + type: 'spec_updated' | 'validation_error' | 'node_added' | 'node_removed' | 'connection_ack'; + timestamp: string; + // ... 
type-specific fields
+}
+
+// Example: spec_updated
+{
+  "type": "spec_updated",
+  "timestamp": "2026-01-17T14:30:00Z",
+  "modified_by": "claude",
+  "hash": "sha256:abc123...",
+  "changes": [
+    { "path": "objectives[1].weight", "old": 1, "new": 5 }
+  ]
+}
+```
+
+---
+
+## 8. Migration Strategy
+
+### 8.1 Backward Compatibility
+
+```python
+# optimization_engine/config/migrator.py
+
+from typing import Dict
+
+class SpecMigrator:
+    """
+    Migrate old optimization_config.json to AtomizerSpec v2.
+    """
+
+    MIGRATION_MAP = {
+        # Old field -> New field
+        "design_variables.*.parameter": "design_variables.*.expression_name",
+        "design_variables.*.bounds": lambda dv: {"min": dv["bounds"][0], "max": dv["bounds"][1]},
+        "objectives.*.goal": "objectives.*.direction",
+        "optimization_settings.n_trials": "optimization.budget.max_trials",
+        "optimization_settings.sampler": lambda s: {"type": s.replace("Sampler", "")},
+    }
+
+    def migrate(self, old_config: Dict) -> Dict:
+        """Convert old config to AtomizerSpec v2."""
+        spec = {
+            "meta": {
+                "version": "2.0",
+                "created_by": "migration",
+                "study_name": old_config.get("study_name", "migrated_study")
+            },
+            "design_variables": [],
+            "extractors": [],
+            "objectives": [],
+            "constraints": [],
+            "optimization": {},
+            "canvas": {"edges": []}
+        }
+
+        # Migrate design variables
+        for i, dv in enumerate(old_config.get("design_variables", [])):
+            spec["design_variables"].append(self._migrate_dv(dv, i))
+
+        # Migrate objectives (deduplicate inferred extractors by id)
+        for i, obj in enumerate(old_config.get("objectives", [])):
+            extractor_id, extractor = self._infer_extractor(obj)
+            if extractor["id"] not in [e["id"] for e in spec["extractors"]]:
+                spec["extractors"].append(extractor)
+            spec["objectives"].append(self._migrate_objective(obj, i, extractor_id))
+
+        # ...
continue for other sections + + return spec + + def _migrate_dv(self, dv: Dict, index: int) -> Dict: + """Migrate a design variable.""" + bounds = dv.get("bounds", [dv.get("min", 0), dv.get("max", 1)]) + return { + "id": f"dv_{index:03d}", + "name": dv.get("name", dv.get("parameter", f"param_{index}")), + "expression_name": dv.get("expression_name", dv.get("parameter")), + "type": dv.get("type", "continuous"), + "bounds": {"min": bounds[0], "max": bounds[1]}, + "baseline": dv.get("baseline"), + "units": dv.get("units", dv.get("description", "")), + "enabled": dv.get("enabled", True) + } +``` + +### 8.2 Migration Script + +```bash +# CLI tool for batch migration +python -m optimization_engine.config.migrate \ + --input studies/*/optimization_config.json \ + --output-suffix "_spec_v2.json" \ + --validate \ + --dry-run +``` + +--- + +## 9. Implementation Roadmap + +### Phase 1: Foundation (Weeks 1-3) + +| Task | Priority | Effort | +|------|----------|--------| +| Define AtomizerSpec JSON Schema v2.0 | P0 | 2 days | +| Create SpecManager service (Python) | P0 | 3 days | +| Implement spec validation | P0 | 2 days | +| Add REST endpoints for spec CRUD | P0 | 3 days | +| Create migration script | P0 | 3 days | +| Update optimization engine to use spec | P1 | 4 days | + +**Deliverable**: Backend can read/write/validate AtomizerSpec + +### Phase 2: Frontend Integration (Weeks 4-6) + +| Task | Priority | Effort | +|------|----------|--------| +| Create SpecRenderer component | P0 | 4 days | +| Replace useCanvasStore with useSpecStore | P0 | 3 days | +| Implement spec ↔ canvas conversion | P0 | 3 days | +| Add WebSocket sync | P1 | 3 days | +| Update node panels for full spec fields | P1 | 4 days | +| Test bidirectional editing | P0 | 3 days | + +**Deliverable**: Canvas renders and edits AtomizerSpec directly + +### Phase 3: Claude Integration (Weeks 7-9) + +| Task | Priority | Effort | +|------|----------|--------| +| Implement spec_* MCP tools | P0 | 4 days | +| Add custom 
function support | P1 | 4 days | +| Test Claude → Canvas updates | P0 | 3 days | +| Add natural language study creation | P2 | 5 days | +| Create Claude assistant prompts | P1 | 2 days | + +**Deliverable**: Claude can read, modify, and extend AtomizerSpec + +### Phase 4: Polish & Testing (Weeks 10-12) + +| Task | Priority | Effort | +|------|----------|--------| +| Migrate all existing studies | P0 | 3 days | +| Integration testing | P0 | 5 days | +| Documentation | P1 | 3 days | +| Performance optimization | P2 | 3 days | +| User acceptance testing | P0 | 4 days | + +**Deliverable**: Production-ready unified configuration system + +--- + +## Appendix A: Comparison Table + +| Feature | Current | Proposed | +|---------|---------|----------| +| Config formats | 4+ variants | 1 (AtomizerSpec) | +| Canvas ↔ Config sync | Lossy, manual | Lossless, automatic | +| Claude modifications | Returns JSON for manual copy | Direct spec modification | +| Custom extractors | Edit Python code | Declare in spec | +| Real-time updates | None | WebSocket-driven | +| Validation | Partial Python | Full JSON Schema + semantic | +| Canvas layout | Lost on reload | Persisted in spec | + +--- + +## Appendix B: Glossary + +| Term | Definition | +|------|------------| +| **AtomizerSpec** | The unified JSON configuration format (v2.0) | +| **SpecManager** | Backend service for all spec operations | +| **SpecRenderer** | Frontend component that renders spec as canvas | +| **Custom Extractor** | User-defined Python function declared in spec | +| **Spec Patch** | Partial update to a single field in the spec | + +--- + +## Appendix C: File Locations (Proposed) + +``` +atomizer-dashboard/ +├── backend/api/ +│ ├── services/ +│ │ └── spec_manager.py # NEW: Core spec management +│ ├── routes/ +│ │ └── spec.py # NEW: REST endpoints for spec +│ └── schemas/ +│ └── atomizer_spec_v2.json # NEW: JSON Schema +│ +├── frontend/src/ +│ ├── hooks/ +│ │ └── useSpecStore.ts # NEW: Spec state management +│ ├── 
components/canvas/ +│ │ └── SpecRenderer.tsx # NEW: Spec-based canvas +│ └── lib/spec/ +│ ├── types.ts # NEW: TypeScript types +│ └── converter.ts # NEW: Spec ↔ ReactFlow +│ +mcp-server/atomizer-tools/ +└── src/tools/ + └── spec_tools.ts # NEW: MCP tools for spec + +optimization_engine/ +├── config/ +│ ├── migrator.py # NEW: v1 → v2 migration +│ └── spec_loader.py # NEW: AtomizerSpec loader +└── schemas/ + └── atomizer_spec_v2.json # Canonical schema +``` + +--- + +*This document is the master plan for implementing Atomizer's Unified Configuration Architecture. All implementation work should reference this document.* diff --git a/docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md b/docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md new file mode 100644 index 00000000..6ad9ff4c --- /dev/null +++ b/docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md @@ -0,0 +1,495 @@ +# Unified Configuration Architecture - Execution Plan + +**Project**: AtomizerSpec v2.0 Implementation +**Reference Document**: `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md` +**Schema Definition**: `optimization_engine/schemas/atomizer_spec_v2.json` +**Status**: Ready for Implementation + +--- + +## Project Overview + +Transform Atomizer's fragmented configuration system into a unified architecture where: +- One JSON schema (AtomizerSpec v2.0) is the single source of truth +- Canvas, Backend, Claude, and Optimization Engine all use the same spec +- Real-time WebSocket sync keeps all clients updated +- Claude can dynamically modify specs and add custom functions + +--- + +## Phase Structure + +| Phase | Name | Duration | Focus | +|-------|------|----------|-------| +| 1 | Foundation | Week 1-3 | Backend SpecManager, REST API, Migration | +| 2 | Frontend | Week 4-6 | SpecRenderer, WebSocket Sync, Store | +| 3 | Claude Integration | Week 7-9 | MCP Tools, Custom Functions | +| 4 | Polish & Testing | Week 10-12 | Migration, Testing, Documentation | + +--- + +## Implementation Order (Critical Path) + +``` +1. 
Schema & Types (P1.1-P1.3) + └── 2. SpecManager Service (P1.4-P1.7) + └── 3. REST Endpoints (P1.8-P1.12) + ├── 4. Migration Script (P1.13-P1.16) + └── 5. Frontend Store (P2.1-P2.4) + └── 6. SpecRenderer (P2.5-P2.10) + └── 7. WebSocket Sync (P2.11-P2.15) + └── 8. MCP Tools (P3.1-P3.8) + └── 9. Custom Functions (P3.9-P3.14) + └── 10. Testing & Polish (P4.1-P4.12) +``` + +--- + +## PHASE 1: Foundation (Backend) + +### P1.1 - Create TypeScript types from JSON Schema +- **File**: `atomizer-dashboard/frontend/src/types/atomizer-spec.ts` +- **Action**: Generate TypeScript interfaces matching `atomizer_spec_v2.json` +- **Reference**: Schema at `optimization_engine/schemas/atomizer_spec_v2.json` +- **Acceptance**: Types compile, cover all schema definitions + +### P1.2 - Create Python Pydantic models from JSON Schema +- **File**: `optimization_engine/config/spec_models.py` +- **Action**: Create Pydantic models for AtomizerSpec validation +- **Reference**: Schema at `optimization_engine/schemas/atomizer_spec_v2.json` +- **Acceptance**: Models validate example specs correctly + +### P1.3 - Create spec validation utility +- **File**: `optimization_engine/config/spec_validator.py` +- **Action**: JSON Schema validation + semantic validation (bounds, references) +- **Dependencies**: P1.2 +- **Acceptance**: Validates good specs, rejects invalid with clear errors + +### P1.4 - Create SpecManager core class +- **File**: `atomizer-dashboard/backend/api/services/spec_manager.py` +- **Action**: Implement `load()`, `save()`, `_validate()`, `_compute_hash()` +- **Reference**: Design in UNIFIED_CONFIGURATION_ARCHITECTURE.md Section 5.1 +- **Dependencies**: P1.3 +- **Acceptance**: Can load/save/validate atomizer_spec.json files + +### P1.5 - Add SpecManager patch functionality +- **File**: `atomizer-dashboard/backend/api/services/spec_manager.py` +- **Action**: Implement `patch()` method for JSONPath-style updates +- **Dependencies**: P1.4 +- **Acceptance**: Can update nested fields with 
conflict detection + +### P1.6 - Add SpecManager node operations +- **File**: `atomizer-dashboard/backend/api/services/spec_manager.py` +- **Action**: Implement `add_node()`, `remove_node()`, `_generate_id()`, `_auto_position()` +- **Dependencies**: P1.5 +- **Acceptance**: Can add/remove design vars, extractors, objectives, constraints + +### P1.7 - Add SpecManager custom function support +- **File**: `atomizer-dashboard/backend/api/services/spec_manager.py` +- **Action**: Implement `add_custom_function()` with Python syntax validation +- **Dependencies**: P1.6 +- **Acceptance**: Can add custom extractors with validated Python code + +### P1.8 - Create spec REST router +- **File**: `atomizer-dashboard/backend/api/routes/spec.py` +- **Action**: Create FastAPI router for spec endpoints +- **Dependencies**: P1.7 +- **Acceptance**: Router imports and mounts correctly + +### P1.9 - Implement GET /studies/{study_id}/spec +- **File**: `atomizer-dashboard/backend/api/routes/spec.py` +- **Action**: Return full AtomizerSpec for a study +- **Dependencies**: P1.8 +- **Acceptance**: Returns valid spec JSON, 404 for missing studies + +### P1.10 - Implement PUT /studies/{study_id}/spec +- **File**: `atomizer-dashboard/backend/api/routes/spec.py` +- **Action**: Replace entire spec with validation +- **Dependencies**: P1.9 +- **Acceptance**: Validates, saves, returns new hash + +### P1.11 - Implement PATCH /studies/{study_id}/spec +- **File**: `atomizer-dashboard/backend/api/routes/spec.py` +- **Action**: Partial update with JSONPath +- **Dependencies**: P1.10 +- **Acceptance**: Updates specific fields, broadcasts change + +### P1.12 - Implement POST /studies/{study_id}/spec/validate +- **File**: `atomizer-dashboard/backend/api/routes/spec.py` +- **Action**: Validate spec and return detailed report +- **Dependencies**: P1.11 +- **Acceptance**: Returns errors, warnings, summary + +### P1.13 - Create config migration base +- **File**: `optimization_engine/config/migrator.py` +- 
**Action**: Create SpecMigrator class with field mapping +- **Reference**: Design in UNIFIED_CONFIGURATION_ARCHITECTURE.md Section 8 +- **Acceptance**: Class structure ready for migration logic + +### P1.14 - Implement design variable migration +- **File**: `optimization_engine/config/migrator.py` +- **Action**: Migrate `bounds[]` → `bounds.min/max`, `parameter` → `expression_name` +- **Dependencies**: P1.13 +- **Acceptance**: All DV formats convert correctly + +### P1.15 - Implement objective/constraint migration +- **File**: `optimization_engine/config/migrator.py` +- **Action**: Migrate `goal` → `direction`, extraction configs to new format +- **Dependencies**: P1.14 +- **Acceptance**: Objectives and constraints convert correctly + +### P1.16 - Implement full config migration +- **File**: `optimization_engine/config/migrator.py` +- **Action**: Complete migration including canvas positions, extractors inference +- **Dependencies**: P1.15 +- **Acceptance**: Can migrate any existing optimization_config.json to AtomizerSpec + +### P1.17 - Create migration CLI tool +- **File**: `tools/migrate_to_spec_v2.py` +- **Action**: CLI for batch migration with dry-run support +- **Dependencies**: P1.16 +- **Acceptance**: `python tools/migrate_to_spec_v2.py --dry-run studies/*` + +--- + +## PHASE 2: Frontend Integration + +### P2.1 - Create useSpecStore hook +- **File**: `atomizer-dashboard/frontend/src/hooks/useSpecStore.ts` +- **Action**: Zustand store for AtomizerSpec state management +- **Dependencies**: P1.1 +- **Acceptance**: Store holds spec, provides typed accessors + +### P2.2 - Add spec loading to useSpecStore +- **File**: `atomizer-dashboard/frontend/src/hooks/useSpecStore.ts` +- **Action**: Implement `loadSpec(studyId)` fetching from API +- **Dependencies**: P2.1, P1.9 +- **Acceptance**: Loads spec from backend, updates store + +### P2.3 - Add spec modification to useSpecStore +- **File**: `atomizer-dashboard/frontend/src/hooks/useSpecStore.ts` +- **Action**: 
Implement `patchSpec()`, `addNode()`, `removeNode()` calling API +- **Dependencies**: P2.2, P1.11 +- **Acceptance**: Modifications persist to backend + +### P2.4 - Add optimistic updates to useSpecStore +- **File**: `atomizer-dashboard/frontend/src/hooks/useSpecStore.ts` +- **Action**: Update local state immediately, rollback on error +- **Dependencies**: P2.3 +- **Acceptance**: UI feels instant, handles errors gracefully + +### P2.5 - Create specToNodes converter +- **File**: `atomizer-dashboard/frontend/src/lib/spec/converter.ts` +- **Action**: Convert AtomizerSpec to ReactFlow nodes array +- **Reference**: Design in UNIFIED_CONFIGURATION_ARCHITECTURE.md Section 5.2 +- **Dependencies**: P1.1 +- **Acceptance**: All node types render correctly + +### P2.6 - Create specToEdges converter +- **File**: `atomizer-dashboard/frontend/src/lib/spec/converter.ts` +- **Action**: Convert spec.canvas.edges to ReactFlow edges +- **Dependencies**: P2.5 +- **Acceptance**: All connections render correctly + +### P2.7 - Create SpecRenderer component +- **File**: `atomizer-dashboard/frontend/src/components/canvas/SpecRenderer.tsx` +- **Action**: ReactFlow component that renders from useSpecStore +- **Dependencies**: P2.5, P2.6, P2.4 +- **Acceptance**: Canvas displays spec correctly + +### P2.8 - Wire node editing to spec updates +- **File**: `atomizer-dashboard/frontend/src/components/canvas/SpecRenderer.tsx` +- **Action**: Node data changes call useSpecStore.patchSpec() +- **Dependencies**: P2.7 +- **Acceptance**: Editing node properties persists to spec + +### P2.9 - Wire node position to spec updates +- **File**: `atomizer-dashboard/frontend/src/components/canvas/SpecRenderer.tsx` +- **Action**: Drag-drop updates canvas_position in spec +- **Dependencies**: P2.8 +- **Acceptance**: Layout persists across reloads + +### P2.10 - Update node panels for full spec fields +- **Files**: `atomizer-dashboard/frontend/src/components/canvas/panels/*.tsx` +- **Action**: Update all node config 
panels to show/edit full spec fields +- **Dependencies**: P2.8 +- **Acceptance**: All spec fields are editable in UI + +### P2.11 - Create WebSocket connection hook +- **File**: `atomizer-dashboard/frontend/src/hooks/useSpecSync.ts` +- **Action**: WebSocket connection to `/api/studies/{id}/sync` +- **Dependencies**: P2.4 +- **Acceptance**: Connects, handles reconnection + +### P2.12 - Create WebSocket backend endpoint +- **File**: `atomizer-dashboard/backend/api/routes/spec.py` +- **Action**: WebSocket endpoint for spec sync +- **Dependencies**: P1.12 +- **Acceptance**: Accepts connections, tracks subscribers + +### P2.13 - Implement spec_updated broadcast +- **File**: `atomizer-dashboard/backend/api/services/spec_manager.py` +- **Action**: SpecManager broadcasts to all subscribers on save +- **Dependencies**: P2.12 +- **Acceptance**: All connected clients receive updates + +### P2.14 - Handle spec_updated in frontend +- **File**: `atomizer-dashboard/frontend/src/hooks/useSpecSync.ts` +- **Action**: On spec_updated message, refresh spec from store +- **Dependencies**: P2.11, P2.13 +- **Acceptance**: Changes from other clients appear in real-time + +### P2.15 - Add conflict detection +- **File**: `atomizer-dashboard/frontend/src/hooks/useSpecStore.ts` +- **Action**: Compare hashes, warn on conflict, offer merge/overwrite +- **Dependencies**: P2.14 +- **Acceptance**: Concurrent edits don't silently overwrite + +### P2.16 - Replace CanvasView with SpecRenderer +- **File**: `atomizer-dashboard/frontend/src/pages/CanvasView.tsx` +- **Action**: Switch from useCanvasStore to useSpecStore + SpecRenderer +- **Dependencies**: P2.10, P2.15 +- **Acceptance**: Canvas page uses new spec-based system + +--- + +## PHASE 3: Claude Integration + +### P3.1 - Create spec_get MCP tool +- **File**: `mcp-server/atomizer-tools/src/tools/spec_tools.ts` +- **Action**: Tool to retrieve full AtomizerSpec +- **Reference**: Design in UNIFIED_CONFIGURATION_ARCHITECTURE.md Section 5.3 +- 
**Dependencies**: P1.9 +- **Acceptance**: Claude can read spec via MCP + +### P3.2 - Create spec_modify MCP tool +- **File**: `mcp-server/atomizer-tools/src/tools/spec_tools.ts` +- **Action**: Tool to apply modifications (set, add, remove operations) +- **Dependencies**: P3.1, P1.11 +- **Acceptance**: Claude can modify spec fields + +### P3.3 - Create spec_add_node MCP tool +- **File**: `mcp-server/atomizer-tools/src/tools/spec_tools.ts` +- **Action**: Tool to add design vars, extractors, objectives, constraints +- **Dependencies**: P3.2 +- **Acceptance**: Claude can add nodes to canvas + +### P3.4 - Create spec_remove_node MCP tool +- **File**: `mcp-server/atomizer-tools/src/tools/spec_tools.ts` +- **Action**: Tool to remove nodes (with edge cleanup) +- **Dependencies**: P3.3 +- **Acceptance**: Claude can remove nodes + +### P3.5 - Create spec_validate MCP tool +- **File**: `mcp-server/atomizer-tools/src/tools/spec_tools.ts` +- **Action**: Tool to validate spec and return report +- **Dependencies**: P1.12 +- **Acceptance**: Claude can check spec validity + +### P3.6 - Create spec_add_custom_extractor MCP tool +- **File**: `mcp-server/atomizer-tools/src/tools/spec_tools.ts` +- **Action**: Tool to add custom Python functions as extractors +- **Dependencies**: P1.7, P3.3 +- **Acceptance**: Claude can add custom extraction logic + +### P3.7 - Register spec tools in MCP server +- **File**: `mcp-server/atomizer-tools/src/index.ts` +- **Action**: Import and register all spec_* tools +- **Dependencies**: P3.1-P3.6 +- **Acceptance**: Tools appear in MCP tool list + +### P3.8 - Update ContextBuilder for spec awareness +- **File**: `atomizer-dashboard/backend/api/services/context_builder.py` +- **Action**: Include spec summary in Claude context, mention spec tools +- **Dependencies**: P3.7 +- **Acceptance**: Claude knows about spec tools in context + +### P3.9 - Create custom extractor runtime loader +- **File**: `optimization_engine/extractors/custom_loader.py` +- 
**Action**: Dynamically load and execute custom functions from spec +- **Dependencies**: P1.7 +- **Acceptance**: Custom functions execute during optimization + +### P3.10 - Integrate custom extractors into optimization runner +- **File**: `optimization_engine/core/runner.py` +- **Action**: Check spec for custom extractors, load and use them +- **Dependencies**: P3.9 +- **Acceptance**: Optimization uses custom extractors defined in spec + +### P3.11 - Add custom extractor node type to canvas +- **File**: `atomizer-dashboard/frontend/src/components/canvas/nodes/CustomExtractorNode.tsx` +- **Action**: New node type showing custom function with code preview +- **Dependencies**: P2.10 +- **Acceptance**: Custom extractors display distinctly in canvas + +### P3.12 - Add code editor for custom extractors +- **File**: `atomizer-dashboard/frontend/src/components/canvas/panels/CustomExtractorPanel.tsx` +- **Action**: Monaco editor for viewing/editing custom Python code +- **Dependencies**: P3.11 +- **Acceptance**: Users can view and edit custom function code + +### P3.13 - Create spec_create_from_description MCP tool +- **File**: `mcp-server/atomizer-tools/src/tools/spec_tools.ts` +- **Action**: Tool to create new spec from natural language + model path +- **Dependencies**: P3.6, P1.16 +- **Acceptance**: Claude can create studies from descriptions + +### P3.14 - Update Claude prompts for spec workflow +- **File**: `atomizer-dashboard/backend/api/services/context_builder.py` +- **Action**: Update system prompts to guide Claude on using spec tools +- **Dependencies**: P3.13 +- **Acceptance**: Claude naturally uses spec tools for modifications + +--- + +## PHASE 4: Polish & Testing + +### P4.1 - Migrate m1_mirror studies +- **Action**: Run migration on all m1_mirror_* studies +- **Dependencies**: P1.17 +- **Acceptance**: All studies have valid atomizer_spec.json + +### P4.2 - Migrate drone_gimbal study +- **Action**: Run migration on drone_gimbal study +- **Dependencies**: P1.17 
+- **Acceptance**: Study has valid atomizer_spec.json + +### P4.3 - Migrate all remaining studies +- **Action**: Run migration on all studies in studies/ +- **Dependencies**: P4.1, P4.2 +- **Acceptance**: All studies migrated, validated + +### P4.4 - Create spec unit tests +- **File**: `tests/test_spec_manager.py` +- **Action**: Unit tests for SpecManager operations +- **Dependencies**: P1.7 +- **Acceptance**: All SpecManager methods tested + +### P4.5 - Create spec API integration tests +- **File**: `tests/test_spec_api.py` +- **Action**: Integration tests for REST endpoints +- **Dependencies**: P1.12 +- **Acceptance**: All endpoints tested + +### P4.6 - Create migration tests +- **File**: `tests/test_migrator.py` +- **Action**: Test migration with various config formats +- **Dependencies**: P1.16 +- **Acceptance**: All config variants migrate correctly + +### P4.7 - Create frontend component tests +- **File**: `atomizer-dashboard/frontend/src/__tests__/SpecRenderer.test.tsx` +- **Action**: Test SpecRenderer with various specs +- **Dependencies**: P2.16 +- **Acceptance**: Canvas renders correctly for all spec types + +### P4.8 - Create WebSocket sync tests +- **File**: `tests/test_spec_sync.py` +- **Action**: Test real-time sync between multiple clients +- **Dependencies**: P2.15 +- **Acceptance**: Changes propagate correctly + +### P4.9 - Create MCP tools tests +- **File**: `mcp-server/atomizer-tools/src/__tests__/spec_tools.test.ts` +- **Action**: Test all spec_* MCP tools +- **Dependencies**: P3.7 +- **Acceptance**: All tools work correctly + +### P4.10 - End-to-end testing +- **Action**: Full workflow test: create study in canvas, modify via Claude, run optimization +- **Dependencies**: P4.1-P4.9 +- **Acceptance**: Complete workflow works + +### P4.11 - Update documentation +- **Files**: `docs/guides/CANVAS.md`, `docs/guides/DASHBOARD.md` +- **Action**: Document new spec-based workflow +- **Dependencies**: P4.10 +- **Acceptance**: Documentation 
reflects new system + +### P4.12 - Update CLAUDE.md +- **File**: `CLAUDE.md` +- **Action**: Add spec tools documentation, update context loading +- **Dependencies**: P4.11 +- **Acceptance**: Claude Code sessions know about spec system + +--- + +## File Summary + +### New Files to Create + +| File | Phase | Purpose | +|------|-------|---------| +| `atomizer-dashboard/frontend/src/types/atomizer-spec.ts` | P1 | TypeScript types | +| `optimization_engine/config/spec_models.py` | P1 | Pydantic models | +| `optimization_engine/config/spec_validator.py` | P1 | Validation logic | +| `atomizer-dashboard/backend/api/services/spec_manager.py` | P1 | Core spec service | +| `atomizer-dashboard/backend/api/routes/spec.py` | P1 | REST endpoints | +| `optimization_engine/config/migrator.py` | P1 | Config migration | +| `tools/migrate_to_spec_v2.py` | P1 | Migration CLI | +| `atomizer-dashboard/frontend/src/hooks/useSpecStore.ts` | P2 | Spec state store | +| `atomizer-dashboard/frontend/src/hooks/useSpecSync.ts` | P2 | WebSocket sync | +| `atomizer-dashboard/frontend/src/lib/spec/converter.ts` | P2 | Spec ↔ ReactFlow | +| `atomizer-dashboard/frontend/src/components/canvas/SpecRenderer.tsx` | P2 | New canvas component | +| `mcp-server/atomizer-tools/src/tools/spec_tools.ts` | P3 | MCP tools | +| `optimization_engine/extractors/custom_loader.py` | P3 | Custom function loader | +| `atomizer-dashboard/frontend/src/components/canvas/nodes/CustomExtractorNode.tsx` | P3 | Custom node type | +| `atomizer-dashboard/frontend/src/components/canvas/panels/CustomExtractorPanel.tsx` | P3 | Code editor panel | + +### Files to Modify + +| File | Phase | Changes | +|------|-------|---------| +| `atomizer-dashboard/backend/api/main.py` | P1 | Mount spec router | +| `mcp-server/atomizer-tools/src/index.ts` | P3 | Register spec tools | +| `atomizer-dashboard/backend/api/services/context_builder.py` | P3 | Update Claude context | +| `optimization_engine/core/runner.py` | P3 | Custom extractor support 
| +| `atomizer-dashboard/frontend/src/pages/CanvasView.tsx` | P2 | Use SpecRenderer | +| `CLAUDE.md` | P4 | Document spec system | + +--- + +## Success Criteria + +### Phase 1 Complete When: +- [ ] SpecManager can load/save/validate specs +- [ ] All REST endpoints return correct responses +- [ ] Migration tool converts existing configs + +### Phase 2 Complete When: +- [ ] Canvas renders from AtomizerSpec +- [ ] Edits persist to spec file +- [ ] WebSocket sync works between clients + +### Phase 3 Complete When: +- [ ] Claude can read and modify specs via MCP +- [ ] Custom extractors work in optimization +- [ ] Claude can create studies from descriptions + +### Phase 4 Complete When: +- [ ] All existing studies migrated +- [ ] All tests pass +- [ ] Documentation updated + +--- + +## Risk Mitigation + +| Risk | Mitigation | +|------|------------| +| Existing studies break | Migration tool with dry-run, keep old configs as backup | +| WebSocket complexity | Start with polling, add WebSocket as enhancement | +| Custom code security | Sandbox execution, syntax validation, no imports | +| Performance with large specs | Lazy loading, incremental updates | + +--- + +## Quick Reference + +**Master Design**: `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md` +**Schema**: `optimization_engine/schemas/atomizer_spec_v2.json` +**This Plan**: `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` + +--- + +*Execute tasks in order. Each task has clear acceptance criteria. Reference the master design document for detailed specifications.* diff --git a/docs/plans/UNIFIED_CONFIG_QUICKSTART.md b/docs/plans/UNIFIED_CONFIG_QUICKSTART.md new file mode 100644 index 00000000..f262870a --- /dev/null +++ b/docs/plans/UNIFIED_CONFIG_QUICKSTART.md @@ -0,0 +1,92 @@ +# AtomizerSpec v2.0 - Quick Start Guide + +## TL;DR + +**Goal**: Replace 4+ config formats with one unified `atomizer_spec.json` that Canvas, Backend, Claude, and Optimization Engine all use. 
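+A minimal sketch of what that file might contain. Field names follow the migration tasks in the execution plan (`expression_name`, `bounds.min/max`, `direction`, `canvas_position`) but are purely illustrative; the authoritative definition is the JSON Schema at `optimization_engine/schemas/atomizer_spec_v2.json`:
+
+```json
+{
+  "spec_version": "2.0",
+  "study_id": "example_study",
+  "design_variables": [
+    {
+      "id": "dv_1",
+      "expression_name": "wall_thickness",
+      "bounds": {"min": 1.0, "max": 5.0},
+      "canvas_position": {"x": 0, "y": 120}
+    }
+  ],
+  "extractors": [{"id": "ex_mass", "type": "builtin"}],
+  "objectives": [{"id": "obj_mass", "extractor": "ex_mass", "direction": "minimize"}],
+  "constraints": [],
+  "canvas": {"edges": [{"source": "ex_mass", "target": "obj_mass"}]}
+}
+```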
+ +**Effort**: ~50 tasks across 4 phases (~12 weeks) + +**Key Files**: +- Design: `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md` +- Tasks: `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` +- Schema: `optimization_engine/schemas/atomizer_spec_v2.json` +- Prompts: `docs/plans/UNIFIED_CONFIG_RALPH_PROMPT.md` + +--- + +## Start Autonomous Implementation + +### 1. Open Claude CLI +```bash +claude --dangerously-skip-permissions +``` + +### 2. Paste Initial Prompt +``` +You are implementing the AtomizerSpec v2.0 Unified Configuration Architecture for Atomizer. + +Read these documents before starting: +1. `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md` - Master design +2. `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` - Task list (50+ tasks) +3. `optimization_engine/schemas/atomizer_spec_v2.json` - JSON Schema + +Implement tasks in order (P1.1 → P4.12). Use TodoWrite to track progress. Commit after each logical group. Reference the design document for code examples. + +Begin by reading the documents and loading tasks into TodoWrite. Start with P1.1. +``` + +### 3. Let It Run +Claude will work through the task list autonomously, committing as it goes. + +--- + +## Phase Summary + +| Phase | Tasks | Focus | Key Deliverables | +|-------|-------|-------|------------------| +| **1** | P1.1-P1.17 | Backend | SpecManager, REST API, Migration | +| **2** | P2.1-P2.16 | Frontend | SpecRenderer, WebSocket Sync | +| **3** | P3.1-P3.14 | Claude | MCP Tools, Custom Functions | +| **4** | P4.1-P4.12 | Polish | Tests, Migration, Docs | + +--- + +## Key Concepts + +**AtomizerSpec**: Single JSON file containing everything needed for optimization AND canvas display. + +**SpecManager**: Backend service that validates, saves, and broadcasts spec changes. + +**SpecRenderer**: Frontend component that renders canvas directly from spec. + +**Real-time Sync**: WebSocket broadcasts changes to all connected clients. 
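+The sync concept leans on two small mechanisms that the plan names but does not show: a content hash for conflict detection (P2.15) and a broadcast-on-save (P2.13). A pure-Python sketch with illustrative names only — the real SpecManager API is whatever tasks P1.4-P1.7 define:
+
+```python
+import hashlib
+import json
+
+class SpecManager:
+    """Sketch of hash-based conflict detection plus broadcast-on-save."""
+
+    def __init__(self):
+        self.subscribers = []  # notify callables, one per connected client
+        self.spec = {"version": "2.0"}
+        self.hash = self._hash(self.spec)
+
+    @staticmethod
+    def _hash(spec):
+        # Canonical serialization so the same spec always hashes the same.
+        canonical = json.dumps(spec, sort_keys=True).encode()
+        return hashlib.sha256(canonical).hexdigest()
+
+    def save(self, new_spec, base_hash):
+        # Conflict detection: client must present the hash it loaded from.
+        if base_hash != self.hash:
+            raise ValueError("spec changed since load; merge or overwrite")
+        self.spec = new_spec
+        self.hash = self._hash(new_spec)
+        for notify in self.subscribers:  # broadcast to all connected clients
+            notify({"type": "spec_updated", "hash": self.hash})
+        return self.hash
+```
+
+In the real backend the subscriber list would hold WebSocket connections rather than plain callables, but the save/compare/broadcast order is the part that matters.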
+ +**Custom Functions**: Python code stored in spec, loaded dynamically during optimization. + +--- + +## Success Metrics + +- [ ] Canvas loads/saves without data loss +- [ ] Claude can modify specs via MCP tools +- [ ] All existing studies migrated +- [ ] WebSocket sync works between clients +- [ ] Custom extractors execute in optimization +- [ ] All tests pass + +--- + +## If Resuming + +``` +Continue implementing AtomizerSpec v2.0. + +Reference: +- `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` + +Check current todo list, find last completed task, continue from there. +``` + +--- + +*Full details in the referenced documents.* diff --git a/docs/plans/UNIFIED_CONFIG_RALPH_PROMPT.md b/docs/plans/UNIFIED_CONFIG_RALPH_PROMPT.md new file mode 100644 index 00000000..238038ca --- /dev/null +++ b/docs/plans/UNIFIED_CONFIG_RALPH_PROMPT.md @@ -0,0 +1,286 @@ +# Ralph Loop Prompt: AtomizerSpec v2.0 Implementation + +**Copy this prompt to start the autonomous implementation loop.** + +--- + +## Claude CLI Command + +```bash +claude --dangerously-skip-permissions +``` + +--- + +## Initial Prompt (Copy This) + +``` +You are implementing the AtomizerSpec v2.0 Unified Configuration Architecture for Atomizer. + +## Project Context + +Read these documents before starting: +1. `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md` - Master design document with architecture, schemas, and component designs +2. `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` - Detailed task list with 50+ tasks across 4 phases +3. `optimization_engine/schemas/atomizer_spec_v2.json` - The JSON Schema definition + +## Your Mission + +Implement the AtomizerSpec v2.0 system following the execution plan. Work through tasks in order (P1.1 → P1.2 → ... → P4.12). + +## Rules + +1. **Follow the plan**: Execute tasks in the order specified in UNIFIED_CONFIG_EXECUTION_PLAN.md +2. **Check dependencies**: Don't start a task until its dependencies are complete +3. 
**Use TodoWrite**: Track all tasks, mark in_progress when starting, completed when done +4. **Test as you go**: Verify each task meets its acceptance criteria before moving on +5. **Commit regularly**: Commit after completing each logical group of tasks (e.g., after P1.7, after P1.12) +6. **Reference the design**: The master document has code examples - use them +7. **Ask if blocked**: If you encounter a blocker, explain what's wrong and what you need + +## Starting Point + +1. Read the three reference documents listed above +2. Load the current task list into TodoWrite +3. Start with P1.1 (Create TypeScript types from JSON Schema) +4. Work through each task, marking completion as you go + +## Commit Message Format + +Use conventional commits: +- `feat(spec): Add SpecManager core class (P1.4)` +- `feat(spec-api): Implement GET /studies/{id}/spec (P1.9)` +- `feat(frontend): Create useSpecStore hook (P2.1)` +- `test(spec): Add SpecManager unit tests (P4.4)` + +## Progress Tracking + +After completing each phase, summarize: +- Tasks completed +- Files created/modified +- Any deviations from plan +- Blockers encountered + +Begin by reading the reference documents and loading tasks into TodoWrite. +``` + +--- + +## Continuation Prompt (When Resuming) + +``` +Continue implementing the AtomizerSpec v2.0 system. + +Reference documents: +- `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md` - Master design +- `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` - Task list + +Check where we left off: +1. Read the current todo list +2. Find the last completed task +3. Continue with the next task in sequence + +If you need context on what was done, check recent git commits. + +Resume implementation. +``` + +--- + +## Phase-Specific Prompts + +### Start Phase 1 (Foundation) +``` +Begin Phase 1 of AtomizerSpec implementation - Foundation/Backend. 
+ +Reference: `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` section "PHASE 1: Foundation" + +Tasks P1.1 through P1.17 focus on: +- TypeScript and Python types from JSON Schema +- SpecManager service (load, save, validate, patch, nodes) +- REST API endpoints +- Migration tool + +Start with P1.1: Create TypeScript types from the JSON Schema at `optimization_engine/schemas/atomizer_spec_v2.json`. + +Work through each task in order, tracking with TodoWrite. +``` + +### Start Phase 2 (Frontend) +``` +Begin Phase 2 of AtomizerSpec implementation - Frontend Integration. + +Reference: `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` section "PHASE 2: Frontend Integration" + +Tasks P2.1 through P2.16 focus on: +- useSpecStore hook (Zustand state management) +- Spec ↔ ReactFlow conversion +- SpecRenderer component +- WebSocket real-time sync +- Replacing old canvas with spec-based canvas + +Prerequisites: Phase 1 must be complete (P1.1-P1.17). + +Start with P2.1: Create useSpecStore hook. + +Work through each task in order, tracking with TodoWrite. +``` + +### Start Phase 3 (Claude Integration) +``` +Begin Phase 3 of AtomizerSpec implementation - Claude Integration. + +Reference: `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` section "PHASE 3: Claude Integration" + +Tasks P3.1 through P3.14 focus on: +- MCP tools for spec operations (spec_get, spec_modify, spec_add_node, etc.) +- Custom extractor support (add Python functions via Claude) +- Runtime loading of custom functions +- Natural language study creation + +Prerequisites: Phases 1 and 2 must be complete. + +Start with P3.1: Create spec_get MCP tool. + +Work through each task in order, tracking with TodoWrite. +``` + +### Start Phase 4 (Polish & Testing) +``` +Begin Phase 4 of AtomizerSpec implementation - Polish & Testing. 
+ +Reference: `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` section "PHASE 4: Polish & Testing" + +Tasks P4.1 through P4.12 focus on: +- Migrating all existing studies to AtomizerSpec v2 +- Unit tests, integration tests, e2e tests +- Documentation updates + +Prerequisites: Phases 1, 2, and 3 must be complete. + +Start with P4.1: Migrate m1_mirror studies. + +Work through each task in order, tracking with TodoWrite. +``` + +--- + +## Troubleshooting Prompts + +### If Stuck on a Task +``` +I'm blocked on task [TASK_ID]. + +The issue is: [describe the problem] + +What I've tried: [list attempts] + +Please help me resolve this so I can continue with the execution plan. +``` + +### If Tests Fail +``` +Tests are failing for [component]. + +Error: [paste error] + +This is blocking task [TASK_ID]. Help me fix the tests so I can mark the task complete. +``` + +### If Design Needs Clarification +``` +Task [TASK_ID] requires [specific thing] but the design document doesn't specify [what's unclear]. + +Options I see: +1. [option A] +2. [option B] + +Which approach should I take? Or should I update the design document? 
+``` + +--- + +## Full Task List for TodoWrite + +Copy this to initialize the todo list: + +``` +P1.1 - Create TypeScript types from JSON Schema +P1.2 - Create Python Pydantic models from JSON Schema +P1.3 - Create spec validation utility +P1.4 - Create SpecManager core class +P1.5 - Add SpecManager patch functionality +P1.6 - Add SpecManager node operations +P1.7 - Add SpecManager custom function support +P1.8 - Create spec REST router +P1.9 - Implement GET /studies/{study_id}/spec +P1.10 - Implement PUT /studies/{study_id}/spec +P1.11 - Implement PATCH /studies/{study_id}/spec +P1.12 - Implement POST /studies/{study_id}/spec/validate +P1.13 - Create config migration base +P1.14 - Implement design variable migration +P1.15 - Implement objective/constraint migration +P1.16 - Implement full config migration +P1.17 - Create migration CLI tool +P2.1 - Create useSpecStore hook +P2.2 - Add spec loading to useSpecStore +P2.3 - Add spec modification to useSpecStore +P2.4 - Add optimistic updates to useSpecStore +P2.5 - Create specToNodes converter +P2.6 - Create specToEdges converter +P2.7 - Create SpecRenderer component +P2.8 - Wire node editing to spec updates +P2.9 - Wire node position to spec updates +P2.10 - Update node panels for full spec fields +P2.11 - Create WebSocket connection hook +P2.12 - Create WebSocket backend endpoint +P2.13 - Implement spec_updated broadcast +P2.14 - Handle spec_updated in frontend +P2.15 - Add conflict detection +P2.16 - Replace CanvasView with SpecRenderer +P3.1 - Create spec_get MCP tool +P3.2 - Create spec_modify MCP tool +P3.3 - Create spec_add_node MCP tool +P3.4 - Create spec_remove_node MCP tool +P3.5 - Create spec_validate MCP tool +P3.6 - Create spec_add_custom_extractor MCP tool +P3.7 - Register spec tools in MCP server +P3.8 - Update ContextBuilder for spec awareness +P3.9 - Create custom extractor runtime loader +P3.10 - Integrate custom extractors into optimization runner +P3.11 - Add custom extractor node type to canvas 
+P3.12 - Add code editor for custom extractors +P3.13 - Create spec_create_from_description MCP tool +P3.14 - Update Claude prompts for spec workflow +P4.1 - Migrate m1_mirror studies +P4.2 - Migrate drone_gimbal study +P4.3 - Migrate all remaining studies +P4.4 - Create spec unit tests +P4.5 - Create spec API integration tests +P4.6 - Create migration tests +P4.7 - Create frontend component tests +P4.8 - Create WebSocket sync tests +P4.9 - Create MCP tools tests +P4.10 - End-to-end testing +P4.11 - Update documentation +P4.12 - Update CLAUDE.md +``` + +--- + +## Expected Outcomes + +After successful completion: + +1. **Single Source of Truth**: `atomizer_spec.json` used everywhere +2. **Bidirectional Sync**: Canvas ↔ Spec with no data loss +3. **Real-time Updates**: WebSocket keeps all clients in sync +4. **Claude Integration**: Claude can read/modify/create specs via MCP +5. **Custom Functions**: Users can add Python extractors through UI/Claude +6. **All Studies Migrated**: Existing configs converted to v2 format +7. **Full Test Coverage**: Unit, integration, and e2e tests passing +8. **Updated Documentation**: User guides reflect new workflow + +--- + +*This prompt file enables autonomous implementation of the AtomizerSpec v2.0 system using the Ralph Loop pattern.* diff --git a/docs/reference/DEEP_INVESTIGATION_PROMPT.md b/docs/reference/DEEP_INVESTIGATION_PROMPT.md new file mode 100644 index 00000000..1b049873 --- /dev/null +++ b/docs/reference/DEEP_INVESTIGATION_PROMPT.md @@ -0,0 +1,197 @@ +# Deep Investigation & Architecture Analysis Prompt + +**Purpose**: Reusable instructions for conducting thorough system investigations and producing actionable design documents. + +--- + +## Full Investigation Prompt + +Use this when you need a comprehensive analysis of a system, architecture, or problem: + +```markdown +Please conduct a comprehensive investigation and produce a master design document. + +### Investigation Phase + +1. 
**Multi-Agent Exploration**: Launch parallel exploration agents to examine: + - Frontend components and state management + - Backend services, routes, and data flow + - Core engine/library code and schemas + - Configuration files and existing patterns + - Any MCP tools, APIs, or integration points + +2. **Source of Truth Analysis**: Identify ALL places where the same concept is represented: + - Data schemas/types (JSON, TypeScript, Python) + - Configuration formats and their variants + - State management across components + - Document inconsistencies and naming conflicts + +3. **Data Flow Mapping**: Trace how data moves through the system: + - Entry points → Processing → Storage → Display + - Identify lossy conversions and sync gaps + - Note missing endpoints or broken connections + +### Documentation Phase + +4. **Current State Report**: Document what exists with: + - Architecture diagrams (text-based) + - Comparison tables showing format inconsistencies + - Data flow diagrams showing where information is lost + - Clear "What's Wrong / What's OK" assessment + +5. **Problem Statement**: Articulate: + - Core issues with severity ratings + - User pain points with concrete scenarios + - Technical debt and architectural gaps + +### Design Phase + +6. **Proposed Architecture**: Design the solution with: + - Single source of truth principle + - Complete schema definition (JSON Schema if applicable) + - Component responsibilities and interfaces + - API/endpoint specifications + - Real-time sync strategy if needed + +7. **Integration Design**: Show how components connect: + - Frontend ↔ Backend ↔ Engine data contracts + - Bidirectional sync mechanisms + - AI/Assistant integration points + - Extensibility patterns (plugins, custom functions) + +### Planning Phase + +8. **Implementation Roadmap**: Break into phases with: + - Clear deliverables per phase + - Priority (P0/P1/P2) and effort estimates + - Dependencies between tasks + - Migration strategy for existing data + +9. 
**Appendices**: Include: + - Glossary of terms + - File location references + - Comparison tables (before/after) + - Code examples for key components + +### Output Format + +Produce: +1. **Master Document** (Markdown): Complete design document with all sections +2. **Schema Files**: Actual JSON Schema or type definitions ready to use +3. **Executive Summary**: Key findings, what's broken, proposed solution, timeline + +### Quality Standards + +- Read actual source files, don't assume +- Use tables for comparisons (makes inconsistencies obvious) +- Include text-based diagrams for architecture and data flow +- Provide concrete code examples, not just descriptions +- Make it actionable: someone should be able to implement from this document +``` + +--- + +## Quick Version + +For faster investigations or when you already have context: + +```markdown +Deep investigation with master document output: + +1. **Explore**: Parallel agents examine frontend, backend, engine, schemas +2. **Map**: Trace data flow, identify all representations of same concept +3. **Compare**: Tables showing format inconsistencies and naming conflicts +4. **Diagram**: Architecture and data flow (text-based) +5. **Assess**: What's wrong (severity) / What's OK / User pain points +6. **Design**: Single source of truth, complete schema, API specs +7. **Plan**: Phased roadmap with priorities and effort estimates +8. **Deliver**: Master document + actual schema files + executive summary + +Standards: Read actual code, use comparison tables, include diagrams, make it actionable. +``` + +--- + +## Trigger Phrases + +Use these phrases to invoke deep analysis behavior: + +| Phrase | Effect | +|--------|--------| +| "Make a very deep and thoughtful research on..." 
| Triggers comprehensive multi-agent exploration | +| "Produce a master document that will lead to implementation" | Ensures actionable output with roadmap | +| "Investigate with multi-agent exploration" | Parallel exploration of different system areas | +| "Map all representations and identify inconsistencies" | Source of truth analysis with comparison tables | +| "Design a single source of truth architecture" | Unified schema/format design | +| "Include comparison tables and data flow diagrams" | Visual documentation requirements | +| "Make it actionable with implementation roadmap" | Phased planning with priorities | + +--- + +## Example Usage + +### For Architecture Overhaul +``` +Please conduct a deep investigation on how [Component A] and [Component B] +share data. I need a master document that: +- Maps all data representations across the system +- Identifies inconsistencies with comparison tables +- Proposes a unified architecture +- Includes implementation roadmap + +Make it actionable - someone should be able to implement from this document. +``` + +### For Problem Diagnosis +``` +Deep investigation needed: [describe the problem] + +Explore the codebase to understand: +- Where this data/logic currently lives +- How it flows through the system +- What's broken and why + +Produce a report with: +- Root cause analysis +- Data flow diagrams +- Proposed fix with implementation steps +``` + +### For New Feature Design +``` +I want to add [feature]. Before implementing, conduct a deep analysis: + +1. How does similar functionality work in the codebase? +2. What components would be affected? +3. What's the cleanest integration approach? + +Produce a design document with architecture, API specs, and phased roadmap. 
+``` + +--- + +## Output Checklist + +A good deep investigation should produce: + +- [ ] **Architecture diagram** (text-based, showing component relationships) +- [ ] **Data flow diagram** (showing how data moves, where it's transformed) +- [ ] **Comparison tables** (format inconsistencies, naming conflicts) +- [ ] **Problem severity matrix** (Critical/High/Medium/Low ratings) +- [ ] **User pain points** (concrete scenarios, not abstract) +- [ ] **Proposed schema/types** (actual JSON Schema or TypeScript) +- [ ] **API specifications** (endpoints, request/response formats) +- [ ] **Implementation roadmap** (phased, with effort estimates) +- [ ] **Migration strategy** (for existing data/code) +- [ ] **Code examples** (for key components) + +--- + +## Related Documents + +- Example output: `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md` +- Example schema: `optimization_engine/schemas/atomizer_spec_v2.json` + +--- + +*This prompt template was developed during the Atomizer Unified Configuration Architecture investigation (January 2026).* diff --git a/docs/reference/EXECUTION_PLAN_GENERATOR_PROMPT.md b/docs/reference/EXECUTION_PLAN_GENERATOR_PROMPT.md new file mode 100644 index 00000000..8f6ca82c --- /dev/null +++ b/docs/reference/EXECUTION_PLAN_GENERATOR_PROMPT.md @@ -0,0 +1,322 @@ +# Execution Plan & Ralph Loop Generator + +**Purpose**: Instructions for generating comprehensive execution plans with autonomous implementation prompts from any design document. 
+ +--- + +## When to Use + +After completing a deep investigation that produced: +- A master design document (architecture, schemas, component designs) +- Clear understanding of what needs to be built + +Use this to generate: +- Detailed task list with dependencies +- File-level implementation instructions +- Autonomous execution prompts (Ralph Loop) +- Quick reference guides + +--- + +## Generator Prompt + +```markdown +Based on the design document at [PATH_TO_DESIGN_DOC], create a comprehensive execution plan for autonomous implementation. + +### Task Breakdown Requirements + +1. **Granularity**: Break work into tasks that take 15-60 minutes each +2. **Atomic**: Each task should produce a testable/verifiable output +3. **Sequential IDs**: Use format P{phase}.{number} (e.g., P1.1, P1.2, P2.1) +4. **Dependencies**: Explicitly list which tasks must complete first + +### For Each Task, Specify: + +- **Task ID & Name**: P1.1 - Create TypeScript types from JSON Schema +- **File(s)**: Exact file path(s) to create or modify +- **Action**: Specific implementation instructions +- **Reference**: Section of design doc with details/examples +- **Dependencies**: List of prerequisite task IDs +- **Acceptance Criteria**: How to verify task is complete + +### Structure the Plan As: + +1. **Project Overview**: Goal, reference docs, success metrics +2. **Phase Structure**: Table showing phases, duration, focus areas +3. **Implementation Order**: Critical path diagram (text-based) +4. **Phase Sections**: Detailed tasks grouped by phase +5. **File Summary**: Tables of files to create and modify +6. **Success Criteria**: Checkboxes for each phase completion +7. **Risk Mitigation**: Known risks and how to handle them + +### Also Generate: + +1. **Ralph Loop Prompts**: + - Initial prompt to start autonomous execution + - Continuation prompt for resuming + - Phase-specific prompts + - Troubleshooting prompts + - Full task list for TodoWrite initialization + +2. 
**Quick Start Guide**: + - TL;DR summary + - Copy-paste commands to start + - Phase summary table + - Key concepts glossary + - Success metrics checklist + +### Output Files: + +- `{PROJECT}_EXECUTION_PLAN.md` - Detailed task list +- `{PROJECT}_RALPH_PROMPT.md` - Autonomous execution prompts +- `{PROJECT}_QUICKSTART.md` - Quick reference guide +``` + +--- + +## Example Usage + +### Input +``` +Based on the design document at `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md`, +create a comprehensive execution plan for autonomous implementation. + +The project involves: +- Backend services (Python/FastAPI) +- Frontend components (React/TypeScript) +- MCP tools integration +- Database migrations + +Generate the execution plan, Ralph Loop prompts, and quick start guide. +``` + +### Output Structure +``` +docs/plans/ +├── {PROJECT}_EXECUTION_PLAN.md # 50+ detailed tasks +├── {PROJECT}_RALPH_PROMPT.md # Autonomous prompts +└── {PROJECT}_QUICKSTART.md # TL;DR guide +``` + +--- + +## Task Template + +Use this template for each task: + +```markdown +### P{X}.{Y} - {Task Name} +- **File**: `path/to/file.ext` +- **Action**: {What to implement} +- **Reference**: {Design doc section} Section X.Y +- **Dependencies**: P{X}.{Z}, P{X}.{W} +- **Acceptance**: {How to verify completion} +``` + +--- + +## Phase Template + +```markdown +## PHASE {N}: {Phase Name} + +### P{N}.1 - {First Task} +- **File**: `path/to/file` +- **Action**: {Description} +- **Dependencies**: {None or list} +- **Acceptance**: {Criteria} + +### P{N}.2 - {Second Task} +... +``` + +--- + +## Ralph Prompt Template + +```markdown +# Ralph Loop Prompt: {Project Name} + +## Claude CLI Command +\`\`\`bash +claude --dangerously-skip-permissions +\`\`\` + +## Initial Prompt (Copy This) +\`\`\` +You are implementing {Project Description}. + +## Project Context + +Read these documents before starting: +1. `{path/to/design_doc}` - Master design document +2. `{path/to/execution_plan}` - Detailed task list +3. 
`{path/to/schema_or_types}` - Type definitions (if applicable) + +## Your Mission + +Implement the system following the execution plan. Work through tasks in order (P1.1 → P1.2 → ... → P{N}.{M}). + +## Rules + +1. **Follow the plan**: Execute tasks in order specified +2. **Check dependencies**: Don't start until prerequisites complete +3. **Use TodoWrite**: Track all tasks, mark progress +4. **Test as you go**: Verify acceptance criteria +5. **Commit regularly**: After each logical group +6. **Reference the design**: Use code examples from docs +7. **Ask if blocked**: Explain blockers clearly + +Begin by reading the reference documents and loading tasks into TodoWrite. +\`\`\` + +## Continuation Prompt +\`\`\` +Continue implementing {Project Name}. + +Reference: `{path/to/execution_plan}` + +Check current todo list, find last completed task, continue from there. +\`\`\` + +## Full Task List for TodoWrite +\`\`\` +P1.1 - {Task name} +P1.2 - {Task name} +... +\`\`\` +``` + +--- + +## Quick Start Template + +```markdown +# {Project Name} - Quick Start Guide + +## TL;DR + +**Goal**: {One sentence description} + +**Effort**: ~{N} tasks across {M} phases (~{W} weeks) + +**Key Files**: +- Design: `{path}` +- Tasks: `{path}` +- Prompts: `{path}` + +--- + +## Start Autonomous Implementation + +### 1. Open Claude CLI +\`\`\`bash +claude --dangerously-skip-permissions +\`\`\` + +### 2. Paste Initial Prompt +\`\`\` +{Condensed version of initial prompt} +\`\`\` + +### 3. Let It Run + +--- + +## Phase Summary + +| Phase | Tasks | Focus | Deliverables | +|-------|-------|-------|--------------| +| 1 | P1.1-P1.{N} | {Focus} | {Key outputs} | +| 2 | P2.1-P2.{N} | {Focus} | {Key outputs} | +... + +--- + +## Success Metrics + +- [ ] {Metric 1} +- [ ] {Metric 2} +... 
+```
+
+---
+
+## Quality Checklist
+
+Before finalizing the execution plan, verify:
+
+- [ ] Every task has a specific file path
+- [ ] Every task has clear acceptance criteria
+- [ ] Dependencies form a valid DAG (no cycles)
+- [ ] Tasks are ordered so dependencies come first
+- [ ] Phase boundaries make logical sense
+- [ ] Commit points are identified (after groups of related tasks)
+- [ ] Risk mitigations are documented
+- [ ] Ralph prompts reference all necessary documents
+- [ ] Quick start is actually quick (fits on one screen)
+- [ ] Task list for TodoWrite is complete and copy-pasteable
+
+---
+
+## Complexity Guidelines
+
+| Project Size | Tasks | Phases | Typical Duration |
+|--------------|-------|--------|------------------|
+| Small | 10-20 | 2 | 1-2 weeks |
+| Medium | 20-50 | 3-4 | 3-6 weeks |
+| Large | 50-100 | 4-6 | 6-12 weeks |
+| Epic | 100+ | 6+ | 12+ weeks |
+
+---
+
+## Common Patterns
+
+### Backend-First Pattern
+```
+Phase 1: Data models, validation, core service
+Phase 2: REST API endpoints
+Phase 3: Frontend integration
+Phase 4: Testing & polish
+```
+
+### Full-Stack Feature Pattern
+```
+Phase 1: Schema/types, backend service, API
+Phase 2: Frontend components, state management
+Phase 3: Integration, real-time sync
+Phase 4: Testing, documentation
+```
+
+### Migration Pattern
+```
+Phase 1: New system alongside old
+Phase 2: Migration tooling
+Phase 3: Gradual cutover
+Phase 4: Deprecate old system
+```
+
+### Integration Pattern
+```
+Phase 1: Define contracts/interfaces
+Phase 2: Implement adapters
+Phase 3: Wire up connections
+Phase 4: Testing, error handling
+```
+
+---
+
+## Tips for Better Plans
+
+1. **Start with the end**: Define success criteria first, then work backward
+2. **Identify the critical path**: What's the minimum to get something working?
+3. **Group related tasks**: Makes commits logical and rollback easier
+4. **Front-load risky tasks**: Discover blockers early
+5.
**Include buffer**: Things always take longer than expected +6. **Make tasks testable**: "It works" is not acceptance criteria +7. **Reference existing code**: Point to similar patterns in codebase +8. **Consider parallelism**: Some phases can overlap if teams split work + +--- + +*Use this template to generate execution plans for any project following a design document.* diff --git a/docs/reviews/ARCHITECTURE_REVIEW.md b/docs/reviews/ARCHITECTURE_REVIEW.md new file mode 100644 index 00000000..1f871062 --- /dev/null +++ b/docs/reviews/ARCHITECTURE_REVIEW.md @@ -0,0 +1,730 @@ +# Atomizer Architecture Review + +**Date**: January 2026 +**Version**: 2.0 (AtomizerSpec unified configuration) +**Author**: Architecture Review + +--- + +## Executive Summary + +Atomizer is a structural optimization platform that enables engineers to optimize FEA (Finite Element Analysis) designs through a visual canvas interface and AI-powered assistance. The architecture follows a **single source of truth** pattern where all configuration flows through `atomizer_spec.json`. + +### Key Strengths +- **Unified Configuration**: One JSON file defines the entire optimization study +- **Type Safety**: Pydantic validation at every modification point +- **Real-time Collaboration**: WebSocket-based sync between all clients +- **Responsive UI**: Optimistic updates with background synchronization +- **AI Integration**: Claude can modify configurations in Power Mode + +### Architecture Quality Score: **8.5/10** + +| Aspect | Score | Notes | +|--------|-------|-------| +| Data Integrity | 9/10 | Single source of truth, hash-based conflict detection | +| Type Safety | 9/10 | Pydantic models throughout backend | +| Extensibility | 8/10 | Custom extractors, algorithms supported | +| Performance | 8/10 | Optimistic updates, WebSocket streaming | +| Maintainability | 8/10 | Clear separation of concerns | +| Documentation | 7/10 | Good inline docs, needs more high-level guides | + +--- + +## 1. 
Configuration Layer + +### 1.1 AtomizerSpec v2.0 - The Single Source of Truth + +**Location**: `studies/{study_name}/atomizer_spec.json` + +The AtomizerSpec is the heart of Atomizer's configuration. Every component reads from and writes to this single file. + +``` +atomizer_spec.json +├── meta # Study metadata +│ ├── version: "2.0" +│ ├── study_name +│ ├── created_by # canvas | claude | api | migration +│ └── modified_at +├── model # NX model files +│ ├── sim: { path, solver } +│ ├── nx_part: { path } +│ └── fem: { path } +├── design_variables[] # Parameters to optimize +│ ├── id: "dv_001" +│ ├── name, expression_name +│ ├── type: continuous | discrete +│ ├── bounds: { min, max } +│ └── canvas_position +├── extractors[] # Physics result extractors +│ ├── id: "ext_001" +│ ├── type: mass | displacement | stress | zernike | custom +│ ├── config: {} +│ └── outputs: [{ name, metric }] +├── objectives[] # Optimization goals +│ ├── id: "obj_001" +│ ├── direction: minimize | maximize +│ ├── weight +│ └── source: { extractor_id, output_key } +├── constraints[] # Hard/soft constraints +│ ├── id: "con_001" +│ ├── operator: <= | >= | == +│ ├── threshold +│ └── source: { extractor_id, output_key } +├── optimization # Algorithm settings +│ ├── algorithm: { type, config } +│ ├── budget: { max_trials } +│ └── surrogate: { enabled, type } +└── canvas # UI layout state + └── edges[] +``` + +### 1.2 Node ID Convention + +All configurable elements use unique IDs with prefixes: + +| Prefix | Element Type | Example | +|--------|--------------|---------| +| `dv_` | Design Variable | `dv_001`, `dv_002` | +| `ext_` | Extractor | `ext_001`, `ext_002` | +| `obj_` | Objective | `obj_001`, `obj_002` | +| `con_` | Constraint | `con_001`, `con_002` | + +IDs are auto-generated: `{prefix}{max_existing + 1:03d}` + +### 1.3 Legacy Configuration + +**File**: `optimization_config.json` (deprecated) + +Legacy studies may have this file. 
The `SpecMigrator` automatically converts them to AtomizerSpec v2.0 on load. + +```python +from optimization_engine.config.migrator import SpecMigrator +migrator = SpecMigrator(study_dir) +spec = migrator.migrate_file(legacy_path, spec_path) +``` + +--- + +## 2. Frontend Architecture + +### 2.1 Technology Stack + +| Layer | Technology | Purpose | +|-------|------------|---------| +| Build | Vite + TypeScript | Fast bundling, type safety | +| UI | React 18 + TailwindCSS | Component framework | +| State | Zustand | Lightweight global state | +| Canvas | ReactFlow | Graph visualization | +| Communication | Fetch + WebSocket | API + real-time sync | + +### 2.2 Directory Structure + +``` +atomizer-dashboard/frontend/src/ +├── components/ +│ ├── canvas/ # Canvas components +│ │ ├── AtomizerCanvas # Main wrapper +│ │ ├── SpecRenderer # Spec → ReactFlow +│ │ ├── nodes/ # Node type components +│ │ └── panels/ # Side panels +│ └── chat/ # Claude chat UI +├── hooks/ +│ ├── useSpecStore.ts # Central state (Zustand) +│ ├── useChat.ts # Claude integration +│ ├── useCanvasStore.ts # Local canvas state +│ └── useSpecWebSocket.ts # Real-time sync +├── lib/ +│ └── spec/converter.ts # Spec ↔ ReactFlow +├── types/ +│ └── atomizer-spec.ts # TypeScript definitions +└── pages/ + ├── CanvasView.tsx # Main canvas page + ├── Home.tsx # Study selection + └── Setup.tsx # Study wizard +``` + +### 2.3 State Management Architecture + +``` +┌─────────────────────────────────────────────────────────────┐ +│ useSpecStore (Zustand) │ +│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌───────────────┐ │ +│ │ spec │ │ hash │ │ isDirty │ │ selectedNode │ │ +│ └────┬────┘ └────┬────┘ └────┬────┘ └───────┬───────┘ │ +└───────┼────────────┼────────────┼───────────────┼──────────┘ + │ │ │ │ + ▼ ▼ ▼ ▼ + ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────────┐ + │Canvas │ │Conflict │ │ Save │ │ NodeConfig │ + │ Render │ │Detection│ │ Button │ │ Panel │ + └─────────┘ └─────────┘ └─────────┘ └─────────────┘ +``` + +**Key 
Actions**: +- `loadSpec(studyId)` - Fetch spec from backend +- `patchSpec(path, value)` - Update with conflict check +- `patchSpecOptimistic(path, value)` - Fire-and-forget update +- `addNode(type, data)` - Add design var/extractor/etc. +- `removeNode(nodeId)` - Delete with edge cleanup + +### 2.4 Canvas Rendering Pipeline + +``` +AtomizerSpec JSON + │ + ▼ +specToNodes() [converter.ts] + │ + ├──► Model Node (synthetic) + ├──► Solver Node (synthetic) + ├──► DesignVar Nodes × N + ├──► Extractor Nodes × N + ├──► Objective Nodes × N + ├──► Constraint Nodes × N + ├──► Algorithm Node (synthetic) + └──► Surrogate Node (optional) + │ + ▼ +ReactFlow Component + │ + ▼ +Interactive Canvas +``` + +**Layout Constants**: +``` +Design Variables: x = 50 +Model: x = 280 +Solver: x = 510 +Extractors: x = 740 +Objectives: x = 1020 +Constraints: x = 1020 (offset y) +Algorithm: x = 1300 +Surrogate: x = 1530 +Row Height: 100px +``` + +--- + +## 3. Backend Architecture + +### 3.1 Technology Stack + +| Layer | Technology | Purpose | +|-------|------------|---------| +| Framework | FastAPI | Async REST + WebSocket | +| Validation | Pydantic | Schema enforcement | +| Database | SQLite | Trial storage (Optuna schema) | +| LLM | Anthropic Claude | AI assistance | + +### 3.2 Directory Structure + +``` +atomizer-dashboard/backend/api/ +├── main.py # FastAPI app +├── routes/ +│ ├── spec.py # Spec CRUD + WebSocket +│ ├── optimization.py # Run management +│ ├── claude.py # Chat sessions +│ └── files.py # File operations +└── services/ + ├── spec_manager.py # Central spec management + ├── claude_agent.py # Claude with tools + ├── context_builder.py # System prompts + └── session_manager.py # WebSocket sessions +``` + +### 3.3 SpecManager Service + +**The SpecManager is the gatekeeper for all spec modifications.** + +```python +class SpecManager: + def __init__(self, study_path: Path): + self.study_path = study_path + self.spec_file = study_path / "atomizer_spec.json" + self.subscribers: 
List[WebSocket] = [] + + # Loading + def load_spec(self) -> AtomizerSpec + def load_raw(self) -> dict + + # Validation + def validate(self, spec) -> ValidationReport + def validate_semantic(self, spec) -> ValidationReport + + # Modifications + def patch_spec(self, path, value) -> dict + def add_node(self, type, data) -> str + def update_node(self, id, updates) -> None + def remove_node(self, id) -> None + + # Persistence + def save_spec(self, spec) -> dict # Atomic write + hash + def compute_hash(self, spec) -> str + + # Real-time sync + def subscribe(self, ws: WebSocket) + def broadcast(self, message: dict) +``` + +**Modification Flow**: +1. Load current spec +2. Apply modification +3. Validate with Pydantic +4. Atomic write to disk +5. Compute new hash +6. Broadcast to all WebSocket subscribers +7. Return hash + timestamp + +### 3.4 REST API Endpoints + +| Method | Endpoint | Purpose | +|--------|----------|---------| +| GET | `/api/studies/{id}/spec` | Load full spec | +| GET | `/api/studies/{id}/spec/hash` | Get current hash | +| PUT | `/api/studies/{id}/spec` | Replace entire spec | +| PATCH | `/api/studies/{id}/spec` | JSONPath patch | +| POST | `/api/studies/{id}/spec/nodes` | Add node | +| PATCH | `/api/studies/{id}/spec/nodes/{nid}` | Update node | +| DELETE | `/api/studies/{id}/spec/nodes/{nid}` | Remove node | +| POST | `/api/studies/{id}/spec/validate` | Validate spec | +| WS | `/api/studies/{id}/spec/sync` | Real-time sync | + +### 3.5 Conflict Detection + +Uses SHA256 hash of spec content: + +``` +Client A loads spec (hash: abc123) +Client B loads spec (hash: abc123) + +Client A modifies → sends with hash abc123 + Server: hash matches → apply → new hash: def456 + Server: broadcast to all clients + +Client B modifies → sends with hash abc123 + Server: hash mismatch (expected def456) + Server: return 409 Conflict + Client B: reload latest spec +``` + +--- + +## 4. 
Optimization Engine + +### 4.1 Directory Structure + +``` +optimization_engine/ +├── config/ # Configuration management +│ ├── spec_models.py # Pydantic models +│ ├── spec_validator.py # Semantic validation +│ └── migrator.py # Legacy migration +├── extractors/ # Physics extractors +│ ├── extract_displacement.py +│ ├── extract_stress.py +│ ├── extract_mass_*.py +│ ├── extract_zernike*.py +│ └── custom_extractor_loader.py +├── core/ # Optimization algorithms +│ ├── runner.py # Main loop +│ ├── method_selector.py # Algorithm selection +│ └── intelligent_optimizer.py # IMSO +├── nx/ # NX integration +│ ├── solver.py # Nastran execution +│ └── updater.py # Parameter updates +├── study/ # Study management +│ ├── creator.py +│ └── state.py +└── utils/ + ├── dashboard_db.py # Optuna schema + └── trial_manager.py # Trial CRUD +``` + +### 4.2 Extractor Library + +| ID | Type | Function | Inputs | +|----|------|----------|--------| +| E1 | Displacement | Max/RMS displacement | OP2, subcase | +| E2 | Frequency | Eigenvalue | OP2, mode | +| E3 | Stress | Von Mises, principal | OP2, element set | +| E4 | Mass (BDF) | Total mass | BDF file | +| E5 | Mass (Expr) | NX expression | NX session | +| E8-10 | Zernike | OPD polynomial fit | OP2, grid config | + +**Custom Extractor Pattern**: +```python +def extract_volume(op2_path: str) -> Dict[str, float]: + from pyNastran.op2.op2 import OP2 + op2 = OP2() + op2.read_op2(op2_path) + # ... calculation + return {"volume_mm3": calculated_volume} +``` + +### 4.3 Trial Storage + +**Folder Structure**: +``` +studies/{study}/2_iterations/ +├── trial_0001/ +│ ├── params.json # Input parameters +│ ├── results.json # Objectives/constraints +│ ├── _meta.json # Metadata +│ └── *.op2, *.fem # FEA outputs +├── trial_0002/ +└── ... 
+``` + +**Database Schema** (Optuna-compatible SQLite): +```sql +trials (trial_id, study_id, number, state, created_at) +trial_params (trial_id, param_name, param_value) +trial_values (trial_id, objective_id, value) +trial_user_attributes (trial_id, key, value_json) +``` + +--- + +## 5. Claude Integration + +### 5.1 Two Operation Modes + +| Mode | Endpoint | Capabilities | Use Case | +|------|----------|--------------|----------| +| **User** | `/ws` | Read-only, MCP tools | Safe exploration | +| **Power** | `/ws/power` | Full write access | Canvas modification | + +### 5.2 Power Mode Tools + +```python +# claude_agent.py - Direct API tools +add_design_variable(name, min, max, baseline, units) +add_extractor(name, type, config, custom_code) +add_objective(name, direction, weight, extractor_id) +add_constraint(name, operator, threshold, extractor_id) +update_spec_field(path, value) # JSONPath update +remove_node(node_id) +``` + +### 5.3 Context Building + +The `ContextBuilder` assembles rich system prompts: + +``` +# Atomizer Assistant + +## Current Mode: POWER (full write access) + +## Current Study: bracket_optimization +- Design Variables: 3 (thickness, angle, radius) +- Extractors: 2 (Displacement, Mass) +- Objectives: 2 (Min mass, Max stiffness) +- Constraints: 1 (mass <= 0.2 kg) +- Status: 47/100 trials complete + +## Canvas State +8 nodes, 11 edges +[Node list with IDs and types...] + +## Available Tools +- add_design_variable: Add a new design variable +- add_extractor: Add physics extractor +- add_objective: Add optimization objective +- add_constraint: Add constraint +- update_spec_field: Update any field by JSONPath +- remove_node: Remove element by ID + +**ACT IMMEDIATELY** when asked to modify things. +``` + +--- + +## 6. Data Flow Diagrams + +### 6.1 Canvas Edit Flow + +``` +User edits node property + │ + ▼ +┌─────────────────────────────┐ +│ useSpecStore.patchSpec() │ +│ ┌─────────────────────────┐ │ +│ │ 1. 
Optimistic UI update │ │ +│ └───────────┬─────────────┘ │ +│ │ │ +│ ┌───────────▼─────────────┐ │ +│ │ 2. Async PATCH request │ │ +│ └───────────┬─────────────┘ │ +└─────────────┼───────────────┘ + │ + ▼ +┌─────────────────────────────┐ +│ Backend: SpecManager │ +│ ┌─────────────────────────┐ │ +│ │ 3. JSONPath parse │ │ +│ │ 4. Apply modification │ │ +│ │ 5. Pydantic validate │ │ +│ │ 6. Atomic file write │ │ +│ │ 7. Compute new hash │ │ +│ │ 8. Broadcast to clients │ │ +│ └───────────┬─────────────┘ │ +└─────────────┼───────────────┘ + │ + ▼ +┌─────────────────────────────┐ +│ All WebSocket Clients │ +│ ┌─────────────────────────┐ │ +│ │ 9. Receive spec_updated │ │ +│ │ 10. Update local hash │ │ +│ │ 11. Re-render if needed │ │ +│ └─────────────────────────┘ │ +└─────────────────────────────┘ +``` + +### 6.2 Optimization Run Flow + +``` +User clicks "Run" + │ + ▼ +┌─────────────────────────────┐ +│ POST /api/optimization/start│ +│ { study_id, trials, method }│ +└─────────────┬───────────────┘ + │ + ▼ +┌─────────────────────────────┐ +│ Backend: Spawn runner │ +│ ┌─────────────────────────┐ │ +│ │ 1. Load spec │ │ +│ │ 2. Initialize Optuna │ │ +│ │ 3. Create trial folders │ │ +│ └───────────┬─────────────┘ │ +└─────────────┼───────────────┘ + │ + ┌─────────┼─────────┐ + │ │ │ + ▼ ▼ ▼ +┌───────────────────────────────────────────────────────┐ +│ For each trial (1 to N): │ +│ ┌───────────────────────────────────────────────────┐ │ +│ │ 4. Optuna suggests parameters │ │ +│ │ 5. Update NX expressions │ │ +│ │ 6. Run Nastran simulation │ │ +│ │ 7. Extract physics results │ │ +│ │ 8. Compute objectives/constraints │ │ +│ │ 9. Save to trial folder + database │ │ +│ │ 10. 
Send WebSocket update │ │ +│ └─────────────────────────────────────────────────┘ │ +└───────────────────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────┐ +│ Frontend: Real-time updates │ +│ ┌─────────────────────────┐ │ +│ │ Update convergence plot │ │ +│ │ Update trial table │ │ +│ │ Show best design │ │ +│ └─────────────────────────┘ │ +└─────────────────────────────┘ +``` + +### 6.3 Claude Canvas Modification Flow + +``` +User: "Add volume extractor with constraint <= 1000" + │ + ▼ +┌─────────────────────────────┐ +│ WebSocket: /ws/power │ +│ { type: 'message', │ +│ content: '...', │ +│ canvas_state: {...} } │ +└─────────────┬───────────────┘ + │ + ▼ +┌─────────────────────────────┐ +│ AtomizerClaudeAgent │ +│ ┌─────────────────────────┐ │ +│ │ 1. Build context prompt │ │ +│ │ 2. Send to Claude API │ │ +│ │ 3. Claude decides tools │ │ +│ └───────────┬─────────────┘ │ +└─────────────┼───────────────┘ + │ + ▼ +┌─────────────────────────────┐ +│ Claude Tool Calls: │ +│ ┌─────────────────────────┐ │ +│ │ add_extractor( │ │ +│ │ name="Volume", │ │ +│ │ type="custom", │ │ +│ │ code="..." ) │ │ +│ │ │ │ +│ │ add_constraint( │ │ +│ │ name="Max Volume", │ │ +│ │ operator="<=", │ │ +│ │ threshold=1000 ) │ │ +│ └───────────┬─────────────┘ │ +└─────────────┼───────────────┘ + │ + ▼ +┌─────────────────────────────┐ +│ Each tool modifies spec: │ +│ ┌─────────────────────────┐ │ +│ │ Load → Modify → Save │ │ +│ │ Send spec_modified │ │ +│ └───────────┬─────────────┘ │ +└─────────────┼───────────────┘ + │ + ▼ +┌─────────────────────────────┐ +│ Frontend receives events: │ +│ ┌─────────────────────────┐ │ +│ │ spec_modified → reload │ │ +│ │ Canvas shows new nodes │ │ +│ └─────────────────────────┘ │ +└─────────────────────────────┘ +``` + +--- + +## 7. 
Component Relationships
+
+### 7.1 Frontend Component Hierarchy
+
+```
+<App>
+├── <CanvasView>
+│   ├── <AtomizerCanvas>
+│   │   ├── <SpecRenderer>
+│   │   │   ├── <DesignVarNode> × N
+│   │   │   ├── <ExtractorNode> × N
+│   │   │   ├── <ObjectiveNode> × N
+│   │   │   ├── <ConstraintNode> × N
+│   │   │   ├── <ModelNode>
+│   │   │   ├── <SolverNode>
+│   │   │   └── <AlgorithmNode>
+│   │   ├── <NodeConfigPanel>
+│   │   ├── <ValidationPanel>
+│   │   ├── <ExecutePanel>
+│   │   └── <CanvasControls>
+│   └── <ChatPanel>
+│       ├── <Message> × N
+│       └── <ToolCallCard> × M
+├── <Home>
+│   └── <StudyList>
+└── <Setup>
+    └── <SetupWizard>
+```
+
+### 7.2 Backend Service Dependencies
+
+```
+FastAPI App
+│
+├── spec.py ─────────► SpecManager
+│                        ├── Pydantic Models
+│                        └── File I/O + Hash
+│
+├── claude.py ───────► AtomizerClaudeAgent
+│                        ├── ContextBuilder
+│                        ├── Anthropic Client
+│                        └── Write Tools
+│
+├── optimization.py ─► Runner Process
+│                        ├── TrialManager
+│                        ├── NX Solver
+│                        └── Extractors
+│
+└── WebSocket Hub ◄─── All routes broadcast
+```
+
+---
+
+## 8. Critical Patterns
+
+### 8.1 Modification Pattern
+
+**Always use SpecManager for modifications:**
+
+```python
+# ❌ WRONG: Direct file write
+with open("atomizer_spec.json", "w") as f:
+    json.dump(spec, f)
+
+# ✅ CORRECT: Use SpecManager
+manager = SpecManager(study_path)
+spec = manager.load_spec()
+spec.objectives[0].weight = 2.0
+manager.save_spec(spec)
+```
+
+### 8.2 Optimistic Update Pattern
+
+```typescript
+// 1. Update UI immediately
+setSpec(modifiedSpec);
+
+// 2. Async sync to backend
+patchSpec(path, value)
+  .then(({ hash }) => setHash(hash))
+  .catch(() => setSpec(originalSpec)); // Rollback on failure
+```
+
+### 8.3 WebSocket Sync Pattern
+
+```
+Client A ─────► Server ─────► Client B
+   │              │              │
+   │  PATCH       │              │
+   ├─────────────►│              │
+   │              │  broadcast   │
+   │              ├─────────────►│
+   │              │              │
+   │◄─────────────┤◄─────────────┤
+   │  ack + hash  │              │
+```
+
+---
+
+## 9. Potential Improvements
+
+### 9.1 Current Limitations
+
+1. **No Undo/Redo**: Canvas modifications are immediate
+2. **Single File Lock**: No distributed locking for multi-user
+3. **Memory-only Sessions**: Session state lost on restart
+4.
**Limited Offline**: Requires backend connection + +### 9.2 Recommended Enhancements + +| Priority | Enhancement | Benefit | +|----------|-------------|---------| +| High | Operation history with undo | Better UX | +| High | Persistent sessions (Redis) | Scalability | +| Medium | Spec versioning/branching | Experimentation | +| Medium | Batch operations API | Performance | +| Low | Offline canvas editing | Flexibility | + +--- + +## 10. Conclusion + +Atomizer's architecture is **well-designed for its purpose**: enabling engineers to configure and run FEA optimizations through a visual interface with AI assistance. + +**Strongest Points**: +- Single source of truth eliminates sync issues +- Pydantic ensures data integrity +- WebSocket enables real-time collaboration +- Optimistic updates provide responsive UX + +**Areas for Attention**: +- Add undo/redo for canvas operations +- Consider persistent session storage for production +- Expand test coverage for spec migrations + +The architecture is **production-ready** for single-user/small-team scenarios and can be enhanced for enterprise deployment. 
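As a closing illustration, the hash-based conflict handshake from Section 3.5 can be sketched in a few lines of Python. This is a minimal in-memory sketch, not the actual `SpecManager` implementation: the `SpecStore` class, `ConflictError`, and the dotted-path `patch` helper are illustrative names, and the real service additionally performs Pydantic validation, atomic file writes, and WebSocket broadcasting.

```python
import hashlib
import json


class ConflictError(Exception):
    """Raised when a client patches against a stale spec hash (the 409 case)."""

    def __init__(self, current_hash: str):
        super().__init__("spec hash mismatch; reload latest spec")
        self.current_hash = current_hash


class SpecStore:
    """Minimal sketch of SpecManager's hash-based conflict detection."""

    def __init__(self, spec: dict):
        self.spec = spec
        self.hash = self._compute_hash(spec)

    @staticmethod
    def _compute_hash(spec: dict) -> str:
        # Canonical JSON so identical content always yields the same hash.
        canonical = json.dumps(spec, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    def patch(self, expected_hash: str, path: str, value) -> dict:
        # Reject the write if the client's snapshot is stale.
        if expected_hash != self.hash:
            raise ConflictError(self.hash)
        # Walk a dotted path like "optimization.budget.max_trials".
        *parents, leaf = path.split(".")
        node = self.spec
        for key in parents:
            node = node[key]
        node[leaf] = value
        self.hash = self._compute_hash(self.spec)
        return {"hash": self.hash}
```

A client that loaded the spec at hash `h0` can apply one patch successfully; a second patch sent with the same stale `h0` raises `ConflictError`, prompting the client to reload, exactly as in the Client A/Client B scenario above.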
diff --git a/docs/reviews/ATOMIZER_ARCHITECTURE.svg b/docs/reviews/ATOMIZER_ARCHITECTURE.svg
new file mode 100644
index 00000000..dd3e37f1
--- /dev/null
+++ b/docs/reviews/ATOMIZER_ARCHITECTURE.svg
@@ -0,0 +1,519 @@
+[SVG markup elided. Architecture diagram: "ATOMIZER ARCHITECTURE - Intelligence Layout" — AtomizerSpec v2.0, Single Source of Truth pattern, January 2026. Panels: FRONTEND (React + TypeScript), BACKEND (FastAPI + Python), atomizer_spec.json (single source of truth), OPTIMIZATION ENGINE, CLAUDE INTEGRATION, FILE SYSTEM, DATA FLOW LEGEND, CANVAS NODE TYPES (ReactFlow), KEY ARCHITECTURAL PATTERNS, INTELLIGENCE LAYER.]
diff --git a/docs/reviews/COMPONENT_RELATIONSHIPS.md b/docs/reviews/COMPONENT_RELATIONSHIPS.md
new file mode 100644
index 00000000..89d8ce9e
--- /dev/null
+++ b/docs/reviews/COMPONENT_RELATIONSHIPS.md
@@ -0,0 +1,529 @@
+# Atomizer Component Relationships
+
+**Date**: January 2026
+**Version**: 2.0
+
+This document details how Atomizer's components interact with each other.
+
+---
+
+## 1.
Configuration Files Hierarchy + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ atomizer_spec.json (v2.0) │ +│ SINGLE SOURCE OF TRUTH │ +└───────────────────────────┬─────────────────────────────────────┘ + │ + ┌───────────────────┼───────────────────┐ + │ │ │ + ▼ ▼ ▼ +┌───────────────┐ ┌───────────────┐ ┌───────────────┐ +│ Canvas UI │ │ Backend API │ │ Opt Engine │ +│ (ReactFlow) │ │ (FastAPI) │ │ (Python) │ +└───────────────┘ └───────────────┘ └───────────────┘ +``` + +### Relationship Table + +| Component | Reads From | Writes To | Frequency | +|-----------|------------|-----------|-----------| +| Canvas (React) | atomizer_spec.json | atomizer_spec.json | Real-time | +| Backend (FastAPI) | atomizer_spec.json | atomizer_spec.json | Per request | +| Engine (Python) | atomizer_spec.json | study.db | Per trial | +| Claude Agent | atomizer_spec.json | atomizer_spec.json | Per command | + +--- + +## 2. Frontend Store Dependencies + +``` + ┌─────────────────────┐ + │ useSpecStore │ + │ (Zustand) │ + │ │ + │ spec: AtomizerSpec │ + │ hash: string │ + │ isDirty: boolean │ + │ selectedNodeId │ + └──────────┬──────────┘ + │ + ┌───────────────────┼───────────────────┐ + │ │ │ + ▼ ▼ ▼ + ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ + │ CanvasView │ │ NodeConfig │ │ Validation │ + │ │ │ Panel │ │ Panel │ + │ Subscribes: │ │ │ │ │ + │ - spec │ │ Subscribes: │ │ Subscribes: │ + │ - hash │ │ - spec │ │ - validation │ + │ - isDirty │ │ - selected │ │ │ + └──────────────┘ └──────────────┘ └──────────────┘ + + ┌─────────────────────┐ + │ useChat │ + │ (Hook) │ + │ │ + │ messages: Message[] │ + │ isThinking: bool │ + │ mode: user|power │ + │ sessionId: string │ + └──────────┬──────────┘ + │ + ▼ + ┌─────────────────────┐ + │ ChatPanel │ + │ │ + │ Displays messages │ + │ Tool call cards │ + │ Mode toggle │ + └─────────────────────┘ +``` + +### State Flow + +1. **User edits node** → `useSpecStore.patchSpec(path, value)` +2. 
**Optimistic update** → UI immediately reflects change +3. **Async PATCH** → Backend validates and saves +4. **WebSocket broadcast** → All clients receive `spec_updated` +5. **Hash comparison** → Check for conflicts + +--- + +## 3. Backend Service Dependencies + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ FastAPI App │ +└─────────────────────────────────────────────────────────────────┘ + │ + ┌──────────────────────┼──────────────────────┐ + │ │ │ + ▼ ▼ ▼ +┌───────────────┐ ┌───────────────┐ ┌───────────────┐ +│ spec.py │ │ claude.py │ │optimization.py│ +│ (Routes) │ │ (Routes) │ │ (Routes) │ +└───────┬───────┘ └───────┬───────┘ └───────┬───────┘ + │ │ │ + ▼ ▼ ▼ +┌───────────────┐ ┌───────────────┐ ┌───────────────┐ +│ SpecManager │◄─────│ClaudeAgent │ │ Runner │ +│ │ │ │ │ (subprocess) │ +│ - load_spec() │ │ - tools │ │ │ +│ - save_spec() │ │ - chat() │ │ - run() │ +│ - patch_spec()│ │ │ │ - stream() │ +│ - broadcast() │ │ │ │ │ +└───────┬───────┘ └───────┬───────┘ └───────┬───────┘ + │ │ │ + └──────────────────────┼──────────────────────┘ + │ + ▼ + ┌─────────────────────┐ + │ atomizer_spec.json │ + │ (Disk) │ + └─────────────────────┘ +``` + +### Service Responsibilities + +| Service | Responsibility | Dependencies | +|---------|---------------|--------------| +| SpecManager | All spec CRUD, validation, broadcasting | Pydantic models | +| ClaudeAgent | AI chat with tools, context building | Anthropic API, SpecManager | +| ContextBuilder | System prompt assembly | Study data, canvas state | +| SessionManager | WebSocket lifecycle | In-memory store | +| Runner | Optimization loop | NX solver, extractors | + +--- + +## 4. 
Optimization Engine Module Dependencies + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ optimization_engine/ │ +└─────────────────────────────────────────────────────────────────┘ + +┌─────────────┐ ┌─────────────┐ ┌─────────────┐ +│ config/ │ │ core/ │ │ nx/ │ +│ │ │ │ │ │ +│spec_models │◄─────│ runner │─────►│ solver │ +│spec_validator│ │ IMSO │ │ updater │ +│ migrator │ │ gradient │ │ session │ +└──────┬──────┘ └──────┬──────┘ └─────────────┘ + │ │ + │ ┌──────┴──────┐ + │ │ │ + │ ▼ ▼ + │ ┌─────────────┐ ┌─────────────┐ + │ │ extractors/ │ │ surrogates │ + │ │ │ │ │ + │ │displacement │ │ MLP │ + │ │ stress │ │ GNN │ + │ │ zernike │ │ ensemble │ + │ │ custom │ │ │ + │ └─────────────┘ └─────────────┘ + │ + ▼ +┌─────────────┐ ┌─────────────┐ +│ study/ │ │ utils/ │ +│ │ │ │ +│ creator │ │dashboard_db │ +│ state │ │trial_manager│ +│ reset │ │ archiver │ +└─────────────┘ └─────────────┘ +``` + +### Module Interaction Flow + +``` +atomizer_spec.json + │ + ▼ +┌───────────────┐ +│ spec_models │ ← Pydantic validation +└───────┬───────┘ + │ + ▼ +┌───────────────┐ +│ runner │ ← Main optimization loop +└───────┬───────┘ + │ +┌───────┴───────┐ +│ │ +▼ ▼ +┌─────────┐ ┌─────────┐ +│ nx/ │ │extract/ │ +│ solver │ │ors │ +└────┬────┘ └────┬────┘ + │ │ + └─────┬─────┘ + │ + ▼ + ┌─────────────┐ + │ study.db │ ← Trial persistence + └─────────────┘ +``` + +--- + +## 5. 
WebSocket Event Flow + +### 5.1 Spec Sync WebSocket + +``` +Frontend Backend File System + │ │ │ + │ ── PATCH /spec ──────► │ │ + │ │ ── write ──────────────► │ + │ │ ◄── confirm ──────────── │ + │ │ │ + │ ◄── spec_updated ───── │ ← broadcast to all │ + │ │ │ + │ (update local hash) │ │ +``` + +### 5.2 Claude Chat WebSocket + +``` +Frontend Backend Claude API + │ │ │ + │ ── message + state ──► │ │ + │ │ ── prompt ─────────────► │ + │ │ ◄── tool_call ────────── │ + │ ◄── tool_call ──────── │ │ + │ │ (execute tool) │ + │ │ ── tool_result ────────► │ + │ ◄── tool_result ────── │ ◄── response ─────────── │ + │ ◄── text ───────────── │ │ + │ ◄── done ───────────── │ │ + │ │ │ + │ (if spec_modified) │ │ + │ ◄── spec_modified ──── │ │ + │ (reload spec) │ │ +``` + +### 5.3 Optimization Stream WebSocket + +``` +Frontend Backend Runner Process + │ │ │ + │ ── start_opt ────────► │ ── spawn ──────────────► │ + │ │ │ + │ │ ◄── trial_start ──────── │ + │ ◄── trial_start ────── │ │ + │ │ │ + │ │ ◄── trial_complete ───── │ + │ ◄── trial_complete ─── │ │ + │ │ │ + │ (repeat for N trials) │ │ + │ │ │ + │ ◄── optimization_done ─ │ ◄── done ─────────────── │ +``` + +--- + +## 6. 
Data Transformation Pipeline + +### 6.1 Canvas → Spec → ReactFlow + +``` +User interacts with Canvas + │ + ▼ +┌─────────────────────────┐ +│ ReactFlow onChange │ +│ (nodes, edges) │ +└───────────┬─────────────┘ + │ + ▼ +┌─────────────────────────┐ +│ useSpecStore.patch() │ +│ JSONPath modification │ +└───────────┬─────────────┘ + │ + ▼ +┌─────────────────────────┐ +│ Backend SpecManager │ +│ Pydantic validation │ +└───────────┬─────────────┘ + │ + ▼ +┌─────────────────────────┐ +│ atomizer_spec.json │ +│ (persisted) │ +└───────────┬─────────────┘ + │ + ▼ +┌─────────────────────────┐ +│ spec/converter.ts │ +│ specToNodes() │ +└───────────┬─────────────┘ + │ + ▼ +┌─────────────────────────┐ +│ ReactFlow renders │ +│ updated canvas │ +└─────────────────────────┘ +``` + +### 6.2 Spec → Optimization → Results + +``` +atomizer_spec.json + │ + ▼ +┌─────────────────────────┐ +│ Runner.load_config() │ +│ Parse spec structure │ +└───────────┬─────────────┘ + │ + ▼ +┌─────────────────────────┐ +│ Optuna.create_study() │ +│ Initialize sampler │ +└───────────┬─────────────┘ + │ + ┌────────┴────────┐ + │ Trial Loop │ + │ (1 to N) │ + └────────┬────────┘ + │ + ▼ +┌─────────────────────────┐ +│ NX solver.solve() │ +│ Update expressions │ +│ Run Nastran │ +└───────────┬─────────────┘ + │ + ▼ +┌─────────────────────────┐ +│ extractors.extract() │ +│ Read OP2 results │ +└───────────┬─────────────┘ + │ + ▼ +┌─────────────────────────┐ +│ TrialManager.save() │ +│ - params.json │ +│ - results.json │ +│ - study.db INSERT │ +└─────────────────────────┘ +``` + +--- + +## 7. 
Error Handling Chain + +``` +Frontend Error + │ + ├──► Validation Error (UI shows inline) + │ + ├──► Network Error (retry with exponential backoff) + │ + └──► Conflict Error (409) → reload spec → show diff + +Backend Error + │ + ├──► Pydantic ValidationError → 422 with field details + │ + ├──► SpecNotFoundError → 404 + │ + ├──► SpecConflictError → 409 with current hash + │ + └──► Internal Error → 500 with traceback (dev mode) + +Engine Error + │ + ├──► NX Connection Error → retry with backoff + │ + ├──► Solver Error → mark trial FAILED, continue + │ + ├──► Extractor Error → log warning, use fallback + │ + └──► Database Error → rollback transaction +``` + +--- + +## 8. Security Boundaries + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ User Browser │ +│ │ +│ ┌──────────────────────────────────────────────────────────┐ │ +│ │ React Application │ │ +│ │ - No direct file access │ │ +│ │ - All data via REST/WebSocket │ │ +│ └──────────────────────────────────────────────────────────┘ │ +└─────────────────────────────────────────────────────────────────┘ + │ + (HTTP/WS) + │ + ▼ +┌─────────────────────────────────────────────────────────────────┐ +│ FastAPI Backend │ +│ │ +│ ┌──────────────────────────────────────────────────────────┐ │ +│ │ Input Validation │ │ +│ │ - Pydantic schema validation │ │ +│ │ - Path sanitization │ │ +│ │ - Rate limiting (optional) │ │ +│ └──────────────────────────────────────────────────────────┘ │ +│ │ │ +│ ▼ │ +│ ┌──────────────────────────────────────────────────────────┐ │ +│ │ File System Access │ │ +│ │ - Restricted to studies/ directory │ │ +│ │ - No path traversal (../) │ │ +│ └──────────────────────────────────────────────────────────┘ │ +└─────────────────────────────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────────────────────┐ +│ Optimization Engine │ +│ │ +│ - Runs as separate process │ +│ - Limited to study directory │ +│ - NX subprocess sandboxed 
│ +└─────────────────────────────────────────────────────────────────┘ +``` + +--- + +## 9. Deployment Topology + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ Development Environment │ +│ │ +│ Frontend: npm run dev → localhost:5173 │ +│ Backend: uvicorn --reload → localhost:8001 │ +│ NX: Local NX installation │ +│ Claude: Anthropic API (cloud) │ +└─────────────────────────────────────────────────────────────────┘ + +┌─────────────────────────────────────────────────────────────────┐ +│ Production Environment │ +│ │ +│ Frontend: Static files served by Nginx │ +│ Backend: Gunicorn + Uvicorn workers │ +│ NX: Licensed NX server │ +│ Database: SQLite (can upgrade to PostgreSQL) │ +│ Claude: Anthropic API (cloud) │ +└─────────────────────────────────────────────────────────────────┘ +``` + +--- + +## 10. Version Compatibility Matrix + +| Component | Required Version | Notes | +|-----------|-----------------|-------| +| Python | 3.10+ | Type hints, async | +| Node.js | 18+ | ES modules | +| React | 18+ | Concurrent features | +| FastAPI | 0.100+ | Pydantic v2 | +| Pydantic | 2.0+ | New validation API | +| ReactFlow | 11+ | Custom nodes | +| Optuna | 3.0+ | Trial management | +| NX | 12+ (tested 2306) | NX Open API | +| Claude | claude-3-5-sonnet | Tool use | + +--- + +## 11. 
Testing Architecture + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ Test Pyramid │ +└─────────────────────────────────────────────────────────────────┘ + + ┌─────────────┐ + │ E2E │ ← Playwright/Cypress + │ (few) │ Full user flows + └──────┬──────┘ + │ + ┌────────────┴────────────┐ + │ │ + ▼ ▼ + ┌─────────────┐ ┌─────────────┐ + │ Integration │ │ Integration │ + │ (frontend) │ │ (backend) │ + └──────┬──────┘ └──────┬──────┘ + │ │ + ▼ ▼ + ┌─────────────┐ ┌─────────────┐ + │ Unit │ │ Unit │ + │ (React) │ │ (Python) │ + │ Jest/Vitest│ │ pytest │ + └─────────────┘ └─────────────┘ +``` + +### Test Coverage Goals + +| Layer | Target | Current | +|-------|--------|---------| +| Unit (Python) | 80% | ~70% | +| Unit (React) | 70% | ~50% | +| Integration | 60% | ~40% | +| E2E | Key flows | Manual | + +--- + +## Summary + +Atomizer's architecture is built on these core principles: + +1. **Single Source of Truth**: All configuration in `atomizer_spec.json` +2. **Type Safety**: Pydantic on backend, TypeScript on frontend +3. **Real-time Sync**: WebSocket broadcast for multi-client coordination +4. **Optimistic Updates**: Responsive UI with async persistence +5. **Modular Engine**: Pluggable extractors, algorithms, surrogates + +The architecture is robust for its intended use case (engineering optimization) and can scale horizontally by: +- Adding more backend workers +- Sharding studies across directories +- Upgrading to distributed database (PostgreSQL) +- Adding Redis for session state diff --git a/studies/M1_Mirror/README.md b/studies/M1_Mirror/README.md index 2f024c8a..4fba28f6 100644 --- a/studies/M1_Mirror/README.md +++ b/studies/M1_Mirror/README.md @@ -249,7 +249,7 @@ V6 (TPE) → V7 (CMA-ES whiffle) → V8 (CMA-ES lateral) → V9 (CMA-ES co > **Important**: `abs(RMS_target - RMS_ref)` is WRONG. Always use `extract_relative()` for relative metrics. 
-See: [docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md](../../docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md) +See: [docs/physics/ZERNIKE_OPD_METHOD.md](../../docs/physics/ZERNIKE_OPD_METHOD.md) ### 8.2 WFE Convention @@ -323,12 +323,12 @@ python -m optimization_engine.insights generate . --type zernike_opd_comparison | Document | Description | |----------|-------------| | [M1_MIRROR_CAMPAIGN_SUMMARY.md](m1_mirror_adaptive_V15/M1_MIRROR_CAMPAIGN_SUMMARY.md) | V11-V15 campaign analysis | -| [docs/06_PHYSICS/ZERNIKE_FUNDAMENTALS.md](../../docs/06_PHYSICS/ZERNIKE_FUNDAMENTALS.md) | Zernike analysis basics | -| [docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md](../../docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md) | OPD method for lateral supports | +| [docs/physics/ZERNIKE_FUNDAMENTALS.md](../../docs/physics/ZERNIKE_FUNDAMENTALS.md) | Zernike analysis basics | +| [docs/physics/ZERNIKE_OPD_METHOD.md](../../docs/physics/ZERNIKE_OPD_METHOD.md) | OPD method for lateral supports | | [.claude/skills/modules/extractors-catalog.md](../../.claude/skills/modules/extractors-catalog.md) | Extractor quick reference | --- *M1 Mirror Optimization Project* *Atomizer Framework* -*Last Updated: 2025-12-23* +*Last Updated: 2026-01-20*
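
The conflict-detection pattern described in COMPONENT_RELATIONSHIPS.md (SHA256 hash comparison, `SpecConflictError` → 409 Conflict → reload) can be sketched as follows. This is an illustrative sketch only, not the actual `SpecManager` implementation: the function names `spec_hash` and `patch_spec`, and the canonical-JSON hashing via `sort_keys=True`, are assumptions for the example.

```python
import hashlib
import json


def spec_hash(spec: dict) -> str:
    """Hash a spec deterministically (sorted keys, compact separators)
    so that semantically identical specs always produce the same digest."""
    canonical = json.dumps(spec, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


class SpecConflictError(Exception):
    """Raised when the client's hash is stale (maps to HTTP 409)."""


def patch_spec(current: dict, expected_hash: str, field: str, value):
    """Apply a field update only if the caller saw the latest version.

    On a hash mismatch the client is expected to reload the spec and
    retry — the optimistic-update flow described in section 2.
    """
    if spec_hash(current) != expected_hash:
        raise SpecConflictError("spec changed since client last loaded it")
    updated = dict(current)
    updated[field] = value
    return updated, spec_hash(updated)
```

A stale client (one holding an outdated hash) gets `SpecConflictError` instead of silently overwriting another client's change, which is what makes the multi-client WebSocket broadcast safe.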