Experimental LLM Features (Archived)

Status: Archived for post-MVP development
Date Archived: November 24, 2025

Purpose

This directory contains experimental LLM integration code that was explored during early development phases. These features are archived (not deleted) for potential future use after the MVP is stable and shipped.

MVP LLM Integration Strategy

For the MVP, LLM integration is achieved through:

  • Claude Code Development Assistant: Interactive development-time assistance
  • Claude Skills (.claude/skills/):
    • create-study.md - Interactive study scaffolding
    • analyze-workflow.md - Workflow classification and analysis

This approach provides LLM assistance without adding runtime dependencies or complexity to the core optimization engine.

Archived Experimental Files

1. llm_optimization_runner.py

Experimental runner that makes runtime LLM API calls during optimization. It attempted to automate:

  • Extractor generation
  • Inline calculations
  • Post-processing hooks

Why Archived: Adds runtime dependencies, API costs, and complexity. The centralized extractor library (optimization_engine/extractors/) provides better maintainability.
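To illustrate the centralized-library approach that replaced runtime generation, here is a minimal sketch of an extractor. The base-class shape and the `DisplacementExtractor` name are assumptions for illustration; the real interface lives in optimization_engine/extractors/base.py and may differ.

```python
from abc import ABC, abstractmethod

# Assumed base-class shape; the actual base.py interface may differ.
class BaseExtractor(ABC):
    @abstractmethod
    def extract(self, solver_output: dict) -> float:
        """Pull one scalar metric from a solver result."""

class DisplacementExtractor(BaseExtractor):
    """Hypothetical extractor: peak nodal displacement from a result dict."""
    def extract(self, solver_output: dict) -> float:
        return max(solver_output["nodal_displacements"])

extractor = DisplacementExtractor()
peak = extractor.extract({"nodal_displacements": [0.1, 0.4, 0.2]})
```

Because each extractor is a small, hand-written class in one place, it can be reviewed and unit-tested like any other code, which is the maintainability argument above.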

2. llm_workflow_analyzer.py

LLM-based workflow analysis for automated study setup.

Why Archived: The analyze-workflow Claude skill provides the same functionality through development-time assistance, without runtime overhead.

3. inline_code_generator.py

Auto-generates inline Python calculations from natural language.

Why Archived: Manual calculation definition in optimization_config.json is clearer and more maintainable for MVP.
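As a sketch of what a manual inline calculation might look like, the fragment below parses a hypothetical optimization_config.json entry and evaluates it. The key names (`inline_calculations`, `name`, `expression`) are assumptions for illustration, not the actual schema.

```python
import json

# Hypothetical config fragment; key names are illustrative assumptions.
config_text = """
{
  "inline_calculations": [
    {"name": "stress_margin", "expression": "yield_strength / max_stress"}
  ]
}
"""

config = json.loads(config_text)
values = {"yield_strength": 250.0, "max_stress": 125.0}
for calc in config["inline_calculations"]:
    # Evaluate over a restricted namespace so only extracted values are visible.
    values[calc["name"]] = eval(calc["expression"], {"__builtins__": {}}, values)

print(values["stress_margin"])  # 2.0
```

A reviewer can read the expression directly in the config file, which is the clarity argument above.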

4. hook_generator.py

Auto-generates post-processing hooks from natural language descriptions.

Why Archived: The plugin system (optimization_engine/plugins/) with manual hook definition is more robust and debuggable.
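A minimal sketch of manual hook definition in the register/fire style is shown below. The `HookManager` API here is an assumption for illustration; the real one is in optimization_engine/plugins/hook_manager.py and may differ.

```python
# Assumed hook-manager shape; the real plugins/hook_manager.py may differ.
class HookManager:
    def __init__(self):
        self._hooks = {}

    def register(self, event: str, fn):
        self._hooks.setdefault(event, []).append(fn)

    def fire(self, event: str, payload: dict):
        for fn in self._hooks.get(event, []):
            fn(payload)

def log_iteration(payload: dict):
    """Hypothetical hand-written hook: mark the iteration as logged."""
    payload["logged"] = True

manager = HookManager()
manager.register("post_iteration", log_iteration)

result = {"iteration": 1, "objective": 12.5}
manager.fire("post_iteration", result)
```

Because every hook is explicit, a failing hook can be stepped through in a debugger, which is the robustness argument above.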

5. report_generator.py

LLM-based report generation from optimization results.

Why Archived: Dashboard provides rich visualizations. LLM summaries can be added post-MVP if needed.

6. extractor_orchestrator.py

Orchestrates LLM-based extractor generation and management.

Why Archived: Centralized extractor library (optimization_engine/extractors/) is the production approach. No code generation needed at runtime.

When to Revisit

Consider reviving these experimental features after MVP if:

  1. MVP is stable and well-tested
  2. Users request more automation
  3. Core architecture is mature enough to support optional LLM features
  4. There is clear ROI on LLM API costs versus manual configuration time

Production Architecture (MVP)

For reference, the stable production components are:

optimization_engine/
├── runner.py                    # Production optimization runner
├── extractors/                  # Centralized extractor library
│   ├── __init__.py
│   ├── base.py
│   ├── displacement.py
│   ├── stress.py
│   ├── frequency.py
│   └── mass.py
├── plugins/                     # Plugin system (hooks)
│   ├── __init__.py
│   └── hook_manager.py
├── nx_solver.py                 # NX simulation interface
├── nx_updater.py                # NX expression updates
└── visualizer.py                # Result plotting

.claude/skills/                  # Claude Code skills
├── create-study.md              # Interactive study creation
└── analyze-workflow.md          # Workflow analysis

Migration Notes

If you need to use any of these experimental files:

  1. They are functional but not maintained
  2. Update imports to optimization_engine.future.{module_name}
  3. Install any additional dependencies (LLM client libraries)
  4. Be aware of API costs for LLM calls
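One way to follow steps 2–3 without making the LLM client libraries a hard dependency is a guarded import, sketched below. The class name `LLMWorkflowAnalyzer` is hypothetical, used only to illustrate the pattern.

```python
# Guarded import: archived modules live under optimization_engine.future.
# LLMWorkflowAnalyzer is a hypothetical class name for illustration.
try:
    from optimization_engine.future.llm_workflow_analyzer import LLMWorkflowAnalyzer
except ImportError:
    # The future package or its LLM client dependencies are not installed;
    # treat the experimental feature as unavailable.
    LLMWorkflowAnalyzer = None

if LLMWorkflowAnalyzer is None:
    print("LLM workflow analysis unavailable; falling back to manual configuration")
```

This keeps the core engine importable even when the optional dependencies from step 3 are missing.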

Questions?

For MVP development questions, refer to the DEVELOPMENT.md guide or the MVP plan in docs/07_DEVELOPMENT/Today_Todo.md.