Atomizer/optimization_engine/future/README.md
Anto01 d228ccec66 refactor: Archive experimental LLM features for MVP stability (Phase 1.1)
Moved experimental LLM integration code to optimization_engine/future/:
- llm_optimization_runner.py - Runtime LLM API runner
- llm_workflow_analyzer.py - Workflow analysis
- inline_code_generator.py - Auto-generate calculations
- hook_generator.py - Auto-generate hooks
- report_generator.py - LLM report generation
- extractor_orchestrator.py - Extractor orchestration

Added comprehensive optimization_engine/future/README.md explaining:
- MVP LLM strategy (Claude Code skills, not runtime LLM)
- Why files were archived
- When to revisit post-MVP
- Production architecture reference

Production runner confirmed: optimization_engine/runner.py is sole active runner.

This establishes clear separation between:
- Production code (stable, no runtime LLM dependencies)
- Experimental code (archived for post-MVP exploration)

Part of Phase 1: Core Stabilization & Organization for MVP

Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-24 09:12:36 -05:00


# Experimental LLM Features (Archived)

**Status:** Archived for post-MVP development
**Date Archived:** November 24, 2025

## Purpose

This directory contains experimental LLM integration code that was explored during early development phases. These features are archived (not deleted) for potential future use after the MVP is stable and shipped.

## MVP LLM Integration Strategy

For the MVP, LLM integration is achieved through:

- **Claude Code Development Assistant**: interactive development-time assistance
- **Claude Skills** (`.claude/skills/`):
  - `create-study.md` - interactive study scaffolding
  - `analyze-workflow.md` - workflow classification and analysis

This approach provides LLM assistance without adding runtime dependencies or complexity to the core optimization engine.

## Archived Experimental Files

### 1. `llm_optimization_runner.py`

Experimental runner that makes runtime LLM API calls during optimization. This attempted to automate:

- Extractor generation
- Inline calculations
- Post-processing hooks

**Why Archived:** Adds runtime dependencies, API costs, and complexity. The centralized extractor library (`optimization_engine/extractors/`) provides better maintainability.
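The centralized-library pattern referred to above can be sketched as follows. The class and method names here are hypothetical stand-ins, not the actual API in `optimization_engine/extractors/base.py`:

```python
from abc import ABC, abstractmethod


class ResultExtractor(ABC):
    """Pulls a single scalar metric out of raw solver results."""

    # Key under which the metric is reported (hypothetical convention)
    name: str = ""

    @abstractmethod
    def extract(self, raw_results: dict) -> float:
        """Return the metric value from one solver run."""


class MaxDisplacementExtractor(ResultExtractor):
    name = "max_displacement"

    def extract(self, raw_results: dict) -> float:
        # Assumes the solver reports nodal displacements under this key
        return max(raw_results["nodal_displacements"])
```

With a hand-written library like this, each extractor is reviewed and versioned like any other code, whereas LLM-generated extractors would have to be validated at runtime on every optimization run.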

### 2. `llm_workflow_analyzer.py`

LLM-based workflow analysis for automated study setup.

**Why Archived:** The `analyze-workflow` Claude skill provides the same functionality through development-time assistance, without runtime overhead.

### 3. `inline_code_generator.py`

Auto-generates inline Python calculations from natural language.

**Why Archived:** Manual calculation definition in `optimization_config.json` is clearer and more maintainable for the MVP.
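As a minimal sketch of the manual approach, an inline calculation might be declared in the config and evaluated against extracted metrics. The `calculations` schema and `run_calculation` helper below are hypothetical, not the actual `optimization_config.json` format:

```python
import json

# Hypothetical optimization_config.json fragment with a manually
# defined inline calculation (the real schema may differ)
config = json.loads("""
{
  "calculations": [
    {"name": "stress_ratio",
     "expression": "max_stress / yield_strength"}
  ]
}
""")


def run_calculation(calc: dict, metrics: dict) -> float:
    # eval() is restricted to the extracted metrics; no builtins exposed
    return eval(calc["expression"], {"__builtins__": {}}, metrics)


metrics = {"max_stress": 180.0, "yield_strength": 250.0}
value = run_calculation(config["calculations"][0], metrics)  # 0.72
```

The key point is that the expression is written and reviewed by a person once, rather than regenerated from natural language on every run.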

### 4. `hook_generator.py`

Auto-generates post-processing hooks from natural language descriptions.

**Why Archived:** The plugin system (`optimization_engine/plugins/`) with manual hook definition is more robust and debuggable.
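To illustrate the manual-hook approach, a minimal event-based hook registry could look like the following. The function names are hypothetical and do not reflect the actual `hook_manager.py` API:

```python
from typing import Callable, Dict, List

_hooks: Dict[str, List[Callable]] = {}


def register_hook(event: str):
    """Decorator that registers a hook function for a named event."""
    def wrapper(fn: Callable) -> Callable:
        _hooks.setdefault(event, []).append(fn)
        return fn
    return wrapper


def fire(event: str, payload: dict) -> dict:
    """Run every hook registered for the event, in registration order."""
    for fn in _hooks.get(event, []):
        payload = fn(payload)
    return payload


@register_hook("post_iteration")
def clamp_negative_mass(payload: dict) -> dict:
    # Example manually written hook: guard against nonphysical values
    payload["mass"] = max(payload["mass"], 0.0)
    return payload
```

Because each hook is an ordinary, named Python function, it can be unit-tested and stepped through in a debugger, which is exactly what generated code makes difficult.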

### 5. `report_generator.py`

LLM-based report generation from optimization results.

**Why Archived:** The dashboard provides rich visualizations; LLM-generated summaries can be added post-MVP if needed.

### 6. `extractor_orchestrator.py`

Orchestrates LLM-based extractor generation and management.

**Why Archived:** The centralized extractor library (`optimization_engine/extractors/`) is the production approach; no code generation is needed at runtime.
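To show why no orchestration or code generation is needed at runtime, a study can simply reference extractors by name and resolve them through a registry. The registry below is a hypothetical illustration, not the contents of `extractors/__init__.py`:

```python
# Hypothetical name-to-extractor registry; a study's config lists
# extractor names, and the runner looks them up here at load time.
EXTRACTORS = {
    "max_displacement": lambda r: max(r["nodal_displacements"]),
    "total_mass": lambda r: sum(r["element_masses"]),
}


def get_extractor(name: str):
    """Resolve an extractor by name, failing loudly on typos."""
    try:
        return EXTRACTORS[name]
    except KeyError:
        raise ValueError(f"unknown extractor: {name}") from None
```

A dictionary lookup replaces an entire LLM round trip: the "orchestration" collapses to resolving a name that was validated when the extractor was written.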

## When to Revisit

Consider reviving these experimental features after MVP if:

  1. MVP is stable and well-tested
  2. Users request more automation
  3. Core architecture is mature enough to support optional LLM features
  4. There is clear ROI on LLM API costs versus manual configuration time

## Production Architecture (MVP)

For reference, the stable production components are:

```
optimization_engine/
├── runner.py                    # Production optimization runner
├── extractors/                  # Centralized extractor library
│   ├── __init__.py
│   ├── base.py
│   ├── displacement.py
│   ├── stress.py
│   ├── frequency.py
│   └── mass.py
├── plugins/                     # Plugin system (hooks)
│   ├── __init__.py
│   └── hook_manager.py
├── nx_solver.py                 # NX simulation interface
├── nx_updater.py                # NX expression updates
└── visualizer.py                # Result plotting

.claude/skills/                  # Claude Code skills
├── create-study.md              # Interactive study creation
└── analyze-workflow.md          # Workflow analysis
```

## Migration Notes

If you need to use any of these experimental files:

  1. They are functional but not maintained
  2. Update imports to `optimization_engine.future.{module_name}`
  3. Install any additional dependencies (LLM client libraries)
  4. Be aware of API costs for LLM calls
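Step 2 might look like the following, with a guard for the optional LLM client dependencies from step 3; `load_future_module` is a hypothetical helper, not part of the codebase:

```python
import importlib


def load_future_module(name: str):
    """Import optimization_engine.future.<name>, returning None when
    the module or its optional LLM dependencies are not installed."""
    try:
        return importlib.import_module(f"optimization_engine.future.{name}")
    except ImportError:
        return None


# Returns None unless the archived package and its LLM client
# libraries are actually available in the environment
analyzer = load_future_module("llm_workflow_analyzer")
```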

## Questions?

For MVP development questions, refer to the `DEVELOPMENT.md` guide or the MVP plan in `docs/07_DEVELOPMENT/Today_Todo.md`.