# Source: Atomizer/optimization_engine/future/llm_optimization_runner.py
"""
LLM-Enhanced Optimization Runner - Phase 3.2
Flexible LLM-enhanced optimization runner that integrates:
- Phase 2.7: LLM workflow analysis
- Phase 2.8: Inline code generation (optional)
- Phase 2.9: Post-processing hook generation (optional)
- Phase 3.0: pyNastran research agent (optional)
- Phase 3.1: Extractor orchestration (optional)
This runner enables users to describe optimization goals in natural language
and choose to leverage automated code generation, manual coding, or a hybrid approach.
Author: Atomizer Development Team
Version: 0.1.0 (Phase 3.2)
Last Updated: 2025-01-16
"""
import json
import logging
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List, Optional

import optuna

from optimization_engine.extractor_orchestrator import ExtractorOrchestrator
from optimization_engine.inline_code_generator import InlineCodeGenerator
from optimization_engine.hook_generator import HookGenerator
from optimization_engine.plugins.hook_manager import HookManager

logger = logging.getLogger(__name__)
class LLMOptimizationRunner:
    """
    LLM-enhanced optimization runner with flexible automation options.

    This runner empowers users to leverage LLM-assisted code generation for:
    - OP2 result extractors (Phase 3.1) - optional
    - Inline calculations (Phase 2.8) - optional
    - Post-processing hooks (Phase 2.9) - optional

    Users can describe goals in natural language and choose automated generation,
    manual coding, or a hybrid approach based on their needs.
    """
    def __init__(self,
                 llm_workflow: Dict[str, Any],
                 model_updater: callable,
                 simulation_runner: callable,
                 study_name: str = "llm_optimization",
                 output_dir: Optional[Path] = None):
        """
        Initialize the LLM-driven optimization runner.

        Args:
            llm_workflow: Output from Phase 2.7 LLM analysis with:
                - engineering_features: List of FEA operations
                - inline_calculations: List of simple math operations
                - post_processing_hooks: List of custom calculations
                - optimization: Dict with algorithm, design_variables, etc.
            model_updater: Function(design_vars: Dict) -> None
                Updates NX expressions in the CAD model and saves changes.
            simulation_runner: Function(design_vars: Dict) -> Path
                Runs the FEM simulation with the updated design variables
                and returns the path to the OP2 results file.
            study_name: Name for the Optuna study.
            output_dir: Directory for results (defaults to
                ./optimization_results/<study_name>).
        """
        self.llm_workflow = llm_workflow
        self.model_updater = model_updater
        self.simulation_runner = simulation_runner
        self.study_name = study_name

        if output_dir is None:
            output_dir = Path.cwd() / "optimization_results" / study_name
        self.output_dir = Path(output_dir)
        self.output_dir.mkdir(parents=True, exist_ok=True)

        # Save the LLM workflow configuration for transparency and documentation
        workflow_config_file = self.output_dir / "llm_workflow_config.json"
        with open(workflow_config_file, 'w') as f:
            json.dump(llm_workflow, f, indent=2)
        logger.info(f"LLM workflow configuration saved to: {workflow_config_file}")

        # Initialize automation components
        self._initialize_automation()

        # Optuna study
        self.study = None
        self.history = []

        logger.info(f"LLMOptimizationRunner initialized for study: {study_name}")
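The docstring above pins down the runner's two interface contracts: `model_updater` takes a dict of design variables and returns nothing, while `simulation_runner` takes the same dict and returns the OP2 results path. A minimal sketch of how a caller might satisfy them follows; the workflow fields mirror the Phase 2.7 structure described in the docstring, but every concrete name and value below (expression names, bounds, file paths) is illustrative, not the project's actual API:

```python
import tempfile
from pathlib import Path


def model_updater(design_vars: dict) -> None:
    """Push new design-variable values into the CAD model.

    In the real workflow this updates NX expressions and saves the .prt
    file; this stub just records the values (illustrative only).
    """
    print(f"Updating model expressions: {design_vars}")


def simulation_runner(design_vars: dict) -> Path:
    """Run the FEM solve and return the OP2 results path.

    Stubbed out: writes a placeholder file instead of invoking the solver.
    """
    out = Path(tempfile.gettempdir()) / "trial_results.op2"
    out.write_bytes(b"")  # placeholder for real solver output
    return out


# A minimal Phase 2.7-style workflow dict with the fields the runner expects
llm_workflow = {
    "engineering_features": [{"type": "displacement", "component": "magnitude"}],
    "inline_calculations": [],
    "post_processing_hooks": [],
    "optimization": {
        "algorithm": "TPE",
        "design_variables": [
            {"name": "thickness", "low": 2.0, "high": 10.0},
        ],
    },
}

# With those pieces in place, construction would look like:
# runner = LLMOptimizationRunner(
#     llm_workflow=llm_workflow,
#     model_updater=model_updater,
#     simulation_runner=simulation_runner,
#     study_name="bracket_demo",
# )
```

Note that the runner itself never inspects what the callables do internally; it only relies on the signatures, which is what lets the same runner drive a live NX/Nastran loop or a stubbed test environment.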
    def _initialize_automation(self):
        """Initialize all automation components from the LLM workflow."""
        logger.info("Initializing automation components...")

        # Phase 3.1: Extractor Orchestrator (new architecture)
        logger.info(" - Phase 3.1: Extractor Orchestrator")

        # Pass output_dir only for the manifest file; generated extractors
        # live in the centralized core library.
        self.orchestrator = ExtractorOrchestrator(
            extractors_dir=self.output_dir,  # Only for the manifest file
            use_core_library=True  # Enable the centralized library
        )

        # Generate extractors from the LLM workflow (stored in the core library)
self.extractors = self.orchestrator.process_llm_workflow(self.llm_workflow)
logger.info(f" {len(self.extractors)} extractor(s) available from core library")
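The commit notes describe signature-based deduplication as the mechanism that keeps the core extractor library free of duplicates. A minimal, self-contained sketch of that idea, assuming extractors are compared by a hash of their whitespace-normalized source (the helper names `compute_signature` and `register_extractor` are illustrative, not the actual `ExtractorLibrary` API):

```python
import hashlib


def compute_signature(source: str) -> str:
    """Hash normalized source so textually-identical extractors share one entry."""
    normalized = "\n".join(line.strip() for line in source.splitlines() if line.strip())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


def register_extractor(catalog: dict, name: str, source: str) -> bool:
    """Add an extractor to the catalog only if its signature is new."""
    sig = compute_signature(source)
    if sig in catalog:
        return False  # duplicate: the study can reference the existing library entry
    catalog[sig] = name
    return True
```

With this scheme, two studies that request the same extractor body (up to whitespace) resolve to a single catalog entry, which is what lets study folders hold only a manifest of references.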
# Phase 2.8: Inline Code Generator
logger.info(" - Phase 2.8: Inline Code Generator")
self.inline_generator = InlineCodeGenerator()
self.inline_code = []
for calc in self.llm_workflow.get('inline_calculations', []):
    generated = self.inline_generator.generate_from_llm_output(calc)
    self.inline_code.append(generated.code)
logger.info(f" Generated {len(self.inline_code)} inline calculation(s)")
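A generate-from-spec step like the loop above can be pictured as rendering a small calculation spec into executable source. This is a hedged sketch only; the spec fields (`name`, `expression`) and the returned plain string are assumptions, not the real `InlineCodeGenerator` output format:

```python
def generate_inline_code(calc: dict) -> str:
    """Render a calculation spec {'name', 'expression'} into a tiny function."""
    name = calc["name"]
    expression = calc["expression"]  # assumed spec fields, for illustration
    return (
        f"def {name}(results):\n"
        f"    # auto-generated inline calculation\n"
        f"    return {expression}\n"
    )
```

The generated string can then be `exec`'d into a namespace and called against extracted results during post-processing.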
# Phase 2.9: Hook Generator (TODO: Should also use centralized library in future)
logger.info(" - Phase 2.9: Hook Generator")
self.hook_generator = HookGenerator()
# For now, hooks are not generated per-study unless they're truly custom
# Most hooks should be in the core library (optimization_engine/hooks/)
post_processing_hooks = self.llm_workflow.get('post_processing_hooks', [])
if post_processing_hooks:
    logger.info(f" Note: {len(post_processing_hooks)} custom hooks requested")
    logger.info(" Future: These should also use centralized library")
    # TODO: Implement hook library system similar to extractors
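Once hooks migrate to the same library pattern, a study folder would keep only a manifest of references, mirroring extractors_manifest.json. A sketch under that assumption (field names and the `optimization_engine/hooks/` path are hypothetical, not a shipped format):

```python
import json


def build_hooks_manifest(study_name: str, hooks: dict) -> str:
    """Serialize library references (signature -> hook name) for a study folder."""
    manifest = {
        "study": study_name,
        "hooks": [
            {
                "name": name,
                "signature": sig,
                # assumed library location, mirroring optimization_engine/extractors/
                "library_path": f"optimization_engine/hooks/{name}.py",
            }
            for sig, name in sorted(hooks.items())
        ],
    }
    return json.dumps(manifest, indent=2)
```

This keeps the "studies = data, core = code" separation: the study stores JSON metadata while the hook code lives once in the core library.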
# Phase 1: Hook Manager
logger.info(" - Phase 1: Hook Manager")
self.hook_manager = HookManager()
# Load system hooks from core library
        system_hooks_dir = Path(__file__).parent / 'plugins'
        if system_hooks_dir.exists():
            self.hook_manager.load_plugins_from_directory(system_hooks_dir)

        summary = self.hook_manager.get_summary()
        logger.info(f" Loaded {summary['enabled_hooks']} hook(s) from core library")
        logger.info("Automation components initialized successfully!")
    def _create_optuna_study(self) -> optuna.Study:
        """Create Optuna study from LLM workflow optimization settings."""
        opt_config = self.llm_workflow.get('optimization', {})

        # Determine direction (minimize or maximize)
        direction = opt_config.get('direction', 'minimize')

        # Create study
        study = optuna.create_study(
            study_name=self.study_name,
            direction=direction,
            storage=f"sqlite:///{self.output_dir / f'{self.study_name}.db'}",
            load_if_exists=True
        )

        logger.info(f"Created Optuna study: {self.study_name} (direction: {direction})")
        return study
    def _objective(self, trial: optuna.Trial) -> float:
        """
        Optuna objective function - LLM-enhanced with flexible automation!

        This function leverages LLM workflow analysis with user-configurable automation:
        1. Suggests design variables from LLM analysis
        2. Updates model
        3. Runs simulation
        4. Extracts results (using generated or manual extractors)
        5. Executes inline calculations (generated or manual)
        6. Executes post-calculation hooks (generated or manual)
        7. Returns objective value

        Args:
            trial: Optuna trial

        Returns:
            Objective value
        """
        trial_number = trial.number
        logger.info(f"\n{'='*80}")
        logger.info(f"Trial {trial_number} starting...")
        logger.info(f"{'='*80}")

        # ====================================================================
        # STEP 1: Suggest Design Variables
        # ====================================================================
        design_vars_config = self.llm_workflow.get('optimization', {}).get('design_variables', [])
        design_vars = {}

        for var_config in design_vars_config:
            var_name = var_config['parameter']

            # Parse bounds - LLM returns 'bounds' as [min, max]
            if 'bounds' in var_config:
                var_min, var_max = var_config['bounds']
            else:
                # Fallback to old format
                var_min = var_config.get('min', 0.0)
                var_max = var_config.get('max', 1.0)
            # Suggest value using Optuna
            design_vars[var_name] = trial.suggest_float(var_name, var_min, var_max)

        logger.info(f"Design variables: {design_vars}")

        # Execute pre-solve hooks
        self.hook_manager.execute_hooks('pre_solve', {
            'trial_number': trial_number,
            'design_variables': design_vars
        })

        # ====================================================================
        # STEP 2: Update Model
        # ====================================================================
        logger.info("Updating model...")
        self.model_updater(design_vars)

        # ====================================================================
        # STEP 3: Run Simulation
        # ====================================================================
        logger.info("Running simulation...")
        # NOTE: We do NOT pass design_vars to simulation_runner because:
        # 1. The PRT file was already updated by model_updater (via NX import journal)
        # 2. The solver just needs to load the SIM which references the updated PRT
        # 3. Passing design_vars would use hardcoded expression names that don't match our model
        op2_file = self.simulation_runner()
        logger.info(f"Simulation complete: {op2_file}")

        # Execute post-solve hooks
        self.hook_manager.execute_hooks('post_solve', {
            'trial_number': trial_number,
            'op2_file': op2_file
        })

        # ====================================================================
        # STEP 4: Extract Results (Phase 3.1 - Auto-Generated Extractors)
        # ====================================================================
        logger.info("Extracting results...")
        results = {}

        for extractor in self.extractors:
            try:
                extraction_result = self.orchestrator.execute_extractor(
                    extractor.name,
                    Path(op2_file),
                    subcase=1
                )
                results.update(extraction_result)
                logger.info(f" {extractor.name}: {list(extraction_result.keys())}")
            except Exception as e:
                logger.error(f"Extraction failed for {extractor.name}: {e}")
                # Continue with other extractors

        # Execute post-extraction hooks
        self.hook_manager.execute_hooks('post_extraction', {
            'trial_number': trial_number,
            'results': results
        })

        # ====================================================================
        # STEP 5: Inline Calculations (Phase 2.8 - Auto-Generated Code)
        # ====================================================================
        logger.info("Executing inline calculations...")
        calculations = {}
        calc_namespace = {**results, **calculations}  # Make results available

        for calc_code in self.inline_code:
            try:
                exec(calc_code, calc_namespace)
                # Extract newly created variables; iterate over a snapshot since
                # exec also injects entries like __builtins__ (filtered by the
                # underscore check below)
                for key, value in list(calc_namespace.items()):
                    if key not in results and not key.startswith('_'):
                        calculations[key] = value
                logger.info(f" Executed: {calc_code[:50]}...")
            except Exception as e:
                logger.error(f"Inline calculation failed: {e}")

        logger.info(f"Calculations: {calculations}")

        # ====================================================================
        # STEP 6: Post-Calculation Hooks (Phase 2.9 - Auto-Generated Hooks)
        # ====================================================================
        logger.info("Executing post-calculation hooks...")
        hook_results = self.hook_manager.execute_hooks('post_calculation', {
            'trial_number': trial_number,
            'design_variables': design_vars,
            'results': results,
            'calculations': calculations
        })

        # Merge hook results
        final_context = {**results, **calculations}
        for hook_result in hook_results:
            if hook_result:
                final_context.update(hook_result)

        logger.info(f"Hook results: {hook_results}")

        # ====================================================================
        # STEP 7: Extract Objective Value
        # ====================================================================
        # Try to get objective from hooks first
        objective = None

        # Check hook results for 'objective' or 'weighted_objective'
        for hook_result in hook_results:
            if hook_result:
                if 'objective' in hook_result:
                    objective = hook_result['objective']
                    break
                elif 'weighted_objective' in hook_result:
                    objective = hook_result['weighted_objective']
                    break

        # Fallback: use first extracted result
        if objective is None:
            # Try common objective names
            for key in ['max_displacement', 'max_stress', 'max_von_mises']:
                if key in final_context:
                    objective = final_context[key]
                    logger.warning(f"No explicit objective found, using: {key}")
                    break

        if objective is None:
            raise ValueError("Could not determine objective value from results/calculations/hooks")

        logger.info(f"Objective value: {objective}")

        # Save trial history
        trial_data = {
            'trial_number': trial_number,
            'design_variables': design_vars,
            'results': results,
            'calculations': calculations,
            'objective': objective
        }
        self.history.append(trial_data)

        # Incremental save - write history after each trial
        # This allows monitoring progress in real-time
        self._save_incremental_history()

        return float(objective)
    def run_optimization(self, n_trials: int = 50) -> Dict[str, Any]:
        """
        Run LLM-enhanced optimization with flexible automation.

        Args:
            n_trials: Number of optimization trials

        Returns:
            Dict with:
            - best_params: Best design variable values
            - best_value: Best objective value
            - history: Complete trial history
        """
        logger.info(f"\n{'='*80}")
        logger.info("Starting LLM-Driven Optimization")
        logger.info(f"{'='*80}")
        logger.info(f"Study: {self.study_name}")
        logger.info(f"Trials: {n_trials}")
        logger.info(f"Output: {self.output_dir}")
        logger.info(f"{'='*80}\n")

        # Create study
        self.study = self._create_optuna_study()

        # Run optimization
        self.study.optimize(self._objective, n_trials=n_trials)

        # Get results
        best_trial = self.study.best_trial
        results = {
            'best_params': best_trial.params,
            'best_value': best_trial.value,
            'best_trial_number': best_trial.number,
            'history': self.history
        }

        # Save results
        self._save_results(results)

        logger.info(f"\n{'='*80}")
        logger.info("Optimization Complete!")
        logger.info(f"{'='*80}")
        logger.info(f"Best value: {results['best_value']}")
        logger.info(f"Best params: {results['best_params']}")
        logger.info(f"Results saved to: {self.output_dir}")
        logger.info(f"{'='*80}\n")

        return results
    def _save_incremental_history(self):
        """
        Save trial history incrementally after each trial.
        This allows real-time monitoring of optimization progress.
        """
        history_file = self.output_dir / "optimization_history_incremental.json"

        # Convert history to a JSON-serializable format: coerce plain numerics
        # to native floats here; json.dump's default=str below stringifies
        # anything else (e.g. numpy integer types, Path objects)
        serializable_history = []
        for trial in self.history:
            trial_copy = trial.copy()
            for key in ['results', 'calculations', 'design_variables']:
                if key in trial_copy:
                    trial_copy[key] = {k: float(v) if isinstance(v, (int, float)) else v
                                       for k, v in trial_copy[key].items()}
            if 'objective' in trial_copy:
                trial_copy['objective'] = float(trial_copy['objective'])
            serializable_history.append(trial_copy)

        # Write to file
        with open(history_file, 'w') as f:
            json.dump(serializable_history, f, indent=2, default=str)
    def _save_results(self, results: Dict[str, Any]):
        """Save optimization results summary to file."""
        results_file = self.output_dir / "optimization_results.json"

        # Save a JSON-serializable summary; the full trial history is written
        # incrementally by _save_incremental_history
        serializable_results = {
            'best_params': results['best_params'],
            'best_value': results['best_value'],
            'best_trial_number': results['best_trial_number'],
            'timestamp': datetime.now().isoformat(),
            'study_name': self.study_name,
            'n_trials': len(results['history'])
        }

        with open(results_file, 'w') as f:
            json.dump(serializable_results, f, indent=2)

        logger.info(f"Results saved to: {results_file}")
def main():
    """Test LLM-driven optimization runner."""
    print("=" * 80)
    print("Phase 3.2: LLM-Driven Optimization Runner Test")
    print("=" * 80)
    print()

    # Example LLM workflow (from Phase 2.7)
    llm_workflow = {
        "engineering_features": [
            {
                "action": "extract_displacement",
                "domain": "result_extraction",
                "description": "Extract displacement from OP2",
                "params": {"result_type": "displacement"}
            }
        ],
        "inline_calculations": [
            {
                "action": "normalize",
                "params": {
                    "input": "max_displacement",
                    "reference": "max_allowed_disp",
                    "value": 5.0
                },
                "code_hint": "norm_disp = max_displacement / 5.0"
            }
        ],
        "post_processing_hooks": [
            {
                "action": "weighted_objective",
                "params": {
                    "inputs": ["norm_disp"],
                    "weights": [1.0],
                    "objective": "minimize"
                }
            }
        ],
        "optimization": {
            "algorithm": "TPE",
            "direction": "minimize",
            "design_variables": [
                {
                    "parameter": "wall_thickness",
                    "min": 3.0,
                    "max": 8.0,
                    "type": "continuous"
                }
            ]
        }
    }

    print("LLM Workflow Configuration:")
    print(f" Engineering features: {len(llm_workflow['engineering_features'])}")
    print(f" Inline calculations: {len(llm_workflow['inline_calculations'])}")
    print(f" Post-processing hooks: {len(llm_workflow['post_processing_hooks'])}")
    print(f" Design variables: {len(llm_workflow['optimization']['design_variables'])}")
    print()

    # Dummy functions for testing
    def dummy_model_updater(design_vars):
        print(f" [Dummy] Updating model with: {design_vars}")

    def dummy_simulation_runner():
        print(" [Dummy] Running simulation...")
        # Return path to test OP2
        return Path("tests/bracket_sim1-solution_1.op2")

    # Initialize runner
    print("Initializing LLM-driven optimization runner...")
    runner = LLMOptimizationRunner(
        llm_workflow=llm_workflow,
        model_updater=dummy_model_updater,
        simulation_runner=dummy_simulation_runner,
        study_name="test_llm_optimization"
    )
    print()

    print("=" * 80)
    print("Runner initialized successfully!")
    print("Ready to run optimization with auto-generated code!")
    print("=" * 80)
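# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the runner): one way to tail the
# optimization_history_incremental.json file that _save_incremental_history
# writes after every trial, for live monitoring. Assumes the list-of-trial-dicts
# layout produced above; the helper name is hypothetical.
def summarize_incremental_history(history_file):
    """Return (trial_count, best_objective) for a minimize-direction study."""
    import json
    from pathlib import Path

    trials = json.loads(Path(history_file).read_text())
    objectives = [t['objective'] for t in trials if 'objective' in t]
    return len(trials), (min(objectives) if objectives else None)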
if __name__ == '__main__':
    main()