refactor: Archive experimental LLM features for MVP stability (Phase 1.1)
Moved experimental LLM integration code to optimization_engine/future/:
- llm_optimization_runner.py - Runtime LLM API runner
- llm_workflow_analyzer.py - Workflow analysis
- inline_code_generator.py - Auto-generate calculations
- hook_generator.py - Auto-generate hooks
- report_generator.py - LLM report generation
- extractor_orchestrator.py - Extractor orchestration

Added comprehensive optimization_engine/future/README.md explaining:
- MVP LLM strategy (Claude Code skills, not runtime LLM)
- Why files were archived
- When to revisit post-MVP
- Production architecture reference

Production runner confirmed: optimization_engine/runner.py is the sole active runner.

This establishes clear separation between:
- Production code (stable, no runtime LLM dependencies)
- Experimental code (archived for post-MVP exploration)

Part of Phase 1: Core Stabilization & Organization for MVP

Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
optimization_engine/future/README.md (new file, 105 lines)

@@ -0,0 +1,105 @@
# Experimental LLM Features (Archived)

**Status**: Archived for post-MVP development
**Date Archived**: November 24, 2025

## Purpose

This directory contains experimental LLM integration code that was explored during early development phases. These features are archived (not deleted) for potential future use after the MVP is stable and shipped.

## MVP LLM Integration Strategy

For the **MVP**, LLM integration is achieved through:

- **Claude Code Development Assistant**: Interactive development-time assistance
- **Claude Skills** (`.claude/skills/`):
  - `create-study.md` - Interactive study scaffolding
  - `analyze-workflow.md` - Workflow classification and analysis

This approach provides LLM assistance **without adding runtime dependencies** or complexity to the core optimization engine.
## Archived Experimental Files

### 1. `llm_optimization_runner.py`

Experimental runner that makes runtime LLM API calls during optimization. It attempted to automate:

- Extractor generation
- Inline calculations
- Post-processing hooks

**Why Archived**: Adds runtime dependencies, API costs, and complexity. The centralized extractor library (`optimization_engine/extractors/`) provides better maintainability.

### 2. `llm_workflow_analyzer.py`

LLM-based workflow analysis for automated study setup.

**Why Archived**: The `analyze-workflow` Claude skill provides the same functionality through development-time assistance, without runtime overhead.

### 3. `inline_code_generator.py`

Auto-generates inline Python calculations from natural language.

**Why Archived**: Manual calculation definition in `optimization_config.json` is clearer and more maintainable for the MVP.

### 4. `hook_generator.py`

Auto-generates post-processing hooks from natural-language descriptions.

**Why Archived**: The plugin system (`optimization_engine/plugins/`) with manual hook definition is more robust and debuggable.

### 5. `report_generator.py`

LLM-based report generation from optimization results.

**Why Archived**: The dashboard provides rich visualizations. LLM summaries can be added post-MVP if needed.

### 6. `extractor_orchestrator.py`

Orchestrates LLM-based extractor generation and management.

**Why Archived**: The centralized extractor library (`optimization_engine/extractors/`) is the production approach; no code generation is needed at runtime.
## When to Revisit

Consider reviving these experimental features **after MVP** if:

1. ✅ The MVP is stable and well-tested
2. ✅ Users request more automation
3. ✅ The core architecture is mature enough to support optional LLM features
4. ✅ There is a clear ROI on LLM API costs vs. manual configuration time
## Production Architecture (MVP)

For reference, the **stable production** components are:

```
optimization_engine/
├── runner.py          # Production optimization runner
├── extractors/        # Centralized extractor library
│   ├── __init__.py
│   ├── base.py
│   ├── displacement.py
│   ├── stress.py
│   ├── frequency.py
│   └── mass.py
├── plugins/           # Plugin system (hooks)
│   ├── __init__.py
│   └── hook_manager.py
├── nx_solver.py       # NX simulation interface
├── nx_updater.py      # NX expression updates
└── visualizer.py      # Result plotting

.claude/skills/        # Claude Code skills
├── create-study.md    # Interactive study creation
└── analyze-workflow.md  # Workflow analysis
```
## Migration Notes

If you need to use any of these experimental files:

1. They are functional but not maintained.
2. Update imports to `optimization_engine.future.{module_name}`.
3. Install any additional dependencies (LLM client libraries).
4. Be aware of API costs for LLM calls.
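The import change in step 2 is a mechanical prefix swap; a minimal illustration (the analyzer module name is just one example from the list above):

```python
# Pre-archive import path (illustrative):
#   from optimization_engine.llm_workflow_analyzer import ...
# Post-archive, the same module lives under the `future` subpackage:
#   from optimization_engine.future.llm_workflow_analyzer import ...

old = "optimization_engine.llm_workflow_analyzer"
# Insert the "future." segment after the package root
new = old.replace("optimization_engine.", "optimization_engine.future.", 1)
print(new)  # optimization_engine.future.llm_workflow_analyzer
```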
## Related Documents

- [`docs/07_DEVELOPMENT/Today_Todo.md`](../../docs/07_DEVELOPMENT/Today_Todo.md) - MVP Development Plan
- [`DEVELOPMENT.md`](../../DEVELOPMENT.md) - Development guide
- [`.claude/skills/create-study.md`](../../.claude/skills/create-study.md) - Study creation skill

## Questions?

For MVP development questions, refer to the [DEVELOPMENT.md](../../DEVELOPMENT.md) guide or the MVP plan in `docs/07_DEVELOPMENT/Today_Todo.md`.
optimization_engine/future/extractor_orchestrator.py (new file, 394 lines)

@@ -0,0 +1,394 @@
"""
Extractor Orchestrator - Phase 3.1

Integrates Phase 2.7 LLM workflow analysis with the Phase 3 pyNastran research agent
to automatically generate and manage OP2 extractors.

This orchestrator:
1. Takes Phase 2.7 LLM output (engineering_features)
2. Uses the Phase 3 research agent to generate extractors
3. Saves generated extractors to result_extractors/
4. Provides dynamic loading for the optimization runtime

Author: Atomizer Development Team
Version: 0.1.0 (Phase 3.1)
Last Updated: 2025-01-16
"""

from typing import Dict, Any, List, Optional
from pathlib import Path
import importlib.util
import logging
from dataclasses import dataclass

from optimization_engine.pynastran_research_agent import PyNastranResearchAgent, ExtractionPattern
from optimization_engine.extractor_library import ExtractorLibrary, create_study_manifest

logger = logging.getLogger(__name__)


@dataclass
class GeneratedExtractor:
    """Represents a generated extractor module."""
    name: str
    file_path: Path
    function_name: str
    extraction_pattern: ExtractionPattern
    params: Dict[str, Any]


class ExtractorOrchestrator:
    """
    Orchestrates automatic extractor generation from LLM workflow analysis.

    This class bridges Phase 2.7 (LLM analysis) and Phase 3 (pyNastran research)
    to create a complete end-to-end automation pipeline.
    """

    def __init__(self,
                 extractors_dir: Optional[Path] = None,
                 knowledge_base_path: Optional[Path] = None,
                 use_core_library: bool = True):
        """
        Initialize the orchestrator.

        Args:
            extractors_dir: Directory to save the study manifest (not extractor code!)
            knowledge_base_path: Path to the pyNastran pattern knowledge base
            use_core_library: Use the centralized library (True) or per-study generation (False, legacy)
        """
        self.use_core_library = use_core_library

        if extractors_dir is None:
            extractors_dir = Path(__file__).parent / "result_extractors" / "generated"

        self.extractors_dir = Path(extractors_dir)
        self.extractors_dir.mkdir(parents=True, exist_ok=True)

        # Initialize the Phase 3 research agent
        self.research_agent = PyNastranResearchAgent(knowledge_base_path)

        # Initialize the centralized library (new architecture)
        if use_core_library:
            self.library = ExtractorLibrary()
            logger.info(f"Using centralized extractor library: {self.library.library_dir}")
        else:
            self.library = None
            logger.warning("Using legacy per-study extractor generation (not recommended)")

        # Registry of extractors generated in this session
        self.extractors: Dict[str, GeneratedExtractor] = {}
        self.extractor_signatures: List[str] = []  # Track which library extractors were used

        logger.info("ExtractorOrchestrator initialized")
    def process_llm_workflow(self, llm_output: Dict[str, Any]) -> List[GeneratedExtractor]:
        """
        Process Phase 2.7 LLM workflow output and generate all required extractors.

        Args:
            llm_output: Dict with structure:
                {
                    "engineering_features": [
                        {
                            "action": "extract_1d_element_forces",
                            "domain": "result_extraction",
                            "description": "Extract element forces from CBAR in Z direction",
                            "params": {
                                "element_types": ["CBAR"],
                                "result_type": "element_force",
                                "direction": "Z"
                            }
                        }
                    ],
                    "inline_calculations": [...],
                    "post_processing_hooks": [...],
                    "optimization": {...}
                }

        Returns:
            List of GeneratedExtractor objects
        """
        engineering_features = llm_output.get('engineering_features', [])

        generated_extractors = []

        for feature in engineering_features:
            domain = feature.get('domain', '')

            # Only process result extraction features
            if domain == 'result_extraction':
                logger.info(f"Processing extraction feature: {feature.get('action')}")

                try:
                    extractor = self.generate_extractor_from_feature(feature)
                    generated_extractors.append(extractor)
                except Exception as e:
                    logger.error(f"Failed to generate extractor for {feature.get('action')}: {e}")
                    # Continue with other features

        # New architecture: create a study manifest (do not copy code)
        if self.use_core_library and self.library and self.extractor_signatures:
            create_study_manifest(self.extractor_signatures, self.extractors_dir)
            logger.info("Study manifest created - extractors referenced from core library")

        logger.info(f"Generated {len(generated_extractors)} extractors")
        return generated_extractors
    def generate_extractor_from_feature(self, feature: Dict[str, Any]) -> GeneratedExtractor:
        """
        Generate a single extractor from an engineering feature.

        Args:
            feature: Engineering feature dict from the Phase 2.7 LLM

        Returns:
            GeneratedExtractor object
        """
        action = feature.get('action', '')
        description = feature.get('description', '')
        params = feature.get('params', {})

        # Prepare a request for the Phase 3 research agent
        research_request = {
            'action': action,
            'domain': 'result_extraction',
            'description': description,
            'params': params
        }

        # Use the Phase 3 research agent to find/generate an extraction pattern
        logger.info(f"Researching extraction pattern for: {action}")
        pattern = self.research_agent.research_extraction(research_request)

        # Generate complete extractor code
        logger.info(f"Generating extractor code using pattern: {pattern.name}")
        extractor_code = self.research_agent.generate_extractor_code(research_request)

        # New architecture: use the centralized library
        if self.use_core_library and self.library:
            # Add to / retrieve from the core library (deduplication happens here)
            file_path = self.library.get_or_create(feature, extractor_code)

            # Track the signature for the study manifest
            signature = self.library._compute_signature(feature)
            self.extractor_signatures.append(signature)

            logger.info(f"Extractor available in core library: {file_path}")
        else:
            # Legacy: save to the per-study directory
            filename = self._action_to_filename(action)
            file_path = self.extractors_dir / filename

            logger.info(f"Saving extractor to study directory (legacy): {file_path}")
            with open(file_path, 'w') as f:
                f.write(extractor_code)

        # Extract the function name from the generated code
        function_name = self._extract_function_name(extractor_code)

        # Create the GeneratedExtractor object
        extractor = GeneratedExtractor(
            name=action,
            file_path=file_path,
            function_name=function_name,
            extraction_pattern=pattern,
            params=params
        )

        # Register in the session
        self.extractors[action] = extractor

        logger.info(f"Successfully generated extractor: {action} → {function_name}")
        return extractor

    def _action_to_filename(self, action: str) -> str:
        """Convert an action name to a Python filename."""
        # e.g., "extract_1d_element_forces" → "extract_1d_element_forces.py"
        return f"{action}.py"

    def _extract_function_name(self, code: str) -> str:
        """Extract the main function name from generated code."""
        # Look for the first "def function_name(" pattern
        import re
        match = re.search(r'def\s+(\w+)\s*\(', code)
        if match:
            return match.group(1)
        return "extract"  # fallback
    def load_extractor(self, extractor_name: str) -> Any:
        """
        Dynamically load a generated extractor module.

        Args:
            extractor_name: Name of the extractor (action name)

        Returns:
            The extractor function (callable)
        """
        if extractor_name not in self.extractors:
            raise ValueError(f"Extractor '{extractor_name}' not found in registry")

        extractor = self.extractors[extractor_name]

        # Dynamic import
        spec = importlib.util.spec_from_file_location(extractor_name, extractor.file_path)
        if spec is None or spec.loader is None:
            raise ImportError(f"Could not load extractor from {extractor.file_path}")

        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)

        # Get the function
        if not hasattr(module, extractor.function_name):
            raise AttributeError(f"Function '{extractor.function_name}' not found in {extractor_name}")

        return getattr(module, extractor.function_name)
    def execute_extractor(self,
                          extractor_name: str,
                          op2_file: Path,
                          **kwargs) -> Dict[str, Any]:
        """
        Load and execute an extractor.

        Args:
            extractor_name: Name of the extractor
            op2_file: Path to the OP2 file
            **kwargs: Additional arguments for the extractor

        Returns:
            Extraction results dictionary
        """
        logger.info(f"Executing extractor: {extractor_name}")

        # Load the extractor function
        extractor_func = self.load_extractor(extractor_name)

        # Get extractor params - filter to only the relevant params for each pattern
        extractor = self.extractors[extractor_name]
        pattern_name = extractor.extraction_pattern.name

        # Pattern-specific parameter filtering
        if pattern_name == 'displacement':
            # Displacement extractor only takes op2_file and subcase
            params = {k: v for k, v in kwargs.items() if k in ['subcase']}
        elif pattern_name == 'cbar_force':
            # CBAR force takes direction, subcase
            params = {k: v for k, v in kwargs.items() if k in ['direction', 'subcase']}
        elif pattern_name == 'solid_stress':
            # Solid stress takes element_type, subcase
            params = {k: v for k, v in kwargs.items() if k in ['element_type', 'subcase']}
        else:
            # Generic - pass all kwargs
            params = kwargs.copy()

        # Execute
        try:
            result = extractor_func(op2_file, **params)
            logger.info(f"Extraction successful: {extractor_name}")
            return result
        except Exception as e:
            logger.error(f"Extraction failed: {extractor_name} - {e}")
            raise

    def get_summary(self) -> Dict[str, Any]:
        """Get a summary of all generated extractors."""
        return {
            'total_extractors': len(self.extractors),
            'extractors': [
                {
                    'name': name,
                    'file': str(ext.file_path),
                    'function': ext.function_name,
                    'pattern': ext.extraction_pattern.name,
                    'params': ext.params
                }
                for name, ext in self.extractors.items()
            ]
        }
def main():
    """Test the extractor orchestrator with a Phase 2.7 example."""
    print("=" * 80)
    print("Phase 3.1: Extractor Orchestrator Test")
    print("=" * 80)
    print()

    # Phase 2.7 LLM output example (CBAR forces)
    llm_output = {
        "engineering_features": [
            {
                "action": "extract_1d_element_forces",
                "domain": "result_extraction",
                "description": "Extract element forces from CBAR in Z direction from OP2",
                "params": {
                    "element_types": ["CBAR"],
                    "result_type": "element_force",
                    "direction": "Z"
                }
            }
        ],
        "inline_calculations": [
            {
                "action": "calculate_average",
                "params": {"input": "forces_z", "operation": "mean"}
            },
            {
                "action": "find_minimum",
                "params": {"input": "forces_z", "operation": "min"}
            }
        ],
        "post_processing_hooks": [
            {
                "action": "comparison",
                "params": {
                    "inputs": ["min_force", "avg_force"],
                    "operation": "ratio",
                    "output_name": "min_to_avg_ratio"
                }
            }
        ]
    }

    print("Test Input: Phase 2.7 LLM Output")
    print(f"  Engineering features: {len(llm_output['engineering_features'])}")
    print(f"  Inline calculations: {len(llm_output['inline_calculations'])}")
    print(f"  Post-processing hooks: {len(llm_output['post_processing_hooks'])}")
    print()

    # Initialize the orchestrator
    orchestrator = ExtractorOrchestrator()

    # Process the LLM workflow
    print("1. Processing LLM workflow...")
    extractors = orchestrator.process_llm_workflow(llm_output)

    print(f"   Generated {len(extractors)} extractors:")
    for ext in extractors:
        print(f"   - {ext.name} → {ext.function_name}() in {ext.file_path.name}")
    print()

    # Show the summary
    print("2. Orchestrator summary:")
    summary = orchestrator.get_summary()
    print(f"   Total extractors: {summary['total_extractors']}")
    for ext_info in summary['extractors']:
        print(f"   {ext_info['name']}:")
        print(f"     Pattern: {ext_info['pattern']}")
        print(f"     File: {ext_info['file']}")
        print(f"     Function: {ext_info['function']}")
    print()

    print("=" * 80)
    print("Phase 3.1 Test Complete!")
    print("=" * 80)
    print()
    print("Next step: Test extractor execution on a real OP2 file")


if __name__ == '__main__':
    main()
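`load_extractor` above uses the standard `importlib.util` file-path loading sequence. That pattern can be exercised standalone; the module written here is a stand-in for a generated extractor, not real project code:

```python
import importlib.util
import tempfile
from pathlib import Path

# Write a stand-in "generated extractor" to a temporary directory
tmp = Path(tempfile.mkdtemp())
extractor_path = tmp / "extract_demo.py"
extractor_path.write_text(
    "def extract(op2_file, subcase=1):\n"
    "    # A real extractor would parse the OP2 file here\n"
    "    return {'source': str(op2_file), 'subcase': subcase}\n"
)

# Same sequence as ExtractorOrchestrator.load_extractor:
# build a spec from the file path, create the module, execute it
spec = importlib.util.spec_from_file_location("extract_demo", extractor_path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

result = module.extract("model.op2", subcase=2)
print(result)  # {'source': 'model.op2', 'subcase': 2}
```

This is why generated extractors never need to live on `sys.path`: each one is loaded directly from its file location.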
optimization_engine/future/hook_generator.py (new file, 947 lines)

@@ -0,0 +1,947 @@
"""
Post-Processing Hook Generator - Phase 2.9

Auto-generates middleware Python scripts for post-processing operations in optimization workflows.

This handles the "post_processing_hooks" from Phase 2.7 LLM analysis.

Hook scripts sit between optimization steps to:
- Calculate custom objective functions
- Combine multiple metrics with weights
- Apply complex formulas
- Transform results for the next step

Examples:
- Weighted objective: 0.7 * norm_stress + 0.3 * norm_disp
- Custom constraint: max_stress / yield_strength < 1.0
- Multi-criteria metric: sqrt(stress^2 + disp^2)

Author: Atomizer Development Team
Version: 0.1.0 (Phase 2.9)
Last Updated: 2025-01-16
"""

from typing import Dict, Any, List, Optional
from dataclasses import dataclass
from pathlib import Path
import textwrap


@dataclass
class GeneratedHook:
    """Result of hook generation."""
    script_name: str
    script_content: str
    inputs_required: List[str]
    outputs_created: List[str]
    description: str
    hook_type: str  # 'weighted_objective', 'custom_formula', 'constraint', etc.


class HookGenerator:
    """
    Generates post-processing hook scripts for optimization workflows.

    Hook scripts are standalone Python modules that execute between optimization
    steps to perform custom calculations, combine metrics, or transform results.
    """

    def __init__(self):
        """Initialize the hook generator."""
        self.supported_hook_types = {
            'weighted_objective',
            'weighted_combination',
            'custom_formula',
            'constraint_check',
            'multi_objective',
            'custom_metric',
            'comparison',
            'threshold_check'
        }
    def generate_from_llm_output(self, hook_spec: Dict[str, Any]) -> GeneratedHook:
        """
        Generate a hook script from an LLM-analyzed post-processing requirement.

        Args:
            hook_spec: Dictionary from the LLM with keys:
                - action: str (e.g., "weighted_objective")
                - description: str
                - params: dict with inputs/weights/formula/etc.

        Returns:
            GeneratedHook with a complete Python script
        """
        action = hook_spec.get('action', '').lower()
        params = hook_spec.get('params', {})
        description = hook_spec.get('description', '')

        # Determine the hook type and generate the appropriate script
        if 'weighted' in action or 'combination' in action:
            return self._generate_weighted_objective(params, description)
        elif 'formula' in action or 'custom' in action:
            return self._generate_custom_formula(params, description)
        elif 'constraint' in action or 'check' in action:
            return self._generate_constraint_check(params, description)
        elif 'comparison' in action or 'compare' in action:
            return self._generate_comparison(params, description)
        else:
            # Generic hook
            return self._generate_generic_hook(action, params, description)
    def _generate_weighted_objective(self, params: Dict[str, Any],
                                     description: str) -> GeneratedHook:
        """
        Generate a weighted objective function hook.

        Example params:
            {
                "inputs": ["norm_stress", "norm_disp"],
                "weights": [0.7, 0.3],
                "formula": "0.7 * norm_stress + 0.3 * norm_disp",  # optional
                "objective": "minimize"
            }
        """
        inputs = params.get('inputs', [])
        weights = params.get('weights', [])
        formula = params.get('formula', '')
        objective = params.get('objective', 'minimize')

        # Validate that inputs and weights match
        if len(inputs) != len(weights):
            weights = [1.0 / len(inputs)] * len(inputs)  # Equal weights if mismatch

        # Generate the script name
        script_name = f"hook_weighted_objective_{'_'.join(inputs)}.py"

        # Build the formula if not provided
        if not formula:
            terms = [f"{w} * {inp}" for w, inp in zip(weights, inputs)]
            formula = " + ".join(terms)

        # Generate the script content
        script_content = f'''"""
Weighted Objective Function Hook
Auto-generated by Atomizer Phase 2.9

{description}

Inputs: {', '.join(inputs)}
Weights: {', '.join(map(str, weights))}
Formula: {formula}
Objective: {objective}
"""

import sys
import json
from pathlib import Path


def weighted_objective({', '.join(inputs)}):
    """
    Calculate the weighted objective from multiple inputs.

    Args:
        {self._format_args_doc(inputs)}

    Returns:
        float: Weighted objective value
    """
    result = {formula}
    return result


def main():
    """
    Main entry point for hook execution.
    Reads inputs from a JSON file, calculates the objective, writes the output.
    """
    # Parse command line arguments
    if len(sys.argv) < 2:
        print("Usage: python {{}} <input_file.json>".format(sys.argv[0]))
        sys.exit(1)

    input_file = Path(sys.argv[1])

    # Read inputs
    if not input_file.exists():
        print(f"Error: Input file {{input_file}} not found")
        sys.exit(1)

    with open(input_file, 'r') as f:
        inputs = json.load(f)

    # Extract required inputs
    {self._format_input_extraction(inputs)}

    # Calculate the weighted objective
    result = weighted_objective({', '.join(inputs)})

    # Write the output
    output_file = input_file.parent / "weighted_objective_result.json"
    output = {{
        "weighted_objective": result,
        "objective_type": "{objective}",
        "inputs_used": {{{', '.join([f'"{inp}": {inp}' for inp in inputs])}}},
        "formula": "{formula}"
    }}

    with open(output_file, 'w') as f:
        json.dump(output, f, indent=2)

    print(f"Weighted objective calculated: {{result:.6f}}")
    print(f"Result saved to: {{output_file}}")

    return result


if __name__ == '__main__':
    main()
'''

        return GeneratedHook(
            script_name=script_name,
            script_content=script_content,
            inputs_required=inputs,
            outputs_created=['weighted_objective'],
            description=description or f"Weighted combination of {', '.join(inputs)}",
            hook_type='weighted_objective'
        )
    def _generate_custom_formula(self, params: Dict[str, Any],
                                 description: str) -> GeneratedHook:
        """
        Generate a custom formula hook.

        Example params:
            {
                "inputs": ["max_stress", "yield_strength"],
                "formula": "max_stress / yield_strength",
                "output_name": "safety_factor"
            }
        """
        inputs = params.get('inputs', [])
        formula = params.get('formula', '')
        output_name = params.get('output_name', 'custom_result')

        if not formula:
            raise ValueError("Custom formula hook requires 'formula' parameter")

        script_name = f"hook_custom_{output_name}.py"

        script_content = f'''"""
Custom Formula Hook
Auto-generated by Atomizer Phase 2.9

{description}

Formula: {output_name} = {formula}
Inputs: {', '.join(inputs)}
"""

import sys
import json
from pathlib import Path


def calculate_{output_name}({', '.join(inputs)}):
    """
    Calculate a custom metric using the formula.

    Args:
        {self._format_args_doc(inputs)}

    Returns:
        float: {output_name}
    """
    {output_name} = {formula}
    return {output_name}


def main():
    """Main entry point for hook execution."""
    if len(sys.argv) < 2:
        print("Usage: python {{}} <input_file.json>".format(sys.argv[0]))
        sys.exit(1)

    input_file = Path(sys.argv[1])

    # Read inputs
    with open(input_file, 'r') as f:
        inputs = json.load(f)

    # Extract required inputs
    {self._format_input_extraction(inputs)}

    # Calculate the result
    result = calculate_{output_name}({', '.join(inputs)})

    # Write the output
    output_file = input_file.parent / "{output_name}_result.json"
    output = {{
        "{output_name}": result,
        "formula": "{formula}",
        "inputs_used": {{{', '.join([f'"{inp}": {inp}' for inp in inputs])}}}
    }}

    with open(output_file, 'w') as f:
        json.dump(output, f, indent=2)

    print(f"{output_name} = {{result:.6f}}")
    print(f"Result saved to: {{output_file}}")

    return result


if __name__ == '__main__':
    main()
'''

        return GeneratedHook(
            script_name=script_name,
            script_content=script_content,
            inputs_required=inputs,
            outputs_created=[output_name],
            description=description or f"Custom formula: {formula}",
            hook_type='custom_formula'
        )
    def _generate_constraint_check(self, params: Dict[str, Any],
                                   description: str) -> GeneratedHook:
        """
        Generate a constraint checking hook.

        Example params:
            {
                "inputs": ["max_stress", "yield_strength"],
                "condition": "max_stress < yield_strength",
                "threshold": 1.0,
                "constraint_name": "stress_limit"
            }
        """
        inputs = params.get('inputs', [])
        condition = params.get('condition', '')
        threshold = params.get('threshold', 1.0)
        constraint_name = params.get('constraint_name', 'constraint')

        script_name = f"hook_constraint_{constraint_name}.py"

        script_content = f'''"""
Constraint Check Hook
Auto-generated by Atomizer Phase 2.9

{description}

Constraint: {condition}
Threshold: {threshold}
"""

import sys
import json
from pathlib import Path


def check_{constraint_name}({', '.join(inputs)}):
    """
    Check the constraint condition.

    Args:
        {self._format_args_doc(inputs)}

    Returns:
        tuple: (satisfied: bool, value: float, violation: float)
    """
    value = {condition if condition else f"{inputs[0]} / {threshold}"}
    satisfied = value <= {threshold}
    violation = max(0.0, value - {threshold})

    return satisfied, value, violation


def main():
    """Main entry point for hook execution."""
    if len(sys.argv) < 2:
        print("Usage: python {{}} <input_file.json>".format(sys.argv[0]))
        sys.exit(1)

    input_file = Path(sys.argv[1])

    # Read inputs
    with open(input_file, 'r') as f:
        inputs = json.load(f)

    # Extract required inputs
    {self._format_input_extraction(inputs)}

    # Check the constraint
    satisfied, value, violation = check_{constraint_name}({', '.join(inputs)})

    # Write the output
    output_file = input_file.parent / "{constraint_name}_check.json"
    output = {{
        "constraint_name": "{constraint_name}",
        "satisfied": satisfied,
        "value": value,
        "threshold": {threshold},
        "violation": violation,
        "inputs_used": {{{', '.join([f'"{inp}": {inp}' for inp in inputs])}}}
    }}

    with open(output_file, 'w') as f:
        json.dump(output, f, indent=2)

    status = "SATISFIED" if satisfied else "VIOLATED"
    print(f"Constraint {{status}}: {{value:.6f}} (threshold: {threshold})")
    if not satisfied:
        print(f"Violation: {{violation:.6f}}")
    print(f"Result saved to: {{output_file}}")

    return value


if __name__ == '__main__':
    main()
'''

        return GeneratedHook(
            script_name=script_name,
            script_content=script_content,
            inputs_required=inputs,
            outputs_created=[constraint_name, f'{constraint_name}_satisfied', f'{constraint_name}_violation'],
            description=description or f"Constraint check: {condition}",
            hook_type='constraint_check'
        )
    def _generate_comparison(self, params: Dict[str, Any],
                             description: str) -> GeneratedHook:
        """
        Generate comparison hook (min/max ratio, difference, etc.).

        Example params:
            {
                "inputs": ["min_force", "avg_force"],
                "operation": "ratio",
                "output_name": "min_to_avg_ratio"
            }
        """
        inputs = params.get('inputs', [])
        operation = params.get('operation', 'ratio').lower()
        output_name = params.get('output_name', f"{operation}_result")

        if len(inputs) < 2:
            raise ValueError("Comparison hook requires at least 2 inputs")

        # Determine formula based on operation
        if operation == 'ratio':
            formula = f"{inputs[0]} / {inputs[1]}"
        elif operation == 'difference':
            formula = f"{inputs[0]} - {inputs[1]}"
        elif operation == 'percent_difference':
            formula = f"(({inputs[0]} - {inputs[1]}) / {inputs[1]}) * 100.0"
        else:
            formula = f"{inputs[0]} / {inputs[1]}"  # Default to ratio

        script_name = f"hook_compare_{output_name}.py"

        script_content = f'''"""
Comparison Hook
Auto-generated by Atomizer Phase 2.9

{description}

Operation: {operation}
Formula: {output_name} = {formula}
"""

import sys
import json
from pathlib import Path


def compare_{operation}({', '.join(inputs)}):
    """
    Compare values using {operation}.

    Args:
{self._format_args_doc(inputs)}

    Returns:
        float: Comparison result
    """
    result = {formula}
    return result


def main():
    """Main entry point for hook execution."""
    if len(sys.argv) < 2:
        print("Usage: python {{}} <input_file.json>".format(sys.argv[0]))
        sys.exit(1)

    input_file = Path(sys.argv[1])

    # Read inputs
    with open(input_file, 'r') as f:
        inputs = json.load(f)

    # Extract required inputs
{self._format_input_extraction(inputs)}

    # Calculate comparison
    result = compare_{operation}({', '.join(inputs)})

    # Write output
    output_file = input_file.parent / "{output_name}.json"
    output = {{
        "{output_name}": result,
        "operation": "{operation}",
        "formula": "{formula}",
        "inputs_used": {{{', '.join([f'"{inp}": {inp}' for inp in inputs])}}}
    }}

    with open(output_file, 'w') as f:
        json.dump(output, f, indent=2)

    print(f"{output_name} = {{result:.6f}}")
    print(f"Result saved to: {{output_file}}")

    return result


if __name__ == '__main__':
    main()
'''

        return GeneratedHook(
            script_name=script_name,
            script_content=script_content,
            inputs_required=inputs,
            outputs_created=[output_name],
            description=description or f"{operation.capitalize()} of {', '.join(inputs)}",
            hook_type='comparison'
        )

    def _generate_generic_hook(self, action: str, params: Dict[str, Any],
                               description: str) -> GeneratedHook:
        """Generate generic hook for unknown action types."""
        inputs = params.get('inputs', ['input_value'])
        formula = params.get('formula', 'input_value')
        output_name = params.get('output_name', 'result')

        script_name = f"hook_generic_{action.replace(' ', '_')}.py"

        script_content = f'''"""
Generic Hook
Auto-generated by Atomizer Phase 2.9

{description}

Action: {action}
"""

import sys
import json
from pathlib import Path


def process({', '.join(inputs)}):
    """Process inputs according to action."""
    # TODO: Implement {action}
    result = {formula}
    return result


def main():
    """Main entry point for hook execution."""
    if len(sys.argv) < 2:
        print("Usage: python {{}} <input_file.json>".format(sys.argv[0]))
        sys.exit(1)

    input_file = Path(sys.argv[1])

    with open(input_file, 'r') as f:
        inputs = json.load(f)

{self._format_input_extraction(inputs)}

    result = process({', '.join(inputs)})

    output_file = input_file.parent / "{output_name}.json"
    with open(output_file, 'w') as f:
        json.dump({{"result": result}}, f, indent=2)

    print(f"Result: {{result}}")
    return result


if __name__ == '__main__':
    main()
'''

        return GeneratedHook(
            script_name=script_name,
            script_content=script_content,
            inputs_required=inputs,
            outputs_created=[output_name],
            description=description or f"Generic hook: {action}",
            hook_type='generic'
        )

    def _format_args_doc(self, args: List[str]) -> str:
        """Format argument documentation for docstrings."""
        lines = []
        for arg in args:
            lines.append(f"        {arg}: float")
        return '\n'.join(lines)

    def _format_input_extraction(self, inputs: List[str]) -> str:
        """Format input extraction code."""
        lines = []
        for inp in inputs:
            lines.append(f'    {inp} = inputs.get("{inp}")')
            lines.append(f'    if {inp} is None:')
            lines.append(f'        print(f"Error: Required input \'{inp}\' not found")')
            lines.append(f'        sys.exit(1)')
        return '\n'.join(lines)

    def generate_batch(self, hook_specs: List[Dict[str, Any]]) -> List[GeneratedHook]:
        """
        Generate multiple hook scripts.

        Args:
            hook_specs: List of hook specifications from LLM

        Returns:
            List of GeneratedHook objects
        """
        return [self.generate_from_llm_output(spec) for spec in hook_specs]

    def save_hook_to_file(self, hook: GeneratedHook, output_dir: Path) -> Path:
        """
        Save generated hook script to file.

        Args:
            hook: GeneratedHook object
            output_dir: Directory to save script

        Returns:
            Path to saved script file
        """
        output_dir = Path(output_dir)
        output_dir.mkdir(parents=True, exist_ok=True)

        script_path = output_dir / hook.script_name
        with open(script_path, 'w') as f:
            f.write(hook.script_content)

        return script_path

    def generate_hook_registry(self, hooks: List[GeneratedHook], output_file: Path):
        """
        Generate a registry file documenting all hooks.

        Args:
            hooks: List of generated hooks
            output_file: Path to registry JSON file
        """
        registry = {
            "hooks": [
                {
                    "name": hook.script_name,
                    "type": hook.hook_type,
                    "description": hook.description,
                    "inputs": hook.inputs_required,
                    "outputs": hook.outputs_created
                }
                for hook in hooks
            ]
        }

        import json
        with open(output_file, 'w') as f:
            json.dump(registry, f, indent=2)

    def generate_lifecycle_hook(self, hook_spec: Dict[str, Any],
                                hook_point: str = "post_calculation") -> str:
        """
        Generate a hook compatible with Atomizer's lifecycle hook system (Phase 1).

        This creates a hook that integrates with HookManager and can be loaded
        from the plugins directory structure.

        Args:
            hook_spec: Hook specification from LLM (same as generate_from_llm_output)
            hook_point: Which lifecycle point to hook into (default: post_calculation)

        Returns:
            Complete Python module content with register_hooks() function

        Example output file: optimization_engine/plugins/post_calculation/weighted_objective.py
        """
        # Generate the core hook logic first
        generated_hook = self.generate_from_llm_output(hook_spec)

        action = hook_spec.get('action', '').lower()
        params = hook_spec.get('params', {})
        description = hook_spec.get('description', '')

        # Extract function name from hook type
        if 'weighted' in action:
            func_name = "weighted_objective_hook"
        elif 'formula' in action or 'custom' in action:
            output_name = params.get('output_name', 'custom_result')
            func_name = f"{output_name}_hook"
        elif 'constraint' in action:
            constraint_name = params.get('constraint_name', 'constraint')
            func_name = f"{constraint_name}_hook"
        elif 'comparison' in action:
            operation = params.get('operation', 'comparison')
            func_name = f"{operation}_hook"
        else:
            func_name = "custom_hook"

        # Build the lifecycle-compatible hook module
        module_content = f'''"""
{description}
Auto-generated lifecycle hook by Atomizer Phase 2.9

Hook Point: {hook_point}
Inputs: {', '.join(generated_hook.inputs_required)}
Outputs: {', '.join(generated_hook.outputs_created)}
"""

import logging
from typing import Dict, Any, Optional

logger = logging.getLogger(__name__)


def {func_name}(context: Dict[str, Any]) -> Optional[Dict[str, Any]]:
    """
    {description}

    Args:
        context: Hook context containing:
            - trial_number: Current optimization trial
            - results: Dictionary with extracted FEA results
            - calculations: Dictionary with inline calculation results

    Returns:
        Dictionary with calculated values to add to context
    """
    logger.info(f"Executing {func_name} for trial {{context.get('trial_number', 'unknown')}}")

    # Extract inputs from context
    results = context.get('results', {{}})
    calculations = context.get('calculations', {{}})

'''

        # Add input extraction based on hook type
        for input_var in generated_hook.inputs_required:
            module_content += f'''    {input_var} = calculations.get('{input_var}') or results.get('{input_var}')
    if {input_var} is None:
        logger.error(f"Required input '{input_var}' not found in context")
        raise ValueError(f"Missing required input: {input_var}")

'''

        # Add the core calculation logic
        if 'weighted' in action:
            inputs = params.get('inputs', [])
            weights = params.get('weights', [])
            formula = params.get('formula', '')
            if not formula:
                terms = [f"{w} * {inp}" for w, inp in zip(weights, inputs)]
                formula = " + ".join(terms)

            module_content += f'''    # Calculate weighted objective
    result = {formula}

    logger.info(f"Weighted objective calculated: {{result:.6f}}")

    return {{
        'weighted_objective': result,
        '{generated_hook.outputs_created[0]}': result
    }}
'''

        elif 'formula' in action or 'custom' in action:
            formula = params.get('formula', '')
            output_name = params.get('output_name', 'custom_result')

            module_content += f'''    # Calculate using custom formula
    {output_name} = {formula}

    logger.info(f"{output_name} = {{{output_name}:.6f}}")

    return {{
        '{output_name}': {output_name}
    }}
'''

        elif 'constraint' in action:
            condition = params.get('condition', '')
            threshold = params.get('threshold', 1.0)
            constraint_name = params.get('constraint_name', 'constraint')

            module_content += f'''    # Check constraint
    value = {condition if condition else f"{generated_hook.inputs_required[0]} / {threshold}"}
    satisfied = value <= {threshold}
    violation = max(0.0, value - {threshold})

    status = "SATISFIED" if satisfied else "VIOLATED"
    logger.info(f"Constraint {{status}}: {{value:.6f}} (threshold: {threshold})")

    return {{
        '{constraint_name}': value,
        '{constraint_name}_satisfied': satisfied,
        '{constraint_name}_violation': violation
    }}
'''

        elif 'comparison' in action:
            operation = params.get('operation', 'ratio').lower()
            inputs = params.get('inputs', [])
            output_name = params.get('output_name', f"{operation}_result")

            if operation == 'ratio':
                formula = f"{inputs[0]} / {inputs[1]}"
            elif operation == 'difference':
                formula = f"{inputs[0]} - {inputs[1]}"
            elif operation == 'percent_difference':
                formula = f"(({inputs[0]} - {inputs[1]}) / {inputs[1]}) * 100.0"
            else:
                formula = f"{inputs[0]} / {inputs[1]}"

            module_content += f'''    # Calculate comparison
    result = {formula}

    logger.info(f"{output_name} = {{result:.6f}}")

    return {{
        '{output_name}': result
    }}
'''

        # Add registration function for HookManager
        module_content += f'''

def register_hooks(hook_manager):
    """
    Register this hook with the HookManager.

    This function is called automatically when the plugin is loaded.

    Args:
        hook_manager: The HookManager instance
    """
    hook_manager.register_hook(
        hook_point='{hook_point}',
        function={func_name},
        description="{description}",
        name="{func_name}",
        priority=100,
        enabled=True
    )
    logger.info(f"Registered {func_name} at {hook_point}")
'''

        return module_content


def main():
    """Test the hook generator."""
    print("=" * 80)
    print("Phase 2.9: Post-Processing Hook Generator Test")
    print("=" * 80)
    print()

    generator = HookGenerator()

    # Test cases from Phase 2.7 LLM output
    test_hooks = [
        {
            "action": "weighted_objective",
            "description": "Combine normalized stress (70%) and displacement (30%)",
            "params": {
                "inputs": ["norm_stress", "norm_disp"],
                "weights": [0.7, 0.3],
                "objective": "minimize"
            }
        },
        {
            "action": "custom_formula",
            "description": "Calculate safety factor",
            "params": {
                "inputs": ["max_stress", "yield_strength"],
                "formula": "yield_strength / max_stress",
                "output_name": "safety_factor"
            }
        },
        {
            "action": "comparison",
            "description": "Compare min force to average",
            "params": {
                "inputs": ["min_force", "avg_force"],
                "operation": "ratio",
                "output_name": "min_to_avg_ratio"
            }
        },
        {
            "action": "constraint_check",
            "description": "Check if stress is below yield",
            "params": {
                "inputs": ["max_stress", "yield_strength"],
                "condition": "max_stress / yield_strength",
                "threshold": 1.0,
                "constraint_name": "yield_constraint"
            }
        }
    ]

    print("Test Hook Generation:")
    print()

    for i, hook_spec in enumerate(test_hooks, 1):
        print(f"{i}. {hook_spec['description']}")
        hook = generator.generate_from_llm_output(hook_spec)
        print(f"   Script: {hook.script_name}")
        print(f"   Type: {hook.hook_type}")
        print(f"   Inputs: {', '.join(hook.inputs_required)}")
        print(f"   Outputs: {', '.join(hook.outputs_created)}")
        print()

    # Generate and save example hooks
    print("=" * 80)
    print("Example: Weighted Objective Hook Script")
    print("=" * 80)
    print()

    weighted_hook = generator.generate_from_llm_output(test_hooks[0])
    print(weighted_hook.script_content)

    # Save hooks to files
    output_dir = Path("generated_hooks")
    print("=" * 80)
    print(f"Saving generated hooks to: {output_dir}")
    print("=" * 80)
    print()

    generated_hooks = generator.generate_batch(test_hooks)
    for hook in generated_hooks:
        script_path = generator.save_hook_to_file(hook, output_dir)
        print(f"[OK] Saved: {script_path}")

    # Generate registry
    registry_path = output_dir / "hook_registry.json"
    generator.generate_hook_registry(generated_hooks, registry_path)
    print(f"[OK] Registry: {registry_path}")


if __name__ == '__main__':
    main()
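For context when revisiting this archived code: the scripts `_generate_comparison` emits boil down to a small extract-compute-report pattern. The sketch below is hypothetical (not part of the archived module); it reproduces the core of a generated "ratio" hook with the JSON file I/O replaced by an in-memory dict, and the function and key names mirror the test spec above.

```python
def compare_ratio(min_force, avg_force):
    """Mirror of the compare_* function a generated ratio hook contains."""
    return min_force / avg_force


def run_hook(inputs):
    """Extract required inputs and compute the result, as a generated main() does."""
    min_force = inputs.get("min_force")
    avg_force = inputs.get("avg_force")
    if min_force is None or avg_force is None:
        # Generated hooks print an error and exit; raising is the in-process analogue.
        raise ValueError("Missing required input")
    result = compare_ratio(min_force, avg_force)
    # Same payload shape a generated hook writes to <output_name>.json
    return {
        "min_to_avg_ratio": result,
        "operation": "ratio",
        "inputs_used": {"min_force": min_force, "avg_force": avg_force},
    }


print(run_hook({"min_force": 8.9, "avg_force": 10.54}))
```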
473
optimization_engine/future/inline_code_generator.py
Normal file
@@ -0,0 +1,473 @@
"""
Inline Code Generator - Phase 2.8

Auto-generates simple Python code for mathematical operations that don't require
external documentation or research.

This handles the "inline_calculations" from Phase 2.7 LLM analysis.

Examples:
    - Calculate average: avg = sum(values) / len(values)
    - Find minimum: min_val = min(values)
    - Normalize: norm_val = value / divisor
    - Calculate percentage: pct = (value / baseline) * 100

Author: Atomizer Development Team
Version: 0.1.0 (Phase 2.8)
Last Updated: 2025-01-16
"""

from typing import Dict, Any, List, Optional
from dataclasses import dataclass


@dataclass
class GeneratedCode:
    """Result of code generation."""
    code: str
    variables_used: List[str]
    variables_created: List[str]
    imports_needed: List[str]
    description: str


class InlineCodeGenerator:
    """
    Generates Python code for simple mathematical operations.

    This class takes structured calculation descriptions (from LLM Phase 2.7)
    and generates clean, executable Python code.
    """

    def __init__(self):
        """Initialize the code generator."""
        self.supported_operations = {
            'mean', 'average', 'avg',
            'min', 'minimum',
            'max', 'maximum',
            'sum', 'total',
            'count', 'length',
            'normalize', 'norm',
            'percentage', 'percent', 'pct',
            'ratio',
            'difference', 'diff',
            'add', 'subtract', 'multiply', 'divide',
            'abs', 'absolute',
            'sqrt', 'square_root',
            'power', 'pow'
        }

    def generate_from_llm_output(self, calculation: Dict[str, Any]) -> GeneratedCode:
        """
        Generate code from LLM-analyzed calculation.

        Args:
            calculation: Dictionary from LLM with keys:
                - action: str (e.g., "calculate_average")
                - description: str
                - params: dict with input/operation/etc.
                - code_hint: str (optional, from LLM)

        Returns:
            GeneratedCode with executable Python code
        """
        action = calculation.get('action', '')
        params = calculation.get('params', {})
        description = calculation.get('description', '')
        code_hint = calculation.get('code_hint', '')

        # If LLM provided a code hint, validate and use it
        if code_hint:
            return self._from_code_hint(code_hint, params, description)

        # Otherwise, generate from action/params
        return self._from_action_params(action, params, description)

    def _from_code_hint(self, code_hint: str, params: Dict[str, Any],
                        description: str) -> GeneratedCode:
        """Generate from LLM-provided code hint."""
        # Extract variable names from code hint
        variables_used = self._extract_input_variables(code_hint, params)
        variables_created = self._extract_output_variables(code_hint)
        imports_needed = self._extract_imports_needed(code_hint)

        return GeneratedCode(
            code=code_hint.strip(),
            variables_used=variables_used,
            variables_created=variables_created,
            imports_needed=imports_needed,
            description=description
        )

    def _from_action_params(self, action: str, params: Dict[str, Any],
                            description: str) -> GeneratedCode:
        """Generate code from action name and parameters."""
        operation = params.get('operation', '').lower()
        input_var = params.get('input', 'values')
        divisor = params.get('divisor')
        baseline = params.get('baseline')
        current = params.get('current')

        # Detect operation type
        if any(op in action.lower() or op in operation for op in ['avg', 'average', 'mean']):
            return self._generate_average(input_var, description)

        elif any(op in action.lower() or op in operation for op in ['min', 'minimum']):
            return self._generate_min(input_var, description)

        elif any(op in action.lower() or op in operation for op in ['max', 'maximum']):
            return self._generate_max(input_var, description)

        elif any(op in action.lower() for op in ['normalize', 'norm']) and divisor:
            return self._generate_normalization(input_var, divisor, description)

        elif any(op in action.lower() for op in ['percentage', 'percent', 'pct', 'increase']):
            current = params.get('current')
            baseline = params.get('baseline')
            if current and baseline:
                return self._generate_percentage_change(current, baseline, description)
            elif divisor:
                return self._generate_percentage(input_var, divisor, description)

        elif 'sum' in action.lower() or 'total' in action.lower():
            return self._generate_sum(input_var, description)

        elif 'ratio' in action.lower():
            inputs = params.get('inputs', [])
            if len(inputs) >= 2:
                return self._generate_ratio(inputs[0], inputs[1], description)

        # Fallback: generic operation
        return self._generate_generic(action, params, description)

    def _generate_average(self, input_var: str, description: str) -> GeneratedCode:
        """Generate code to calculate average."""
        output_var = f"avg_{input_var}" if not input_var.startswith('avg') else input_var.replace('input', 'avg')
        code = f"{output_var} = sum({input_var}) / len({input_var})"

        return GeneratedCode(
            code=code,
            variables_used=[input_var],
            variables_created=[output_var],
            imports_needed=[],
            description=description or f"Calculate average of {input_var}"
        )

    def _generate_min(self, input_var: str, description: str) -> GeneratedCode:
        """Generate code to find minimum."""
        output_var = f"min_{input_var}" if not input_var.startswith('min') else input_var.replace('input', 'min')
        code = f"{output_var} = min({input_var})"

        return GeneratedCode(
            code=code,
            variables_used=[input_var],
            variables_created=[output_var],
            imports_needed=[],
            description=description or f"Find minimum of {input_var}"
        )

    def _generate_max(self, input_var: str, description: str) -> GeneratedCode:
        """Generate code to find maximum."""
        output_var = f"max_{input_var}" if not input_var.startswith('max') else input_var.replace('input', 'max')
        code = f"{output_var} = max({input_var})"

        return GeneratedCode(
            code=code,
            variables_used=[input_var],
            variables_created=[output_var],
            imports_needed=[],
            description=description or f"Find maximum of {input_var}"
        )

    def _generate_normalization(self, input_var: str, divisor: float,
                                description: str) -> GeneratedCode:
        """Generate code to normalize a value."""
        output_var = f"norm_{input_var}" if not input_var.startswith('norm') else input_var
        code = f"{output_var} = {input_var} / {divisor}"

        return GeneratedCode(
            code=code,
            variables_used=[input_var],
            variables_created=[output_var],
            imports_needed=[],
            description=description or f"Normalize {input_var} by {divisor}"
        )

    def _generate_percentage_change(self, current: str, baseline: str,
                                    description: str) -> GeneratedCode:
        """Generate code to calculate percentage change."""
        # Infer output variable name from inputs
        if 'mass' in current.lower() or 'mass' in baseline.lower():
            output_var = "mass_increase_pct"
        else:
            output_var = f"{current}_vs_{baseline}_pct"

        code = f"{output_var} = (({current} - {baseline}) / {baseline}) * 100.0"

        return GeneratedCode(
            code=code,
            variables_used=[current, baseline],
            variables_created=[output_var],
            imports_needed=[],
            description=description or f"Calculate percentage change from {baseline} to {current}"
        )

    def _generate_percentage(self, input_var: str, divisor: float,
                             description: str) -> GeneratedCode:
        """Generate code to calculate percentage."""
        output_var = f"pct_{input_var}"
        code = f"{output_var} = ({input_var} / {divisor}) * 100.0"

        return GeneratedCode(
            code=code,
            variables_used=[input_var],
            variables_created=[output_var],
            imports_needed=[],
            description=description or f"Calculate percentage of {input_var} vs {divisor}"
        )

    def _generate_sum(self, input_var: str, description: str) -> GeneratedCode:
        """Generate code to calculate sum."""
        output_var = f"total_{input_var}" if not input_var.startswith('total') else input_var
        code = f"{output_var} = sum({input_var})"

        return GeneratedCode(
            code=code,
            variables_used=[input_var],
            variables_created=[output_var],
            imports_needed=[],
            description=description or f"Calculate sum of {input_var}"
        )

    def _generate_ratio(self, numerator: str, denominator: str,
                        description: str) -> GeneratedCode:
        """Generate code to calculate ratio."""
        output_var = f"{numerator}_to_{denominator}_ratio"
        code = f"{output_var} = {numerator} / {denominator}"

        return GeneratedCode(
            code=code,
            variables_used=[numerator, denominator],
            variables_created=[output_var],
            imports_needed=[],
            description=description or f"Calculate ratio of {numerator} to {denominator}"
        )

    def _generate_generic(self, action: str, params: Dict[str, Any],
                          description: str) -> GeneratedCode:
        """Generate generic calculation code."""
        # Extract operation from action name
        operation = action.lower().replace('calculate_', '').replace('find_', '').replace('get_', '')
        input_var = params.get('input', 'value')
        output_var = f"{operation}_result"

        # Try to infer code from parameters
        if 'formula' in params:
            code = f"{output_var} = {params['formula']}"
        else:
            code = f"{output_var} = {input_var}  # TODO: Implement {action}"

        return GeneratedCode(
            code=code,
            variables_used=[input_var] if input_var != 'value' else [],
            variables_created=[output_var],
            imports_needed=[],
            description=description or f"Generic calculation: {action}"
        )

    def _extract_input_variables(self, code: str, params: Dict[str, Any]) -> List[str]:
        """Extract input variable names from code."""
        variables = []

        # Get from params if available
        if 'input' in params:
            variables.append(params['input'])
        if 'inputs' in params:
            variables.extend(params.get('inputs', []))
        if 'current' in params:
            variables.append(params['current'])
        if 'baseline' in params:
            variables.append(params['baseline'])

        # Extract from code (variables on right side of =)
        if '=' in code:
            rhs = code.split('=', 1)[1]
            # Simple extraction of variable names (alphanumeric + underscore)
            import re
            found_vars = re.findall(r'\b[a-zA-Z_][a-zA-Z0-9_]*\b', rhs)
            variables.extend([v for v in found_vars if v not in ['sum', 'min', 'max', 'len', 'abs']])

        return list(set(variables))  # Remove duplicates

    def _extract_output_variables(self, code: str) -> List[str]:
        """Extract output variable names from code."""
        # Variables on left side of =
        if '=' in code:
            lhs = code.split('=', 1)[0].strip()
            return [lhs]
        return []

    def _extract_imports_needed(self, code: str) -> List[str]:
        """Extract required imports from code."""
        imports = []

        # Check for math functions
        if any(func in code for func in ['sqrt', 'pow', 'log', 'exp', 'sin', 'cos']):
            imports.append('import math')

        # Check for numpy functions
        if any(func in code for func in ['np.', 'numpy.']):
            imports.append('import numpy as np')

        return imports

    def generate_batch(self, calculations: List[Dict[str, Any]]) -> List[GeneratedCode]:
        """
        Generate code for multiple calculations.

        Args:
            calculations: List of calculation dictionaries from LLM

        Returns:
            List of GeneratedCode objects
        """
        return [self.generate_from_llm_output(calc) for calc in calculations]

    def generate_executable_script(self, calculations: List[Dict[str, Any]],
                                   inputs: Dict[str, Any] = None) -> str:
        """
        Generate a complete executable Python script with all calculations.

        Args:
            calculations: List of calculations
            inputs: Optional input values for testing

        Returns:
            Complete Python script as string
        """
        generated = self.generate_batch(calculations)

        # Collect all imports
        all_imports = []
        for code in generated:
            all_imports.extend(code.imports_needed)
        all_imports = list(set(all_imports))  # Remove duplicates

        # Build script
        lines = []

        # Header
        lines.append('"""')
        lines.append('Auto-generated inline calculations')
        lines.append('Generated by Atomizer Phase 2.8 Inline Code Generator')
        lines.append('"""')
        lines.append('')

        # Imports
        if all_imports:
            lines.extend(all_imports)
            lines.append('')

        # Input values (if provided for testing)
        if inputs:
            lines.append('# Input values')
            for var_name, value in inputs.items():
                lines.append(f'{var_name} = {repr(value)}')
            lines.append('')

        # Calculations
        lines.append('# Inline calculations')
        for code_obj in generated:
            lines.append(f'# {code_obj.description}')
            lines.append(code_obj.code)
            lines.append('')

        return '\n'.join(lines)


def main():
|
||||
"""Test the inline code generator."""
|
||||
print("=" * 80)
|
||||
print("Phase 2.8: Inline Code Generator Test")
|
||||
print("=" * 80)
|
||||
print()
|
||||
|
||||
generator = InlineCodeGenerator()
|
||||
|
||||
# Test cases from Phase 2.7 LLM output
|
||||
test_calculations = [
|
||||
{
|
||||
"action": "normalize_stress",
|
||||
"description": "Normalize stress by 200 MPa",
|
||||
"params": {
|
||||
"input": "max_stress",
|
||||
"divisor": 200.0,
|
||||
"units": "MPa"
|
||||
}
|
||||
},
|
||||
{
|
||||
"action": "normalize_displacement",
|
||||
"description": "Normalize displacement by 5 mm",
|
||||
"params": {
|
||||
"input": "max_disp_y",
|
||||
"divisor": 5.0,
|
||||
"units": "mm"
|
||||
}
|
||||
},
|
||||
{
|
||||
"action": "calculate_mass_increase",
|
||||
"description": "Calculate mass increase percentage vs baseline",
|
||||
"params": {
|
||||
"current": "panel_total_mass",
|
||||
"baseline": "baseline_mass"
|
||||
}
|
||||
},
|
||||
{
|
||||
"action": "calculate_average",
|
||||
"description": "Calculate average of extracted forces",
|
||||
"params": {
|
||||
"input": "forces_z",
|
||||
"operation": "mean"
|
||||
}
|
||||
},
|
||||
{
|
||||
"action": "find_minimum",
|
||||
"description": "Find minimum force value",
|
||||
"params": {
|
||||
"input": "forces_z",
|
||||
"operation": "min"
|
||||
}
|
||||
}
|
||||
]
|
||||
|
||||
print("Test Calculations:")
|
||||
print()
|
||||
|
||||
for i, calc in enumerate(test_calculations, 1):
|
||||
print(f"{i}. {calc['description']}")
|
||||
code_obj = generator.generate_from_llm_output(calc)
|
||||
print(f" Generated Code: {code_obj.code}")
|
||||
print(f" Inputs: {', '.join(code_obj.variables_used)}")
|
||||
print(f" Outputs: {', '.join(code_obj.variables_created)}")
|
||||
print()
|
||||
|
||||
# Generate complete script
|
||||
print("=" * 80)
|
||||
print("Complete Executable Script:")
|
||||
print("=" * 80)
|
||||
print()
|
||||
|
||||
test_inputs = {
|
||||
'max_stress': 150.5,
|
||||
'max_disp_y': 3.2,
|
||||
'panel_total_mass': 2.8,
|
||||
'baseline_mass': 2.5,
|
||||
'forces_z': [10.5, 12.3, 8.9, 11.2, 9.8]
|
||||
}
|
||||
|
||||
script = generator.generate_executable_script(test_calculations, test_inputs)
|
||||
print(script)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
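The generated snippets are consumed elsewhere by executing them in a namespace seeded with extracted results and harvesting whatever variables they create. That pattern can be sketched in isolation; the snippet strings and result names below are illustrative only, not part of the module:

```python
# Sketch: run generated inline-calculation snippets in a namespace seeded
# with extracted results, then harvest only the variables the snippets created.
# The snippet strings and result names here are illustrative.
results = {'max_stress': 150.5, 'forces_z': [10.5, 12.3, 8.9, 11.2, 9.8]}
inline_code = [
    'norm_stress = max_stress / 200.0',
    'avg_force = sum(forces_z) / len(forces_z)',
]

namespace = dict(results)          # seed with extracted results
for snippet in inline_code:
    exec(snippet, namespace)       # exec adds __builtins__; filtered out below

calculations = {k: v for k, v in namespace.items()
                if k not in results and not k.startswith('_')}
print(sorted(calculations))        # ['avg_force', 'norm_stress']
```

Filtering on `startswith('_')` is what keeps the injected `__builtins__` entry out of the harvested results.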
535  optimization_engine/future/llm_optimization_runner.py  Normal file
@@ -0,0 +1,535 @@
"""
LLM-Enhanced Optimization Runner - Phase 3.2

Flexible LLM-enhanced optimization runner that integrates:
- Phase 2.7: LLM workflow analysis
- Phase 2.8: Inline code generation (optional)
- Phase 2.9: Post-processing hook generation (optional)
- Phase 3.0: pyNastran research agent (optional)
- Phase 3.1: Extractor orchestration (optional)

This runner enables users to describe optimization goals in natural language
and choose automated code generation, manual coding, or a hybrid approach.

Author: Atomizer Development Team
Version: 0.1.0 (Phase 3.2)
Last Updated: 2025-01-16
"""

from pathlib import Path
from typing import Dict, Any, List, Optional
import json
import logging
import optuna
from datetime import datetime

# Archived siblings now live under optimization_engine/future/
from optimization_engine.future.extractor_orchestrator import ExtractorOrchestrator
from optimization_engine.future.inline_code_generator import InlineCodeGenerator
from optimization_engine.future.hook_generator import HookGenerator
from optimization_engine.plugins.hook_manager import HookManager

logger = logging.getLogger(__name__)
class LLMOptimizationRunner:
    """
    LLM-enhanced optimization runner with flexible automation options.

    This runner empowers users to leverage LLM-assisted code generation for:
    - OP2 result extractors (Phase 3.1) - optional
    - Inline calculations (Phase 2.8) - optional
    - Post-processing hooks (Phase 2.9) - optional

    Users can describe goals in natural language and choose automated generation,
    manual coding, or a hybrid approach based on their needs.
    """

    def __init__(self,
                 llm_workflow: Dict[str, Any],
                 model_updater: callable,
                 simulation_runner: callable,
                 study_name: str = "llm_optimization",
                 output_dir: Optional[Path] = None):
        """
        Initialize LLM-driven optimization runner.

        Args:
            llm_workflow: Output from Phase 2.7 LLM analysis with:
                - engineering_features: List of FEA operations
                - inline_calculations: List of simple math operations
                - post_processing_hooks: List of custom calculations
                - optimization: Dict with algorithm, design_variables, etc.
            model_updater: Function(design_vars: Dict) -> None
                Updates NX expressions in the CAD model and saves changes.
            simulation_runner: Function() -> Path
                Runs the FEM simulation on the already-updated model and
                returns the path to the OP2 results file. Design variables
                are not passed; the model is updated by model_updater first.
            study_name: Name for Optuna study
            output_dir: Directory for results
        """
        self.llm_workflow = llm_workflow
        self.model_updater = model_updater
        self.simulation_runner = simulation_runner
        self.study_name = study_name

        if output_dir is None:
            output_dir = Path.cwd() / "optimization_results" / study_name
        self.output_dir = Path(output_dir)
        self.output_dir.mkdir(parents=True, exist_ok=True)

        # Save LLM workflow configuration for transparency and documentation
        workflow_config_file = self.output_dir / "llm_workflow_config.json"
        with open(workflow_config_file, 'w') as f:
            json.dump(llm_workflow, f, indent=2)
        logger.info(f"LLM workflow configuration saved to: {workflow_config_file}")

        # Initialize automation components
        self._initialize_automation()

        # Optuna study
        self.study = None
        self.history = []

        logger.info(f"LLMOptimizationRunner initialized for study: {study_name}")

    def _initialize_automation(self):
        """Initialize all automation components from LLM workflow."""
        logger.info("Initializing automation components...")

        # Phase 3.1: Extractor Orchestrator (NEW ARCHITECTURE)
        logger.info("  - Phase 3.1: Extractor Orchestrator")
        # NEW: Pass output_dir only for manifest, extractors go to core library
        self.orchestrator = ExtractorOrchestrator(
            extractors_dir=self.output_dir,  # Only for manifest file
            use_core_library=True  # Enable centralized library
        )

        # Generate extractors from LLM workflow (stored in core library now)
        self.extractors = self.orchestrator.process_llm_workflow(self.llm_workflow)
        logger.info(f"    {len(self.extractors)} extractor(s) available from core library")

        # Phase 2.8: Inline Code Generator
        logger.info("  - Phase 2.8: Inline Code Generator")
        self.inline_generator = InlineCodeGenerator()
        self.inline_code = []

        for calc in self.llm_workflow.get('inline_calculations', []):
            generated = self.inline_generator.generate_from_llm_output(calc)
            self.inline_code.append(generated.code)

        logger.info(f"    Generated {len(self.inline_code)} inline calculation(s)")

        # Phase 2.9: Hook Generator (TODO: Should also use centralized library in future)
        logger.info("  - Phase 2.9: Hook Generator")
        self.hook_generator = HookGenerator()

        # For now, hooks are not generated per-study unless they're truly custom
        # Most hooks should be in the core library (optimization_engine/hooks/)
        post_processing_hooks = self.llm_workflow.get('post_processing_hooks', [])

        if post_processing_hooks:
            logger.info(f"    Note: {len(post_processing_hooks)} custom hooks requested")
            logger.info("    Future: These should also use centralized library")
            # TODO: Implement hook library system similar to extractors

        # Phase 1: Hook Manager
        logger.info("  - Phase 1: Hook Manager")
        self.hook_manager = HookManager()

        # Load system hooks from core library
        system_hooks_dir = Path(__file__).parent / 'plugins'
        if system_hooks_dir.exists():
            self.hook_manager.load_plugins_from_directory(system_hooks_dir)

        summary = self.hook_manager.get_summary()
        logger.info(f"    Loaded {summary['enabled_hooks']} hook(s) from core library")

        logger.info("Automation components initialized successfully!")

    def _create_optuna_study(self) -> optuna.Study:
        """Create Optuna study from LLM workflow optimization settings."""
        opt_config = self.llm_workflow.get('optimization', {})

        # Determine direction (minimize or maximize)
        direction = opt_config.get('direction', 'minimize')

        # Create study
        study = optuna.create_study(
            study_name=self.study_name,
            direction=direction,
            storage=f"sqlite:///{self.output_dir / f'{self.study_name}.db'}",
            load_if_exists=True
        )

        logger.info(f"Created Optuna study: {self.study_name} (direction: {direction})")
        return study

    def _objective(self, trial: optuna.Trial) -> float:
        """
        Optuna objective function - LLM-enhanced with flexible automation.

        This function leverages LLM workflow analysis with user-configurable automation:
        1. Suggests design variables from LLM analysis
        2. Updates model
        3. Runs simulation
        4. Extracts results (using generated or manual extractors)
        5. Executes inline calculations (generated or manual)
        6. Executes post-calculation hooks (generated or manual)
        7. Returns objective value

        Args:
            trial: Optuna trial

        Returns:
            Objective value
        """
        trial_number = trial.number
        logger.info(f"\n{'='*80}")
        logger.info(f"Trial {trial_number} starting...")
        logger.info(f"{'='*80}")

        # ====================================================================
        # STEP 1: Suggest Design Variables
        # ====================================================================
        design_vars_config = self.llm_workflow.get('optimization', {}).get('design_variables', [])

        design_vars = {}
        for var_config in design_vars_config:
            var_name = var_config['parameter']

            # Parse bounds - LLM returns 'bounds' as [min, max]
            if 'bounds' in var_config:
                var_min, var_max = var_config['bounds']
            else:
                # Fallback to old format
                var_min = var_config.get('min', 0.0)
                var_max = var_config.get('max', 1.0)

            # Suggest value using Optuna
            design_vars[var_name] = trial.suggest_float(var_name, var_min, var_max)

        logger.info(f"Design variables: {design_vars}")

        # Execute pre-solve hooks
        self.hook_manager.execute_hooks('pre_solve', {
            'trial_number': trial_number,
            'design_variables': design_vars
        })

        # ====================================================================
        # STEP 2: Update Model
        # ====================================================================
        logger.info("Updating model...")
        self.model_updater(design_vars)

        # ====================================================================
        # STEP 3: Run Simulation
        # ====================================================================
        logger.info("Running simulation...")
        # NOTE: We do NOT pass design_vars to simulation_runner because:
        # 1. The PRT file was already updated by model_updater (via NX import journal)
        # 2. The solver just needs to load the SIM which references the updated PRT
        # 3. Passing design_vars would use hardcoded expression names that don't match our model
        op2_file = self.simulation_runner()
        logger.info(f"Simulation complete: {op2_file}")

        # Execute post-solve hooks
        self.hook_manager.execute_hooks('post_solve', {
            'trial_number': trial_number,
            'op2_file': op2_file
        })

        # ====================================================================
        # STEP 4: Extract Results (Phase 3.1 - Auto-Generated Extractors)
        # ====================================================================
        logger.info("Extracting results...")

        results = {}
        for extractor in self.extractors:
            try:
                extraction_result = self.orchestrator.execute_extractor(
                    extractor.name,
                    Path(op2_file),
                    subcase=1
                )
                results.update(extraction_result)
                logger.info(f"  {extractor.name}: {list(extraction_result.keys())}")
            except Exception as e:
                logger.error(f"Extraction failed for {extractor.name}: {e}")
                # Continue with other extractors

        # Execute post-extraction hooks
        self.hook_manager.execute_hooks('post_extraction', {
            'trial_number': trial_number,
            'results': results
        })

        # ====================================================================
        # STEP 5: Inline Calculations (Phase 2.8 - Auto-Generated Code)
        # ====================================================================
        logger.info("Executing inline calculations...")

        calculations = {}
        calc_namespace = {**results, **calculations}  # Make results available

        for calc_code in self.inline_code:
            try:
                exec(calc_code, calc_namespace)
                # Extract newly created variables
                for key, value in calc_namespace.items():
                    if key not in results and not key.startswith('_'):
                        calculations[key] = value

                logger.info(f"  Executed: {calc_code[:50]}...")
            except Exception as e:
                logger.error(f"Inline calculation failed: {e}")

        logger.info(f"Calculations: {calculations}")

        # ====================================================================
        # STEP 6: Post-Calculation Hooks (Phase 2.9 - Auto-Generated Hooks)
        # ====================================================================
        logger.info("Executing post-calculation hooks...")

        hook_results = self.hook_manager.execute_hooks('post_calculation', {
            'trial_number': trial_number,
            'design_variables': design_vars,
            'results': results,
            'calculations': calculations
        })

        # Merge hook results
        final_context = {**results, **calculations}
        for hook_result in hook_results:
            if hook_result:
                final_context.update(hook_result)

        logger.info(f"Hook results: {hook_results}")

        # ====================================================================
        # STEP 7: Extract Objective Value
        # ====================================================================

        # Try to get objective from hooks first
        objective = None

        # Check hook results for 'objective' or 'weighted_objective'
        for hook_result in hook_results:
            if hook_result:
                if 'objective' in hook_result:
                    objective = hook_result['objective']
                    break
                elif 'weighted_objective' in hook_result:
                    objective = hook_result['weighted_objective']
                    break

        # Fallback: use first extracted result
        if objective is None:
            # Try common objective names
            for key in ['max_displacement', 'max_stress', 'max_von_mises']:
                if key in final_context:
                    objective = final_context[key]
                    logger.warning(f"No explicit objective found, using: {key}")
                    break

        if objective is None:
            raise ValueError("Could not determine objective value from results/calculations/hooks")

        logger.info(f"Objective value: {objective}")

        # Save trial history
        trial_data = {
            'trial_number': trial_number,
            'design_variables': design_vars,
            'results': results,
            'calculations': calculations,
            'objective': objective
        }
        self.history.append(trial_data)

        # Incremental save - write history after each trial
        # This allows monitoring progress in real-time
        self._save_incremental_history()

        return float(objective)

    def run_optimization(self, n_trials: int = 50) -> Dict[str, Any]:
        """
        Run LLM-enhanced optimization with flexible automation.

        Args:
            n_trials: Number of optimization trials

        Returns:
            Dict with:
            - best_params: Best design variable values
            - best_value: Best objective value
            - history: Complete trial history
        """
        logger.info(f"\n{'='*80}")
        logger.info("Starting LLM-Driven Optimization")
        logger.info(f"{'='*80}")
        logger.info(f"Study: {self.study_name}")
        logger.info(f"Trials: {n_trials}")
        logger.info(f"Output: {self.output_dir}")
        logger.info(f"{'='*80}\n")

        # Create study
        self.study = self._create_optuna_study()

        # Run optimization
        self.study.optimize(self._objective, n_trials=n_trials)

        # Get results
        best_trial = self.study.best_trial

        results = {
            'best_params': best_trial.params,
            'best_value': best_trial.value,
            'best_trial_number': best_trial.number,
            'history': self.history
        }

        # Save results
        self._save_results(results)

        logger.info(f"\n{'='*80}")
        logger.info("Optimization Complete!")
        logger.info(f"{'='*80}")
        logger.info(f"Best value: {results['best_value']}")
        logger.info(f"Best params: {results['best_params']}")
        logger.info(f"Results saved to: {self.output_dir}")
        logger.info(f"{'='*80}\n")

        return results

    def _save_incremental_history(self):
        """
        Save trial history incrementally after each trial.
        This allows real-time monitoring of optimization progress.
        """
        history_file = self.output_dir / "optimization_history_incremental.json"

        # Convert history to JSON-serializable format
        serializable_history = []
        for trial in self.history:
            trial_copy = trial.copy()
            # Convert any numpy types to native Python types
            for key in ['results', 'calculations', 'design_variables']:
                if key in trial_copy:
                    trial_copy[key] = {k: float(v) if isinstance(v, (int, float)) else v
                                       for k, v in trial_copy[key].items()}
            if 'objective' in trial_copy:
                trial_copy['objective'] = float(trial_copy['objective'])
            serializable_history.append(trial_copy)

        # Write to file
        with open(history_file, 'w') as f:
            json.dump(serializable_history, f, indent=2, default=str)

    def _save_results(self, results: Dict[str, Any]):
        """Save optimization results summary to file."""
        results_file = self.output_dir / "optimization_results.json"

        # Summary only - the full trial history is saved incrementally
        serializable_results = {
            'best_params': results['best_params'],
            'best_value': results['best_value'],
            'best_trial_number': results['best_trial_number'],
            'timestamp': datetime.now().isoformat(),
            'study_name': self.study_name,
            'n_trials': len(results['history'])
        }

        with open(results_file, 'w') as f:
            json.dump(serializable_results, f, indent=2)

        logger.info(f"Results saved to: {results_file}")


def main():
    """Test LLM-driven optimization runner."""
    print("=" * 80)
    print("Phase 3.2: LLM-Driven Optimization Runner Test")
    print("=" * 80)
    print()

    # Example LLM workflow (from Phase 2.7)
    llm_workflow = {
        "engineering_features": [
            {
                "action": "extract_displacement",
                "domain": "result_extraction",
                "description": "Extract displacement from OP2",
                "params": {"result_type": "displacement"}
            }
        ],
        "inline_calculations": [
            {
                "action": "normalize",
                "params": {
                    "input": "max_displacement",
                    "reference": "max_allowed_disp",
                    "value": 5.0
                },
                "code_hint": "norm_disp = max_displacement / 5.0"
            }
        ],
        "post_processing_hooks": [
            {
                "action": "weighted_objective",
                "params": {
                    "inputs": ["norm_disp"],
                    "weights": [1.0],
                    "objective": "minimize"
                }
            }
        ],
        "optimization": {
            "algorithm": "TPE",
            "direction": "minimize",
            "design_variables": [
                {
                    "parameter": "wall_thickness",
                    "min": 3.0,
                    "max": 8.0,
                    "type": "continuous"
                }
            ]
        }
    }

    print("LLM Workflow Configuration:")
    print(f"  Engineering features: {len(llm_workflow['engineering_features'])}")
    print(f"  Inline calculations: {len(llm_workflow['inline_calculations'])}")
    print(f"  Post-processing hooks: {len(llm_workflow['post_processing_hooks'])}")
    print(f"  Design variables: {len(llm_workflow['optimization']['design_variables'])}")
    print()

    # Dummy functions for testing
    def dummy_model_updater(design_vars):
        print(f"  [Dummy] Updating model with: {design_vars}")

    def dummy_simulation_runner():
        print("  [Dummy] Running simulation...")
        # Return path to test OP2
        return Path("tests/bracket_sim1-solution_1.op2")

    # Initialize runner
    print("Initializing LLM-driven optimization runner...")
    runner = LLMOptimizationRunner(
        llm_workflow=llm_workflow,
        model_updater=dummy_model_updater,
        simulation_runner=dummy_simulation_runner,
        study_name="test_llm_optimization"
    )

    print()
    print("=" * 80)
    print("Runner initialized successfully!")
    print("Ready to run optimization with auto-generated code!")
    print("=" * 80)


if __name__ == '__main__':
    main()
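The objective-resolution precedence in Step 7 (an explicit hook-provided objective wins, then common extracted-result names serve as a fallback) can be factored into a standalone helper. This is a sketch; `resolve_objective` is a hypothetical name, not part of the runner:

```python
# Sketch of the runner's Step 7 precedence: a hook-provided 'objective' or
# 'weighted_objective' wins; otherwise fall back to common result names.
def resolve_objective(hook_results, final_context,
                      fallback_keys=('max_displacement', 'max_stress', 'max_von_mises')):
    for hook_result in hook_results:
        if hook_result:
            for key in ('objective', 'weighted_objective'):
                if key in hook_result:
                    return float(hook_result[key])
    for key in fallback_keys:
        if key in final_context:
            return float(final_context[key])  # the runner logs a warning here
    raise ValueError("Could not determine objective value")

print(resolve_objective([{}, {'weighted_objective': 0.42}], {}))  # 0.42
print(resolve_objective([], {'max_stress': 180.0}))               # 180.0
```

Keeping the fallback key list explicit makes the implicit contract between extractors and the objective visible in one place.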
423  optimization_engine/future/llm_workflow_analyzer.py  Normal file
@@ -0,0 +1,423 @@
"""
LLM-Powered Workflow Analyzer - Phase 2.7

Uses Claude (LLM) to intelligently analyze user requests, replacing the
earlier brittle regex-based pattern matching.

Integration modes:
1. Claude Code Skill (preferred for development) - uses Claude Code's built-in AI
2. Anthropic API (fallback for standalone) - requires API key

Author: Atomizer Development Team
Version: 0.2.0 (Phase 2.7)
Last Updated: 2025-01-16
"""

import json
import os
import subprocess
import tempfile
from typing import List, Dict, Any, Optional
from dataclasses import dataclass
from pathlib import Path

try:
    from anthropic import Anthropic
    HAS_ANTHROPIC = True
except ImportError:
    HAS_ANTHROPIC = False


@dataclass
class WorkflowStep:
    """A single step in an optimization workflow."""
    action: str
    domain: str
    params: Dict[str, Any]
    step_type: str  # 'engineering_feature', 'inline_calculation', 'post_processing_hook'
    priority: int = 0


class LLMWorkflowAnalyzer:
    """
    Uses the Claude LLM to intelligently analyze optimization requests,
    replacing brittle regex-based pattern matching.

    Integration modes:
    1. Claude Code integration (use_claude_code=True) - preferred for development
    2. Direct API (api_key provided) - for standalone execution
    3. Fallback heuristics (neither provided) - basic pattern matching
    """

    def __init__(self, api_key: Optional[str] = None, use_claude_code: bool = True):
        """
        Initialize LLM analyzer.

        Args:
            api_key: Anthropic API key (optional, for standalone mode)
            use_claude_code: Use Claude Code skill for analysis (default: True)
        """
        self.use_claude_code = use_claude_code
        self.client = None

        if api_key and HAS_ANTHROPIC:
            self.client = Anthropic(api_key=api_key)
            self.use_claude_code = False  # Prefer direct API if key provided

    def analyze_request(self, user_request: str) -> Dict[str, Any]:
        """
        Use Claude to analyze the request and extract workflow steps intelligently.

        Returns:
            {
                'engineering_features': [...],
                'inline_calculations': [...],
                'post_processing_hooks': [...],
                'optimization': {...}
            }
        """
        prompt = f"""You are analyzing a structural optimization request for the Atomizer system.

USER REQUEST:
{user_request}

Your task: Break this down into atomic workflow steps and classify each step.

STEP TYPES:
1. ENGINEERING FEATURES - Complex FEA/CAE operations needing specialized knowledge:
   - Extract results from OP2 files (displacement, stress, strain, element forces, etc.)
   - Modify FEA properties (CBUSH/CBAR stiffness, PCOMP layup, material properties)
   - Run simulations (SOL101, SOL103, etc.)
   - Create/modify geometry in NX

2. INLINE CALCULATIONS - Simple math operations (auto-generate Python):
   - Calculate average, min, max, sum
   - Compare values, compute ratios
   - Statistical operations

3. POST-PROCESSING HOOKS - Custom calculations between FEA steps:
   - Custom objective functions combining multiple results
   - Data transformations
   - Filtering/aggregation logic

4. OPTIMIZATION - Algorithm and configuration:
   - Optuna, genetic algorithm, etc.
   - Design variables and their ranges
   - Multi-objective vs single objective

IMPORTANT DISTINCTIONS:
- "extract forces from 1D elements" → ENGINEERING FEATURE (needs pyNastran/OP2 knowledge)
- "find average of forces" → INLINE CALCULATION (simple Python: sum/len)
- "compare max to average and create metric" → POST-PROCESSING HOOK (custom logic)
- Element forces vs Reaction forces are DIFFERENT (element internal forces vs nodal reactions)
- CBUSH vs CBAR are different element types with different properties

Return a JSON object with this EXACT structure:
{{
  "engineering_features": [
    {{
      "action": "extract_1d_element_forces",
      "domain": "result_extraction",
      "description": "Extract element forces from 1D elements (CBAR/CBUSH) in Z direction",
      "params": {{
        "element_types": ["CBAR", "CBUSH"],
        "result_type": "element_force",
        "direction": "Z"
      }}
    }}
  ],
  "inline_calculations": [
    {{
      "action": "calculate_average",
      "description": "Calculate average of extracted forces",
      "params": {{
        "input": "forces_z",
        "operation": "mean"
      }}
    }},
    {{
      "action": "find_minimum",
      "description": "Find minimum force value",
      "params": {{
        "input": "forces_z",
        "operation": "min"
      }}
    }}
  ],
  "post_processing_hooks": [
    {{
      "action": "custom_objective_metric",
      "description": "Compare minimum to average and create objective metric",
      "params": {{
        "inputs": ["min_force", "avg_force"],
        "formula": "min_force / avg_force",
        "objective": "minimize"
      }}
    }}
  ],
  "optimization": {{
    "algorithm": "genetic_algorithm",
    "design_variables": [
      {{
        "parameter": "cbar_stiffness_x",
        "type": "FEA_property",
        "element_type": "CBAR"
      }}
    ],
    "objectives": [
      {{
        "type": "minimize",
        "target": "custom_objective_metric"
      }}
    ]
  }}
}}

Analyze the request and return ONLY the JSON, no other text."""

        if self.client:
            # Use Claude API
            response = self.client.messages.create(
                model="claude-sonnet-4-20250514",
                max_tokens=4000,
                messages=[{
                    "role": "user",
                    "content": prompt
                }]
            )

            # Extract JSON from response
            content = response.content[0].text

            # Find JSON in response
            start = content.find('{')
            end = content.rfind('}') + 1
            json_str = content[start:end]

            return json.loads(json_str)
        else:
            # Fallback: return a template showing expected format
            return {
                "engineering_features": [],
                "inline_calculations": [],
                "post_processing_hooks": [],
                "optimization": {},
                "error": "No API key provided - cannot analyze request"
            }

    def to_workflow_steps(self, analysis: Dict[str, Any]) -> List[WorkflowStep]:
        """Convert LLM analysis to WorkflowStep objects."""
        steps = []
        priority = 0

        # Add engineering features
        for feature in analysis.get('engineering_features', []):
            steps.append(WorkflowStep(
                action=feature['action'],
                domain=feature['domain'],
                params=feature.get('params', {}),
                step_type='engineering_feature',
                priority=priority
            ))
            priority += 1

        # Add inline calculations
        for calc in analysis.get('inline_calculations', []):
            steps.append(WorkflowStep(
                action=calc['action'],
                domain='calculation',
                params=calc.get('params', {}),
                step_type='inline_calculation',
                priority=priority
            ))
            priority += 1

        # Add post-processing hooks
        for hook in analysis.get('post_processing_hooks', []):
            steps.append(WorkflowStep(
                action=hook['action'],
                domain='post_processing',
                params=hook.get('params', {}),
                step_type='post_processing_hook',
                priority=priority
            ))
            priority += 1

        # Add optimization
        opt = analysis.get('optimization', {})
        if opt:
            steps.append(WorkflowStep(
                action='optimize',
                domain='optimization',
                params=opt,
                step_type='engineering_feature',
                priority=priority
            ))

        return steps

    def get_summary(self, analysis: Dict[str, Any]) -> str:
        """Generate human-readable summary of the analysis."""
        lines = []
        lines.append("LLM Workflow Analysis")
        lines.append("=" * 80)
        lines.append("")

        # Engineering features
        eng_features = analysis.get('engineering_features', [])
        lines.append(f"Engineering Features (Need Research): {len(eng_features)}")
        for feature in eng_features:
            lines.append(f"  - {feature['action']}")
            lines.append(f"    Description: {feature.get('description', 'N/A')}")
            lines.append(f"    Domain: {feature['domain']}")
        lines.append("")

        # Inline calculations
        inline_calcs = analysis.get('inline_calculations', [])
        lines.append(f"Inline Calculations (Auto-Generate): {len(inline_calcs)}")
        for calc in inline_calcs:
            lines.append(f"  - {calc['action']}")
            lines.append(f"    Description: {calc.get('description', 'N/A')}")
        lines.append("")

        # Post-processing hooks
        hooks = analysis.get('post_processing_hooks', [])
        lines.append(f"Post-Processing Hooks (Generate Middleware): {len(hooks)}")
        for hook in hooks:
            lines.append(f"  - {hook['action']}")
            lines.append(f"    Description: {hook.get('description', 'N/A')}")
            if 'formula' in hook.get('params', {}):
                lines.append(f"    Formula: {hook['params']['formula']}")
        lines.append("")

        # Optimization
        opt = analysis.get('optimization', {})
        if opt:
            lines.append("Optimization Configuration:")
            lines.append(f"  Algorithm: {opt.get('algorithm', 'N/A')}")
            if 'design_variables' in opt:
                lines.append(f"  Design Variables: {len(opt['design_variables'])}")
                for var in opt['design_variables']:
                    lines.append(f"    - {var.get('parameter', 'N/A')} ({var.get('type', 'N/A')})")
            if 'objectives' in opt:
                lines.append("  Objectives:")
                for obj in opt['objectives']:
                    lines.append(f"    - {obj.get('type', 'N/A')} {obj.get('target', 'N/A')}")
            lines.append("")

        # Summary
        total_steps = len(eng_features) + len(inline_calcs) + len(hooks) + (1 if opt else 0)
        lines.append(f"Total Steps: {total_steps}")
        lines.append(f"  Engineering: {len(eng_features)} (need research/documentation)")
        lines.append(f"  Simple Math: {len(inline_calcs)} (auto-generate Python)")
        lines.append(f"  Hooks: {len(hooks)} (generate middleware)")
        lines.append(f"  Optimization: {1 if opt else 0}")

        return "\n".join(lines)


def main():
    """Test the LLM workflow analyzer."""
    import os

    print("=" * 80)
    print("LLM-Powered Workflow Analyzer Test")
    print("=" * 80)
    print()

    # Test request
    request = """I want to extract forces in direction Z of all the 1D elements and find the average of it,
then find the minimum value and compare it to the average, then assign it to a objective metric that needs to be minimized.

I want to iterate on the FEA properties of the Cbar element stiffness in X to make the objective function minimized.

I want to use genetic algorithm to iterate and optimize this"""

    print("User Request:")
    print(request)
    print()
    print("=" * 80)
    print()

    # Get API key from environment
|
||||
api_key = os.environ.get('ANTHROPIC_API_KEY')
|
||||
|
||||
if not api_key:
|
||||
print("WARNING: No ANTHROPIC_API_KEY found in environment")
|
||||
print("Set it with: export ANTHROPIC_API_KEY=your_key_here")
|
||||
print()
|
||||
print("Showing expected output format instead...")
|
||||
print()
|
||||
|
||||
# Show what the output should look like
|
||||
expected = {
|
||||
"engineering_features": [
|
||||
{
|
||||
"action": "extract_1d_element_forces",
|
||||
"domain": "result_extraction",
|
||||
"description": "Extract element forces from 1D elements in Z direction",
|
||||
"params": {
|
||||
"element_types": ["CBAR"],
|
||||
"result_type": "element_force",
|
||||
"direction": "Z"
|
||||
}
|
||||
}
|
||||
],
|
||||
"inline_calculations": [
|
||||
{
|
||||
"action": "calculate_average",
|
||||
"description": "Calculate average of extracted forces",
|
||||
"params": {"input": "forces_z", "operation": "mean"}
|
||||
},
|
||||
{
|
||||
"action": "find_minimum",
|
||||
"description": "Find minimum force value",
|
||||
"params": {"input": "forces_z", "operation": "min"}
|
||||
}
|
||||
],
|
||||
"post_processing_hooks": [
|
||||
{
|
||||
"action": "custom_objective_metric",
|
||||
"description": "Compare minimum to average",
|
||||
"params": {
|
||||
"inputs": ["min_force", "avg_force"],
|
||||
"formula": "min_force / avg_force",
|
||||
"objective": "minimize"
|
||||
}
|
||||
}
|
||||
],
|
||||
"optimization": {
|
||||
"algorithm": "genetic_algorithm",
|
||||
"design_variables": [
|
||||
{"parameter": "cbar_stiffness_x", "type": "FEA_property"}
|
||||
],
|
||||
"objectives": [{"type": "minimize", "target": "custom_objective_metric"}]
|
||||
}
|
||||
}
|
||||
|
||||
analyzer = LLMWorkflowAnalyzer()
|
||||
print(analyzer.get_summary(expected))
|
||||
return
|
||||
|
||||
# Use LLM to analyze
|
||||
analyzer = LLMWorkflowAnalyzer(api_key=api_key)
|
||||
|
||||
print("Calling Claude to analyze request...")
|
||||
print()
|
||||
|
||||
analysis = analyzer.analyze_request(request)
|
||||
|
||||
print("LLM Analysis Complete!")
|
||||
print()
|
||||
print(analyzer.get_summary(analysis))
|
||||
|
||||
print()
|
||||
print("=" * 80)
|
||||
print("Raw JSON Analysis:")
|
||||
print("=" * 80)
|
||||
print(json.dumps(analysis, indent=2))
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
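The step accounting in `get_summary` (one step per feature, calculation, and hook, plus one when an `optimization` block is present) can be checked standalone. The dict below is an abbreviated, hypothetical analysis payload in the same shape the analyzer returns, trimmed to only the keys the count touches:

```python
# Abbreviated, hypothetical analysis payload; only the keys that
# get_summary counts are included here.
analysis = {
    "engineering_features": [
        {"action": "extract_1d_element_forces", "domain": "result_extraction"},
    ],
    "inline_calculations": [
        {"action": "calculate_average"},
        {"action": "find_minimum"},
    ],
    "post_processing_hooks": [
        {"action": "custom_objective_metric"},
    ],
    "optimization": {"algorithm": "genetic_algorithm"},
}

# Same arithmetic as get_summary: one step per entry,
# plus one for a non-empty optimization block.
total_steps = (
    len(analysis["engineering_features"])
    + len(analysis["inline_calculations"])
    + len(analysis["post_processing_hooks"])
    + (1 if analysis["optimization"] else 0)
)
print(total_steps)  # 1 feature + 2 calculations + 1 hook + 1 optimization = 5
```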
134
optimization_engine/future/report_generator.py
Normal file
@@ -0,0 +1,134 @@
"""
Report Generator Utility
Generates Markdown/HTML/PDF reports for optimization studies
"""

import json
from pathlib import Path
from typing import Optional
from datetime import datetime

import markdown  # third-party: pip install markdown


def generate_study_report(
    study_dir: Path,
    output_format: str = "markdown",
    include_llm_summary: bool = False
) -> Optional[Path]:
    """
    Generate a report for the study.

    Args:
        study_dir: Path to the study directory
        output_format: 'markdown', 'html', or 'pdf'
        include_llm_summary: Whether to include an AI-generated summary

    Returns:
        Path to the generated report file, or None on failure
    """
    try:
        # Load data
        config_path = study_dir / "1_setup" / "optimization_config.json"
        history_path = study_dir / "2_results" / "optimization_history_incremental.json"

        if not config_path.exists() or not history_path.exists():
            return None

        with open(config_path) as f:
            config = json.load(f)

        with open(history_path) as f:
            history = json.load(f)

        # Find best trial
        best_trial = None
        if history:
            best_trial = min(history, key=lambda x: x['objective'])

        # Generate Markdown content
        md_content = f"""# Optimization Report: {config.get('study_name', study_dir.name)}

**Date**: {datetime.now().strftime('%Y-%m-%d %H:%M')}
**Status**: {'Completed' if len(history) >= config.get('optimization_settings', {}).get('n_trials', 50) else 'In Progress'}

## Executive Summary
{_generate_summary(history, best_trial, include_llm_summary)}

## Study Configuration
- **Objectives**: {', '.join([o['name'] for o in config.get('objectives', [])])}
- **Design Variables**: {len(config.get('design_variables', []))} variables
- **Total Trials**: {len(history)}

## Best Result (Trial #{best_trial['trial_number'] if best_trial else 'N/A'})
- **Objective Value**: {best_trial['objective'] if best_trial else 'N/A'}
- **Parameters**:
"""

        if best_trial:
            for k, v in best_trial['design_variables'].items():
                md_content += f"  - **{k}**: {v:.4f}\n"

        md_content += "\n## Optimization Progress\n"
        md_content += "The optimization process showed convergence towards the optimal solution.\n"

        # Save report based on format
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
        output_dir = study_dir / "2_results"

        if output_format in ['markdown', 'md']:
            output_path = output_dir / f"optimization_report_{timestamp}.md"
            with open(output_path, 'w') as f:
                f.write(md_content)

        elif output_format == 'html':
            output_path = output_dir / f"optimization_report_{timestamp}.html"
            html_content = markdown.markdown(md_content)
            # Add basic styling
            styled_html = f"""
<html>
<head>
<style>
body {{ font-family: sans-serif; max-width: 800px; margin: 0 auto; padding: 20px; }}
h1 {{ color: #2563eb; }}
h2 {{ border-bottom: 1px solid #e5e7eb; padding-bottom: 10px; margin-top: 30px; }}
code {{ background: #f3f4f6; padding: 2px 4px; border-radius: 4px; }}
</style>
</head>
<body>
{html_content}
</body>
</html>
"""
            with open(output_path, 'w') as f:
                f.write(styled_html)

        elif output_format == 'pdf':
            # Requires weasyprint
            try:
                from weasyprint import HTML
                output_path = output_dir / f"optimization_report_{timestamp}.pdf"
                html_content = markdown.markdown(md_content)
                HTML(string=html_content).write_pdf(str(output_path))
            except ImportError:
                print("WeasyPrint not installed, falling back to HTML")
                return generate_study_report(study_dir, 'html', include_llm_summary)

        else:
            # Guard against an unbound output_path for unknown formats
            print(f"Unknown output format: {output_format}")
            return None

        return output_path

    except Exception as e:
        print(f"Report generation error: {e}")
        return None


def _generate_summary(history, best_trial, use_llm):
    if use_llm:
        return "[AI Summary Placeholder] The optimization successfully identified a design that minimizes mass while satisfying all constraints."

    if not history:
        return "No trials completed yet."

    improvement = 0
    if len(history) > 1:
        first = history[0]['objective']
        best = best_trial['objective']
        improvement = ((first - best) / first) * 100

    return f"The optimization run completed {len(history)} trials. The best design found (Trial #{best_trial['trial_number']}) achieved an objective value of {best_trial['objective']:.4f}, representing a {improvement:.1f}% improvement over the initial design."
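As a usage sketch for `generate_study_report`: the fixture below builds the minimal study layout the function reads (`1_setup/optimization_config.json` and `2_results/optimization_history_incremental.json`) and mirrors its best-trial selection (minimum objective wins). All file contents here are hypothetical illustration data, not output from a real study:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical study directory in the layout generate_study_report expects.
study_dir = Path(tempfile.mkdtemp()) / "demo_study"
(study_dir / "1_setup").mkdir(parents=True)
(study_dir / "2_results").mkdir(parents=True)

config = {
    "study_name": "demo",
    "objectives": [{"name": "mass"}],
    "design_variables": [{"name": "thickness"}],
    "optimization_settings": {"n_trials": 3},
}
history = [
    {"trial_number": 0, "objective": 12.0, "design_variables": {"thickness": 2.0}},
    {"trial_number": 1, "objective": 9.5, "design_variables": {"thickness": 1.6}},
    {"trial_number": 2, "objective": 10.1, "design_variables": {"thickness": 1.8}},
]
(study_dir / "1_setup" / "optimization_config.json").write_text(json.dumps(config))
(study_dir / "2_results" / "optimization_history_incremental.json").write_text(json.dumps(history))

# Best-trial selection mirrors the report generator: minimum objective wins.
best = min(history, key=lambda x: x["objective"])
print(best["trial_number"])  # trial 1 has the lowest objective
```

With these files in place, `generate_study_report(study_dir)` would write a Markdown report into `2_results/`; the HTML and PDF paths additionally require the `markdown` and `weasyprint` packages.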