feat: Add substudy system with live history tracking and workflow fixes

Major Features:
- Hierarchical substudy system (like NX Solutions/Subcases)
  * Shared model files across all substudies
  * Independent configuration per substudy
  * Continuation support from previous substudies
  * Real-time incremental history updates
- Live history tracking with optimization_history_incremental.json
- Complete bracket_displacement_maximizing study with substudy examples
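
The continuation mechanic above can be sketched roughly as follows. This is an illustrative Python sketch only: the `load_substudy_config` helper and the `config.json`, `continue_from`, and `seed_trials` names are hypothetical, not the actual schema used by the study files.

```python
import json
from pathlib import Path

def load_substudy_config(study_dir: str, substudy: str) -> dict:
    """Load a substudy's config, merging in trial history from a prior
    substudy when `continue_from` is set (hypothetical schema)."""
    base = Path(study_dir) / "substudies" / substudy
    config = json.loads((base / "config.json").read_text())

    prev = config.get("continue_from")  # e.g. "coarse_exploration"
    if prev:
        prev_history = (Path(study_dir) / "substudies" / prev
                        / "optimization_history_incremental.json")
        if prev_history.exists():
            # Seed this substudy with the earlier trials so the
            # sampler warm-starts from the coarse search results.
            config["seed_trials"] = json.loads(prev_history.read_text())
    return config
```
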

Core Fixes:
- Fixed expression update workflow to pass design_vars through simulation_runner
  * Restored working NX journal expression update mechanism
  * OP2 timestamp verification instead of file deletion
  * Resolved issue where all trials returned identical objective values
- Fixed LLMOptimizationRunner to pass design variables to simulation runner
- Enhanced NXSolver with timestamp-based file regeneration verification
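
The timestamp-based verification can be sketched like this. A minimal illustration of the idea, not the NXSolver implementation: the `verify_op2_regenerated` helper and `solve_fn` stand-in are hypothetical names.

```python
import os

def verify_op2_regenerated(op2_path: str, solve_fn) -> float:
    """Run the solver, then confirm the OP2 file was actually rewritten
    by comparing modification times, instead of deleting the old file
    up front. Returns the new mtime on success."""
    before = os.path.getmtime(op2_path) if os.path.exists(op2_path) else 0.0
    solve_fn()  # stand-in for the NX/Nastran solve call
    after = os.path.getmtime(op2_path)
    if after <= before:
        # A stale file here would mean every trial re-reads the same
        # results and reports identical objective values.
        raise RuntimeError(f"{op2_path} was not regenerated; results stale")
    return after
```

This guards exactly the failure mode fixed above: if the solve silently fails, the old OP2 is still on disk, and without the check each trial would extract the same objective.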

New Components:
- optimization_engine/llm_optimization_runner.py - LLM-driven optimization runner
- optimization_engine/optimization_setup_wizard.py - Phase 3.3 setup wizard
- studies/bracket_displacement_maximizing/ - Complete substudy example
  * run_substudy.py - Substudy runner with continuation
  * run_optimization.py - Standalone optimization runner
  * config/substudy_template.json - Template for new substudies
  * substudies/coarse_exploration/ - 20-trial coarse search
  * substudies/fine_tuning/ - 50-trial refinement (continuation example)
  * SUBSTUDIES_README.md - Complete substudy documentation

Technical Improvements:
- Incremental history saving after each trial (optimization_history_incremental.json)
- Expression update workflow: .prt update → NX journal receives values → geometry update → FEM update → solve
- Trial indexing fix in substudy result saving
- Updated README with substudy system documentation
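
The incremental history saving above can be sketched as a load-append-write after every trial. This assumes a JSON list of trial records; the `append_trial` name and the atomic temp-file write are illustrative details, not necessarily how the runner does it.

```python
import json
from pathlib import Path

def append_trial(history_path: str, trial: dict) -> None:
    """Append one trial's record to optimization_history_incremental.json
    so progress is visible live while the study runs."""
    path = Path(history_path)
    history = json.loads(path.read_text()) if path.exists() else []
    history.append(trial)
    # Write to a temp file and rename, so a crash mid-write cannot
    # corrupt the history file a live viewer may be tailing.
    tmp = path.with_suffix(".tmp")
    tmp.write_text(json.dumps(history, indent=2))
    tmp.replace(path)
```
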

Testing:
- Successfully ran 20-trial coarse_exploration substudy
- Verified different objective values across trials (workflow fix validated)
- Confirmed live history updates in real-time
- Tested shared model file usage across substudies

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Commit: 2f3afc3813
Parent: 90a9e020d8
Date: 2025-11-16 21:29:54 -05:00
126 changed files with 15592 additions and 97 deletions


@@ -6,13 +6,13 @@
 ## Overview
-Phase 3 implements **automated research and code generation** for OP2 result extraction using pyNastran. The system can:
+Phase 3 implements **LLM-enhanced research and code generation** for OP2 result extraction using pyNastran. The system can:
 1. Research pyNastran documentation to find appropriate APIs
 2. Generate complete, executable Python extraction code
 3. Store learned patterns in a knowledge base
 4. Auto-generate extractors from Phase 2.7 LLM output
-This completes the **zero-manual-coding vision**: Users describe optimization goals in natural language → System generates all required code automatically.
+This enables **LLM-enhanced optimization workflows**: Users can describe goals in natural language and optionally have the system generate code automatically, or write custom extractors manually as needed.
 ## Objectives Achieved
@@ -287,7 +287,7 @@ def min_to_avg_ratio_hook(context):
     return {'min_to_avg_ratio': result, 'objective': result}
 ```
-**Result**: Complete optimization setup from natural language → Zero manual coding! 🚀
+**Result**: LLM-enhanced optimization setup from natural language with flexible automation! 🚀
 ## Testing
@@ -483,13 +483,13 @@ Phase 3 successfully implements **automated OP2 extraction code generation** usi
 - ✅ Knowledge base architecture
 - ✅ 3 core extraction patterns (displacement, stress, force)
-This completes the **zero-manual-coding pipeline**:
+This enables the **LLM-enhanced automation pipeline**:
 - Phase 2.7: LLM analyzes natural language → engineering features
-- Phase 2.8: Inline calculation code generation
-- Phase 2.9: Post-processing hook generation
-- **Phase 3: OP2 extraction code generation**
+- Phase 2.8: Inline calculation code generation (optional)
+- Phase 2.9: Post-processing hook generation (optional)
+- **Phase 3: OP2 extraction code generation (optional)**
-Users can now describe optimization goals in natural language and the system generates ALL required code automatically! 🎉
+Users can describe optimization goals in natural language and choose to leverage automated code generation, manual coding, or a hybrid approach! 🎉
 ## Related Documentation


@@ -6,19 +6,19 @@
 ## Overview
-Phase 3.1 completes the **zero-manual-coding automation pipeline** by integrating:
+Phase 3.1 completes the **LLM-enhanced automation pipeline** by integrating:
 - **Phase 2.7**: LLM workflow analysis
 - **Phase 3.0**: pyNastran research agent
 - **Phase 2.8**: Inline code generation
 - **Phase 2.9**: Post-processing hook generation
-The result: Users describe optimization goals in natural language → System automatically generates ALL required code from request to execution!
+The result: Users can describe optimization goals in natural language and choose to leverage automatic code generation, manual coding, or a hybrid approach!
 ## Objectives Achieved
-### ✅ Complete Automation Pipeline
+### ✅ LLM-Enhanced Automation Pipeline
-**From User Request to Execution - Zero Manual Coding:**
+**From User Request to Execution - Flexible LLM-Assisted Workflow:**
 ```
 User Natural Language Request
@@ -298,7 +298,7 @@ Trial N completed
 Objective value: 0.072357
 ```
-**ZERO manual coding from user request to Optuna trial!** 🚀
+**LLM-enhanced workflow with optional automation from user request to Optuna trial!** 🚀
 ## Key Integration Points
@@ -429,9 +429,9 @@ Result: PASSED!
 ## Benefits
-### 1. Complete Automation
+### 1. LLM-Enhanced Flexibility
-**Before** (Manual workflow):
+**Traditional Manual Workflow**:
 ```
 1. User describes optimization
 2. Engineer manually writes OP2 extractor
@@ -441,32 +441,33 @@ Result: PASSED!
 Time: Hours to days
 ```
-**After** (Automated workflow):
+**LLM-Enhanced Workflow**:
 ```
 1. User describes optimization in natural language
-2. System generates ALL code automatically
-Time: Seconds
+2. System offers to generate code automatically OR user writes custom code
+3. Hybrid approach: mix automated and manual components as needed
+Time: Seconds to minutes (user choice)
 ```
-### 2. Zero Learning Curve
+### 2. Reduced Learning Curve
-Users don't need to know:
-- pyNastran API
-- OP2 file structure
-- Python coding
-- Optimization framework
+LLM assistance helps users who are unfamiliar with:
+- pyNastran API (can still write custom extractors if desired)
+- OP2 file structure (LLM provides templates)
+- Python coding best practices (LLM generates examples)
+- Optimization framework patterns (LLM suggests approaches)
-They only need to describe **what they want** in natural language!
+Users can describe goals in natural language and choose their preferred level of automation!
-### 3. Correct by Construction
+### 3. Quality LLM-Generated Code
-Generated code uses:
+When using automated generation, code uses:
 - ✅ Proven extraction patterns from research agent
 - ✅ Correct API paths from documentation
 - ✅ Proper data structure access
 - ✅ Error handling and validation
-No manual bugs!
+Users can review, modify, or replace generated code as needed!
 ### 4. Extensible
@@ -570,10 +571,10 @@ None - Phase 3.1 is purely additive!
 ## Summary
-Phase 3.1 successfully completes the **zero-manual-coding automation pipeline**:
+Phase 3.1 successfully completes the **LLM-enhanced automation pipeline**:
 - ✅ Orchestrator integrates Phase 2.7 + Phase 3.0
-- ✅ Automatic extractor generation from LLM output
+- ✅ Optional automatic extractor generation from LLM output
 - ✅ Dynamic loading and execution on real OP2 files
 - ✅ Smart parameter filtering per pattern type
 - ✅ Multi-extractor support
@@ -581,28 +582,28 @@ Phase 3.1 successfully completes the **zero-manual-coding automation pipeline**:
 - ✅ Extraction successful: max_disp=0.361783mm
 - ✅ Normalized objective calculated: 0.072357
-**Complete Automation Verified:**
+**LLM-Enhanced Workflow Verified:**
 ```
 Natural Language Request
 Phase 2.7 LLM → Engineering Features
-Phase 3.1 Orchestrator → Generated Extractors
+Phase 3.1 Orchestrator → Generated Extractors (or manual extractors)
-Phase 3.0 Research Agent → OP2 Extraction Code
+Phase 3.0 Research Agent → OP2 Extraction Code (optional)
 Execution on Real OP2 → Results
-Phase 2.8 Inline Calc → Calculations
+Phase 2.8 Inline Calc → Calculations (optional)
-Phase 2.9 Hooks → Objective Value
+Phase 2.9 Hooks → Objective Value (optional)
 Optuna Trial Complete
-ZERO MANUAL CODING! 🚀
+LLM-ENHANCED WITH USER FLEXIBILITY! 🚀
 ```
-Users can now describe optimization goals in natural language and the system automatically generates and executes ALL required code from request to final objective value!
+Users can describe optimization goals in natural language and choose to leverage automated code generation, write custom code, or use a hybrid approach as needed!
 ## Related Documentation