feat: Add substudy system with live history tracking and workflow fixes

Major Features:
- Hierarchical substudy system (like NX Solutions/Subcases)
  * Shared model files across all substudies
  * Independent configuration per substudy
  * Continuation support from previous substudies
  * Real-time incremental history updates
- Live history tracking with optimization_history_incremental.json
- Complete bracket_displacement_maximizing study with substudy examples

Core Fixes:
- Fixed expression update workflow to pass design_vars through simulation_runner
  * Restored working NX journal expression update mechanism
  * OP2 timestamp verification instead of file deletion
  * Resolved issue where all trials returned identical objective values
- Fixed LLMOptimizationRunner to pass design variables to simulation runner
- Enhanced NXSolver with timestamp-based file regeneration verification

New Components:
- optimization_engine/llm_optimization_runner.py - LLM-driven optimization runner
- optimization_engine/optimization_setup_wizard.py - Phase 3.3 setup wizard
- studies/bracket_displacement_maximizing/ - Complete substudy example
  * run_substudy.py - Substudy runner with continuation
  * run_optimization.py - Standalone optimization runner
  * config/substudy_template.json - Template for new substudies
  * substudies/coarse_exploration/ - 20-trial coarse search
  * substudies/fine_tuning/ - 50-trial refinement (continuation example)
  * SUBSTUDIES_README.md - Complete substudy documentation

Technical Improvements:
- Incremental history saving after each trial (optimization_history_incremental.json)
- Expression update workflow: .prt update → NX journal receives values → geometry update → FEM update → solve
- Trial indexing fix in substudy result saving
- Updated README with substudy system documentation

Testing:
- Successfully ran 20-trial coarse_exploration substudy
- Verified different objective values across trials (workflow fix validated)
- Confirmed live history updates in real-time
- Tested shared model file usage across substudies

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-16 21:29:54 -05:00
parent 90a9e020d8
commit 2f3afc3813
126 changed files with 15592 additions and 97 deletions

@@ -158,9 +158,11 @@ python run_5trial_test.py
- **Smart Logging**: Detailed per-trial logs + high-level optimization progress tracking
- **Plugin System**: Extensible hooks at pre-solve, post-solve, and post-extraction points
- **Study Management**: Isolated study folders with automatic result organization
+- **Substudy System**: NX-like hierarchical studies with shared models and independent configurations
+- **Live History Tracking**: Real-time incremental JSON updates for monitoring progress
- **Resume Capability**: Interrupt and resume optimizations without data loss
- **Web Dashboard**: Real-time monitoring and configuration UI
-- **Example Study**: Bracket stress minimization with full documentation
+- **Example Study**: Bracket displacement maximization with full substudy workflow
**🚀 What's Next**: Natural language optimization configuration via LLM interface (Phase 2)
@@ -200,15 +202,19 @@ Atomizer/
│ └── scripts/ # NX expression extraction
├── studies/ # Optimization studies
│ ├── README.md # Comprehensive studies guide
-│ └── bracket_stress_minimization/ # Example study
+│ └── bracket_displacement_maximizing/ # Example study with substudies
 │ ├── README.md # Study documentation
-│ ├── model/ # FEA model files (.prt, .sim, .fem)
-│ ├── optimization_config_stress_displacement.json
-│ └── optimization_results/ # Generated results (gitignored)
-│ ├── optimization.log # High-level progress log
-│ ├── trial_logs/ # Detailed per-trial logs
-│ ├── history.json # Complete optimization history
-│ └── study_*.db # Optuna database
+│ ├── SUBSTUDIES_README.md # Substudy system guide
+│ ├── model/ # Shared FEA model files (.prt, .sim, .fem)
+│ ├── config/ # Substudy configuration templates
+│ ├── substudies/ # Independent substudy results
+│ │ ├── coarse_exploration/ # Fast 20-trial coarse search
+│ │ │ ├── config.json
+│ │ │ ├── optimization_history_incremental.json # Live updates
+│ │ │ └── best_design.json
+│ │ └── fine_tuning/ # Refined 50-trial optimization
+│ ├── run_substudy.py # Substudy runner with continuation support
+│ └── run_optimization.py # Standalone optimization runner
├── tests/ # Unit and integration tests
│ ├── test_hooks_with_bracket.py
│ ├── run_5trial_test.py
@@ -219,25 +225,35 @@ Atomizer/
└── README.md # This file
```
-## Example: Bracket Stress Minimization
+## Example: Bracket Displacement Maximization with Substudies
-A complete working example is in `studies/bracket_stress_minimization/`:
+A complete working example is in `studies/bracket_displacement_maximizing/`:
```bash
-# Run the bracket optimization (50 trials, TPE sampler)
-python tests/test_journal_optimization.py
+# Run standalone optimization (20 trials)
+cd studies/bracket_displacement_maximizing
+python run_optimization.py
# View results
python dashboard/start_dashboard.py
# Open http://localhost:8080 in browser
+# Or run a substudy (hierarchical organization)
+python run_substudy.py coarse_exploration  # 20-trial coarse search
+python run_substudy.py fine_tuning         # 50-trial refinement with continuation
+# View live progress
+cat substudies/coarse_exploration/optimization_history_incremental.json
```
**What it does**:
-1. Loads `Bracket_sim1.sim` with wall thickness = 5mm
-2. Varies thickness from 3-8mm over 50 trials
-3. Runs FEA solve for each trial
-4. Extracts max stress and displacement from OP2
-5. Finds optimal thickness that minimizes stress
+1. Loads `Bracket_sim1.sim` with parametric geometry
+2. Varies `tip_thickness` (15-25mm) and `support_angle` (20-40°)
+3. Runs FEA solve for each trial using NX journal mode
+4. Extracts displacement and stress from OP2 files
+5. Maximizes displacement while maintaining safety factor >= 4.0
+**Substudy System**:
+- **Shared Models**: All substudies use the same model files
+- **Independent Configs**: Each substudy has its own parameter bounds and settings
+- **Continuation Support**: Fine-tuning substudy continues from coarse exploration results
+- **Live History**: Real-time JSON updates for monitoring progress
**Results** (typical):
- Best thickness: ~4.2mm
@@ -317,32 +333,32 @@ User: "Why did trial #34 perform best?"
- 95%+ expected accuracy with full nuance detection
- [x] **Phase 2.8**: Inline Code Generation ✅
-  - Auto-generates Python code for simple math operations
+  - LLM-generates Python code for simple math operations
- Handles avg/min/max, normalization, percentage calculations
- Direct integration with Phase 2.7 LLM output
-  - Zero manual coding for trivial operations
+  - Optional automated code generation for calculations
- [x] **Phase 2.9**: Post-Processing Hook Generation ✅
-  - Auto-generates standalone Python middleware scripts
+  - LLM-generates standalone Python middleware scripts
- Integrated with Phase 1 lifecycle hook system
- Handles weighted objectives, custom formulas, constraints, comparisons
- Complete JSON-based I/O for optimization loops
-  - Zero manual scripting for post-processing operations
+  - Optional automated scripting for post-processing operations
- [x] **Phase 3**: pyNastran Documentation Integration ✅
-  - Automated OP2 extraction code generation
+  - LLM-enhanced OP2 extraction code generation
- Documentation research via WebFetch
- 3 core extraction patterns (displacement, stress, force)
- Knowledge base for learned patterns
- Successfully tested on real OP2 files
-  - Zero manual coding for result extraction!
+  - Optional automated code generation for result extraction!
-- [x] **Phase 3.1**: Complete Automation Pipeline ✅
+- [x] **Phase 3.1**: LLM-Enhanced Automation Pipeline ✅
- Extractor orchestrator integrates Phase 2.7 + Phase 3.0
-  - Automatic extractor generation from LLM output
+  - Optional automatic extractor generation from LLM output
- Dynamic loading and execution on real OP2 files
- End-to-end test passed: Request → Code → Execution → Objective
-  - ZERO MANUAL CODING - Complete automation achieved!
+  - LLM-enhanced workflow with user flexibility achieved!
### Next Priorities