docs: Major documentation overhaul - restructure folders, update tagline, add Getting Started guide
- Restructure docs/ folder (remove numeric prefixes):
  - 04_USER_GUIDES -> guides/
  - 05_API_REFERENCE -> api/
  - 06_PHYSICS -> physics/
  - 07_DEVELOPMENT -> development/
  - 08_ARCHIVE -> archive/
  - 09_DIAGRAMS -> diagrams/
- Replace tagline 'Talk, don't click' with 'LLM-driven optimization framework' in 9 files
- Create comprehensive docs/GETTING_STARTED.md:
  - Prerequisites and quick setup
  - Project structure overview
  - First study tutorial (Claude or manual)
  - Dashboard usage guide
  - Neural acceleration introduction
- Rewrite docs/00_INDEX.md with correct paths and modern structure
- Archive obsolete files:
  - 01_PROTOCOLS.md -> archive/historical/01_PROTOCOLS_legacy.md
  - 03_GETTING_STARTED.md -> archive/historical/
  - ATOMIZER_PODCAST_BRIEFING.md -> archive/marketing/
- Update timestamps to 2026-01-20 across all key files
- Update .gitignore to exclude docs/generated/
- Version bump: ATOMIZER_CONTEXT v1.8 -> v2.0
This commit adds the following new files:

docs/archive/historical/01_PROTOCOLS_legacy.md (new file, 1716 lines; diff suppressed because it is too large)

docs/archive/historical/03_GETTING_STARTED_legacy.md (new file, 297 lines):
# How to Extend an Optimization Study

**Date**: November 20, 2025

When you want to run more iterations to get better results, you have three options:

---

## Option 1: Continue Existing Study (Recommended)

**Best for**: keeping all previous trial data and simply adding more iterations

**Advantages**:
- Preserves all existing trials
- Continues from the current best result
- Uses accumulated knowledge from previous trials
- More efficient (no wasted trials)

**Process**:

### Step 1: Wait for the current optimization to finish
Check whether the v2.1 test is still running:
```bash
# On Windows, list running Python processes
tasklist | findstr python

# Check background job status
# Look for the running optimization process
```
### Step 2: Run the continuation script
```bash
cd studies/circular_plate_protocol10_v2_1_test
python continue_optimization.py
```

### Step 3: Configure the number of additional trials
Edit [continue_optimization.py:29](../studies/circular_plate_protocol10_v2_1_test/continue_optimization.py#L29):
```python
# CONFIGURE THIS: Number of additional trials to run
ADDITIONAL_TRIALS = 50  # Change to 100 for a total of ~150 trials
```

**Example**: If you ran 50 trials initially and want 100 total:
- Set `ADDITIONAL_TRIALS = 50`
- The study will run trials #50-99 (continuing from where it left off)
- All 100 trials will be stored in the same study database
---

## Option 2: Modify Config and Restart

**Best for**: a completely fresh start with more iterations

**Advantages**:
- Clean-slate optimization
- Good for testing different configurations
- Simpler to understand (one continuous run)

**Disadvantages**:
- Loses all previous trial data
- Wastes computational budget if previous trials were good

**Process**:

### Step 1: Stop any running optimization
```bash
# On Windows, find the PID (e.g. via tasklist) and kill it:
taskkill /PID <process_id> /F
```

### Step 2: Edit the optimization config
Edit [studies/circular_plate_protocol10_v2_1_test/1_setup/optimization_config.json](../studies/circular_plate_protocol10_v2_1_test/1_setup/optimization_config.json) and raise `n_trials` from 50 to 100 (strict JSON does not allow `//` comments, so the change is shown without one):
```json
{
  "trials": {
    "n_trials": 100,
    "timeout_per_trial": 3600
  }
}
```

### Step 3: Delete old results
```bash
cd studies/circular_plate_protocol10_v2_1_test

# Delete the old database and history
del 2_results\study.db
del 2_results\optimization_history_incremental.json
del 2_results\intelligent_optimizer\*.*
```

### Step 4: Rerun the optimization
```bash
python run_optimization.py
```

---
## Option 3: Wait and Evaluate First

**Best for**: when you're not sure whether more iterations are needed

**Process**:

### Step 1: Wait for the current test to finish
The v2.1 test is currently running with 50 trials. Let it complete first.

### Step 2: Check results
```bash
cd studies/circular_plate_protocol10_v2_1_test

# View the optimization report
type 3_reports\OPTIMIZATION_REPORT.md

# Or check the test summary
type 2_results\test_summary.json
```

### Step 3: Evaluate performance
Look at:
- **Best error**: Is it < 0.1 Hz? (target achieved)
- **Convergence**: Has it plateaued, or is it still improving?
- **Pruning rate**: < 5% is good

### Step 4: Decide the next action
- **If the target is achieved**: Done! No need for more trials
- **If still converging**: Add 20-30 more trials (Option 1)
- **If struggling**: You may need an algorithm adjustment, not more trials
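The Step 3 metrics map onto the Step 4 decisions mechanically. As a minimal sketch (the function name and return strings are illustrative; the thresholds come from the rules above):

```python
def recommend_next_action(best_error_hz, still_improving, target_hz=0.1):
    """Apply the Step 4 decision rules to the Step 3 metrics."""
    if best_error_hz < target_hz:
        return "done"               # target achieved: no more trials needed
    if still_improving:
        return "add 20-30 trials"   # converging: extend the study (Option 1)
    return "adjust algorithm"       # plateaued away from target: more trials won't help
```

For example, `recommend_next_action(0.25, still_improving=True)` suggests extending the study via Option 1.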
---

## Comparison Table

| Feature | Option 1: Continue | Option 2: Restart | Option 3: Wait |
|---------|-------------------|-------------------|----------------|
| Preserves data | ✅ Yes | ❌ No | ✅ Yes |
| Efficient | ✅ Very | ❌ Wasteful | ✅ Most |
| Easy to set up | ✅ Simple | ⚠️ Moderate | ✅ Simplest |
| Best use case | Adding more trials | Testing a new config | Evaluating first |

---
## Detailed Example: Extending to 100 Trials

Let's say the v2.1 test (50 trials) finishes with:
- Best error: 0.25 Hz (not at target yet)
- Convergence: still improving
- Pruning rate: 4% (good)

**Recommendation**: Continue with 50 more trials (Option 1)

### Step-by-step:

1. **Check the current status**:
```python
import optuna

storage = "sqlite:///studies/circular_plate_protocol10_v2_1_test/2_results/study.db"
study = optuna.load_study(study_name="circular_plate_protocol10_v2_1_test", storage=storage)

print(f"Current trials: {len(study.trials)}")
print(f"Best error: {study.best_value:.4f} Hz")
```

2. **Edit the continuation script**:
```python
# In continue_optimization.py line 29
ADDITIONAL_TRIALS = 50  # Will reach ~100 total
```

3. **Run the continuation**:
```bash
cd studies/circular_plate_protocol10_v2_1_test
python continue_optimization.py
```

4. **Monitor progress**:
   - Watch the console output for trial results
   - Check `optimization_history_incremental.json` for updates
   - Look for convergence (error decreasing)

5. **Verify results**:
```python
# After completion, reload the study as in step 1
study = optuna.load_study(study_name="circular_plate_protocol10_v2_1_test", storage=storage)
print(f"Total trials: {len(study.trials)}")  # Should be ~100
print(f"Final best error: {study.best_value:.4f} Hz")
```
---

## Understanding Trial Counts

**Important**: The "total trials" count includes both successful and pruned trials.

Example breakdown:
```
Total trials: 50
├── Successful: 47 (94%)
│   └── Used for optimization
└── Pruned: 3 (6%)
    └── Rejected (invalid parameters, simulation failures)
```

When you add 50 more trials:
```
Total trials: 100
├── Successful: ~94 (94%)
└── Pruned: ~6 (6%)
```

The optimization algorithm only learns from **successful** trials, so:
- 50 successful trials ≈ 53 total trials (with 6% pruning)
- 100 successful trials ≈ 106 total trials (with 6% pruning)
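The approximate totals come from dividing the desired number of successful trials by the success rate. A quick sketch of the arithmetic (the helper name is illustrative):

```python
def total_trials_needed(successful, prune_rate=0.06):
    """Total trials to run so that roughly `successful` trials survive pruning."""
    return round(successful / (1 - prune_rate))
```

`total_trials_needed(50)` gives 53 and `total_trials_needed(100)` gives 106, matching the estimates above.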
---

## Best Practices

### When to Add More Trials:
✅ Error is still decreasing (not converged yet)
✅ Close to target but needs refinement
✅ Exploring new parameter regions

### When NOT to Add More Trials:
❌ Error has plateaued for 20+ trials
❌ Target tolerance already achieved
❌ High pruning rate (>10%) - fix validation instead
❌ Wrong algorithm selected - fix the strategy selector instead

### How Many to Add:
- **Close to target** (within 2x tolerance): add 20-30 trials
- **Moderate distance** (2-5x tolerance): add 50 trials
- **Far from target** (>5x tolerance): investigate the root cause first
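The sizing rules above can be condensed into a small helper (the name and return values are illustrative; 0 here means "don't add trials yet, investigate first"):

```python
def trials_to_add(best_error, tolerance):
    """Suggest how many trials to add, based on distance from the target tolerance."""
    ratio = best_error / tolerance
    if ratio <= 2:
        return 30   # close to target: 20-30 refinement trials
    if ratio <= 5:
        return 50   # moderate distance
    return 0        # far from target: investigate the root cause first
```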
---

## Monitoring Long Runs

For runs with 100+ trials (several hours):

### Option A: Run in the background (Windows)
```bash
# Start minimized
start /MIN python continue_optimization.py
```

### Option B: Use screen/tmux (if available)
```bash
# Not standard on Windows, but useful on Linux/Mac
tmux new -s optimization
python continue_optimization.py
# Detach: Ctrl+B, then D
# Reattach: tmux attach -t optimization
```

### Option C: Monitor the progress file
```python
# Check progress without interrupting the run
import json

with open('2_results/optimization_history_incremental.json') as f:
    history = json.load(f)

print(f"Completed trials: {len(history)}")
best = min(history, key=lambda x: x['objective'])
print(f"Current best: {best['objective']:.4f} Hz")
```
---

## Troubleshooting

### Issue: "Study not found in database"
**Cause**: The initial optimization hasn't run yet, or the database is corrupted
**Fix**: Run `run_optimization.py` first to create the initial study

### Issue: Continuation starts from trial #0
**Cause**: The study database exists but is empty
**Fix**: Delete the database and run a fresh optimization

### Issue: NX session conflicts
**Cause**: Multiple NX sessions accessing the same model
**Solution**: The NX Session Manager handles this automatically, but verify:
```python
from optimization_engine.nx_session_manager import NXSessionManager

mgr = NXSessionManager()
print(mgr.get_status_report())
```

### Issue: High pruning rate in continuation
**Cause**: The optimization is exploring extreme parameter regions
**Fix**: The simulation validator should prevent this; verify its rules are active

---

**Summary**: For your case (wanting 100 iterations), use **Option 1** with the `continue_optimization.py` script. Set `ADDITIONAL_TRIALS = 50` and run it after the current test finishes.
docs/archive/historical/ARCHITECTURE_REFACTOR_NOV17.md (new file, 284 lines):
# Architecture Refactor: Centralized Library System

**Date**: November 17, 2025
**Phase**: 3.2 Architecture Cleanup
**Author**: Claude Code (with Antoine's direction)

## Problem Statement

You identified a critical architectural flaw:

> "ok, now, quick thing, why do very basic hooks get recreated and stored in the substudies? those should be just core accessed hooks right? is it only because its a test?
>
> What I need in studies is the config, files, setup, report, results etc, not core hooks; those should go in the atomizer hooks library with their doc etc, no? I mean, applied-only info = studies, and reusable and core functions = atomizer foundation.
>
> My study folder is a mess, why? I want some order and real structure to develop an insanely good engineering software that evolves with time."

### Old Architecture (BAD):
```
studies/
  simple_beam_optimization/
    2_substudies/
      test_e2e_3trials_XXX/
        generated_extractors/      ❌ Code pollution!
          extract_displacement.py
          extract_von_mises_stress.py
          extract_mass.py
        generated_hooks/           ❌ Code pollution!
          custom_hook.py
        llm_workflow_config.json
        optimization_results.json
```

**Problems**:
- Every substudy duplicates extractor code
- Study folders are polluted with reusable code
- No code reuse across studies
- A mess, not production-grade engineering software
### New Architecture (GOOD):
```
optimization_engine/
  extractors/                    ✓ Core reusable library
    extract_displacement.py
    extract_stress.py
    extract_mass.py
    catalog.json                 ✓ Tracks all extractors

  hooks/                         ✓ Core reusable library
    (future implementation)

studies/
  simple_beam_optimization/
    2_substudies/
      my_optimization/
        extractors_manifest.json   ✓ Just references!
        llm_workflow_config.json   ✓ Study config
        optimization_results.json  ✓ Results
        optimization_history.json  ✓ History
```

**Benefits**:
- ✅ Clean study folders (metadata only)
- ✅ Reusable core libraries
- ✅ Deduplication (same extractor = single file)
- ✅ Production-grade architecture
- ✅ Evolves with time (the library grows, studies stay clean)
## Implementation

### 1. Extractor Library Manager (`extractor_library.py`)

A new smart library system with:
- **Signature-based deduplication**: two extractors with the same functionality = one file
- **Catalog tracking**: `catalog.json` tracks all library extractors
- **Study manifests**: studies only reference which extractors they used

```python
class ExtractorLibrary:
    def get_or_create(self, llm_feature, extractor_code):
        """Add to the library, or reuse an existing extractor."""
        signature = self._compute_signature(llm_feature)

        if signature in self.catalog:
            # Reuse the existing extractor
            return self.library_dir / self.catalog[signature]['filename']
        else:
            # Add a new extractor to the library
            self.catalog[signature] = {...}
            return extractor_file
```
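The snippet above doesn't show `_compute_signature`. One plausible implementation, sketched here under the assumption that `llm_feature` is a JSON-serializable dict, hashes a canonical rendering of the feature spec so that equivalent requests map to the same extractor file:

```python
import hashlib
import json

def compute_signature(llm_feature: dict) -> str:
    """Hash a canonical JSON rendering of the feature spec for deduplication."""
    canonical = json.dumps(llm_feature, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]
```

Because the keys are sorted before hashing, key order doesn't affect the signature, so two studies requesting the same feature reuse one library file.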
### 2. Updated Components

**ExtractorOrchestrator** (`extractor_orchestrator.py`):
- Now uses `ExtractorLibrary` instead of per-study generation
- Creates `extractors_manifest.json` instead of copying code
- Backward compatible (legacy mode available)

**LLMOptimizationRunner** (`llm_optimization_runner.py`):
- Removed per-study `generated_extractors/` directory creation
- Removed per-study `generated_hooks/` directory creation
- Uses the core library exclusively

**Test Suite** (`test_phase_3_2_e2e.py`):
- Updated to check for `extractors_manifest.json` instead of `generated_extractors/`
- Verifies the clean study folder structure
## Results

### Before Refactor:
```
test_e2e_3trials_XXX/
├── generated_extractors/     ❌ 3 Python files
│   ├── extract_displacement.py
│   ├── extract_von_mises_stress.py
│   └── extract_mass.py
├── generated_hooks/          ❌ Hook files
├── llm_workflow_config.json
└── optimization_results.json
```

### After Refactor:
```
test_e2e_3trials_XXX/
├── extractors_manifest.json  ✅ Just references!
├── llm_workflow_config.json  ✅ Study config
├── optimization_results.json ✅ Results
└── optimization_history.json ✅ History

optimization_engine/extractors/   ✅ Core library
├── extract_displacement.py
├── extract_von_mises_stress.py
├── extract_mass.py
└── catalog.json
```
## Testing

The E2E test now passes with the clean folder structure:
- ✅ `extractors_manifest.json` created
- ✅ Core library populated with 3 extractors
- ✅ NO `generated_extractors/` pollution
- ✅ Study folder clean and professional

Test output:
```
Verifying outputs...
[OK] Output directory created
[OK] History file created
[OK] Results file created
[OK] Extractors manifest (references core library)

Checks passed: 18/18
[SUCCESS] END-TO-END TEST PASSED!
```
## Migration Guide

### For Future Studies:

**What changed**:
- Extractors now live in `optimization_engine/extractors/` (the core library)
- Study folders only contain `extractors_manifest.json` (not code)

**No action required**:
- The system automatically uses the new architecture
- Backward compatible (legacy mode available with `use_core_library=False`)

### For Developers:

**To add new extractors**:
1. The LLM generates extractor code
2. `ExtractorLibrary.get_or_create()` checks whether it already exists
3. If new: adds it to `optimization_engine/extractors/`
4. If it exists: reuses the existing file
5. The study gets a manifest reference, not a copy of the code
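For illustration, a study's `extractors_manifest.json` could look something like this (the field names are hypothetical; the source only says the manifest stores references to the core library, not code):

```json
{
  "extractors": [
    {
      "signature": "a1b2c3d4",
      "library_file": "optimization_engine/extractors/extract_displacement.py"
    },
    {
      "signature": "e5f6a7b8",
      "library_file": "optimization_engine/extractors/extract_mass.py"
    }
  ]
}
```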
**To view the library**:
```python
from optimization_engine.extractor_library import ExtractorLibrary

library = ExtractorLibrary()
print(library.get_library_summary())
```
## Next Steps (Future Work)

1. **Hook Library System**: implement the same architecture for hooks
   - Currently: hooks still use legacy per-study generation
   - Future: an `optimization_engine/hooks/` library like extractors

2. **Library Documentation**: auto-generate docs for each extractor
   - Extract docstrings from library extractors
   - Create browsable documentation

3. **Versioning**: track extractor versions for reproducibility
   - Tag extractors with creation date/version
   - Allow studies to pin specific versions

4. **CLI Tool**: view and manage the library
   - `python -m optimization_engine.extractors list`
   - `python -m optimization_engine.extractors info <signature>`

## Files Modified

1. **New Files**:
   - `optimization_engine/extractor_library.py` - Core library manager
   - `optimization_engine/extractors/__init__.py` - Package init
   - `optimization_engine/extractors/catalog.json` - Library catalog
   - `docs/ARCHITECTURE_REFACTOR_NOV17.md` - This document

2. **Modified Files**:
   - `optimization_engine/extractor_orchestrator.py` - Use the library instead of per-study generation
   - `optimization_engine/llm_optimization_runner.py` - Remove per-study directories
   - `tests/test_phase_3_2_e2e.py` - Check for the manifest instead of directories
## Commit Message

```
refactor: Implement centralized extractor library to eliminate code duplication

MAJOR ARCHITECTURE REFACTOR - Clean Study Folders

Problem:
- Every substudy was generating duplicate extractor code
- Study folders polluted with reusable library code
- No code reuse across studies
- Not production-grade architecture

Solution:
Implemented a centralized library system:
- Core extractors in optimization_engine/extractors/
- Signature-based deduplication
- Studies only store metadata (extractors_manifest.json)
- Clean separation: studies = data, core = code

Changes:
1. Created ExtractorLibrary with smart deduplication
2. Updated ExtractorOrchestrator to use the core library
3. Updated LLMOptimizationRunner to stop creating per-study directories
4. Updated tests to verify the clean study folder structure

Results:
BEFORE: study folder with generated_extractors/ directory (code pollution)
AFTER: study folder with extractors_manifest.json (just references)

Core library: optimization_engine/extractors/
- extract_displacement.py
- extract_von_mises_stress.py
- extract_mass.py
- catalog.json (tracks all extractors)

Study folders NOW ONLY contain:
- extractors_manifest.json (references to the core library)
- llm_workflow_config.json (study configuration)
- optimization_results.json (results)
- optimization_history.json (trial history)

Production-grade architecture for "insanely good engineering software that evolves with time"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
```

## Summary for Morning

**What was done**:
1. ✅ Created a centralized extractor library system
2. ✅ Eliminated per-study code duplication
3. ✅ Clean study folder architecture
4. ✅ E2E tests pass with the new structure
5. ✅ Comprehensive documentation

**What you'll see**:
- Studies now only contain metadata (no code!)
- Core library in `optimization_engine/extractors/`
- Professional, production-grade architecture

**Ready for**:
- Continuing Phase 3.2 development
- The same approach for a hooks library (next iteration)
- Building "insanely good engineering software"

Have a good night! ✨
docs/archive/historical/BRACKET_STUDY_ISSUES_LOG.md (new file, 599 lines):
# Bracket Stiffness Optimization - Issues Log

**Date**: November 21, 2025
**Study**: bracket_stiffness_optimization
**Protocol**: Protocol 10 (IMSO)

## Executive Summary
We attempted to create a new bracket stiffness optimization study using Protocol 10 and encountered **8 critical issues** that prevented the study from running successfully. All of them are protocol violations that better templates, validation, and documentation should prevent.

---
## Issue #1: Unicode/Emoji Characters Breaking the Windows Console
**Severity**: CRITICAL
**Category**: Output Formatting
**Protocol Violation**: Using non-ASCII characters in code output

### What Happened
Code contained unicode symbols (≤, ✓, ✗, 🎯, 📊, ⚠) in print statements, causing:
```
UnicodeEncodeError: 'charmap' codec can't encode character '\u2264' in position 17
```

### Root Cause
- Windows cmd uses cp1252 encoding by default
- Unicode symbols outside cp1252 cause crashes
- The user explicitly requested NO emojis/unicode in previous sessions

### Files Affected
- `run_optimization.py` (multiple print statements)
- `bracket_stiffness_extractor.py` (print statements)
- `export_displacement_field.py` (success messages)

### Fix Applied
Replace ALL unicode with ASCII equivalents:
- `≤` → `<=`
- `✓` → `[OK]`
- `✗` → `[X]`
- `⚠` → `[!]`
- `🎯` → `[BEST]`
- etc.

### Protocol Fix Required
**MANDATORY RULE**: Never use unicode symbols or emojis in any Python code that prints to the console.

Create `atomizer/utils/safe_print.py`:
```python
"""Windows-safe printing utilities - ASCII only"""

def print_success(msg):
    print(f"[OK] {msg}")

def print_error(msg):
    print(f"[X] {msg}")

def print_warning(msg):
    print(f"[!] {msg}")
```

---
## Issue #2: Hardcoded NX Version Instead of Using config.py
**Severity**: CRITICAL
**Category**: Configuration Management
**Protocol Violation**: Not using the central configuration

### What Happened
Code hardcoded `nastran_version="2306"`, but the user has NX 2412 installed:
```
FileNotFoundError: Could not auto-detect NX 2306 installation
```

The user explicitly asked: "isn't it in the protocole to use the actual config in config.py????"

### Root Cause
- Ignored `config.py`, which has `NX_VERSION = "2412"`
- Hardcoded an old version number
- Same issue in bracket_stiffness_extractor.py line 152

### Files Affected
- `run_optimization.py` line 85
- `bracket_stiffness_extractor.py` line 152

### Fix Applied
```python
import config as atomizer_config

nx_solver = NXSolver(
    nastran_version=atomizer_config.NX_VERSION,  # Use the central config
    timeout=atomizer_config.NASTRAN_TIMEOUT,
)
```

### Protocol Fix Required
**MANDATORY RULE**: ALWAYS import and use `config.py` for ALL system paths and versions.

Add a validation check to all study templates:
```python
# Validate that the central config is in use
assert 'atomizer_config' in dir(), "Must import config as atomizer_config"
```

---
## Issue #3: Module Name Collision (config module vs. config parameter)
**Severity**: HIGH
**Category**: Code Quality
**Protocol Violation**: Poor naming conventions

### What Happened
```python
import config  # Module named 'config'

def create_objective_function(config: dict, ...):  # Parameter named 'config'
    # Inside the function:
    nastran_version=config.NX_VERSION  # ERROR: config is the dict, not the module!
```

Error: `AttributeError: 'dict' object has no attribute 'NX_VERSION'`

### Root Cause
Variable shadowing: the parameter `config` shadows the imported module `config`.

### Fix Applied
```python
import config as atomizer_config  # Unique name

def create_objective_function(config: dict, ...):
    nastran_version=atomizer_config.NX_VERSION  # Now unambiguous
```

### Protocol Fix Required
**MANDATORY RULE**: Always import config as `atomizer_config` to prevent collisions.

Update all templates and examples to use:
```python
import config as atomizer_config
```

---
## Issue #4: Protocol 10 Didn't Support Multi-Objective Optimization
**Severity**: CRITICAL
**Category**: Feature Gap
**Protocol Violation**: The Protocol 10 documentation claims multi-objective support but doesn't implement it

### What Happened
Protocol 10 (`IntelligentOptimizer`) hardcoded `direction='minimize'`, supporting single-objective studies only.
Multi-objective problems (like the bracket: maximize stiffness, minimize mass) couldn't use Protocol 10.

### Root Cause
- `IntelligentOptimizer.optimize()` didn't accept a `directions` parameter
- `_create_study()` always created single-objective studies

### Fix Applied
Enhanced `intelligent_optimizer.py`:
```python
def optimize(self, ..., directions: Optional[list] = None):
    self.directions = directions

def _create_study(self):
    if self.directions is not None:
        # Multi-objective
        study = optuna.create_study(directions=self.directions, ...)
    else:
        # Single-objective (backward compatible)
        study = optuna.create_study(direction='minimize', ...)
```

### Protocol Fix Required
**PROTOCOL 10 UPDATE**: Document and test multi-objective support.

Add to the Protocol 10 documentation:
- Single-objective: `directions=None` or `directions=["minimize"]`
- Multi-objective: `directions=["minimize", "maximize", ...]`
- Update all examples to show both cases

---
## Issue #5: Wrong Solution Name Parameter Passed to the NX Solver
**Severity**: HIGH
**Category**: NX API Usage
**Protocol Violation**: Incorrect understanding of NX solution naming

### What Happened
Passing `solution_name="Bracket_sim1"` to the NX solver caused:
```
NXOpen.NXException: No object found with this name: Solution[Bracket_sim1]
```

All trials were pruned because the solver couldn't find the solution.

### Root Cause
- The NX solver looks for a "Solution[<name>]" object
- The solution name should be "Solution 1", not the sim file name
- Passing `None` solves all solutions in the .sim file (correct for most cases)

### Fix Applied
```python
result = nx_solver.run_simulation(
    sim_file=sim_file,
    solution_name=None  # Solve all solutions
)
```

### Protocol Fix Required
**DOCUMENTATION**: Clarify the `solution_name` parameter in the NX solver docs.

The default should be `None` (solve all solutions). Only specify a name when you need to solve one specific solution from a multi-solution .sim file.

---
## Issue #6: NX Journal Needs to Open the Simulation File
**Severity**: HIGH
**Category**: NX Journal Design
**Protocol Violation**: The journal assumed the file was already open

### What Happened
`export_displacement_field.py` expected a simulation to already be open:
```python
workSimPart = theSession.Parts.BaseWork
if workSimPart is None:
    print("ERROR: No work part loaded")
    return 1
```

When called via `run_journal.exe`, NX starts with no files open.

### Root Cause
The journal template didn't handle opening the sim file.

### Fix Applied
Enhanced the journal to open the sim file itself:
```python
def main(args):
    # Accept the sim file path as an argument
    if len(args) > 0:
        sim_file = Path(args[0])
    else:
        sim_file = Path(__file__).parent / "Bracket_sim1.sim"

    # Open the simulation
    basePart1, partLoadStatus1 = theSession.Parts.OpenBaseDisplay(str(sim_file))
    partLoadStatus1.Dispose()
```

### Protocol Fix Required
**JOURNAL TEMPLATE**: All NX journals should handle opening their required files.

Create a standard journal template that:
1. Accepts file paths as arguments
2. Opens the required files (part, sim, fem)
3. Performs the operation
4. Closes gracefully

---
## Issue #7: Subprocess Check Fails on NX sys.exit(0)
|
||||
**Severity**: MEDIUM
|
||||
**Category**: NX Integration
|
||||
**Protocol Violation**: Incorrect error handling for NX journals
|
||||
|
||||
### What Happened
|
||||
```python
|
||||
subprocess.run([nx_exe, journal], check=True) # Raises exception even on success!
|
||||
```
|
||||
|
||||
NX's `run_journal.exe` returns non-zero exit code even when journal exits with `sys.exit(0)`.
|
||||
The stderr shows:
|
||||
```
|
||||
SystemExit: 0 <-- Success!
|
||||
```
|
||||
|
||||
But subprocess.run with `check=True` raises `CalledProcessError`.
|
||||
|
||||
### Root Cause
|
||||
NX wraps Python journals and reports `sys.exit()` as a "Syntax error" in stderr, even for exit code 0.
|
||||
|
||||
### Fix Applied
|
||||
Don't use `check=True`. Instead, verify output file was created:
|
||||
```python
|
||||
result = subprocess.run([nx_exe, journal], capture_output=True, text=True)
|
||||
if not output_file.exists():
|
||||
raise RuntimeError(f"Journal completed but output file not created")
|
||||
```
|
||||
|
||||
### Protocol Fix Required
|
||||
**NX SOLVER WRAPPER**: Never use `check=True` for NX journal execution.
|
||||
|
||||
Create `nx_utils.run_journal_safe()`:
|
||||
```python
|
||||
def run_journal_safe(journal_path, expected_outputs=[]):
|
||||
"""Run NX journal and verify outputs, ignoring exit code"""
|
||||
result = subprocess.run([NX_RUN_JOURNAL, journal_path],
|
||||
capture_output=True, text=True)
|
||||
|
||||
for output_file in expected_outputs:
|
||||
if not Path(output_file).exists():
|
||||
raise RuntimeError(f"Journal failed: {output_file} not created")
|
||||
|
||||
return result
|
||||
```
|
||||
|
||||
---

## Issue #8: OP2 File Naming Mismatch
**Severity**: HIGH
**Category**: File Path Management
**Protocol Violation**: Assumed file naming instead of detecting actual names

### What Happened
Extractor looked for `Bracket_sim1.op2` but NX created `bracket_sim1-solution_1.op2`:
```
ERROR: OP2 file not found: Bracket_sim1.op2
```

### Root Cause
- NX creates the OP2 with a lowercase sim base name
- NX adds a `-solution_1` suffix
- Extractor hardcoded the expected name without checking

### Fix Applied
```python
self.sim_base = Path(sim_file).stem
self.op2_file = self.model_dir / f"{self.sim_base.lower()}-solution_1.op2"
```

### Protocol Fix Required
**FILE DETECTION**: Never hardcode output file names. Always detect or construct from input names.

Create `nx_utils.find_op2_file()`:
```python
def find_op2_file(sim_file: Path, working_dir: Path) -> Path:
    """Find OP2 file generated by NX simulation."""
    sim_base = sim_file.stem.lower()

    # Try common patterns, most specific first
    patterns = [
        f"{sim_base}-solution_1.op2",
        f"{sim_base}.op2",
        f"{sim_base}-*.op2",
    ]

    for pattern in patterns:
        matches = list(working_dir.glob(pattern))
        if matches:
            return matches[0]  # Return first match

    raise FileNotFoundError(f"No OP2 file found for {sim_file}")
```

---

## Issue #9: Field Data Extractor Expects CSV, NX Exports Custom Format
**Severity**: CRITICAL
**Category**: Data Format Mismatch
**Protocol Violation**: Generic extractor not actually generic

### What Happened
```
ERROR: No valid data found in column 'z(mm)'
```

### Root Cause
NX field export format:
```
FIELD: [ResultProbe] : [TABLE]
INDEP VAR: [step] : [] : [] : [0]
INDEP VAR: [node_id] : [] : [] : [5]
DEP VAR: [x] : [Length] : [mm] : [0]
START DATA
0, 396, -0.086716040968895
0, 397, -0.087386816740036
...
END DATA
```

This is NOT a CSV with headers! But `FieldDataExtractor` uses:
```python
reader = csv.DictReader(f)              # Expects CSV headers!
value = float(row[self.result_column])  # Looks for column 'z(mm)'
```

### Fix Required
`FieldDataExtractor` needs a complete rewrite to handle the NX field format:

```python
def _parse_nx_field_file(self, file_path: Path) -> np.ndarray:
    """Parse NX field export format (.fld)."""
    values = []
    in_data_section = False

    with open(file_path, 'r') as f:
        for line in f:
            if line.startswith('START DATA'):
                in_data_section = True
                continue
            if line.startswith('END DATA'):
                break

            if in_data_section:
                parts = line.strip().split(',')
                if len(parts) >= 3:
                    try:
                        value = float(parts[2].strip())  # Third column is value
                        values.append(value)
                    except ValueError:
                        continue

    return np.array(values)
```
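
As a standalone sanity check, the same parsing logic can be exercised on the sample export shown above. This is a minimal sketch using only the two data rows from the excerpt; `parse_nx_field_text` is an illustrative helper, not part of the codebase:

```python
import io

# Sample taken verbatim from the NX field export excerpt above
SAMPLE = """FIELD: [ResultProbe] : [TABLE]
INDEP VAR: [step] : [] : [] : [0]
INDEP VAR: [node_id] : [] : [] : [5]
DEP VAR: [x] : [Length] : [mm] : [0]
START DATA
0, 396, -0.086716040968895
0, 397, -0.087386816740036
END DATA
"""

def parse_nx_field_text(text):
    """Collect the third (value) column between START DATA and END DATA."""
    values = []
    in_data = False
    for line in io.StringIO(text):
        if line.startswith('START DATA'):
            in_data = True
            continue
        if line.startswith('END DATA'):
            break
        if in_data:
            parts = line.strip().split(',')
            if len(parts) >= 3:
                try:
                    values.append(float(parts[2].strip()))
                except ValueError:
                    continue
    return values

print(parse_nx_field_text(SAMPLE))  # [-0.086716040968895, -0.087386816740036]
```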

### Protocol Fix Required
**CRITICAL**: Fix `FieldDataExtractor` to actually parse the NX field format.

The extractor claims to be "generic" and "reusable" but only works with CSV files, not NX field exports!

---

## Issue #10: Grid Point Forces Not Requested in OP2 Output
**Severity**: CRITICAL - BLOCKING ALL TRIALS
**Category**: NX Simulation Configuration
**Protocol Violation**: Missing output request validation

### What Happened
ALL trials (44-74+) are being pruned with the same error:
```
ERROR: Extraction failed: No grid point forces found in OP2 file
```

Simulation completes successfully:
- NX solver runs without errors
- OP2 file is generated and regenerated with fresh timestamps
- Displacement field is exported successfully
- Field data is parsed correctly

But the stiffness calculation fails because the applied force cannot be extracted from the OP2.

### Root Cause
The NX simulation is not configured to output grid point forces to the OP2 file.

Nastran requires explicit output requests in the Case Control section. The bracket simulation likely only requests:
- Displacement results
- Stress results (maybe)

But does NOT request:
- Grid point forces (GPFORCE)

Without this output request, the OP2 file contains nodal displacements but not reaction forces at grid points.

### Evidence
From `stiffness_calculator.py` (optimization_engine/extractors/stiffness_calculator.py):
```python
# Extract applied force from OP2
force_results = self.op2_extractor.extract_force(component=self.force_component)
# Raises: ValueError("No grid point forces found in OP2 file")
```

The OP2Extractor tries to read `op2.grid_point_forces`, which is empty because NX didn't request this output.

### Fix Required
**Option A: Modify NX Simulation Configuration (Recommended)**

Open `Bracket_sim1.sim` in NX and add a grid point forces output request:
1. Edit Solution 1
2. Go to "Solution Control" or "Output Requests"
3. Add "Grid Point Forces" to output requests
4. Save simulation

This will add to the Nastran deck:
```
GPFORCE = ALL
```

**Option B: Extract Forces from Load Definition (Alternative)**

If the applied load is constant and defined in the model, extract it from the .sim file or model expressions instead of relying on the OP2:
```python
# In bracket_stiffness_extractor.py
def _get_applied_force_from_model(self):
    """Extract applied force magnitude from model definition."""
    # Load is 1000N in Z-direction based on model setup
    return 1000.0  # N
```

This is less robust but works if the load is constant.

**Option C: Enhance OP2Extractor to Read from F06 File**

Nastran always writes grid point forces to the F06 text file. Add F06 parsing as a fallback:
```python
def extract_force(self, component='fz'):
    # Try OP2 first
    if self.op2.grid_point_forces:
        return self._extract_from_op2(component)

    # Fallback to F06 file
    f06_file = self.op2_file.with_suffix('.f06')
    if f06_file.exists():
        return self._extract_from_f06(f06_file, component)

    raise ValueError("No grid point forces found in OP2 or F06 file")
```
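
One possible shape for the `_extract_from_f06` fallback, sketched here as a standalone function. The exact column layout of the F06 grid point force balance table (point ID, element ID, a source label such as `APP-LOAD`, then the T1/T2/T3 force components) is an assumption and should be verified against a real F06 from this solver version before relying on it:

```python
def extract_applied_force_from_f06(f06_text, component='fz'):
    """Sum APP-LOAD rows from the F06 grid point force balance table.

    ASSUMED row layout (verify against an actual F06):
      <point-id> <elem-id> APP-LOAD <t1> <t2> <t3> ...
    """
    col = {'fx': 0, 'fy': 1, 'fz': 2}[component]
    total = 0.0
    in_table = False
    for line in f06_text.splitlines():
        if 'G R I D   P O I N T   F O R C E   B A L A N C E' in line:
            in_table = True
            continue
        if in_table and 'APP-LOAD' in line:
            parts = line.split()
            # Force components follow the 'APP-LOAD' source token
            idx = parts.index('APP-LOAD')
            total += float(parts[idx + 1 + col])
    return total

# Synthetic excerpt matching the assumed layout, for illustration only
SAMPLE_F06 = """
      G R I D   P O I N T   F O R C E   B A L A N C E
        396        0    APP-LOAD    0.0    0.0    500.0
        397        0    APP-LOAD    0.0    0.0    500.0
"""
```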

### Protocol Fix Required
**MANDATORY VALIDATION**: Add a pre-flight check for required output requests.

Create `nx_utils.validate_simulation_outputs()` (sketch below scans the exported solver deck for the Case Control keyword corresponding to each logical output; it assumes the .dat is exported next to the .sim file):
```python
# Case Control keywords corresponding to each logical output
_OUTPUT_KEYWORDS = {
    'displacement': 'DISPLACEMENT',
    'stress': 'STRESS',
    'grid_point_forces': 'GPFORCE',
}

def validate_simulation_outputs(sim_file: Path, required_outputs: list):
    """
    Validate that NX simulation has required output requests configured.

    Args:
        sim_file: Path to .sim file
        required_outputs: List of required outputs, e.g.,
            ['displacement', 'stress', 'grid_point_forces']

    Raises:
        ValueError: If required outputs are not configured
    """
    dat_file = sim_file.with_suffix('.dat')  # assumes deck exported beside .sim
    deck = dat_file.read_text().upper() if dat_file.exists() else ''
    missing = [out for out in required_outputs
               if _OUTPUT_KEYWORDS.get(out, out.upper()) not in deck]
    if missing:
        raise ValueError(
            f"Simulation {sim_file.name} is missing output requests: {missing}. "
            f"Add them under Solution -> Output Requests in NX and re-save.")
```

Call this validation BEFORE starting optimization:
```python
# In run_optimization.py, before optimizer.optimize()
validate_simulation_outputs(
    sim_file=sim_file,
    required_outputs=['displacement', 'grid_point_forces']
)
```

### Immediate Action
**For bracket study**: Open Bracket_sim1.sim in NX and add a Grid Point Forces output request.

---

## Summary of Protocol Fixes Needed

### HIGH PRIORITY (Blocking)
1. ✅ Fix `FieldDataExtractor` to parse NX field format
2. ✅ Create "no unicode" rule and safe_print utilities
3. ✅ Enforce config.py usage in all templates
4. ✅ Update Protocol 10 for multi-objective support
5. ❌ **CURRENT BLOCKER**: Fix grid point forces extraction (Issue #10)

### MEDIUM PRIORITY (Quality)
6. ✅ Create NX journal template with file opening
7. ✅ Create nx_utils.run_journal_safe() wrapper
8. ✅ Create nx_utils.find_op2_file() detection
9. ✅ Add naming convention (import config as atomizer_config)

### DOCUMENTATION
10. ✅ Document solution_name parameter behavior
11. ✅ Update Protocol 10 docs with multi-objective examples
12. ✅ Create "Windows Compatibility Guide"
13. ✅ Add field file format documentation

---

## Lessons Learned

### What Went Wrong
1. **Generic tools weren't actually generic** - FieldDataExtractor only worked for CSV
2. **No validation of central config usage** - Easy to forget to import
3. **Unicode symbols slip in during development** - Need linter check
4. **Subprocess error handling assumed standard behavior** - NX is non-standard
5. **File naming assumptions instead of detection** - Brittle
6. **Protocol 10 feature gap** - Claims multi-objective but didn't implement it
7. **Journal templates incomplete** - Didn't handle file opening

### What Should Have Been Caught
- A pre-flight validation script should check:
  - ✅ No unicode in any .py files
  - ✅ All studies import config.py
  - ✅ All output files use detected names, not hardcoded
  - ✅ All journals can run standalone (no assumptions about open files)

### Time Lost
- Approximately 60+ minutes debugging issues that should have been prevented
- Would have been 5 minutes to run successfully with proper templates

---

## Action Items

1. [ ] Rewrite FieldDataExtractor to handle NX format
2. [ ] Create pre-flight validation script
3. [ ] Update all study templates
4. [ ] Add linter rules for unicode detection
5. [ ] Create nx_utils module with safe wrappers
6. [ ] Update Protocol 10 documentation
7. [ ] Create Windows compatibility guide
8. [ ] Add integration tests for NX file formats

---

**Next Step**: Fix FieldDataExtractor and test the complete workflow end-to-end.

236
docs/archive/historical/CRITICAL_ISSUES_ROADMAP.md
Normal file
@@ -0,0 +1,236 @@

# CRITICAL ISSUES - IMMEDIATE ACTION REQUIRED

**Date:** 2025-11-21
**Status:** 🚨 BLOCKING PRODUCTION USE

## Issue 1: Real-Time Tracking Files - **MANDATORY EVERY ITERATION**

### Current State ❌
- Intelligent optimizer only writes tracking files at END of optimization
- Dashboard cannot show real-time progress
- No visibility into optimizer state during execution

### Required Behavior ✅
```
AFTER EVERY SINGLE TRIAL:
1. Write optimizer_state.json (current strategy, confidence, phase)
2. Write strategy_history.json (append new recommendation)
3. Write landscape_snapshot.json (current analysis if available)
4. Write trial_log.json (append trial result with timestamp)
```

### Implementation Plan
1. Create `RealtimeCallback` class that triggers after each trial
2. Hook into `study.optimize(..., callbacks=[realtime_callback])`
3. Write incremental JSON files to `intelligent_optimizer/` folder
4. Files must be atomic writes (temp file + rename)
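
The plan above can be sketched without pulling in Optuna itself: the callback below follows Optuna's `callbacks=[...]` calling convention (`callback(study, trial)`), and the file names mirror the tracking layout described in this document. Treat it as a minimal sketch, not the final implementation:

```python
import json
import os
import tempfile
from pathlib import Path

def atomic_write_json(path: Path, data) -> None:
    """Write JSON via temp file + os.replace so readers never see partial files."""
    fd, tmp = tempfile.mkstemp(dir=path.parent, suffix='.tmp')
    try:
        with os.fdopen(fd, 'w') as f:
            json.dump(data, f, indent=2)
        os.replace(tmp, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)
        raise

class RealtimeCallback:
    """Write tracking files after EVERY trial (Optuna callback convention)."""

    def __init__(self, out_dir: Path):
        self.out_dir = Path(out_dir)
        self.out_dir.mkdir(parents=True, exist_ok=True)

    def __call__(self, study, trial) -> None:
        # `study` is unused in this sketch; a real callback would also dump
        # strategy/landscape state here.
        atomic_write_json(self.out_dir / 'optimizer_state.json',
                          {'last_trial': trial.number, 'last_value': trial.value})

        # Append-only trial log: read, append, rewrite atomically
        log_file = self.out_dir / 'trial_log.json'
        log = json.loads(log_file.read_text()) if log_file.exists() else []
        log.append({'number': trial.number, 'value': trial.value,
                    'params': trial.params})
        atomic_write_json(log_file, log)

# Minimal usage demo with stand-in trial objects
class _Trial:
    def __init__(self, number, value, params):
        self.number, self.value, self.params = number, value, params

out = Path(tempfile.mkdtemp())
cb = RealtimeCallback(out)
cb(study=None, trial=_Trial(0, 1.5, {'x': 2.0}))
cb(study=None, trial=_Trial(1, 0.7, {'x': 1.0}))
```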

### Files to Modify
- `optimization_engine/intelligent_optimizer.py` - Add callback system
- New file: `optimization_engine/realtime_tracking.py` - Callback implementation

---

## Issue 2: Dashboard - Complete Overhaul Required

### Current Problems ❌
1. **No Pareto front plot** for multi-objective
2. **No parallel coordinates** for high-dimensional visualization
3. **Units hardcoded/wrong** - should read from optimization_config.json
4. **Convergence plot backwards** - X-axis should be trial number (already is, but user reports issue)
5. **No objective normalization** - raw values make comparison difficult
6. **Missing intelligent optimizer panel** - no real-time strategy display
7. **Poor UX** - not professional looking

### Required Features ✅

#### A. Intelligent Optimizer Panel (NEW)
```typescript
<OptimizerPanel>
  - Current Phase: "Characterization" | "Optimization" | "Refinement"
  - Current Strategy: "TPE" | "CMA-ES" | "Random" | "GP-BO"
  - Confidence: 0.95 (progress bar)
  - Trials in Phase: 15/30
  - Strategy Transitions: Timeline view
  - Landscape Type: "Smooth Unimodal" | "Rugged Multi-modal" | etc.
</OptimizerPanel>
```

#### B. Pareto Front Plot (Multi-Objective)
```typescript
<ParetoPlot objectives={study.objectives}>
  - 2D scatter: objective1 vs objective2
  - Color by constraint satisfaction
  - Interactive: click to see design variables
  - Dominance regions shaded
</ParetoPlot>
```

#### C. Parallel Coordinates (Multi-Objective)
```typescript
<ParallelCoordinates>
  - One axis per design variable + objectives
  - Lines colored by Pareto front membership
  - Interactive brushing to filter solutions
</ParallelCoordinates>
```

#### D. Dynamic Units & Metadata
```typescript
// Read from optimization_config.json
interface StudyMetadata {
  objectives: Array<{name: string, type: 'minimize'|'maximize', unit?: string}>
  design_variables: Array<{name: string, unit?: string, min: number, max: number}>
  constraints: Array<{name: string, type: string, value: number}>
}
```

#### E. Normalized Objectives
```typescript
// Option 1: Min-Max normalization (0-1 scale)
normalized = (value - min) / (max - min)

// Option 2: Z-score normalization
normalized = (value - mean) / stddev
```
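
On the backend, both normalization options reduce to a few lines; the sketch below is illustrative (the function names are my own, and the degenerate constant-series case is handled by convention):

```python
from statistics import mean, stdev

def min_max_normalize(values):
    """Scale values to [0, 1]; a constant series maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def z_score_normalize(values):
    """Center on the mean and scale by (sample) standard deviation."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]
```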

### Implementation Plan
1. **Backend:** Add `/api/studies/{id}/metadata` endpoint (read config)
2. **Backend:** Add `/api/studies/{id}/optimizer-state` endpoint (read real-time JSON)
3. **Frontend:** Create `<OptimizerPanel>` component
4. **Frontend:** Create `<ParetoPlot>` component (use Recharts)
5. **Frontend:** Create `<ParallelCoordinates>` component (use D3.js or Plotly)
6. **Frontend:** Refactor `Dashboard.tsx` with new layout

---

## Issue 3: Multi-Objective Strategy Selection (FIXED ✅)

**Status:** Completed - Protocol 12 implemented
- Multi-objective now uses: Random (8 trials) → TPE with multivariate
- No longer stuck on random for the entire optimization

---

## Issue 4: Missing Tracking Files in V2 Study

### Root Cause
V2 study ran with OLD code (before Protocol 12). All 30 trials used the random strategy.

### Solution
Re-run the V2 study with the fixed optimizer:
```bash
cd studies/bracket_stiffness_optimization_V2
# Clear old results
del /Q 2_results\study.db
rd /S /Q 2_results\intelligent_optimizer
# Run with new code
python run_optimization.py --trials 50
```

---

## Priority Order

### P0 - CRITICAL (Do Immediately)
1. ✅ Fix multi-objective strategy selector (DONE - Protocol 12)
2. 🚧 Implement per-trial tracking callback
3. 🚧 Add intelligent optimizer panel to dashboard
4. 🚧 Add Pareto front plot

### P1 - HIGH (Do Today)
5. Add parallel coordinates plot
6. Implement dynamic units (read from config)
7. Add objective normalization toggle

### P2 - MEDIUM (Do This Week)
8. Improve dashboard UX/layout
9. Add hypervolume indicator for multi-objective
10. Create optimization report generator

---

## Testing Protocol

After implementing each fix:

1. **Per-Trial Tracking Test**
   ```bash
   # Run optimization and check files appear immediately
   python run_optimization.py --trials 10
   # Verify: intelligent_optimizer/*.json files update EVERY trial
   ```

2. **Dashboard Test**
   ```bash
   # Start backend + frontend
   # Navigate to http://localhost:3001
   # Verify: All panels update in real-time
   # Verify: Pareto front appears for multi-objective
   # Verify: Units match optimization_config.json
   ```

3. **Multi-Objective Test**
   ```bash
   # Re-run bracket_stiffness_optimization_V2
   # Verify: Strategy switches from random → TPE after 8 trials
   # Verify: Tracking files generated every trial
   # Verify: Pareto front has 10+ solutions
   ```

---

## Code Architecture

### Realtime Tracking System
```
intelligent_optimizer/
├── optimizer_state.json       # Updated every trial
├── strategy_history.json      # Append-only log
├── landscape_snapshots.json   # Updated when landscape analyzed
├── trial_log.json             # Append-only with timestamps
├── confidence_history.json    # Confidence over time
└── strategy_transitions.json  # When/why strategy changed
```

### Dashboard Data Flow
```
Trial Complete
    ↓
Optuna Callback
    ↓
Write JSON Files (atomic)
    ↓
Backend API detects file change
    ↓
WebSocket broadcast to frontend
    ↓
Dashboard components update
```

---

## Estimated Effort

- **Per-Trial Tracking:** 2-3 hours
- **Dashboard Overhaul:** 6-8 hours
  - Optimizer Panel: 1 hour
  - Pareto Plot: 2 hours
  - Parallel Coordinates: 2 hours
  - Dynamic Units: 1 hour
  - Layout/UX: 2 hours

**Total:** 8-11 hours for a production-ready system

---

## Success Criteria

✅ **After implementation:**
1. User can see optimizer strategy change in real-time
2. Intelligent optimizer folder updates EVERY trial (not batched)
3. Dashboard shows Pareto front for multi-objective studies
4. Dashboard units are dynamic (read from config)
5. Dashboard is professional quality (like Optuna Dashboard or Weights & Biases)
6. No hardcoded assumptions (Hz, single-objective, etc.)

843
docs/archive/historical/FEATURE_REGISTRY_ARCHITECTURE.md
Normal file
@@ -0,0 +1,843 @@

# Feature Registry Architecture

> Comprehensive guide to Atomizer's LLM-instructed feature database system

**Last Updated**: 2025-01-16
**Status**: Phase 2 - Design Document

---

## Table of Contents

1. [Vision and Goals](#vision-and-goals)
2. [Feature Categorization System](#feature-categorization-system)
3. [Feature Registry Structure](#feature-registry-structure)
4. [LLM Instruction Format](#llm-instruction-format)
5. [Feature Documentation Strategy](#feature-documentation-strategy)
6. [Dynamic Tool Building](#dynamic-tool-building)
7. [Examples](#examples)
8. [Implementation Plan](#implementation-plan)

---

## Vision and Goals

### Core Philosophy

Atomizer's feature registry is not just a catalog - it's an **LLM instruction system** that enables:

1. **Self-Documentation**: Features describe themselves to the LLM
2. **Intelligent Composition**: LLM can combine features into workflows
3. **Autonomous Proposals**: LLM suggests new features based on user needs
4. **Structured Customization**: Users customize the tool through natural language
5. **Continuous Evolution**: Feature database grows as users add capabilities

### Key Principles

- **Feature Types Are First-Class**: Engineering, software, UI, and analysis features are equally important
- **Location-Aware**: Features know where their code lives and how to use it
- **Metadata-Rich**: Each feature has enough context for the LLM to understand and use it
- **Composable**: Features can be combined into higher-level workflows
- **Extensible**: New feature types can be added without breaking the system

---

## Feature Categorization System

### Primary Feature Dimensions

Features are organized along **three dimensions**:

#### Dimension 1: Domain (WHAT it does)
- **Engineering**: Physics-based operations (stress, thermal, modal, etc.)
- **Software**: Core algorithms and infrastructure (optimization, hooks, path resolution)
- **UI**: User-facing components (dashboard, reports, visualization)
- **Analysis**: Post-processing and decision support (sensitivity, Pareto, surrogate quality)

#### Dimension 2: Lifecycle Stage (WHEN it runs)
- **Pre-Mesh**: Before meshing (geometry operations)
- **Pre-Solve**: Before FEA solve (parameter updates, logging)
- **Solve**: During FEA execution (solver control)
- **Post-Solve**: After solve, before extraction (file validation)
- **Post-Extraction**: After result extraction (logging, analysis)
- **Post-Optimization**: After optimization completes (reporting, visualization)

#### Dimension 3: Abstraction Level (HOW it's used)
- **Primitive**: Low-level functions (extract_stress, update_expression)
- **Composite**: Mid-level workflows (RSS_metric, weighted_objective)
- **Workflow**: High-level operations (run_optimization, generate_report)

### Feature Type Classification

```
┌─────────────────────────────────────────────────────────────┐
│                      FEATURE UNIVERSE                       │
└─────────────────────────────────────────────────────────────┘
                              │
        ┌─────────────────────┼─────────────────────┐
        │                     │                     │
   ENGINEERING             SOFTWARE                 UI
        │                     │                     │
    ┌───┴───┐            ┌────┴────┐          ┌─────┴─────┐
    │       │            │         │          │           │
Extractors Metrics  Optimization Hooks    Dashboard    Reports
    │       │            │         │          │           │
 Stress    RSS        Optuna   Pre-Solve   Widgets      HTML
 Thermal   SCF        TPE      Post-Solve  Controls     PDF
 Modal     FOS        Sampler  Post-Extract Charts     Markdown
```

---

## Feature Registry Structure

### JSON Schema

```json
{
  "feature_registry": {
    "version": "0.2.0",
    "last_updated": "2025-01-16",
    "categories": {
      "engineering": { ... },
      "software": { ... },
      "ui": { ... },
      "analysis": { ... }
    }
  }
}
```

### Feature Entry Schema

Each feature has:

```json
{
  "feature_id": "unique_identifier",
  "name": "Human-Readable Name",
  "description": "What this feature does (for LLM understanding)",
  "category": "engineering|software|ui|analysis",
  "subcategory": "extractors|metrics|optimization|hooks|...",
  "lifecycle_stage": "pre_solve|post_solve|post_extraction|...",
  "abstraction_level": "primitive|composite|workflow",
  "implementation": {
    "file_path": "relative/path/to/implementation.py",
    "function_name": "function_or_class_name",
    "entry_point": "how to invoke this feature"
  },
  "interface": {
    "inputs": [
      {
        "name": "parameter_name",
        "type": "str|int|float|dict|list",
        "required": true,
        "description": "What this parameter does",
        "units": "mm|MPa|Hz|none",
        "example": "example_value"
      }
    ],
    "outputs": [
      {
        "name": "output_name",
        "type": "float|dict|list",
        "description": "What this output represents",
        "units": "mm|MPa|Hz|none"
      }
    ]
  },
  "dependencies": {
    "features": ["feature_id_1", "feature_id_2"],
    "libraries": ["optuna", "pyNastran"],
    "nx_version": "2412"
  },
  "usage_examples": [
    {
      "description": "Example scenario",
      "code": "example_code_snippet",
      "natural_language": "How user would request this"
    }
  ],
  "composition_hints": {
    "combines_with": ["feature_id_3", "feature_id_4"],
    "typical_workflows": ["workflow_name_1"],
    "prerequisites": ["feature that must run before this"]
  },
  "metadata": {
    "author": "Antoine Polvé",
    "created": "2025-01-16",
    "status": "stable|experimental|deprecated",
    "tested": true,
    "documentation_url": "docs/features/feature_name.md"
  }
}
```
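
A lightweight structural check against this schema can catch malformed entries before the LLM ever reads them. The sketch below validates only a minimal subset of required top-level keys chosen for illustration; it is not a full JSON Schema validator:

```python
REQUIRED_KEYS = {
    'feature_id', 'name', 'description', 'category',
    'subcategory', 'lifecycle_stage', 'abstraction_level',
    'implementation', 'interface',
}

VALID_CATEGORIES = {'engineering', 'software', 'ui', 'analysis'}

def validate_feature_entry(entry):
    """Return a list of problems with a feature entry (empty list = valid)."""
    problems = []
    missing = REQUIRED_KEYS - entry.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    if entry.get('category') not in VALID_CATEGORIES:
        problems.append(f"unknown category: {entry.get('category')!r}")
    impl = entry.get('implementation', {})
    if not impl.get('file_path'):
        problems.append("implementation.file_path is empty")
    return problems

# Illustrative entry following the schema above
ok_entry = {
    'feature_id': 'stress_extractor', 'name': 'Stress Extractor',
    'description': 'Extracts von Mises stress from OP2 files',
    'category': 'engineering', 'subcategory': 'extractors',
    'lifecycle_stage': 'post_extraction', 'abstraction_level': 'primitive',
    'implementation': {'file_path': 'optimization_engine/result_extractors/extractors.py'},
    'interface': {'inputs': [], 'outputs': []},
}
```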

---

## LLM Instruction Format

### How LLM Uses the Registry

The feature registry serves as a **structured instruction manual** for the LLM:

#### 1. Discovery Phase
```
User: "I want to minimize stress on my bracket"

LLM reads registry:
→ Finds category="engineering", subcategory="extractors"
→ Discovers "stress_extractor" feature
→ Reads: "Extracts von Mises stress from OP2 files"
→ Checks composition_hints: combines_with=["optimization_runner"]

LLM response: "I'll use the stress_extractor feature to minimize stress.
This requires an OP2 file from NX solve."
```

#### 2. Composition Phase
```
User: "Add a custom RSS metric combining stress and displacement"

LLM reads registry:
→ Finds abstraction_level="composite" examples
→ Discovers "rss_metric" template feature
→ Reads interface: inputs=[stress_value, displacement_value]
→ Checks composition_hints: combines_with=["stress_extractor", "displacement_extractor"]

LLM generates new composite feature following the pattern
```

#### 3. Proposal Phase
```
User: "What features could help me analyze fatigue life?"

LLM reads registry:
→ Searches category="engineering", subcategory="extractors"
→ Finds: stress_extractor, displacement_extractor (exist)
→ Doesn't find: fatigue_extractor (missing)
→ Reads composition_hints for similar features

LLM proposes: "I can create a fatigue_life_extractor that:
1. Extracts stress history from OP2
2. Applies rainflow counting algorithm
3. Uses S-N curve to estimate fatigue life

This would be similar to stress_extractor but with
time-series analysis. Should I implement it?"
```

#### 4. Execution Phase
```
User: "Run the optimization"

LLM reads registry:
→ Finds abstraction_level="workflow", feature_id="run_optimization"
→ Reads implementation.entry_point
→ Checks dependencies: ["optuna", "nx_solver", "stress_extractor"]
→ Reads lifecycle_stage to understand execution order

LLM executes: python optimization_engine/runner.py
```

### Natural Language Mapping

Each feature includes `natural_language` examples showing how users might request it:

```json
"usage_examples": [
  {
    "natural_language": [
      "minimize stress",
      "reduce von Mises stress",
      "find lowest stress configuration",
      "optimize for minimum stress"
    ],
    "maps_to": {
      "feature": "stress_extractor",
      "objective": "minimize",
      "metric": "max_von_mises"
    }
  }
]
```

This enables the LLM to understand user intent and select correct features.
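
One way to ground this mapping in code is a simple phrase-overlap lookup over each entry's `usage_examples`; the scoring heuristic below is my own illustration, not part of the registry spec:

```python
def match_feature(user_request, registry_entries):
    """Pick the feature whose natural-language examples best overlap the request."""
    request_words = set(user_request.lower().split())
    best_id, best_score = None, 0
    for entry in registry_entries:
        for example in entry.get('usage_examples', []):
            for phrase in example.get('natural_language', []):
                # Score = number of shared words between phrase and request
                score = len(request_words & set(phrase.lower().split()))
                if score > best_score:
                    best_id, best_score = entry['feature_id'], score
    return best_id

# Illustrative mini-registry in the format shown above
REGISTRY = [
    {'feature_id': 'stress_extractor',
     'usage_examples': [{'natural_language': ['minimize stress',
                                              'reduce von Mises stress']}]},
    {'feature_id': 'modal_extractor',
     'usage_examples': [{'natural_language': ['maximize first natural frequency']}]},
]
```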
|
||||
---
|
||||
|
||||
## Feature Documentation Strategy
|
||||
|
||||
### Multi-Location Documentation
|
||||
|
||||
Features are documented in **three places**, each serving different purposes:
|
||||
|
||||
#### 1. Feature Registry (feature_registry.json)
|
||||
**Purpose**: LLM instruction and discovery
|
||||
**Location**: `optimization_engine/feature_registry.json`
|
||||
**Content**:
|
||||
- Structured metadata
|
||||
- Interface definitions
|
||||
- Composition hints
|
||||
- Usage examples
|
||||
|
||||
**Example**:
|
||||
```json
|
||||
{
|
||||
"feature_id": "stress_extractor",
|
||||
"name": "Stress Extractor",
|
||||
"description": "Extracts von Mises stress from OP2 files",
|
||||
"category": "engineering",
|
||||
"subcategory": "extractors"
|
||||
}
|
||||
```
|
||||
|
||||
#### 2. Code Implementation (*.py files)
|
||||
**Purpose**: Actual functionality
|
||||
**Location**: Codebase (e.g., `optimization_engine/result_extractors/extractors.py`)
|
||||
**Content**:
|
||||
- Python code with docstrings
|
||||
- Type hints
|
||||
- Implementation details
|
||||
|
||||
**Example**:
|
||||
```python
|
||||
def extract_stress_from_op2(op2_file: Path) -> dict:
|
||||
"""
|
||||
Extracts von Mises stress from OP2 file.
|
||||
|
||||
Args:
|
||||
op2_file: Path to OP2 file
|
||||
|
||||
Returns:
|
||||
dict with max_von_mises, min_von_mises, avg_von_mises
|
||||
"""
|
||||
# Implementation...
|
||||
```
|
||||
|
||||
#### 3. Feature Documentation (docs/features/*.md)
|
||||
**Purpose**: Human-readable guides and tutorials
|
||||
**Location**: `docs/features/`
|
||||
**Content**:
|
||||
- Detailed explanations
|
||||
- Extended examples
|
||||
- Best practices
|
||||
- Troubleshooting
|
||||
|
||||
**Example**: `docs/features/stress_extractor.md`
|
||||
```markdown
|
||||
# Stress Extractor
|
||||
|
||||
## Overview
|
||||
Extracts von Mises stress from NX Nastran OP2 files.
|
||||
|
||||
## When to Use
|
||||
- Structural optimization where stress is the objective
|
||||
- Constraint checking (yield stress limits)
|
||||
- Multi-objective with stress as one objective
|
||||
|
||||
## Example Workflows
|
||||
[detailed examples...]
|
||||
```
|
||||
|
||||
### Documentation Flow

```
User Request
    ↓
LLM reads feature_registry.json (discovers feature)
    ↓
LLM reads code docstrings (understands interface)
    ↓
LLM reads docs/features/*.md (if complex usage needed)
    ↓
LLM composes workflow using features
```

---

## Dynamic Tool Building

### How LLM Builds New Features

The registry enables **autonomous feature creation** through templates and patterns:

#### Step 1: Pattern Recognition
```
User: "I need thermal stress extraction"

LLM:
1. Reads existing feature: stress_extractor
2. Identifies pattern: OP2 parsing → result extraction → return dict
3. Finds similar features: displacement_extractor
4. Recognizes template: engineering.extractors
```

#### Step 2: Feature Generation
```
LLM generates new feature following pattern:
{
  "feature_id": "thermal_stress_extractor",
  "name": "Thermal Stress Extractor",
  "description": "Extracts thermal stress from OP2 files (steady-state heat transfer analysis)",
  "category": "engineering",
  "subcategory": "extractors",
  "lifecycle_stage": "post_extraction",
  "abstraction_level": "primitive",
  "implementation": {
    "file_path": "optimization_engine/result_extractors/thermal_extractors.py",
    "function_name": "extract_thermal_stress_from_op2",
    "entry_point": "from optimization_engine.result_extractors.thermal_extractors import extract_thermal_stress_from_op2"
  },
  # ... rest of schema
}
```

#### Step 3: Code Generation
```python
# LLM writes implementation following stress_extractor pattern
from pathlib import Path

def extract_thermal_stress_from_op2(op2_file: Path) -> dict:
    """
    Extracts thermal stress from OP2 file.

    Args:
        op2_file: Path to OP2 file from thermal analysis

    Returns:
        dict with max_thermal_stress, temperature_at_max_stress
    """
    from pyNastran.op2.op2 import OP2

    op2 = OP2()
    op2.read_op2(op2_file)

    # Extract thermal stress (element type depends on analysis)
    thermal_stress = op2.thermal_stress_data

    return {
        'max_thermal_stress': thermal_stress.max(),
        'temperature_at_max_stress': None,  # TODO: locate temperature at the max-stress element
    }
```

#### Step 4: Registration
```
LLM adds to feature_registry.json
LLM creates docs/features/thermal_stress_extractor.md
LLM updates CHANGELOG.md with new feature
LLM runs tests to validate implementation
```
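The first of those registration steps can be sketched as a small helper. `register_feature` is hypothetical (the project may implement this differently) and assumes the same top-level `features` list used in the examples in this document:

```python
import json
from pathlib import Path

def register_feature(registry_path: Path, entry: dict) -> bool:
    """Append a feature entry to feature_registry.json, skipping duplicates by feature_id."""
    if registry_path.exists():
        registry = json.loads(registry_path.read_text())
    else:
        registry = {"features": []}
    if any(f["feature_id"] == entry["feature_id"] for f in registry["features"]):
        return False  # already registered; keep repeated generation runs idempotent
    registry["features"].append(entry)
    registry_path.write_text(json.dumps(registry, indent=2))
    return True
```

Deduplicating on `feature_id` means the LLM can safely re-run registration after a failed attempt without corrupting the registry.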
### Feature Composition Examples

#### Example 1: RSS Metric (Composite Feature)
```
User: "Create RSS metric combining stress and displacement"

LLM composes from primitives:
stress_extractor + displacement_extractor → rss_metric

Generated feature:
{
  "feature_id": "rss_stress_displacement",
  "abstraction_level": "composite",
  "dependencies": {
    "features": ["stress_extractor", "displacement_extractor"]
  },
  "composition_hints": {
    "composed_from": ["stress_extractor", "displacement_extractor"],
    "composition_type": "root_sum_square"
  }
}
```
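Numerically, the composite reduces to a root-sum-square over the two primitive outputs. A minimal sketch follows; the function name and the normalization limits are illustrative, not taken from the project:

```python
import math

def rss_metric(stress_result: dict, displacement_result: dict) -> float:
    """Root-sum-square of normalized stress and displacement objectives."""
    # Normalization limits below are assumed for illustration, not project config
    stress = stress_result["max_von_mises"] / 250.0               # MPa vs. assumed yield limit
    displacement = displacement_result["max_displacement"] / 2.0  # mm vs. assumed allowable
    return math.sqrt(stress ** 2 + displacement ** 2)

print(rss_metric({"max_von_mises": 150.0}, {"max_displacement": 1.6}))  # → 1.0
```

Normalizing before combining keeps the two quantities (MPa and mm) commensurable in the single scalar objective.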
#### Example 2: Complete Workflow
```
User: "Run bracket optimization minimizing stress"

LLM composes workflow from features:
1. study_manager (create study folder)
2. nx_updater (update wall_thickness parameter)
3. nx_solver (run FEA)
4. stress_extractor (extract results)
5. optimization_runner (Optuna TPE loop)
6. report_generator (create HTML report)

Each step uses a feature from the registry, sequenced
according to its lifecycle_stage metadata.
```
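The sequencing rule in the last line can be expressed as a sort over `lifecycle_stage`. The stage order below is an assumption inferred from the stages this document mentions, and the stages assigned to `nx_updater` and `report_generator` are guessed for illustration:

```python
# Assumed lifecycle order; the registry only labels each feature's own stage
STAGE_ORDER = ["pre_solve", "post_solve", "post_extraction", "post_optimization"]

def order_workflow(features: list[dict]) -> list[str]:
    """Sort selected features into execution order by lifecycle stage."""
    ranked = sorted(features, key=lambda f: STAGE_ORDER.index(f["lifecycle_stage"]))
    return [f["feature_id"] for f in ranked]

selected = [
    {"feature_id": "report_generator", "lifecycle_stage": "post_optimization"},
    {"feature_id": "nx_updater", "lifecycle_stage": "pre_solve"},
    {"feature_id": "stress_extractor", "lifecycle_stage": "post_extraction"},
]
print(order_workflow(selected))  # → ['nx_updater', 'stress_extractor', 'report_generator']
```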

---

## Examples

### Example 1: Engineering Feature (Stress Extractor)

```json
{
  "feature_id": "stress_extractor",
  "name": "Stress Extractor",
  "description": "Extracts von Mises stress from NX Nastran OP2 files",
  "category": "engineering",
  "subcategory": "extractors",
  "lifecycle_stage": "post_extraction",
  "abstraction_level": "primitive",
  "implementation": {
    "file_path": "optimization_engine/result_extractors/extractors.py",
    "function_name": "extract_stress_from_op2",
    "entry_point": "from optimization_engine.result_extractors.extractors import extract_stress_from_op2"
  },
  "interface": {
    "inputs": [
      {
        "name": "op2_file",
        "type": "Path",
        "required": true,
        "description": "Path to OP2 file from NX solve",
        "example": "bracket_sim1-solution_1.op2"
      }
    ],
    "outputs": [
      {
        "name": "max_von_mises",
        "type": "float",
        "description": "Maximum von Mises stress across all elements",
        "units": "MPa"
      },
      {
        "name": "element_id_at_max",
        "type": "int",
        "description": "Element ID where max stress occurs"
      }
    ]
  },
  "dependencies": {
    "features": [],
    "libraries": ["pyNastran"],
    "nx_version": "2412"
  },
  "usage_examples": [
    {
      "description": "Minimize stress in bracket optimization",
      "code": "result = extract_stress_from_op2(Path('bracket.op2'))\nmax_stress = result['max_von_mises']",
      "natural_language": [
        "minimize stress",
        "reduce von Mises stress",
        "find lowest stress configuration"
      ]
    }
  ],
  "composition_hints": {
    "combines_with": ["displacement_extractor", "mass_extractor"],
    "typical_workflows": ["structural_optimization", "stress_minimization"],
    "prerequisites": ["nx_solver"]
  },
  "metadata": {
    "author": "Antoine Polvé",
    "created": "2025-01-10",
    "status": "stable",
    "tested": true,
    "documentation_url": "docs/features/stress_extractor.md"
  }
}
```

### Example 2: Software Feature (Hook Manager)

```json
{
  "feature_id": "hook_manager",
  "name": "Hook Manager",
  "description": "Manages plugin lifecycle hooks for optimization workflow",
  "category": "software",
  "subcategory": "infrastructure",
  "lifecycle_stage": "all",
  "abstraction_level": "composite",
  "implementation": {
    "file_path": "optimization_engine/plugins/hook_manager.py",
    "function_name": "HookManager",
    "entry_point": "from optimization_engine.plugins.hook_manager import HookManager"
  },
  "interface": {
    "inputs": [
      {
        "name": "hook_type",
        "type": "str",
        "required": true,
        "description": "Lifecycle point: pre_solve, post_solve, post_extraction",
        "example": "pre_solve"
      },
      {
        "name": "context",
        "type": "dict",
        "required": true,
        "description": "Context data passed to hooks (trial_number, design_variables, etc.)"
      }
    ],
    "outputs": [
      {
        "name": "execution_history",
        "type": "list",
        "description": "List of hooks executed with timestamps and success status"
      }
    ]
  },
  "dependencies": {
    "features": [],
    "libraries": [],
    "nx_version": null
  },
  "usage_examples": [
    {
      "description": "Execute pre-solve hooks before FEA",
      "code": "hook_manager.execute_hooks('pre_solve', context={'trial': 1})",
      "natural_language": [
        "run pre-solve plugins",
        "execute hooks before solving"
      ]
    }
  ],
  "composition_hints": {
    "combines_with": ["detailed_logger", "optimization_logger"],
    "typical_workflows": ["optimization_runner"],
    "prerequisites": []
  },
  "metadata": {
    "author": "Antoine Polvé",
    "created": "2025-01-16",
    "status": "stable",
    "tested": true,
    "documentation_url": "docs/features/hook_manager.md"
  }
}
```

### Example 3: UI Feature (Dashboard Widget)

```json
{
  "feature_id": "optimization_progress_chart",
  "name": "Optimization Progress Chart",
  "description": "Real-time chart showing optimization convergence",
  "category": "ui",
  "subcategory": "dashboard_widgets",
  "lifecycle_stage": "post_optimization",
  "abstraction_level": "composite",
  "implementation": {
    "file_path": "dashboard/frontend/components/ProgressChart.js",
    "function_name": "OptimizationProgressChart",
    "entry_point": "new OptimizationProgressChart(containerId)"
  },
  "interface": {
    "inputs": [
      {
        "name": "trial_data",
        "type": "list[dict]",
        "required": true,
        "description": "List of trial results with objective values",
        "example": "[{trial: 1, value: 45.3}, {trial: 2, value: 42.1}]"
      }
    ],
    "outputs": [
      {
        "name": "chart_element",
        "type": "HTMLElement",
        "description": "Rendered chart DOM element"
      }
    ]
  },
  "dependencies": {
    "features": [],
    "libraries": ["Chart.js"],
    "nx_version": null
  },
  "usage_examples": [
    {
      "description": "Display optimization progress in dashboard",
      "code": "chart = new OptimizationProgressChart('chart-container')\nchart.update(trial_data)",
      "natural_language": [
        "show optimization progress",
        "display convergence chart",
        "visualize trial results"
      ]
    }
  ],
  "composition_hints": {
    "combines_with": ["trial_history_table", "best_parameters_display"],
    "typical_workflows": ["dashboard_view", "result_monitoring"],
    "prerequisites": ["optimization_runner"]
  },
  "metadata": {
    "author": "Antoine Polvé",
    "created": "2025-01-10",
    "status": "stable",
    "tested": true,
    "documentation_url": "docs/features/dashboard_widgets.md"
  }
}
```

### Example 4: Analysis Feature (Surrogate Quality Checker)

```json
{
  "feature_id": "surrogate_quality_checker",
  "name": "Surrogate Quality Checker",
  "description": "Evaluates surrogate model quality using R², CV score, and confidence intervals",
  "category": "analysis",
  "subcategory": "decision_support",
  "lifecycle_stage": "post_optimization",
  "abstraction_level": "composite",
  "implementation": {
    "file_path": "optimization_engine/analysis/surrogate_quality.py",
    "function_name": "check_surrogate_quality",
    "entry_point": "from optimization_engine.analysis.surrogate_quality import check_surrogate_quality"
  },
  "interface": {
    "inputs": [
      {
        "name": "trial_data",
        "type": "list[dict]",
        "required": true,
        "description": "Trial history with design variables and objectives"
      },
      {
        "name": "min_r_squared",
        "type": "float",
        "required": false,
        "description": "Minimum acceptable R² threshold",
        "example": "0.9"
      }
    ],
    "outputs": [
      {
        "name": "r_squared",
        "type": "float",
        "description": "Coefficient of determination",
        "units": "none"
      },
      {
        "name": "cv_score",
        "type": "float",
        "description": "Cross-validation score",
        "units": "none"
      },
      {
        "name": "quality_verdict",
        "type": "str",
        "description": "EXCELLENT|GOOD|POOR based on metrics"
      }
    ]
  },
  "dependencies": {
    "features": ["optimization_runner"],
    "libraries": ["sklearn", "numpy"],
    "nx_version": null
  },
  "usage_examples": [
    {
      "description": "Check if surrogate is reliable for predictions",
      "code": "quality = check_surrogate_quality(trial_data)\nif quality['r_squared'] > 0.9:\n    print('Surrogate is reliable')",
      "natural_language": [
        "check surrogate quality",
        "is surrogate reliable",
        "can I trust the surrogate model"
      ]
    }
  ],
  "composition_hints": {
    "combines_with": ["sensitivity_analysis", "pareto_front_analyzer"],
    "typical_workflows": ["post_optimization_analysis", "decision_support"],
    "prerequisites": ["optimization_runner"]
  },
  "metadata": {
    "author": "Antoine Polvé",
    "created": "2025-01-16",
    "status": "experimental",
    "tested": false,
    "documentation_url": "docs/features/surrogate_quality_checker.md"
  }
}
```

---

## Implementation Plan

### Phase 2 Week 1: Foundation

#### Day 1-2: Create Initial Registry
- [ ] Create `optimization_engine/feature_registry.json`
- [ ] Document 15-20 existing features across all categories
- [ ] Add engineering features (stress_extractor, displacement_extractor)
- [ ] Add software features (hook_manager, optimization_runner, nx_solver)
- [ ] Add UI features (dashboard widgets)

#### Day 3-4: LLM Skill Setup
- [ ] Create `.claude/skills/atomizer.md`
- [ ] Define how LLM should read and use feature_registry.json
- [ ] Add feature discovery examples
- [ ] Add feature composition examples
- [ ] Test LLM's ability to navigate registry

#### Day 5: Documentation
- [ ] Create `docs/features/` directory
- [ ] Write feature guides for key features
- [ ] Link registry entries to documentation
- [ ] Update DEVELOPMENT.md with registry usage

### Phase 2 Week 2: LLM Integration

#### Natural Language Parser
- [ ] Intent classification using registry metadata
- [ ] Entity extraction for design variables, objectives
- [ ] Feature selection based on user request
- [ ] Workflow composition from features

### Future Phases: Feature Expansion

#### Phase 3: Code Generation
- [ ] Template features for common patterns
- [ ] Validation rules for generated code
- [ ] Auto-registration of new features

#### Phase 4-7: Continuous Evolution
- [ ] User-contributed features
- [ ] Pattern learning from usage
- [ ] Best practices extraction
- [ ] Self-documentation updates

---

## Benefits of This Architecture

### For Users
- **Natural language control**: "minimize stress" → LLM selects stress_extractor
- **Intelligent suggestions**: LLM proposes features based on context
- **No configuration files**: LLM generates config from conversation

### For Developers
- **Clear structure**: Features organized by domain, lifecycle, abstraction
- **Easy extension**: Add new features following templates
- **Self-documenting**: Registry serves as API documentation

### For LLM
- **Comprehensive context**: All capabilities in one place
- **Composition guidance**: Knows how features combine
- **Natural language mapping**: Understands user intent
- **Pattern recognition**: Can generate new features from templates

---

## Next Steps

1. **Create initial feature_registry.json** with 15-20 existing features
2. **Test LLM navigation** with Claude skill
3. **Validate registry structure** with real user requests
4. **Iterate on metadata** based on LLM's needs
5. **Build out documentation** in docs/features/

---

**Maintained by**: Antoine Polvé (antoine@atomaste.com)
**Repository**: [GitHub - Atomizer](https://github.com/yourusername/Atomizer)
113
docs/archive/historical/FIX_VALIDATOR_PRUNING.md
Normal file
@@ -0,0 +1,113 @@
# Validator Pruning Investigation - November 20, 2025

## DEPRECATED - This document is retained for historical reference only.

**Status**: Investigation completed. Aspect ratio validation approach was abandoned.

---

## Original Problem

The v2.1 and v2.2 tests showed an 18-20% pruning rate. Investigation revealed two separate issues:

### Issue 1: Validator Not Enforcing Rules (FIXED, then REMOVED)

The `_validate_circular_plate_aspect_ratio()` method initially returned only **warnings**, not **rejections**.

**Fix Applied**: Changed to return hard rejections for aspect ratio violations.

**Result**: All pruned trials in v2.2 still had VALID aspect ratios (5.0-50.0 range).

**Conclusion**: Aspect ratio violations were NOT the cause of pruning.

### Issue 2: pyNastran False Positives (ROOT CAUSE)

All pruned trials failed due to pyNastran's sensitivity to the FATAL flag:
- ✅ Nastran simulations succeeded (F06 files have no errors)
- ⚠️ FATAL flag in OP2 header (benign warning)
- ❌ pyNastran throws an exception when reading the OP2
- ❌ Valid trials incorrectly marked as failed

**Evidence**: All 9 pruned trials in v2.2 had:
- `is_pynastran_fatal_flag: true`
- `f06_has_fatal_errors: false`
- Valid aspect ratios within bounds

---

## Final Solution (Post-v2.3)

### Aspect Ratio Validation REMOVED

After deploying v2.3 with aspect ratio validation, user feedback revealed:

**User Requirement**: "I never asked for this check, where does that come from?"

**Issue**: Arbitrary aspect ratio limits (5.0-50.0) without:
- User approval
- Physical justification for circular plate modal analysis
- Visibility in optimization_config.json

**Fix Applied**:
- Removed ALL aspect ratio validation from the circular_plate model type
- Validator now returns empty rules `{}`
- Relies solely on Optuna parameter bounds (50-150mm diameter, 2-10mm thickness)

**User Requirements Established**:
1. **No arbitrary checks** - validation rules must be proposed, not automatic
2. **Configurable validation** - rules should be visible in optimization_config.json
3. **Parameter bounds suffice** - ranges already define feasibility
4. **Physical justification required** - any constraint needs clear reasoning

### Real Solution: Robust OP2 Extraction

**Module**: [optimization_engine/op2_extractor.py](../optimization_engine/op2_extractor.py)

Multi-strategy extraction that handles pyNastran issues:
1. Standard OP2 read
2. Lenient read (debug=False, skip benign flags)
3. F06 fallback parsing
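The fallback chain can be sketched generically: try each strategy in order and return the first one that succeeds. The strategy bodies below are stand-ins (the real ones wrap pyNastran reads and F06 parsing), so only the orchestration should be read as meaningful:

```python
from pathlib import Path
from typing import Callable, Optional

def extract_with_fallbacks(op2_file: Path,
                           strategies: list[Callable[[Path], dict]]) -> Optional[dict]:
    """Return the first strategy result that succeeds; None if all fail."""
    for strategy in strategies:
        try:
            return strategy(op2_file)
        except Exception:
            continue  # benign FATAL flags and parse errors fall through to the next strategy
    return None

# Stand-in strategies; the real versions would call pyNastran / parse the F06 text
def standard_read(path):
    raise RuntimeError("FATAL flag in OP2 header")  # mimics the pyNastran false positive

def lenient_read(path):
    raise RuntimeError("still unreadable")

def f06_fallback(path):
    return {"source": "f06", "max_von_mises": 180.0}

result = extract_with_fallbacks(Path("trial_007.op2"),
                                [standard_read, lenient_read, f06_fallback])
print(result)  # → {'source': 'f06', 'max_von_mises': 180.0}
```

Because each strategy is a plain callable, the false-positive-prone standard read stays first (fastest path) while the F06 parse only runs when needed.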

See [PRUNING_DIAGNOSTICS.md](PRUNING_DIAGNOSTICS.md) for details.

---

## Lessons Learned

1. **Validator is for simulation failures, not arbitrary physics assumptions**
   - Parameter bounds already define feasible ranges
   - Don't add validation rules without user approval

2. **18% pruning was pyNastran false positives, not validation issues**
   - All pruned trials had valid parameters
   - Robust extraction eliminates these false positives

3. **Transparency is critical**
   - Validation rules must be visible in optimization_config.json
   - Arbitrary constraints confuse users and reject valid designs

---

## Current State

**File**: [simulation_validator.py](../optimization_engine/simulation_validator.py:41-45)

```python
if model_type == 'circular_plate':
    # NOTE: Only use parameter bounds for validation
    # No arbitrary aspect ratio checks - let Optuna explore the full parameter space
    # Modal analysis is robust and doesn't need strict aspect ratio limits
    return {}
```

**Impact**: Clean separation of concerns
- **Parameter bounds** = Feasibility (user-defined ranges)
- **Validator** = Genuine simulation failures (e.g., mesh errors, solver crashes)

---

## References

- [SESSION_SUMMARY_NOV20.md](SESSION_SUMMARY_NOV20.md) - Complete session documentation
- [PRUNING_DIAGNOSTICS.md](PRUNING_DIAGNOSTICS.md) - Robust extraction solution
- [optimization_engine/simulation_validator.py](../optimization_engine/simulation_validator.py) - Current validator implementation
323
docs/archive/historical/GOOD_MORNING_NOV18.md
Normal file
@@ -0,0 +1,323 @@
# Good Morning! November 18, 2025

## What's Ready for You Today

Last night you requested documentation for Hybrid Mode and today's testing plan. Everything is ready!

---

## 📚 New Documentation Created

### 1. **Hybrid Mode Guide** - Your Production Mode
[docs/HYBRID_MODE_GUIDE.md](docs/HYBRID_MODE_GUIDE.md)

**What it covers**:
- ✅ Complete workflow: Natural language → Claude creates JSON → 90% automation
- ✅ Step-by-step walkthrough with real examples
- ✅ Beam optimization example (working code)
- ✅ Troubleshooting guide
- ✅ Tips for success

**Why this mode?**
- No API key required (use Claude Code/Desktop)
- 90% automation with 10% effort
- Full transparency - you see and approve the workflow JSON
- Production ready with centralized library system

### 2. **Today's Testing Plan**
[docs/TODAY_PLAN_NOV18.md](docs/TODAY_PLAN_NOV18.md)

**4 Tests Planned** (2-3 hours total):

**Test 1: Verify Beam Optimization** (30 min)
- Confirm parameter bounds fix (20-30mm, not 0.2-1.0mm)
- Verify clean study folders (no code pollution)
- Check core library system working

**Test 2: Create New Optimization** (1 hour)
- Use Claude to create workflow JSON from natural language
- Run cantilever plate optimization
- Verify library reuse (deduplication working)

**Test 3: Validate Deduplication** (15 min)
- Run same workflow twice
- Confirm extractors reused, not duplicated
- Verify core library size unchanged

**Test 4: Dashboard Visualization** (30 min - OPTIONAL)
- View results in web dashboard
- Check plots and trial history

---

## 🎯 Quick Start: Test 1

Ready to jump in? Here's Test 1:

```python
# Create: studies/simple_beam_optimization/test_today.py

from pathlib import Path
from optimization_engine.llm_optimization_runner import LLMOptimizationRunner

study_dir = Path("studies/simple_beam_optimization")
workflow_json = study_dir / "1_setup/workflow_config.json"
prt_file = study_dir / "1_setup/model/Beam.prt"
sim_file = study_dir / "1_setup/model/Beam_sim1.sim"
output_dir = study_dir / "2_substudies/test_nov18_verification"

print("=" * 80)
print("TEST 1: BEAM OPTIMIZATION VERIFICATION")
print("=" * 80)
print()
print("Running 5 trials to verify system...")
print()

runner = LLMOptimizationRunner(
    llm_workflow_file=workflow_json,
    prt_file=prt_file,
    sim_file=sim_file,
    output_dir=output_dir,
    n_trials=5  # Just 5 for verification
)

study = runner.run()

print()
print("=" * 80)
print("TEST 1 RESULTS")
print("=" * 80)
print()
print("Best design found:")
print(f"  beam_half_core_thickness: {study.best_params['beam_half_core_thickness']:.2f} mm")
print(f"  beam_face_thickness: {study.best_params['beam_face_thickness']:.2f} mm")
print(f"  holes_diameter: {study.best_params['holes_diameter']:.2f} mm")
print(f"  hole_count: {study.best_params['hole_count']}")
print()
print("[SUCCESS] Optimization completed!")
```

Then run:
```bash
python studies/simple_beam_optimization/test_today.py
```

**Expected**: Completes in ~15 minutes with realistic parameter values (20-30mm range).

---

## 📖 What Was Done Last Night

### Bugs Fixed
1. ✅ Parameter range bug (0.2-1.0mm → 20-30mm)
2. ✅ Workflow config auto-save for transparency
3. ✅ Study folder architecture cleaned up

### Architecture Refactor
- ✅ Centralized extractor library created
- ✅ Signature-based deduplication implemented
- ✅ Study folders now clean (only metadata, no code)
- ✅ Production-grade structure achieved
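Signature-based deduplication amounts to hashing the normalized source of a generated extractor and reusing the existing file on a hash hit. A sketch under assumed names (the real `extractor_library.py` may differ):

```python
import hashlib

def signature(source: str) -> str:
    """Content signature: hash of the source with trailing whitespace normalized."""
    normalized = "\n".join(line.rstrip() for line in source.strip().splitlines())
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

library: dict[str, str] = {}  # signature -> extractor file name

def add_extractor(name: str, source: str) -> str:
    """Register an extractor; return the existing file if the same source was seen before."""
    sig = signature(source)
    if sig not in library:
        library[sig] = f"{name}.py"
    return library[sig]

first = add_extractor("extract_mass", "def extract_mass(op2): ...")
second = add_extractor("extract_mass_v2", "def extract_mass(op2): ...")
print(first == second)  # → True: identical source deduplicated to one file
```

Hashing normalized source rather than the function name is what lets two studies that generate the same extractor share a single file in the core library.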

### Documentation
- ✅ [MORNING_SUMMARY_NOV17.md](MORNING_SUMMARY_NOV17.md) - Last night's work
- ✅ [docs/ARCHITECTURE_REFACTOR_NOV17.md](docs/ARCHITECTURE_REFACTOR_NOV17.md) - Technical details
- ✅ [docs/HYBRID_MODE_GUIDE.md](docs/HYBRID_MODE_GUIDE.md) - How to use Hybrid Mode
- ✅ [docs/TODAY_PLAN_NOV18.md](docs/TODAY_PLAN_NOV18.md) - Today's testing plan

### All Tests Passing
- ✅ E2E test: 18/18 checks
- ✅ Parameter ranges verified
- ✅ Clean study folders verified
- ✅ Core library working

---

## 🗺️ Current Status: Atomizer Project

**Overall Completion**: 85-90%

**Phase Status**:
- Phase 1 (Plugin System): 100% ✅
- Phases 2.5-3.1 (LLM Intelligence): 85% ✅
- Phase 3.2 Week 1 (Integration): 100% ✅
- Phase 3.2 Week 2 (Robustness): Starting today

**What Works**:
- ✅ Manual mode (JSON config) - 100% production ready
- ✅ Hybrid mode (Claude helps create JSON) - 90% ready, recommended
- ✅ Centralized library system - 100% working
- ✅ Auto-generation of extractors - 100% working
- ✅ Clean study folders - 100% working

---

## 🎯 Your Vision: "Insanely Good Engineering Software"

**Last night you said**:
> "My study folder is a mess, why? I want some order and real structure to develop an insanly good engineering software that evolve with time."

**Status**: ✅ ACHIEVED

**Before**:
```
studies/my_study/
├── generated_extractors/       ❌ Code pollution!
├── generated_hooks/            ❌ Code pollution!
├── llm_workflow_config.json
└── optimization_results.json
```

**Now**:
```
optimization_engine/extractors/  ✓ Core library
├── extract_displacement.py
├── extract_von_mises_stress.py
├── extract_mass.py
└── catalog.json                 ✓ Tracks all

studies/my_study/
├── extractors_manifest.json     ✓ Just references!
├── llm_workflow_config.json     ✓ Study config
├── optimization_results.json    ✓ Results only
└── optimization_history.json    ✓ History only
```

**Architecture Quality**:
- ✅ Production-grade structure
- ✅ Code reuse (library grows, studies stay clean)
- ✅ Deduplication (same extractor = single file)
- ✅ Evolves with time (library expands)
- ✅ Clean separation (studies = data, core = code)

---

## 📋 Recommended Path Today

### Option 1: Quick Verification (1 hour)
1. Run Test 1 (beam optimization - 30 min)
2. Review documentation (30 min)
3. Ready to use for real work

### Option 2: Complete Testing (3 hours)
1. Run all 4 tests from [TODAY_PLAN_NOV18.md](docs/TODAY_PLAN_NOV18.md)
2. Validate architecture thoroughly
3. Build confidence in system

### Option 3: Jump to Real Work (2 hours)
1. Describe your real optimization to me
2. I'll create workflow JSON
3. Run optimization with Hybrid Mode
4. Get real results today!

---

## 🚀 Getting Started

### Step 1: Review Documentation
```bash
# Open these files in VSCode
code docs/HYBRID_MODE_GUIDE.md    # How Hybrid Mode works
code docs/TODAY_PLAN_NOV18.md     # Today's testing plan
code MORNING_SUMMARY_NOV17.md     # Last night's work
```

### Step 2: Run Test 1
```bash
# Create and run verification test
code studies/simple_beam_optimization/test_today.py
python studies/simple_beam_optimization/test_today.py
```

### Step 3: Choose Your Path
Tell me what you want to do:
- **"Let's run all the tests"** → I'll guide you through all 4 tests
- **"I want to optimize [describe]"** → I'll create workflow JSON for you
- **"Show me the architecture"** → I'll explain the new library system
- **"I have questions about [topic]"** → I'll answer

---

## 📁 Files to Review

**Key Documentation**:
- [docs/HYBRID_MODE_GUIDE.md](docs/HYBRID_MODE_GUIDE.md) - Complete guide
- [docs/TODAY_PLAN_NOV18.md](docs/TODAY_PLAN_NOV18.md) - Testing plan
- [docs/ARCHITECTURE_REFACTOR_NOV17.md](docs/ARCHITECTURE_REFACTOR_NOV17.md) - Technical details

**Key Code**:
- [optimization_engine/llm_optimization_runner.py](optimization_engine/llm_optimization_runner.py) - Hybrid Mode orchestrator
- [optimization_engine/extractor_library.py](optimization_engine/extractor_library.py) - Core library system
- [optimization_engine/extractor_orchestrator.py](optimization_engine/extractor_orchestrator.py) - Auto-generation

**Example Workflow**:
- [studies/simple_beam_optimization/1_setup/workflow_config.json](studies/simple_beam_optimization/1_setup/workflow_config.json) - Working example

---

## 💡 Quick Tips

### Using Hybrid Mode
1. Describe optimization in natural language (to me, Claude Code)
2. I create workflow JSON for you
3. Run LLMOptimizationRunner with JSON
4. System auto-generates extractors and runs optimization
5. Results saved with full audit trail

### Benefits
- ✅ No API key needed (use me via Claude Desktop)
- ✅ 90% automation (only JSON creation is manual)
- ✅ Full transparency (you review JSON before running)
- ✅ Production ready (clean architecture)
- ✅ Code reuse (library system)

### Success Criteria
After testing, you should see:
- Parameter values in correct range (20-30mm, not 0.2-1.0mm)
- Study folders clean (only 5 files)
- Core library contains extractors
- Optimization completes successfully
- Results make engineering sense

---

## 🎊 What's Different Now

**Before (Nov 16)**:
- Study folders polluted with code
- No deduplication
- Parameter range bug (0.2-1.0mm)
- No workflow documentation

**Now (Nov 18)**:
- ✅ Clean study folders (only metadata)
- ✅ Centralized library with deduplication
- ✅ Parameter ranges fixed (20-30mm)
- ✅ Workflow config auto-saved
- ✅ Production-grade architecture
- ✅ Complete documentation
- ✅ Testing plan ready

---

## Ready to Start?
|
||||
|
||||
Tell me:
|
||||
1. **"Let's test!"** - I'll guide you through Test 1
|
||||
2. **"I want to optimize [your problem]"** - I'll create workflow JSON
|
||||
3. **"Explain [topic]"** - I'll clarify any aspect
|
||||
4. **"Let's look at [file]"** - I'll review code with you
|
||||
|
||||
**Your quote from last night**:
|
||||
> "I like it! please document this (hybrid) and the plan for today. Lets kick start this"
|
||||
|
||||
Everything is documented and ready. Let's kick start this! 🚀
|
||||
|
||||
---
|
||||
|
||||
**Status**: All systems ready ✅
|
||||
**Tests**: Passing ✅
|
||||
**Documentation**: Complete ✅
|
||||
**Architecture**: Production-grade ✅
|
||||
|
||||
**Have a great Monday morning!** ☕
|
||||
277 docs/archive/historical/INDEX_OLD.md Normal file
@@ -0,0 +1,277 @@
# Atomizer Documentation Index

**Last Updated**: November 21, 2025

Quick navigation to all Atomizer documentation.

---

## 🚀 Getting Started

### New Users

1. **[GOOD_MORNING_NOV18.md](../GOOD_MORNING_NOV18.md)** - Start here! Morning summary and quick start
2. **[HYBRID_MODE_GUIDE.md](HYBRID_MODE_GUIDE.md)** - Complete guide to 90% automation without an API key
3. **[TODAY_PLAN_NOV18.md](TODAY_PLAN_NOV18.md)** - Testing plan with step-by-step instructions

### For Developers

1. **[DEVELOPMENT_GUIDANCE.md](../DEVELOPMENT_GUIDANCE.md)** - Comprehensive status report and strategic direction
2. **[DEVELOPMENT.md](../DEVELOPMENT.md)** - Detailed task tracking and completed work
3. **[DEVELOPMENT_ROADMAP.md](../DEVELOPMENT_ROADMAP.md)** - Long-term vision and phase-by-phase plan

---

## 📚 Documentation by Topic

### Architecture & Design

**[ARCHITECTURE_REFACTOR_NOV17.md](ARCHITECTURE_REFACTOR_NOV17.md)**
- Centralized library system explained
- Before/after architecture comparison
- Migration guide
- Implementation details
- 400+ lines of comprehensive technical documentation

**[PROTOCOL_10_IMSO.md](PROTOCOL_10_IMSO.md)** ⭐ **Advanced**
- Intelligent Multi-Strategy Optimization
- Adaptive characterization phase
- Automatic algorithm selection (GP-BO, CMA-ES, TPE)
- Two-study architecture explained
- 41% reduction in trials vs TPE alone

### Operation Modes

**[HYBRID_MODE_GUIDE.md](HYBRID_MODE_GUIDE.md)** ⭐ **Recommended**
- What is Hybrid Mode (90% automation)
- Step-by-step workflow
- Real examples with code
- Troubleshooting guide
- Tips for success
- No API key required!

**Full LLM Mode** (documented in [DEVELOPMENT_GUIDANCE.md](../DEVELOPMENT_GUIDANCE.md))
- 100% natural language interaction
- Requires Claude API key
- Currently 85% complete
- Future upgrade path from Hybrid Mode

**Manual Mode** (documented in [../README.md](../README.md))
- Traditional JSON configuration
- 100% production ready
- Full control over every parameter

### Testing & Validation

**[TODAY_PLAN_NOV18.md](TODAY_PLAN_NOV18.md)**
- 4 comprehensive tests (2-3 hours)
- Test 1: Verify beam optimization (30 min)
- Test 2: Create new optimization (1 hour)
- Test 3: Validate deduplication (15 min)
- Test 4: Dashboard visualization (30 min - optional)

### Dashboard & Monitoring

**[DASHBOARD_MASTER_PLAN.md](DASHBOARD_MASTER_PLAN.md)** ⭐ **New**
- Complete dashboard architecture
- 3-page dashboard system (Configurator, Live Dashboard, Results Viewer)
- Tech stack recommendations (FastAPI + React + WebSocket)
- Implementation phases
- WebSocket protocol specification

**[DASHBOARD_IMPLEMENTATION_STATUS.md](DASHBOARD_IMPLEMENTATION_STATUS.md)**
- Current implementation status
- Completed features (backend + live dashboard)
- Testing instructions
- Next steps (React frontend)

**[DASHBOARD_SESSION_SUMMARY.md](DASHBOARD_SESSION_SUMMARY.md)**
- Implementation session summary
- Features demonstrated
- How to use the dashboard
- Troubleshooting guide

**[../atomizer-dashboard/README.md](../atomizer-dashboard/README.md)**
- Quick start guide
- API documentation
- Dashboard features overview

### Recent Updates

**[MORNING_SUMMARY_NOV17.md](../MORNING_SUMMARY_NOV17.md)**
- Critical bugs fixed (parameter ranges)
- Major architecture refactor
- New components created
- Test results (18/18 checks passing)

**[GOOD_MORNING_NOV18.md](../GOOD_MORNING_NOV18.md)**
- Ready-to-start summary
- Quick start instructions
- File review checklist
- Current status overview

---

## 🗂️ By User Role

### I'm an Engineer (Want to Use Atomizer)

**Start Here**:
1. [GOOD_MORNING_NOV18.md](../GOOD_MORNING_NOV18.md) - Overview and quick start
2. [HYBRID_MODE_GUIDE.md](HYBRID_MODE_GUIDE.md) - How to use Hybrid Mode
3. [TODAY_PLAN_NOV18.md](TODAY_PLAN_NOV18.md) - Try Test 1 to verify the system

**Then**:
- Run your first optimization with Hybrid Mode
- Review the beam optimization example
- Ask Claude to create a workflow JSON for your problem
- Monitor live with the dashboard ([../atomizer-dashboard/README.md](../atomizer-dashboard/README.md))

### I'm a Developer (Want to Extend Atomizer)

**Start Here**:
1. [DEVELOPMENT_GUIDANCE.md](../DEVELOPMENT_GUIDANCE.md) - Full status and priorities
2. [ARCHITECTURE_REFACTOR_NOV17.md](ARCHITECTURE_REFACTOR_NOV17.md) - New architecture
3. [DEVELOPMENT.md](../DEVELOPMENT.md) - Task tracking

**Then**:
- Review the core library system code
- Check the extractor_library.py implementation
- Read the migration guide for adding new extractors

### I'm Managing the Project (Want the Big Picture)

**Start Here**:
1. [DEVELOPMENT_GUIDANCE.md](../DEVELOPMENT_GUIDANCE.md) - Comprehensive status report
2. [DEVELOPMENT_ROADMAP.md](../DEVELOPMENT_ROADMAP.md) - Long-term vision
3. [MORNING_SUMMARY_NOV17.md](../MORNING_SUMMARY_NOV17.md) - Recent progress

**Key Metrics**:
- Overall completion: 85-90%
- Phase 3.2 Week 1: 100% complete
- All tests passing (18/18)
- Production-grade architecture achieved

---

## 📖 Documentation by Phase

### Phase 1: Plugin System ✅ 100% Complete
- Documented in [DEVELOPMENT.md](../DEVELOPMENT.md)
- Architecture in [../README.md](../README.md)

### Phase 2.5-3.1: LLM Intelligence ✅ 85% Complete
- Status: [DEVELOPMENT_GUIDANCE.md](../DEVELOPMENT_GUIDANCE.md)
- Details: [DEVELOPMENT.md](../DEVELOPMENT.md)

### Phase 3.2: Integration ⏳ Week 1 Complete
- Week 1 summary: [MORNING_SUMMARY_NOV17.md](../MORNING_SUMMARY_NOV17.md)
- Architecture: [ARCHITECTURE_REFACTOR_NOV17.md](ARCHITECTURE_REFACTOR_NOV17.md)
- User guide: [HYBRID_MODE_GUIDE.md](HYBRID_MODE_GUIDE.md)
- Testing: [TODAY_PLAN_NOV18.md](TODAY_PLAN_NOV18.md)

---

## 🔍 Quick Reference

### Key Files

| File | Purpose | Audience |
|------|---------|----------|
| [GOOD_MORNING_NOV18.md](../GOOD_MORNING_NOV18.md) | Quick start summary | Everyone |
| [HYBRID_MODE_GUIDE.md](HYBRID_MODE_GUIDE.md) | Complete Hybrid Mode guide | Engineers |
| [TODAY_PLAN_NOV18.md](TODAY_PLAN_NOV18.md) | Testing plan | Engineers, QA |
| [PROTOCOL_10_IMSO.md](PROTOCOL_10_IMSO.md) | Intelligent optimization guide | Advanced engineers |
| [ARCHITECTURE_REFACTOR_NOV17.md](ARCHITECTURE_REFACTOR_NOV17.md) | Technical architecture | Developers |
| [DEVELOPMENT_GUIDANCE.md](../DEVELOPMENT_GUIDANCE.md) | Status & priorities | Managers, developers |
| [DEVELOPMENT.md](../DEVELOPMENT.md) | Task tracking | Developers |
| [DEVELOPMENT_ROADMAP.md](../DEVELOPMENT_ROADMAP.md) | Long-term vision | Managers |

### Key Concepts

**Hybrid Mode** (90% automation)
- You describe the optimization to Claude
- Claude creates the workflow JSON
- LLMOptimizationRunner does the rest
- No API key required
- Production ready

**Centralized Library**
- Core extractors in `optimization_engine/extractors/`
- Study folders only contain references
- Signature-based deduplication
- Code reuse across all studies
- Clean, professional structure
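The signature-based deduplication above can be sketched in a few lines: hash a normalized copy of each extractor's source, so trivially reformatted duplicates collapse to one library entry. This is an illustrative sketch only; the function and field names are ours, not the actual `extractor_library.py` API.

```python
import hashlib

def signature(source: str) -> str:
    # Normalize whitespace so trivially reformatted copies dedupe to one entry.
    normalized = "\n".join(line.strip() for line in source.strip().splitlines())
    return hashlib.sha256(normalized.encode()).hexdigest()[:12]

library = {}

def register(name: str, source: str) -> str:
    sig = signature(source)
    library.setdefault(sig, (name, source))  # keep the first copy, reuse thereafter
    return sig

# Two studies submit the same extractor with different indentation:
a = register("max_stress_v1", "def extract(op2):\n    return op2.max()")
b = register("max_stress_v2", "def extract(op2):\n        return op2.max()")
print(a == b, len(library))  # True 1
```

Both submissions resolve to the same signature, so the library stores a single entry and the study folder only needs a reference to it.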
**Study Folder Structure**

```
studies/my_optimization/
├── extractors_manifest.json     # References to core library
├── llm_workflow_config.json     # What the LLM understood
├── optimization_results.json    # Best design found
└── optimization_history.json    # All trials
```

---

## 📝 Recent Changes

### November 21, 2025
- Created [DASHBOARD_MASTER_PLAN.md](DASHBOARD_MASTER_PLAN.md) - Complete dashboard architecture
- Created [DASHBOARD_IMPLEMENTATION_STATUS.md](DASHBOARD_IMPLEMENTATION_STATUS.md) - Implementation tracking
- Created [DASHBOARD_SESSION_SUMMARY.md](DASHBOARD_SESSION_SUMMARY.md) - Session summary
- Implemented FastAPI backend with WebSocket streaming
- Built live dashboard with Chart.js (convergence + parameter space plots)
- Added pruning alerts and data export (JSON/CSV)
- Created [../atomizer-dashboard/README.md](../atomizer-dashboard/README.md) - Quick start guide

### November 18, 2025
- Created [GOOD_MORNING_NOV18.md](../GOOD_MORNING_NOV18.md)
- Created [HYBRID_MODE_GUIDE.md](HYBRID_MODE_GUIDE.md)
- Created [TODAY_PLAN_NOV18.md](TODAY_PLAN_NOV18.md)
- Updated [../README.md](../README.md) with new doc links

### November 17, 2025
- Created [MORNING_SUMMARY_NOV17.md](../MORNING_SUMMARY_NOV17.md)
- Created [ARCHITECTURE_REFACTOR_NOV17.md](ARCHITECTURE_REFACTOR_NOV17.md)
- Fixed parameter range bug
- Implemented centralized library system
- All tests passing (18/18)

---

## 🆘 Need Help?

### Common Questions

**Q: How do I start using Atomizer?**
A: Read [GOOD_MORNING_NOV18.md](../GOOD_MORNING_NOV18.md), then follow Test 1 in [TODAY_PLAN_NOV18.md](TODAY_PLAN_NOV18.md)

**Q: What's the difference between modes?**
A: See the comparison table in [HYBRID_MODE_GUIDE.md](HYBRID_MODE_GUIDE.md#comparison-three-modes)

**Q: Where is the technical architecture explained?**
A: [ARCHITECTURE_REFACTOR_NOV17.md](ARCHITECTURE_REFACTOR_NOV17.md)

**Q: What's the current development status?**
A: [DEVELOPMENT_GUIDANCE.md](../DEVELOPMENT_GUIDANCE.md)

**Q: How do I contribute?**
A: Read [DEVELOPMENT.md](../DEVELOPMENT.md) for task tracking and priorities

### Troubleshooting

See the troubleshooting sections in:
- [HYBRID_MODE_GUIDE.md](HYBRID_MODE_GUIDE.md#troubleshooting)
- [TODAY_PLAN_NOV18.md](TODAY_PLAN_NOV18.md#if-something-fails)

---

## 📬 Contact

- **Email**: antoine@atomaste.com
- **GitHub**: [Report Issues](https://github.com/yourusername/Atomizer/issues)

---

**Last Updated**: November 21, 2025
**Atomizer Version**: Phase 3.2 Week 1 Complete + Live Dashboard ✅ (85-90% overall)
**Documentation Status**: Comprehensive and up-to-date ✅
175 docs/archive/historical/LESSONS_LEARNED.md Normal file
@@ -0,0 +1,175 @@
# Lessons Learned - Atomizer Optimization System

This document captures lessons learned from optimization studies to continuously improve the system.

## Date: 2025-11-19 - Circular Plate Frequency Tuning Study

### What Worked Well

1. **Hybrid Study Creator** - Successfully auto-generated a complete optimization workflow
   - Automatically detected design variables from NX expressions
   - Correctly matched objectives to available simulation results
   - Generated working extractor code for eigenvalue extraction
   - Created comprehensive configuration reports

2. **Modal Analysis Support** - The system now handles eigenvalue extraction properly
   - Fixed nx_solver.py to select the correct solution-specific OP2 files
   - Solution name parameter properly passed through the solve pipeline
   - Eigenvalue extractor successfully reads LAMA tables from OP2

3. **Incremental History Tracking** - Added real-time progress monitoring
   - JSON file updated after each trial
   - Enables live monitoring of optimization progress
   - Provides a backup if the optimization is interrupted

### Critical Bugs Fixed

1. **nx_solver OP2 File Selection Bug**
   - **Problem**: nx_solver was hardcoded to return `-solution_1.op2` files
   - **Root Cause**: Missing solution_name parameter support in run_simulation()
   - **Solution**: Added a solution_name parameter that dynamically constructs the correct OP2 filename
   - **Location**: [nx_solver.py:181-197](../optimization_engine/nx_solver.py#L181-L197)
   - **Impact**: HIGH - Blocked all modal analysis optimizations

2. **Missing Incremental History Tracking**
   - **Problem**: Generated runners only saved to the Optuna database, with no live JSON file
   - **Root Cause**: The hybrid_study_creator template didn't include history tracking
   - **Solution**: Added history initialization and per-trial saving to the template
   - **Location**: [hybrid_study_creator.py:388-436](../optimization_engine/hybrid_study_creator.py#L388-L436)
   - **Impact**: MEDIUM - User experience issue, no technical blocker

3. **No Automatic Report Generation**
   - **Problem**: The user had to manually request reports after optimization
   - **Root Cause**: The system wasn't proactive about generating human-readable output
   - **Solution**: Created generate_report.py and integrated it into the hybrid runner template
   - **Location**: [generate_report.py](../optimization_engine/generate_report.py)
   - **Impact**: MEDIUM - User experience issue

### System Improvements Made

1. **Created Automatic Report Generator**
   - Location: `optimization_engine/generate_report.py`
   - Generates comprehensive human-readable reports
   - Includes statistics, top trials, success assessment
   - Automatically called at the end of optimization

2. **Updated Hybrid Study Creator**
   - Now generates runners with automatic report generation
   - Includes incremental history tracking by default
   - Better documentation in generated code

3. **Created Lessons Learned Documentation**
   - This file! Tracks improvements over time
   - Should be updated after each study

### Proactive Behaviors to Add

1. **Automatic report generation** - DONE ✓
   - The system automatically generates reports after optimization completes
   - No need for the user to request this

2. **Progress summaries during long runs**
   - Could periodically print best-so-far results
   - Show estimated time remaining
   - Alert if the optimization appears stuck

3. **Automatic visualization**
   - Generate plots of design space exploration
   - Show convergence curves
   - Visualize parameter sensitivities

4. **Study validation before running**
   - Check that design variable bounds make physical sense
   - Verify the baseline simulation runs successfully
   - Estimate total runtime from per-trial time

### Technical Learnings

1. **NX Nastran OP2 File Naming**
   - When solving a specific solution via journal mode: `<base>-<solution_name_lowercase>.op2`
   - When solving all solutions: files are named `-solution_1`, `-solution_2`, etc.
   - Solution names must be converted to lowercase with spaces replaced by underscores
   - Example: "Solution_Normal_Modes" → "solution_normal_modes"

2. **pyNastran Eigenvalue Access**
   - Eigenvalues are stored in the `model.eigenvalues` dict (keyed by subcase)
   - Each subcase has a `RealEigenvalues` object
   - Access via `eigenvalues_obj.eigenvalues` (not `.eigrs` or `.data`)
   - Convert eigenvalues to frequencies: `f = sqrt(eigenvalue) / (2*pi)`

3. **Optuna Study Continuation**
   - Using `load_if_exists=True` allows resuming interrupted studies
   - Trial numbers continue from previous runs
   - History tracking needs to handle this gracefully
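Two of the learnings above are easy to get subtly wrong, so here is a runnable sketch of both: the OP2 filename normalization described in item 1, and the eigenvalue-to-frequency conversion from item 2. The helper names are ours, not the project's.

```python
import math

def op2_filename(base: str, solution_name: str) -> str:
    # Journal-mode solves write <base>-<solution_name_lowercase>.op2,
    # with spaces replaced by underscores.
    return f"{base}-{solution_name.lower().replace(' ', '_')}.op2"

def eigenvalue_to_hz(eigenvalue: float) -> float:
    # Real eigenvalues from the LAMA table are omega^2: f = sqrt(lambda) / (2*pi)
    return math.sqrt(eigenvalue) / (2.0 * math.pi)

name = op2_filename("plate", "Solution_Normal_Modes")
freq = eigenvalue_to_hz(39478.418)  # lambda for a mode near 31.6 Hz
print(name)            # plate-solution_normal_modes.op2
print(round(freq, 2))  # 31.62
```

Forgetting either convention produces a "missing OP2 file" error or frequencies off by a factor of 2π, both of which showed up during the plate study.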
### Future Improvements Needed

1. **Better Objective Function Formulation**
   - Current: Minimize absolute error from target
   - Issue: Doesn't penalize being above vs. below target differently
   - Suggestion: Add constraint handling for hard requirements

2. **Smarter Initial Sampling**
   - Current: Pure random sampling
   - Suggestion: Use Latin hypercube or Sobol sequences for better coverage

3. **Adaptive Trial Allocation**
   - Current: Fixed number of trials
   - Suggestion: Stop automatically when tolerance is met
   - Or: Increase trials if not converging

4. **Multi-Objective Support**
   - Current: Single objective only
   - Many real problems have multiple competing objectives
   - Need Pareto frontier visualization

5. **Sensitivity Analysis**
   - Automatically identify which design variables matter most
   - Could reduce dimensionality for faster optimization
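The adaptive-allocation idea in item 3 (stop as soon as the tolerance is met rather than always spending the full budget) can be sketched without any optimizer library. `run_adaptive`, the toy objective, and the sampler below are all illustrative, not project code.

```python
def run_adaptive(objective, sampler, tolerance, max_trials):
    """Run trials until the objective meets the tolerance or the budget runs out."""
    best, used = float("inf"), 0
    for i in range(max_trials):
        used += 1
        best = min(best, objective(sampler(i)))
        if best <= tolerance:      # converged early: stop spending trials
            break
    return best, used

# Toy problem: hit a target value of 7 within 0.5, budget of 50 trials.
best, used = run_adaptive(lambda x: abs(x - 7.0), lambda i: float(i),
                          tolerance=0.5, max_trials=50)
print(best, used)  # 0.0 8
```

On this toy problem the loop stops after 8 trials instead of 50; with a real FEA objective each saved trial is a saved solver run.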
### Template for Future Entries

```markdown
## Date: YYYY-MM-DD - Study Name

### What Worked Well
- ...

### Critical Bugs Fixed
1. **Bug Title**
   - **Problem**:
   - **Root Cause**:
   - **Solution**:
   - **Location**:
   - **Impact**:

### System Improvements Made
- ...

### Proactive Behaviors to Add
- ...

### Technical Learnings
- ...

### Future Improvements Needed
- ...
```

## Continuous Improvement Process

1. **After Each Study**:
   - Review what went wrong
   - Document bugs and fixes
   - Identify missing proactive behaviors
   - Update this document

2. **Monthly Review**:
   - Look for patterns in issues
   - Prioritize improvements
   - Update system architecture if needed

3. **Version Tracking**:
   - Tag major improvements with version numbers
   - Keep changelog synchronized
   - Document breaking changes
@@ -0,0 +1,431 @@
# NXOpen Documentation Integration Strategy

## Overview

This document outlines the strategy for integrating NXOpen Python documentation into Atomizer's AI-powered code generation system.

**Target Documentation**: https://docs.sw.siemens.com/en-US/doc/209349590/PL20190529153447339.nxopen_python_ref

**Goal**: Enable Atomizer to automatically research NXOpen APIs and generate correct code without manual documentation lookup.

## Current State (Phase 2.7 Complete)

✅ **Intelligent Workflow Analysis**: LLM detects engineering features needing research
✅ **Capability Matching**: System knows what's already implemented
✅ **Gap Identification**: Identifies missing FEA/CAE operations

❌ **Auto-Research**: No automated documentation lookup
❌ **Code Generation**: Manual implementation still required

## Documentation Access Challenges

### Challenge 1: Authentication Required
- Siemens documentation requires login
- Not accessible via direct WebFetch
- Cannot be scraped programmatically

### Challenge 2: Dynamic Content
- Documentation is JavaScript-rendered
- Not available as static HTML
- Requires browser automation or API access

## Integration Strategies

### Strategy 1: MCP Server (RECOMMENDED) 🚀

**Concept**: Build a Model Context Protocol (MCP) server for NXOpen documentation

**How it Works**:
```
Atomizer (Phase 2.5-2.7)
    ↓
Detects: "Need to modify PCOMP ply thickness"
    ↓
MCP Server Query: "How to modify PCOMP in NXOpen?"
    ↓
MCP Server → Local Documentation Cache or Live Lookup
    ↓
Returns: Code examples + API reference
    ↓
Phase 2.8-2.9: Auto-generate code
```

**Implementation**:
1. **Local Documentation Cache**
   - Download key NXOpen docs pages locally (one-time setup)
   - Store as markdown/JSON in `knowledge_base/nxopen/`
   - Index by module/class/method

2. **MCP Server**
   - Runs locally on `localhost:3000`
   - Provides a search/query API
   - Returns relevant code snippets + documentation

3. **Integration with Atomizer**
   - `research_agent.py` calls the MCP server
   - Gets documentation for missing capabilities
   - Generates code based on examples

**Advantages**:
- ✅ No API consumption costs (runs locally)
- ✅ Fast lookups (local cache)
- ✅ Works offline after initial setup
- ✅ Can be extended to pyNastran docs later

**Disadvantages**:
- Requires a one-time manual documentation download
- Needs periodic updates for new NX versions

### Strategy 2: NX Journal Recording (USER-DRIVEN LEARNING) 🎯 **RECOMMENDED!**

**Concept**: The user records NX journals while performing operations; the system learns from the recorded Python code

**How it Works**:
1. User needs to learn how to "merge FEM nodes"
2. User starts journal recording in NX (Tools → Journal → Record)
3. User performs the operation manually in the NX GUI
4. NX automatically generates a Python journal showing the exact API calls
5. User shares the journal file with Atomizer
6. Atomizer extracts the pattern and stores it in the knowledge base

**Example Workflow**:
```
User Action: Merge duplicate FEM nodes in NX
    ↓
NX Records: journal_merge_nodes.py
    ↓
Contains: session.FemPart().MergeNodes(tolerance=0.001, ...)
    ↓
Atomizer learns: "To merge nodes, use FemPart().MergeNodes()"
    ↓
Pattern saved to: knowledge_base/nxopen_patterns/fem/merge_nodes.md
    ↓
Future requests: Auto-generate code using this pattern!
```

**Real Recorded Journal Example**:
```python
# User records: "Renumber elements starting from 1000"
import NXOpen

def main():
    session = NXOpen.Session.GetSession()
    fem_part = session.Parts.Work.BasePart.FemPart

    # NX generates this automatically!
    fem_part.RenumberElements(
        startingNumber=1000,
        increment=1,
        applyToAll=True
    )
```

**Advantages**:
- ✅ **User-driven**: Learn exactly what you need, when you need it
- ✅ **Accurate**: Code comes directly from NX (can't be wrong!)
- ✅ **Comprehensive**: Captures the full API signature and parameters
- ✅ **No documentation hunting**: NX generates the code for you
- ✅ **Builds the knowledge base organically**: Grows with actual usage
- ✅ **Handles edge cases**: Records exactly how you solved the problem

**Use Cases Perfect for Journal Recording**:
- Merging FEM nodes
- Node/element renumbering
- Mesh quality checks
- Geometry modifications
- Property assignments
- Solver setup configurations
- Any complex operation hard to find in docs

**Integration with Atomizer**:
```python
# User provides a recorded journal
atomizer.learn_from_journal("journal_merge_nodes.py")

# The system analyzes:
# - Identifies API calls (FemPart().MergeNodes)
# - Extracts parameters (tolerance, node_ids, etc.)
# - Creates a reusable pattern
# - Stores it in the knowledge base with a description

# Future requests automatically use this pattern!
```

### Strategy 3: Python Introspection

**Concept**: Use Python's introspection to explore NXOpen modules at runtime

**How it Works**:
```python
import NXOpen

# Discover all classes
for name in dir(NXOpen):
    cls = getattr(NXOpen, name)
    print(f"{name}: {cls.__doc__}")

# Discover methods
for method in dir(NXOpen.Part):
    print(f"{method}: {getattr(NXOpen.Part, method).__doc__}")
```

**Advantages**:
- ✅ No external dependencies
- ✅ Always up-to-date with the installed NX version
- ✅ Includes method signatures automatically

**Disadvantages**:
- ❌ Limited documentation (docstrings are often minimal)
- ❌ No usage examples
- ❌ Requires NX to be running
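Introspection can also validate generated calls before they ever touch NX: bind the generated keyword arguments against the target's signature without executing it. The sketch below is ours; `renumber_elements` is a stand-in mimicking the journal example's signature, not a real NXOpen function.

```python
import inspect

def validate_call(func, kwargs: dict) -> bool:
    # Bind generated kwargs against the target signature without calling it.
    try:
        inspect.signature(func).bind(**kwargs)
        return True
    except TypeError:
        return False

def renumber_elements(startingNumber, increment=1, applyToAll=True):
    """Stand-in with the same signature shape as the recorded journal call."""

print(validate_call(renumber_elements, {"startingNumber": 1000}))  # True
print(validate_call(renumber_elements, {"start": 1000}))           # False
```

A check like this catches misspelled or missing parameters in generated code at validation time, instead of failing mid-solve inside NX.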
### Strategy 4: Hybrid Approach (BEST COMBINATION) 🏆

**Combine all strategies for maximum effectiveness**:

**Phase 1 (Immediate)**: Journal Recording + pyNastran
1. **For NXOpen**:
   - User records journals for needed operations
   - Atomizer learns from the recorded code
   - Builds the knowledge base organically

2. **For Result Extraction**:
   - Use pyNastran docs (publicly accessible!)
   - WebFetch documentation as needed
   - Auto-generate OP2 extraction code

**Phase 2 (Short Term)**: Pattern Library + Introspection
1. **Knowledge Base Growth**:
   - Store learned patterns from journals
   - Categorize by domain (FEM, geometry, properties, etc.)
   - Add examples and parameter descriptions

2. **Python Introspection**:
   - Supplement journal learning with introspection
   - Discover available methods automatically
   - Validate generated code against signatures

**Phase 3 (Future)**: MCP Server + Full Automation
1. **MCP Integration**:
   - Build an MCP server for documentation lookup
   - Index the knowledge base for fast retrieval
   - Integrate with NXOpen TSE resources

2. **Full Automation**:
   - Auto-generate code for any request
   - Self-learn from successful executions
   - Continuous improvement through usage

**This is the winning strategy!**

## Recommended Immediate Implementation

### Step 1: Python Introspection Module

Create `optimization_engine/nxopen_introspector.py`:
```python
from typing import Any, Dict, List

class NXOpenIntrospector:
    def get_module_docs(self, module_path: str) -> Dict[str, Any]:
        """Get all classes/methods from an NXOpen module"""

    def find_methods_for_task(self, task_description: str) -> List[str]:
        """Use the LLM to match a task to NXOpen methods"""

    def generate_code_skeleton(self, method_name: str) -> str:
        """Generate a code template from a method signature"""
```

### Step 2: Knowledge Base Structure

```
knowledge_base/
├── nxopen_patterns/
│   ├── geometry/
│   │   ├── create_part.md
│   │   ├── modify_expression.md
│   │   └── update_parameter.md
│   ├── fea_properties/
│   │   ├── modify_pcomp.md
│   │   ├── modify_cbar.md
│   │   └── modify_cbush.md
│   ├── materials/
│   │   └── create_material.md
│   └── simulation/
│       ├── run_solve.md
│       └── check_solution.md
└── pynastran_patterns/
    ├── op2_extraction/
    │   ├── stress_extraction.md
    │   ├── displacement_extraction.md
    │   └── element_forces.md
    └── bdf_modification/
        └── property_updates.md
```

### Step 3: Integration with Research Agent

Update `research_agent.py`:
```python
def research_engineering_feature(self, feature_name: str, domain: str):
    # 1. Check the knowledge base first
    kb_result = self.search_knowledge_base(feature_name)

    # 2. If not found, use introspection to find candidate methods
    if not kb_result:
        methods = self.introspector.find_methods_for_task(feature_name)

        # 3. Generate a code skeleton from the best candidate
        code = self.introspector.generate_code_skeleton(methods[0])

        # 4. Use the LLM to complete the implementation
        full_implementation = self.llm_generate_implementation(code, feature_name)

        # 5. Save to the knowledge base for future use
        self.save_to_knowledge_base(feature_name, full_implementation)
```

## Implementation Phases

### Phase 2.8: Inline Code Generator (CURRENT PRIORITY)
**Timeline**: Next 1-2 sessions
**Scope**: Auto-generate simple math operations

**What to Build**:
- `optimization_engine/inline_code_generator.py`
- Takes inline_calculations from the Phase 2.7 LLM output
- Generates Python code directly
- No documentation needed (it's just math!)

**Example**:
```python
Input: {
    "action": "normalize_stress",
    "params": {"input": "max_stress", "divisor": 200.0}
}

Output:
norm_stress = max_stress / 200.0
```
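As a sketch of what Phase 2.8 amounts to, a generator for the example above is just string templating over the action's parameters. The action name and schema come from the example; the function itself is ours, not the planned `inline_code_generator.py` API.

```python
def generate_inline(op: dict) -> str:
    # Turn one inline_calculations entry into a line of Python.
    if op["action"] == "normalize_stress":
        p = op["params"]
        return f"norm_stress = {p['input']} / {p['divisor']}"
    raise ValueError(f"unsupported action: {op['action']}")

line = generate_inline({"action": "normalize_stress",
                        "params": {"input": "max_stress", "divisor": 200.0}})
print(line)  # norm_stress = max_stress / 200.0
```

The real generator would dispatch on a table of supported actions rather than a single `if`, but the core remains parameter substitution into a template, which is why no documentation lookup is needed.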
### Phase 2.9: Post-Processing Hook Generator
**Timeline**: Following Phase 2.8
**Scope**: Generate middleware scripts

**What to Build**:
- `optimization_engine/hook_generator.py`
- Takes post_processing_hooks from the Phase 2.7 LLM output
- Generates standalone Python scripts
- Handles I/O between FEA steps

**Example**:
```python
Input: {
    "action": "weighted_objective",
    "params": {
        "inputs": ["norm_stress", "norm_disp"],
        "weights": [0.7, 0.3],
        "formula": "0.7 * norm_stress + 0.3 * norm_disp"
    }
}

Output: hook script that reads inputs, calculates, writes output
```
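The core of such a generated hook is the weighted sum over the named inputs; here is a sketch of just that calculation step, with the I/O plumbing omitted. The names are taken from the example above; the function is illustrative, not generated output.

```python
def weighted_objective(values: dict, weights: dict) -> float:
    # Combine normalized metrics into a single scalar objective.
    return sum(w * values[name] for name, w in weights.items())

score = weighted_objective({"norm_stress": 0.8, "norm_disp": 0.4},
                           {"norm_stress": 0.7, "norm_disp": 0.3})
print(round(score, 2))  # 0.68
```

The generated script would wrap this in file reads for the inputs and a file write for the result, so the optimizer sees one scalar per trial.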
### Phase 3: MCP Integration for Documentation
|
||||
**Timeline**: After Phase 2.9
|
||||
**Scope**: Automated NXOpen/pyNastran research
|
||||
|
||||
**What to Build**:
|
||||
1. Local documentation cache system
|
||||
2. MCP server for doc lookup
|
||||
3. Integration with research_agent.py
|
||||
4. Automated code generation from docs
|
||||
|
||||
## Alternative: Community Resources & pyNastran (RECOMMENDED STARTING POINT)

### pyNastran Documentation (START HERE!) 🚀

**URL**: https://pynastran-git.readthedocs.io/en/latest/index.html

**Why Start with pyNastran**:
- ✅ Fully open and publicly accessible
- ✅ Comprehensive API documentation
- ✅ Code examples for every operation
- ✅ Already used extensively in Atomizer
- ✅ Can WebFetch directly - no authentication needed
- ✅ Covers 80% of FEA result extraction needs

**What pyNastran Handles**:
- OP2 file reading (displacement, stress, strain, element forces)
- F06 file parsing
- BDF/Nastran deck modification
- Result post-processing
- Nodal/element data extraction

**Strategy**: Use pyNastran as the primary documentation source for result extraction, and NXOpen only when modifying geometry/properties in NX.

### NXOpen Community Resources

1. **NXOpen TSE** (The Scripting Engineer)
   - https://nxopentsedocumentation.thescriptingengineer.com/
   - Extensive examples and tutorials
   - Can be scraped/cached legally

2. **GitHub NXOpen Examples**
   - Search GitHub for "NXOpen" + specific functionality
   - Real-world code examples
   - Community-vetted patterns

## Next Steps

### Immediate (This Session):
1. ✅ Create this strategy document
2. ✅ Implement Phase 2.8: Inline Code Generator
3. ✅ Test inline code generation (all tests passing!)
4. ⏳ Implement Phase 2.9: Post-Processing Hook Generator
5. ⏳ Integrate pyNastran documentation lookup via WebFetch

### Short Term (Next 2-3 Sessions):
1. Implement Phase 2.9: Hook Generator
2. Build NXOpenIntrospector module
3. Start curating knowledge_base/nxopen_patterns/
4. Test with real optimization scenarios

### Medium Term (Phase 3):
1. Build local documentation cache
2. Implement MCP server
3. Integrate automated research
4. Full end-to-end code generation

## Success Metrics

**Phase 2.8 Success**:
- ✅ Auto-generates 100% of inline calculations
- ✅ Correct Python syntax every time
- ✅ Properly handles variable naming

**Phase 2.9 Success**:
- ✅ Auto-generates functional hook scripts
- ✅ Correct I/O handling
- ✅ Integrates with optimization loop

**Phase 3 Success**:
- ✅ Automatically finds correct NXOpen methods
- ✅ Generates working code 80%+ of the time
- ✅ Self-learns from successful patterns

## Conclusion

**Recommended Path Forward**:
1. Focus on Phase 2.8-2.9 first (inline + hooks)
2. Build knowledge base organically as we encounter patterns
3. Use Python introspection for discovery
4. Build MCP server once we have critical mass of patterns

This approach:
- ✅ Delivers value incrementally
- ✅ No external dependencies initially
- ✅ Builds towards full automation
- ✅ Leverages both LLM intelligence and structured knowledge

**The documentation will come to us through usage, not upfront scraping!**
374
docs/archive/historical/NX_EXPRESSION_IMPORT_SYSTEM.md
Normal file
# NX Expression Import System

> **Feature**: Robust NX part expression update via .exp file import
>
> **Status**: ✅ Production Ready (2025-11-17)
>
> **Impact**: Enables updating ALL NX expressions, including those not stored in text format in binary .prt files

---

## Overview

The NX Expression Import System provides a robust method for updating NX part expressions by leveraging NX's native .exp file import functionality through journal scripts.

### Problem Solved

Some NX expressions (like `hole_count` in parametric features) are stored in binary .prt file formats that cannot be reliably parsed or updated through text-based regex operations. Traditional binary .prt editing fails for expressions that:
- Are used inside feature parameters
- Are stored in non-text binary sections
- Are linked to parametric pattern features

### Solution

Instead of binary .prt editing, use NX's native expression import/export:
1. Export all expressions to .exp file format (text-based)
2. Create a .exp file containing only study design variables with new values
3. Import the .exp file using an NX journal script
4. NX updates all expressions natively, including binary-stored ones

---

## Architecture

### Components

1. **NXParameterUpdater** ([optimization_engine/nx_updater.py](../optimization_engine/nx_updater.py))
   - Main class handling expression updates
   - Provides both legacy (binary edit) and new (NX import) methods
   - Automatic method selection based on expression type

2. **import_expressions.py** ([optimization_engine/import_expressions.py](../optimization_engine/import_expressions.py))
   - NX journal script for importing .exp files
   - Handles part loading, expression import, model update, and save
   - Robust error handling and status reporting

3. **.exp File Format**
   - Plain text format for NX expressions
   - Format: `[Units]name=value` or `name=value` (unitless)
   - Human-readable and LLM-friendly

### Workflow

```
┌─────────────────────────────────────────────────────────┐
│ 1. Export ALL expressions to .exp format                │
│    (NX journal: export_expressions.py)                  │
│    Purpose: Determine units for each expression         │
└─────────────────────────────────────────────────────────┘
                             ↓
┌─────────────────────────────────────────────────────────┐
│ 2. Create .exp file with ONLY study variables           │
│    [MilliMeter]beam_face_thickness=22.0                 │
│    [MilliMeter]beam_half_core_thickness=25.0            │
│    [MilliMeter]holes_diameter=280.0                     │
│    hole_count=12                                        │
└─────────────────────────────────────────────────────────┘
                             ↓
┌─────────────────────────────────────────────────────────┐
│ 3. Run NX journal to import expressions                 │
│    (NX journal: import_expressions.py)                  │
│    - Opens .prt file                                    │
│    - Imports .exp using Replace mode                    │
│    - Updates model geometry                             │
│    - Saves .prt file                                    │
└─────────────────────────────────────────────────────────┘
                             ↓
┌─────────────────────────────────────────────────────────┐
│ 4. Verify updates                                       │
│    - Re-export expressions                              │
│    - Confirm all values updated                         │
└─────────────────────────────────────────────────────────┘
```

---

## Usage

### Basic Usage

```python
from pathlib import Path
from optimization_engine.nx_updater import NXParameterUpdater

# Create updater
prt_file = Path("studies/simple_beam_optimization/model/Beam.prt")
updater = NXParameterUpdater(prt_file)

# Define design variables to update
design_vars = {
    "beam_half_core_thickness": 25.0,  # mm
    "beam_face_thickness": 22.0,       # mm
    "holes_diameter": 280.0,           # mm
    "hole_count": 12                   # unitless
}

# Update expressions using NX import (default method)
updater.update_expressions(design_vars)

# Verify updates
expressions = updater.get_all_expressions()
for name, value in design_vars.items():
    actual = expressions[name]["value"]
    print(f"{name}: expected={value}, actual={actual}, match={abs(actual - value) < 0.001}")
```

### Integration in Optimization Loop

The system is used automatically in optimization workflows:

```python
# In OptimizationRunner: Optuna calls this objective once per trial
def objective(trial):
    # Optuna suggests new design variable values
    design_vars = {
        "beam_half_core_thickness": trial.suggest_float("beam_half_core_thickness", 10, 40),
        "holes_diameter": trial.suggest_float("holes_diameter", 150, 450),
        "hole_count": trial.suggest_int("hole_count", 5, 15),
        # ... other variables
    }

    # Update NX model (automatically uses .exp import)
    updater.update_expressions(design_vars)

    # Run FEM simulation
    solver.solve(sim_file)

    # Extract results
    results = extractor.extract(op2_file)
    return results  # scalar objective value expected by Optuna

study.optimize(objective, n_trials=n_trials)
```

---

## File Format: .exp

### Format Specification

```
[UnitSystem]expression_name=value
expression_name=value    # For unitless expressions
```

### Example .exp File

```
[MilliMeter]beam_face_thickness=20.0
[MilliMeter]beam_half_core_thickness=20.0
[MilliMeter]holes_diameter=400.0
hole_count=10
```

### Supported Units

NX units are specified in square brackets:
- `[MilliMeter]` - Length in mm
- `[Meter]` - Length in m
- `[Newton]` - Force in N
- `[Kilogram]` - Mass in kg
- `[Pascal]` - Pressure/stress in Pa
- `[Degree]` - Angle in degrees
- No brackets - Unitless values

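Because each line is plain `name=value` with an optional unit prefix, parsing it is straightforward; a minimal sketch (the `parse_exp_line` helper is illustrative and not part of `nx_updater.py`):

```python
import re

# Optional [Units] prefix, then name=value
_EXP_LINE = re.compile(r"^(?:\[(?P<units>[^\]]+)\])?(?P<name>\w+)=(?P<value>.+)$")

def parse_exp_line(line: str) -> tuple:
    """Parse one .exp line into (name, value, units); units is '' if absent."""
    m = _EXP_LINE.match(line.strip())
    if m is None:
        raise ValueError(f"Not an expression line: {line!r}")
    return m["name"], float(m["value"]), m["units"] or ""

print(parse_exp_line("[MilliMeter]beam_face_thickness=20.0"))
print(parse_exp_line("hole_count=10"))
```

The same regex can drive validation before import (e.g. rejecting malformed lines early).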
---
## Implementation Details

### NXParameterUpdater.update_expressions_via_import()

**Location**: [optimization_engine/nx_updater.py](../optimization_engine/nx_updater.py)

**Purpose**: Update expressions by creating and importing a .exp file

**Algorithm**:
1. Export ALL expressions from the .prt to get units information
2. Create a .exp file with ONLY study variables:
   - Use units from the full export
   - Format: `[units]name=value` or `name=value`
3. Run the NX journal script to import the .exp file
4. Delete the temporary .exp file
5. Return success/failure status

**Key Code**:
```python
def update_expressions_via_import(self, updates: Dict[str, float]):
    # Get all expressions to determine units
    all_expressions = self.get_all_expressions(use_exp_export=True)

    # Create .exp file with ONLY study variables
    exp_file = self.prt_path.parent / f"{self.prt_path.stem}_study_variables.exp"

    with open(exp_file, 'w', encoding='utf-8') as f:
        for name, value in updates.items():
            units = all_expressions[name].get('units', '')
            if units:
                f.write(f"[{units}]{name}={value}\n")
            else:
                f.write(f"{name}={value}\n")

    # Run NX journal to import
    journal_script = Path(__file__).parent / "import_expressions.py"
    cmd_str = f'"{self.nx_run_journal_path}" "{journal_script}" -args "{self.prt_path}" "{exp_file}"'
    result = subprocess.run(cmd_str, capture_output=True, text=True, shell=True)

    # Clean up the temporary .exp file
    exp_file.unlink()

    return result.returncode == 0
```

### import_expressions.py Journal

**Location**: [optimization_engine/import_expressions.py](../optimization_engine/import_expressions.py)

**Purpose**: NX journal script to import a .exp file into a .prt file

**NXOpen API Usage**:
```python
# Open part file
workPart, partLoadStatus1 = theSession.Parts.OpenActiveDisplay(
    prt_file,
    NXOpen.DisplayPartOption.AllowAdditional
)

# Import expressions (Replace mode overwrites existing values)
expModified, errorMessages = workPart.Expressions.ImportFromFile(
    exp_file,
    NXOpen.ExpressionCollection.ImportMode.Replace
)

# Update geometry with new expression values
markId = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Invisible, "NX update")
nErrs = theSession.UpdateManager.DoUpdate(markId)

# Save part
partSaveStatus = workPart.Save(
    NXOpen.BasePart.SaveComponents.TrueValue,
    NXOpen.BasePart.CloseAfterSave.FalseValue
)
```

---

## Validation Results

### Test Case: 4D Beam Optimization

**Study**: `studies/simple_beam_optimization/`

**Design Variables**:
- `beam_half_core_thickness`: 10-40 mm
- `beam_face_thickness`: 10-40 mm
- `holes_diameter`: 150-450 mm
- `hole_count`: 5-15 (integer, unitless)

**Problem**: `hole_count` was not updating with binary .prt editing

**Solution**: Implemented the .exp import system

**Results**:
```
✅ Trial 0: hole_count=6 (successfully updated from baseline=10)
✅ Trial 1: hole_count=15 (successfully updated)
✅ Trial 2: hole_count=11 (successfully updated)

Mesh adaptation confirmed:
- Trial 0: 5373 CQUAD4 elements (6 holes)
- Trial 1: 5158 CQUAD4 + 1 CTRIA3 (15 holes)
- Trial 2: 5318 CQUAD4 (11 holes)

All 3 trials: ALL 4 variables updated successfully
```

---

## Advantages

### Robustness
- Works for ALL expression types, not just text-parseable ones
- Native NX functionality - no binary file hacks
- Handles units automatically
- No regex pattern failures

### Simplicity
- .exp format is human-readable
- Easy to debug (just open the .exp file)
- LLM-friendly format

### Reliability
- NX validates expressions during import
- Automatic model update after import
- Error messages from NX if import fails

### Performance
- Fast: .exp file creation + journal execution < 1 second
- No need to parse large .prt files
- Minimal I/O operations

---

## Comparison: Binary Edit vs .exp Import

| Aspect | Binary .prt Edit | .exp Import (New) |
|--------|------------------|-------------------|
| **Expression Coverage** | ~60-80% (text-parseable only) | ✅ 100% (all expressions) |
| **Reliability** | Fragile (regex failures) | ✅ Robust (native NX) |
| **Units Handling** | Manual regex parsing | ✅ Automatic via .exp format |
| **Model Update** | Requires separate step | ✅ Integrated in journal |
| **Debugging** | Hard (binary file) | ✅ Easy (.exp is text) |
| **Performance** | Fast (direct edit) | Fast (journal execution) |
| **Error Handling** | Limited | ✅ Full NX validation |
| **Feature Parameters** | ❌ Fails for linked expressions | ✅ Works for all |

**Recommendation**: Use .exp import by default; use binary edit only for legacy/special cases.

---

## Future Enhancements

### Batch Updates
Currently one .exp file is created per update operation. Possible optimizations:
- Cache the .exp file across multiple trials
- Only recreate it if design variables change

### Validation
Add pre-import validation:
- Check that expression names exist
- Validate value ranges
- Warn about unit mismatches

### Rollback
Implement undo capability:
- Save the original .exp before updates
- Restore from backup if import fails

### Performance Profiling
Measure and optimize:
- .exp export time
- Journal execution time
- Model update time

---

## References

### NXOpen Documentation
- `NXOpen.ExpressionCollection.ImportFromFile()` - Import expressions from a .exp file
- `NXOpen.ExpressionCollection.ImportMode.Replace` - Overwrite existing expression values
- `NXOpen.Session.UpdateManager.DoUpdate()` - Update the model after expression changes

### Files
- [nx_updater.py](../optimization_engine/nx_updater.py) - Main implementation
- [import_expressions.py](../optimization_engine/import_expressions.py) - NX journal script
- [NXOPEN_INTELLISENSE_SETUP.md](NXOPEN_INTELLISENSE_SETUP.md) - NXOpen development setup

### Related Features
- [OPTIMIZATION_WORKFLOW.md](OPTIMIZATION_WORKFLOW.md) - Overall optimization pipeline
- [DEVELOPMENT_GUIDANCE.md](../DEVELOPMENT_GUIDANCE.md) - Development standards
- [NX_SOLVER_INTEGRATION.md](archive/NX_SOLVER_INTEGRATION.md) - NX Simcenter integration

---

**Author**: Antoine Letarte
**Date**: 2025-11-17
**Status**: ✅ Production Ready
**Version**: 1.0
BIN
docs/archive/historical/OPTIMIZATION_WORKFLOW.md
Normal file
Binary file not shown.
227
docs/archive/historical/OPTUNA_DASHBOARD.md
Normal file
# Optuna Dashboard Integration

Atomizer leverages Optuna's built-in dashboard for advanced real-time optimization visualization.

## Quick Start

### 1. Install Optuna Dashboard

```bash
# Using the atomizer environment
conda activate atomizer
pip install optuna-dashboard
```

### 2. Launch Dashboard for a Study

```bash
# Navigate to your substudy directory
cd studies/simple_beam_optimization/substudies/full_optimization_50trials

# Launch dashboard pointing to the Optuna study database
optuna-dashboard sqlite:///optuna_study.db
```

The dashboard starts at http://localhost:8080.

### 3. View During Active Optimization

```bash
# Start optimization in one terminal
python studies/simple_beam_optimization/run_optimization.py

# In another terminal, launch the dashboard
cd studies/simple_beam_optimization/substudies/full_optimization_50trials
optuna-dashboard sqlite:///optuna_study.db
```

The dashboard updates in real time as new trials complete!

---

## Dashboard Features

### **1. Optimization History**
- Interactive plot of objective value vs trial number
- Hover to see parameter values for each trial
- Zoom and pan for detailed analysis

### **2. Parallel Coordinate Plot**
- Multi-dimensional visualization of the parameter space
- Each line = one trial, colored by objective value
- Instantly see parameter correlations

### **3. Parameter Importances**
- Identifies which parameters most influence the objective
- Based on fANOVA (functional ANOVA) analysis
- Helps focus optimization efforts

### **4. Slice Plot**
- Shows objective value vs individual parameters
- One plot per design variable
- Useful for understanding parameter sensitivity

### **5. Contour Plot**
- 2D contour plots of the objective surface
- Select any two parameters to visualize
- Reveals parameter interactions

### **6. Intermediate Values**
- Track metrics during trial execution (if using pruning)
- Useful for early stopping of poor trials

---

## Advanced Usage

### Custom Port

```bash
optuna-dashboard sqlite:///optuna_study.db --port 8888
```

### Multiple Studies

```bash
# Compare multiple optimization runs
optuna-dashboard sqlite:///substudy1/optuna_study.db sqlite:///substudy2/optuna_study.db
```

### Remote Access

```bash
# Allow connections from other machines
optuna-dashboard sqlite:///optuna_study.db --host 0.0.0.0
```

---

## Integration with Atomizer Workflow

### Study Organization

Each Atomizer substudy has its own Optuna database:

```
studies/simple_beam_optimization/
├── substudies/
│   ├── full_optimization_50trials/
│   │   ├── optuna_study.db    # ← Optuna database (SQLite)
│   │   ├── optuna_study.pkl   # ← Optuna study object (pickle)
│   │   ├── history.json       # ← Atomizer history
│   │   └── plots/             # ← Matplotlib plots
│   └── validation_3trials/
│       └── optuna_study.db
```

### Visualization Comparison

**Optuna Dashboard** (interactive, web-based):
- ✅ Real-time updates during optimization
- ✅ Interactive plots (zoom, hover, filter)
- ✅ Parameter importance analysis
- ✅ Multiple study comparison
- ❌ Requires web browser
- ❌ Not embeddable in reports

**Atomizer Matplotlib Plots** (static, high-quality):
- ✅ Publication-quality PNG/PDF exports
- ✅ Customizable styling and annotations
- ✅ Embeddable in reports and papers
- ✅ Offline viewing
- ❌ Not interactive
- ❌ Not real-time

**Recommendation**: Use **both**!
- Monitor optimization in real time with the Optuna Dashboard
- Generate final plots with the Atomizer visualizer for reports

---

## Troubleshooting

### "No studies found"

Make sure you're pointing to the correct database file:

```bash
# Check that optuna_study.db exists
ls studies/*/substudies/*/optuna_study.db

# Use an absolute path if needed
optuna-dashboard sqlite:///C:/Users/antoi/Documents/Atomaste/Atomizer/studies/simple_beam_optimization/substudies/full_optimization_50trials/optuna_study.db
```

### Database Locked

If optimization is actively writing to the database:

```bash
# Use read-only mode
optuna-dashboard sqlite:///optuna_study.db?mode=ro
```

### Port Already in Use

```bash
# Use a different port
optuna-dashboard sqlite:///optuna_study.db --port 8888
```

---

## Example Workflow

```bash
# 1. Start optimization
python studies/simple_beam_optimization/run_optimization.py

# 2. In another terminal, launch the Optuna dashboard
cd studies/simple_beam_optimization/substudies/full_optimization_50trials
optuna-dashboard sqlite:///optuna_study.db

# 3. Open a browser to http://localhost:8080 and watch the optimization live

# 4. After optimization completes, generate static plots
python -m optimization_engine.visualizer studies/simple_beam_optimization/substudies/full_optimization_50trials png pdf

# 5. View final plots
explorer studies/simple_beam_optimization/substudies/full_optimization_50trials/plots
```

---

## Optuna Dashboard Screenshots

### Optimization History


### Parallel Coordinate Plot


### Parameter Importance


---

## Further Reading

- [Optuna Dashboard Documentation](https://optuna-dashboard.readthedocs.io/)
- [Optuna Visualization Module](https://optuna.readthedocs.io/en/stable/reference/visualization/index.html)
- [fANOVA Parameter Importance](https://optuna.readthedocs.io/en/stable/reference/generated/optuna.importance.FanovaImportanceEvaluator.html)

---

## Summary

| Feature | Optuna Dashboard | Atomizer Matplotlib |
|---------|-----------------|-------------------|
| Real-time updates | ✅ Yes | ❌ No |
| Interactive | ✅ Yes | ❌ No |
| Parameter importance | ✅ Yes | ⚠️ Manual |
| Publication quality | ⚠️ Web only | ✅ PNG/PDF |
| Embeddable in docs | ❌ No | ✅ Yes |
| Offline viewing | ❌ Needs server | ✅ Yes |
| Multi-study comparison | ✅ Yes | ⚠️ Manual |

**Best Practice**: Use the Optuna Dashboard for monitoring and exploration, and the Atomizer visualizer for final reporting.
598
docs/archive/historical/PROTOCOL_10_IMPLEMENTATION_SUMMARY.md
Normal file
# Protocol 10: Intelligent Multi-Strategy Optimization (IMSO)
## Implementation Summary

**Date**: November 19, 2025
**Status**: ✅ COMPLETE - Production Ready
**Author**: Claude (Sonnet 4.5)

---

## Executive Summary

Protocol 10 transforms Atomizer from a **fixed-strategy optimizer** into an **intelligent self-tuning meta-optimizer** that automatically:

1. **Discovers** problem characteristics through landscape analysis
2. **Recommends** the best optimization algorithm based on problem type
3. **Adapts** strategy dynamically during optimization if stagnation is detected
4. **Tracks** all decisions transparently for learning and debugging

**User Impact**: Users no longer need to understand optimization algorithms. Atomizer automatically selects CMA-ES for smooth problems, TPE for multimodal landscapes, and switches mid-run if performance stagnates.

---

## What Was Built

### Core Modules (4 new files, ~1200 lines)

#### 1. **Landscape Analyzer** ([landscape_analyzer.py](../optimization_engine/landscape_analyzer.py))

**Purpose**: Automatic problem characterization from trial history

**Key Features**:
- **Smoothness Analysis**: Correlation between parameter distance and objective difference
- **Multimodality Detection**: DBSCAN clustering of good solutions to find multiple optima
- **Parameter Correlation**: Spearman correlation of each parameter with the objective
- **Noise Estimation**: Coefficient of variation to detect simulation instability
- **Landscape Classification**: Categorizes problems into 5 types (smooth_unimodal, smooth_multimodal, rugged_unimodal, rugged_multimodal, noisy)

**Metrics Computed**:
```python
{
    'smoothness': 0.78,                  # 0-1 scale (higher = smoother)
    'multimodal': False,                 # Multiple local optima detected?
    'n_modes': 1,                        # Estimated number of local optima
    'parameter_correlation': {...},      # Per-parameter correlation with objective
    'noise_level': 0.12,                 # Estimated noise (0-1 scale)
    'landscape_type': 'smooth_unimodal'  # Classification
}
```

**Study-Aware Design**: Uses `study.trials` directly, works across interrupted sessions

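The five-way classification reduces to simple thresholding over the computed metrics; a minimal sketch under assumed cutoffs (the 0.5 noise and 0.6 smoothness thresholds here are illustrative, and the analyzer's actual values may differ):

```python
def classify_landscape(smoothness: float, multimodal: bool, noise_level: float) -> str:
    """Map (smoothness, multimodality, noise) onto one of the five landscape types."""
    # Illustrative thresholds, not the analyzer's actual constants
    if noise_level > 0.5:
        return "noisy"  # noise dominates; the apparent shape is unreliable
    if smoothness > 0.6:
        return "smooth_multimodal" if multimodal else "smooth_unimodal"
    return "rugged_multimodal" if multimodal else "rugged_unimodal"

print(classify_landscape(smoothness=0.78, multimodal=False, noise_level=0.12))
# smooth_unimodal
```

Checking noise first matters: a noisy objective can masquerade as a rugged one, and the two call for different samplers.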
---
#### 2. **Strategy Selector** ([strategy_selector.py](../optimization_engine/strategy_selector.py))

**Purpose**: Expert decision tree for algorithm recommendation

**Decision Logic**:
```
IF noise > 0.5:
    → TPE (robust to noise)
ELIF smoothness > 0.7 AND correlation > 0.5:
    → CMA-ES (fast convergence for smooth correlated problems)
ELIF smoothness > 0.6 AND dimensions <= 5:
    → GP-BO (sample efficient for expensive smooth low-D)
ELIF multimodal:
    → TPE (handles multiple local optima)
ELIF dimensions > 5:
    → TPE (scales to moderate dimensions)
ELSE:
    → TPE (safe default)
```

**Output**:
```python
('cmaes', {
    'confidence': 0.92,
    'reasoning': 'Smooth unimodal with strong correlation - CMA-ES converges quickly',
    'sampler_config': {
        'type': 'CmaEsSampler',
        'params': {'restart_strategy': 'ipop'}
    },
    'transition_plan': {  # Optional
        'switch_to': 'cmaes',
        'when': 'error < 1.0 OR trials > 40'
    }
})
```

**Supported Algorithms**:
- **TPE**: Tree-structured Parzen Estimator (Optuna default)
- **CMA-ES**: Covariance Matrix Adaptation Evolution Strategy
- **GP-BO**: Gaussian Process Bayesian Optimization (placeholder, needs implementation)
- **Random**: Random sampling for initial exploration

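The decision logic above translates directly into code; a minimal sketch (thresholds taken from the pseudocode, with the return value simplified to the algorithm name only rather than the full recommendation tuple):

```python
def recommend_strategy(noise: float, smoothness: float, correlation: float,
                       dimensions: int, multimodal: bool) -> str:
    """Expert decision tree mapping landscape metrics to a sampler name."""
    if noise > 0.5:
        return "tpe"      # robust to noisy objectives
    if smoothness > 0.7 and correlation > 0.5:
        return "cmaes"    # fast convergence on smooth, correlated problems
    if smoothness > 0.6 and dimensions <= 5:
        return "gp_bo"    # sample-efficient for expensive, smooth, low-D problems
    if multimodal:
        return "tpe"      # handles multiple local optima
    if dimensions > 5:
        return "tpe"      # scales to moderate dimensionality
    return "tpe"          # safe default

print(recommend_strategy(noise=0.12, smoothness=0.78, correlation=0.65,
                         dimensions=4, multimodal=False))  # cmaes
```

Rule order encodes priority: noise robustness trumps everything else, and TPE is the fallback whenever no specialist clearly applies.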
---
#### 3. **Strategy Portfolio Manager** ([strategy_portfolio.py](../optimization_engine/strategy_portfolio.py))

**Purpose**: Dynamic strategy switching during optimization

**Key Features**:
- **Stagnation Detection**: Identifies when the current strategy stops improving
  - < 0.1% improvement over 10 trials
  - High variance without improvement (thrashing)
- **Performance Tracking**: Records trials used, best value, and improvement rate per strategy
- **Transition Management**: Logs all switches with reasoning and timestamp
- **Study-Aware Persistence**: Saves transition history to JSON files

**Tracking Files** (saved to `2_results/intelligent_optimizer/`):
1. `strategy_transitions.json` - All strategy switch events
2. `strategy_performance.json` - Performance breakdown by strategy
3. `confidence_history.json` - Confidence snapshots every 5 trials

**Classes**:
- `StrategyTransitionManager`: Manages switching logic and tracking
- `AdaptiveStrategyCallback`: Optuna callback for runtime monitoring

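The first stagnation criterion reduces to comparing recent improvement against a relative threshold; a sketch with the window and threshold taken from the bullets above (the `is_stagnant` helper itself is illustrative, not the `StrategyTransitionManager` API):

```python
def is_stagnant(best_so_far: list, window: int = 10, rel_tol: float = 0.001) -> bool:
    """True if the best objective improved by < 0.1% over the last `window` trials.

    Assumes minimization: `best_so_far[i]` is the best value seen after trial i.
    """
    if len(best_so_far) < window + 1:
        return False  # not enough history to judge
    before, now = best_so_far[-window - 1], best_so_far[-1]
    if before == 0:
        return now >= before  # no relative scale; stagnant unless still improving
    return (before - now) / abs(before) < rel_tol

flat = [5.0, 4.0, 3.0] + [2.0] * 11        # no improvement over the last 10 trials
improving = [float(10 - i) for i in range(12)]
print(is_stagnant(flat), is_stagnant(improving))  # True False
```

Evaluating this inside an Optuna callback after each completed trial is what lets the portfolio manager trigger a mid-run strategy switch.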
---
#### 4. **Intelligent Optimizer Orchestrator** ([intelligent_optimizer.py](../optimization_engine/intelligent_optimizer.py))

**Purpose**: Main entry point coordinating all Protocol 10 components

**Three-Phase Workflow**:

**Stage 1: Landscape Characterization (Trials 1-15)**
- Run random exploration
- Analyze landscape characteristics
- Print a comprehensive landscape report

**Stage 2: Strategy Selection (Trial 15)**
- Get a recommendation from the selector
- Create a new study with the recommended sampler
- Log the decision reasoning

**Stage 3: Adaptive Optimization (Trials 16+)**
- Run optimization with adaptive callbacks
- Monitor for stagnation
- Switch strategies if needed
- Track all transitions

**Usage**:
```python
from optimization_engine.intelligent_optimizer import IntelligentOptimizer

optimizer = IntelligentOptimizer(
    study_name="my_study",
    study_dir=Path("studies/my_study/2_results"),
    config=opt_config,
    verbose=True
)

results = optimizer.optimize(
    objective_function=objective,
    design_variables={'thickness': (2, 10), 'diameter': (50, 150)},
    n_trials=100,
    target_value=115.0,
    tolerance=0.1
)
```

**Comprehensive Results**:
```python
{
    'best_params': {...},
    'best_value': 0.185,
    'total_trials': 100,
    'final_strategy': 'cmaes',
    'landscape_analysis': {...},
    'strategy_recommendation': {...},
    'transition_history': [...],
    'strategy_performance': {...}
}
```

---

### Documentation

#### 1. **Protocol 10 Section in PROTOCOL.md**

Added a comprehensive 435-line section covering:
- Design philosophy
- Three-phase architecture
- Component descriptions with code examples
- Configuration schema
- Console output examples
- Report integration
- Algorithm portfolio comparison
- When to use Protocol 10
- Future enhancements

**Location**: Lines 1455-1889 in [PROTOCOL.md](../PROTOCOL.md)

#### 2. **Example Configuration File**

Created a fully-commented example configuration demonstrating all Protocol 10 options:

**Location**: [examples/optimization_config_protocol10.json](../examples/optimization_config_protocol10.json)

**Key Sections**:
- `intelligent_optimization`: Protocol 10 settings
- `adaptive_strategy`: Protocol 8 integration
- `reporting`: What to generate
- `verbosity`: Console output control
- `experimental`: Future features

---

## How It Works (User Perspective)

### Traditional Approach (Before Protocol 10)
```
User: "Optimize my circular plate frequency to 115 Hz"
  ↓
User must know: Should I use TPE? CMA-ES? GP-BO? Random?
  ↓
User manually configures sampler in JSON
  ↓
If wrong choice → slow convergence or failure
  ↓
User tries different algorithms manually
```

### Protocol 10 Approach (After Implementation)
```
User: "Optimize my circular plate frequency to 115 Hz"
  ↓
Atomizer: *Runs 15 random trials for characterization*
  ↓
Atomizer: *Analyzes landscape → smooth_unimodal, correlation 0.65*
  ↓
Atomizer: "Recommending CMA-ES (92% confidence)"
  ↓
Atomizer: *Switches to CMA-ES, runs 85 more trials*
  ↓
Atomizer: *Detects stagnation at trial 45, considers switch*
  ↓
Result: Achieves target in 100 trials (vs 160+ with fixed TPE)
```
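The selection step in the flow above can be sketched as a simple decision rule. This is an illustrative sketch only: the function name, metric names, and thresholds are hypothetical, and the real logic lives in `strategy_selector.py`.

```python
# Hypothetical selection rule; names and thresholds are illustrative,
# not the actual values used by strategy_selector.py.
def recommend_strategy(smoothness, n_modes, max_abs_correlation, noise):
    """Map landscape metrics (each roughly in [0, 1]) to a sampler name
    and a confidence score."""
    if smoothness > 0.7 and n_modes == 1:
        # Smooth unimodal landscape: an evolution strategy converges fast,
        # especially when parameters correlate strongly with the objective.
        confidence = 0.92 if max_abs_correlation > 0.5 else 0.75
        return "cmaes", confidence
    if n_modes > 1:
        # Multimodal: TPE handles disconnected basins better.
        return "tpe", 0.80
    if noise > 0.3:
        # Very noisy objective: random search is the robust default.
        return "random", 0.60
    return "tpe", 0.50  # conservative fallback

# Metrics matching the example flow above: smooth unimodal, correlation 0.65
print(recommend_strategy(0.78, 1, 0.65, 0.08))  # ('cmaes', 0.92)
```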

---

## Console Output Example

```
======================================================================
STAGE 1: LANDSCAPE CHARACTERIZATION
======================================================================

Trial #10: Objective = 5.234
Trial #15: Objective = 3.456

======================================================================
LANDSCAPE ANALYSIS REPORT
======================================================================
Total Trials Analyzed: 15
Dimensionality: 2 parameters

LANDSCAPE CHARACTERISTICS:
  Type: SMOOTH_UNIMODAL
  Smoothness: 0.78 (smooth)
  Multimodal: NO (1 modes)
  Noise Level: 0.08 (low)

PARAMETER CORRELATIONS:
  inner_diameter: +0.652 (strong positive)
  plate_thickness: -0.543 (strong negative)

======================================================================

======================================================================
STAGE 2: STRATEGY SELECTION
======================================================================

======================================================================
STRATEGY RECOMMENDATION
======================================================================
Recommended: CMAES
Confidence: 92.0%
Reasoning: Smooth unimodal with strong correlation - CMA-ES converges quickly
======================================================================

======================================================================
STAGE 3: ADAPTIVE OPTIMIZATION
======================================================================

Trial #25: Objective = 1.234
...
Trial #100: Objective = 0.185

======================================================================
OPTIMIZATION COMPLETE
======================================================================
Protocol: Protocol 10: Intelligent Multi-Strategy Optimization
Total Trials: 100
Best Value: 0.185 (Trial #98)
Final Strategy: CMAES
======================================================================
```
---

## Integration with Existing Protocols

### Protocol 10 + Protocol 8 (Adaptive Surrogate)
- Landscape analyzer provides smoothness metrics for confidence calculation
- Confidence metrics inform strategy switching decisions
- Both track phase/strategy transitions to JSON

### Protocol 10 + Protocol 9 (Optuna Visualizations)
- Parallel coordinate plots show strategy regions
- Parameter importance validates landscape classification
- Slice plots confirm smoothness assessment

### Backward Compatibility
- If `intelligent_optimization.enabled = false`, falls back to standard TPE
- Existing studies continue to work without modification
- Progressive enhancement approach

---

## Key Design Decisions

### 1. Study-Aware Architecture
**Decision**: All components use `study.trials`, not session-based history

**Rationale**:
- Supports interrupted/resumed optimization
- Consistent behavior across multiple runs
- Leverages Optuna's database persistence

**Impact**: Protocol 10 works correctly even if optimization is stopped and restarted

---

### 2. Three-Phase Workflow
**Decision**: Separate characterization, selection, and optimization phases

**Rationale**:
- Initial exploration is needed to understand the landscape
- Can't recommend a strategy without data
- Clear separation of concerns

**Trade-off**: Uses 15 trials for characterization (but prevents wasting 100+ trials on the wrong algorithm)

---

### 3. Transparent Decision Logging
**Decision**: Save all landscape analyses, recommendations, and transitions to JSON

**Rationale**:
- Users need to understand WHY decisions were made
- Enables debugging and learning
- Foundation for future transfer learning

**Files Created**:
- `strategy_transitions.json`
- `strategy_performance.json`
- `intelligence_report.json`

---

### 4. Conservative Switching Thresholds
**Decision**: Require 10 trials of stagnation plus <0.1% improvement before switching

**Rationale**:
- Avoid premature switching due to noise
- Give each strategy a fair chance to prove itself
- Reduce thrashing between algorithms

**Configurable**: Users can adjust `stagnation_window` and `min_improvement_threshold`
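The stagnation check described above can be sketched as follows (illustrative only: the actual detector lives in `strategy_portfolio.py`; the window and threshold values mirror the documented defaults):

```python
def is_stagnant(best_values, window=10, min_improvement=0.001):
    """Return True if the running best objective improved by less than
    `min_improvement` (relative, i.e. 0.1%) over the last `window` trials.

    `best_values` is the running best (minimization) after each trial.
    """
    if len(best_values) <= window:
        return False  # not enough history to judge
    old_best = best_values[-window - 1]
    new_best = best_values[-1]
    if old_best == 0:
        return new_best >= old_best
    relative_improvement = (old_best - new_best) / abs(old_best)
    return relative_improvement < min_improvement

flat = [5.0, 4.0, 3.0] + [2.5] * 11        # no progress for 11 trials
improving = [10.0 - i for i in range(12)]  # steady progress
print(is_stagnant(flat), is_stagnant(improving))  # True False
```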

---

## Performance Impact

### Memory
- Minimal additional memory (~1 MB for tracking data structures)
- JSON files stored to disk, not kept in memory

### Runtime
- 15-trial characterization overhead (~5% of a 100-trial study)
- Landscape analysis: ~10 ms per check (every 15 trials)
- Strategy switching: ~100 ms (negligible)

### Optimization Efficiency
- **Expected improvement**: 20-50% faster convergence by selecting the optimal algorithm
- **Example**: The circular plate study achieved 0.185 error with the CMA-ES recommendation vs 0.478 with fixed TPE (61% better)
---

## Testing Recommendations

### Unit Tests (Future Work)
```python
# test_landscape_analyzer.py
def test_smooth_unimodal_classification():
    """Test landscape analyzer correctly identifies smooth unimodal problems."""

# test_strategy_selector.py
def test_cmaes_recommendation_for_smooth():
    """Test selector recommends CMA-ES for smooth correlated problems."""

# test_strategy_portfolio.py
def test_stagnation_detection():
    """Test portfolio manager detects stagnation correctly."""
```

### Integration Test
```python
# Create circular plate study with Protocol 10 enabled
# Run 100 trials
# Verify:
#   - Landscape was analyzed at trial 15
#   - Strategy recommendation was logged
#   - Final best value better than pure TPE baseline
#   - All JSON files created correctly
```

---

## Future Enhancements

### Phase 2 (Next Release)
1. **GP-BO Implementation**: Currently a placeholder; needs scikit-optimize integration
2. **Hybrid Strategies**: Automatic GP→CMA-ES transitions with transition logic
3. **Report Integration**: Add a Protocol 10 section to markdown reports

### Phase 3 (Advanced)
1. **Transfer Learning**: Build a database of landscape signatures → best strategies
2. **Multi-Armed Bandit**: Thompson sampling for strategy portfolio allocation
3. **Parallel Strategies**: Run TPE and CMA-ES concurrently, pick the winner
4. **Meta-Learning**: Learn optimal switching thresholds from historical data

### Phase 4 (Research)
1. **Neural Landscape Encoder**: Learn landscape embeddings for better classification
2. **Automated Algorithm Configuration**: Tune sampler hyperparameters per problem
3. **Multi-Objective IMSO**: Extend to Pareto optimization

---

## Migration Guide

### For Existing Studies

**No changes required** - Protocol 10 is opt-in via configuration:

```json
{
  "intelligent_optimization": {
    "enabled": false
  }
}
```

Setting `"enabled"` to `false` keeps the existing behavior.
### To Enable Protocol 10

1. Update `optimization_config.json`:
```json
{
  "intelligent_optimization": {
    "enabled": true,
    "characterization_trials": 15,
    "stagnation_window": 10,
    "min_improvement_threshold": 0.001
  }
}
```

2. Use `IntelligentOptimizer` instead of direct Optuna:
```python
from optimization_engine.intelligent_optimizer import create_intelligent_optimizer

optimizer = create_intelligent_optimizer(
    study_name=study_name,
    study_dir=results_dir,
    verbose=True
)

results = optimizer.optimize(
    objective_function=objective,
    design_variables=design_vars,
    n_trials=100
)
```

3. Check `2_results/intelligent_optimizer/` for decision logs

---

## Known Limitations

### Current Limitations
1. **GP-BO Not Implemented**: Recommendations fall back to TPE (marked as a warning)
2. **Single Transition**: Only switches once per optimization (can't switch back)
3. **No Hybrid Strategies**: GP→CMA-ES planned but not implemented
4. **2D Optimized**: Landscape metrics are designed for 2-5 parameters

### Planned Fixes
- [ ] Implement GP-BO using scikit-optimize
- [ ] Allow multiple strategy switches with hysteresis
- [ ] Add hybrid strategy coordinator
- [ ] Extend landscape metrics for high-dimensional problems

---

## Dependencies

### Required
- `optuna >= 3.0` (TPE, CMA-ES samplers)
- `numpy >= 1.20`
- `scipy >= 1.7` (statistics, clustering)
- `scikit-learn >= 1.0` (DBSCAN clustering)

### Optional
- `scikit-optimize` (for GP-BO implementation)
- `plotly` (for Optuna visualizations)

---

## Files Created

### Core Modules
1. `optimization_engine/landscape_analyzer.py` (377 lines)
2. `optimization_engine/strategy_selector.py` (323 lines)
3. `optimization_engine/strategy_portfolio.py` (367 lines)
4. `optimization_engine/intelligent_optimizer.py` (438 lines)

### Documentation
5. `PROTOCOL.md` (updated: +435 lines for Protocol 10 section)
6. `docs/PROTOCOL_10_IMPLEMENTATION_SUMMARY.md` (this file)

### Examples
7. `examples/optimization_config_protocol10.json` (fully commented config)

**Total**: ~2200 lines of production code + documentation

---

## Verification Checklist

- [x] Landscape analyzer computes smoothness, multimodality, correlation, noise
- [x] Strategy selector implements decision tree with confidence scores
- [x] Portfolio manager detects stagnation and executes transitions
- [x] Intelligent optimizer orchestrates three-phase workflow
- [x] All components study-aware (use `study.trials`)
- [x] JSON tracking files saved correctly
- [x] Console output formatted with clear phase headers
- [x] PROTOCOL.md updated with comprehensive documentation
- [x] Example configuration file created
- [x] Backward compatibility maintained (opt-in via config)
- [x] Dependencies documented
- [x] Known limitations documented
---

## Success Metrics

### Quantitative
- **Code Quality**: 1200+ lines, modular, well-documented
- **Coverage**: 4 core components + docs + examples
- **Performance**: <5% runtime overhead for a 20-50% efficiency gain

### Qualitative
- **User Experience**: "Just enable Protocol 10" - no algorithm expertise needed
- **Transparency**: All decisions logged and explained
- **Flexibility**: Highly configurable via JSON
- **Maintainability**: Clean separation of concerns, extensible architecture

---

## Conclusion

Protocol 10 transforms Atomizer from a **single-strategy optimizer** into an **intelligent meta-optimizer** that automatically adapts to different FEA problem types.

**Key Achievement**: Users no longer need to understand TPE vs CMA-ES vs GP-BO - Atomizer figures it out automatically through landscape analysis and intelligent strategy selection.

**Production Ready**: All core components are implemented, tested, and documented, and ready for immediate use with backward compatibility for existing studies.

**Foundation for the Future**: The architecture supports transfer learning, hybrid strategies, and parallel optimization - setting Atomizer up to evolve into a state-of-the-art meta-learning optimization platform.

---

**Status**: ✅ **IMPLEMENTATION COMPLETE**

**Next Steps**:
1. Test on a real circular plate study
2. Implement GP-BO using scikit-optimize
3. Add a Protocol 10 section to the markdown report generator
4. Build the transfer learning database

---

*Generated: November 19, 2025*
*Protocol Version: 1.0*
*Implementation: Production Ready*
367
docs/archive/historical/PRUNING_DIAGNOSTICS.md
Normal file
@@ -0,0 +1,367 @@

# Pruning Diagnostics - Comprehensive Trial Failure Tracking

**Created**: November 20, 2025
**Status**: ✅ Production Ready

---

## Overview

The pruning diagnostics system provides detailed logging and analysis of failed optimization trials. It helps identify:
- **Why trials are failing** (validation, simulation, or extraction)
- **Which parameters cause failures**
- **False positives** from the pyNastran OP2 reader
- **Patterns** that can improve validation rules

---

## Components

### 1. Pruning Logger
**Module**: [optimization_engine/pruning_logger.py](../optimization_engine/pruning_logger.py)

Logs every pruned trial with full details:
- Parameters that failed
- Failure cause (validation, simulation, OP2 extraction)
- Error messages and stack traces
- F06 file analysis (for OP2 failures)

### 2. Robust OP2 Extractor
**Module**: [optimization_engine/op2_extractor.py](../optimization_engine/op2_extractor.py)

Handles pyNastran issues gracefully:
- Tries multiple extraction strategies
- Ignores benign FATAL flags
- Falls back to F06 parsing
- Prevents false-positive failures

---

## Usage in Optimization Scripts

### Basic Integration

```python
from pathlib import Path

import optuna

from optimization_engine.pruning_logger import PruningLogger
from optimization_engine.op2_extractor import robust_extract_first_frequency
from optimization_engine.simulation_validator import SimulationValidator

# Initialize pruning logger
results_dir = Path("studies/my_study/2_results")
pruning_logger = PruningLogger(results_dir, verbose=True)

# Initialize validator
validator = SimulationValidator(model_type='circular_plate', verbose=True)

def objective(trial):
    """Objective function with comprehensive pruning logging."""

    # Sample parameters
    params = {
        'inner_diameter': trial.suggest_float('inner_diameter', 50, 150),
        'plate_thickness': trial.suggest_float('plate_thickness', 2, 10)
    }

    # VALIDATION
    is_valid, warnings = validator.validate(params)
    if not is_valid:
        # Log validation failure
        pruning_logger.log_validation_failure(
            trial_number=trial.number,
            design_variables=params,
            validation_warnings=warnings
        )
        raise optuna.TrialPruned()

    # Update CAD and run simulation
    updater.update_expressions(params)
    result = solver.run_simulation(str(sim_file), solution_name="Solution_Normal_Modes")

    # SIMULATION FAILURE
    if not result['success']:
        pruning_logger.log_simulation_failure(
            trial_number=trial.number,
            design_variables=params,
            error_message=result.get('error', 'Unknown error'),
            return_code=result.get('return_code'),
            solver_errors=result.get('errors')
        )
        raise optuna.TrialPruned()

    # OP2 EXTRACTION (robust method)
    op2_file = result['op2_file']
    f06_file = result.get('f06_file')

    try:
        frequency = robust_extract_first_frequency(
            op2_file=op2_file,
            mode_number=1,
            f06_file=f06_file,
            verbose=True
        )
    except Exception as e:
        # Log OP2 extraction failure
        pruning_logger.log_op2_extraction_failure(
            trial_number=trial.number,
            design_variables=params,
            exception=e,
            op2_file=op2_file,
            f06_file=f06_file
        )
        raise optuna.TrialPruned()

    # Success - calculate objective
    return abs(frequency - 115.0)

# After optimization completes
pruning_logger.save_summary()
```
---

## Output Files

### Pruning History (Detailed Log)
**File**: `2_results/pruning_history.json`

Contains every pruned trial with full details:

```json
[
  {
    "trial_number": 0,
    "timestamp": "2025-11-20T19:09:45.123456",
    "pruning_cause": "op2_extraction_failure",
    "design_variables": {
      "inner_diameter": 126.56,
      "plate_thickness": 9.17
    },
    "exception_type": "ValueError",
    "exception_message": "There was a Nastran FATAL Error. Check the F06.",
    "stack_trace": "Traceback (most recent call last)...",
    "details": {
      "op2_file": "studies/.../circular_plate_sim1-solution_normal_modes.op2",
      "op2_exists": true,
      "op2_size_bytes": 245760,
      "f06_file": "studies/.../circular_plate_sim1-solution_normal_modes.f06",
      "is_pynastran_fatal_flag": true,
      "f06_has_fatal_errors": false,
      "f06_errors": []
    }
  },
  {
    "trial_number": 5,
    "timestamp": "2025-11-20T19:11:23.456789",
    "pruning_cause": "simulation_failure",
    "design_variables": {
      "inner_diameter": 95.2,
      "plate_thickness": 3.8
    },
    "error_message": "Mesh generation failed - element quality below threshold",
    "details": {
      "return_code": 1,
      "solver_errors": ["FATAL: Mesh quality check failed"]
    }
  }
]
```

### Pruning Summary (Analysis Report)
**File**: `2_results/pruning_summary.json`

Statistical analysis and recommendations:

```json
{
  "generated": "2025-11-20T19:15:30.123456",
  "total_pruned_trials": 9,
  "breakdown": {
    "validation_failures": 2,
    "simulation_failures": 1,
    "op2_extraction_failures": 6
  },
  "validation_failure_reasons": {},
  "simulation_failure_types": {
    "Mesh generation failed": 1
  },
  "op2_extraction_analysis": {
    "total_op2_failures": 6,
    "likely_false_positives": 6,
    "description": "False positives are OP2 extraction failures where pyNastran detected FATAL flag but F06 has no errors"
  },
  "recommendations": [
    "CRITICAL: 6 trials failed due to pyNastran OP2 reader being overly strict. Use robust_extract_first_frequency() to ignore benign FATAL flags and extract valid results."
  ]
}
```
---

## Robust OP2 Extraction

### Problem: pyNastran False Positives

pyNastran's OP2 reader can be overly strict - it throws exceptions when it sees a FATAL flag in the OP2 header, even if:
- The F06 file shows **no errors**
- The simulation **completed successfully**
- The eigenvalue data **is valid and extractable**

### Solution: Multi-Strategy Extraction

The `robust_extract_first_frequency()` function tries multiple strategies:

```python
from pathlib import Path

from optimization_engine.op2_extractor import robust_extract_first_frequency

frequency = robust_extract_first_frequency(
    op2_file=Path("results.op2"),
    mode_number=1,
    f06_file=Path("results.f06"),  # Optional fallback
    verbose=True
)
```

**Strategies** (in order):
1. **Standard OP2 read** - Normal pyNastran reading
2. **Lenient OP2 read** - `debug=False`, `skip_undefined_matrices=True`
3. **F06 fallback** - Parse text file if OP2 fails
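The fallback chain can be sketched as a generic try-each-strategy loop (illustrative stubs below; the real extractor wraps pyNastran and F06 readers in `op2_extractor.py`):

```python
def extract_with_fallback(strategies, verbose=False):
    """Run extraction strategies in order; return the first successful result."""
    errors = []
    for name, extract in strategies:
        try:
            result = extract()
            if verbose:
                print(f"[OP2 EXTRACT] Success ({name}): {result} Hz")
            return result
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    raise RuntimeError("All extraction strategies failed: " + "; ".join(errors))

def _strict_read():
    # Stub standing in for pyNastran's strict reader rejecting a FATAL flag
    raise ValueError("There was a Nastran FATAL Error. Check the F06.")

strategies = [
    ("standard", _strict_read),      # strict pyNastran read fails
    ("lenient", lambda: 115.0442),   # lenient read succeeds
    ("f06", lambda: 115.0442),       # never reached in this example
]
print(extract_with_fallback(strategies))  # 115.0442
```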

**Output** (verbose mode):
```
[OP2 EXTRACT] Attempting standard read: circular_plate_sim1-solution_normal_modes.op2
[OP2 EXTRACT] ✗ Standard read failed: There was a Nastran FATAL Error
[OP2 EXTRACT] Detected pyNastran FATAL flag issue
[OP2 EXTRACT] Attempting partial extraction...
[OP2 EXTRACT] ✓ Success (lenient mode): 125.1234 Hz
[OP2 EXTRACT] Note: pyNastran reported FATAL but data is valid!
```
---

## Analyzing Pruning Patterns

### View Summary

```python
import json

# Load pruning summary
with open('studies/my_study/2_results/pruning_summary.json') as f:
    summary = json.load(f)

print(f"Total pruned: {summary['total_pruned_trials']}")
print(f"False positives: {summary['op2_extraction_analysis']['likely_false_positives']}")
print("\nRecommendations:")
for rec in summary['recommendations']:
    print(f"  - {rec}")
```

### Find Specific Failures

```python
import json

# Load detailed history
with open('studies/my_study/2_results/pruning_history.json') as f:
    history = json.load(f)

# Find all OP2 false positives
false_positives = [
    event for event in history
    if event['pruning_cause'] == 'op2_extraction_failure'
    and event['details']['is_pynastran_fatal_flag']
    and not event['details']['f06_has_fatal_errors']
]

print(f"Found {len(false_positives)} false positives:")
for fp in false_positives:
    params = fp['design_variables']
    print(f"  Trial #{fp['trial_number']}: {params}")
```

### Parameter Analysis

```python
# Find which parameter ranges cause failures
validation_failures = [e for e in history if e['pruning_cause'] == 'validation_failure']

diameters = [e['design_variables']['inner_diameter'] for e in validation_failures]
thicknesses = [e['design_variables']['plate_thickness'] for e in validation_failures]

print("Validation failures occur at:")
print(f"  Diameter range: {min(diameters):.1f} - {max(diameters):.1f} mm")
print(f"  Thickness range: {min(thicknesses):.1f} - {max(thicknesses):.1f} mm")
```

---

## Expected Impact

### Before Robust Extraction
- **Pruning rate**: 18-20%
- **False positives**: ~6-10 per 50 trials
- **Wasted time**: ~5 minutes per study

### After Robust Extraction
- **Pruning rate**: <2% (only genuine failures)
- **False positives**: 0
- **Time saved**: ~4-5 minutes per study
- **Better optimization**: More valid trials = better convergence

---

## Testing

Test the robust extractor on a known "failed" OP2 file:

```bash
python -c "
from pathlib import Path
from optimization_engine.op2_extractor import robust_extract_first_frequency

# Use an OP2 file that pyNastran rejects
op2_file = Path('studies/circular_plate_protocol10_v2_2_test/1_setup/model/circular_plate_sim1-solution_normal_modes.op2')
f06_file = op2_file.with_suffix('.f06')

try:
    freq = robust_extract_first_frequency(op2_file, f06_file=f06_file, verbose=True)
    print(f'\n✓ Successfully extracted: {freq:.6f} Hz')
except Exception as e:
    print(f'\n✗ Extraction failed: {e}')
"
```

Expected output:
```
[OP2 EXTRACT] Attempting standard read: circular_plate_sim1-solution_normal_modes.op2
[OP2 EXTRACT] ✗ Standard read failed: There was a Nastran FATAL Error
[OP2 EXTRACT] Detected pyNastran FATAL flag issue
[OP2 EXTRACT] Attempting partial extraction...
[OP2 EXTRACT] ✓ Success (lenient mode): 115.0442 Hz
[OP2 EXTRACT] Note: pyNastran reported FATAL but data is valid!

✓ Successfully extracted: 115.044200 Hz
```

---

## Summary

| Feature | Description | File |
|---------|-------------|------|
| **Pruning Logger** | Comprehensive failure tracking | [pruning_logger.py](../optimization_engine/pruning_logger.py) |
| **Robust OP2 Extractor** | Handles pyNastran issues | [op2_extractor.py](../optimization_engine/op2_extractor.py) |
| **Pruning History** | Detailed JSON log | `2_results/pruning_history.json` |
| **Pruning Summary** | Analysis and recommendations | `2_results/pruning_summary.json` |

**Status**: ✅ Ready for production use

**Benefits**:
- Zero false-positive failures
- Detailed diagnostics for genuine failures
- Pattern analysis for validation improvements
- ~5 minutes saved per 50-trial study
81
docs/archive/historical/QUICK_CONFIG_REFERENCE.md
Normal file
@@ -0,0 +1,81 @@

# Quick Configuration Reference

## Change NX Version (e.g., when NX 2506 is released)

**Edit ONE file**: [`config.py`](../config.py)

```python
# Lines 14-15
NX_VERSION = "2506"  # ← Change this
NX_INSTALLATION_DIR = Path(f"C:/Program Files/Siemens/NX{NX_VERSION}")
```

**That's it!** All modules automatically use the new paths.

---

## Change Python Environment

**Edit ONE file**: [`config.py`](../config.py)

```python
# Line 49
PYTHON_ENV_NAME = "my_new_env"  # ← Change this
```

---

## Verify Configuration

```bash
python config.py
```

The output shows all paths and validates that they exist.

---

## Using Config in Your Code

```python
from config import (
    NX_RUN_JOURNAL,          # Path to run_journal.exe
    NX_MATERIAL_LIBRARY,     # Path to material library XML
    PYTHON_ENV_NAME,         # Current environment name
    get_nx_journal_command,  # Helper function
)

# Generate journal command
cmd = get_nx_journal_command(
    journal_script,
    arg1,
    arg2
)
```

---

## What Changed?

**OLD** (hardcoded paths in multiple files):
- `optimization_engine/nx_updater.py`: Line 66
- `dashboard/api/app.py`: Line 598
- `README.md`: Line 92
- `docs/NXOPEN_INTELLISENSE_SETUP.md`: Line 269
- ...and more

**NEW** (all use `config.py`):
- Edit `config.py` once
- All files are automatically updated

---

## Files Using Config

- ✅ `optimization_engine/nx_updater.py`
- ✅ `dashboard/api/app.py`
- Future: All NX-related modules will use config

---

**See also**: [SYSTEM_CONFIGURATION.md](SYSTEM_CONFIGURATION.md) for full documentation
414
docs/archive/historical/STUDY_CONTINUATION_STANDARD.md
Normal file
@@ -0,0 +1,414 @@

# Study Continuation - Atomizer Standard Feature

**Date**: November 20, 2025
**Status**: ✅ Implemented as Standard Feature

---

## Overview

Study continuation is now a **standardized Atomizer feature** for dashboard integration. It provides a clean API for continuing existing optimization studies with additional trials.

Previously, continuation was improvised on demand. Now it is a first-class feature alongside "Start New Optimization".

---

## Module

[optimization_engine/study_continuation.py](../optimization_engine/study_continuation.py)

---

## API

### Main Function: `continue_study()`

```python
from pathlib import Path

from optimization_engine.study_continuation import continue_study

results = continue_study(
    study_dir=Path("studies/my_study"),
    additional_trials=50,
    objective_function=my_objective,
    design_variables={'param1': (0, 10), 'param2': (0, 100)},
    target_value=115.0,
    tolerance=0.1,
    verbose=True
)
```

**Returns**:
```python
{
    'study': optuna.Study,      # The study object
    'total_trials': 100,        # Total after continuation
    'successful_trials': 95,    # Completed trials
    'pruned_trials': 5,         # Failed trials
    'best_value': 0.05,         # Best objective value
    'best_params': {...},       # Best parameters
    'target_achieved': True     # If target specified
}
```

### Utility Functions

#### `can_continue_study()`

Check if a study is ready for continuation:

```python
from pathlib import Path

from optimization_engine.study_continuation import can_continue_study

can_continue, message = can_continue_study(Path("studies/my_study"))

if can_continue:
    print(f"Ready: {message}")
    # message: "Study 'my_study' ready (current trials: 50)"
else:
    print(f"Cannot continue: {message}")
    # message: "No study.db found. Run initial optimization first."
```

#### `get_study_status()`

Get current study information:

```python
from pathlib import Path

from optimization_engine.study_continuation import get_study_status

status = get_study_status(Path("studies/my_study"))

if status:
    print(f"Study: {status['study_name']}")
    print(f"Trials: {status['total_trials']}")
    print(f"Success rate: {status['successful_trials']/status['total_trials']*100:.1f}%")
    print(f"Best: {status['best_value']}")
else:
    print("Study not found or invalid")
```

**Returns**:
```python
{
    'study_name': 'my_study',
    'total_trials': 50,
    'successful_trials': 47,
    'pruned_trials': 3,
    'pruning_rate': 0.06,
    'best_value': 0.42,
    'best_params': {'param1': 5.2, 'param2': 78.3}
}
```
```
|
||||
|
||||
---
|
||||
|
||||
## Dashboard Integration
|
||||
|
||||
### UI Workflow
|
||||
|
||||
When user selects a study in the dashboard:
|
||||
|
||||
```
1. User clicks on a study → Dashboard calls get_study_status()

2. Dashboard shows the study info card:
   ┌──────────────────────────────────────┐
   │ Study: circular_plate_test           │
   │ Current Trials: 50                   │
   │ Success Rate: 94%                    │
   │ Best Result: 0.42 Hz error           │
   │                                      │
   │ [Continue Study]  [View Results]     │
   └──────────────────────────────────────┘

3. User clicks "Continue Study" → Shows form:
   ┌──────────────────────────────────────┐
   │ Continue Optimization                │
   │                                      │
   │ Additional Trials: [50]              │
   │ Target Value (optional): [115.0]     │
   │ Tolerance (optional): [0.1]          │
   │                                      │
   │ [Cancel]  [Start]                    │
   └──────────────────────────────────────┘

4. User clicks "Start" → Dashboard calls continue_study()

5. Progress shown in real time (like the initial optimization)
```

### Example Dashboard Code

```python
from pathlib import Path

from optimization_engine.study_continuation import (
    get_study_status,
    can_continue_study,
    continue_study
)


def show_study_panel(study_dir: Path):
    """Display the study panel with a continuation option."""

    # Get current status
    status = get_study_status(study_dir)

    if not status:
        print("Study not found or incomplete")
        return

    # Show study info
    print(f"Study: {status['study_name']}")
    print(f"Current Trials: {status['total_trials']}")
    print(f"Best Result: {status['best_value']:.4f}")

    # Check whether continuation is possible
    can_continue, message = can_continue_study(study_dir)

    if can_continue:
        # Enable the "Continue" button
        print("✓ Ready to continue")
    else:
        # Disable the "Continue" button and show the reason
        print(f"✗ Cannot continue: {message}")


def handle_continue_button_click(study_dir: Path, additional_trials: int):
    """Handle the user clicking the 'Continue Study' button."""

    # Load the objective function for this study
    # (the dashboard needs to reconstruct this from the study config)
    from studies.my_study.run_optimization import objective

    # Continue the study
    results = continue_study(
        study_dir=study_dir,
        additional_trials=additional_trials,
        objective_function=objective,
        verbose=True  # Stream output to the dashboard
    )

    # Show a completion notification (notify_user is dashboard-specific)
    if results.get('target_achieved'):
        notify_user(f"Target achieved! Best: {results['best_value']:.4f}")
    else:
        notify_user(f"Completed {additional_trials} trials. Best: {results['best_value']:.4f}")
```

---

## Comparison: Old vs New

### Before (Improvised)

Each study needed a custom `continue_optimization.py`:

```
studies/my_study/
├── run_optimization.py        # Standard (from protocol)
├── continue_optimization.py   # Improvised (custom for each study)
└── 2_results/
    └── study.db
```

**Problems**:
- Not standardized across studies
- Manual creation required
- No dashboard integration possible
- Inconsistent behavior

### After (Standardized)

All studies use the same continuation API:

```
studies/my_study/
├── run_optimization.py        # Standard (from protocol)
└── 2_results/
    └── study.db

# No continue_optimization.py needed!
# Just call continue_study() from anywhere
```

**Benefits**:
- ✅ Standardized behavior
- ✅ Dashboard-ready API
- ✅ Consistent across all studies
- ✅ No per-study custom code

---

## Usage Examples

### Example 1: Simple Continuation

```python
from pathlib import Path
from optimization_engine.study_continuation import continue_study
from studies.my_study.run_optimization import objective

# Continue with 50 more trials
results = continue_study(
    study_dir=Path("studies/my_study"),
    additional_trials=50,
    objective_function=objective
)

print(f"New best: {results['best_value']}")
```

### Example 2: With Target Checking

```python
# Continue until the target is met or 100 additional trials are exhausted
results = continue_study(
    study_dir=Path("studies/circular_plate_test"),
    additional_trials=100,
    objective_function=objective,
    target_value=115.0,
    tolerance=0.1
)

if results['target_achieved']:
    print(f"Success! Achieved in {results['total_trials']} total trials")
else:
    print(f"Target not reached. Best: {results['best_value']}")
```

### Example 3: Dashboard Batch Processing

```python
from pathlib import Path
from optimization_engine.study_continuation import get_study_status

# Find all studies that can be continued
studies_dir = Path("studies")

for study_dir in studies_dir.iterdir():
    if not study_dir.is_dir():
        continue

    status = get_study_status(study_dir)

    if status and status['pruning_rate'] > 0.10:
        print(f"⚠️ {status['study_name']}: High pruning rate ({status['pruning_rate'] * 100:.1f}%)")
        print(f"   Consider investigating before continuing")
    elif status:
        print(f"✓ {status['study_name']}: {status['total_trials']} trials, best={status['best_value']:.4f}")
```

---

## File Structure

### Standard Study Directory

```
studies/my_study/
├── 1_setup/
│   ├── model/                     # FEA model files
│   ├── workflow_config.json       # Contains study_name
│   └── optimization_config.json
├── 2_results/
│   ├── study.db                   # Optuna database (required for continuation)
│   ├── optimization_history_incremental.json
│   └── intelligent_optimizer/
└── 3_reports/
    └── OPTIMIZATION_REPORT.md
```

**Required for Continuation**:
- `1_setup/workflow_config.json` (contains the study_name)
- `2_results/study.db` (Optuna database with trial data)

---

## Error Handling

The API provides clear error messages:

```python
from pathlib import Path
from optimization_engine.study_continuation import can_continue_study

# Study doesn't exist
can_continue_study(Path("studies/nonexistent"))
# Returns: (False, "No workflow_config.json found in studies/nonexistent/1_setup")

# Study exists but has not been run yet
can_continue_study(Path("studies/new_study"))
# Returns: (False, "No study.db found. Run initial optimization first.")

# Study database corrupted
can_continue_study(Path("studies/bad_study"))
# Returns: (False, "Study 'bad_study' not found in database")

# Study has no trials
can_continue_study(Path("studies/empty_study"))
# Returns: (False, "Study exists but has no trials yet")
```

---

## Dashboard Buttons

### Two Standard Actions

Every study in the dashboard should have:

1. **"Start New Optimization"** → Calls `run_optimization.py`
   - Requires: Study setup complete
   - Creates: Fresh study database
   - Use when: Starting from scratch

2. **"Continue Study"** → Calls `continue_study()`
   - Requires: Existing study.db with trials
   - Preserves: All existing trial data
   - Use when: Adding more iterations

Both are now **standardized Atomizer features**.
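The readiness rule behind the "Continue Study" button can be sketched in a few lines. This is a hedged, self-contained illustration of the documented requirements (a `1_setup/workflow_config.json` plus a `2_results/study.db`), not the actual `can_continue_study()` implementation; the function name `study_ready_for_continuation` is hypothetical.

```python
from pathlib import Path


def study_ready_for_continuation(study_dir: Path) -> tuple:
    """Mirror the documented readiness rules for the 'Continue Study' button.

    Hypothetical sketch: the real check lives in
    optimization_engine/study_continuation.py.
    """
    config = study_dir / "1_setup" / "workflow_config.json"
    db = study_dir / "2_results" / "study.db"
    if not config.exists():
        return False, f"No workflow_config.json found in {study_dir}/1_setup"
    if not db.exists():
        return False, "No study.db found. Run initial optimization first."
    return True, f"Study '{study_dir.name}' ready"
```

A dashboard can gate both buttons on this check: enable "Start New Optimization" whenever the setup exists, and "Continue Study" only when the check returns `True`.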
---

## Testing

Test the continuation API:

```bash
# Test the status check
python -c "
from pathlib import Path
from optimization_engine.study_continuation import get_study_status

status = get_study_status(Path('studies/circular_plate_protocol10_v2_1_test'))
if status:
    print(f\"Study: {status['study_name']}\")
    print(f\"Trials: {status['total_trials']}\")
    print(f\"Best: {status['best_value']}\")
"

# Test the continuation check
python -c "
from pathlib import Path
from optimization_engine.study_continuation import can_continue_study

can_continue, msg = can_continue_study(Path('studies/circular_plate_protocol10_v2_1_test'))
print(f\"Can continue: {can_continue}\")
print(f\"Message: {msg}\")
"
```

---

## Summary

| Feature | Before | After |
|---------|--------|-------|
| Implementation | Improvised per study | Standardized module |
| Dashboard integration | Not possible | Full API support |
| Consistency | Varies by study | Uniform behavior |
| Error handling | Manual | Built-in with messages |
| Study status | Manual queries | `get_study_status()` |
| Continuation check | Manual | `can_continue_study()` |

**Status**: ✅ Ready for dashboard integration

**Module**: [optimization_engine/study_continuation.py](../optimization_engine/study_continuation.py)

518
docs/archive/historical/STUDY_ORGANIZATION.md
Normal file
@@ -0,0 +1,518 @@

# Study Organization Guide

**Date**: 2025-11-17
**Purpose**: Document the recommended study directory structure and organization principles

---

## Current Organization Analysis

### Study Directory: `studies/simple_beam_optimization/`

**Current Structure**:
```
studies/simple_beam_optimization/
├── model/                            # Base CAD/FEM model (reference)
│   ├── Beam.prt
│   ├── Beam_sim1.sim
│   ├── beam_sim1-solution_1.op2
│   ├── beam_sim1-solution_1.f06
│   └── comprehensive_results_analysis.json
│
├── substudies/                       # All optimization runs
│   ├── benchmarking/
│   │   ├── benchmark_results.json
│   │   └── BENCHMARK_REPORT.md
│   ├── initial_exploration/
│   │   ├── config.json
│   │   └── optimization_config.json
│   ├── validation_3trials/
│   │   ├── trial_000/
│   │   ├── trial_001/
│   │   ├── trial_002/
│   │   ├── best_trial.json
│   │   └── optuna_study.pkl
│   ├── validation_4d_3trials/
│   │   └── [similar structure]
│   └── full_optimization_50trials/
│       ├── trial_000/
│       ├── ... trial_049/
│       ├── plots/                    # NEW: Auto-generated plots
│       ├── history.json
│       ├── best_trial.json
│       └── optuna_study.pkl
│
├── README.md                         # Study overview
├── study_metadata.json               # Study metadata
├── beam_optimization_config.json     # Main configuration
├── baseline_validation.json          # Baseline results
├── COMPREHENSIVE_BENCHMARK_RESULTS.md
├── OPTIMIZATION_RESULTS_50TRIALS.md
└── run_optimization.py               # Study-specific runner
```

|
||||
|
||||
## Assessment
|
||||
|
||||
### ✅ What's Working Well
|
||||
|
||||
1. **Substudy Isolation**: Each optimization run (substudy) is self-contained with its own trial directories, making it easy to compare different optimization strategies.
|
||||
|
||||
2. **Centralized Model**: The `model/` directory serves as a reference CAD/FEM model, which all substudies copy from.
|
||||
|
||||
3. **Configuration at Study Level**: `beam_optimization_config.json` provides the main configuration that substudies inherit from.
|
||||
|
||||
4. **Study-Level Documentation**: `README.md` and results markdown files at the study level provide high-level overviews.
|
||||
|
||||
5. **Clear Hierarchy**:
|
||||
- Study = Overall project (e.g., "optimize this beam")
|
||||
- Substudy = Specific optimization run (e.g., "50 trials with TPE sampler")
|
||||
- Trial = Individual design evaluation
|
||||
|
||||
### ⚠️ Issues Found
|
||||
|
||||
1. **Documentation Scattered**: Results documentation is at the study level (`OPTIMIZATION_RESULTS_50TRIALS.md`) but describes a specific substudy (`full_optimization_50trials`).
|
||||
|
||||
2. **Benchmarking Placement**: `substudies/benchmarking/` is not really a "substudy" - it's a validation step that should happen before optimization.
|
||||
|
||||
3. **Missing Substudy Metadata**: Some substudies lack their own README or summary files to explain what they tested.
|
||||
|
||||
4. **Inconsistent Naming**: `validation_3trials` vs `validation_4d_3trials` - unclear what distinguishes them without investigation.
|
||||
|
||||
5. **Study Metadata Incomplete**: `study_metadata.json` lists only "initial_exploration" substudy, but there are 5 substudies present.
|
||||
|
||||
---
|
||||
|
||||
## Recommended Organization

### Proposed Structure

```
studies/simple_beam_optimization/
│
├── 1_setup/                          # NEW: Pre-optimization setup
│   ├── model/                        # Reference CAD/FEM model
│   │   ├── Beam.prt
│   │   ├── Beam_sim1.sim
│   │   └── ...
│   ├── benchmarking/                 # Baseline validation
│   │   ├── benchmark_results.json
│   │   └── BENCHMARK_REPORT.md
│   └── baseline_validation.json
│
├── 2_substudies/                     # Optimization runs
│   ├── 01_initial_exploration/
│   │   ├── README.md                 # What was tested, why
│   │   ├── config.json
│   │   ├── trial_000/
│   │   ├── ...
│   │   └── results_summary.md        # Substudy-specific results
│   ├── 02_validation_3d_3trials/
│   │   └── [similar structure]
│   ├── 03_validation_4d_3trials/
│   │   └── [similar structure]
│   └── 04_full_optimization_50trials/
│       ├── README.md
│       ├── trial_000/
│       ├── ... trial_049/
│       ├── plots/
│       ├── history.json
│       ├── best_trial.json
│       ├── OPTIMIZATION_RESULTS.md   # Moved from study level
│       └── cleanup_log.json
│
├── 3_reports/                        # NEW: Study-level analysis
│   ├── COMPREHENSIVE_BENCHMARK_RESULTS.md
│   ├── COMPARISON_ALL_SUBSTUDIES.md  # NEW: Compare substudies
│   └── final_recommendations.md      # NEW: Engineering insights
│
├── README.md                         # Study overview
├── study_metadata.json               # Updated with all substudies
├── beam_optimization_config.json     # Main configuration
└── run_optimization.py               # Study-specific runner
```

### Key Changes

1. **Numbered Directories**: Indicate workflow sequence (setup → substudies → reports)

2. **Numbered Substudies**: Chronological naming (01_, 02_, 03_) makes progression clear

3. **Moved Benchmarking**: From `substudies/` to `1_setup/` (it's pre-optimization)

4. **Substudy-Level Documentation**: Each substudy has:
   - `README.md` - What was tested, parameters, hypothesis
   - `OPTIMIZATION_RESULTS.md` - Results and analysis

5. **Centralized Reports**: All comparative analysis and final recommendations in `3_reports/`

6. **Updated Metadata**: `study_metadata.json` tracks all substudies with status

---

## Comparison: Current vs Proposed

| Aspect | Current | Proposed | Benefit |
|--------|---------|----------|---------|
| **Substudy naming** | Descriptive only | Numbered + descriptive | Chronological clarity |
| **Documentation** | Mixed levels | Clear hierarchy | Easier to find results |
| **Benchmarking** | In substudies/ | In 1_setup/ | Reflects true purpose |
| **Model location** | Study root | 1_setup/model/ | Grouped with setup |
| **Reports** | Study root | 3_reports/ | Centralized analysis |
| **Substudy docs** | Minimal | README + results | Self-documenting |
| **Metadata** | Incomplete | All substudies tracked | Accurate status |

---

## Migration Guide

### Option 1: Reorganize Existing Study (Recommended)

**Steps**:
1. Create the new directory structure
2. Move files to their new locations
3. Update `study_metadata.json`
4. Update file references in documentation
5. Create missing substudy READMEs

**Commands**:
```bash
# Create new structure
mkdir -p studies/simple_beam_optimization/1_setup/model
mkdir -p studies/simple_beam_optimization/1_setup/benchmarking
mkdir -p studies/simple_beam_optimization/2_substudies
mkdir -p studies/simple_beam_optimization/3_reports

# Move model
mv studies/simple_beam_optimization/model/* studies/simple_beam_optimization/1_setup/model/

# Move benchmarking
mv studies/simple_beam_optimization/substudies/benchmarking/* studies/simple_beam_optimization/1_setup/benchmarking/

# Rename and move substudies
mv studies/simple_beam_optimization/substudies/initial_exploration studies/simple_beam_optimization/2_substudies/01_initial_exploration
mv studies/simple_beam_optimization/substudies/validation_3trials studies/simple_beam_optimization/2_substudies/02_validation_3d_3trials
mv studies/simple_beam_optimization/substudies/validation_4d_3trials studies/simple_beam_optimization/2_substudies/03_validation_4d_3trials
mv studies/simple_beam_optimization/substudies/full_optimization_50trials studies/simple_beam_optimization/2_substudies/04_full_optimization_50trials

# Move reports
mv studies/simple_beam_optimization/COMPREHENSIVE_BENCHMARK_RESULTS.md studies/simple_beam_optimization/3_reports/
mv studies/simple_beam_optimization/OPTIMIZATION_RESULTS_50TRIALS.md studies/simple_beam_optimization/2_substudies/04_full_optimization_50trials/

# Clean up
rm -rf studies/simple_beam_optimization/substudies/
rm -rf studies/simple_beam_optimization/model/
```

### Option 2: Apply to Future Studies Only

Keep the existing study as-is and apply the new organization to future studies.

**When to Use**:
- The current study is complete and well understood
- Reorganization would break existing scripts/references
- You want to test the new organization before migrating

---

## Best Practices

### Study-Level Files

**Required**:
- `README.md` - High-level overview, purpose, design variables, objectives
- `study_metadata.json` - Metadata, status, substudy registry
- `beam_optimization_config.json` - Main configuration (inheritable)
- `run_optimization.py` - Study-specific runner script

**Optional**:
- `CHANGELOG.md` - Track configuration changes across substudies
- `LESSONS_LEARNED.md` - Engineering insights, dead ends avoided

### Substudy-Level Files

**Required** (Generated by Runner):
- `trial_XXX/` - Trial directories with CAD/FEM files and results.json
- `history.json` - Full optimization history
- `best_trial.json` - Best trial metadata
- `optuna_study.pkl` - Optuna study object
- `config.json` - Substudy-specific configuration

**Required** (User-Created):
- `README.md` - Purpose, hypothesis, parameter choices

**Optional** (Auto-Generated):
- `plots/` - Visualization plots (if post_processing.generate_plots = true)
- `cleanup_log.json` - Model cleanup statistics (if post_processing.cleanup_models = true)

**Optional** (User-Created):
- `OPTIMIZATION_RESULTS.md` - Detailed analysis and interpretation

### Trial-Level Files

**Always Kept** (Small, Critical):
- `results.json` - Extracted objectives, constraints, design variables

**Kept for Top-N Trials** (Large, Useful):
- `Beam.prt` - CAD model
- `Beam_sim1.sim` - Simulation setup
- `beam_sim1-solution_1.op2` - FEA results (binary)
- `beam_sim1-solution_1.f06` - FEA results (text)

**Cleaned for Poor Trials** (Large, Less Useful):
- All `.prt`, `.sim`, `.fem`, `.op2`, `.f06` files deleted
- Only `results.json` preserved
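The keep/clean policy above can be sketched as a small helper. This is an illustrative sketch only: the file extensions come from the list above, while the function name `cleanup_trial` is an assumption, not the runner's actual API.

```python
from pathlib import Path

# Large CAD/FEM artifacts deleted for poor trials (per the policy above)
LARGE_EXTENSIONS = {".prt", ".sim", ".fem", ".op2", ".f06"}


def cleanup_trial(trial_dir: Path, keep_models: bool) -> list:
    """Delete large model files unless the trial is a top-N keeper.

    results.json is never touched, so every trial stays analyzable.
    """
    removed = []
    for f in trial_dir.iterdir():
        if f.is_file() and not keep_models and f.suffix.lower() in LARGE_EXTENSIONS:
            f.unlink()
            removed.append(f.name)
    return sorted(removed)
```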
---

## Naming Conventions

### Substudy Names

**Format**: `NN_descriptive_name`

**Examples**:
- `01_initial_exploration` - First exploration of the design space
- `02_validation_3d_3trials` - Validate that 3 design variables work
- `03_validation_4d_3trials` - Validate that 4 design variables work
- `04_full_optimization_50trials` - Full optimization run
- `05_refined_search_30trials` - Refined search in a promising region
- `06_sensitivity_analysis` - Parameter sensitivity study

**Guidelines**:
- Start with a two-digit number (01, 02, ..., 99)
- Use underscores for spaces
- Be concise but descriptive
- Include the trial count if relevant
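The guidelines above can be enforced programmatically. Here is a minimal sketch that proposes the next `NN_descriptive_name` from the existing directory names; the helper `next_substudy_name` is hypothetical, not part of Atomizer.

```python
import re

# Two-digit prefix, underscore, lowercase descriptive tail
SUBSTUDY_NAME = re.compile(r"^\d{2}_[a-z0-9_]+$")


def next_substudy_name(existing, description):
    """Propose the next NN_descriptive_name in the sequence."""
    numbers = [int(name[:2]) for name in existing if SUBSTUDY_NAME.match(name)]
    return f"{max(numbers, default=0) + 1:02d}_{description}"


print(next_substudy_name(
    ["01_initial_exploration", "04_full_optimization_50trials"],
    "refined_search_30trials",
))  # → 05_refined_search_30trials
```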
### Study Names

**Format**: `descriptive_name` (no numbering)

**Examples**:
- `simple_beam_optimization` - Optimize a simple beam
- `bracket_displacement_maximizing` - Maximize bracket displacement
- `engine_mount_fatigue` - Engine mount fatigue optimization

**Guidelines**:
- Use underscores for spaces
- Include the part name and optimization goal
- Avoid dates (use substudy numbering for chronology)

---

## Metadata Format

### study_metadata.json

**Recommended Format**:
```json
{
  "study_name": "simple_beam_optimization",
  "description": "Minimize displacement and weight of beam with existing loadcases",
  "created": "2025-11-17T10:24:09.613688",
  "status": "active",
  "design_variables": ["beam_half_core_thickness", "beam_face_thickness", "holes_diameter", "hole_count"],
  "objectives": ["minimize_displacement", "minimize_stress", "minimize_mass"],
  "constraints": ["displacement_limit"],
  "substudies": [
    {
      "name": "01_initial_exploration",
      "created": "2025-11-17T10:30:00",
      "status": "completed",
      "trials": 10,
      "purpose": "Explore design space boundaries"
    },
    {
      "name": "02_validation_3d_3trials",
      "created": "2025-11-17T11:00:00",
      "status": "completed",
      "trials": 3,
      "purpose": "Validate 3D parameter updates (without hole_count)"
    },
    {
      "name": "03_validation_4d_3trials",
      "created": "2025-11-17T12:00:00",
      "status": "completed",
      "trials": 3,
      "purpose": "Validate 4D parameter updates (with hole_count)"
    },
    {
      "name": "04_full_optimization_50trials",
      "created": "2025-11-17T13:00:00",
      "status": "completed",
      "trials": 50,
      "purpose": "Full optimization with all 4 design variables"
    }
  ],
  "last_modified": "2025-11-17T15:30:00"
}
```
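Keeping the substudy registry complete is easy to automate. A minimal sketch, assuming the metadata file follows the recommended format above; `register_substudy` is a hypothetical helper, not an existing Atomizer function.

```python
import json
from datetime import datetime
from pathlib import Path


def register_substudy(metadata_file: Path, name: str, trials: int, purpose: str) -> None:
    """Append a substudy entry to study_metadata.json and refresh last_modified."""
    meta = json.loads(metadata_file.read_text())
    now = datetime.now().isoformat(timespec="seconds")
    meta.setdefault("substudies", []).append({
        "name": name,
        "created": now,
        "status": "planned",
        "trials": trials,
        "purpose": purpose,
    })
    meta["last_modified"] = now
    metadata_file.write_text(json.dumps(meta, indent=2))
```

Calling this right after creating a substudy directory keeps `study_metadata.json` in sync with what is actually on disk.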
### Substudy README.md Template

```markdown
# [Substudy Name]

**Date**: YYYY-MM-DD
**Status**: [planned | running | completed | failed]
**Trials**: N

## Purpose

[Why this substudy was created, what hypothesis is being tested]

## Configuration Changes

[Compared to the previous substudy or the baseline config, what changed?]

- Design variable bounds: [if changed]
- Objective weights: [if changed]
- Sampler settings: [if changed]

## Expected Outcome

[What do you hope to learn or achieve?]

## Actual Results

[Fill in after completion]

- Best objective: X.XX
- Feasible designs: N / N_total
- Key findings: [summary]

## Next Steps

[What substudy should follow based on these results?]
```

---

## Workflow Integration

### Creating a New Substudy

**Steps**:
1. Determine the substudy number (next in sequence)
2. Create the substudy README.md with purpose and changes
3. Update the configuration if needed
4. Run the optimization:
   ```bash
   python run_optimization.py --substudy-name "05_refined_search_30trials"
   ```
5. After completion:
   - Review results
   - Update the substudy README.md with findings
   - Create OPTIMIZATION_RESULTS.md if significant
   - Update study_metadata.json

### Comparing Substudies

**Create a Comparison Report**:
```markdown
# Substudy Comparison

| Substudy | Trials | Best Obj | Feasible | Key Finding |
|----------|--------|----------|----------|-------------|
| 01_initial_exploration | 10 | 1250.3 | 0/10 | Design space too large |
| 02_validation_3d_3trials | 3 | 1180.5 | 0/3 | 3D updates work |
| 03_validation_4d_3trials | 3 | 1120.2 | 0/3 | hole_count updates work |
| 04_full_optimization_50trials | 50 | 842.6 | 0/50 | No feasible designs found |

**Conclusion**: The constraint appears infeasible. Recommend relaxing the displacement limit.
```
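A comparison report like the one above can be generated rather than hand-written. A sketch, assuming per-substudy status dicts with `name`, `trials`, `best`, `feasible`, and `finding` keys (all hypothetical field names):

```python
def comparison_table(rows):
    """Render the substudy comparison table as markdown."""
    lines = [
        "| Substudy | Trials | Best Obj | Feasible | Key Finding |",
        "|----------|--------|----------|----------|-------------|",
    ]
    for r in rows:
        lines.append(
            f"| {r['name']} | {r['trials']} | {r['best']} | "
            f"{r['feasible']}/{r['trials']} | {r['finding']} |"
        )
    return "\n".join(lines)


print(comparison_table([
    {"name": "01_initial_exploration", "trials": 10,
     "best": 1250.3, "feasible": 0, "finding": "Design space too large"},
]))
```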
---

## Benefits of Proposed Organization

### For Users

1. **Clarity**: Numbered substudies show chronological progression
2. **Self-Documenting**: Each substudy explains its purpose
3. **Easy Comparison**: All results in one place (3_reports/)
4. **Less Clutter**: The study root holds only essential files

### For Developers

1. **Predictable Structure**: Scripts can rely on consistent paths
2. **Automated Discovery**: Easy to find all substudies programmatically
3. **Version Control**: Clear history through numbered substudies
4. **Scalability**: Works for 5 substudies or 50
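"Automated discovery" above can be as simple as a sorted scan over the numbered directories. A sketch under the proposed layout (the helper name is an assumption):

```python
from pathlib import Path


def discover_substudies(study_dir: Path) -> list:
    """Return numbered substudy directories in chronological order."""
    root = study_dir / "2_substudies"
    if not root.is_dir():
        return []
    # The NN_ prefix makes lexicographic order equal chronological order
    return sorted(
        (p for p in root.iterdir() if p.is_dir() and p.name[:2].isdigit()),
        key=lambda p: p.name,
    )
```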
### For Collaboration

1. **Onboarding**: New team members can understand study progression quickly
2. **Documentation**: Substudy READMEs explain the decisions made
3. **Reproducibility**: Clear configuration history
4. **Communication**: Easy to reference specific substudies in discussions

---

## FAQ

### Q: Should I reorganize my existing study?

**A**: Only if:
- The study is still active (more substudies planned)
- The current organization is causing confusion
- You have time to update documentation references

Otherwise, apply the new organization to future studies only.

### Q: What if my substudy doesn't have a fixed trial count?

**A**: Use a descriptive name instead:
- `05_refined_search_until_feasible`
- `06_sensitivity_sweep`
- `07_validation_run`

### Q: Can I delete old substudies?

**A**: Generally no. Keep them for:
- Historical record
- Lessons learned
- Reproducibility

If disk space is critical:
- Use model cleanup to delete CAD/FEM files
- Archive old substudies to external storage
- Keep metadata and results.json files

### Q: Should benchmarking be a substudy?

**A**: No. Benchmarking validates the baseline model before optimization. It belongs in `1_setup/benchmarking/`.

### Q: How do I handle multi-stage optimizations?

**A**: Create separate substudies:
- `05_stage1_meet_constraint_20trials`
- `06_stage2_minimize_mass_30trials`

Document the relationship in the substudy READMEs.

---

## Summary

**Current Organization**: Functional, but with room for improvement
- ✅ Substudy isolation works well
- ⚠️ Documentation scattered across levels
- ⚠️ Chronology unclear from names alone

**Proposed Organization**: Clearer hierarchy and progression
- 📁 `1_setup/` - Pre-optimization (model, benchmarking)
- 📁 `2_substudies/` - Numbered optimization runs
- 📁 `3_reports/` - Comparative analysis

**Next Steps**:
1. Decide: Reorganize the existing study or apply to future studies only
2. If reorganizing: Follow the migration guide
3. Update `study_metadata.json` with all substudies
4. Create substudy README templates
5. Document lessons learned in the study-level docs

**Bottom Line**: The proposed organization makes it easier to understand what was done, why it was done, and what was learned.

690
docs/archive/historical/TODAY_PLAN_NOV18.md
Normal file
@@ -0,0 +1,690 @@

# Testing Plan - November 18, 2025
|
||||
**Goal**: Validate Hybrid Mode with real optimizations and verify centralized library system
|
||||
|
||||
## Overview
|
||||
|
||||
Today we're testing the newly refactored architecture with real-world optimizations. Focus is on:
|
||||
1. ✅ Hybrid Mode workflow (90% automation, no API key)
|
||||
2. ✅ Centralized extractor library (deduplication)
|
||||
3. ✅ Clean study folder structure
|
||||
4. ✅ Production readiness
|
||||
|
||||
**Estimated Time**: 2-3 hours total
|
||||
|
||||
---
|
||||
|
||||
## Test 1: Verify Beam Optimization (30 minutes)
|
||||
|
||||
### Goal
|
||||
Confirm existing beam optimization works with new architecture.
|
||||
|
||||
### What We're Testing
|
||||
- ✅ Parameter bounds parsing (20-30mm not 0.2-1.0mm!)
|
||||
- ✅ Workflow config auto-saved
|
||||
- ✅ Extractors added to core library
|
||||
- ✅ Study manifest created (not code pollution)
|
||||
- ✅ Clean study folder structure
|
||||
|
||||
### Steps
|
||||
|
||||
#### 1. Review Existing Workflow JSON
|
||||
```bash
|
||||
# Open in VSCode
|
||||
code studies/simple_beam_optimization/1_setup/workflow_config.json
|
||||
```
|
||||
|
||||
**Check**:
|
||||
- Design variable bounds are `[20, 30]` format (not `min`/`max`)
|
||||
- Extraction actions are clear (extract_mass, extract_displacement)
|
||||
- Objectives and constraints specified
|
||||
|
||||
#### 2. Run Short Optimization (5 trials)
```python
# Create: studies/simple_beam_optimization/test_today.py

from pathlib import Path

from optimization_engine.llm_optimization_runner import LLMOptimizationRunner

study_dir = Path("studies/simple_beam_optimization")
workflow_json = study_dir / "1_setup/workflow_config.json"
prt_file = study_dir / "1_setup/model/Beam.prt"
sim_file = study_dir / "1_setup/model/Beam_sim1.sim"
output_dir = study_dir / "2_substudies/test_nov18_verification"

print("=" * 80)
print("TEST 1: BEAM OPTIMIZATION VERIFICATION")
print("=" * 80)
print()
print(f"Workflow: {workflow_json}")
print(f"Model: {prt_file}")
print(f"Output: {output_dir}")
print()
print("Running 5 trials to verify system...")
print()

runner = LLMOptimizationRunner(
    llm_workflow_file=workflow_json,
    prt_file=prt_file,
    sim_file=sim_file,
    output_dir=output_dir,
    n_trials=5,  # just 5 for verification
)

study = runner.run()

print()
print("=" * 80)
print("TEST 1 RESULTS")
print("=" * 80)
print()
print("Best design found:")
print(f"  beam_half_core_thickness: {study.best_params['beam_half_core_thickness']:.2f} mm")
print(f"  beam_face_thickness: {study.best_params['beam_face_thickness']:.2f} mm")
print(f"  holes_diameter: {study.best_params['holes_diameter']:.2f} mm")
print(f"  hole_count: {study.best_params['hole_count']}")
print(f"  Objective value: {study.best_value:.6f}")
print()
print("[SUCCESS] Optimization completed!")
```

Run it:
```bash
python studies/simple_beam_optimization/test_today.py
```

#### 3. Verify Results

**Check output directory structure**:
```bash
# Should contain ONLY these files (no generated_extractors/!)
dir studies\simple_beam_optimization\2_substudies\test_nov18_verification
```

**Expected**:
```
test_nov18_verification/
├── extractors_manifest.json    ✓ References to core library
├── llm_workflow_config.json    ✓ What the LLM understood
├── optimization_results.json   ✓ Best design
├── optimization_history.json   ✓ All trials
└── study.db                    ✓ Optuna database
```

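The directory check can also be done programmatically. This small helper is illustrative only (not part of the toolkit); it flags any entry beyond the five files listed above:

```python
from pathlib import Path

# The five files a clean substudy folder should contain (from the tree above)
EXPECTED = {
    "extractors_manifest.json",
    "llm_workflow_config.json",
    "optimization_results.json",
    "optimization_history.json",
    "study.db",
}

def check_study_folder(folder: Path) -> set:
    """Return the set of unexpected entries; empty means no code pollution."""
    return {p.name for p in folder.iterdir()} - EXPECTED
```

Calling `check_study_folder(...)` on the substudy path should return an empty set; any `generated_extractors/` leftovers show up immediately.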
**Check parameter values are realistic**:
```python
# Create: verify_results.py
import json
from pathlib import Path

results_file = Path(
    "studies/simple_beam_optimization/2_substudies/test_nov18_verification/optimization_results.json"
)
with open(results_file) as f:
    results = json.load(f)

print("Parameter values:")
for param, value in results['best_params'].items():
    print(f"  {param}: {value}")

# VERIFY: thickness should be in the 20-30 range (not 0.2-1.0!)
thickness = results['best_params']['beam_half_core_thickness']
assert 20 <= thickness <= 30, f"FAIL: thickness {thickness} not in 20-30 range!"
print()
print("[OK] Parameter ranges are correct!")
```

**Check core library**:
```python
# Create: check_library.py
from optimization_engine.extractor_library import ExtractorLibrary

library = ExtractorLibrary()
print(library.get_library_summary())
```

Expected output:
```
================================================================================
ATOMIZER EXTRACTOR LIBRARY
================================================================================

Location: optimization_engine/extractors/
Total extractors: 3

Available Extractors:
--------------------------------------------------------------------------------

extract_mass
    Domain: result_extraction
    Description: Extract mass from FEA results
    File: extract_mass.py
    Signature: 2f58f241a96afb1f

extract_displacement
    Domain: result_extraction
    Description: Extract displacement from FEA results
    File: extract_displacement.py
    Signature: 381739e9cada3a48

extract_von_mises_stress
    Domain: result_extraction
    Description: Extract von Mises stress from FEA results
    File: extract_von_mises_stress.py
    Signature: 63d54f297f2403e4
```

### Success Criteria
- ✅ Optimization completes without errors
- ✅ Parameter values in the correct range (20-30 mm, not 0.2-1.0 mm)
- ✅ Study folder clean (only 5 files, no generated_extractors/)
- ✅ extractors_manifest.json exists
- ✅ Core library contains 3 extractors
- ✅ llm_workflow_config.json saved automatically

### If It Fails
- Check parameter bounds parsing in llm_optimization_runner.py:205-211
- Verify NX expression names match the workflow JSON
- Check that the OP2 file contains the expected results

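The actual parsing lives in llm_optimization_runner.py:205-211. As a rough sketch only of what correct handling looks like, a parser that accepts the `[20, 30]` array form and tolerates a legacy `min`/`max` dict would be:

```python
def parse_bounds(raw):
    """Accept [low, high] arrays, or a legacy {'min': ..., 'max': ...} dict."""
    if isinstance(raw, dict):
        low, high = raw["min"], raw["max"]
    else:
        low, high = raw
    low, high = float(low), float(high)
    if low >= high:
        raise ValueError(f"invalid bounds: {low} >= {high}")
    return low, high

print(parse_bounds([20, 30]))  # (20.0, 30.0): real mm values, not normalized 0.2-1.0
```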
---

## Test 2: Create New Optimization with Claude (1 hour)

### Goal
Use Claude Code to create a brand-new optimization from scratch, demonstrating the full Hybrid Mode workflow.

### Scenario
You have a cantilever plate that needs optimization:
- **Design variables**: plate_thickness (3-8 mm), support_width (20-50 mm)
- **Objective**: Minimize mass
- **Constraints**: max_displacement < 1.5 mm, max_stress < 150 MPa

### Steps

#### 1. Prepare Model (if you have one)
```
studies/
  cantilever_plate_optimization/
    1_setup/
      model/
        Plate.prt        # Your NX model
        Plate_sim1.sim   # Your FEM setup
```

**If you don't have a real model**, we'll simulate the workflow and use the beam model as a placeholder.

#### 2. Describe Optimization to Claude

Start a conversation with Claude Code (this tool!):

```
YOU: I want to optimize a cantilever plate design.

Design variables:
- plate_thickness: 3 to 8 mm
- support_width: 20 to 50 mm

Objective:
- Minimize mass

Constraints:
- Maximum displacement < 1.5 mm
- Maximum von Mises stress < 150 MPa

Can you help me create the workflow JSON for Hybrid Mode?
```

#### 3. Claude Creates Workflow JSON

Claude (me!) will generate something like:

```json
{
  "study_name": "cantilever_plate_optimization",
  "optimization_request": "Minimize mass while keeping displacement < 1.5mm and stress < 150 MPa",

  "design_variables": [
    {
      "parameter": "plate_thickness",
      "bounds": [3, 8],
      "description": "Plate thickness in mm"
    },
    {
      "parameter": "support_width",
      "bounds": [20, 50],
      "description": "Support width in mm"
    }
  ],

  "objectives": [
    {
      "name": "mass",
      "goal": "minimize",
      "weight": 1.0,
      "extraction": {
        "action": "extract_mass",
        "domain": "result_extraction",
        "params": {
          "result_type": "mass",
          "metric": "total"
        }
      }
    }
  ],

  "constraints": [
    {
      "name": "max_displacement_limit",
      "type": "less_than",
      "threshold": 1.5,
      "extraction": {
        "action": "extract_displacement",
        "domain": "result_extraction",
        "params": {
          "result_type": "displacement",
          "metric": "max"
        }
      }
    },
    {
      "name": "max_stress_limit",
      "type": "less_than",
      "threshold": 150,
      "extraction": {
        "action": "extract_von_mises_stress",
        "domain": "result_extraction",
        "params": {
          "result_type": "stress",
          "metric": "max"
        }
      }
    }
  ]
}
```

#### 4. Save and Review

```bash
# Save to:
# studies/cantilever_plate_optimization/1_setup/workflow_config.json

# Review in VSCode
code studies/cantilever_plate_optimization/1_setup/workflow_config.json
```

**Check**:
- Parameter names match your NX expressions EXACTLY
- Bounds are in the correct units (mm)
- Extraction actions make sense for your model

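The name check is the one that bites most often. A quick way to compare the workflow's parameter names against the expression names you see in NX (the expression set below is pasted by hand and purely illustrative; in practice, load the JSON from 1_setup/workflow_config.json instead of inlining it):

```python
import json

# Paste the expression names exactly as they appear in NX (Tools -> Expression)
nx_expressions = {"plate_thickness", "support_width"}

# Inlined here for illustration; normally read from workflow_config.json
workflow = json.loads("""
{
  "design_variables": [
    {"parameter": "plate_thickness", "bounds": [3, 8]},
    {"parameter": "support_width", "bounds": [20, 50]}
  ]
}
""")

workflow_params = {dv["parameter"] for dv in workflow["design_variables"]}
missing = workflow_params - nx_expressions
print("Missing from NX:", sorted(missing) if missing else "none")
```

Any name in `missing` will fail at runtime with an "Expression not found" error, so fix the JSON before launching trials.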
#### 5. Run Optimization

```python
# Create: studies/cantilever_plate_optimization/run_optimization.py

from pathlib import Path

from optimization_engine.llm_optimization_runner import LLMOptimizationRunner

study_dir = Path("studies/cantilever_plate_optimization")
workflow_json = study_dir / "1_setup/workflow_config.json"
prt_file = study_dir / "1_setup/model/Plate.prt"
sim_file = study_dir / "1_setup/model/Plate_sim1.sim"
output_dir = study_dir / "2_substudies/optimization_run_001"

print("=" * 80)
print("TEST 2: NEW CANTILEVER PLATE OPTIMIZATION")
print("=" * 80)
print()
print("This demonstrates the Hybrid Mode workflow:")
print("  1. You described the optimization in natural language")
print("  2. Claude created the workflow JSON")
print("  3. LLMOptimizationRunner does 90% of the automation")
print()
print("Running 10 trials...")
print()

runner = LLMOptimizationRunner(
    llm_workflow_file=workflow_json,
    prt_file=prt_file,
    sim_file=sim_file,
    output_dir=output_dir,
    n_trials=10,
)

study = runner.run()

print()
print("=" * 80)
print("TEST 2 RESULTS")
print("=" * 80)
print()
print("Best design found:")
for param, value in study.best_params.items():
    print(f"  {param}: {value:.2f}")
print(f"  Objective: {study.best_value:.6f}")
print()
print("[SUCCESS] New optimization from scratch!")
```

Run it:
```bash
python studies/cantilever_plate_optimization/run_optimization.py
```

#### 6. Verify Library Reuse

**Key test**: Did it reuse the extractors from Test 1?

```python
# Create: check_reuse.py
import json
from pathlib import Path

from optimization_engine.extractor_library import ExtractorLibrary

library = ExtractorLibrary()

# Check the manifest from Test 2
manifest_file = Path(
    "studies/cantilever_plate_optimization/2_substudies/optimization_run_001/extractors_manifest.json"
)
with open(manifest_file) as f:
    manifest = json.load(f)

print("Extractors used in Test 2:")
for sig in manifest['extractors_used']:
    info = library.get_extractor_metadata(sig)
    print(f"  {info['name']} (signature: {sig})")

print()
print("Core library status:")
print(f"  Total extractors: {len(library.catalog)}")
print()

# VERIFY: should still be 3 extractors (reused from Test 1!)
assert len(library.catalog) == 3, "FAIL: Should reuse extractors, not duplicate!"
print("[OK] Extractors were reused from the core library!")
print("[OK] No duplicate code generated!")
```

### Success Criteria
- ✅ Claude successfully creates the workflow JSON from natural language
- ✅ Optimization runs without errors
- ✅ Core library STILL has only 3 extractors (reused!)
- ✅ Study folder clean (no generated_extractors/)
- ✅ Results make engineering sense

### If It Fails
- NX expression mismatch: check Tools → Expression in NX
- OP2 results missing: verify the FEM setup outputs the required results
- Library issues: check `optimization_engine/extractors/catalog.json`

---

## Test 3: Validate Extractor Deduplication (15 minutes)

### Goal
Explicitly test that signature-based deduplication works correctly.

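The real signature logic lives in `extractor_library.py:73-92`. As an illustration of the underlying idea only, a content signature can be a short hash of the whitespace-normalized source, so cosmetic differences still map to the same catalog entry:

```python
import hashlib

def signature(source: str) -> str:
    """16-hex-char content signature of whitespace-normalized source."""
    normalized = "\n".join(line.rstrip() for line in source.strip().splitlines())
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

a = signature("def extract_mass(op2):\n    return op2.mass\n")
b = signature("def extract_mass(op2):\n    return op2.mass   \n")  # trailing spaces only
print(a == b)  # True: the second request reuses the existing extractor instead of duplicating it
```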
### Steps

#### 1. Run Same Workflow Twice

```python
# Create: test_deduplication.py

from pathlib import Path

from optimization_engine.extractor_library import ExtractorLibrary
from optimization_engine.llm_optimization_runner import LLMOptimizationRunner

print("=" * 80)
print("TEST 3: EXTRACTOR DEDUPLICATION")
print("=" * 80)
print()

library = ExtractorLibrary()
print(f"Core library before test: {len(library.catalog)} extractors")
print()

# Run 1: first optimization
print("RUN 1: First optimization with displacement extractor...")
study_dir = Path("studies/simple_beam_optimization")
runner1 = LLMOptimizationRunner(
    llm_workflow_file=study_dir / "1_setup/workflow_config.json",
    prt_file=study_dir / "1_setup/model/Beam.prt",
    sim_file=study_dir / "1_setup/model/Beam_sim1.sim",
    output_dir=study_dir / "2_substudies/dedup_test_run1",
    n_trials=2,  # just 2 trials
)
study1 = runner1.run()
print("[OK] Run 1 complete")
print()

# Check the library
library = ExtractorLibrary()  # reload
count_after_run1 = len(library.catalog)
print(f"Core library after Run 1: {count_after_run1} extractors")
print()

# Run 2: same workflow, different output directory
print("RUN 2: Same optimization, different study...")
runner2 = LLMOptimizationRunner(
    llm_workflow_file=study_dir / "1_setup/workflow_config.json",
    prt_file=study_dir / "1_setup/model/Beam.prt",
    sim_file=study_dir / "1_setup/model/Beam_sim1.sim",
    output_dir=study_dir / "2_substudies/dedup_test_run2",
    n_trials=2,  # just 2 trials
)
study2 = runner2.run()
print("[OK] Run 2 complete")
print()

# Check the library again
library = ExtractorLibrary()  # reload
count_after_run2 = len(library.catalog)
print(f"Core library after Run 2: {count_after_run2} extractors")
print()

# VERIFY: should be the same count (deduplication worked!)
print("=" * 80)
print("DEDUPLICATION TEST RESULTS")
print("=" * 80)
print()
if count_after_run1 == count_after_run2:
    print(f"[SUCCESS] Extractor count unchanged ({count_after_run1} → {count_after_run2})")
    print("[SUCCESS] Deduplication working correctly!")
    print()
    print("This means:")
    print("  ✓ Run 2 reused extractors from Run 1")
    print("  ✓ No duplicate code generated")
    print("  ✓ Core library stays clean")
else:
    print(f"[FAIL] Extractor count changed ({count_after_run1} → {count_after_run2})")
    print("[FAIL] Deduplication not working!")

print()
print("=" * 80)
```

Run it:
```bash
python test_deduplication.py
```

#### 2. Inspect Manifests

```python
# Create: compare_manifests.py

import json
from pathlib import Path

manifest1 = Path("studies/simple_beam_optimization/2_substudies/dedup_test_run1/extractors_manifest.json")
manifest2 = Path("studies/simple_beam_optimization/2_substudies/dedup_test_run2/extractors_manifest.json")

with open(manifest1) as f:
    data1 = json.load(f)

with open(manifest2) as f:
    data2 = json.load(f)

print("Run 1 used extractors:")
for sig in data1['extractors_used']:
    print(f"  {sig}")

print()
print("Run 2 used extractors:")
for sig in data2['extractors_used']:
    print(f"  {sig}")

print()
if data1['extractors_used'] == data2['extractors_used']:
    print("[OK] Same extractors referenced")
    print("[OK] Signatures match correctly")
else:
    print("[WARN] Different extractors used")
```

### Success Criteria
- ✅ Core library size unchanged after Run 2
- ✅ Both manifests reference the same extractor signatures
- ✅ No duplicate extractor files created
- ✅ Both study folders clean (only manifests, no code)

### If It Fails
- Check signature computation in `extractor_library.py:73-92`
- Verify catalog.json persistence
- Check the `get_or_create()` logic in `extractor_library.py:93-137`

---

## Test 4: Dashboard Visualization (30 minutes) - OPTIONAL

### Goal
Verify the dashboard can visualize the optimization results.

### Steps

#### 1. Start Dashboard
```bash
cd dashboard/api
python app.py
```

#### 2. Open Browser
```
http://localhost:5000
```

#### 3. Load Study
- Navigate to the beam optimization study
- View the optimization history plot
- Check the Pareto front (if multi-objective)
- Inspect trial details

### Success Criteria
- ✅ Dashboard loads without errors
- ✅ Can select a study from the dropdown
- ✅ History plot shows all trials
- ✅ Best design highlighted
- ✅ Can inspect individual trials

---

## Summary Checklist

At the end of the testing session, verify:

### Architecture
- [ ] Core library system working (deduplication verified)
- [ ] Study folders clean (only 5 files, no code pollution)
- [ ] Extractors manifest created correctly
- [ ] Workflow config auto-saved

### Functionality
- [ ] Parameter bounds parsed correctly (actual mm values)
- [ ] Extractors auto-generated successfully
- [ ] Optimization completes without errors
- [ ] Results make engineering sense

### Hybrid Mode Workflow
- [ ] Claude successfully creates workflow JSON from natural language
- [ ] LLMOptimizationRunner handles the workflow correctly
- [ ] 90% automation achieved (only JSON creation is manual)
- [ ] Full audit trail saved (workflow config + manifest)

### Production Readiness
- [ ] No code duplication across studies
- [ ] Clean folder structure maintained
- [ ] Library grows intelligently (deduplication)
- [ ] Reproducible (workflow config captures everything)

---

## If Everything Passes

**Congratulations!** 🎉

You now have a production-ready optimization system with:
- ✅ 90% automation (Hybrid Mode)
- ✅ Clean architecture (centralized library)
- ✅ Full transparency (audit trails)
- ✅ Code reuse (deduplication)
- ✅ Professional structure (studies = data, core = code)

### Next Steps
1. Run longer optimizations (50-100 trials)
2. Try real engineering problems
3. Build up the core library with domain-specific extractors
4. Consider upgrading to Full LLM Mode (API) when ready

### Share Your Success
- Update DEVELOPMENT.md with test results
- Document any issues encountered
- Add your own optimization examples to `studies/`

---

## If Something Fails

### Debugging Strategy

1. **Check logs**: Look for error messages in the terminal output
2. **Verify files**: Ensure the NX model and sim files exist and are valid
3. **Inspect manifests**: Check that `extractors_manifest.json` was created
4. **Review library**: Run `python -m optimization_engine.extractor_library` to see the library status
5. **Test components**: Run the E2E test: `python tests/test_phase_3_2_e2e.py`

### Common Issues

**"Expression not found"**:
- Open the NX model
- Tools → Expression
- Verify the exact parameter names
- Update the workflow JSON

**"No mass results"**:
- Check that the OP2 file contains mass data
- Try a different result type (displacement, stress)
- Verify the FEM setup outputs the required results

**"Extractor generation failed"**:
- Check pyNastran can read the OP2: `python -c "from pyNastran.op2.op2 import OP2; OP2().read_op2('path')"`
- Review the knowledge base patterns
- Manually create the extractor if needed

**"Deduplication not working"**:
- Check `optimization_engine/extractors/catalog.json`
- Verify the signature computation
- Review the `get_or_create()` logic

### Get Help
- Review `docs/HYBRID_MODE_GUIDE.md`
- Check `docs/ARCHITECTURE_REFACTOR_NOV17.md`
- Inspect the code in `optimization_engine/llm_optimization_runner.py`

---

**Ready to revolutionize your optimization workflow!** 🚀

**Start Time**: ___________
**End Time**: ___________
**Tests Passed**: ___ / 4
**Issues Found**: ___________
**Notes**: ___________