# How to Extend an Optimization Study

**Date**: November 20, 2025

When you want to run more iterations to get better results, you have three options:

---

## Option 1: Continue Existing Study (Recommended)

**Best for**: When you want to keep all previous trial data and just add more iterations

**Advantages**:

- Preserves all existing trials
- Continues from the current best result
- Uses accumulated knowledge from previous trials
- More efficient (no wasted trials)

**Process**:

### Step 1: Wait for current optimization to finish

Check if the v2.1 test is still running:

```bash
# On Windows
tasklist | findstr python

# Check background job status:
# look for the running optimization process
```

### Step 2: Run the continuation script

```bash
cd studies/circular_plate_protocol10_v2_1_test
python continue_optimization.py
```

### Step 3: Configure number of additional trials

Edit [continue_optimization.py:29](../studies/circular_plate_protocol10_v2_1_test/continue_optimization.py#L29):

```python
# CONFIGURE THIS: Number of additional trials to run
ADDITIONAL_TRIALS = 50  # Change to 100 for a total of ~150 trials
```

**Example**: If you ran 50 trials initially and want 100 total:

- Set `ADDITIONAL_TRIALS = 50`
- The study will run trials #50-99 (continuing from where it left off)
- All 100 trials will be in the same study database

---

## Option 2: Modify Config and Restart

**Best for**: When you want a completely fresh start with more iterations

**Advantages**:

- Clean slate optimization
- Good for testing different configurations
- Simpler to understand (one continuous run)

**Disadvantages**:

- Loses all previous trial data
- Wastes computational budget if previous trials were good

**Process**:

### Step 1: Stop any running optimization

```bash
# Kill the running process if needed.
# On Windows, find the PID and run:
taskkill /PID <process_id> /F
```

### Step 2: Edit optimization config

Edit [studies/circular_plate_protocol10_v2_1_test/1_setup/optimization_config.json](../studies/circular_plate_protocol10_v2_1_test/1_setup/optimization_config.json) and raise `n_trials` from 50 to 100 (note that JSON does not allow inline comments):

```json
{
  "trials": {
    "n_trials": 100,
    "timeout_per_trial": 3600
  }
}
```

### Step 3: Delete old results

```bash
cd studies/circular_plate_protocol10_v2_1_test

# Delete the old database and history
del 2_results\study.db
del 2_results\optimization_history_incremental.json
del 2_results\intelligent_optimizer\*.*
```

### Step 4: Rerun optimization

```bash
python run_optimization.py
```

---

## Option 3: Wait and Evaluate First

**Best for**: When you're not sure if more iterations are needed

**Process**:

### Step 1: Wait for current test to finish

The v2.1 test is currently running with 50 trials. Let it complete first.

### Step 2: Check results

```bash
cd studies/circular_plate_protocol10_v2_1_test

# View the optimization report
type 3_reports\OPTIMIZATION_REPORT.md

# Or check the test summary
type 2_results\test_summary.json
```

### Step 3: Evaluate performance

Look at:

- **Best error**: Is it < 0.1 Hz? (target achieved)
- **Convergence**: Has it plateaued, or is it still improving?
- **Pruning rate**: < 5% is good

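A convergence check can be sketched in plain Python; this helper is illustrative (its name, window size, and threshold are assumptions, not part of the study scripts):

```python
def has_plateaued(errors, window=20, min_improvement=1e-3):
    """Return True if the best error has not improved by more than
    min_improvement over the last `window` trials."""
    if len(errors) <= window:
        return False  # too few trials to judge
    best_before = min(errors[:-window])
    best_now = min(errors)
    return (best_before - best_now) < min_improvement

# Example: errors that stop improving after the first few trials
errors = [1.0, 0.8, 0.5, 0.3, 0.25] + [0.25] * 25
print(has_plateaued(errors))  # True: no improvement in the last 20 trials
```

Feed it the per-trial objective values from the study history to decide between "converging" and "plateaued" below.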
### Step 4: Decide next action

- **If target achieved**: Done! No need for more trials
- **If converging**: Add 20-30 more trials (Option 1)
- **If struggling**: May need an algorithm adjustment, not more trials

---

## Comparison Table

| Feature | Option 1: Continue | Option 2: Restart | Option 3: Wait |
|---------|-------------------|-------------------|----------------|
| Preserves data | ✅ Yes | ❌ No | ✅ Yes |
| Efficient | ✅ Very | ❌ Wasteful | ✅ Most |
| Easy to set up | ✅ Simple | ⚠️ Moderate | ✅ Simplest |
| Best use case | Adding more trials | Testing new config | Evaluating first |

---

## Detailed Example: Extending to 100 Trials

Let's say the v2.1 test (50 trials) finishes with:

- Best error: 0.25 Hz (not at target yet)
- Convergence: still improving
- Pruning rate: 4% (good)

**Recommendation**: Continue with 50 more trials (Option 1).

### Step-by-step:

1. **Check current status**:

   ```python
   import optuna

   storage = "sqlite:///studies/circular_plate_protocol10_v2_1_test/2_results/study.db"
   study = optuna.load_study(study_name="circular_plate_protocol10_v2_1_test", storage=storage)

   print(f"Current trials: {len(study.trials)}")
   print(f"Best error: {study.best_value:.4f} Hz")
   ```

2. **Edit continuation script**:

   ```python
   # In continue_optimization.py, line 29
   ADDITIONAL_TRIALS = 50  # Will reach ~100 total
   ```

3. **Run continuation**:

   ```bash
   cd studies/circular_plate_protocol10_v2_1_test
   python continue_optimization.py
   ```

4. **Monitor progress**:

   - Watch console output for trial results
   - Check `optimization_history_incremental.json` for updates
   - Look for convergence (error decreasing)

5. **Verify results**:

   ```python
   # After completion
   study = optuna.load_study(...)
   print(f"Total trials: {len(study.trials)}")  # Should be ~100
   print(f"Final best error: {study.best_value:.4f} Hz")
   ```

---

## Understanding Trial Counts

**Important**: The "total trials" count includes both successful and pruned trials.

Example breakdown:

```
Total trials: 50
├── Successful: 47 (94%)
│   └── Used for optimization
└── Pruned: 3 (6%)
    └── Rejected (invalid parameters, simulation failures)
```

When you add 50 more trials:

```
Total trials: 100
├── Successful: ~94 (94%)
└── Pruned: ~6 (6%)
```

The optimization algorithm only learns from **successful trials**, so:

- 50 successful trials ≈ 53 total trials (with 6% pruning)
- 100 successful trials ≈ 106 total trials (with 6% pruning)

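The arithmetic is just the successful-trial target divided by the success rate; a minimal sketch (the helper name is illustrative):

```python
def total_trials_needed(successful_target, pruning_rate):
    """Estimate the total trial count so that roughly
    `successful_target` trials survive pruning."""
    return round(successful_target / (1.0 - pruning_rate))

print(total_trials_needed(50, 0.06))   # 53
print(total_trials_needed(100, 0.06))  # 106
```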
---
## Best Practices

### When to Add More Trials:

✅ Error still decreasing (not converged yet)
✅ Close to target but need refinement
✅ Exploring new parameter regions

### When NOT to Add More Trials:

❌ Error has plateaued for 20+ trials
❌ Already achieved target tolerance
❌ High pruning rate (>10%): fix validation instead
❌ Wrong algorithm selected: fix the strategy selector instead

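To check your actual pruning rate, counting trial outcomes is enough; this sketch uses plain state labels rather than Optuna's `TrialState` enum:

```python
def pruning_rate(states):
    """Fraction of trials that were pruned, given a list of state labels."""
    if not states:
        return 0.0
    return states.count("PRUNED") / len(states)

# Example: the 47-successful / 3-pruned breakdown from above
states = ["COMPLETE"] * 47 + ["PRUNED"] * 3
print(f"{pruning_rate(states):.0%}")  # 6%
```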
### How Many to Add:

- **Close to target** (within 2x tolerance): Add 20-30 trials
- **Moderate distance** (2-5x tolerance): Add 50 trials
- **Far from target** (>5x tolerance): Investigate root cause first

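These thresholds can be written as a small decision helper; the function is a sketch mirroring the list above, not part of the study scripts:

```python
def trials_to_add(best_error, tolerance):
    """Suggest additional trials based on distance from the target tolerance."""
    ratio = best_error / tolerance
    if ratio <= 2:
        return "add 20-30 trials"
    if ratio <= 5:
        return "add 50 trials"
    return "investigate root cause first"

print(trials_to_add(0.25, 0.1))  # 0.25 Hz vs a 0.1 Hz target: "add 50 trials"
```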
---

## Monitoring Long Runs

For runs with 100+ trials (several hours):

### Option A: Run in background (Windows)

```bash
# Start minimized
start /MIN python continue_optimization.py
```

### Option B: Use screen/tmux (if available)

```bash
# Not standard on Windows, but useful on Linux/Mac
tmux new -s optimization
python continue_optimization.py
# Detach: Ctrl+B, then D
# Reattach: tmux attach -t optimization
```

### Option C: Monitor progress file

```python
# Check progress without interrupting the run
import json

with open('2_results/optimization_history_incremental.json') as f:
    history = json.load(f)

print(f"Completed trials: {len(history)}")
best = min(history, key=lambda x: x['objective'])
print(f"Current best: {best['objective']:.4f} Hz")
```

---

## Troubleshooting

### Issue: "Study not found in database"

**Cause**: The initial optimization hasn't run yet, or the database is corrupted.

**Fix**: Run `run_optimization.py` first to create the initial study.

### Issue: Continuation starts from trial #0

**Cause**: The study database exists but is empty.

**Fix**: Delete the database and run a fresh optimization.

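To confirm the database really is empty before deleting it, you can count rows directly with the standard library. This sketch assumes Optuna's default RDB schema (a `trials` table) and this study's directory layout:

```python
import sqlite3

def count_trials(db_path):
    """Count rows in Optuna's `trials` table; return None if the
    database or table is missing/uninitialized."""
    try:
        con = sqlite3.connect(db_path)
        try:
            (count,) = con.execute("SELECT COUNT(*) FROM trials").fetchone()
            return count
        finally:
            con.close()
    except sqlite3.OperationalError:
        return None

# Path assumed from this study's layout
n = count_trials("2_results/study.db")
print("empty or missing database" if not n else f"{n} trials recorded")
```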
### Issue: NX session conflicts

**Cause**: Multiple NX sessions accessing the same model.

**Solution**: The NX Session Manager handles this automatically, but verify:

```python
from optimization_engine.nx_session_manager import NXSessionManager

mgr = NXSessionManager()
print(mgr.get_status_report())
```

### Issue: High pruning rate in continuation

**Cause**: The optimization is exploring extreme parameter regions.

**Fix**: The simulation validator should prevent this; verify that its rules are active.

---

**Summary**: For your case (wanting 100 iterations), use **Option 1** with the `continue_optimization.py` script. Set `ADDITIONAL_TRIALS = 50` and run it after the current test finishes.