feat: Add TrialManager and DashboardDB for unified trial management
- Add TrialManager (trial_manager.py) for consistent trial_NNNN naming
- Add DashboardDB (dashboard_db.py) for an Optuna-compatible database schema
- Update CLAUDE.md with trial management documentation
- Update ATOMIZER_CONTEXT.md with the v1.8 trial system
- Update cheatsheet v2.2 with the new utilities
- Update SYS_14 protocol to v2.3 with TrialManager integration
- Add LAC learnings for trial management patterns
- Add archive/README.md for the deprecated-code policy

Key principles:

- Trial numbers NEVER reset (monotonic)
- Folders NEVER get overwritten
- Database always synced with filesystem
- Surrogate predictions are NOT trials (only FEA results)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
studies/{geometry_type}/{study_name}/
├── optimization_config.json      # Problem definition
├── run_optimization.py           # FEA optimization script
├── run_nn_optimization.py        # Neural acceleration (optional)
├── run_turbo_optimization.py     # GNN-Turbo acceleration (optional)
├── README.md                     # MANDATORY documentation
├── STUDY_REPORT.md               # Results template
├── 1_setup/
│   ├── optimization_config.json  # Config copy for reference
│   └── model/
│       ├── Model.prt             # NX part file
│       ├── Model_sim1.sim        # NX simulation
│       └── Model_fem1.fem        # FEM definition
├── 2_iterations/                 # FEA trial folders (trial_NNNN/)
│   ├── trial_0001/               # Zero-padded, NEVER reset
│   ├── trial_0002/
│   └── ...
├── 3_results/
│   ├── study.db                  # Optuna-compatible database
│   ├── optimization.log          # Logs
│   └── turbo_report.json         # NN results (if run)
└── 3_insights/                   # Study Insights (SYS_16)
---

## Trial Management System (v2.3)

A unified trial management layer ensures consistent trial naming and logging across all optimization methods:

### Key Components

| Component | Path | Purpose |
|-----------|------|---------|
| `TrialManager` | `optimization_engine/utils/trial_manager.py` | Unified trial folder + DB management |
| `DashboardDB` | `optimization_engine/utils/dashboard_db.py` | Optuna-compatible database wrapper |

### Trial Naming Convention

```
2_iterations/
├── trial_0001/   # Zero-padded, monotonically increasing
├── trial_0002/   # NEVER reset, NEVER overwritten
├── trial_0003/
└── ...
```

**Key principles**:

- Trial numbers **NEVER reset** (monotonically increasing)
- Folders **NEVER get overwritten**
- The database is always in sync with the filesystem
- Surrogate predictions (5K per batch) are NOT trials - only FEA results are

### Usage

```python
from optimization_engine.utils.trial_manager import TrialManager

tm = TrialManager(study_dir)

# Start new trial
trial = tm.new_trial(params={'rib_thickness': 10.5})

# After FEA completes
tm.complete_trial(
    trial_number=trial['trial_number'],
    objectives={'wfe_40_20': 5.63, 'mass_kg': 118.67},
    weighted_sum=42.5,
    is_feasible=True
)
```
### Database Schema (Optuna-Compatible)

The `DashboardDB` class creates an Optuna-compatible schema for dashboard integration:

- `trials` - Main trial records with state, datetime, value
- `trial_values` - Objective values (supports multiple objectives)
- `trial_params` - Design parameter values
- `trial_user_attributes` - Metadata (source, solve_time, etc.)
- `studies` - Study metadata (directions, name)
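As a concrete illustration, the tables above can be created in SQLite roughly as follows. This is a hedged sketch with abridged column lists — the actual DDL emitted by `DashboardDB` may differ:

```python
import sqlite3

# Abridged, Optuna-style schema (illustrative; not DashboardDB's exact DDL)
DDL = """
CREATE TABLE studies (study_id INTEGER PRIMARY KEY, study_name TEXT UNIQUE);
CREATE TABLE trials (
    trial_id INTEGER PRIMARY KEY,
    number   INTEGER,
    study_id INTEGER REFERENCES studies(study_id),
    state    TEXT,                 -- e.g. RUNNING / COMPLETE / FAIL
    datetime_start TEXT,
    datetime_complete TEXT
);
CREATE TABLE trial_values (trial_id INTEGER, objective INTEGER, value REAL);
CREATE TABLE trial_params (trial_id INTEGER, param_name TEXT, param_value REAL);
CREATE TABLE trial_user_attributes (trial_id INTEGER, key TEXT, value_json TEXT);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
tables = {r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
```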
---

## Version Info

| Component | Version | Last Updated |
|-----------|---------|--------------|
| ATOMIZER_CONTEXT | 1.8 | 2025-12-28 |
| BaseOptimizationRunner | 1.0 | 2025-12-07 |
| GenericSurrogate | 1.0 | 2025-12-07 |
| Study State Detector | 1.0 | 2025-12-07 |
| Subagent Commands | 1.0 | 2025-12-07 |
| FEARunner Pattern | 1.0 | 2025-12-12 |
| Study Insights | 1.0 | 2025-12-20 |
| TrialManager | 1.0 | 2025-12-28 |
| DashboardDB | 1.0 | 2025-12-28 |
| GNN-Turbo System | 2.3 | 2025-12-28 |
---
---
skill_id: SKILL_001
version: 2.2
last_updated: 2025-12-28
type: reference
code_dependencies:
- optimization_engine/extractors/__init__.py
- optimization_engine/method_selector.py
- optimization_engine/utils/trial_manager.py
- optimization_engine/utils/dashboard_db.py
requires_skills:
- SKILL_000
---

# Atomizer Quick Reference Cheatsheet

**Version**: 2.2
**Updated**: 2025-12-28
**Purpose**: Rapid lookup for common operations. "I want X → Use Y"

---
**Reference implementations**:
- `studies/m1_mirror_adaptive_V14/run_optimization.py`
- `studies/m1_mirror_adaptive_V15/run_optimization.py`

---

## Trial Management Utilities

### TrialManager - Unified Trial Folder + DB Management

```python
from optimization_engine.utils.trial_manager import TrialManager

tm = TrialManager(study_dir)

# Start new trial (creates folder, saves params)
trial = tm.new_trial(
    params={'rib_thickness': 10.5, 'mirror_face_thickness': 17.0},
    source="turbo",
    metadata={'turbo_batch': 1, 'predicted_ws': 42.0}
)
# Returns: {'trial_id': 47, 'trial_number': 47, 'folder_path': Path(...)}

# After FEA completes
tm.complete_trial(
    trial_number=trial['trial_number'],
    objectives={'wfe_40_20': 5.63, 'mass_kg': 118.67},
    weighted_sum=42.5,
    is_feasible=True
)

# Mark failed trial
tm.fail_trial(trial_number=47, error="NX solver timeout")
```

### DashboardDB - Optuna-Compatible Database

```python
from optimization_engine.utils.dashboard_db import DashboardDB, convert_custom_to_optuna

# Create new dashboard-compatible database
db = DashboardDB(db_path, study_name="my_study")

# Log a trial
trial_id = db.log_trial(
    params={'rib_thickness': 10.5},
    objectives={'wfe_40_20': 5.63, 'mass_kg': 118.67},
    weighted_sum=42.5,
    is_feasible=True,
    state="COMPLETE"
)

# Mark best trial
db.mark_best(trial_id)

# Get summary
summary = db.get_summary()

# Convert existing custom database to Optuna format
convert_custom_to_optuna(db_path, study_name)
```

### Trial Naming Convention

```
2_iterations/
├── trial_0001/   # Zero-padded, monotonically increasing
├── trial_0002/   # NEVER reset, NEVER overwritten
└── trial_0003/
```

**Key principles**:

- Trial numbers **NEVER reset** across the study lifetime
- Surrogate predictions (5K per batch) are NOT logged as trials
- Only FEA-validated results become trials
```
Atomizer/
│   └── extensions/          # EXT_01 - EXT_04
├── optimization_engine/     # Core Python modules
│   ├── extractors/          # Physics extraction library
│   ├── gnn/                 # GNN surrogate module (Zernike)
│   └── utils/               # Utilities (dashboard_db, trial_manager)
├── studies/                 # User studies
├── archive/                 # Deprecated code (for reference)
└── atomizer-dashboard/      # React dashboard
```
**Full documentation**: `docs/protocols/system/SYS_14_NEURAL_ACCELERATION.md`
## Trial Management & Dashboard Compatibility

### Trial Naming Convention

**CRITICAL**: Use `trial_NNNN/` folders (zero-padded, never reused, never overwritten).

```
2_iterations/
├── trial_0001/          # First FEA validation
│   ├── params.json      # Input parameters
│   ├── results.json     # Output objectives
│   ├── _meta.json       # Metadata (source, timestamps, predictions)
│   └── *.op2, *.fem...  # FEA files
├── trial_0002/
└── ...
```

**Key Principles:**

- Trial numbers are **global and monotonic** - never reset between runs
- Only **FEA-validated results** are trials (surrogate predictions are ephemeral)
- Each trial folder is **immutable** after completion
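For illustration, a `_meta.json` might look like the following — the field names here are assumptions based on the description above, not `TrialManager`'s exact schema:

```json
{
  "trial_number": 47,
  "source": "turbo",
  "created_at": "2025-12-28T10:15:00",
  "completed_at": "2025-12-28T10:19:32",
  "predicted_ws": 186.77,
  "is_feasible": true
}
```

Keeping the surrogate's prediction alongside the FEA result is what makes later prediction-error analysis possible.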
### Using TrialManager

```python
from optimization_engine.utils.trial_manager import TrialManager

tm = TrialManager(study_dir, "my_study_name")

# Create new trial (reserves folder + DB row)
trial = tm.new_trial(params={'rib_thickness': 10.5}, source="turbo")

# After FEA completes
tm.complete_trial(
    trial_number=trial['trial_number'],
    objectives={'wfe_40_20': 5.63, 'mass_kg': 118.67},
    weighted_sum=175.87,
    is_feasible=True
)
```

### Dashboard Database Compatibility

All studies must use the Optuna-compatible SQLite schema for dashboard integration:

```python
from optimization_engine.utils.dashboard_db import DashboardDB

db = DashboardDB(study_dir / "3_results" / "study.db", "study_name")
db.log_trial(params={...}, objectives={...}, weighted_sum=175.87)
```

**Required Tables** (Optuna schema):

- `trials` - with `trial_id`, `number`, `study_id`, `state`
- `trial_values` - objective values
- `trial_params` - parameter values
- `trial_user_attributes` - custom metadata

**To convert legacy databases:**

```python
from optimization_engine.utils.dashboard_db import convert_custom_to_optuna

convert_custom_to_optuna(db_path, "study_name")
```

## CRITICAL: NX Open Development Protocol

### Always Use Official Documentation First
# Atomizer Archive

This directory contains deprecated/replaced code that is kept for reference and potential rollback.

## Structure

```
archive/
├── extractors/          # Deprecated physics extractors
│   └── zernike_legacy/  # Pre-OPD Zernike extractors
└── README.md
```

## Archive Policy

When replacing functionality:

1. Move the old file to the appropriate archive subdirectory
2. Add a header comment noting the archive date and replacement
3. Update this README with the change
|
|
||||||
|
## Archived Items
|
||||||
|
|
||||||
|
### extractors/zernike_legacy/ (2024-12-28)
|
||||||
|
|
||||||
|
**Replaced by:** `extract_zernike_opd.py` (ZernikeOPDExtractor)
|
||||||
|
|
||||||
|
**Reason:** The OPD method provides more accurate wavefront error calculations by:
|
||||||
|
- Using optical path difference (OPD) directly instead of surface displacement
|
||||||
|
- Proper handling of relative subcase comparisons
|
||||||
|
- Better numerical stability for high-order Zernike modes
|
||||||
|
|
||||||
|
**Archived files:**
|
||||||
|
| File | Original Class | Description |
|
||||||
|
|------|----------------|-------------|
|
||||||
|
| `extract_zernike.py` | `ZernikeExtractor` | Original displacement-based Zernike |
|
||||||
|
| `extract_zernike_surface.py` | `ZernikeSurfaceExtractor` | Surface-normal projection variant |
|
||||||
|
| `extract_zernike_figure.py` | `ZernikeFigureExtractor` | Figure error variant |
|
||||||
|
|
||||||
|
**To restore:** Copy files back to `optimization_engine/extractors/` and update `__init__.py`
|
||||||
---

## Self-Improving Turbo Optimization

### Overview

The **Self-Improving Turbo** pattern combines MLP surrogate exploration with iterative FEA validation and surrogate retraining. This creates a closed-loop optimization in which the surrogate continuously improves from its own mistakes.

### Workflow

```
INITIALIZE:
  - Load pre-trained surrogate (from prior FEA data)
  - Load previous FEA params for diversity checking

REPEAT until converged or FEA budget exhausted:

  1. SURROGATE EXPLORE (~1 min)
     ├─ Run 5000 Optuna TPE trials with surrogate
     ├─ Quantize predictions to machining precision
     └─ Find diverse top candidates

  2. SELECT DIVERSE CANDIDATES
     ├─ Sort by weighted sum
     ├─ Select top 5 that are:
     │  ├─ At least 15% different from each other
     │  └─ At least 7.5% different from ALL previous FEA
     └─ Ensures exploration, not just exploitation

  3. FEA VALIDATE (~25 min for 5 candidates)
     ├─ For each candidate:
     │  ├─ Create iteration folder
     │  ├─ Update NX expressions
     │  ├─ Run Nastran solver
     │  ├─ Extract objectives (ZernikeOPD or other)
     │  └─ Log prediction error
     └─ Add results to training data

  4. RETRAIN SURROGATE (~2 min)
     ├─ Combine all FEA samples
     ├─ Retrain MLP for 100 epochs
     ├─ Save new checkpoint
     └─ Reload improved model

  5. CHECK CONVERGENCE
     ├─ Track best feasible objective
     ├─ If improved: reset patience counter
     └─ If no improvement for 3 iterations: STOP
```
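The diversity filter in step 2 can be sketched as a greedy pass over the best-first candidate list. This is a simplified sketch, not the repo's implementation: it assumes candidate vectors are already normalized per parameter to [0, 1], and it uses the maximum per-parameter difference as the "X% different" metric — the actual distance definition may differ:

```python
import numpy as np

def select_diverse(candidates, previous_fea, k=5,
                   min_dist_cand=0.15, min_dist_prev=0.075):
    """Greedily pick up to k candidates (sorted best-first, normalized to [0,1])
    that are mutually diverse and distinct from all previous FEA points."""
    selected = []
    for cand in candidates:
        cand = np.asarray(cand, dtype=float)
        # At least 15% different from every already-selected candidate
        far_from_selected = all(
            np.max(np.abs(cand - s)) >= min_dist_cand for s in selected)
        # At least 7.5% different from every previously FEA-validated point
        far_from_previous = all(
            np.max(np.abs(cand - p)) >= min_dist_prev
            for p in np.atleast_2d(previous_fea)) if len(previous_fea) else True
        if far_from_selected and far_from_previous:
            selected.append(cand)
        if len(selected) == k:
            break
    return selected
```

Because the list is sorted best-first, the filter trades a little exploitation (it may skip the 2nd-best point) for guaranteed spread across the design space.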
### Configuration Example

```json
{
  "turbo_settings": {
    "surrogate_trials_per_iteration": 5000,
    "fea_validations_per_iteration": 5,
    "max_fea_validations": 100,
    "max_iterations": 30,
    "convergence_patience": 3,
    "retrain_frequency": "every_iteration",
    "min_samples_for_retrain": 20
  }
}
```

### Key Parameters

| Parameter | Typical Value | Description |
|-----------|---------------|-------------|
| `surrogate_trials_per_iteration` | 5000 | NN trials per iteration |
| `fea_validations_per_iteration` | 5 | FEA runs per iteration |
| `max_fea_validations` | 100 | Total FEA budget |
| `convergence_patience` | 3 | Stop after N no-improvement iterations |
| `MIN_CANDIDATE_DISTANCE` | 0.15 | 15% of param range for diversity |

### Example Results (M1 Mirror Turbo V1)

| Metric | Value |
|--------|-------|
| FEA Validations | 45 |
| Best WS Found | 282.05 |
| Baseline (V11) | 284.19 |
| Improvement | 0.75% |

---
## Dashboard Integration for Neural Studies

### Problem

Neural surrogate studies generate thousands of NN-only trials that would overwhelm the dashboard. Only FEA-validated trials should be visible.

### Solution: Separate Optuna Study

Log FEA validation results to a separate Optuna study that the dashboard can read:

```python
import optuna

# Create Optuna study for dashboard visibility
optuna_db_path = RESULTS_DIR / "study.db"
optuna_storage = f"sqlite:///{optuna_db_path}"
optuna_study = optuna.create_study(
    study_name=study_name,
    storage=optuna_storage,
    direction="minimize",
    load_if_exists=True,
)

# After each FEA validation:
trial = optuna_study.ask()

# Set parameters (using suggest_float with fixed bounds)
for var_name, var_val in result['params'].items():
    trial.suggest_float(var_name, var_val, var_val)

# Set objectives as user attributes
for obj_name, obj_val in result['objectives'].items():
    trial.set_user_attr(obj_name, obj_val)

# Log iteration metadata
trial.set_user_attr('turbo_iteration', turbo_iter)
trial.set_user_attr('prediction_error', abs(actual_ws - predicted_ws))
trial.set_user_attr('is_feasible', is_feasible)

# Report the objective value
optuna_study.tell(trial, result['weighted_sum'])
```
### File Structure

```
3_results/
├── study.db             # Optuna format (for dashboard)
├── study_custom.db      # Custom SQLite (detailed turbo data)
├── checkpoints/
│   └── best_model.pt    # Surrogate model
├── turbo_logs/          # Per-iteration JSON logs
└── best_design_archive/ # Archived best designs
```
### Backfilling Existing Data

If you have existing turbo runs without Optuna logging, use the backfill script:

```python
# scripts/backfill_optuna.py
import optuna
import sqlite3
import json

# Read from custom database
conn = sqlite3.connect('study_custom.db')
conn.row_factory = sqlite3.Row  # enables row['params'] access below
c = conn.cursor()
c.execute('''
    SELECT iter_num, turbo_iteration, weighted_sum, surrogate_predicted_ws,
           params, objectives, is_feasible
    FROM trials ORDER BY iter_num
''')
rows = c.fetchall()

# Create Optuna study
study = optuna.create_study(...)

# Backfill each trial
for row in rows:
    trial = study.ask()
    params = json.loads(row['params'])        # Stored as JSON
    objectives = json.loads(row['objectives'])

    for name, val in params.items():
        trial.suggest_float(name, float(val), float(val))
    for name, val in objectives.items():
        trial.set_user_attr(name, float(val))

    study.tell(trial, row['weighted_sum'])
```
### Dashboard View
|
||||||
|
|
||||||
|
After integration, the dashboard shows:
|
||||||
|
- Only FEA-validated trials (not NN-only)
|
||||||
|
- Objective convergence over FEA iterations
|
||||||
|
- Parameter distributions from validated designs
|
||||||
|
- Prediction error trends (via user attributes)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
## Version History

| Version | Date | Changes |
|---------|------|---------|
| 2.3 | 2025-12-28 | Added TrialManager, DashboardDB, proper trial_NNNN naming |
| 2.2 | 2025-12-24 | Added Self-Improving Turbo and Dashboard Integration sections |
| 2.1 | 2025-12-10 | Added Zernike GNN section for mirror optimization |
| 2.0 | 2025-12-06 | Added MLP Surrogate with Turbo Mode |
| 1.0 | 2025-12-05 | Initial consolidation from neural docs |

---
## New Trial Management System (v2.3)

### Overview

The new trial management system provides:

1. **Consistent trial naming**: `trial_NNNN/` folders (zero-padded, never reused)
2. **Dashboard compatibility**: Optuna-compatible SQLite schema
3. **Clear separation**: Surrogate predictions are ephemeral; only FEA results are trials

### Key Components

| Component | File | Purpose |
|-----------|------|---------|
| `TrialManager` | `optimization_engine/utils/trial_manager.py` | Trial folder + DB management |
| `DashboardDB` | `optimization_engine/utils/dashboard_db.py` | Optuna-compatible database ops |

### Usage Pattern

```python
from optimization_engine.utils.trial_manager import TrialManager

# Initialize
tm = TrialManager(study_dir, "my_study")

# Start trial (creates folder, reserves DB row)
trial = tm.new_trial(
    params={'rib_thickness': 10.5},
    source="turbo",
    metadata={'turbo_batch': 1, 'predicted_ws': 186.77}
)

# Run FEA...

# Complete trial (logs to DB)
tm.complete_trial(
    trial_number=trial['trial_number'],
    objectives={'wfe_40_20': 5.63, 'mass_kg': 118.67},
    weighted_sum=175.87,
    is_feasible=True,
    metadata={'solve_time': 211.7}
)
```

### Trial Folder Structure

```
2_iterations/
├── trial_0001/
│   ├── params.json      # Input parameters
│   ├── params.exp       # NX expression format
│   ├── results.json     # Output objectives
│   ├── _meta.json       # Full metadata (source, timestamps, predictions)
│   └── *.op2, *.fem...  # FEA files
├── trial_0002/
└── ...
```

### Database Schema

The `DashboardDB` class creates Optuna-compatible tables:

| Table | Purpose |
|-------|---------|
| `studies` | Study metadata |
| `trials` | Trial info with `state`, `number`, `study_id` |
| `trial_values` | Objective values |
| `trial_params` | Parameter values |
| `trial_user_attributes` | Custom metadata (turbo_batch, predicted_ws, etc.) |

### Converting Legacy Databases

```python
from optimization_engine.utils.dashboard_db import convert_custom_to_optuna

# Convert custom schema to Optuna format
convert_custom_to_optuna(
    db_path="3_results/study.db",
    study_name="my_study"
)
```

### Key Principles

1. **Surrogate predictions are NOT trials** - only FEA-validated results are logged
2. **Trial numbers never reset** - monotonically increasing across all runs
3. **Folders never overwritten** - each trial gets a unique `trial_NNNN/` directory
4. **Metadata preserved** - predictions stored for accuracy analysis
{"timestamp": "2025-12-24T08:13:38.642843", "category": "protocol_clarification", "context": "SYS_14 Neural Acceleration with dashboard integration", "insight": "When running neural surrogate turbo optimization, FEA validation trials MUST be logged to Optuna for dashboard visibility. Use optuna.create_study() with load_if_exists=True, then for each FEA result: trial=study.ask(), set params via suggest_float(), set objectives as user_attrs, then study.tell(trial, weighted_sum).", "confidence": 0.95, "tags": ["SYS_14", "neural", "optuna", "dashboard", "turbo"]}
{"timestamp": "2025-12-28T10:15:00", "category": "protocol_clarification", "context": "SYS_14 v2.3 update with TrialManager integration", "insight": "SYS_14 Neural Acceleration protocol updated to v2.3. Now uses TrialManager for consistent trial_NNNN naming instead of iter{N}. Key components: (1) TrialManager for folder+DB management, (2) DashboardDB for Optuna-compatible schema, (3) Trial numbers are monotonically increasing and NEVER reset. Reference implementation: studies/M1_Mirror/m1_mirror_cost_reduction_flat_back_V5/run_turbo_optimization.py", "confidence": 0.95, "tags": ["SYS_14", "trial_manager", "dashboard_db", "v2.3"]}
{"timestamp":"2025-12-22T11:05:00","category":"success_pattern","context":"Organized M1 Mirror documentation with parent-child README hierarchy","insight":"DOCUMENTATION PATTERN: Studies use a two-level README hierarchy. Parent README at studies/{geometry_type}/README.md contains project-wide context (optical specs, design variables catalog, objectives catalog, campaign history, sub-studies index). Child README at studies/{geometry_type}/{study_name}/README.md references parent and contains study-specific details (active variables, algorithm config, results). This eliminates duplication, maintains single source of truth for specs, and makes sub-study docs concise. Pattern documented in OP_01_CREATE_STUDY.md and study-creation-core.md.","confidence":0.95,"tags":["documentation","readme","hierarchy","study-creation","organization"],"rule":"When creating studies for a geometry type: (1) Create parent README with project context if first study, (2) Add reference banner to child README: '> See [../README.md](../README.md) for project overview', (3) Update parent's sub-studies index table when adding new sub-studies."}
{"timestamp":"2025-12-22T11:05:00","category":"success_pattern","context":"Created universal mirror optical specs extraction tool","insight":"TOOL PATTERN: Mirror optical specs (focal length, f-number, diameter) can be auto-estimated from FEA mesh geometry by fitting z = a*r² + b to node coordinates. Focal length = 1/(4*|a|). Tool at tools/extract_mirror_optical_specs.py works with any mirror study - just point it at an OP2 file or study directory. Reports fit quality to indicate if explicit focal length should be used instead. Use: python tools/extract_mirror_optical_specs.py path/to/study","confidence":0.9,"tags":["tools","mirror","optical-specs","zernike","opd","extraction"],"rule":"For mirror optimization: (1) Run extract_mirror_optical_specs.py to estimate optical prescription from mesh, (2) Validate against design specs, (3) Document in parent README, (4) Use explicit focal_length in ZernikeOPDExtractor if fit quality is poor."}
{"timestamp":"2025-12-22T11:05:00","category":"success_pattern","context":"Implemented OPD-based Zernike method for lateral support optimization","insight":"PHYSICS PATTERN: Standard Zernike WFE analysis uses Z-displacement at original (x,y) coordinates. This is INCORRECT for lateral support optimization where nodes shift in X,Y. The rigorous OPD method computes: surface_error = dz - delta_z_parabola where delta_z_parabola = -delta_r²/(4f) for concave mirrors. This accounts for the fact that laterally displaced nodes should be compared against parabola height at their NEW position. Implemented in extract_zernike_opd.py with ZernikeOPDExtractor class. Use extract_comparison() to see method difference. Threshold: >10µm lateral displacement = CRITICAL to use OPD.","confidence":1.0,"tags":["zernike","opd","lateral-support","mirror","wfe","physics"],"rule":"For mirror optimization with lateral supports or any case where X,Y displacement may be significant: (1) Use ZernikeOPDExtractor instead of ZernikeExtractor, (2) Run zernike_opd_comparison insight to check lateral displacement magnitude, (3) If max lateral >10µm, OPD method is CRITICAL."}
|
{"timestamp":"2025-12-22T11:05:00","category":"success_pattern","context":"Implemented OPD-based Zernike method for lateral support optimization","insight":"PHYSICS PATTERN: Standard Zernike WFE analysis uses Z-displacement at original (x,y) coordinates. This is INCORRECT for lateral support optimization where nodes shift in X,Y. The rigorous OPD method computes: surface_error = dz - delta_z_parabola where delta_z_parabola = -delta_r²/(4f) for concave mirrors. This accounts for the fact that laterally displaced nodes should be compared against parabola height at their NEW position. Implemented in extract_zernike_opd.py with ZernikeOPDExtractor class. Use extract_comparison() to see method difference. Threshold: >10µm lateral displacement = CRITICAL to use OPD.","confidence":1.0,"tags":["zernike","opd","lateral-support","mirror","wfe","physics"],"rule":"For mirror optimization with lateral supports or any case where X,Y displacement may be significant: (1) Use ZernikeOPDExtractor instead of ZernikeExtractor, (2) Run zernike_opd_comparison insight to check lateral displacement magnitude, (3) If max lateral >10µm, OPD method is CRITICAL."}
|
||||||
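The OPD correction in the insight above can be sketched as a standalone function. This is only an illustration of the stated relation (surface error = dz minus the parabola-height change at the node's new radial position); the function name and argument order are assumptions, not the actual `ZernikeOPDExtractor` API:

```python
import math

def opd_surface_error(x, y, dx, dy, dz, focal_length):
    """Sketch: surface error of a laterally displaced node on a concave
    parabolic mirror, comparing against the parabola at its NEW position."""
    # Concave parabola sag: z(r) = -r^2 / (4 f)
    def sag(r):
        return -(r * r) / (4.0 * focal_length)

    r_old = math.hypot(x, y)
    r_new = math.hypot(x + dx, y + dy)
    delta_z_parabola = sag(r_new) - sag(r_old)
    return dz - delta_z_parabola
```

With zero lateral displacement this reduces to the standard dz-based error; with lateral shifts the parabola-height change is subtracted out, which is why the insight flags >10 µm lateral displacement as the threshold where the methods diverge materially.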
{"timestamp": "2025-12-24T08:13:38.640319", "category": "success_pattern", "context": "Neural surrogate turbo optimization with FEA validation", "insight": "For surrogate-based optimization, log FEA validation trials to a SEPARATE Optuna study.db for dashboard visibility. The surrogate exploration runs internally (not logged), but every FEA validation gets logged to Optuna using study.ask()/tell() pattern. This allows dashboard monitoring of FEA progress while keeping surrogate trials private.", "confidence": 0.95, "tags": ["surrogate", "turbo", "optuna", "dashboard", "fea", "neural"]}
{"timestamp": "2025-12-28T10:15:00", "category": "success_pattern", "context": "Unified trial management with TrialManager and DashboardDB", "insight": "TRIAL MANAGEMENT PATTERN: Use TrialManager for consistent trial_NNNN naming across all optimization methods (Optuna, Turbo, GNN, manual). Key principles: (1) Trial numbers NEVER reset (monotonic), (2) Folders NEVER get overwritten, (3) Database always synced with filesystem, (4) Surrogate predictions are NOT trials - only FEA results. DashboardDB provides Optuna-compatible schema for dashboard integration. Path: optimization_engine/utils/trial_manager.py", "confidence": 0.95, "tags": ["trial_manager", "dashboard_db", "optuna", "trial_naming", "turbo"]}
{"timestamp": "2025-12-28T10:15:00", "category": "success_pattern", "context": "GNN Turbo training data loading from multiple studies", "insight": "MULTI-STUDY TRAINING: When loading training data from multiple prior studies for GNN surrogate training, param names may have unit prefixes like '[mm]rib_thickness' or '[Degrees]angle'. Strip prefixes: if ']' in name: name = name.split(']', 1)[1]. Also, objective attribute names vary between studies (rel_filtered_rms_40_vs_20 vs obj_rel_filtered_rms_40_vs_20) - use fallback chain with 'or'. V5 successfully trained on 316 samples (V3: 297, V4: 19) with R²=[0.94, 0.94, 0.89, 0.95].", "confidence": 0.9, "tags": ["gnn", "turbo", "training_data", "multi_study", "param_naming"]}
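The multi-study normalization described in the last insight can be captured in two small helpers. A minimal sketch; the helper names are illustrative, not taken from the repository:

```python
def normalize_param_name(name: str) -> str:
    # Strip unit prefixes such as "[mm]rib_thickness" -> "rib_thickness"
    return name.split(']', 1)[1] if ']' in name else name

def first_present(attrs: dict, *candidates):
    # Fallback chain for objective attribute names that vary between studies,
    # e.g. "rel_filtered_rms_40_vs_20" vs "obj_rel_filtered_rms_40_vs_20"
    for key in candidates:
        if key in attrs:
            return attrs[key]
    return None
```

For example, `first_present(trial_attrs, "rel_filtered_rms_40_vs_20", "obj_rel_filtered_rms_40_vs_20")` resolves whichever spelling a given study used.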
2	knowledge_base/lac/session_insights/workaround.jsonl	Normal file
@@ -0,0 +1,2 @@
{"timestamp": "2025-12-24T08:13:38.641823", "category": "workaround", "context": "Turbo optimization study structure", "insight": "Turbo studies use 3_results/ not 2_results/. Dashboard already supports both. Use study.db for Optuna-format (dashboard compatible), study_custom.db for internal custom tracking. Backfill script (scripts/backfill_optuna.py) can convert existing trials.", "confidence": 0.9, "tags": ["turbo", "study_structure", "optuna", "dashboard"]}
{"timestamp": "2025-12-28T10:15:00", "category": "workaround", "context": "Custom database schema not showing in dashboard", "insight": "DASHBOARD COMPATIBILITY: If a study uses custom database schema instead of Optuna's (missing trial_values, trial_params, trial_user_attributes tables), the dashboard won't show trials. Use convert_custom_to_optuna() from dashboard_db.py to convert. This function drops all tables and recreates with Optuna-compatible schema, migrating all trial data.", "confidence": 0.95, "tags": ["dashboard", "optuna", "database", "schema", "migration"]}
574	optimization_engine/utils/dashboard_db.py	Normal file
@@ -0,0 +1,574 @@
"""
Dashboard Database Compatibility Module
========================================

Provides an Optuna-compatible database schema for all optimization types,
ensuring dashboard compatibility regardless of optimization method
(standard Optuna, turbo/surrogate, GNN, etc.).

Usage:
    from optimization_engine.utils.dashboard_db import DashboardDB

    # Initialize (creates Optuna-compatible schema)
    db = DashboardDB(study_dir / "3_results" / "study.db", study_name="my_study")

    # Log a trial
    db.log_trial(
        params={"rib_thickness": 10.5, "mass": 118.0},
        objectives={"wfe_40_20": 5.63, "wfe_60_20": 12.75},
        weighted_sum=175.87,  # optional, for single-objective ranking
        is_feasible=True,
        metadata={"turbo_iteration": 1, "predicted_ws": 186.77},
    )

    # Mark best trial
    db.mark_best(trial_id=1)

    # Get summary
    print(db.get_summary())

Schema follows Optuna's native format for full dashboard compatibility.
"""

import sqlite3
import json
import shutil
from pathlib import Path
from datetime import datetime
from typing import Dict, Any, Optional, List, Union


class DashboardDB:
    """Optuna-compatible database wrapper for dashboard integration."""

    SCHEMA_VERSION = 1

    def __init__(self, db_path: Union[str, Path], study_name: str, direction: str = "MINIMIZE"):
        """
        Initialize database with Optuna-compatible schema.

        Args:
            db_path: Path to SQLite database file
            study_name: Name of the optimization study
            direction: "MINIMIZE" or "MAXIMIZE"
        """
        self.db_path = Path(db_path)
        self.study_name = study_name
        self.direction = direction
        self._init_schema()

    def _init_schema(self):
        """Create Optuna-compatible database schema."""
        self.db_path.parent.mkdir(parents=True, exist_ok=True)

        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()

        # Core Optuna tables

        # version_info - tracks schema version
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS version_info (
                version_info_id INTEGER PRIMARY KEY,
                schema_version INTEGER,
                library_version VARCHAR(256)
            )
        ''')

        # Insert version if not exists
        cursor.execute("SELECT COUNT(*) FROM version_info")
        if cursor.fetchone()[0] == 0:
            cursor.execute(
                "INSERT INTO version_info (schema_version, library_version) VALUES (?, ?)",
                (12, "atomizer-dashboard-1.0")
            )

        # studies - Optuna study metadata
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS studies (
                study_id INTEGER PRIMARY KEY,
                study_name VARCHAR(512) UNIQUE
            )
        ''')

        # Insert study if not exists
        cursor.execute("SELECT study_id FROM studies WHERE study_name = ?", (self.study_name,))
        result = cursor.fetchone()
        if result:
            self.study_id = result[0]
        else:
            cursor.execute("INSERT INTO studies (study_name) VALUES (?)", (self.study_name,))
            self.study_id = cursor.lastrowid

        # study_directions - optimization direction
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS study_directions (
                study_direction_id INTEGER PRIMARY KEY,
                direction VARCHAR(8) NOT NULL,
                study_id INTEGER,
                objective INTEGER,
                FOREIGN KEY (study_id) REFERENCES studies(study_id)
            )
        ''')

        # Insert direction if not exists
        cursor.execute(
            "SELECT COUNT(*) FROM study_directions WHERE study_id = ?",
            (self.study_id,)
        )
        if cursor.fetchone()[0] == 0:
            cursor.execute(
                "INSERT INTO study_directions (direction, study_id, objective) VALUES (?, ?, ?)",
                (self.direction, self.study_id, 0)
            )

        # trials - main trial table (Optuna schema)
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS trials (
                trial_id INTEGER PRIMARY KEY,
                number INTEGER,
                study_id INTEGER,
                state VARCHAR(8) NOT NULL DEFAULT 'COMPLETE',
                datetime_start DATETIME,
                datetime_complete DATETIME,
                FOREIGN KEY (study_id) REFERENCES studies(study_id)
            )
        ''')

        # trial_values - objective values
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS trial_values (
                trial_value_id INTEGER PRIMARY KEY,
                trial_id INTEGER,
                objective INTEGER,
                value FLOAT,
                value_type VARCHAR(7) DEFAULT 'FINITE',
                FOREIGN KEY (trial_id) REFERENCES trials(trial_id)
            )
        ''')

        # trial_params - parameter values
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS trial_params (
                param_id INTEGER PRIMARY KEY,
                trial_id INTEGER,
                param_name VARCHAR(512),
                param_value FLOAT,
                distribution_json TEXT,
                FOREIGN KEY (trial_id) REFERENCES trials(trial_id)
            )
        ''')

        # trial_user_attributes - custom metadata
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS trial_user_attributes (
                trial_user_attribute_id INTEGER PRIMARY KEY,
                trial_id INTEGER,
                key VARCHAR(512),
                value_json TEXT,
                FOREIGN KEY (trial_id) REFERENCES trials(trial_id)
            )
        ''')

        # trial_system_attributes - system metadata
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS trial_system_attributes (
                trial_system_attribute_id INTEGER PRIMARY KEY,
                trial_id INTEGER,
                key VARCHAR(512),
                value_json TEXT,
                FOREIGN KEY (trial_id) REFERENCES trials(trial_id)
            )
        ''')

        # study_user_attributes
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS study_user_attributes (
                study_user_attribute_id INTEGER PRIMARY KEY,
                study_id INTEGER,
                key VARCHAR(512),
                value_json TEXT,
                FOREIGN KEY (study_id) REFERENCES studies(study_id)
            )
        ''')

        # study_system_attributes
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS study_system_attributes (
                study_system_attribute_id INTEGER PRIMARY KEY,
                study_id INTEGER,
                key VARCHAR(512),
                value_json TEXT,
                FOREIGN KEY (study_id) REFERENCES studies(study_id)
            )
        ''')

        # trial_intermediate_values (for pruning callbacks)
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS trial_intermediate_values (
                trial_intermediate_value_id INTEGER PRIMARY KEY,
                trial_id INTEGER,
                step INTEGER,
                intermediate_value FLOAT,
                intermediate_value_type VARCHAR(7) DEFAULT 'FINITE',
                FOREIGN KEY (trial_id) REFERENCES trials(trial_id)
            )
        ''')

        # trial_heartbeats (for distributed optimization)
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS trial_heartbeats (
                trial_heartbeat_id INTEGER PRIMARY KEY,
                trial_id INTEGER,
                heartbeat DATETIME,
                FOREIGN KEY (trial_id) REFERENCES trials(trial_id)
            )
        ''')

        # alembic_version (Optuna uses alembic for migrations)
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS alembic_version (
                version_num VARCHAR(32) PRIMARY KEY
            )
        ''')
        cursor.execute("INSERT OR IGNORE INTO alembic_version VALUES ('v3.0.0')")

        # Create indexes for performance
        cursor.execute("CREATE INDEX IF NOT EXISTS ix_trials_study_id ON trials(study_id)")
        cursor.execute("CREATE INDEX IF NOT EXISTS ix_trials_state ON trials(state)")
        cursor.execute("CREATE INDEX IF NOT EXISTS ix_trial_values_trial_id ON trial_values(trial_id)")
        cursor.execute("CREATE INDEX IF NOT EXISTS ix_trial_params_trial_id ON trial_params(trial_id)")

        conn.commit()
        conn.close()

    def log_trial(
        self,
        params: Dict[str, float],
        objectives: Dict[str, float],
        weighted_sum: Optional[float] = None,
        is_feasible: bool = True,
        state: str = "COMPLETE",
        datetime_start: Optional[str] = None,
        datetime_complete: Optional[str] = None,
        metadata: Optional[Dict[str, Any]] = None,
    ) -> int:
        """
        Log a trial to the database.

        Args:
            params: Parameter name -> value mapping
            objectives: Objective name -> value mapping
            weighted_sum: Optional weighted sum for single-objective ranking
            is_feasible: Whether the trial meets constraints
            state: Trial state ("COMPLETE", "PRUNED", "FAIL", "RUNNING")
            datetime_start: ISO-format timestamp
            datetime_complete: ISO-format timestamp
            metadata: Additional metadata (turbo_iteration, predicted values, etc.)

        Returns:
            trial_id of the inserted trial
        """
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()

        # Get next trial number
        cursor.execute(
            "SELECT COALESCE(MAX(number), -1) + 1 FROM trials WHERE study_id = ?",
            (self.study_id,)
        )
        trial_number = cursor.fetchone()[0]

        # Default timestamps
        now = datetime.now().isoformat()
        dt_start = datetime_start or now
        dt_complete = datetime_complete or now

        # Insert trial
        cursor.execute('''
            INSERT INTO trials (number, study_id, state, datetime_start, datetime_complete)
            VALUES (?, ?, ?, ?, ?)
        ''', (trial_number, self.study_id, state, dt_start, dt_complete))
        trial_id = cursor.lastrowid

        # Insert objective value.
        # Use weighted_sum as the primary objective if provided, else the first objective value.
        primary_value = weighted_sum if weighted_sum is not None else list(objectives.values())[0]
        cursor.execute('''
            INSERT INTO trial_values (trial_id, objective, value, value_type)
            VALUES (?, ?, ?, ?)
        ''', (trial_id, 0, primary_value, 'FINITE'))

        # Insert all objectives as user attributes
        for obj_name, obj_value in objectives.items():
            cursor.execute('''
                INSERT INTO trial_user_attributes (trial_id, key, value_json)
                VALUES (?, ?, ?)
            ''', (trial_id, f"obj_{obj_name}", json.dumps(obj_value)))

        # Insert parameters
        for param_name, param_value in params.items():
            cursor.execute('''
                INSERT INTO trial_params (trial_id, param_name, param_value, distribution_json)
                VALUES (?, ?, ?, ?)
            ''', (trial_id, param_name, param_value, '{}'))

        # Insert feasibility as a user attribute
        cursor.execute('''
            INSERT INTO trial_user_attributes (trial_id, key, value_json)
            VALUES (?, ?, ?)
        ''', (trial_id, 'is_feasible', json.dumps(is_feasible)))

        # Insert metadata
        if metadata:
            for key, value in metadata.items():
                cursor.execute('''
                    INSERT INTO trial_user_attributes (trial_id, key, value_json)
                    VALUES (?, ?, ?)
                ''', (trial_id, key, json.dumps(value)))

        conn.commit()
        conn.close()

        return trial_id

    def mark_best(self, trial_id: int):
        """Mark a trial as the best (adds a user attribute)."""
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()

        # Remove previous best markers
        cursor.execute('''
            DELETE FROM trial_user_attributes
            WHERE key = 'is_best' AND trial_id IN (
                SELECT trial_id FROM trials WHERE study_id = ?
            )
        ''', (self.study_id,))

        # Mark new best
        cursor.execute('''
            INSERT INTO trial_user_attributes (trial_id, key, value_json)
            VALUES (?, 'is_best', 'true')
        ''', (trial_id,))

        conn.commit()
        conn.close()

    def get_trial_count(self, state: str = "COMPLETE") -> int:
        """Get the count of trials in the given state."""
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()
        cursor.execute(
            "SELECT COUNT(*) FROM trials WHERE study_id = ? AND state = ?",
            (self.study_id, state)
        )
        count = cursor.fetchone()[0]
        conn.close()
        return count

    def get_best_trial(self) -> Optional[Dict[str, Any]]:
        """Get the best trial (lowest objective value)."""
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()

        cursor.execute('''
            SELECT t.trial_id, t.number, tv.value
            FROM trials t
            JOIN trial_values tv ON t.trial_id = tv.trial_id
            WHERE t.study_id = ? AND t.state = 'COMPLETE'
            ORDER BY tv.value ASC
            LIMIT 1
        ''', (self.study_id,))

        result = cursor.fetchone()
        conn.close()

        if result:
            return {
                'trial_id': result[0],
                'number': result[1],
                'value': result[2],
            }
        return None

    def get_summary(self) -> Dict[str, Any]:
        """Get a database summary for logging."""
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()

        cursor.execute(
            "SELECT COUNT(*) FROM trials WHERE study_id = ? AND state = 'COMPLETE'",
            (self.study_id,)
        )
        complete = cursor.fetchone()[0]

        cursor.execute(
            "SELECT COUNT(*) FROM trials WHERE study_id = ? AND state = 'PRUNED'",
            (self.study_id,)
        )
        pruned = cursor.fetchone()[0]

        best = self.get_best_trial()

        conn.close()

        return {
            'study_name': self.study_name,
            'complete_trials': complete,
            'pruned_trials': pruned,
            'best_value': best['value'] if best else None,
            'best_trial': best['number'] if best else None,
        }

    def clear(self):
        """Clear all trials (for re-running)."""
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()

        for table in (
            "trial_user_attributes",
            "trial_system_attributes",
            "trial_values",
            "trial_params",
            "trial_intermediate_values",
            "trial_heartbeats",
        ):
            cursor.execute(
                f"DELETE FROM {table} WHERE trial_id IN "
                "(SELECT trial_id FROM trials WHERE study_id = ?)",
                (self.study_id,)
            )
        cursor.execute("DELETE FROM trials WHERE study_id = ?", (self.study_id,))

        conn.commit()
        conn.close()


def convert_custom_to_optuna(
    db_path: Union[str, Path],
    study_name: str,
    custom_table: str = "trials",
    param_columns: Optional[List[str]] = None,
    objective_column: str = "weighted_sum",
    status_column: str = "status",
    datetime_column: str = "datetime_complete",
) -> int:
    """
    Convert a custom database schema to Optuna-compatible format.

    Args:
        db_path: Path to the database
        study_name: Name for the study
        custom_table: Name of the custom trials table to convert
        param_columns: List of parameter column names (currently unused;
            params are read from the 'params_json' column)
        objective_column: Column containing the objective value
        status_column: Column containing the trial status
        datetime_column: Column containing the timestamp

    Returns:
        Number of trials converted
    """
    db_path = Path(db_path)
    backup_path = db_path.with_suffix('.db.bak')

    # Back up the original before the destructive conversion
    shutil.copy(db_path, backup_path)

    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()

    # Check that the custom table exists
    cursor.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name=?",
        (custom_table,)
    )
    if not cursor.fetchone():
        conn.close()
        raise ValueError(f"Table '{custom_table}' not found")

    # Get column names
    cursor.execute(f"PRAGMA table_info({custom_table})")
    col_names = [row[1] for row in cursor.fetchall()]

    # Read all custom trials
    cursor.execute(f"SELECT * FROM {custom_table}")
    custom_trials = cursor.fetchall()

    # Drop ALL existing tables to start fresh
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
    existing_tables = [row[0] for row in cursor.fetchall()]
    for table in existing_tables:
        if table != 'sqlite_sequence':  # Don't drop SQLite's internal table
            cursor.execute(f"DROP TABLE IF EXISTS {table}")
    conn.commit()
    conn.close()

    # Now create a proper Optuna schema from scratch
    db = DashboardDB(db_path, study_name)

    converted = 0
    for row in custom_trials:
        trial_data = dict(zip(col_names, row))

        # Extract params from JSON if available
        params = {}
        if trial_data.get('params_json'):
            try:
                params = json.loads(trial_data['params_json'])
            except (json.JSONDecodeError, TypeError):
                pass

        # Extract objectives from JSON if available
        objectives = {}
        if trial_data.get('objectives_json'):
            try:
                objectives = json.loads(trial_data['objectives_json'])
            except (json.JSONDecodeError, TypeError):
                pass

        # Get weighted sum
        weighted_sum = trial_data.get(objective_column)

        # Map status to state
        status = trial_data.get(status_column) or 'COMPLETE'
        state = 'COMPLETE' if status.upper() in ('COMPLETE', 'COMPLETED') else status.upper()

        # Get feasibility
        is_feasible = bool(trial_data.get('is_feasible', 1))

        # Build metadata
        metadata = {}
        for key in ['turbo_iteration', 'predicted_ws', 'prediction_error', 'solve_time']:
            if trial_data.get(key) is not None:
                metadata[key] = trial_data[key]

        # Log trial
        db.log_trial(
            params=params,
            objectives=objectives,
            weighted_sum=weighted_sum,
            is_feasible=is_feasible,
            state=state,
            datetime_start=trial_data.get('datetime_start'),
            datetime_complete=trial_data.get(datetime_column),
            metadata=metadata,
        )
        converted += 1

    return converted


# Convenience function for turbo optimization
def init_turbo_database(study_dir: Path, study_name: str) -> DashboardDB:
    """
    Initialize a dashboard-compatible database for turbo optimization.

    Args:
        study_dir: Study directory (contains 3_results/)
        study_name: Name of the study

    Returns:
        DashboardDB instance ready for logging
    """
    results_dir = study_dir / "3_results"
    results_dir.mkdir(parents=True, exist_ok=True)
    db_path = results_dir / "study.db"

    return DashboardDB(db_path, study_name)
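The "trial numbers NEVER reset" principle that TrialManager (below) enforces over trial_NNNN folders can be sketched as a standalone function. This is only the filesystem half of the rule; the real implementation also takes the maximum of the database's trial numbers, and the function name here is illustrative:

```python
from pathlib import Path

def next_trial_number(iterations_dir: Path) -> int:
    # Monotonic numbering: scan trial_NNNN folders and never reuse a number,
    # so deleting or renaming old folders cannot cause an overwrite.
    max_seen = 0
    for folder in iterations_dir.glob("trial_*"):
        try:
            max_seen = max(max_seen, int(folder.name.split("_")[1]))
        except (IndexError, ValueError):
            continue  # ignore folders that don't match trial_NNNN
    return max_seen + 1
```

Because only the maximum is tracked, gaps in the sequence (e.g. a deleted trial_0003) are preserved rather than refilled, which keeps folder names stable identifiers across the study's lifetime.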
292	optimization_engine/utils/trial_manager.py	Normal file
@@ -0,0 +1,292 @@
|
|||||||
|
"""
|
||||||
|
Trial Manager - Unified trial numbering and folder management
|
||||||
|
==============================================================
|
||||||
|
|
||||||
|
Provides consistent trial_NNNN naming across all optimization methods
|
||||||
|
(Optuna, Turbo, GNN, manual) with proper database integration.
|
||||||
|
|
||||||
|
Usage:
|
||||||
|
from optimization_engine.utils.trial_manager import TrialManager
|
||||||
|
|
||||||
|
tm = TrialManager(study_dir)
|
||||||
|
|
||||||
|
# Get next trial (creates folder, reserves DB row)
|
||||||
|
trial = tm.new_trial(params={'rib_thickness': 10.5, ...})
|
||||||
|
|
||||||
|
# After FEA completes
|
||||||
|
tm.complete_trial(
|
||||||
|
trial_id=trial['trial_id'],
|
||||||
|
objectives={'wfe_40_20': 5.63, 'mass_kg': 118.67},
|
||||||
|
metadata={'solve_time': 211.7}
|
||||||
|
)
|
||||||
|
|
||||||
|
Key principles:
|
||||||
|
- Trial numbers NEVER reset (monotonically increasing)
|
||||||
|
- Folders NEVER get overwritten
|
||||||
|
- Database is always in sync with filesystem
|
||||||
|
- Surrogate predictions are NOT trials (only FEA results)
|
||||||
|
"""
|
||||||
|
|
||||||
|
import json
|
||||||
|
import sqlite3
|
||||||
|
import shutil
|
||||||
|
from pathlib import Path
|
||||||
|
from datetime import datetime
|
||||||
|
from typing import Dict, Any, Optional, List, Union
|
||||||
|
from filelock import FileLock
|
||||||
|
|
||||||
|
from .dashboard_db import DashboardDB
|
||||||
|
|
||||||
|
|
||||||
|
class TrialManager:
|
||||||
|
"""Manages trial numbering, folders, and database for optimization studies."""
|
||||||
|
|
||||||
|
def __init__(self, study_dir: Union[str, Path], study_name: Optional[str] = None):
|
||||||
|
"""
|
||||||
|
Initialize trial manager for a study.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
study_dir: Path to study directory (contains 1_setup/, 2_iterations/, 3_results/)
|
||||||
|
study_name: Name of study (defaults to directory name)
|
||||||
|
"""
|
||||||
|
self.study_dir = Path(study_dir)
|
||||||
|
self.study_name = study_name or self.study_dir.name
|
||||||
|
|
||||||
|
self.iterations_dir = self.study_dir / "2_iterations"
|
||||||
|
self.results_dir = self.study_dir / "3_results"
|
||||||
|
self.db_path = self.results_dir / "study.db"
|
||||||
|
self.lock_path = self.results_dir / ".trial_lock"
|
||||||
|
|
||||||
|
# Ensure directories exist
|
||||||
|
self.iterations_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
self.results_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
# Initialize database
|
||||||
|
self.db = DashboardDB(self.db_path, self.study_name)
|
||||||
|
|
||||||
|
def _get_next_trial_number(self) -> int:
|
||||||
|
"""Get next available trial number (never resets)."""
|
||||||
|
# Check filesystem
|
||||||
|
existing_folders = list(self.iterations_dir.glob("trial_*"))
|
||||||
|
max_folder = 0
|
||||||
|
for folder in existing_folders:
|
||||||
|
try:
|
||||||
|
num = int(folder.name.split('_')[1])
|
||||||
|
max_folder = max(max_folder, num)
|
||||||
|
except (IndexError, ValueError):
|
||||||
|
continue
|
||||||
|
|
||||||
|
# Check database
|
||||||
|
conn = sqlite3.connect(self.db_path)
|
||||||
|
cursor = conn.cursor()
|
||||||
|
cursor.execute("SELECT COALESCE(MAX(number), -1) + 1 FROM trials")
|
||||||
|
max_db = cursor.fetchone()[0]
|
||||||
|
conn.close()
|
||||||
|
|
||||||
|
# Return max of both + 1 (use 1-based for folders, 0-based for DB)
|
||||||
|
return max(max_folder, max_db) + 1
|
||||||
|
|
||||||
|
    def new_trial(
        self,
        params: Dict[str, float],
        source: str = "turbo",
        metadata: Optional[Dict[str, Any]] = None
    ) -> Dict[str, Any]:
        """
        Start a new trial: create its folder and write params/metadata.

        The database row is created later, by complete_trial().

        Args:
            params: Design parameters for this trial
            source: How this trial was generated ("turbo", "optuna", "manual")
            metadata: Additional info (turbo_batch, predicted_ws, etc.)

        Returns:
            Dict with trial_id, trial_number, folder_path
        """
        # Use a file lock to prevent race conditions between workers
        with FileLock(self.lock_path):
            trial_number = self._get_next_trial_number()

            # Create folder with zero-padded name
            folder_name = f"trial_{trial_number:04d}"
            folder_path = self.iterations_dir / folder_name
            folder_path.mkdir(exist_ok=True)

            # Save params to folder
            params_file = folder_path / "params.json"
            with open(params_file, 'w') as f:
                json.dump(params, f, indent=2)

            # Also save in .exp format for NX compatibility
            exp_file = folder_path / "params.exp"
            with open(exp_file, 'w') as f:
                for name, value in params.items():
                    f.write(f"[mm]{name}={value}\n")

            # Save metadata
            meta = {
                "trial_number": trial_number,
                "source": source,
                "status": "RUNNING",
                "datetime_start": datetime.now().isoformat(),
                "params": params,
            }
            if metadata:
                meta.update(metadata)

            meta_file = folder_path / "_meta.json"
            with open(meta_file, 'w') as f:
                json.dump(meta, f, indent=2)

            return {
                "trial_id": trial_number,  # updated after the DB insert in complete_trial()
                "trial_number": trial_number,
                "folder_path": folder_path,
                "folder_name": folder_name,
            }

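The `params.exp` lines written above follow NX's expression-file convention of one `[mm]name=value` entry per parameter. A minimal round-trip sketch (helper names and the parser are illustrative, not part of this module):

```python
def exp_lines(params):
    # same format new_trial() writes to params.exp
    return "".join(f"[mm]{name}={value}\n" for name, value in params.items())

def parse_exp(text):
    # inverse sketch: strip the [mm] unit prefix and split on the first '='
    out = {}
    for line in text.splitlines():
        name, value = line.removeprefix("[mm]").split("=", 1)
        out[name] = float(value)
    return out

text = exp_lines({"rib_thickness": 3.5, "web_height": 40.0})
assert parse_exp(text) == {"rib_thickness": 3.5, "web_height": 40.0}
```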
    def complete_trial(
        self,
        trial_number: int,
        objectives: Dict[str, float],
        weighted_sum: Optional[float] = None,
        is_feasible: bool = True,
        metadata: Optional[Dict[str, Any]] = None
    ) -> int:
        """
        Complete a trial: log it to the database and update folder metadata.

        Args:
            trial_number: Trial number from new_trial()
            objectives: Objective values from FEA
            weighted_sum: Combined objective for ranking
            is_feasible: Whether constraints are satisfied
            metadata: Additional info (solve_time, prediction_error, etc.)

        Returns:
            Database trial_id
        """
        folder_path = self.iterations_dir / f"trial_{trial_number:04d}"

        # Load existing metadata
        meta_file = folder_path / "_meta.json"
        with open(meta_file, 'r') as f:
            meta = json.load(f)

        params = meta.get("params", {})

        # Update metadata
        meta["status"] = "COMPLETE"
        meta["datetime_complete"] = datetime.now().isoformat()
        meta["objectives"] = objectives
        meta["weighted_sum"] = weighted_sum
        meta["is_feasible"] = is_feasible
        if metadata:
            meta.update(metadata)

        # Save results.json
        results_file = folder_path / "results.json"
        with open(results_file, 'w') as f:
            json.dump({
                "objectives": objectives,
                "weighted_sum": weighted_sum,
                "is_feasible": is_feasible,
                "metadata": metadata or {}
            }, f, indent=2)

        # Update _meta.json
        with open(meta_file, 'w') as f:
            json.dump(meta, f, indent=2)

        # Log to database (copy the dict so the caller's metadata isn't mutated)
        db_metadata = dict(metadata) if metadata else {}
        db_metadata["source"] = meta.get("source", "unknown")
        if "turbo_batch" in meta:
            db_metadata["turbo_batch"] = meta["turbo_batch"]
        if "predicted_ws" in meta:
            db_metadata["predicted_ws"] = meta["predicted_ws"]

        trial_id = self.db.log_trial(
            params=params,
            objectives=objectives,
            weighted_sum=weighted_sum,
            is_feasible=is_feasible,
            state="COMPLETE",
            datetime_start=meta.get("datetime_start"),
            datetime_complete=meta.get("datetime_complete"),
            metadata=db_metadata,
        )

        # Check whether this trial is the new best
        best = self.db.get_best_trial()
        if best and best['trial_id'] == trial_id:
            self.db.mark_best(trial_id)
            meta["is_best"] = True
            with open(meta_file, 'w') as f:
                json.dump(meta, f, indent=2)

        return trial_id

    def fail_trial(self, trial_number: int, error: str):
        """Mark a trial as failed (filesystem metadata only)."""
        folder_path = self.iterations_dir / f"trial_{trial_number:04d}"
        meta_file = folder_path / "_meta.json"

        if meta_file.exists():
            with open(meta_file, 'r') as f:
                meta = json.load(f)
            meta["status"] = "FAIL"
            meta["error"] = error
            meta["datetime_complete"] = datetime.now().isoformat()
            with open(meta_file, 'w') as f:
                json.dump(meta, f, indent=2)

    def get_trial_folder(self, trial_number: int) -> Path:
        """Get the folder path for a trial number."""
        return self.iterations_dir / f"trial_{trial_number:04d}"

    def get_all_trials(self) -> List[Dict[str, Any]]:
        """Get all completed trials from the database."""
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()

        cursor.execute("""
            SELECT t.trial_id, t.number, tv.value
            FROM trials t
            JOIN trial_values tv ON t.trial_id = tv.trial_id
            WHERE t.state = 'COMPLETE'
            ORDER BY t.number
        """)

        trials = []
        for row in cursor.fetchall():
            trials.append({
                "trial_id": row[0],
                "number": row[1],
                "value": row[2]
            })

        conn.close()
        return trials

    def get_summary(self) -> Dict[str, Any]:
        """Get a trial manager summary."""
        summary = self.db.get_summary()

        # Add folder count
        folders = list(self.iterations_dir.glob("trial_*"))
        summary["folder_count"] = len(folders)

        return summary

    def copy_model_files(self, source_dir: Path, trial_number: int) -> Path:
        """Copy NX model files into the trial folder."""
        dest = self.get_trial_folder(trial_number)

        # Copy solver-relevant files
        extensions = ['.prt', '.fem', '.sim', '.afm', '.op2', '.f06', '.dat']
        for ext in extensions:
            for src_file in source_dir.glob(f"*{ext}"):
                shutil.copy2(src_file, dest / src_file.name)

        return dest
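The query in `get_all_trials()` assumes the Optuna-style `trials`/`trial_values` tables maintained by DashboardDB. Against a minimal stand-in schema (columns reduced to those the query touches), it behaves like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trials (trial_id INTEGER PRIMARY KEY, number INTEGER, state TEXT);
    CREATE TABLE trial_values (trial_id INTEGER, value REAL);
    INSERT INTO trials VALUES (1, 0, 'COMPLETE'), (2, 1, 'FAIL'), (3, 2, 'COMPLETE');
    INSERT INTO trial_values VALUES (1, 10.5), (2, 99.0), (3, 8.2);
""")
rows = conn.execute("""
    SELECT t.trial_id, t.number, tv.value
    FROM trials t
    JOIN trial_values tv ON t.trial_id = tv.trial_id
    WHERE t.state = 'COMPLETE'
    ORDER BY t.number
""").fetchall()
conn.close()
print(rows)  # [(1, 0, 10.5), (3, 2, 8.2)] -- failed trial 2 is excluded
```

Note that surrogate predictions never appear here: only FEA results reach the `trials` table, so `WHERE t.state = 'COMPLETE'` returns real evaluations only.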