refactor: Major project cleanup and reorganization
## Removed Duplicate Directories

- Deleted old `dashboard/` (replaced by atomizer-dashboard)
- Deleted old `mcp_server/` Python tools (moved model_discovery to optimization_engine)
- Deleted `tests/mcp_server/` (obsolete tests)
- Deleted `launch_dashboard.bat` (old launcher)

## Consolidated Code

- Moved `mcp_server/tools/model_discovery.py` to `optimization_engine/model_discovery/`
- Updated import in `optimization_config_builder.py`
- Deleted stub `extract_mass.py` (use extract_mass_from_bdf instead)
- Deleted unused `intelligent_setup.py` and `hybrid_study_creator.py`
- Archived `result_extractors/` to `archive/deprecated/`

## Documentation Cleanup

- Deleted deprecated `docs/06_PROTOCOLS_DETAILED/` (14 files)
- Archived dated dev docs to `docs/08_ARCHIVE/sessions/`
- Archived old plans to `docs/08_ARCHIVE/plans/`
- Updated `docs/protocols/README.md` with SYS_15

## Skills Consolidation

- Archived redundant study creation skills to `.claude/skills/archive/`
- Kept `core/study-creation-core.md` as canonical

## Housekeeping

- Updated `.gitignore` to ignore `nul` and `_dat_run*.dat`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
.gitignore (vendored): 4 additions
@@ -91,3 +91,7 @@ build/
 # OS
 Thumbs.db
 desktop.ini
+nul
+
+# Temporary data files
+_dat_run*.dat
dashboard/README.md (deleted)
@@ -1,188 +0,0 @@
# Atomizer Dashboard

Professional web-based dashboard for controlling and monitoring optimization runs.

## Features

### Study Management
- **List all studies** - View all optimization studies with metadata
- **Resume studies** - Continue existing studies with additional trials
- **Delete studies** - Clean up old/unwanted studies
- **Study details** - View complete history, results, and metadata

### Optimization Control
- **Start new optimizations** - Configure and launch optimization runs
- **Real-time monitoring** - Track progress of active optimizations
- **Configuration management** - Load and save optimization configs

### Visualization
- **Progress charts** - Objective values over trials
- **Design variable plots** - Track parameter evolution
- **Constraint visualization** - Monitor constraint satisfaction
- **Running best** - See convergence progress

### Results Analysis
- **Best results cards** - Quick view of optimal solutions
- **Trial history table** - Complete trial-by-trial data
- **Export capabilities** - Download results in CSV/JSON
## Installation

1. Install dependencies:
```bash
cd dashboard
pip install -r requirements.txt
```

2. Start the dashboard server:
```bash
python api/app.py
```

3. Open your browser to:
```
http://localhost:8080
```
## Quick Start

### Method 1: Using the launcher script
```bash
cd C:\Users\antoi\Documents\Atomaste\Atomizer
python dashboard/start_dashboard.py
```

### Method 2: Manual start
```bash
cd dashboard
python api/app.py
```

The dashboard will automatically open in your default browser at `http://localhost:8080`.
## Usage

### Starting a New Optimization

1. Click "New Optimization" button
2. Enter study name
3. Set number of trials
4. Select configuration file
5. Optionally check "Resume existing" if continuing a study
6. Click "Start Optimization"

### Viewing Study Results

1. Click on a study in the sidebar
2. View summary cards showing best results
3. Examine charts for optimization progress
4. Review trial history table for details

### Resuming a Study

1. Select the study from the sidebar
2. Click "Resume Study"
3. Enter number of additional trials
4. Optimization continues from where it left off

### Monitoring Active Optimizations

Active optimizations appear in the sidebar with:
- Real-time progress bars
- Current trial number
- Status indicators
## API Endpoints

### Studies
- `GET /api/studies` - List all studies
- `GET /api/studies/<name>` - Get study details
- `DELETE /api/studies/<name>/delete` - Delete a study

### Optimization
- `POST /api/optimization/start` - Start new optimization
- `GET /api/optimization/status` - Get all active optimizations
- `GET /api/optimization/<name>/status` - Get specific optimization status

### Configuration
- `GET /api/config/load?path=<path>` - Load config file
- `POST /api/config/save` - Save config file

### Visualization
- `GET /api/results/visualization/<name>` - Get chart data
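For orientation, the endpoints above can be driven from a small Python client sketch. The helper names below (`study_details_url`, `start_request`) are hypothetical, not part of the repository; the sketch assumes the default port 8080 and only builds the requests, it does not send them:

```python
import json
from urllib import request

API_BASE = "http://localhost:8080/api"

def study_details_url(name: str, base: str = API_BASE) -> str:
    """URL for the GET /api/studies/<name> endpoint."""
    return f"{base}/studies/{name}"

def start_request(study_name: str, n_trials: int = 50, resume: bool = False) -> request.Request:
    """Build (but do not send) the POST /api/optimization/start request."""
    body = json.dumps({
        "study_name": study_name,
        "n_trials": n_trials,
        "resume": resume,
    }).encode()
    return request.Request(
        f"{API_BASE}/optimization/start",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen(start_request("my_study"))` would then launch the study, assuming the server is running.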
## Architecture

```
dashboard/
├── api/
│   └── app.py           # Flask REST API server
├── frontend/
│   ├── index.html       # Main dashboard UI
│   ├── app.js           # JavaScript logic
│   └── styles.css       # Modern styling
├── requirements.txt     # Python dependencies
└── README.md            # This file
```
## Technology Stack

- **Backend**: Flask (Python)
- **Frontend**: Vanilla JavaScript + Chart.js
- **Styling**: Modern CSS with gradients and shadows
- **Charts**: Chart.js for interactive visualizations
## Features in Detail

### Real-time Monitoring
The dashboard polls active optimizations every 5 seconds to show:
- Current trial number
- Progress percentage
- Status (running/completed/failed)
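The progress figure only needs the trial counter from the status payload; a minimal sketch (the helper is hypothetical, mirroring the fields listed above):

```python
def progress_percent(status: dict) -> float:
    """Derive a 0-100 progress value from a status payload
    containing 'current_trial' and 'n_trials'."""
    n = status.get("n_trials", 0)
    if n <= 0:
        return 0.0
    return min(100.0, 100.0 * status.get("current_trial", 0) / n)
```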
### Study Persistence
All studies are stored in SQLite databases with:
- Complete trial history
- Optuna study state
- Metadata (creation date, config hash, resume count)
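Optuna persists a study by pointing it at a `sqlite:///` storage URL; a sketch of building such a URL for a per-study database (the `optimization_results` layout is illustrative, the actual paths come from the runner):

```python
from pathlib import Path

def study_storage_url(study_name: str, results_dir: str = "optimization_results") -> str:
    """SQLite storage URL of the form Optuna expects (sqlite:///<path>.db)."""
    db_path = Path(results_dir) / f"{study_name}.db"
    return f"sqlite:///{db_path.as_posix()}"
```

A study created against this URL survives process restarts, which is what makes the "Resume Study" workflow possible.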
### Configuration Detection
The system automatically detects when a study configuration has changed:
- Warns when resuming with different geometry
- Calculates MD5 hash of critical config parameters
- Helps prevent invalid resume operations
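The hash check can be sketched as an MD5 over a canonical JSON dump of the critical sections, so that key order and whitespace do not affect the result. The key names below are illustrative; the project may select different fields:

```python
import hashlib
import json

def config_hash(config: dict,
                critical_keys=("design_variables", "objectives", "constraints")) -> str:
    """MD5 over a canonical JSON dump of the critical config sections."""
    critical = {k: config.get(k) for k in critical_keys}
    payload = json.dumps(critical, sort_keys=True, separators=(",", ":"))
    return hashlib.md5(payload.encode()).hexdigest()
```

Comparing the stored hash against a freshly computed one on resume is enough to flag a changed geometry.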
## Troubleshooting

### Dashboard won't start
- Check that Flask is installed: `pip install flask flask-cors`
- Ensure port 8080 is not in use
- Check firewall settings

### Can't see studies
- Verify the optimization_results folder exists
- Check that studies have metadata files
- Refresh the studies list

### Charts not showing
- Ensure Chart.js loaded (check browser console)
- Verify the study has trial history
- Check that the API endpoints are responding
## Future Enhancements

- [ ] Multi-objective Pareto front visualization
- [ ] Export results to PDF/Excel
- [ ] Optimization comparison tool
- [ ] Parameter importance analysis
- [ ] Surrogate model visualization
- [ ] Configuration editor UI
- [ ] Live log streaming
- [ ] Email notifications on completion
## Support

For issues or questions:
1. Check the console for error messages
2. Verify the API is running (`http://localhost:8080/api/studies`)
3. Review optimization logs in the console
dashboard/api/app.py (deleted)
@@ -1,742 +0,0 @@
"""
|
|
||||||
Atomizer Dashboard API
|
|
||||||
|
|
||||||
RESTful API for controlling and monitoring optimization runs.
|
|
||||||
Provides endpoints for:
|
|
||||||
- Starting/stopping optimizations
|
|
||||||
- Managing studies (list, resume, delete)
|
|
||||||
- Real-time monitoring of progress
|
|
||||||
- Retrieving results and visualizations
|
|
||||||
"""
|
|
||||||
|
|
||||||
from flask import Flask, jsonify, request, send_from_directory
|
|
||||||
from flask_cors import CORS
|
|
||||||
import json
|
|
||||||
import sys
|
|
||||||
from pathlib import Path
|
|
||||||
from typing import Dict, List, Any
|
|
||||||
import threading
|
|
||||||
import time
|
|
||||||
from datetime import datetime
|
|
||||||
|
|
||||||
# Add project root to path
|
|
||||||
project_root = Path(__file__).parent.parent.parent
|
|
||||||
sys.path.insert(0, str(project_root))
|
|
||||||
|
|
||||||
from optimization_engine.runner import OptimizationRunner
|
|
||||||
|
|
||||||
app = Flask(__name__, static_folder='../frontend', static_url_path='')
|
|
||||||
CORS(app)
|
|
||||||
|
|
||||||
# Global state for running optimizations
|
|
||||||
active_optimizations = {}
|
|
||||||
optimization_lock = threading.Lock()
|
|
||||||
|
|
||||||
|
|
||||||
@app.route('/')
def index():
    """Serve the dashboard frontend."""
    return send_from_directory(app.static_folder, 'index.html')


@app.route('/api/studies', methods=['GET'])
def list_studies():
    """
    List all available optimization studies.

    Returns:
        JSON array of study metadata
    """
    try:
        # Use a dummy runner to access list_studies
        config_path = project_root / 'examples/bracket/optimization_config_stress_displacement.json'
        runner = OptimizationRunner(
            config_path=config_path,
            model_updater=lambda x: None,
            simulation_runner=lambda: Path('dummy.op2'),
            result_extractors={}
        )

        studies = runner.list_studies()
        return jsonify({
            'success': True,
            'studies': studies
        })
    except Exception as e:
        return jsonify({
            'success': False,
            'error': str(e)
        }), 500

@app.route('/api/studies/<study_name>', methods=['GET'])
def get_study_details(study_name: str):
    """
    Get detailed information about a specific study.

    Args:
        study_name: Name of the study

    Returns:
        JSON with study metadata, history, and summary
    """
    try:
        config_path = project_root / 'examples/bracket/optimization_config_stress_displacement.json'
        runner = OptimizationRunner(
            config_path=config_path,
            model_updater=lambda x: None,
            simulation_runner=lambda: Path('dummy.op2'),
            result_extractors={}
        )

        output_dir = runner.output_dir

        # Load history
        history_path = output_dir / 'history.json'
        history = []
        if history_path.exists():
            with open(history_path, 'r') as f:
                history = json.load(f)

        # Load summary
        summary_path = output_dir / 'optimization_summary.json'
        summary = {}
        if summary_path.exists():
            with open(summary_path, 'r') as f:
                summary = json.load(f)

        # Load metadata
        metadata_path = runner._get_study_metadata_path(study_name)
        metadata = {}
        if metadata_path.exists():
            with open(metadata_path, 'r') as f:
                metadata = json.load(f)

        return jsonify({
            'success': True,
            'study_name': study_name,
            'metadata': metadata,
            'history': history,
            'summary': summary
        })
    except Exception as e:
        return jsonify({
            'success': False,
            'error': str(e)
        }), 500

@app.route('/api/studies/<study_name>/delete', methods=['DELETE'])
def delete_study(study_name: str):
    """
    Delete a study and all its data.

    Args:
        study_name: Name of the study to delete
    """
    try:
        config_path = project_root / 'examples/bracket/optimization_config_stress_displacement.json'
        runner = OptimizationRunner(
            config_path=config_path,
            model_updater=lambda x: None,
            simulation_runner=lambda: Path('dummy.op2'),
            result_extractors={}
        )

        # Delete database and metadata
        db_path = runner._get_study_db_path(study_name)
        metadata_path = runner._get_study_metadata_path(study_name)

        if db_path.exists():
            db_path.unlink()
        if metadata_path.exists():
            metadata_path.unlink()

        return jsonify({
            'success': True,
            'message': f'Study {study_name} deleted successfully'
        })
    except Exception as e:
        return jsonify({
            'success': False,
            'error': str(e)
        }), 500

@app.route('/api/optimization/start', methods=['POST'])
def start_optimization():
    """
    Start a new optimization run or resume an existing one.

    Request body:
    {
        "study_name": "my_study",
        "n_trials": 50,
        "resume": false,
        "config_path": "path/to/config.json"
    }
    """
    try:
        data = request.get_json()
        study_name = data.get('study_name', f'study_{datetime.now().strftime("%Y%m%d_%H%M%S")}')
        n_trials = data.get('n_trials', 50)
        resume = data.get('resume', False)
        config_path = data.get('config_path', 'examples/bracket/optimization_config_stress_displacement.json')

        with optimization_lock:
            if study_name in active_optimizations:
                return jsonify({
                    'success': False,
                    'error': f'Study {study_name} is already running'
                }), 400

            # Mark as active
            active_optimizations[study_name] = {
                'status': 'starting',
                'start_time': datetime.now().isoformat(),
                'n_trials': n_trials,
                'current_trial': 0
            }

        # Start optimization in background thread
        def run_optimization():
            try:
                # Import necessary functions
                from examples.test_journal_optimization import (
                    bracket_model_updater,
                    bracket_simulation_runner,
                    stress_extractor,
                    displacement_extractor
                )

                runner = OptimizationRunner(
                    config_path=project_root / config_path,
                    model_updater=bracket_model_updater,
                    simulation_runner=bracket_simulation_runner,
                    result_extractors={
                        'stress_extractor': stress_extractor,
                        'displacement_extractor': displacement_extractor
                    }
                )

                with optimization_lock:
                    active_optimizations[study_name]['status'] = 'running'

                study = runner.run(
                    study_name=study_name,
                    n_trials=n_trials,
                    resume=resume
                )

                with optimization_lock:
                    active_optimizations[study_name]['status'] = 'completed'
                    active_optimizations[study_name]['end_time'] = datetime.now().isoformat()
                    active_optimizations[study_name]['best_value'] = study.best_value
                    active_optimizations[study_name]['best_params'] = study.best_params

            except Exception as e:
                with optimization_lock:
                    active_optimizations[study_name]['status'] = 'failed'
                    active_optimizations[study_name]['error'] = str(e)

        thread = threading.Thread(target=run_optimization, daemon=True)
        thread.start()

        return jsonify({
            'success': True,
            'message': f'Optimization {study_name} started',
            'study_name': study_name
        })

    except Exception as e:
        return jsonify({
            'success': False,
            'error': str(e)
        }), 500

@app.route('/api/optimization/status', methods=['GET'])
def get_optimization_status():
    """
    Get status of all active optimizations.

    Returns:
        JSON with status of all running/recent optimizations
    """
    with optimization_lock:
        return jsonify({
            'success': True,
            'active_optimizations': active_optimizations
        })

@app.route('/api/optimization/<study_name>/status', methods=['GET'])
def get_study_status(study_name: str):
    """
    Get real-time status of a specific optimization.

    Args:
        study_name: Name of the study
    """
    with optimization_lock:
        if study_name not in active_optimizations:
            # Try to get from history
            try:
                config_path = project_root / 'examples/bracket/optimization_config_stress_displacement.json'
                runner = OptimizationRunner(
                    config_path=config_path,
                    model_updater=lambda x: None,
                    simulation_runner=lambda: Path('dummy.op2'),
                    result_extractors={}
                )

                history_path = runner.output_dir / 'history.json'
                if history_path.exists():
                    with open(history_path, 'r') as f:
                        history = json.load(f)

                    return jsonify({
                        'success': True,
                        'status': 'completed',
                        'n_trials': len(history)
                    })
            except Exception:
                pass

            return jsonify({
                'success': False,
                'error': 'Study not found'
            }), 404

        return jsonify({
            'success': True,
            **active_optimizations[study_name]
        })

@app.route('/api/config/load', methods=['GET'])
def load_config():
    """
    Load optimization configuration from file.

    Query params:
        path: Path to config file (relative to project root)
    """
    try:
        config_path = request.args.get('path', 'examples/bracket/optimization_config_stress_displacement.json')
        full_path = project_root / config_path

        with open(full_path, 'r') as f:
            config = json.load(f)

        return jsonify({
            'success': True,
            'config': config,
            'path': config_path
        })
    except Exception as e:
        return jsonify({
            'success': False,
            'error': str(e)
        }), 500

@app.route('/api/config/save', methods=['POST'])
def save_config():
    """
    Save optimization configuration to file.

    Request body:
    {
        "path": "path/to/config.json",
        "config": {...}
    }
    """
    try:
        data = request.get_json()
        config_path = data.get('path')
        config = data.get('config')

        if not config_path or not config:
            return jsonify({
                'success': False,
                'error': 'Missing path or config'
            }), 400

        full_path = project_root / config_path
        full_path.parent.mkdir(parents=True, exist_ok=True)

        with open(full_path, 'w') as f:
            json.dump(config, f, indent=2)

        return jsonify({
            'success': True,
            'message': f'Configuration saved to {config_path}'
        })
    except Exception as e:
        return jsonify({
            'success': False,
            'error': str(e)
        }), 500

@app.route('/api/results/visualization/<study_name>', methods=['GET'])
def get_visualization_data(study_name: str):
    """
    Get data formatted for visualization (charts).

    Args:
        study_name: Name of the study
    """
    try:
        config_path = project_root / 'examples/bracket/optimization_config_stress_displacement.json'
        runner = OptimizationRunner(
            config_path=config_path,
            model_updater=lambda x: None,
            simulation_runner=lambda: Path('dummy.op2'),
            result_extractors={}
        )

        history_path = runner.output_dir / 'history.json'
        if not history_path.exists():
            return jsonify({
                'success': False,
                'error': 'No history found for this study'
            }), 404

        with open(history_path, 'r') as f:
            history = json.load(f)

        # Format data for charts
        trials = [entry['trial_number'] for entry in history]
        objectives = {}
        design_vars = {}
        constraints = {}

        for entry in history:
            for obj_name, obj_value in entry['objectives'].items():
                if obj_name not in objectives:
                    objectives[obj_name] = []
                objectives[obj_name].append(obj_value)

            for dv_name, dv_value in entry['design_variables'].items():
                if dv_name not in design_vars:
                    design_vars[dv_name] = []
                design_vars[dv_name].append(dv_value)

            for const_name, const_value in entry['constraints'].items():
                if const_name not in constraints:
                    constraints[const_name] = []
                constraints[const_name].append(const_value)

        # Calculate running best
        total_objectives = [entry['total_objective'] for entry in history]
        running_best = []
        current_best = float('inf')
        for val in total_objectives:
            current_best = min(current_best, val)
            running_best.append(current_best)

        return jsonify({
            'success': True,
            'trials': trials,
            'objectives': objectives,
            'design_variables': design_vars,
            'constraints': constraints,
            'total_objectives': total_objectives,
            'running_best': running_best
        })
    except Exception as e:
        return jsonify({
            'success': False,
            'error': str(e)
        }), 500

# ====================
# Study Management API
# ====================

@app.route('/api/study/create', methods=['POST'])
def create_study():
    """
    Create a new study with folder structure.

    Request body:
    {
        "study_name": "my_new_study",
        "description": "Optional description"
    }
    """
    try:
        data = request.get_json()
        study_name = data.get('study_name')
        description = data.get('description', '')

        if not study_name:
            return jsonify({
                'success': False,
                'error': 'study_name is required'
            }), 400

        # Create study folder structure
        study_dir = project_root / 'optimization_results' / study_name

        if study_dir.exists():
            return jsonify({
                'success': False,
                'error': f'Study {study_name} already exists'
            }), 400

        # Create directories
        study_dir.mkdir(parents=True, exist_ok=True)
        (study_dir / 'sim').mkdir(exist_ok=True)
        (study_dir / 'results').mkdir(exist_ok=True)

        # Create initial metadata
        metadata = {
            'study_name': study_name,
            'description': description,
            'created_at': datetime.now().isoformat(),
            'status': 'created',
            'has_sim_files': False,
            'is_configured': False
        }

        metadata_path = study_dir / 'metadata.json'
        with open(metadata_path, 'w') as f:
            json.dump(metadata, f, indent=2)

        return jsonify({
            'success': True,
            'message': f'Study {study_name} created successfully',
            'study_path': str(study_dir),
            'sim_folder': str(study_dir / 'sim')
        })

    except Exception as e:
        return jsonify({
            'success': False,
            'error': str(e)
        }), 500

@app.route('/api/study/<study_name>/sim/files', methods=['GET'])
def list_sim_files(study_name: str):
    """
    List all files in the study's sim/ folder.

    Args:
        study_name: Name of the study
    """
    try:
        study_dir = project_root / 'optimization_results' / study_name
        sim_dir = study_dir / 'sim'

        if not sim_dir.exists():
            return jsonify({
                'success': False,
                'error': f'Study {study_name} does not exist'
            }), 404

        # List all files
        files = []
        for file_path in sim_dir.iterdir():
            if file_path.is_file():
                files.append({
                    'name': file_path.name,
                    'size': file_path.stat().st_size,
                    'extension': file_path.suffix,
                    'modified': datetime.fromtimestamp(file_path.stat().st_mtime).isoformat()
                })

        # Check for .sim file
        has_sim = any(f['extension'] == '.sim' for f in files)

        return jsonify({
            'success': True,
            'files': files,
            'has_sim_file': has_sim,
            'sim_folder': str(sim_dir)
        })

    except Exception as e:
        return jsonify({
            'success': False,
            'error': str(e)
        }), 500

@app.route('/api/study/<study_name>/explore', methods=['POST'])
def explore_sim_file(study_name: str):
    """
    Explore the .sim file in the study folder to extract expressions.

    Args:
        study_name: Name of the study
    """
    try:
        study_dir = project_root / 'optimization_results' / study_name
        sim_dir = study_dir / 'sim'

        # Find .sim file
        sim_files = list(sim_dir.glob('*.sim'))
        if not sim_files:
            return jsonify({
                'success': False,
                'error': 'No .sim file found in sim/ folder'
            }), 404

        sim_file = sim_files[0]

        # Run NX journal to extract expressions
        import subprocess
        journal_script = project_root / 'dashboard' / 'scripts' / 'extract_expressions.py'
        output_file = study_dir / 'expressions.json'

        # Execute journal, using the centralized NX paths when available
        try:
            import sys
            from pathlib import Path as P
            sys.path.insert(0, str(P(__file__).parent.parent.parent))
            from config import NX_RUN_JOURNAL
            nx_executable = str(NX_RUN_JOURNAL)
        except ImportError:
            # Fallback if config not available
            nx_executable = r"C:\Program Files\Siemens\NX2412\NXBIN\run_journal.exe"

        result = subprocess.run(
            [nx_executable, str(journal_script), str(sim_file), str(output_file)],
            capture_output=True,
            text=True,
            timeout=120
        )

        if result.returncode != 0:
            return jsonify({
                'success': False,
                'error': f'NX journal failed: {result.stderr}'
            }), 500

        # Load extracted expressions
        if not output_file.exists():
            return jsonify({
                'success': False,
                'error': 'Expression extraction failed - no output file'
            }), 500

        with open(output_file, 'r') as f:
            expressions = json.load(f)

        return jsonify({
            'success': True,
            'sim_file': str(sim_file),
            'expressions': expressions
        })

    except Exception as e:
        return jsonify({
            'success': False,
            'error': str(e)
        }), 500

@app.route('/api/study/<study_name>/config', methods=['GET'])
def get_study_config(study_name: str):
    """
    Get the configuration for a study.

    Args:
        study_name: Name of the study
    """
    try:
        study_dir = project_root / 'optimization_results' / study_name
        config_path = study_dir / 'config.json'

        if not config_path.exists():
            return jsonify({
                'success': True,
                'config': None,
                'message': 'No configuration found for this study'
            })

        with open(config_path, 'r') as f:
            config = json.load(f)

        return jsonify({
            'success': True,
            'config': config
        })

    except Exception as e:
        return jsonify({
            'success': False,
            'error': str(e)
        }), 500

@app.route('/api/study/<study_name>/config', methods=['POST'])
def save_study_config(study_name: str):
    """
    Save configuration for a study.

    Args:
        study_name: Name of the study

    Request body:
    {
        "design_variables": [...],
        "objectives": [...],
        "constraints": [...],
        "optimization_settings": {...}
    }
    """
    try:
        study_dir = project_root / 'optimization_results' / study_name

        if not study_dir.exists():
            return jsonify({
                'success': False,
                'error': f'Study {study_name} does not exist'
            }), 404

        config = request.get_json()
        config_path = study_dir / 'config.json'

        # Save configuration
        with open(config_path, 'w') as f:
            json.dump(config, f, indent=2)

        # Update metadata
        metadata_path = study_dir / 'metadata.json'
        if metadata_path.exists():
            with open(metadata_path, 'r') as f:
                metadata = json.load(f)

            metadata['is_configured'] = True
            metadata['last_modified'] = datetime.now().isoformat()

            with open(metadata_path, 'w') as f:
                json.dump(metadata, f, indent=2)

        return jsonify({
            'success': True,
            'message': f'Configuration saved for study {study_name}'
        })

    except Exception as e:
        return jsonify({
            'success': False,
            'error': str(e)
        }), 500

if __name__ == '__main__':
|
|
||||||
print("="*60)
|
|
||||||
print("ATOMIZER DASHBOARD API")
|
|
||||||
print("="*60)
|
|
||||||
print("Starting Flask server on http://localhost:8080")
|
|
||||||
print("Access the dashboard at: http://localhost:8080")
|
|
||||||
print("="*60)
|
|
||||||
app.run(debug=True, host='0.0.0.0', port=8080, threaded=True)
|
|
||||||
@@ -1,507 +0,0 @@
// Atomizer Dashboard - Frontend JavaScript

const API_BASE = 'http://localhost:8080/api';
let currentStudy = null;
let charts = {
    progress: null,
    designVars: null,
    constraints: null
};

// Initialize dashboard
document.addEventListener('DOMContentLoaded', () => {
    console.log('Atomizer Dashboard loaded');
    refreshStudies();
    startActiveOptimizationsPolling();
});

// ====================
// Studies Management
// ====================

async function refreshStudies() {
    try {
        const response = await fetch(`${API_BASE}/studies`);
        const data = await response.json();

        if (data.success) {
            renderStudiesList(data.studies);
        } else {
            showError('Failed to load studies: ' + data.error);
        }
    } catch (error) {
        showError('Connection error: ' + error.message);
    }
}

function renderStudiesList(studies) {
    const container = document.getElementById('studiesList');

    if (!studies || studies.length === 0) {
        container.innerHTML = '<p class="empty">No studies found</p>';
        return;
    }

    const html = studies.map(study => `
        <div class="study-item" onclick="loadStudy('${study.study_name}')">
            <div class="study-name">${study.study_name}</div>
            <div class="study-info">
                <span class="badge">${study.total_trials || 0} trials</span>
                ${study.resume_count > 0 ? `<span class="badge-secondary">Resumed ${study.resume_count}x</span>` : ''}
            </div>
            <div class="study-date">${formatDate(study.created_at)}</div>
        </div>
    `).join('');

    container.innerHTML = html;
}

async function loadStudy(studyName) {
    try {
        const response = await fetch(`${API_BASE}/studies/${studyName}`);
        const data = await response.json();

        if (data.success) {
            currentStudy = studyName;
            displayStudyDetails(data);
        } else {
            showError('Failed to load study: ' + data.error);
        }
    } catch (error) {
        showError('Connection error: ' + error.message);
    }
}

function displayStudyDetails(data) {
    // Hide welcome, show details
    document.getElementById('welcomeScreen').style.display = 'none';
    document.getElementById('studyDetails').style.display = 'block';

    // Update header
    document.getElementById('studyTitle').textContent = data.study_name;
    document.getElementById('studyMeta').textContent =
        `Created: ${formatDate(data.metadata.created_at)} | Config Hash: ${data.metadata.config_hash.substring(0, 8)}`;

    // Update summary cards
    if (data.summary && data.summary.best_value !== undefined) {
        document.getElementById('bestObjective').textContent = data.summary.best_value.toFixed(4);
        document.getElementById('totalTrials').textContent = data.summary.n_trials || data.history.length;

        // Best parameters
        const paramsHtml = Object.entries(data.summary.best_params || {})
            .map(([name, value]) => `<div><strong>${name}:</strong> ${value.toFixed(4)}</div>`)
            .join('');
        document.getElementById('bestParams').innerHTML = paramsHtml || '<p>No data</p>';
    }

    // Render charts
    renderCharts(data.history);

    // Render history table
    renderHistoryTable(data.history);
}

// ====================
// Charts
// ====================

async function renderCharts(history) {
    if (!history || history.length === 0) return;

    // Get visualization data
    try {
        const response = await fetch(`${API_BASE}/results/visualization/${currentStudy}`);
        const data = await response.json();

        if (!data.success) return;

        // Progress Chart
        renderProgressChart(data.trials, data.total_objectives, data.running_best);

        // Design Variables Chart
        renderDesignVarsChart(data.trials, data.design_variables);

        // Constraints Chart
        renderConstraintsChart(data.trials, data.constraints);
    } catch (error) {
        console.error('Error rendering charts:', error);
    }
}

function renderProgressChart(trials, objectives, runningBest) {
    const ctx = document.getElementById('progressChart').getContext('2d');

    if (charts.progress) {
        charts.progress.destroy();
    }

    charts.progress = new Chart(ctx, {
        type: 'line',
        data: {
            labels: trials,
            datasets: [
                {
                    label: 'Total Objective',
                    data: objectives,
                    borderColor: '#3b82f6',
                    backgroundColor: 'rgba(59, 130, 246, 0.1)',
                    tension: 0.1
                },
                {
                    label: 'Running Best',
                    data: runningBest,
                    borderColor: '#10b981',
                    backgroundColor: 'rgba(16, 185, 129, 0.1)',
                    borderWidth: 2,
                    tension: 0.1
                }
            ]
        },
        options: {
            responsive: true,
            maintainAspectRatio: false,
            plugins: {
                legend: {
                    display: true,
                    position: 'top'
                },
                tooltip: {
                    mode: 'index',
                    intersect: false
                }
            },
            scales: {
                x: {
                    title: {
                        display: true,
                        text: 'Trial Number'
                    }
                },
                y: {
                    title: {
                        display: true,
                        text: 'Objective Value'
                    }
                }
            }
        }
    });
}

function renderDesignVarsChart(trials, designVars) {
    const ctx = document.getElementById('designVarsChart').getContext('2d');

    if (charts.designVars) {
        charts.designVars.destroy();
    }

    const colors = ['#3b82f6', '#10b981', '#f59e0b', '#ef4444', '#8b5cf6'];
    const datasets = Object.entries(designVars).map(([name, values], index) => ({
        label: name,
        data: values,
        borderColor: colors[index % colors.length],
        backgroundColor: colors[index % colors.length] + '20',
        tension: 0.1
    }));

    charts.designVars = new Chart(ctx, {
        type: 'line',
        data: {
            labels: trials,
            datasets: datasets
        },
        options: {
            responsive: true,
            maintainAspectRatio: false,
            plugins: {
                legend: {
                    display: true,
                    position: 'top'
                }
            },
            scales: {
                x: {
                    title: {
                        display: true,
                        text: 'Trial Number'
                    }
                },
                y: {
                    title: {
                        display: true,
                        text: 'Value'
                    }
                }
            }
        }
    });
}

function renderConstraintsChart(trials, constraints) {
    const ctx = document.getElementById('constraintsChart').getContext('2d');

    if (charts.constraints) {
        charts.constraints.destroy();
    }

    const colors = ['#3b82f6', '#10b981', '#f59e0b'];
    const datasets = Object.entries(constraints).map(([name, values], index) => ({
        label: name,
        data: values,
        borderColor: colors[index % colors.length],
        backgroundColor: colors[index % colors.length] + '20',
        tension: 0.1
    }));

    charts.constraints = new Chart(ctx, {
        type: 'line',
        data: {
            labels: trials,
            datasets: datasets
        },
        options: {
            responsive: true,
            maintainAspectRatio: false,
            plugins: {
                legend: {
                    display: true,
                    position: 'top'
                }
            },
            scales: {
                x: {
                    title: {
                        display: true,
                        text: 'Trial Number'
                    }
                },
                y: {
                    title: {
                        display: true,
                        text: 'Value'
                    }
                }
            }
        }
    });
}

// ====================
// History Table
// ====================

function renderHistoryTable(history) {
    const tbody = document.querySelector('#historyTable tbody');

    if (!history || history.length === 0) {
        tbody.innerHTML = '<tr><td colspan="5">No trials yet</td></tr>';
        return;
    }

    const html = history.map(trial => `
        <tr>
            <td>${trial.trial_number}</td>
            <td>${trial.total_objective.toFixed(4)}</td>
            <td>${formatDesignVars(trial.design_variables)}</td>
            <td>${formatConstraints(trial.constraints)}</td>
            <td>${formatDateTime(trial.timestamp)}</td>
        </tr>
    `).join('');

    tbody.innerHTML = html;
}

function formatDesignVars(vars) {
    return Object.entries(vars)
        .map(([name, value]) => `${name}=${value.toFixed(4)}`)
        .join(', ');
}

function formatConstraints(constraints) {
    return Object.entries(constraints)
        .map(([name, value]) => `${name}=${value.toFixed(4)}`)
        .join(', ');
}

// ====================
// New Optimization
// ====================

function showNewOptimizationModal() {
    document.getElementById('newOptimizationModal').style.display = 'flex';
}

function closeNewOptimizationModal() {
    document.getElementById('newOptimizationModal').style.display = 'none';
}

async function startOptimization() {
    const studyName = document.getElementById('newStudyName').value || `study_${Date.now()}`;
    const nTrials = parseInt(document.getElementById('newTrials').value) || 50;
    const configPath = document.getElementById('newConfigPath').value;
    const resume = document.getElementById('resumeExisting').checked;

    try {
        const response = await fetch(`${API_BASE}/optimization/start`, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json'
            },
            body: JSON.stringify({
                study_name: studyName,
                n_trials: nTrials,
                config_path: configPath,
                resume: resume
            })
        });

        const data = await response.json();

        if (data.success) {
            showSuccess(`Optimization "${studyName}" started successfully!`);
            closeNewOptimizationModal();
            setTimeout(refreshStudies, 1000);
        } else {
            showError('Failed to start optimization: ' + data.error);
        }
    } catch (error) {
        showError('Connection error: ' + error.message);
    }
}

// ====================
// Active Optimizations Polling
// ====================

function startActiveOptimizationsPolling() {
    setInterval(updateActiveOptimizations, 5000); // Poll every 5 seconds
    updateActiveOptimizations();
}

async function updateActiveOptimizations() {
    try {
        const response = await fetch(`${API_BASE}/optimization/status`);
        const data = await response.json();

        if (data.success) {
            renderActiveOptimizations(data.active_optimizations);
        }
    } catch (error) {
        console.error('Failed to update active optimizations:', error);
    }
}

function renderActiveOptimizations(optimizations) {
    const container = document.getElementById('activeOptimizations');

    const entries = Object.entries(optimizations);
    if (entries.length === 0) {
        container.innerHTML = '<p class="empty">No active optimizations</p>';
        return;
    }

    const html = entries.map(([name, opt]) => `
        <div class="active-item">
            <div class="active-name">${name}</div>
            <div class="active-status status-${opt.status}">${opt.status}</div>
            ${opt.status === 'running' ? `
                <div class="progress-bar">
                    <div class="progress-fill" style="width: ${(opt.current_trial / opt.n_trials * 100).toFixed(0)}%"></div>
                </div>
                <div class="active-progress">${opt.current_trial || 0} / ${opt.n_trials} trials</div>
            ` : ''}
        </div>
    `).join('');

    container.innerHTML = html;
}

// ====================
// Study Actions
// ====================

async function resumeCurrentStudy() {
    if (!currentStudy) return;

    const nTrials = prompt('Number of additional trials:', '25');
    if (!nTrials) return;

    try {
        const response = await fetch(`${API_BASE}/optimization/start`, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json'
            },
            body: JSON.stringify({
                study_name: currentStudy,
                n_trials: parseInt(nTrials),
                resume: true
            })
        });

        const data = await response.json();

        if (data.success) {
            showSuccess(`Study "${currentStudy}" resumed with ${nTrials} additional trials`);
        } else {
            showError('Failed to resume study: ' + data.error);
        }
    } catch (error) {
        showError('Connection error: ' + error.message);
    }
}

async function deleteCurrentStudy() {
    if (!currentStudy) return;

    if (!confirm(`Are you sure you want to delete study "${currentStudy}"? This cannot be undone.`)) {
        return;
    }

    try {
        const response = await fetch(`${API_BASE}/studies/${currentStudy}/delete`, {
            method: 'DELETE'
        });

        const data = await response.json();

        if (data.success) {
            showSuccess(`Study "${currentStudy}" deleted successfully`);
            currentStudy = null;
            document.getElementById('studyDetails').style.display = 'none';
            document.getElementById('welcomeScreen').style.display = 'block';
            refreshStudies();
        } else {
            showError('Failed to delete study: ' + data.error);
        }
    } catch (error) {
        showError('Connection error: ' + error.message);
    }
}

// ====================
// Utility Functions
// ====================

function formatDate(dateString) {
    if (!dateString) return 'N/A';
    const date = new Date(dateString);
    return date.toLocaleDateString();
}

function formatDateTime(dateString) {
    if (!dateString) return 'N/A';
    const date = new Date(dateString);
    return date.toLocaleString();
}

function showSuccess(message) {
    // Simple success notification
    alert('✓ ' + message);
}

function showError(message) {
    // Simple error notification
    alert('✗ Error: ' + message);
    console.error(message);
}
@@ -1,307 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Atomizer - Optimization Dashboard</title>
    <script src="https://cdn.jsdelivr.net/npm/chart.js@4.4.0/dist/chart.umd.min.js"></script>
    <link rel="stylesheet" href="styles.css">
</head>
<body>
    <div class="dashboard-container">
        <!-- Header -->
        <header class="dashboard-header">
            <div class="header-content">
                <h1>⚛️ Atomizer</h1>
                <p class="subtitle">Optimization Dashboard</p>
            </div>
            <div class="header-actions">
                <button class="btn btn-primary" onclick="showNewOptimizationModal()">
                    ▶️ New Optimization
                </button>
                <button class="btn btn-secondary" onclick="refreshStudies()">
                    🔄 Refresh
                </button>
            </div>
        </header>

        <!-- Main Content -->
        <div class="main-content">
            <!-- Sidebar -->
            <aside class="sidebar">
                <div class="sidebar-section">
                    <h3>Studies</h3>
                    <div id="studiesList" class="studies-list">
                        <p class="loading">Loading studies...</p>
                    </div>
                </div>

                <div class="sidebar-section">
                    <h3>Active Optimizations</h3>
                    <div id="activeOptimizations" class="active-list">
                        <p class="empty">No active optimizations</p>
                    </div>
                </div>
            </aside>

            <!-- Content Area -->
            <main class="content-area">
                <!-- Welcome Screen -->
                <div id="welcomeScreen" class="welcome-screen">
                    <h2>Welcome to Atomizer</h2>
                    <p>Select a study from the sidebar or create a new study</p>
                    <div class="quick-actions">
                        <button class="btn btn-large btn-primary" onclick="showCreateStudyModal()">
                            Create New Study
                        </button>
                        <button class="btn btn-large btn-secondary" onclick="refreshStudies()">
                            View Existing Studies
                        </button>
                    </div>
                </div>

                <!-- Study Configuration View -->
                <div id="studyConfig" class="study-config" style="display: none;">
                    <div class="study-header">
                        <div>
                            <h2 id="configStudyTitle">Configure Study</h2>
                            <p id="configStudyMeta" class="study-meta"></p>
                        </div>
                        <div class="study-actions">
                            <button class="btn btn-secondary" onclick="backToStudyList()">
                                ← Back
                            </button>
                            <button class="btn btn-primary" onclick="saveStudyConfiguration()">
                                💾 Save Configuration
                            </button>
                        </div>
                    </div>

                    <!-- Configuration Steps -->
                    <div class="config-steps">
                        <!-- Step 1: Sim Files -->
                        <div class="config-step">
                            <h3>1. Simulation Files</h3>
                            <div class="step-content">
                                <p class="step-description">Drop your .sim and .prt files in the sim folder, then explore the model</p>
                                <div class="file-info">
                                    <p><strong>Sim Folder:</strong> <span id="simFolderPath"></span></p>
                                    <button class="btn btn-secondary" onclick="openSimFolder()">Open Folder</button>
                                    <button class="btn btn-primary" onclick="refreshSimFiles()">Refresh Files</button>
                                </div>
                                <div id="simFilesList" class="files-list"></div>
                                <button id="exploreBtn" class="btn btn-primary" onclick="exploreSimFile()" disabled>
                                    🔍 Explore .sim File
                                </button>
                                <div id="expressionsList" class="expressions-list" style="display: none;"></div>
                            </div>
                        </div>

                        <!-- Step 2: Design Variables -->
                        <div class="config-step">
                            <h3>2. Design Variables</h3>
                            <div class="step-content">
                                <p class="step-description">Select expressions to use as design variables and set their bounds</p>
                                <div id="designVariablesConfig" class="variables-config">
                                    <p class="empty">Explore .sim file first to see available expressions</p>
                                </div>
                                <button class="btn btn-secondary" onclick="addDesignVariable()">+ Add Variable</button>
                            </div>
                        </div>

                        <!-- Step 3: Objectives -->
                        <div class="config-step">
                            <h3>3. Objectives</h3>
                            <div class="step-content">
                                <p class="step-description">Define what you want to optimize (minimize or maximize)</p>
                                <div id="objectivesConfig" class="objectives-config">
                                    <p class="empty">No objectives defined yet</p>
                                </div>
                                <button class="btn btn-secondary" onclick="addObjective()">+ Add Objective</button>
                            </div>
                        </div>

                        <!-- Step 4: Constraints -->
                        <div class="config-step">
                            <h3>4. Constraints</h3>
                            <div class="step-content">
                                <p class="step-description">Set limits on simulation outputs</p>
                                <div id="constraintsConfig" class="constraints-config">
                                    <p class="empty">No constraints defined yet</p>
                                </div>
                                <button class="btn btn-secondary" onclick="addConstraint()">+ Add Constraint</button>
                            </div>
                        </div>

                        <!-- Step 5: Optimization Settings -->
                        <div class="config-step">
                            <h3>5. Optimization Settings</h3>
                            <div class="step-content">
                                <div class="form-group">
                                    <label>Number of Trials</label>
                                    <input type="number" id="nTrials" value="50" min="1">
                                </div>
                                <div class="form-group">
                                    <label>Sampler</label>
                                    <select id="samplerType">
                                        <option value="TPE" selected>TPE (Tree-structured Parzen Estimator)</option>
                                        <option value="CMAES">CMA-ES</option>
                                        <option value="GP">Gaussian Process</option>
                                    </select>
                                </div>
                                <div class="form-group">
                                    <label>Startup Trials (random exploration)</label>
                                    <input type="number" id="startupTrials" value="20" min="0">
                                </div>
                            </div>
                        </div>
                    </div>
                </div>

                <!-- Study Details View -->
                <div id="studyDetails" class="study-details" style="display: none;">
                    <div class="study-header">
                        <div>
                            <h2 id="studyTitle">Study Name</h2>
                            <p id="studyMeta" class="study-meta"></p>
                        </div>
                        <div class="study-actions">
                            <button class="btn btn-primary" onclick="resumeCurrentStudy()">
                                ▶️ Resume Study
                            </button>
                            <button class="btn btn-danger" onclick="deleteCurrentStudy()">
                                🗑️ Delete
                            </button>
                        </div>
                    </div>

                    <!-- Best Result Card -->
                    <div class="results-cards">
                        <div class="result-card">
                            <h3>Best Result</h3>
                            <div class="result-value" id="bestObjective">-</div>
                            <p class="result-label">Objective Value</p>
                        </div>
                        <div class="result-card">
                            <h3>Total Trials</h3>
                            <div class="result-value" id="totalTrials">-</div>
                            <p class="result-label">Completed</p>
                        </div>
                        <div class="result-card">
                            <h3>Best Parameters</h3>
                            <div id="bestParams" class="params-list"></div>
                        </div>
                    </div>

                    <!-- Charts -->
                    <div class="charts-container">
                        <div class="chart-card">
                            <h3>Optimization Progress</h3>
                            <canvas id="progressChart"></canvas>
                        </div>
                        <div class="chart-card">
                            <h3>Design Variables</h3>
                            <canvas id="designVarsChart"></canvas>
                        </div>
                        <div class="chart-card">
                            <h3>Constraints</h3>
                            <canvas id="constraintsChart"></canvas>
                        </div>
                    </div>

                    <!-- History Table -->
                    <div class="history-section">
                        <h3>Trial History</h3>
                        <div class="table-container">
                            <table id="historyTable">
                                <thead>
                                    <tr>
                                        <th>Trial</th>
                                        <th>Objective</th>
                                        <th>Design Variables</th>
                                        <th>Constraints</th>
                                        <th>Timestamp</th>
                                    </tr>
                                </thead>
                                <tbody></tbody>
                            </table>
                        </div>
                    </div>
                </div>
            </main>
        </div>
    </div>

    <!-- New Optimization Modal -->
    <div id="newOptimizationModal" class="modal" style="display: none;">
        <div class="modal-content">
            <div class="modal-header">
                <h2>Start New Optimization</h2>
                <button class="close-btn" onclick="closeNewOptimizationModal()">×</button>
            </div>
            <div class="modal-body">
                <div class="form-group">
                    <label>Study Name</label>
                    <input type="text" id="newStudyName" placeholder="my_optimization_study" />
                </div>
                <div class="form-group">
                    <label>Number of Trials</label>
                    <input type="number" id="newTrials" value="50" min="1" />
                </div>
                <div class="form-group">
                    <label>Configuration File</label>
                    <select id="newConfigPath">
                        <option value="examples/bracket/optimization_config_stress_displacement.json">
                            Bracket - Stress Minimization
                        </option>
                        <option value="examples/bracket/optimization_config.json">
                            Bracket - Multi-objective
                        </option>
                    </select>
                </div>
                <div class="form-group">
                    <label>
                        <input type="checkbox" id="resumeExisting" />
                        Resume existing study (if exists)
                    </label>
                </div>
            </div>
            <div class="modal-footer">
                <button class="btn btn-secondary" onclick="closeNewOptimizationModal()">
                    Cancel
                </button>
                <button class="btn btn-primary" onclick="startOptimization()">
                    Start Optimization
                </button>
            </div>
        </div>
    </div>

    <!-- Create Study Modal -->
    <div id="createStudyModal" class="modal" style="display: none;">
        <div class="modal-content">
            <div class="modal-header">
                <h2>Create New Study</h2>
                <button class="close-btn" onclick="closeCreateStudyModal()">×</button>
            </div>
            <div class="modal-body">
                <div class="form-group">
                    <label>Study Name</label>
                    <input type="text" id="createStudyName" placeholder="my_optimization_study" />
                </div>
                <div class="form-group">
                    <label>Description (optional)</label>
                    <input type="text" id="createStudyDescription" placeholder="Brief description of this study" />
                </div>
            </div>
            <div class="modal-footer">
                <button class="btn btn-secondary" onclick="closeCreateStudyModal()">Cancel</button>
                <button class="btn btn-primary" onclick="createNewStudy()">Create Study</button>
            </div>
        </div>
    </div>

    <script src="app.js"></script>
    <script src="study_config.js"></script>
</body>
</html>
@@ -1,507 +0,0 @@
// Study Configuration Management Functions

// Global state for configuration
let currentConfigStudy = null;
let extractedExpressions = null;
let studyConfiguration = {
    design_variables: [],
    objectives: [],
    constraints: [],
    optimization_settings: {}
};

// ====================
// Create Study Modal
// ====================

function showCreateStudyModal() {
    document.getElementById('createStudyModal').style.display = 'flex';
}

function closeCreateStudyModal() {
    document.getElementById('createStudyModal').style.display = 'none';
}

async function createNewStudy() {
    const studyName = document.getElementById('createStudyName').value;
    const description = document.getElementById('createStudyDescription').value;

    if (!studyName) {
        showError('Please enter a study name');
        return;
    }

    try {
        const response = await fetch(`${API_BASE}/study/create`, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json'
            },
            body: JSON.stringify({
                study_name: studyName,
                description: description
            })
        });

        const data = await response.json();

        if (data.success) {
            showSuccess(`Study "${studyName}" created successfully!`);
            closeCreateStudyModal();

            // Open the study for configuration
            loadStudyConfig(studyName);
            refreshStudies();
        } else {
            showError('Failed to create study: ' + data.error);
        }
    } catch (error) {
        showError('Connection error: ' + error.message);
    }
}

// ====================
// Study Configuration
// ====================

async function loadStudyConfig(studyName) {
    currentConfigStudy = studyName;

    // Hide other views, show config view
    document.getElementById('welcomeScreen').style.display = 'none';
    document.getElementById('studyDetails').style.display = 'none';
    document.getElementById('studyConfig').style.display = 'block';

    // Update header
    document.getElementById('configStudyTitle').textContent = `Configure: ${studyName}`;
    document.getElementById('configStudyMeta').textContent = `Study: ${studyName}`;

    // Load sim files
    await refreshSimFiles();

    // Load existing configuration if available
    await loadExistingConfig();
}

async function refreshSimFiles() {
    if (!currentConfigStudy) return;

    try {
        const response = await fetch(`${API_BASE}/study/${currentConfigStudy}/sim/files`);
        const data = await response.json();

        if (data.success) {
            // Update sim folder path
            document.getElementById('simFolderPath').textContent = data.sim_folder;

            // Render files list
            const filesList = document.getElementById('simFilesList');
            if (data.files.length === 0) {
                filesList.innerHTML = '<p class="empty">No files yet. Drop your .sim and .prt files in the folder.</p>';
            } else {
                const html = data.files.map(file => `
                    <div class="file-item">
                        <div>
                            <div class="file-name">${file.name}</div>
                            <div class="file-meta">${(file.size / 1024).toFixed(1)} KB</div>
                        </div>
                    </div>
                `).join('');
                filesList.innerHTML = html;
            }

            // Enable/disable explore button
            document.getElementById('exploreBtn').disabled = !data.has_sim_file;
        }
    } catch (error) {
        showError('Failed to load sim files: ' + error.message);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
async function exploreSimFile() {
|
|
||||||
if (!currentConfigStudy) return;
|
|
||||||
|
|
||||||
try {
|
|
||||||
showSuccess('Exploring .sim file with NX... This may take a minute.');
|
|
||||||
|
|
||||||
const response = await fetch(`${API_BASE}/study/${currentConfigStudy}/explore`, {
|
|
||||||
method: 'POST'
|
|
||||||
});
|
|
||||||
|
|
||||||
const data = await response.json();
|
|
||||||
|
|
||||||
if (data.success) {
|
|
||||||
extractedExpressions = data.expressions;
|
|
||||||
displayExpressions(data.expressions);
|
|
||||||
showSuccess('Expression extraction complete!');
|
|
||||||
} else {
|
|
||||||
showError('Failed to explore .sim file: ' + data.error);
|
|
||||||
}
|
|
||||||
} catch (error) {
|
|
||||||
showError('Connection error: ' + error.message);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
function displayExpressions(expressionsData) {
|
|
||||||
const container = document.getElementById('expressionsList');
|
|
||||||
container.style.display = 'block';
|
|
||||||
|
|
||||||
const expressions = expressionsData.expressions_by_part;
|
|
||||||
const metadata = expressionsData.metadata;
|
|
||||||
|
|
||||||
let html = `
|
|
||||||
<h4>Expressions Found: ${metadata.total_expressions}
|
|
||||||
(${metadata.variable_candidates} potential design variables)</h4>
|
|
||||||
`;
|
|
||||||
|
|
||||||
// Display expressions by part
|
|
||||||
for (const [partName, exprs] of Object.entries(expressions)) {
|
|
||||||
if (exprs.length === 0) continue;
|
|
||||||
|
|
||||||
html += `<h5 style="margin-top: 1rem;">${partName}</h5>`;
|
|
||||||
|
|
||||||
exprs.forEach(expr => {
|
|
||||||
const isCandidate = expr.is_variable_candidate ? '✓' : '';
|
|
||||||
html += `
|
|
||||||
<div class="expression-item ${expr.is_variable_candidate ? 'selected' : ''}"
|
|
||||||
onclick="selectExpressionForVariable('${partName}', '${expr.name}')">
|
|
||||||
<div class="expression-name">${isCandidate} ${expr.name}</div>
|
|
||||||
<div class="expression-meta">Value: ${expr.value} ${expr.units} | Formula: ${expr.formula}</div>
|
|
||||||
</div>
|
|
||||||
`;
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
container.innerHTML = html;
|
|
||||||
}
|
|
||||||
|
|
||||||
function selectExpressionForVariable(partName, exprName) {
|
|
||||||
// Find the expression
|
|
||||||
const expressions = extractedExpressions.expressions_by_part[partName];
|
|
||||||
const expr = expressions.find(e => e.name === exprName);
|
|
||||||
|
|
||||||
if (!expr) return;
|
|
||||||
|
|
||||||
// Add to design variables
|
|
||||||
addDesignVariableFromExpression(expr);
|
|
||||||
}
|
|
||||||
|
|
||||||
function addDesignVariableFromExpression(expr) {
|
|
||||||
const variable = {
|
|
||||||
name: expr.name,
|
|
||||||
min: expr.value * 0.8, // 20% below current
|
|
||||||
max: expr.value * 1.2, // 20% above current
|
|
||||||
units: expr.units
|
|
||||||
};
|
|
||||||
|
|
||||||
studyConfiguration.design_variables.push(variable);
|
|
||||||
renderDesignVariablesConfig();
|
|
||||||
}
|
|
||||||
|
|
||||||
function addDesignVariable() {
|
|
||||||
const variable = {
|
|
||||||
name: `variable_${studyConfiguration.design_variables.length + 1}`,
|
|
||||||
min: 0,
|
|
||||||
max: 100,
|
|
||||||
units: 'mm'
|
|
||||||
};
|
|
||||||
|
|
||||||
studyConfiguration.design_variables.push(variable);
|
|
||||||
renderDesignVariablesConfig();
|
|
||||||
}
|
|
||||||
|
|
||||||
function renderDesignVariablesConfig() {
|
|
||||||
const container = document.getElementById('designVariablesConfig');
|
|
||||||
|
|
||||||
if (studyConfiguration.design_variables.length === 0) {
|
|
||||||
container.innerHTML = '<p class="empty">No design variables yet</p>';
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
const html = studyConfiguration.design_variables.map((variable, index) => `
|
|
||||||
<div class="config-item">
|
|
||||||
<div class="config-item-header">
|
|
||||||
<span class="config-item-title">${variable.name}</span>
|
|
||||||
<button class="config-item-remove" onclick="removeDesignVariable(${index})">×</button>
|
|
||||||
</div>
|
|
||||||
<div class="config-item-fields">
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Name</label>
|
|
||||||
<input type="text" value="${variable.name}"
|
|
||||||
onchange="updateDesignVariable(${index}, 'name', this.value)">
|
|
||||||
</div>
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Minimum</label>
|
|
||||||
<input type="number" value="${variable.min}" step="any"
|
|
||||||
onchange="updateDesignVariable(${index}, 'min', parseFloat(this.value))">
|
|
||||||
</div>
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Maximum</label>
|
|
||||||
<input type="number" value="${variable.max}" step="any"
|
|
||||||
onchange="updateDesignVariable(${index}, 'max', parseFloat(this.value))">
|
|
||||||
</div>
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Units</label>
|
|
||||||
<input type="text" value="${variable.units}"
|
|
||||||
onchange="updateDesignVariable(${index}, 'units', this.value)">
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
`).join('');
|
|
||||||
|
|
||||||
container.innerHTML = html;
|
|
||||||
}
|
|
||||||
|
|
||||||
function updateDesignVariable(index, field, value) {
|
|
||||||
studyConfiguration.design_variables[index][field] = value;
|
|
||||||
}
|
|
||||||
|
|
||||||
function removeDesignVariable(index) {
|
|
||||||
studyConfiguration.design_variables.splice(index, 1);
|
|
||||||
renderDesignVariablesConfig();
|
|
||||||
}
|
|
||||||
|
|
||||||
// ====================
|
|
||||||
// Objectives Configuration
|
|
||||||
// ====================
|
|
||||||
|
|
||||||
function addObjective() {
|
|
||||||
const objective = {
|
|
||||||
name: `objective_${studyConfiguration.objectives.length + 1}`,
|
|
||||||
extractor: 'stress_extractor',
|
|
||||||
metric: 'max_von_mises',
|
|
||||||
direction: 'minimize',
|
|
||||||
weight: 1.0,
|
|
||||||
units: 'MPa'
|
|
||||||
};
|
|
||||||
|
|
||||||
studyConfiguration.objectives.push(objective);
|
|
||||||
renderObjectivesConfig();
|
|
||||||
}
|
|
||||||
|
|
||||||
function renderObjectivesConfig() {
|
|
||||||
const container = document.getElementById('objectivesConfig');
|
|
||||||
|
|
||||||
if (studyConfiguration.objectives.length === 0) {
|
|
||||||
container.innerHTML = '<p class="empty">No objectives yet</p>';
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
const html = studyConfiguration.objectives.map((objective, index) => `
|
|
||||||
<div class="config-item">
|
|
||||||
<div class="config-item-header">
|
|
||||||
<span class="config-item-title">${objective.name}</span>
|
|
||||||
<button class="config-item-remove" onclick="removeObjective(${index})">×</button>
|
|
||||||
</div>
|
|
||||||
<div class="config-item-fields">
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Name</label>
|
|
||||||
<input type="text" value="${objective.name}"
|
|
||||||
onchange="updateObjective(${index}, 'name', this.value)">
|
|
||||||
</div>
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Extractor</label>
|
|
||||||
<select onchange="updateObjective(${index}, 'extractor', this.value)">
|
|
||||||
<option value="stress_extractor" ${objective.extractor === 'stress_extractor' ? 'selected' : ''}>Stress</option>
|
|
||||||
<option value="displacement_extractor" ${objective.extractor === 'displacement_extractor' ? 'selected' : ''}>Displacement</option>
|
|
||||||
</select>
|
|
||||||
</div>
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Metric</label>
|
|
||||||
<input type="text" value="${objective.metric}"
|
|
||||||
onchange="updateObjective(${index}, 'metric', this.value)">
|
|
||||||
</div>
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Direction</label>
|
|
||||||
<select onchange="updateObjective(${index}, 'direction', this.value)">
|
|
||||||
<option value="minimize" ${objective.direction === 'minimize' ? 'selected' : ''}>Minimize</option>
|
|
||||||
<option value="maximize" ${objective.direction === 'maximize' ? 'selected' : ''}>Maximize</option>
|
|
||||||
</select>
|
|
||||||
</div>
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Weight</label>
|
|
||||||
<input type="number" value="${objective.weight}" step="any"
|
|
||||||
onchange="updateObjective(${index}, 'weight', parseFloat(this.value))">
|
|
||||||
</div>
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Units</label>
|
|
||||||
<input type="text" value="${objective.units}"
|
|
||||||
onchange="updateObjective(${index}, 'units', this.value)">
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
`).join('');
|
|
||||||
|
|
||||||
container.innerHTML = html;
|
|
||||||
}
|
|
||||||
|
|
||||||
function updateObjective(index, field, value) {
|
|
||||||
studyConfiguration.objectives[index][field] = value;
|
|
||||||
}
|
|
||||||
|
|
||||||
function removeObjective(index) {
|
|
||||||
studyConfiguration.objectives.splice(index, 1);
|
|
||||||
renderObjectivesConfig();
|
|
||||||
}
|
|
||||||
|
|
||||||
// ====================
|
|
||||||
// Constraints Configuration
|
|
||||||
// ====================
|
|
||||||
|
|
||||||
function addConstraint() {
|
|
||||||
const constraint = {
|
|
||||||
name: `constraint_${studyConfiguration.constraints.length + 1}`,
|
|
||||||
extractor: 'displacement_extractor',
|
|
||||||
metric: 'max_displacement',
|
|
||||||
type: 'upper_bound',
|
|
||||||
limit: 1.0,
|
|
||||||
units: 'mm'
|
|
||||||
};
|
|
||||||
|
|
||||||
studyConfiguration.constraints.push(constraint);
|
|
||||||
renderConstraintsConfig();
|
|
||||||
}
|
|
||||||
|
|
||||||
function renderConstraintsConfig() {
|
|
||||||
const container = document.getElementById('constraintsConfig');
|
|
||||||
|
|
||||||
if (studyConfiguration.constraints.length === 0) {
|
|
||||||
container.innerHTML = '<p class="empty">No constraints yet</p>';
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
const html = studyConfiguration.constraints.map((constraint, index) => `
|
|
||||||
<div class="config-item">
|
|
||||||
<div class="config-item-header">
|
|
||||||
<span class="config-item-title">${constraint.name}</span>
|
|
||||||
<button class="config-item-remove" onclick="removeConstraint(${index})">×</button>
|
|
||||||
</div>
|
|
||||||
<div class="config-item-fields">
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Name</label>
|
|
||||||
<input type="text" value="${constraint.name}"
|
|
||||||
onchange="updateConstraint(${index}, 'name', this.value)">
|
|
||||||
</div>
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Extractor</label>
|
|
||||||
<select onchange="updateConstraint(${index}, 'extractor', this.value)">
|
|
||||||
<option value="stress_extractor" ${constraint.extractor === 'stress_extractor' ? 'selected' : ''}>Stress</option>
|
|
||||||
<option value="displacement_extractor" ${constraint.extractor === 'displacement_extractor' ? 'selected' : ''}>Displacement</option>
|
|
||||||
</select>
|
|
||||||
</div>
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Metric</label>
|
|
||||||
<input type="text" value="${constraint.metric}"
|
|
||||||
onchange="updateConstraint(${index}, 'metric', this.value)">
|
|
||||||
</div>
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Type</label>
|
|
||||||
<select onchange="updateConstraint(${index}, 'type', this.value)">
|
|
||||||
<option value="upper_bound" ${constraint.type === 'upper_bound' ? 'selected' : ''}>Upper Bound</option>
|
|
||||||
<option value="lower_bound" ${constraint.type === 'lower_bound' ? 'selected' : ''}>Lower Bound</option>
|
|
||||||
</select>
|
|
||||||
</div>
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Limit</label>
|
|
||||||
<input type="number" value="${constraint.limit}" step="any"
|
|
||||||
onchange="updateConstraint(${index}, 'limit', parseFloat(this.value))">
|
|
||||||
</div>
|
|
||||||
<div class="form-group">
|
|
||||||
<label>Units</label>
|
|
||||||
<input type="text" value="${constraint.units}"
|
|
||||||
onchange="updateConstraint(${index}, 'units', this.value)">
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
`).join('');
|
|
||||||
|
|
||||||
container.innerHTML = html;
|
|
||||||
}
|
|
||||||
|
|
||||||
function updateConstraint(index, field, value) {
|
|
||||||
studyConfiguration.constraints[index][field] = value;
|
|
||||||
}
|
|
||||||
|
|
||||||
function removeConstraint(index) {
|
|
||||||
studyConfiguration.constraints.splice(index, 1);
|
|
||||||
renderConstraintsConfig();
|
|
||||||
}
|
|
||||||
|
|
||||||
// ====================
|
|
||||||
// Save Configuration
|
|
||||||
// ====================
|
|
||||||
|
|
||||||
async function saveStudyConfiguration() {
|
|
||||||
if (!currentConfigStudy) return;
|
|
||||||
|
|
||||||
// Gather optimization settings
|
|
||||||
studyConfiguration.optimization_settings = {
|
|
||||||
n_trials: parseInt(document.getElementById('nTrials').value) || 50,
|
|
||||||
sampler: document.getElementById('samplerType').value,
|
|
||||||
n_startup_trials: parseInt(document.getElementById('startupTrials').value) || 20
|
|
||||||
};
|
|
||||||
|
|
||||||
try {
|
|
||||||
const response = await fetch(`${API_BASE}/study/${currentConfigStudy}/config`, {
|
|
||||||
method: 'POST',
|
|
||||||
headers: {
|
|
||||||
'Content-Type': 'application/json'
|
|
||||||
},
|
|
||||||
body: JSON.stringify(studyConfiguration)
|
|
||||||
});
|
|
||||||
|
|
||||||
const data = await response.json();
|
|
||||||
|
|
||||||
if (data.success) {
|
|
||||||
showSuccess('Configuration saved successfully!');
|
|
||||||
} else {
|
|
||||||
showError('Failed to save configuration: ' + data.error);
|
|
||||||
}
|
|
||||||
} catch (error) {
|
|
||||||
showError('Connection error: ' + error.message);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
async function loadExistingConfig() {
|
|
||||||
if (!currentConfigStudy) return;
|
|
||||||
|
|
||||||
try {
|
|
||||||
const response = await fetch(`${API_BASE}/study/${currentConfigStudy}/config`);
|
|
||||||
const data = await response.json();
|
|
||||||
|
|
||||||
if (data.success && data.config) {
|
|
||||||
studyConfiguration = data.config;
|
|
||||||
|
|
||||||
// Render loaded configuration
|
|
||||||
renderDesignVariablesConfig();
|
|
||||||
renderObjectivesConfig();
|
|
||||||
renderConstraintsConfig();
|
|
||||||
|
|
||||||
// Set optimization settings
|
|
||||||
if (data.config.optimization_settings) {
|
|
||||||
document.getElementById('nTrials').value = data.config.optimization_settings.n_trials || 50;
|
|
||||||
document.getElementById('samplerType').value = data.config.optimization_settings.sampler || 'TPE';
|
|
||||||
document.getElementById('startupTrials').value = data.config.optimization_settings.n_startup_trials || 20;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
} catch (error) {
|
|
||||||
console.error('Failed to load existing config:', error);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// ====================
|
|
||||||
// Utility Functions
|
|
||||||
// ====================
|
|
||||||
|
|
||||||
function openSimFolder() {
|
|
||||||
if (!currentConfigStudy) return;
|
|
||||||
// This would need a backend endpoint to open folder in explorer
|
|
||||||
showSuccess('Sim folder path copied to clipboard!');
|
|
||||||
}
|
|
||||||
|
|
||||||
function backToStudyList() {
|
|
||||||
document.getElementById('studyConfig').style.display = 'none';
|
|
||||||
document.getElementById('welcomeScreen').style.display = 'block';
|
|
||||||
currentConfigStudy = null;
|
|
||||||
extractedExpressions = null;
|
|
||||||
}
|
|
||||||
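The deleted dashboard code above accumulates everything into `studyConfiguration` and POSTs it as a single JSON document in `saveStudyConfiguration()`. As a minimal sketch of that payload's shape (the concrete variable, objective, and constraint values below are illustrative, not taken from a real study):

```javascript
// Sketch of the payload shape saveStudyConfiguration() POSTs to
// /study/<name>/config. Field names mirror the dashboard code above;
// the values are hypothetical examples.
const studyConfiguration = {
    design_variables: [
        { name: 'plate_thickness', min: 8, max: 12, units: 'mm' }
    ],
    objectives: [
        { name: 'objective_1', extractor: 'stress_extractor',
          metric: 'max_von_mises', direction: 'minimize',
          weight: 1.0, units: 'MPa' }
    ],
    constraints: [
        { name: 'constraint_1', extractor: 'displacement_extractor',
          metric: 'max_displacement', type: 'upper_bound',
          limit: 1.0, units: 'mm' }
    ],
    optimization_settings: { n_trials: 50, sampler: 'TPE', n_startup_trials: 20 }
};

// The request body is the plain JSON serialization of this object.
const body = JSON.stringify(studyConfiguration);
console.log(JSON.parse(body).optimization_settings.sampler); // 'TPE'
```

This shape is also what `loadExistingConfig()` expects back under `data.config`, which is why the round trip through the backend restores the three config panels and the optimization settings fields.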
@@ -1,699 +0,0 @@
/* Atomizer Dashboard Styles */

* {
    margin: 0;
    padding: 0;
    box-sizing: border-box;
}

:root {
    --primary: #3b82f6;
    --primary-dark: #2563eb;
    --secondary: #64748b;
    --success: #10b981;
    --danger: #ef4444;
    --warning: #f59e0b;
    --bg: #f8fafc;
    --card-bg: #ffffff;
    --text: #1e293b;
    --text-light: #64748b;
    --border: #e2e8f0;
    --shadow: 0 1px 3px rgba(0, 0, 0, 0.1);
    --shadow-lg: 0 10px 15px -3px rgba(0, 0, 0, 0.1);
}

body {
    font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
    background: var(--bg);
    color: var(--text);
    line-height: 1.6;
}

.dashboard-container {
    min-height: 100vh;
    display: flex;
    flex-direction: column;
}

/* Header */
.dashboard-header {
    background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
    color: white;
    padding: 1.5rem 2rem;
    display: flex;
    justify-content: space-between;
    align-items: center;
    box-shadow: var(--shadow-lg);
}

.header-content h1 {
    font-size: 2rem;
    font-weight: 700;
    margin-bottom: 0.25rem;
}

.subtitle {
    opacity: 0.9;
    font-size: 0.95rem;
}

.header-actions {
    display: flex;
    gap: 1rem;
}

/* Main Content */
.main-content {
    flex: 1;
    display: flex;
    gap: 2rem;
    padding: 2rem;
    max-width: 1800px;
    width: 100%;
    margin: 0 auto;
}

/* Sidebar */
.sidebar {
    width: 300px;
    display: flex;
    flex-direction: column;
    gap: 1.5rem;
}

.sidebar-section {
    background: var(--card-bg);
    border-radius: 12px;
    padding: 1.5rem;
    box-shadow: var(--shadow);
}

.sidebar-section h3 {
    font-size: 1.1rem;
    margin-bottom: 1rem;
    color: var(--text);
}

.studies-list, .active-list {
    display: flex;
    flex-direction: column;
    gap: 0.75rem;
}

.study-item {
    padding: 1rem;
    background: var(--bg);
    border-radius: 8px;
    cursor: pointer;
    transition: all 0.2s;
    border: 2px solid transparent;
}

.study-item:hover {
    background: #f1f5f9;
    border-color: var(--primary);
    transform: translateY(-2px);
}

.study-name {
    font-weight: 600;
    margin-bottom: 0.5rem;
    color: var(--text);
}

.study-info {
    display: flex;
    gap: 0.5rem;
    margin-bottom: 0.25rem;
}

.study-date {
    font-size: 0.85rem;
    color: var(--text-light);
}

.badge {
    display: inline-block;
    padding: 0.2rem 0.5rem;
    background: var(--primary);
    color: white;
    border-radius: 4px;
    font-size: 0.75rem;
    font-weight: 600;
}

.badge-secondary {
    background: var(--secondary);
    padding: 0.2rem 0.5rem;
    color: white;
    border-radius: 4px;
    font-size: 0.75rem;
}

.active-item {
    padding: 1rem;
    background: var(--bg);
    border-radius: 8px;
    border-left: 4px solid var(--success);
}

.active-name {
    font-weight: 600;
    margin-bottom: 0.5rem;
}

.active-status {
    display: inline-block;
    padding: 0.2rem 0.6rem;
    border-radius: 12px;
    font-size: 0.75rem;
    font-weight: 600;
    text-transform: uppercase;
    margin-bottom: 0.5rem;
}

.status-running {
    background: var(--success);
    color: white;
}

.status-completed {
    background: var(--primary);
    color: white;
}

.status-failed {
    background: var(--danger);
    color: white;
}

.progress-bar {
    width: 100%;
    height: 8px;
    background: #e2e8f0;
    border-radius: 4px;
    overflow: hidden;
    margin: 0.5rem 0;
}

.progress-fill {
    height: 100%;
    background: linear-gradient(90deg, var(--success), var(--primary));
    transition: width 0.3s;
}

.active-progress {
    font-size: 0.85rem;
    color: var(--text-light);
}

/* Content Area */
.content-area {
    flex: 1;
    background: var(--card-bg);
    border-radius: 12px;
    padding: 2rem;
    box-shadow: var(--shadow);
    overflow-y: auto;
}

.welcome-screen {
    text-align: center;
    padding: 4rem 2rem;
}

.welcome-screen h2 {
    font-size: 2.5rem;
    margin-bottom: 1rem;
    color: var(--text);
}

.welcome-screen p {
    font-size: 1.2rem;
    color: var(--text-light);
    margin-bottom: 3rem;
}

.quick-actions {
    display: flex;
    gap: 1rem;
    justify-content: center;
}

/* Study Details */
.study-header {
    display: flex;
    justify-content: space-between;
    align-items: flex-start;
    margin-bottom: 2rem;
    padding-bottom: 1.5rem;
    border-bottom: 2px solid var(--border);
}

.study-header h2 {
    font-size: 2rem;
    margin-bottom: 0.5rem;
}

.study-meta {
    color: var(--text-light);
    font-size: 0.9rem;
}

.study-actions {
    display: flex;
    gap: 0.75rem;
}

/* Result Cards */
.results-cards {
    display: grid;
    grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
    gap: 1.5rem;
    margin-bottom: 2rem;
}

.result-card {
    background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
    color: white;
    padding: 1.5rem;
    border-radius: 12px;
    box-shadow: var(--shadow-lg);
}

.result-card h3 {
    font-size: 0.9rem;
    opacity: 0.9;
    margin-bottom: 1rem;
    font-weight: 600;
    text-transform: uppercase;
    letter-spacing: 0.5px;
}

.result-value {
    font-size: 2.5rem;
    font-weight: 700;
    margin-bottom: 0.5rem;
}

.result-label {
    font-size: 0.9rem;
    opacity: 0.8;
}

.params-list {
    font-size: 0.95rem;
}

.params-list div {
    margin-bottom: 0.5rem;
}

/* Charts */
.charts-container {
    display: grid;
    grid-template-columns: repeat(auto-fit, minmax(400px, 1fr));
    gap: 1.5rem;
    margin-bottom: 2rem;
}

.chart-card {
    background: var(--bg);
    padding: 1.5rem;
    border-radius: 12px;
    border: 1px solid var(--border);
}

.chart-card h3 {
    margin-bottom: 1rem;
    color: var(--text);
}

.chart-card canvas {
    max-height: 300px;
}

/* History Table */
.history-section {
    margin-top: 2rem;
}

.history-section h3 {
    margin-bottom: 1rem;
}

.table-container {
    overflow-x: auto;
    border-radius: 8px;
    border: 1px solid var(--border);
}

table {
    width: 100%;
    border-collapse: collapse;
    background: white;
}

thead {
    background: var(--bg);
}

th {
    padding: 1rem;
    text-align: left;
    font-weight: 600;
    color: var(--text);
    border-bottom: 2px solid var(--border);
}

td {
    padding: 0.75rem 1rem;
    border-bottom: 1px solid var(--border);
    font-size: 0.9rem;
}

tbody tr:hover {
    background: var(--bg);
}

/* Buttons */
.btn {
    padding: 0.75rem 1.5rem;
    border: none;
    border-radius: 8px;
    font-weight: 600;
    cursor: pointer;
    transition: all 0.2s;
    font-size: 0.95rem;
}

.btn-primary {
    background: var(--primary);
    color: white;
}

.btn-primary:hover {
    background: var(--primary-dark);
    transform: translateY(-1px);
    box-shadow: 0 4px 6px rgba(59, 130, 246, 0.3);
}

.btn-secondary {
    background: var(--secondary);
    color: white;
}

.btn-secondary:hover {
    background: #475569;
    transform: translateY(-1px);
}

.btn-danger {
    background: var(--danger);
    color: white;
}

.btn-danger:hover {
    background: #dc2626;
}

.btn-large {
    padding: 1rem 2rem;
    font-size: 1.1rem;
}

/* Modal */
.modal {
    display: none;
    position: fixed;
    top: 0;
    left: 0;
    right: 0;
    bottom: 0;
    background: rgba(0, 0, 0, 0.5);
    align-items: center;
    justify-content: center;
    z-index: 1000;
}

.modal-content {
    background: white;
    border-radius: 12px;
    width: 90%;
    max-width: 500px;
    max-height: 90vh;
    overflow-y: auto;
    box-shadow: 0 20px 25px -5px rgba(0, 0, 0, 0.3);
}

.modal-header {
    padding: 1.5rem;
    border-bottom: 1px solid var(--border);
    display: flex;
    justify-content: space-between;
    align-items: center;
}

.modal-header h2 {
    font-size: 1.5rem;
}

.close-btn {
    background: none;
    border: none;
    font-size: 2rem;
    cursor: pointer;
    color: var(--text-light);
    width: 32px;
    height: 32px;
    display: flex;
    align-items: center;
    justify-content: center;
    border-radius: 4px;
}

.close-btn:hover {
    background: var(--bg);
}

.modal-body {
    padding: 1.5rem;
}

.modal-footer {
    padding: 1.5rem;
    border-top: 1px solid var(--border);
    display: flex;
    justify-content: flex-end;
    gap: 0.75rem;
}

/* Form Elements */
.form-group {
    margin-bottom: 1.5rem;
}

.form-group label {
    display: block;
    margin-bottom: 0.5rem;
    font-weight: 600;
    color: var(--text);
}

.form-group input[type="text"],
.form-group input[type="number"],
.form-group select {
    width: 100%;
    padding: 0.75rem;
    border: 2px solid var(--border);
    border-radius: 8px;
    font-size: 1rem;
    transition: border-color 0.2s;
}

.form-group input:focus,
.form-group select:focus {
    outline: none;
    border-color: var(--primary);
}

.form-group input[type="checkbox"] {
    margin-right: 0.5rem;
}

/* Utility Classes */
.loading, .empty {
    text-align: center;
    padding: 2rem;
    color: var(--text-light);
}

/* Study Configuration */
.study-config {
    display: none;
}

.config-steps {
    display: flex;
    flex-direction: column;
    gap: 2rem;
}

.config-step {
    background: var(--bg);
    border-radius: 12px;
    padding: 1.5rem;
    border-left: 4px solid var(--primary);
}

.config-step h3 {
    color: var(--text);
    margin-bottom: 1rem;
}

.step-description {
    color: var(--text-light);
    margin-bottom: 1rem;
    font-size: 0.95rem;
}

.step-content {
    margin-top: 1rem;
}

.file-info {
    background: white;
    padding: 1rem;
    border-radius: 8px;
    margin-bottom: 1rem;
    border: 1px solid var(--border);
}

.files-list {
    margin: 1rem 0;
    max-height: 200px;
    overflow-y: auto;
}

.file-item {
    padding: 0.75rem;
    background: white;
    border-radius: 6px;
    margin-bottom: 0.5rem;
    border: 1px solid var(--border);
    display: flex;
    justify-content: space-between;
    align-items: center;
}

.file-item .file-name {
    font-weight: 600;
}

.file-item .file-meta {
    font-size: 0.85rem;
    color: var(--text-light);
}

.expressions-list {
    margin-top: 1rem;
    padding: 1rem;
    background: white;
    border-radius: 8px;
    border: 1px solid var(--border);
    max-height: 300px;
    overflow-y: auto;
}

.expression-item {
    padding: 0.75rem;
    background: var(--bg);
    border-radius: 6px;
    margin-bottom: 0.5rem;
    cursor: pointer;
    border: 2px solid transparent;
    transition: all 0.2s;
}

.expression-item:hover {
    border-color: var(--primary);
    transform: translateX(4px);
}

.expression-item.selected {
    background: linear-gradient(135deg, #667eea20 0%, #764ba220 100%);
    border-color: var(--primary);
}

.expression-name {
    font-weight: 600;
    margin-bottom: 0.25rem;
}

.expression-meta {
    font-size: 0.85rem;
    color: var(--text-light);
}
|
||||||
.variables-config, .objectives-config, .constraints-config {
|
|
||||||
margin: 1rem 0;
|
|
||||||
}
|
|
||||||
|
|
||||||
.config-item {
|
|
||||||
background: white;
|
|
||||||
padding: 1rem;
|
|
||||||
border-radius: 8px;
|
|
||||||
margin-bottom: 1rem;
|
|
||||||
border: 1px solid var(--border);
|
|
||||||
}
|
|
||||||
|
|
||||||
.config-item-header {
|
|
||||||
display: flex;
|
|
||||||
justify-content: space-between;
|
|
||||||
align-items: center;
|
|
||||||
margin-bottom: 0.75rem;
|
|
||||||
}
|
|
||||||
|
|
||||||
.config-item-title {
|
|
||||||
font-weight: 600;
|
|
||||||
color: var(--text);
|
|
||||||
}
|
|
||||||
|
|
||||||
.config-item-remove {
|
|
||||||
background: none;
|
|
||||||
border: none;
|
|
||||||
color: var(--danger);
|
|
||||||
cursor: pointer;
|
|
||||||
font-size: 1.2rem;
|
|
||||||
}
|
|
||||||
|
|
||||||
.config-item-fields {
|
|
||||||
display: grid;
|
|
||||||
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
|
|
||||||
gap: 1rem;
|
|
||||||
}
|
|
||||||
|
|
||||||
.field-inline {
|
|
||||||
display: flex;
|
|
||||||
gap: 1rem;
|
|
||||||
align-items: flex-end;
|
|
||||||
}
|
|
||||||
|
|
||||||
/* Responsive */
|
|
||||||
@media (max-width: 1024px) {
|
|
||||||
.main-content {
|
|
||||||
flex-direction: column;
|
|
||||||
}
|
|
||||||
|
|
||||||
.sidebar {
|
|
||||||
width: 100%;
|
|
||||||
}
|
|
||||||
|
|
||||||
.charts-container {
|
|
||||||
grid-template-columns: 1fr;
|
|
||||||
}
|
|
||||||
|
|
||||||
.config-item-fields {
|
|
||||||
grid-template-columns: 1fr;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
@@ -1,2 +0,0 @@
flask>=2.3.0
flask-cors>=4.0.0
@@ -1,158 +0,0 @@
"""
NX Journal Script: Extract Expressions from .sim File

This script:
1. Opens a .sim file
2. Extracts all expressions from the .sim and loaded .prt files
3. Saves expression data to JSON for the dashboard

Usage:
    run_journal.exe extract_expressions.py <sim_file_path> <output_json_path>
"""

import sys
import json
import NXOpen


def extract_all_expressions(sim_file_path, output_file_path):
    """
    Extract all expressions from a .sim file and its loaded parts.

    Args:
        sim_file_path: Path to .sim file
        output_file_path: Path to save JSON output
    """
    try:
        # Get NX session
        session = NXOpen.Session.GetSession()

        # Open the .sim file
        print(f"Opening .sim file: {sim_file_path}")
        base_part, part_load_status = session.Parts.OpenBaseDisplay(sim_file_path)

        if part_load_status:
            part_load_status.Dispose()

        # Collect all expressions from all loaded parts
        all_expressions = {}

        # Start with the base part, then add loaded components
        parts_to_scan = [base_part]

        # Also scan all loaded components
        for component in base_part.ComponentAssembly.RootComponent.GetChildren():
            try:
                component_part = component.Prototype.OwningPart
                if component_part and component_part not in parts_to_scan:
                    parts_to_scan.append(component_part)
            except Exception:
                pass

        # Extract expressions from each part
        for part in parts_to_scan:
            part_name = part.Name
            print(f"Scanning expressions from: {part_name}")

            expressions_list = []

            # Get all expressions
            for expr in part.Expressions:
                try:
                    expr_data = {
                        'name': expr.Name,
                        'value': expr.Value,
                        'formula': expr.Equation,
                        'units': expr.Units,
                        'type': 'number',  # Most expressions are numeric
                        'source_part': part_name
                    }

                    # Flag design-variable candidates: plain values
                    # (not formulas) that can be changed directly
                    if '=' not in expr.Equation or expr.Equation.strip() == str(expr.Value):
                        expr_data['is_variable_candidate'] = True
                    else:
                        expr_data['is_variable_candidate'] = False

                    expressions_list.append(expr_data)

                except Exception as e:
                    print(f"Warning: Could not read expression {expr.Name}: {e}")
                    continue

            all_expressions[part_name] = expressions_list

        # Collect simulation metadata
        metadata = {
            'sim_file': sim_file_path,
            'base_part': base_part.Name,
            'num_components': len(parts_to_scan),
            'total_expressions': sum(len(exprs) for exprs in all_expressions.values()),
            'variable_candidates': sum(
                1 for exprs in all_expressions.values()
                for expr in exprs
                if expr.get('is_variable_candidate', False)
            )
        }

        # Prepare output
        output_data = {
            'metadata': metadata,
            'expressions_by_part': all_expressions
        }

        # Save to JSON
        print(f"Saving expressions to: {output_file_path}")
        with open(output_file_path, 'w') as f:
            json.dump(output_data, f, indent=2)

        print(f"Successfully extracted {metadata['total_expressions']} expressions")
        print(f"Found {metadata['variable_candidates']} potential design variables")

        # Close part
        base_part.Close(NXOpen.BasePart.CloseWholeTree.True,
                        NXOpen.BasePart.CloseModified.CloseModified, None)

        return True

    except Exception as e:
        error_data = {
            'error': str(e),
            'sim_file': sim_file_path
        }

        print(f"ERROR: {e}")

        with open(output_file_path, 'w') as f:
            json.dump(error_data, f, indent=2)

        return False


def main():
    """Main entry point for journal script."""
    if len(sys.argv) < 3:
        print("Usage: extract_expressions.py <sim_file_path> <output_json_path>")
        sys.exit(1)

    sim_file_path = sys.argv[1]
    output_file_path = sys.argv[2]

    print("=" * 60)
    print("NX Expression Extractor")
    print("=" * 60)

    success = extract_all_expressions(sim_file_path, output_file_path)

    if success:
        print("\nExpression extraction completed successfully!")
        sys.exit(0)
    else:
        print("\nExpression extraction failed!")
        sys.exit(1)


if __name__ == '__main__':
    main()
@@ -1,45 +0,0 @@
"""
Atomizer Dashboard Launcher

Simple script to start the dashboard server and open the browser.
"""

import sys
import webbrowser
import time
import threading
import subprocess
from pathlib import Path


def main():
    dashboard_dir = Path(__file__).parent
    api_script = dashboard_dir / 'api' / 'app.py'

    print("=" * 60)
    print("ATOMIZER DASHBOARD LAUNCHER")
    print("=" * 60)
    print("\nStarting dashboard server...")
    print("Dashboard will open at: http://localhost:8080")
    print("\nPress Ctrl+C to stop the server")
    print("=" * 60)

    # Give the user a moment to read
    time.sleep(2)

    # Open the browser after a short delay, from a background thread
    def open_browser():
        time.sleep(3)  # Wait for the server to start
        webbrowser.open('http://localhost:8080')

    browser_thread = threading.Thread(target=open_browser, daemon=True)
    browser_thread.start()

    # Start the Flask server
    try:
        subprocess.run([sys.executable, str(api_script)], cwd=str(dashboard_dir))
    except KeyboardInterrupt:
        print("\n\nShutting down dashboard server...")
        print("Goodbye!")


if __name__ == '__main__':
    main()
@@ -1,329 +0,0 @@
# Assembly FEM Optimization Workflow

This document describes the multi-part assembly FEM workflow used when optimizing complex assemblies with `.afm` (Assembly FEM) files.

## CRITICAL: Working Copy Requirement

**NEVER run optimization directly on the user's master model files.**

Before any optimization run, ALL model files must be copied to the study's working directory:

```
Source (NEVER MODIFY)                     Working Copy (optimization runs here)
────────────────────────────────────────────────────────────────────────────
C:/Users/.../M1-Gigabit/Latest/           studies/{study}/1_setup/model/
├── M1_Blank.prt                      →   ├── M1_Blank.prt
├── M1_Blank_fem1.fem                 →   ├── M1_Blank_fem1.fem
├── M1_Blank_fem1_i.prt               →   ├── M1_Blank_fem1_i.prt
├── M1_Vertical_Support_Skeleton.prt  →   ├── M1_Vertical_Support_Skeleton.prt
├── ASSY_M1_assyfem1.afm              →   ├── ASSY_M1_assyfem1.afm
└── ASSY_M1_assyfem1_sim1.sim         →   └── ASSY_M1_assyfem1_sim1.sim
```

**Why**: Optimization iteratively modifies expressions, meshes, and saves files. If corruption occurs during iteration (solver crash, bad parameter combo), the working copy can be deleted and re-copied. Master files remain safe.

**Files to Copy**:
- `*.prt` - All part files (geometry + idealized)
- `*.fem` - All FEM files
- `*.afm` - Assembly FEM files
- `*.sim` - Simulation files
- `*.exp` - Expression files (if any)
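The copy step above can be sketched in a few lines of Python. The function name `copy_model_files` and the exact directory layout are illustrative assumptions, not the project's actual API:

```python
# Sketch of the working-copy step: copy all NX model files into the
# study's working directory. Names here are illustrative, not the
# project's real helpers.
import shutil
from pathlib import Path

MODEL_EXTENSIONS = {'.prt', '.fem', '.afm', '.sim', '.exp'}

def copy_model_files(source_dir: str, study_model_dir: str) -> list:
    """Copy every NX model file into the study's working directory."""
    src = Path(source_dir)
    dst = Path(study_model_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in src.iterdir():
        if f.is_file() and f.suffix.lower() in MODEL_EXTENSIONS:
            shutil.copy2(f, dst / f.name)  # copy2 preserves timestamps
            copied.append(f.name)
    return sorted(copied)
```

Non-model files (logs, text notes) are deliberately left behind; only the extensions in the copy list above are mirrored.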
## Overview

Assembly FEMs have a more complex dependency chain than single-part simulations:

```
.prt (geometry) → _fem1.fem (component mesh) → .afm (assembly mesh) → .sim (solution)
```

Each level must be updated in sequence when design parameters change.

## When This Workflow Applies

This workflow is automatically triggered when:
- The working directory contains `.afm` files
- Multiple `.fem` files exist (component meshes)
- Multiple `.prt` files exist (component geometry)

Examples:
- M1 Mirror assembly (M1_Blank + M1_Vertical_Support_Skeleton)
- Multi-component mechanical assemblies
- Any NX assembly where components have separate FEM files
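One plausible reading of these trigger conditions, sketched as a standalone check. The real detection lives in `optimization_engine/solve_simulation.py` (`detect_assembly_fem()`) and may combine the criteria differently:

```python
# Hedged sketch of the auto-detection rule described above; treat the
# exact and/or combination of the criteria as an assumption.
from pathlib import Path

def needs_assembly_workflow(model_dir: str) -> bool:
    """True when the directory matches the assembly-FEM criteria."""
    d = Path(model_dir)
    has_afm = any(d.glob('*.afm'))
    multi_fem = len(list(d.glob('*.fem'))) > 1
    multi_prt = len(list(d.glob('*.prt'))) > 1
    return has_afm or (multi_fem and multi_prt)
```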
## The 4-Step Workflow

### Step 1: Update Expressions in Geometry Part (.prt)

```
Open M1_Blank.prt
├── Find and update design expressions
│   ├── whiffle_min = 42.5
│   ├── whiffle_outer_to_vertical = 75.0
│   └── inner_circular_rib_dia = 550.0
├── Rebuild geometry (DoUpdate)
└── Save part
```

The `.prt` file contains the parametric CAD model with expressions that drive dimensions. These expressions are updated with new design parameter values, then the geometry is rebuilt.

### Step 1b: Update ALL Linked Geometry Parts (CRITICAL!)

**⚠️ THIS STEP IS CRITICAL - SKIPPING IT CAUSES CORRUPT RESULTS ⚠️**

```
For each geometry part with linked expressions:
├── Open M1_Vertical_Support_Skeleton.prt
├── DoUpdate() - propagate linked expression changes
├── Geometry rebuilds to match M1_Blank
└── Save part
```

**Why this is critical:**
- M1_Vertical_Support_Skeleton has expressions linked to M1_Blank
- When M1_Blank geometry changes, the support skeleton MUST also update
- If not updated, FEM nodes will be at OLD positions → nodes not coincident → merge fails
- Result: "billion nm" RMS values (corrupt displacement data)

**Rule: YOU MUST UPDATE ALL GEOMETRY PARTS UNDER THE .sim FILE!**
- If there are 5 geometry parts, update all 5
- If there are 10 geometry parts, update all 10
- Unless explicitly told otherwise in the study config

### Step 2: Update Component FEM Files (.fem)

```
For each component FEM:
├── Open M1_Blank_fem1.fem
│   ├── UpdateFemodel() - regenerates mesh from updated geometry
│   └── Save FEM
├── Open M1_Vertical_Support_Skeleton_fem1.fem
│   ├── UpdateFemodel()
│   └── Save FEM
└── ... (repeat for all component FEMs)
```

Each component FEM is linked to its source geometry. `UpdateFemodel()` regenerates the mesh based on the updated geometry.

### Step 3: Update Assembly FEM (.afm)

```
Open ASSY_M1_assyfem1.afm
├── UpdateFemodel() - updates assembly mesh
├── Merge coincident nodes (at component interfaces)
├── Resolve labeling conflicts (duplicate node/element IDs)
└── Save AFM
```

The assembly FEM combines component meshes. This step:
- Reconnects meshes at shared interfaces
- Resolves numbering conflicts between component meshes
- Ensures mesh continuity for accurate analysis

### Step 4: Solve Simulation (.sim)

```
Open ASSY_M1_assyfem1_sim1.sim
├── Execute solve
│   ├── Foreground mode for all solutions
│   └── or Background mode for a specific solution
└── Save simulation
```

The simulation file references the assembly FEM and contains the solution setup (loads, constraints, subcases).

## File Dependencies

```
M1 Mirror Example:

M1_Blank.prt ─────────────────────> M1_Blank_fem1.fem ─────────┐
     │                                   │                     │
     │ (expressions)                     │ (component mesh)    │
     ↓                                   ↓                     │
M1_Vertical_Support_Skeleton.prt ──> M1_..._Skeleton_fem1.fem ─┤
                                                               │
                                                               ↓
                              ASSY_M1_assyfem1.afm ──> ASSY_M1_assyfem1_sim1.sim
                              (assembly mesh)          (solution)
```

## API Functions Used

| Step | NX API Call | Purpose |
|------|-------------|---------|
| 1 | `OpenBase()` | Open .prt file |
| 1 | `ImportFromFile()` | Import expressions from .exp file (preferred) |
| 1 | `DoUpdate()` | Rebuild geometry |
| 2-3 | `UpdateFemodel()` | Regenerate mesh from geometry |
| 3 | `DuplicateNodesCheckBuilder` | Merge coincident nodes at interfaces |
| 3 | `MergeOccurrenceNodes = True` | Critical: enables cross-component merge |
| 4 | `SolveAllSolutions()` | Execute FEA (Foreground mode recommended) |

### Expression Update Method

The recommended approach uses expression file import:

```python
# Write expressions to .exp file
with open(exp_path, 'w') as f:
    for name, value in expressions.items():
        unit = get_unit_for_expression(name)
        f.write(f"[{unit}]{name}={value}\n")

# Import into part
modified, errors = workPart.Expressions.ImportFromFile(
    exp_path,
    NXOpen.ExpressionCollection.ImportMode.Replace
)
```

This is more reliable than `EditExpressionWithUnits()` for batch updates.

## Error Handling

Common issues and solutions:

### "Update undo happened"
- Geometry update failed due to constraint violations
- Check that expression values are within valid ranges
- May need to adjust parameter bounds

### "This operation can only be done on the work part"
- Work part not properly set before the operation
- Use `SetWork()` to make the target part the work part

### Node merge warnings
- Manual intervention may be needed for complex interfaces
- Check mesh connectivity in NX after solve

### "Billion nm" RMS values
- Indicates node merging failed - coincident nodes not properly merged
- Check that `MergeOccurrenceNodes = True` is set
- Verify tolerance (0.01 mm recommended)
- Run node merge after every FEM update, not just once

## Configuration

The workflow auto-detects assembly FEMs, but you can configure behavior:

```json
{
  "nx_settings": {
    "expression_part": "M1_Blank",      // Override auto-detection
    "component_fems": [                 // Explicit list of FEMs to update
      "M1_Blank_fem1.fem",
      "M1_Vertical_Support_Skeleton_fem1.fem"
    ],
    "afm_file": "ASSY_M1_assyfem1.afm"
  }
}
```

## Implementation Reference

See `optimization_engine/solve_simulation.py` for the full implementation:

- `detect_assembly_fem()` - Detects if the assembly workflow is needed
- `update_expressions_in_part()` - Step 1 implementation
- `update_fem_part()` - Step 2 implementation
- `update_assembly_fem()` - Step 3 implementation
- `solve_simulation_file()` - Step 4 implementation

## HEEDS-Style Iteration Folder Management (V9+)

For complex assemblies, each optimization trial uses a fresh copy of the master model:

```
study_name/
├── 1_setup/
│   └── model/                 # Master model files (NEVER MODIFY)
│       ├── ASSY_M1.prt
│       ├── ASSY_M1_assyfem1.afm
│       ├── ASSY_M1_assyfem1_sim1.sim
│       ├── M1_Blank.prt
│       ├── M1_Blank_fem1.fem
│       └── ...
├── 2_iterations/
│   ├── iter0/                 # Trial 0 working copy
│   │   ├── [all model files]
│   │   ├── params.exp         # Expression values for this trial
│   │   └── results/           # OP2, Zernike CSV, etc.
│   ├── iter1/                 # Trial 1 working copy
│   └── ...
└── 3_results/
    └── study.db               # Optuna database
```

### Why Fresh Copies Per Iteration?

1. **Corruption isolation**: If mesh regeneration fails mid-trial, only that iteration is affected
2. **Reproducibility**: Any trial can be re-run using its params.exp
3. **Debugging**: All intermediate files are preserved for post-mortem analysis
4. **Parallelization**: Multiple NX sessions could run different iterations (future)

### Iteration Folder Contents

| File | Purpose |
|------|---------|
| `*.prt, *.fem, *.afm, *.sim` | Fresh copy of all NX model files |
| `params.exp` | Expression file with trial parameter values |
| `*-solution_1.op2` | Nastran results (after solve) |
| `results/zernike_trial_N.csv` | Extracted Zernike metrics |

### 0-Based Iteration Numbering

Iterations are numbered starting from 0 to match Optuna trial numbers:
- `iter0` = Optuna trial 0 = Dashboard shows trial 0
- `iter1` = Optuna trial 1 = Dashboard shows trial 1

This keeps cross-referencing between dashboard, database, and file system straightforward.
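The naming convention can be captured in a one-line helper; `iteration_dir` is an illustrative name, not part of the documented API:

```python
# Illustrative helper for the 0-based iterN naming convention: Optuna
# trial N lives in <study_root>/2_iterations/iterN.
from pathlib import Path

def iteration_dir(study_root: str, trial_number: int) -> Path:
    """Folder for a given Optuna trial, e.g. trial 0 -> 2_iterations/iter0."""
    return Path(study_root) / '2_iterations' / f'iter{trial_number}'
```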
## Multi-Subcase Solutions

For gravity analysis at multiple orientations, use subcases:

```
Simulation Setup in NX:
├── Subcase 1: 90 deg elevation (zenith/polishing)
├── Subcase 2: 20 deg elevation (low angle reference)
├── Subcase 3: 40 deg elevation
└── Subcase 4: 60 deg elevation
```

### Solving All Subcases

Use `solution_name=None` or `solve_all_subcases=True` to ensure all subcases are solved:

```json
"nx_settings": {
  "solution_name": "Solution 1",
  "solve_all_subcases": true
}
```

### Subcase ID Mapping

NX subcase IDs (1, 2, 3, 4) may not match the angle labels. Always define an explicit mapping:

```json
"zernike_settings": {
  "subcases": ["1", "2", "3", "4"],
  "subcase_labels": {
    "1": "90deg",
    "2": "20deg",
    "3": "40deg",
    "4": "60deg"
  },
  "reference_subcase": "2"
}
```
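Applying that mapping when naming outputs is a one-liner; `label_subcases` below is a hypothetical helper sketched from the config shape, not part of the documented API:

```python
# Sketch: resolve raw NX subcase IDs to human-readable angle labels
# using the zernike_settings mapping. Unlabeled IDs fall back to the
# raw ID so nothing is silently dropped.
def label_subcases(zernike_settings: dict) -> dict:
    """Map each configured subcase ID to its angle label."""
    labels = zernike_settings.get('subcase_labels', {})
    return {sc: labels.get(sc, sc) for sc in zernike_settings.get('subcases', [])}
```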
## Tips

1. **Start with a baseline solve**: Before optimization, manually verify the full workflow completes in NX
2. **Check mesh quality**: Poor mesh quality after updates can cause solve failures
3. **Monitor memory**: Assembly FEMs with many components use significant memory
4. **Use Foreground mode**: For multi-subcase solutions, Foreground mode ensures all subcases complete
5. **Validate OP2 data**: Check for corrupt results (all zeros, unrealistic magnitudes) before processing
6. **Preserve user NX sessions**: NXSessionManager tracks PIDs to avoid closing the user's NX instances
@@ -1,228 +0,0 @@
# Atomizer Dashboard Visualization Guide

## Overview

The Atomizer Dashboard provides real-time visualization of optimization studies with interactive charts, trial history, and study management. It supports two chart libraries:

- **Recharts** (default): Fast, lightweight, good for real-time updates
- **Plotly**: Interactive with zoom, pan, export - better for analysis

## Starting the Dashboard

```bash
# Quick start (both backend and frontend)
python start_dashboard.py

# Or manually:
cd atomizer-dashboard/backend && python -m uvicorn main:app --port 8000
cd atomizer-dashboard/frontend && npm run dev
```

Access at: http://localhost:3003

## Chart Components

### 1. Pareto Front Plot

Visualizes the trade-off between objectives in multi-objective optimization.

**Features:**
- 2D scatter plot for 2 objectives
- 3D view for 3+ objectives (Plotly only)
- Color differentiation: FEA (blue), NN (orange), Pareto (green)
- Axis selector for choosing which objectives to display
- Hover tooltips with trial details

**Usage:**
- Click points to select trials
- Use axis dropdowns to switch objectives
- Toggle 2D/3D view (Plotly mode)

### 2. Parallel Coordinates Plot

Shows relationships between all design variables and objectives simultaneously.

**Features:**
- Each vertical axis represents a variable or objective
- Lines connect values for each trial
- Brush filtering: drag on any axis to filter
- Color coding by trial source (FEA/NN/Pareto)

**Usage:**
- Drag on axes to create filters
- Double-click to reset filters
- Hover for trial details

### 3. Convergence Plot

Tracks optimization progress over time.

**Features:**
- Scatter points for each trial's objective value
- Step line showing best-so-far
- Range slider for zooming (Plotly)
- FEA vs NN differentiation

**Metrics Displayed:**
- Best value achieved
- Current trial value
- Total trial count

### 4. Parameter Importance Chart

Shows which design variables most influence the objective.

**Features:**
- Horizontal bar chart of correlation coefficients
- Color coding: Red (positive), Green (negative)
- Sortable by importance or name
- Pearson correlation calculation

**Interpretation:**
- Positive correlation: Higher parameter → Higher objective
- Negative correlation: Higher parameter → Lower objective
- |r| > 0.5: Strong influence
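The correlation behind the chart is standard Pearson r. A dependency-free sketch (the dashboard's own implementation may differ):

```python
# Pearson correlation coefficient between two equal-length series,
# as used conceptually by the parameter-importance chart.
import math

def pearson(xs, ys):
    """Return Pearson r; 0.0 when either series is constant."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0
```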
### 5. Expandable Charts

All charts support full-screen modal view:

**Features:**
- Click the expand icon to open the modal
- Larger view for detailed analysis
- Maintains all interactivity
- Close with X or click outside

## Chart Library Toggle

Switch between Recharts and Plotly using the header buttons:

| Feature | Recharts | Plotly |
|---------|----------|--------|
| Load Speed | Fast | Slower (lazy loaded) |
| Interactivity | Basic | Advanced |
| Export | Screenshot | PNG/SVG native |
| 3D Support | No | Yes |
| Real-time Updates | Better | Good |

**Recommendation:**
- Use Recharts during active optimization (real-time)
- Switch to Plotly for post-optimization analysis

## Study Management

### Study Selection

- Left sidebar shows all available studies
- Click to select and load data
- Badge shows study status (running/completed)

### Metrics Cards

Top row displays key metrics:
- **Trials**: Total completed trials
- **Best Value**: Best objective achieved
- **Pruned**: Trials pruned by the sampler

### Trial History

Bottom section shows trial details:
- Trial number and objective value
- Parameter values (expandable)
- Source indicator (FEA/NN)
- Sort by performance or chronological order

## Report Viewer

Access generated study reports:

1. Click the "View Report" button
2. Markdown is rendered with syntax highlighting
3. Supports tables, code blocks, math

## Console Output

Real-time log viewer:

- Shows optimization progress
- Error messages highlighted
- Auto-scroll to latest
- Collapsible panel

## API Endpoints

The dashboard uses these REST endpoints:

```
GET /api/optimization/studies                 # List all studies
GET /api/optimization/studies/{id}/status     # Study status
GET /api/optimization/studies/{id}/history    # Trial history
GET /api/optimization/studies/{id}/metadata   # Study config
GET /api/optimization/studies/{id}/pareto     # Pareto front
GET /api/optimization/studies/{id}/report     # Markdown report
GET /api/optimization/studies/{id}/console    # Log output
```

## WebSocket Updates

Real-time updates via WebSocket:

```
ws://localhost:8000/api/ws/optimization/{study_id}
```

Events:
- `trial_completed`: New trial finished
- `trial_pruned`: Trial was pruned
- `new_best`: New best value found
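Client-side handling of these three events amounts to a small state update. The `handle_event` shape below is an illustrative sketch, not the dashboard's actual reducer (which lives in the TypeScript frontend):

```python
# Illustrative dispatch for the three WebSocket event types; field
# names beyond 'type' (e.g. 'value') are assumptions.
def handle_event(event: dict, state: dict) -> dict:
    """Update a dashboard-style state dict from one WebSocket message."""
    kind = event.get('type')
    if kind == 'trial_completed':
        state['trials'] = state.get('trials', 0) + 1
    elif kind == 'trial_pruned':
        state['pruned'] = state.get('pruned', 0) + 1
    elif kind == 'new_best':
        state['best_value'] = event.get('value')
    return state
```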
## Performance Optimization

### For Large Studies (1000+ trials)

1. Use Recharts for real-time monitoring
2. Switch to Plotly for final analysis
3. Limit displayed trials in parallel coordinates

### Bundle Optimization

The dashboard uses:
- `plotly.js-basic-dist` (smaller bundle, ~1MB vs 3.5MB)
- Lazy loading for Plotly components
- Code splitting (vendor, recharts, plotly chunks)

## Troubleshooting

### Charts Not Loading

1. Check the backend is running (port 8000)
2. Verify the API proxy in vite.config.ts
3. Check the browser console for errors

### Slow Performance

1. Switch to Recharts mode
2. Reduce the trial history limit
3. Close unused browser tabs

### Missing Data

1. Verify study.db exists
2. Check the study has completed trials
3. Refresh the page after new trials

## Development

### Adding New Charts

1. Create the component in `src/components/`
2. Add a Plotly version in `src/components/plotly/`
3. Export it from `src/components/plotly/index.ts`
4. Add it to Dashboard.tsx with toggle logic

### Styling

Uses Tailwind CSS with a dark theme:
- Background: `dark-800`, `dark-900`
- Text: `dark-100`, `dark-200`
- Accent: `primary-500`, `primary-600`
@@ -1,471 +0,0 @@
|
|||||||
# LLM-Orchestrated Atomizer Workflow

## Core Philosophy

**Atomizer is LLM-first.** The user talks to Claude Code, describes what they want in natural language, and the LLM orchestrates everything:

- Interprets engineering intent
- Creates optimized configurations
- Sets up study structure
- Runs optimizations
- Generates reports
- Implements custom features

**The dashboard is for monitoring, not setup.**

---

## Architecture: Skills + Protocols + Validators

```
┌─────────────────────────────────────────────────────────────────────────┐
│                         USER (Natural Language)                         │
│  "I want to optimize this drone arm for weight while keeping it stiff"  │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────┐
│                      CLAUDE CODE (LLM Orchestrator)                     │
│                                                                         │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐    │
│  │    SKILLS    │ │  PROTOCOLS   │ │  VALIDATORS  │ │  KNOWLEDGE   │    │
│  │  (.claude/   │ │  (docs/06_)  │ │   (Python)   │ │   (docs/)    │    │
│  │  commands/)  │ │              │ │              │ │              │    │
│  └──────────────┘ └──────────────┘ └──────────────┘ └──────────────┘    │
│         │                │                │                │            │
│         └────────────────┴────────────────┴────────────────┘            │
│                                  │                                      │
│                        ORCHESTRATION LOGIC                              │
│                (Intent → Plan → Execute → Validate)                     │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────┐
│                             ATOMIZER ENGINE                             │
│                                                                         │
│  ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐        │
│  │   Config    │ │   Runner    │ │ Extractors  │ │   Reports   │        │
│  │  Generator  │ │  (FEA/NN)   │ │  (OP2/CAD)  │ │  Generator  │        │
│  └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘        │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────┐
│                         OUTPUTS (User-Visible)                          │
│                                                                         │
│  • study/1_setup/optimization_config.json   (config)                    │
│  • study/2_results/study.db                 (optimization data)         │
│  • reports/                                 (visualizations)            │
│  • Dashboard at localhost:3000              (live monitoring)           │
└─────────────────────────────────────────────────────────────────────────┘
```

---

## The Three Pillars

### 1. SKILLS (What LLM Can Do)
Location: `.claude/skills/*.md`

Skills are **instruction sets** that tell Claude Code how to perform specific tasks with high rigor. They're like recipes that ensure consistency.

```
.claude/skills/
├── create-study.md          # Create new optimization study
├── analyze-model.md         # Analyze NX model for optimization
├── configure-surrogate.md   # Setup NN surrogate settings
├── generate-report.md       # Create performance reports
├── troubleshoot.md          # Debug common issues
└── extend-feature.md        # Add custom functionality
```

### 2. PROTOCOLS (How To Do It Right)
Location: `docs/06_PROTOCOLS_DETAILED/`

Protocols are **step-by-step procedures** that define the correct sequence for complex operations. They ensure rigor and reproducibility.

```
docs/06_PROTOCOLS_DETAILED/
├── PROTOCOL_01_STUDY_SETUP.md
├── PROTOCOL_02_MODEL_VALIDATION.md
├── PROTOCOL_03_OPTIMIZATION_RUN.md
├── PROTOCOL_11_MULTI_OBJECTIVE.md
├── PROTOCOL_12_HYBRID_SURROGATE.md
└── LLM_ORCHESTRATED_WORKFLOW.md (this file)
```

### 3. VALIDATORS (Verify It's Correct)
Location: `optimization_engine/validators/`

Validators are **Python modules** that check configurations, outputs, and state. They catch errors before they cause problems.

```python
# Example: optimization_engine/validators/config_validator.py
def validate_optimization_config(config: dict) -> ValidationResult:
    """Ensure config is valid before running."""
    errors = []
    warnings = []

    # Check required fields
    if 'design_variables' not in config:
        errors.append("Missing design_variables")

    # Check bounds make sense
    for var in config.get('design_variables', []):
        if var['bounds'][0] >= var['bounds'][1]:
            errors.append(f"{var['parameter']}: min >= max")

    return ValidationResult(errors, warnings)
```
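
The `ValidationResult` type is not shown in this excerpt; a minimal self-contained stand-in (an assumption, not the engine's real class) makes the example runnable end to end:

```python
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    """Minimal stand-in for the real result type used by the validators."""
    errors: list = field(default_factory=list)
    warnings: list = field(default_factory=list)

    @property
    def ok(self) -> bool:
        return not self.errors

def validate_optimization_config(config: dict) -> ValidationResult:
    """Same checks as the excerpt above, runnable as-is."""
    errors = []
    warnings = []
    if 'design_variables' not in config:
        errors.append("Missing design_variables")
    for var in config.get('design_variables', []):
        if var['bounds'][0] >= var['bounds'][1]:
            errors.append(f"{var['parameter']}: min >= max")
    return ValidationResult(errors, warnings)

# A config with inverted bounds fails validation:
bad = {"design_variables": [{"parameter": "wall_thickness", "bounds": [4.0, 1.0]}]}
result = validate_optimization_config(bad)
```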

---

## Master Skill: `/create-study`

This is the primary entry point. When the user says "I want to optimize X", this skill orchestrates everything.

### Skill File: `.claude/skills/create-study.md`

```markdown
# Create Study Skill

## Trigger
User wants to create a new optimization study.

## Required Information (Gather via conversation)

### 1. Model Information
- [ ] NX model file location (.prt)
- [ ] Simulation file (.sim)
- [ ] FEM file (.fem)
- [ ] Analysis types (static, modal, buckling, etc.)

### 2. Engineering Goals
- [ ] What to optimize (minimize mass, maximize stiffness, etc.)
- [ ] Target values (if any)
- [ ] Constraints (max stress, min frequency, etc.)
- [ ] Engineering context (what is this part for?)

### 3. Design Variables
- [ ] Which parameters can change
- [ ] Bounds for each (min/max)
- [ ] Integer vs continuous

### 4. Optimization Settings
- [ ] Number of trials
- [ ] Single vs multi-objective
- [ ] Enable NN surrogate? (recommend for >50 trials)

## Execution Steps

### Step 1: Analyze Model
Read the NX model to:
- Extract existing expressions (potential design variables)
- Identify geometry features
- Check simulation setup

### Step 2: Generate Configuration
Create optimization_config.json with:
- All gathered information
- Sensible defaults for missing info
- Appropriate protocol selection

### Step 3: Validate Configuration
Run config validator to check:
- All required fields present
- Bounds are reasonable
- Extraction methods are correct

### Step 4: Create Study Structure
```
studies/{study_name}/
├── 1_setup/
│   ├── model/                   # Copy of NX files
│   ├── optimization_config.json
│   └── workflow_config.json
├── 2_results/
└── README.md
```

### Step 5: Confirm with User
Show summary:
- Design variables and bounds
- Objectives and constraints
- Recommended settings
- Estimated time

### Step 6: Start Optimization (if confirmed)
Run the optimization with appropriate flags.

## Error Handling
- If model not found: Ask for correct path
- If expressions not found: List available ones
- If config invalid: Show errors, suggest fixes

## Output
- Valid study folder structure
- Validated optimization_config.json
- README with study description
- Optional: Start optimization run
```
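
Step 4's folder layout can be sketched in a few lines of Python; `create_study_structure` is an illustrative helper, not the engine's actual API:

```python
from pathlib import Path
import tempfile

def create_study_structure(root: Path, study_name: str) -> Path:
    """Create the Step 4 skeleton (the config JSONs are written later by the generator)."""
    study = root / "studies" / study_name
    (study / "1_setup" / "model").mkdir(parents=True, exist_ok=True)
    (study / "2_results").mkdir(parents=True, exist_ok=True)
    (study / "README.md").touch()
    return study

# Demo in a throwaway directory:
demo_root = Path(tempfile.mkdtemp())
study_dir = create_study_structure(demo_root, "bracket_weight_optimization")
```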

---

## Conversation Flow Example

```
USER: I want to optimize this bracket for weight. The model is at
      examples/Models/Bracket/bracket.prt

CLAUDE: I'll help you set up an optimization study for the bracket. Let me
        first analyze your model to understand its structure.

[Reads model, extracts expressions]

I found these parametric expressions in your model:
- wall_thickness (current: 2mm)
- fillet_radius (current: 3mm)
- hole_diameter (current: 10mm)
- rib_count (current: 4)

Which of these should be design variables for optimization?

USER: All of them. Wall thickness 1-4mm, fillet 2-8mm, hole 5-20mm,
      ribs 2-6.

CLAUDE: Got it. Now for the engineering requirements:

1. What's the loading condition? (I see a static analysis in your sim)
2. Any stress limits? (Material yield strength?)
3. Any displacement limits?
4. Is there a target weight you're trying to achieve?

USER: It's holding a 50kg load. Aluminum 6061, so keep stress under 200 MPa.
      Max deflection 0.5mm. Want it as light as possible.

CLAUDE: Perfect. Here's your optimization setup:

**Study: bracket_weight_optimization**

**Design Variables:**
| Parameter      | Min | Max | Type    |
|----------------|-----|-----|---------|
| wall_thickness | 1   | 4   | float   |
| fillet_radius  | 2   | 8   | float   |
| hole_diameter  | 5   | 20  | float   |
| rib_count      | 2   | 6   | integer |

**Objective:** Minimize mass

**Constraints:**
- Max von Mises stress < 200 MPa
- Max displacement < 0.5 mm

**Settings:**
- Protocol: Single-objective constrained
- Trials: 50 (recommended for 4 variables)
- Sampler: TPE (good for constrained problems)

Should I create this study? I can also enable neural network
acceleration if you want more than 50 trials.

USER: Yes, create it. And enable the NN thing, let's do 200 trials.

CLAUDE: Creating study with NN surrogate enabled...

[Creates folder structure]
[Generates optimization_config.json]
[Validates configuration]
[Copies model files]

Study created at: studies/bracket_weight_optimization/

Configuration validated successfully:
✓ All design variables have valid bounds
✓ Extraction methods configured
✓ Surrogate settings enabled

Ready to start optimization. The workflow will:
1. Run 30 initial FEA trials
2. Generate 100 space-filling training points
3. Train NN surrogate
4. Run 170 NN-accelerated trials with FEA validation

Start now? (You can monitor at localhost:3000)

USER: Yes, start it.

CLAUDE: Starting optimization...

[Runs optimization in background]

Optimization running (ID: bracket_weight_optimization)
- Monitor: http://localhost:3000
- Optuna Dashboard: http://localhost:8081

I'll notify you when Stage 1 (initial FEA) completes.
```

---

## Skill Library

### Core Skills (Must Have)

| Skill | Trigger | Purpose |
|-------|---------|---------|
| `/create-study` | "optimize", "new study" | Create optimization from scratch |
| `/analyze-model` | "look at model", "what can I optimize" | Extract model info |
| `/run-optimization` | "start", "run" | Execute optimization |
| `/check-status` | "how's it going", "progress" | Report on running studies |
| `/generate-report` | "report", "results" | Create visualizations |

### Advanced Skills (For Power Users)

| Skill | Trigger | Purpose |
|-------|---------|---------|
| `/configure-surrogate` | "neural network", "surrogate" | Setup NN acceleration |
| `/add-constraint` | "add constraint" | Modify existing study |
| `/compare-studies` | "compare" | Cross-study analysis |
| `/export-results` | "export", "pareto" | Export optimal designs |
| `/troubleshoot` | "error", "failed" | Debug issues |

### Custom Skills (Project-Specific)

Users can create their own skills for recurring tasks:
```
.claude/skills/
├── my-bracket-setup.md    # Pre-configured bracket optimization
├── thermal-analysis.md    # Custom thermal workflow
└── batch-runner.md        # Run multiple studies
```

---

## Implementation Approach

### Phase 1: Foundation (Current)
- [x] Basic skill system (create-study.md exists)
- [x] Config validation
- [x] Manual protocol following
- [ ] **Formalize skill structure**
- [ ] **Create skill template**

### Phase 2: Skill Library
- [ ] Implement all core skills
- [ ] Add protocol references in skills
- [ ] Create skill chaining (one skill calls another)
- [ ] Add user confirmation checkpoints

### Phase 3: Validators
- [ ] Config validator (comprehensive)
- [ ] Model validator (check NX setup)
- [ ] Results validator (check outputs)
- [ ] State validator (check study health)

### Phase 4: Knowledge Integration
- [ ] Physics knowledge base queries
- [ ] Similar study lookup
- [ ] Transfer learning suggestions
- [ ] Best practices recommendations

---

## Skill Template

Every skill should follow this structure:

```markdown
# Skill Name

## Purpose
What this skill accomplishes.

## Triggers
Keywords/phrases that activate this skill.

## Prerequisites
What must be true before running.

## Information Gathering
Questions to ask user (with defaults).

## Protocol Reference
Link to detailed protocol in docs/06_PROTOCOLS_DETAILED/

## Execution Steps
1. Step one (with validation)
2. Step two (with validation)
3. ...

## Validation Checkpoints
- After step X, verify Y
- Before step Z, check W

## Error Handling
- Error type 1: Recovery action
- Error type 2: Recovery action

## User Confirmations
Points where user approval is needed.

## Outputs
What gets created/modified.

## Next Steps
What to suggest after completion.
```

---

## Key Principles

### 1. Conversation > Configuration
Don't ask the user to edit JSON. Have a conversation, then generate the config.

### 2. Validation at Every Step
Never proceed with invalid state. Check before, during, and after.

### 3. Sensible Defaults
Provide good defaults so the user only specifies what they care about.

### 4. Explain Decisions
When making choices (sampler, n_trials, etc.), explain why.

### 5. Graceful Degradation
If something fails, recover gracefully with a clear explanation.

### 6. Progressive Disclosure
Start simple, offer complexity only when needed.

---

## Integration with Dashboard

The dashboard complements LLM interaction:

| LLM Handles | Dashboard Handles |
|-------------|-------------------|
| Study setup | Live monitoring |
| Configuration | Progress visualization |
| Troubleshooting | Results exploration |
| Reports | Pareto front interaction |
| Custom features | Historical comparison |

**The LLM creates, the dashboard observes.**

---

## Next Steps

1. **Formalize Skill Structure**: Create template that all skills follow
2. **Implement Core Skills**: Start with create-study, analyze-model
3. **Add Validators**: Python modules for each validation type
4. **Test Conversation Flows**: Verify natural interaction patterns
5. **Build Skill Chaining**: Allow skills to call other skills

---

*Document Version: 1.0*
*Created: 2025-11-25*
*Philosophy: Talk to the LLM, not the dashboard*
@@ -1,251 +0,0 @@
# NX Multi-Solution Solve Protocol

## Critical Finding: SolveAllSolutions API Required for Multi-Solution Models

**Date**: November 23, 2025
**Last Updated**: November 23, 2025
**Protocol**: Multi-Solution Nastran Solve
**Affected Models**: Any NX simulation with multiple solutions (e.g., static + modal, thermal + structural)

---

## Problem Statement

When an NX simulation contains multiple solutions (e.g., Solution 1 = Static Analysis, Solution 2 = Modal Analysis), using `SolveChainOfSolutions()` with Background mode **does not wait for all solutions to complete** before returning control to Python. This causes:

1. **Missing OP2 Files**: Only the first solution's OP2 file is generated
2. **Stale Data**: Subsequent trials read old OP2 files from previous runs
3. **Identical Results**: All trials show the same values for results from missing solutions
4. **Silent Failures**: No error is raised; the solve completes but files are not written

### Example Scenario

**Drone Gimbal Arm Optimization**:
- Solution 1: Static analysis (stress, displacement)
- Solution 2: Modal analysis (frequency)

**Symptoms**:
- All 100 trials showed **identical frequency** (27.476 Hz)
- Only `beam_sim1-solution_1.op2` was created
- `beam_sim1-solution_2.op2` was never regenerated after Trial 0
- Both `.dat` files were written correctly, but the solve didn't wait for completion

---

## Root Cause

```python
# WRONG APPROACH (doesn't wait for completion)
psolutions1 = []
solution_idx = 1
while True:
    solution_obj_name = f"Solution[Solution {solution_idx}]"
    simSolution = simSimulation1.FindObject(solution_obj_name)
    if simSolution:
        psolutions1.append(simSolution)
        solution_idx += 1
    else:
        break

theCAESimSolveManager.SolveChainOfSolutions(
    psolutions1,
    NXOpen.CAE.SimSolution.SolveOption.Solve,
    NXOpen.CAE.SimSolution.SetupCheckOption.CompleteDeepCheckAndOutputErrors,
    NXOpen.CAE.SimSolution.SolveMode.Background  # ❌ Returns immediately!
)
```

**Issue**: Background mode runs asynchronously and returns control to Python before all solutions finish solving.

---

## Correct Solution

### For Solving All Solutions

Use the `SolveAllSolutions()` API with **Foreground mode**:

```python
# CORRECT APPROACH (waits for completion)
if solution_name:
    # Solve specific solution in background mode
    solution_obj_name = f"Solution[{solution_name}]"
    simSolution1 = simSimulation1.FindObject(solution_obj_name)
    psolutions1 = [simSolution1]

    numsolutionssolved1, numsolutionsfailed1, numsolutionsskipped1 = theCAESimSolveManager.SolveChainOfSolutions(
        psolutions1,
        NXOpen.CAE.SimSolution.SolveOption.Solve,
        NXOpen.CAE.SimSolution.SetupCheckOption.CompleteDeepCheckAndOutputErrors,
        NXOpen.CAE.SimSolution.SolveMode.Background
    )
else:
    # Solve ALL solutions using SolveAllSolutions API (Foreground mode)
    # This ensures all solutions (static + modal, etc.) complete before returning
    print(f"[JOURNAL] Solving all solutions using SolveAllSolutions API (Foreground mode)...")

    numsolutionssolved1, numsolutionsfailed1, numsolutionsskipped1 = theCAESimSolveManager.SolveAllSolutions(
        NXOpen.CAE.SimSolution.SolveOption.Solve,
        NXOpen.CAE.SimSolution.SetupCheckOption.CompleteCheckAndOutputErrors,
        NXOpen.CAE.SimSolution.SolveMode.Foreground,  # ✅ Blocks until complete
        False
    )
```

### Key Differences

| Aspect | SolveChainOfSolutions | SolveAllSolutions |
|--------|-----------------------|-------------------|
| **Manual enumeration** | Required (loop through solutions) | Automatic (handles all solutions) |
| **Background mode behavior** | Returns immediately, async | N/A (Foreground recommended) |
| **Foreground mode behavior** | Blocks until complete | Blocks until complete ✅ |
| **Use case** | Specific solution selection | Solve all solutions |

---

## Implementation Location

**File**: `optimization_engine/solve_simulation.py`
**Lines**: 271-295

**When to use this protocol**:
- When `solution_name=None` is passed to `NXSolver.run_simulation()`
- Any simulation with multiple solutions that must all complete
- Multi-objective optimization requiring results from different analysis types

---

## Verification Steps

After implementing the fix, verify:

1. **Both .dat files are written** (one per solution)
   ```
   beam_sim1-solution_1.dat  # Static analysis
   beam_sim1-solution_2.dat  # Modal analysis
   ```

2. **Both .op2 files are created** with updated timestamps
   ```
   beam_sim1-solution_1.op2  # Contains stress, displacement
   beam_sim1-solution_2.op2  # Contains eigenvalues, mode shapes
   ```

3. **Results are unique per trial**: check that frequency values vary across trials

4. **Journal log shows**:
   ```
   [JOURNAL] Solving all solutions using SolveAllSolutions API (Foreground mode)...
   [JOURNAL] Solve completed!
   [JOURNAL] Solutions solved: 2
   ```
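
Check 2 (fresh OP2 timestamps) is easy to automate: record a timestamp before the solve and flag any `.op2` file whose modification time predates it. A sketch with illustrative names, not part of the engine:

```python
import os
import tempfile
import time
from pathlib import Path

def stale_op2_files(working_dir: str, solve_start: float) -> list:
    """Return names of .op2 files NOT rewritten since solve_start (stale results)."""
    return sorted(
        p.name
        for p in Path(working_dir).glob("*.op2")
        if os.path.getmtime(p) < solve_start
    )

# Demo: a file written now is "stale" relative to a future solve start,
# and fresh relative to a past one. Around a real solve you would do:
#   t0 = time.time(); ...solve...; assert not stale_op2_files(model_dir, t0)
demo_dir = tempfile.mkdtemp()
(Path(demo_dir) / "beam_sim1-solution_2.op2").write_bytes(b"")
```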

---

## Solution Monitor Window Control (November 24, 2025)

### Problem: Monitor Window Pile-Up

When running optimization studies with multiple trials, NX opens a solution monitor window for each trial. These windows:
- Superpose on top of each other
- Cannot be easily closed programmatically
- Cause usability issues during long optimization runs
- Slow down the optimization process

### Solution: Automatic Monitor Disabling

The solution monitor is now automatically disabled when solving multiple solutions (when `solution_name=None`).

**Implementation**: `optimization_engine/solve_simulation.py` lines 271-295

```python
# CRITICAL: Disable solution monitor when solving multiple solutions
# This prevents NX from opening multiple monitor windows which superpose and cause usability issues
if not solution_name:
    print("[JOURNAL] Disabling solution monitor for all solutions to prevent window pile-up...")
    try:
        # Get all solutions in the simulation
        solutions_disabled = 0
        solution_num = 1
        while True:
            try:
                solution_obj_name = f"Solution[Solution {solution_num}]"
                simSolution = simSimulation1.FindObject(solution_obj_name)
                if simSolution:
                    propertyTable = simSolution.SolverOptionsPropertyTable
                    propertyTable.SetBooleanPropertyValue("solution monitor", False)
                    solutions_disabled += 1
                    solution_num += 1
                else:
                    break
            except Exception:
                break  # No more solutions
        print(f"[JOURNAL] Solution monitor disabled for {solutions_disabled} solution(s)")
    except Exception as e:
        print(f"[JOURNAL] WARNING: Could not disable solution monitor: {e}")
        print(f"[JOURNAL] Continuing with solve anyway...")
```

**When this activates**:
- Automatically when `solution_name=None` (solve all solutions mode)
- For any study with multiple trials (typical optimization scenario)
- No user configuration required

**User-recorded journal**: `nx_journals/user_generated_journals/journal_monitor_window_off.py`

---

## Related Issues Fixed

1. **All trials showing identical frequency**: Fixed by ensuring the modal solution runs
2. **Only one data point in dashboard**: Fixed once all trials succeed
3. **Parallel coordinates with NaN**: Fixed by having complete data from all solutions
4. **Solution monitor windows piling up**: Fixed by automatically disabling the monitor for multi-solution runs

---

## References

- **User's Example**: `nx_journals/user_generated_journals/journal_solve_all_solution.py` (line 27)
- **NX Open Documentation**: `SimSolveManager.SolveAllSolutions()` method
- **Implementation**: `optimization_engine/solve_simulation.py`

---

## Best Practices

1. **Always use Foreground mode** when solving all solutions
2. **Verify OP2 timestamp changes** to ensure fresh solves
3. **Check solve counts** in journal output to confirm both solutions ran
4. **Test with 5 trials** before running large optimizations
5. **Monitor unique frequency values** as a smoke test for multi-solution models

---

## Example Use Cases

### ✅ Correct Usage

```python
# Multi-objective optimization with static + modal
result = nx_solver.run_simulation(
    sim_file=sim_file,
    working_dir=model_dir,
    expression_updates=design_vars,
    solution_name=None  # Solve ALL solutions
)
```

### ❌ Incorrect Usage (Don't Do This)

```python
# Running modal separately: inefficient and error-prone
result1 = nx_solver.run_simulation(..., solution_name="Solution 1")  # Static
result2 = nx_solver.run_simulation(..., solution_name="Solution 2")  # Modal
# This doubles the solve time and requires managing two result objects
```

---

**Status**: ✅ Implemented and Verified
**Impact**: Critical for all multi-solution optimization workflows
@@ -1,278 +0,0 @@
# Protocol 13: Adaptive Multi-Objective Optimization

## Overview

Protocol 13 implements an adaptive multi-objective optimization strategy that combines:
- **FEA (Finite Element Analysis)** for ground-truth simulations
- **Neural Network Surrogates** for rapid exploration
- **Iterative refinement** with periodic retraining

This protocol is ideal for expensive simulations where each FEA run takes significant time (minutes to hours), but you need to explore a large design space efficiently.

## When to Use Protocol 13

| Scenario | Recommended |
|----------|-------------|
| FEA takes > 5 minutes per run | Yes |
| Need to explore > 100 designs | Yes |
| Multi-objective optimization (2-4 objectives) | Yes |
| Single objective, fast FEA (< 1 min) | No, use Protocol 10/11 |
| Highly nonlinear response surfaces | Yes, with more FEA samples |

## Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                   Adaptive Optimization Loop                    │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Iteration 1:                                                   │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐       │
│  │ Initial FEA  │ -> │   Train NN   │ -> │  NN Search   │       │
│  │   (50-100)   │    │  Surrogate   │    │ (1000 trials)│       │
│  └──────────────┘    └──────────────┘    └──────────────┘       │
│                                                 │               │
│                                                 v               │
│  Iteration 2+:                                                  │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐       │
│  │ Validate Top │ -> │  Retrain NN  │ -> │  NN Search   │       │
│  │ NN with FEA  │    │ with new data│    │ (1000 trials)│       │
│  └──────────────┘    └──────────────┘    └──────────────┘       │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
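
The loop above can be sketched end to end with stand-ins: a cheap quadratic plays the role of FEA and a nearest-neighbour lookup plays the surrogate. Everything here (the 1-D problem, function names, trial counts) is illustrative, not the engine's implementation:

```python
import random

def fea(x: float) -> float:
    """Stand-in for an expensive simulation (true optimum at x = 7)."""
    return (x - 7.0) ** 2

def surrogate_predict(x: float, data: list) -> float:
    """Toy surrogate: predict the value of the nearest sampled point."""
    nearest = min(data, key=lambda d: abs(d[0] - x))
    return nearest[1]

random.seed(0)
data = [(x, fea(x)) for x in (5.0, 8.0, 12.0)]                    # initial FEA sampling
for iteration in range(5):                                        # adaptive iterations
    candidates = [random.uniform(5.0, 15.0) for _ in range(200)]  # cheap surrogate search
    best = min(candidates, key=lambda x: surrogate_predict(x, data))
    data.append((best, fea(best)))                                # validate top pick with FEA
best_x, best_y = min(data, key=lambda d: d[1])
```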

## Configuration

### optimization_config.json

```json
{
  "study_name": "my_adaptive_study",
  "protocol": 13,

  "adaptive_settings": {
    "enabled": true,
    "initial_fea_trials": 50,
    "nn_trials_per_iteration": 1000,
    "fea_validation_per_iteration": 5,
    "max_iterations": 10,
    "convergence_threshold": 0.01,
    "retrain_epochs": 100
  },

  "objectives": [
    {
      "name": "thermal_40_vs_20",
      "direction": "minimize",
      "weight": 1.0
    },
    {
      "name": "thermal_60_vs_20",
      "direction": "minimize",
      "weight": 0.5
    },
    {
      "name": "manufacturability",
      "direction": "minimize",
      "weight": 0.3
    }
  ],

  "design_variables": [
    {
      "name": "rib_thickness",
      "expression_name": "rib_thickness",
      "min": 5.0,
      "max": 15.0,
      "baseline": 10.0
    }
  ],

  "surrogate_settings": {
    "enabled": true,
    "model_type": "neural_network",
    "hidden_layers": [128, 64, 32],
    "learning_rate": 0.001,
    "batch_size": 32
  }
}
```

### Key Parameters

| Parameter | Description | Recommended |
|-----------|-------------|-------------|
| `initial_fea_trials` | FEA runs before first NN training | 50-100 |
| `nn_trials_per_iteration` | NN-predicted trials per iteration | 500-2000 |
| `fea_validation_per_iteration` | Top NN trials validated with FEA | 3-10 |
| `max_iterations` | Maximum adaptive iterations | 5-20 |
| `convergence_threshold` | Stop if improvement < threshold | 0.01 (1%) |
|
|
||||||
|
|
||||||
## Workflow

### Phase 1: Initial FEA Sampling

```python
# Generates space-filling Latin Hypercube samples
# Runs FEA on each sample
# Stores results in Optuna database with source='FEA'
```

### Phase 2: Neural Network Training

```python
# Extracts all FEA trials from database
# Normalizes inputs (design variables) to [0, 1]
# Trains multi-output neural network
# Validates on held-out set (20%)
```

### Phase 3: NN-Accelerated Search

```python
# Uses trained NN as objective function
# Runs NSGA-II with 1000+ trials (fast, ~ms per trial)
# Identifies Pareto-optimal candidates
# Stores predictions with source='NN'
```

### Phase 4: FEA Validation

```python
# Selects top N NN predictions
# Runs actual FEA on these candidates
# Updates database with ground truth
# Checks for improvement
```

### Phase 5: Iteration

```python
# If improved: retrain NN with new FEA data
# If converged: stop and report best
# Otherwise: continue to next iteration
```

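The five phases above can be sketched as a single loop. This is a toy stand-in to show the control flow only, not the project's implementation: the "FEA" is a quadratic, the "NN surrogate" a nearest-neighbour lookup, and the initial sampling is plain random instead of Latin Hypercube.

```python
import random

def adaptive_loop(run_fea, train_nn, nn_search, cfg):
    # Phase 1: initial FEA sampling (random here for brevity)
    data = [run_fea(random.random()) for _ in range(cfg["initial_fea_trials"])]
    best = min(y for _, y in data)
    for _ in range(cfg["max_iterations"]):
        model = train_nn(data)                                    # Phase 2: (re)train surrogate
        cands = nn_search(model, cfg["nn_trials_per_iteration"])  # Phase 3: cheap NN search
        top = sorted(cands, key=model)[:cfg["fea_validation_per_iteration"]]
        data += [run_fea(x) for x in top]                         # Phase 4: validate with FEA
        new_best = min(y for _, y in data)                        # Phase 5: convergence check
        converged = best - new_best < cfg["convergence_threshold"]
        best = new_best
        if converged:
            break
    return best

# Toy stand-ins so the sketch runs end to end
random.seed(0)
fea = lambda x: (x, (x - 0.3) ** 2)

def toy_train(data):
    return lambda x: min((x - xi) ** 2 + yi for xi, yi in data)

def toy_search(model, n):
    return [random.random() for _ in range(n)]

cfg = {"initial_fea_trials": 10, "nn_trials_per_iteration": 200,
       "fea_validation_per_iteration": 3, "max_iterations": 5,
       "convergence_threshold": 1e-4}
best = adaptive_loop(fea, toy_train, toy_search, cfg)
```

The real loop differs mainly in cost: each `run_fea` call is minutes to hours, which is why the validation budget per iteration is kept small.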
## Output Files

```
studies/my_study/
├── 3_results/
│   ├── study.db                 # Optuna database (all trials)
│   ├── adaptive_state.json      # Current iteration state
│   ├── surrogate_model.pt       # Trained neural network
│   ├── training_history.json    # NN training metrics
│   └── STUDY_REPORT.md          # Generated summary report
```

### adaptive_state.json

```json
{
  "iteration": 3,
  "total_fea_count": 103,
  "total_nn_count": 3000,
  "best_weighted": 1.456,
  "best_params": {
    "rib_thickness": 8.5,
    "...": "..."
  },
  "history": [
    {"iteration": 1, "fea_count": 50, "nn_count": 1000, "improved": true},
    {"iteration": 2, "fea_count": 55, "nn_count": 2000, "improved": true},
    {"iteration": 3, "fea_count": 103, "nn_count": 3000, "improved": false}
  ]
}
```

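The `history` list makes stop rules easy to script against. A minimal sketch (the JSON literal mirrors the example above; in practice the state would be read from `adaptive_state.json`):

```python
import json

state = json.loads("""
{
  "iteration": 3,
  "history": [
    {"iteration": 1, "improved": true},
    {"iteration": 2, "improved": true},
    {"iteration": 3, "improved": false}
  ]
}
""")

def stalled_iterations(history):
    """Count consecutive non-improving iterations at the tail of the run."""
    n = 0
    for entry in reversed(history):
        if entry["improved"]:
            break
        n += 1
    return n

stalled = stalled_iterations(state["history"])
```

A convergence monitor would stop the study once `stalled` reaches the 2-3 iteration limit suggested under Best Practices.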
## Dashboard Integration

Protocol 13 studies display in the Atomizer dashboard with:

- **FEA vs NN Differentiation**: Blue circles for FEA, orange crosses for NN
- **Pareto Front Highlighting**: Green markers for Pareto-optimal solutions
- **Convergence Plot**: Shows optimization progress with best-so-far line
- **Parallel Coordinates**: Filter and explore the design space
- **Parameter Importance**: Correlation-based sensitivity analysis

## Best Practices

### 1. Initial Sampling Strategy

Use Latin Hypercube Sampling (LHS) for initial FEA trials to ensure good coverage. Optuna has no built-in LHS sampler, so generate the initial design with SciPy and map it onto the variable bounds:

```python
from scipy.stats import qmc

lhs = qmc.LatinHypercube(d=n_design_vars, seed=42)
unit_samples = lhs.random(n=50)  # shape (50, n_design_vars), values in [0, 1)
```

### 2. Neural Network Architecture

For most problems, start with:

- 2-3 hidden layers
- 64-128 neurons per layer
- ReLU activation
- Adam optimizer with lr=0.001

### 3. Validation Strategy

Always validate top NN predictions with FEA before trusting them:

- NN predictions can be wrong in unexplored regions
- FEA validation provides ground truth
- More FEA = more accurate NN (trade-off with time)

### 4. Convergence Criteria

Stop when:

- No improvement for 2-3 consecutive iterations
- Reached FEA budget limit
- Objective improvement < 1% threshold

## Example: M1 Mirror Optimization

```bash
# Start adaptive optimization
cd studies/m1_mirror_adaptive_V11
python run_optimization.py --start

# Monitor progress
python run_optimization.py --status

# Generate report
python generate_report.py
```

Results after 3 iterations:

- 103 FEA trials
- 3000 NN trials
- Best thermal 40°C vs 20°C: 5.99 nm RMS
- Best thermal 60°C vs 20°C: 14.02 nm RMS

## Troubleshooting

### NN Predictions Don't Match FEA

- Increase initial FEA samples
- Add more hidden layers
- Check for outliers in training data
- Ensure proper normalization

### Optimization Not Converging

- Increase NN trials per iteration
- Check objective function implementation
- Verify design variable bounds
- Consider adding constraints

### Memory Issues

- Reduce `nn_trials_per_iteration`
- Use batch processing for large datasets
- Clear trial cache periodically

## Related Documentation

- [Protocol 11: Multi-Objective NSGA-II](./PROTOCOL_11_MULTI_OBJECTIVE.md)
- [Protocol 12: Hybrid FEA/NN](./PROTOCOL_12_HYBRID.md)
- [Neural Surrogate Training](../07_DEVELOPMENT/NEURAL_SURROGATE.md)
- [Zernike Extractor](./ZERNIKE_EXTRACTOR.md)

@@ -1,403 +0,0 @@
# Zernike Coefficient Extractor

## Overview

The Zernike extractor module provides complete wavefront error (WFE) analysis for telescope mirror optimization. It extracts Zernike polynomial coefficients from FEA displacement results and computes RMS metrics used as optimization objectives.

**Location**: `optimization_engine/extractors/extract_zernike.py`

---

## Mathematical Background

### What are Zernike Polynomials?

Zernike polynomials are a set of orthogonal functions defined on the unit disk. They are the standard basis for describing optical aberrations because:

1. **Orthogonality**: Each mode is independent (no cross-talk)
2. **Physical meaning**: Each mode corresponds to a recognizable aberration
3. **RMS property**: Total RMS² equals the sum of the squared coefficients

### Noll Indexing Convention

We use the Noll indexing scheme (standard in optics):

| Noll j | n | m | Name | Physical Meaning |
|--------|---|----|---------------------|------------------|
| 1 | 0 | 0 | Piston | Constant offset (ignored) |
| 2 | 1 | 1 | Tilt Y | Pointing error - correctable |
| 3 | 1 | -1 | Tilt X | Pointing error - correctable |
| 4 | 2 | 0 | Defocus | Focus error - correctable |
| 5 | 2 | -2 | Astigmatism 45° | 3rd order aberration |
| 6 | 2 | 2 | Astigmatism 0° | 3rd order aberration |
| 7 | 3 | -1 | Coma X | 3rd order aberration |
| 8 | 3 | 1 | Coma Y | 3rd order aberration |
| 9 | 3 | -3 | Trefoil X | Triangular aberration |
| 10 | 3 | 3 | Trefoil Y | Triangular aberration |
| 11 | 4 | 0 | Primary Spherical | 4th order spherical |
| 12-50 | ... | ... | Higher orders | Higher-order aberrations |

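The Noll ordering in the table can be generated programmatically. A short sketch of the standard j → (n, m) conversion, consistent with the parity rule the table follows (even j pairs with cosine/non-negative m, odd j with sine/negative m):

```python
def noll_to_nm(j):
    """Convert a 1-based Noll index j to the (n, m) Zernike orders."""
    n = 0
    j1 = j - 1
    while j1 > n:          # walk down the rows of the Zernike pyramid
        n += 1
        j1 -= n
    # even j -> cosine term (m >= 0), odd j -> sine term (m < 0)
    m = (-1) ** j * ((n % 2) + 2 * ((j1 + ((n + 1) % 2)) // 2))
    return n, m
```

For example, `noll_to_nm(4)` gives `(2, 0)` (defocus) and `noll_to_nm(11)` gives `(4, 0)` (primary spherical), matching the table above.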
### Zernike Polynomial Formula

Each Zernike polynomial Z_j(r, θ) is computed as:

```
Z_j(r, θ) = R_n^m(r) × { cos(m·θ)    if m ≥ 0
                       { sin(|m|·θ)  if m < 0

where R_n^m(r) = Σ(s=0 to (n-|m|)/2) [(-1)^s × (n-s)! / (s! × ((n+|m|)/2-s)! × ((n-|m|)/2-s)!)] × r^(n-2s)
```

### Wavefront Error Conversion

FEA gives surface displacement in mm. We convert to wavefront error in nm:

```
WFE = 2 × displacement × 10⁶  [nm]
      ↑                ↑
 optical reflection  mm → nm
```

The factor of 2 accounts for the optical path difference when light reflects off a surface.

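The radial sum R_n^m(r) translates directly into code. A self-contained sketch (not the module's implementation) that evaluates the formula above term by term:

```python
from math import factorial

def zernike_radial(n, m, r):
    """Radial polynomial R_n^m(r) evaluated from the explicit sum."""
    m = abs(m)
    if (n - m) % 2:      # R_n^m vanishes when n - |m| is odd
        return 0.0
    return sum(
        (-1) ** s
        * factorial(n - s)
        / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
        * r ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )
```

Sanity checks against known modes: defocus `R_2^0(r) = 2r² - 1`, so the value is -1 at the center and +1 at the edge; primary spherical `R_4^0(r) = 6r⁴ - 6r² + 1` is also +1 at the edge.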
---

## Module Structure

### Files

| File | Purpose |
|------|---------|
| `extract_zernike.py` | Core extraction: Zernike fitting, RMS computation, OP2 parsing |
| `zernike_helpers.py` | High-level helpers for optimization integration |
| `extract_zernike_surface.py` | Surface-based extraction (alternative method) |

### Key Classes

#### `ZernikeExtractor`

Main class for Zernike analysis:

```python
from optimization_engine.extractors import ZernikeExtractor

extractor = ZernikeExtractor(
    op2_path="results/model-solution_1.op2",
    bdf_path="results/model.dat",  # Optional, auto-detected
    displacement_unit="mm",        # Unit in OP2 file
    n_modes=50,                    # Number of Zernike modes
    filter_orders=4                # Modes to filter (J1-J4)
)

# Extract single subcase
result = extractor.extract_subcase("20")
print(f"Filtered RMS: {result['filtered_rms_nm']:.2f} nm")

# Extract relative metrics (target vs reference)
relative = extractor.extract_relative(
    target_subcase="40",
    reference_subcase="20"
)
print(f"Relative RMS (40 vs 20): {relative['relative_filtered_rms_nm']:.2f} nm")

# Extract all subcases
all_results = extractor.extract_all_subcases(reference_subcase="20")
```

---

## RMS Metrics Explained

### Global RMS

Raw RMS of the entire wavefront error surface:

```
global_rms = sqrt(mean(WFE²))
```

### Filtered RMS (J1-J4 removed)

RMS after removing correctable aberrations (piston, tip, tilt, defocus):

```python
# Subtract low-order contribution
WFE_filtered = WFE - Σ(j=1 to 4) c_j × Z_j(r, θ)
filtered_rms = sqrt(mean(WFE_filtered²))
```

**This is typically the primary optimization objective** because:

- Piston (J1): Doesn't affect imaging
- Tip/Tilt (J2-J3): Corrected by telescope pointing
- Defocus (J4): Corrected by focus mechanism

### Optician Workload (J1-J3 removed)

RMS for manufacturing assessment - keeps defocus because it requires material removal:

```python
# Subtract only piston and tilt
WFE_j1to3 = WFE - Σ(j=1 to 3) c_j × Z_j(r, θ)
rms_filter_j1to3 = sqrt(mean(WFE_j1to3²))
```

### Relative RMS Between Subcases

Measures gravity-induced deformation relative to a reference orientation:

```python
# Compute difference surface
ΔWFE = WFE_target - WFE_reference

# Fit Zernike to difference
Δc = zernike_fit(ΔWFE)

# Filter and compute RMS
relative_filtered_rms = sqrt(Σ(j=5 to 50) Δc_j²)
```

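Once the coefficient difference Δc is available, the filtered relative RMS is a one-liner thanks to orthogonality (RMS² = sum of squared coefficients). A minimal coefficient-space sketch, assuming coefficient lists ordered by Noll index starting at J1:

```python
import math

def relative_filtered_rms(c_target, c_reference, filter_orders=4):
    """RMS of the Zernike coefficient difference with J1..J{filter_orders} removed."""
    dc = [t - r for t, r in zip(c_target, c_reference)]
    # Drop the first `filter_orders` modes (piston, tip, tilt, defocus by default)
    return math.sqrt(sum(c * c for c in dc[filter_orders:]))
```

With a difference vector whose only surviving modes are Δc₅ = 3 and Δc₆ = 4 nm, the result is 5 nm, the 3-4-5 triangle in coefficient space.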
---

## Coefficient-Based vs Surface-Based RMS

Due to Zernike orthogonality, these two methods are mathematically equivalent:

### Method 1: Coefficient-Based (Fast)

```python
# From coefficients directly
filtered_rms = sqrt(Σ(j=5 to 50) c_j²)
```

### Method 2: Surface-Based (More accurate for irregular meshes)

```python
# Reconstruct and subtract low-order surface
WFE_low = Σ(j=1 to 4) c_j × Z_j(r, θ)
WFE_filtered = WFE - WFE_low
filtered_rms = sqrt(mean(WFE_filtered²))
```

The module uses the surface-based method for maximum accuracy with FEA meshes.

---

## Usage in Optimization

### Simple: Single Objective

```python
from optimization_engine.extractors import extract_zernike_filtered_rms

def objective(trial):
    # ... run simulation ...

    rms = extract_zernike_filtered_rms(
        op2_file=sim_dir / "model-solution_1.op2",
        subcase="20"
    )
    return rms
```

### Multi-Subcase: Weighted Sum

```python
from optimization_engine.extractors import ZernikeExtractor

def objective(trial):
    # ... run simulation ...

    extractor = ZernikeExtractor(op2_path)

    # Extract relative metrics
    rel_40_20 = extractor.extract_relative("3", "2")['relative_filtered_rms_nm']
    rel_60_20 = extractor.extract_relative("4", "2")['relative_filtered_rms_nm']
    mfg_90 = extractor.extract_relative("1", "2")['relative_rms_filter_j1to3']

    # Weighted objective
    weighted = (
        5.0 * (rel_40_20 / 4.0) +   # Target: 4 nm
        5.0 * (rel_60_20 / 10.0) +  # Target: 10 nm
        1.0 * (mfg_90 / 20.0)       # Target: 20 nm
    ) / 11.0

    return weighted
```

### Using Helper Classes

```python
from optimization_engine.extractors.zernike_helpers import ZernikeObjectiveBuilder

builder = ZernikeObjectiveBuilder(
    op2_finder=lambda: sim_dir / "model-solution_1.op2"
)

builder.add_relative_objective("3", "2", weight=5.0)  # 40 vs 20
builder.add_relative_objective("4", "2", weight=5.0)  # 60 vs 20
builder.add_relative_objective("1", "2",
                               metric="relative_rms_filter_j1to3",
                               weight=1.0)  # 90 vs 20

objective = builder.build_weighted_sum()
value = objective()  # Returns combined metric
```

---

## Output Dictionary Reference

### `extract_subcase()` Returns:

| Key | Type | Description |
|-----|------|-------------|
| `subcase` | str | Subcase identifier |
| `global_rms_nm` | float | Global RMS WFE (nm) |
| `filtered_rms_nm` | float | Filtered RMS (J1-J4 removed) |
| `rms_filter_j1to3` | float | J1-J3 filtered RMS (keeps defocus) |
| `n_nodes` | int | Number of nodes analyzed |
| `defocus_nm` | float | Defocus magnitude (J4) |
| `astigmatism_rms_nm` | float | Combined astigmatism (J5+J6) |
| `coma_rms_nm` | float | Combined coma (J7+J8) |
| `trefoil_rms_nm` | float | Combined trefoil (J9+J10) |
| `spherical_nm` | float | Primary spherical (J11) |

### `extract_relative()` Returns:

| Key | Type | Description |
|-----|------|-------------|
| `target_subcase` | str | Target subcase |
| `reference_subcase` | str | Reference subcase |
| `relative_global_rms_nm` | float | Global RMS of difference |
| `relative_filtered_rms_nm` | float | Filtered RMS of difference |
| `relative_rms_filter_j1to3` | float | J1-J3 filtered RMS of difference |
| `relative_defocus_nm` | float | Defocus change |
| `relative_astigmatism_rms_nm` | float | Astigmatism change |
| ... | ... | (all aberrations with `relative_` prefix) |

---

## Subcase Mapping

NX Nastran subcases map to gravity orientations:

| Subcase ID | Elevation | Purpose |
|------------|-----------|---------|
| 1 | 90° (zenith) | Polishing/manufacturing orientation |
| 2 | 20° | Reference (low elevation) |
| 3 | 40° | Mid-range tracking |
| 4 | 60° | High-range tracking |

The **20° orientation is typically used as reference** because:

- It represents typical low-elevation observing
- Polishing is done at 90°, so we measure change from a tracking position

---

## Saving Zernike Coefficients for Surrogate Training

For neural network training, save all 200 coefficients (50 modes × 4 subcases). Extract each subcase once, then build one row per Noll index:

```python
import pandas as pd
from optimization_engine.extractors import ZernikeExtractor

extractor = ZernikeExtractor(op2_path)

# Extract coefficients once per subcase
subcases = [('1', '90deg'), ('2', '20deg'), ('3', '40deg'), ('4', '60deg')]
coeffs = {label: extractor.extract_subcase(sc, include_coefficients=True)['coefficients']
          for sc, label in subcases}

rows = []
for j in range(1, 51):
    row = {'noll_index': j}
    for _, label in subcases:
        row[f'{label}_nm'] = coeffs[label][j - 1]
    rows.append(row)

df = pd.DataFrame(rows)
df.to_csv(f"zernike_coefficients_trial_{trial_num}.csv", index=False)
```

### CSV Format

| noll_index | 90deg_nm | 20deg_nm | 40deg_nm | 60deg_nm |
|------------|----------|----------|----------|----------|
| 1 | 0.05 | 0.03 | 0.04 | 0.04 |
| 2 | -1.23 | -0.98 | -1.05 | -1.12 |
| ... | ... | ... | ... | ... |
| 50 | 0.02 | 0.01 | 0.02 | 0.02 |

**Note**: These are ABSOLUTE coefficients in nm, not relative RMS values.

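For surrogate training, each trial's CSV is typically flattened back into one target vector (modes × subcases). A small sketch, using an inline two-row sample in the format above rather than a real file; the `load_targets` helper and column order are assumptions, not the project's loader:

```python
import csv
import io

csv_text = """noll_index,90deg_nm,20deg_nm,40deg_nm,60deg_nm
1,0.05,0.03,0.04,0.04
2,-1.23,-0.98,-1.05,-1.12
"""

def load_targets(text):
    """Flatten the per-trial coefficient CSV into one target vector."""
    rows = csv.DictReader(io.StringIO(text))
    cols = ("90deg_nm", "20deg_nm", "40deg_nm", "60deg_nm")
    return [float(r[c]) for r in rows for c in cols]

targets = load_targets(csv_text)
```

For the full 50-mode file this yields the 200-element vector mentioned above.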
---

## Error Handling

### Common Issues

1. **"No displacement data found in OP2"**
   - Check that the solve completed successfully
   - Verify the OP2 file isn't corrupted or incomplete

2. **"Subcase 'X' not found"**
   - List available subcases: `print(extractor.displacements.keys())`
   - Check subcase numbering in the NX simulation setup

3. **"No valid points inside unit disk"**
   - Mirror surface nodes may not be properly identified
   - Check BDF node coordinates

4. **pyNastran version warning**
   - `nx version='2506.5' is not supported` - this is just a warning; extraction still works

---

## Dependencies

```python
# Required
pyNastran >= 1.3.4  # OP2/BDF parsing
numpy >= 1.20       # Numerical computations

# Optional (for visualization)
matplotlib          # Plotting Zernike surfaces
```

---

## References

1. **Noll, R. J. (1976)**. "Zernike polynomials and atmospheric turbulence."
   *Journal of the Optical Society of America*, 66(3), 207-211.

2. **Born, M. & Wolf, E. (1999)**. *Principles of Optics* (7th ed.).
   Cambridge University Press. Chapter 9: Aberrations.

3. **Wyant, J. C. & Creath, K. (1992)**. "Basic Wavefront Aberration Theory
   for Optical Metrology." *Applied Optics and Optical Engineering*, Vol. XI.

---

## Module Exports

```python
from optimization_engine.extractors import (
    # Main class
    ZernikeExtractor,

    # Convenience functions
    extract_zernike_from_op2,
    extract_zernike_filtered_rms,
    extract_zernike_relative_rms,

    # Helpers for optimization
    create_zernike_objective,
    create_relative_zernike_objective,
    ZernikeObjectiveBuilder,

    # Low-level utilities
    compute_zernike_coefficients,
    compute_rms_metrics,
    noll_indices,
    zernike_noll,
    zernike_name,
)
```

@@ -1,356 +0,0 @@
# Zernike Mirror Optimization Protocol

## Overview

This document captures the learnings from the M1 mirror Zernike optimization studies (V1-V9), including the Assembly FEM (AFEM) workflow, subcase handling, and wavefront error metrics.

## Assembly FEM (AFEM) Structure

### NX File Organization

A typical telescope mirror assembly in NX consists of:

```
ASSY_M1.prt                         # Master assembly part
ASSY_M1_assyfem1.afm                # Assembly FEM container
ASSY_M1_assyfem1_sim1.sim           # Simulation file (this is what we solve)
M1_Blank.prt                        # Mirror blank part
M1_Blank_fem1.fem                   # Mirror blank mesh
M1_Blank_fem1_i.prt                 # Idealized geometry for FEM
M1_Vertical_Support_Skeleton.prt    # Support structure part
M1_Vertical_Support_Skeleton_fem1.fem
M1_Vertical_Support_Skeleton_fem1_i.prt
```

### Key Relationships

1. **Assembly Part (.prt)** - Contains the CAD geometry and expressions (design parameters)
2. **Assembly FEM (.afm)** - Links component FEMs together, defines connections
3. **Simulation (.sim)** - Contains solutions, loads, boundary conditions, subcases
4. **Component FEMs (.fem)** - Individual meshes that get assembled

### Expression Propagation

Expressions defined in the master `.prt` propagate through the assembly:

- Modify expression in `ASSY_M1.prt`
- AFEM updates mesh connections automatically
- Solve via `.sim` file

## Multi-Subcase Analysis

### Telescope Gravity Orientations

For telescope mirrors, we analyze multiple gravity orientations (subcases):

| Subcase | Elevation Angle | Purpose |
|---------|-----------------|---------|
| 1 | 90 deg (zenith) | Polishing orientation - manufacturing reference |
| 2 | 20 deg | Low elevation - reference for relative metrics |
| 3 | 40 deg | Mid-low elevation |
| 4 | 60 deg | Mid-high elevation |

### Subcase Mapping

**Important**: NX subcase numbers don't always match angle labels!

```json
"subcase_labels": {
    "1": "90deg",   // Subcase 1 = 90 degrees
    "2": "20deg",   // Subcase 2 = 20 degrees (reference)
    "3": "40deg",   // Subcase 3 = 40 degrees
    "4": "60deg"    // Subcase 4 = 60 degrees
}
```

Always verify the subcase-to-angle mapping by checking the NX simulation setup.

## Zernike Wavefront Error Analysis

### Optical Convention

For mirror surface deformation to wavefront error:

```
WFE = 2 * surface_displacement   (reflection doubles the path difference)
```

Unit conversion:

```python
NM_PER_MM = 1e6  # 1 mm displacement = 1e6 nm WFE contribution
wfe_nm = 2.0 * displacement_mm * 1e6
```

### Zernike Polynomial Indexing

We use **Noll indexing** (standard in optics):

| J | Name | (n,m) | Correctable? |
|---|------|-------|--------------|
| 1 | Piston | (0,0) | Yes - alignment |
| 2 | Tilt X | (1,-1) | Yes - alignment |
| 3 | Tilt Y | (1,1) | Yes - alignment |
| 4 | Defocus | (2,0) | Yes - focus adjustment |
| 5 | Astigmatism 45 | (2,-2) | Partially |
| 6 | Astigmatism 0 | (2,2) | Partially |
| 7 | Coma X | (3,-1) | No |
| 8 | Coma Y | (3,1) | No |
| 9 | Trefoil X | (3,-3) | No |
| 10 | Trefoil Y | (3,3) | No |
| 11 | Spherical | (4,0) | No |

### RMS Metrics

| Metric | Filter | Use Case |
|--------|--------|----------|
| `global_rms_nm` | None | Total surface error |
| `filtered_rms_nm` | J1-J4 removed | Uncorrectable error (optimization target) |
| `rms_filter_j1to3` | J1-J3 removed | Optician workload (keeps defocus) |

### Relative Metrics

For gravity-induced deformation, we compute relative WFE:

```
WFE_relative = WFE_target_orientation - WFE_reference_orientation
```

This removes the static (manufacturing) shape and isolates gravity effects.

Example: `rel_filtered_rms_40_vs_20` = filtered RMS at 40 deg relative to the 20 deg reference

## Optimization Objectives

### Typical M1 Mirror Objectives

```json
"objectives": [
    {
      "name": "rel_filtered_rms_40_vs_20",
      "description": "Gravity-induced WFE at 40 deg vs 20 deg reference",
      "direction": "minimize",
      "weight": 5.0,
      "target": 4.0,
      "units": "nm"
    },
    {
      "name": "rel_filtered_rms_60_vs_20",
      "description": "Gravity-induced WFE at 60 deg vs 20 deg reference",
      "direction": "minimize",
      "weight": 5.0,
      "target": 10.0,
      "units": "nm"
    },
    {
      "name": "mfg_90_optician_workload",
      "description": "Polishing effort at zenith (J1-J3 filtered)",
      "direction": "minimize",
      "weight": 1.0,
      "target": 20.0,
      "units": "nm"
    }
]
```

### Weighted Sum Formulation

```python
weighted_objective = sum(weight_i * (value_i / target_i)) / sum(weight_i)
```

Targets normalize the different metrics to comparable scales.

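The formulation above can be written as a small standalone helper; this is a sketch of the arithmetic, not the optimizer's own code:

```python
def weighted_objective(values, weights, targets):
    """Weighted sum of target-normalized objective values."""
    assert len(values) == len(weights) == len(targets)
    num = sum(w * (v / t) for v, w, t in zip(values, weights, targets))
    return num / sum(weights)

# With the M1 weights (5, 5, 1) and targets (4, 10, 20) nm from the config above,
# a trial that exactly hits every target scores 1.0; below 1.0 means better than target.
score = weighted_objective([4.0, 10.0, 20.0], [5.0, 5.0, 1.0], [4.0, 10.0, 20.0])
```

The score of 1.0 at the targets is what makes the threshold easy to read off a convergence plot.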
## Design Variables

### Typical Mirror Support Parameters

| Parameter | Description | Typical Range |
|-----------|-------------|---------------|
| `whiffle_min` | Whiffle tree minimum dimension | 35-55 mm |
| `whiffle_outer_to_vertical` | Whiffle arm angle | 68-80 deg |
| `whiffle_triangle_closeness` | Triangle geometry | 50-65 mm |
| `inner_circular_rib_dia` | Rib diameter | 480-620 mm |
| `lateral_inner_angle` | Lateral support angle | 25-28.5 deg |
| `blank_backface_angle` | Mirror blank geometry | 3.5-5.0 deg |

### Expression File Format (params.exp)

```
[mm]whiffle_min=42.49
[Degrees]whiffle_outer_to_vertical=79.41
[mm]inner_circular_rib_dia=582.48
```

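The `[unit]name=value` format is simple enough to round-trip with a regex. A minimal parser sketch (the `parse_exp` helper is illustrative, not part of the engine):

```python
import re

EXP_LINE = re.compile(r"^\[(?P<unit>[^\]]+)\](?P<name>\w+)=(?P<value>[-+0-9.eE]+)$")

def parse_exp(text):
    """Parse NX expression-file lines of the form '[unit]name=value'."""
    params = {}
    for line in text.splitlines():
        m = EXP_LINE.match(line.strip())
        if m:
            params[m["name"]] = (float(m["value"]), m["unit"])
    return params

exp_text = "[mm]whiffle_min=42.49\n[Degrees]whiffle_outer_to_vertical=79.41"
params = parse_exp(exp_text)
```

Writing a trial's `params.exp` is the inverse: format each design variable back as `[unit]name=value`, one per line.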
## Iteration Folder Structure (V9)

```
study_name/
├── 1_setup/
│   ├── model/                  # Master NX files (NEVER modify)
│   └── optimization_config.json
├── 2_iterations/
│   ├── iter0/                  # Trial 0 (0-based to match Optuna)
│   │   ├── [all NX files]      # Fresh copy from master
│   │   ├── params.exp          # Expression updates for this trial
│   │   └── results/            # Processed outputs
│   ├── iter1/
│   └── ...
└── 3_results/
    └── study.db                # Optuna database
```

### Why 0-Based Iteration Folders?

Optuna uses 0-based trial numbers. Using `iter{trial.number}` ensures:

- Dashboard shows Trial 0 -> corresponds to folder iter0
- No confusion when cross-referencing results
- Consistent indexing throughout the system

## Lessons Learned

### 1. TPE Sampler Seed Issue

**Problem**: When resuming a study, re-initializing TPESampler with a fixed seed causes the sampler to restart its random sequence, generating duplicate parameters.

**Solution**: Only set the seed for NEW studies:

```python
if is_new_study:
    sampler = TPESampler(seed=42, ...)
else:
    sampler = TPESampler(...)  # No seed for resume
```

### 2. Code Reuse Protocol

**Problem**: Embedding 500+ lines of Zernike code in `run_optimization.py` violates the DRY principle.

**Solution**: Use centralized extractors:

```python
from optimization_engine.extractors import ZernikeExtractor

extractor = ZernikeExtractor(op2_file)
result = extractor.extract_relative("3", "2")
rms = result['relative_filtered_rms_nm']
```
### 3. Subcase Numbering

**Problem**: NX subcase numbers (1, 2, 3, 4) don't match angle labels (20, 40, 60, 90).

**Solution**: Use explicit mapping in config and translate:

```python
subcase_labels = {"1": "90deg", "2": "20deg", "3": "40deg", "4": "60deg"}
label_to_subcase = {v: k for k, v in subcase_labels.items()}
```
### 4. OP2 Data Validation

**Problem**: Corrupt OP2 files can have all-zero or unrealistic displacement values.

**Solution**: Validate before processing:

```python
unique_values = len(np.unique(disp_z))
if unique_values < 10:
    raise RuntimeError("CORRUPT OP2: insufficient unique values")

if np.abs(disp_z).max() > 1e6:
    raise RuntimeError("CORRUPT OP2: unrealistic displacement magnitude")
```
### 5. Reference Subcase for Relative Metrics

**Problem**: Which orientation to use as reference?

**Solution**: Use the lowest operational elevation (typically 20 deg) as reference. This makes higher elevations show positive relative WFE as gravity effects increase.
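The relative metric amounts to subtracting the reference orientation's Zernike coefficients term by term, dropping the filtered low-order modes, and taking the RMS. A minimal sketch with made-up coefficients (the real computation lives inside `ZernikeExtractor`; the function name and toy data here are illustrative):

```python
import math

def relative_filtered_rms(target_coeffs, reference_coeffs, filter_orders=4):
    # Coefficient-wise difference between target and reference subcase
    diff = [t - r for t, r in zip(target_coeffs, reference_coeffs)]
    # Drop the first `filter_orders` low-order modes, then RMS the rest
    kept = diff[filter_orders:]
    return math.sqrt(sum(c * c for c in kept) / len(kept))

target = [5.0, 2.0, 1.0, 0.5, 3.0, 4.0]      # e.g. 40 deg subcase
reference = [5.0, 2.0, 1.0, 0.5, 0.0, 0.0]   # e.g. 20 deg reference
print(relative_filtered_rms(target, reference))  # sqrt((3**2 + 4**2) / 2)
```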
## ZernikeExtractor API Reference

### Basic Usage

```python
from optimization_engine.extractors import ZernikeExtractor

# Create extractor
extractor = ZernikeExtractor(
    op2_path="path/to/results.op2",
    bdf_path=None,            # Auto-detect from same folder
    displacement_unit="mm",
    n_modes=50,
    filter_orders=4
)

# Single subcase
result = extractor.extract_subcase("2")
# Returns: global_rms_nm, filtered_rms_nm, rms_filter_j1to3, aberrations...

# Relative between subcases
rel = extractor.extract_relative(target_subcase="3", reference_subcase="2")
# Returns: relative_filtered_rms_nm, relative_rms_filter_j1to3, ...

# All subcases with relative metrics
all_results = extractor.extract_all_subcases(reference_subcase="2")
```
### Available Metrics

| Method | Returns |
|--------|---------|
| `extract_subcase()` | global_rms_nm, filtered_rms_nm, rms_filter_j1to3, defocus_nm, astigmatism_rms_nm, coma_rms_nm, trefoil_rms_nm, spherical_nm |
| `extract_relative()` | relative_global_rms_nm, relative_filtered_rms_nm, relative_rms_filter_j1to3, relative aberrations |
| `extract_all_subcases()` | Dict of all subcases with both absolute and relative metrics |
## Configuration Template

```json
{
  "study_name": "m1_mirror_optimization",

  "design_variables": [
    {
      "name": "whiffle_min",
      "expression_name": "whiffle_min",
      "min": 35.0,
      "max": 55.0,
      "baseline": 40.55,
      "units": "mm",
      "enabled": true
    }
  ],

  "objectives": [
    {
      "name": "rel_filtered_rms_40_vs_20",
      "extractor": "zernike_relative",
      "extractor_config": {
        "target_subcase": "3",
        "reference_subcase": "2",
        "metric": "relative_filtered_rms_nm"
      },
      "direction": "minimize",
      "weight": 5.0,
      "target": 4.0
    }
  ],

  "zernike_settings": {
    "n_modes": 50,
    "filter_low_orders": 4,
    "displacement_unit": "mm",
    "subcases": ["1", "2", "3", "4"],
    "subcase_labels": {"1": "90deg", "2": "20deg", "3": "40deg", "4": "60deg"},
    "reference_subcase": "2"
  },

  "optimization_settings": {
    "sampler": "TPE",
    "seed": 42,
    "n_startup_trials": 15
  }
}
```
## Version History

| Version | Key Changes |
|---------|-------------|
| V1-V6 | Initial development, various folder structures |
| V7 | HEEDS-style iteration folders, fresh model copies |
| V8 | Autonomous NX session management, but had embedded Zernike code |
| V9 | Clean ZernikeExtractor integration, fixed sampler seed, 0-based folders |
@@ -1,385 +0,0 @@
# Protocol 10: Intelligent Multi-Strategy Optimization (IMSO)

**Status**: Active
**Version**: 2.0 (Adaptive Two-Study Architecture)
**Last Updated**: 2025-11-20

## Overview

Protocol 10 implements intelligent, adaptive optimization that automatically:
1. Characterizes the optimization landscape
2. Selects the best optimization algorithm
3. Executes optimization with the ideal strategy

**Key Innovation**: Adaptive characterization phase that intelligently determines when enough landscape exploration has been done, then seamlessly transitions to the optimal algorithm.
## Architecture

### Two-Study Approach

Protocol 10 uses a **two-study architecture** to overcome Optuna's fixed-sampler limitation:

```
PROTOCOL 10: INTELLIGENT MULTI-STRATEGY OPTIMIZATION

PHASE 1: ADAPTIVE CHARACTERIZATION STUDY
  Sampler: Random/Sobol (unbiased exploration)
  Trials:  10-30 (adapts to problem complexity)

  Every 5 trials:
    → Analyze landscape metrics
    → Check metric convergence
    → Calculate characterization confidence
    → Decide if ready to stop

  Stop when:
    ✓ Confidence ≥ 85%
    ✓ OR max trials reached (30)

  Simple problems (smooth, unimodal):    stop at ~10-15 trials
  Complex problems (multimodal, rugged): continue to ~20-30 trials
                        │
                        ▼
TRANSITION: LANDSCAPE ANALYSIS & STRATEGY SELECTION
  Analyze final landscape:
    - Smoothness (0-1)
    - Multimodality (clusters of good solutions)
    - Parameter correlation
    - Noise level

  Classify landscape:
    → smooth_unimodal
    → smooth_multimodal
    → rugged_unimodal
    → rugged_multimodal
    → noisy

  Recommend strategy:
    smooth_unimodal    → GP-BO (best) or CMA-ES
    smooth_multimodal  → GP-BO
    rugged_multimodal  → TPE
    rugged_unimodal    → TPE or CMA-ES
    noisy              → TPE (most robust)
                        │
                        ▼
PHASE 2: OPTIMIZATION STUDY
  Sampler:    Recommended from Phase 1
  Warm Start: Initialize from best characterization point
  Trials:     User-specified (default 50)

  Optimizes efficiently using:
    - Right algorithm for the landscape
    - Knowledge from characterization phase
    - Focused exploitation around promising regions
```
## Core Components

### 1. Adaptive Characterization (`adaptive_characterization.py`)

**Purpose**: Intelligently determine when enough landscape exploration has been done.

**Key Features**:
- Progressive landscape analysis (every 5 trials starting at trial 10)
- Metric convergence detection
- Complexity-aware sample adequacy
- Parameter space coverage assessment
- Confidence scoring (combines all factors)

**Confidence Calculation** (weighted sum):

```python
confidence = (
    0.40 * metric_stability_score +    # Are metrics converging?
    0.30 * parameter_coverage_score +  # Explored enough space?
    0.20 * sample_adequacy_score +     # Enough samples for complexity?
    0.10 * landscape_clarity_score     # Clear classification?
)
```

**Stopping Criteria**:
- **Minimum trials**: 10 (always gather baseline data)
- **Maximum trials**: 30 (prevent over-characterization)
- **Confidence threshold**: 85% (high confidence in landscape understanding)
- **Check interval**: Every 5 trials

**Adaptive Behavior**:

```python
# Simple problem (smooth, unimodal, low noise):
if smoothness > 0.6 and unimodal and noise < 0.3:
    required_samples = 10 + dimensionality
    # Stops at ~10-15 trials

# Complex problem (multimodal with N modes):
if multimodal and n_modes > 2:
    required_samples = 10 + 5 * n_modes + 2 * dimensionality
    # Continues to ~20-30 trials
```
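The stopping criteria listed above can be combined into a single check. A minimal sketch (the function name and exact short-circuit order are assumptions mirroring the stated thresholds, not the actual `adaptive_characterization.py` code):

```python
def should_stop_characterization(n_trials, confidence,
                                 min_trials=10, max_trials=30,
                                 confidence_threshold=0.85, check_interval=5):
    # Between check points (or before the baseline), only the hard cap applies
    if n_trials < min_trials or n_trials % check_interval != 0:
        return n_trials >= max_trials
    # At a check point: stop on confidence, or at the hard cap
    return confidence >= confidence_threshold or n_trials >= max_trials

print(should_stop_characterization(10, 0.72))  # False: not confident yet
print(should_stop_characterization(15, 0.87))  # True: threshold met
print(should_stop_characterization(30, 0.60))  # True: max trials reached
```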
### 2. Landscape Analyzer (`landscape_analyzer.py`)

**Purpose**: Characterize the optimization landscape from trial history.

**Metrics Computed**:

1. **Smoothness** (0-1):
   - Method: Spearman correlation between parameter distance and objective difference
   - High smoothness (>0.6): Nearby points have similar objectives (good for CMA-ES, GP-BO)
   - Low smoothness (<0.4): Rugged landscape (good for TPE)

2. **Multimodality** (boolean + n_modes):
   - Method: DBSCAN clustering on good trials (bottom 30%)
   - Detects multiple distinct regions of good solutions

3. **Parameter Correlation**:
   - Method: Spearman correlation between each parameter and objective
   - Identifies which parameters strongly affect objective

4. **Noise Level** (0-1):
   - Method: Local consistency check (nearby points should give similar outputs)
   - **Important**: Wide exploration range ≠ noise
   - Only true noise (simulation instability) is detected
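The smoothness metric (item 1 above) can be illustrated as the rank correlation between pairwise parameter distance and pairwise objective difference. A self-contained sketch for a 1-D parameter, with Spearman computed by hand so nothing beyond the standard library is needed (the real analyzer presumably uses scipy and multi-dimensional distances):

```python
from itertools import combinations

def _ranks(values):
    # Average ranks, handling ties (Spearman = Pearson on these ranks)
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def smoothness(params, objectives):
    # Pairwise parameter distance vs. pairwise objective difference
    pairs = list(combinations(range(len(params)), 2))
    dists = [abs(params[i] - params[j]) for i, j in pairs]
    dobjs = [abs(objectives[i] - objectives[j]) for i, j in pairs]
    rd, ro = _ranks(dists), _ranks(dobjs)
    n = len(rd)
    mean_d, mean_o = sum(rd) / n, sum(ro) / n
    cov = sum((a - mean_d) * (b - mean_o) for a, b in zip(rd, ro))
    var_d = sum((a - mean_d) ** 2 for a in rd)
    var_o = sum((b - mean_o) ** 2 for b in ro)
    corr = cov / (var_d * var_o) ** 0.5
    return max(0.0, corr)  # clamp: negative correlation counts as rugged

xs = [0, 1, 2, 3, 4, 5]
print(smoothness(xs, [2 * x for x in xs]))  # a perfectly smooth line scores 1.0
```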
**Landscape Classification**:

```python
'smooth_unimodal'     # Single smooth bowl → GP-BO or CMA-ES
'smooth_multimodal'   # Multiple smooth regions → GP-BO
'rugged_unimodal'     # Single rugged region → TPE or CMA-ES
'rugged_multimodal'   # Multiple rugged regions → TPE
'noisy'               # High noise level → TPE (robust)
```
### 3. Strategy Selector (`strategy_selector.py`)

**Purpose**: Recommend the best optimization algorithm based on landscape.

**Algorithm Recommendations**:

| Landscape Type | Primary Strategy | Fallback | Rationale |
|----------------|------------------|----------|-----------|
| smooth_unimodal | GP-BO | CMA-ES | GP surrogate models smoothness explicitly |
| smooth_multimodal | GP-BO | TPE | GP handles multiple modes well |
| rugged_unimodal | TPE | CMA-ES | TPE robust to ruggedness |
| rugged_multimodal | TPE | - | TPE excellent for complex landscapes |
| noisy | TPE | - | TPE most robust to noise |
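The table above reduces to a simple lookup with fallbacks. A sketch (names are assumptions; the real `strategy_selector.py` decision tree also weighs trial counts and the allow-flags from the config):

```python
# (primary, fallback) per landscape type, mirroring the table above
STRATEGY_TABLE = {
    'smooth_unimodal':   ('gp_bo', 'cmaes'),
    'smooth_multimodal': ('gp_bo', 'tpe'),
    'rugged_unimodal':   ('tpe',   'cmaes'),
    'rugged_multimodal': ('tpe',   None),
    'noisy':             ('tpe',   None),
}

def select_strategy(landscape_type, allowed=('gp_bo', 'cmaes', 'tpe')):
    primary, fallback = STRATEGY_TABLE[landscape_type]
    if primary in allowed:
        return primary
    if fallback in allowed:
        return fallback
    return 'tpe'  # robust default when nothing else is permitted

print(select_strategy('smooth_unimodal'))                      # gp_bo
print(select_strategy('smooth_unimodal', allowed=('cmaes',)))  # cmaes
```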
**Algorithm Characteristics**:

**GP-BO (Gaussian Process Bayesian Optimization)**:
- ✅ Best for: Smooth, expensive functions (like FEA)
- ✅ Explicit surrogate model (Gaussian Process)
- ✅ Models smoothness + uncertainty
- ✅ Acquisition function balances exploration/exploitation
- ❌ Less effective: Highly rugged landscapes

**CMA-ES (Covariance Matrix Adaptation Evolution Strategy)**:
- ✅ Best for: Smooth unimodal problems
- ✅ Fast convergence to local optimum
- ✅ Adapts search distribution to landscape
- ❌ Can get stuck in local minima
- ❌ No explicit surrogate model

**TPE (Tree-structured Parzen Estimator)**:
- ✅ Best for: Multimodal, rugged, or noisy problems
- ✅ Robust to noise and discontinuities
- ✅ Good global exploration
- ❌ Slower convergence than GP-BO/CMA-ES on smooth problems
### 4. Intelligent Optimizer (`intelligent_optimizer.py`)

**Purpose**: Orchestrate the entire Protocol 10 workflow.

**Workflow**:

```python
1. Create characterization study (Random/Sobol sampler)
2. Run adaptive characterization with stopping criterion
3. Analyze final landscape
4. Select optimal strategy
5. Create optimization study with recommended sampler
6. Warm-start from best characterization point
7. Run optimization
8. Generate intelligence report
```
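Step 6 (warm start) can be sketched as pulling the best point out of the Phase 1 trials and queueing it as the first trial of the Phase 2 study. A minimal illustration (the trial-record shape and helper name are assumptions; with Optuna the returned dict would be passed to `study.enqueue_trial`):

```python
def best_characterization_params(char_trials):
    # char_trials: list of {'params': {...}, 'value': float} from Phase 1
    # (minimization assumed); the winner seeds the Phase 2 study
    best = min(char_trials, key=lambda t: t['value'])
    return dict(best['params'])

char_trials = [
    {'params': {'inner_diameter': 70.0, 'plate_thickness': 5.0}, 'value': 2.1},
    {'params': {'inner_diameter': 83.0, 'plate_thickness': 6.5}, 'value': 0.3},
]
print(best_characterization_params(char_trials))
# With Optuna: study.enqueue_trial(best_characterization_params(char_trials))
```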
## Usage

### Basic Usage

```python
from optimization_engine.intelligent_optimizer import IntelligentOptimizer

# Create optimizer
optimizer = IntelligentOptimizer(
    study_name="my_optimization",
    study_dir=results_dir,
    config=optimization_config,
    verbose=True
)

# Define design variables
design_vars = {
    'parameter1': (lower_bound, upper_bound),
    'parameter2': (lower_bound, upper_bound)
}

# Run Protocol 10
results = optimizer.optimize(
    objective_function=my_objective,
    design_variables=design_vars,
    n_trials=50,          # For optimization phase
    target_value=target,
    tolerance=0.1
)
```
### Configuration

Add to `optimization_config.json`:

```json
{
  "intelligent_optimization": {
    "enabled": true,
    "characterization": {
      "min_trials": 10,
      "max_trials": 30,
      "confidence_threshold": 0.85,
      "check_interval": 5
    },
    "landscape_analysis": {
      "min_trials_for_analysis": 10
    },
    "strategy_selection": {
      "allow_cmaes": true,
      "allow_gpbo": true,
      "allow_tpe": true
    }
  },
  "trials": {
    "n_trials": 50
  }
}
```
## Intelligence Report

Protocol 10 generates comprehensive reports tracking:

1. **Characterization Phase**:
   - Metric evolution (smoothness, multimodality, noise)
   - Confidence progression
   - Stopping decision details

2. **Landscape Analysis**:
   - Final landscape classification
   - Parameter correlations
   - Objective statistics

3. **Strategy Selection**:
   - Recommended algorithm
   - Decision rationale
   - Alternative strategies considered

4. **Optimization Performance**:
   - Best solution found
   - Convergence history
   - Algorithm effectiveness
## Benefits

### Efficiency
- **Simple problems**: Stops characterization early (~10-15 trials)
- **Complex problems**: Extends characterization for adequate coverage (~20-30 trials)
- **Right algorithm**: Uses optimal strategy for the landscape type

### Robustness
- **Adaptive**: Adjusts to problem complexity automatically
- **Confidence-based**: Only stops when confident in landscape understanding
- **Fallback strategies**: Handles edge cases gracefully

### Transparency
- **Detailed reports**: Explains all decisions
- **Metric tracking**: Full history of landscape analysis
- **Reproducibility**: All decisions logged to JSON
## Example: Circular Plate Frequency Tuning

**Problem**: Tune circular plate dimensions to achieve a 115 Hz first natural frequency.

**Protocol 10 Behavior**:

```
PHASE 1: CHARACTERIZATION (Trials 1-14)
  Trial 5:  Landscape = smooth_unimodal (preliminary)
  Trial 10: Landscape = smooth_unimodal (confidence 72%)
  Trial 14: Landscape = smooth_unimodal (confidence 87%)

  → CHARACTERIZATION COMPLETE
  → Confidence threshold met (87% ≥ 85%)
  → Recommended Strategy: GP-BO

PHASE 2: OPTIMIZATION (Trials 15-64)
  Sampler: GP-BO (warm-started from best characterization point)
  Trial 15: 0.325 Hz error (baseline from characterization)
  Trial 23: 0.142 Hz error
  Trial 31: 0.089 Hz error
  Trial 42: 0.047 Hz error
  Trial 56: 0.012 Hz error  ← TARGET ACHIEVED!

  → Total Trials: 56 (14 characterization + 42 optimization)
  → Best Frequency: 115.012 Hz (error 0.012 Hz)
```

**Comparison** (without Protocol 10):
- TPE alone: ~95 trials to achieve target
- Random search: ~150+ trials
- **Protocol 10: 56 trials** (41% reduction vs TPE)
## Limitations and Future Work

### Current Limitations

1. **Optuna Constraint**: Cannot change sampler mid-study (necessitates the two-study approach)
2. **GP-BO Integration**: Requires an external GP-BO library (e.g., BoTorch, scikit-optimize)
3. **Warm Start**: Not all samplers support warm-starting equally well

### Future Enhancements

1. **Multi-Fidelity**: Extend to support cheap/expensive function evaluations
2. **Constraint Handling**: Better support for constrained optimization
3. **Transfer Learning**: Use knowledge from previous similar problems
4. **Active Learning**: More sophisticated characterization sampling
## References

- Landscape Analysis: Mersmann et al., "Exploratory Landscape Analysis" (2011)
- CMA-ES: Hansen & Ostermeier, "Completely Derandomized Self-Adaptation" (2001)
- GP-BO: Snoek et al., "Practical Bayesian Optimization" (2012)
- TPE: Bergstra et al., "Algorithms for Hyper-Parameter Optimization" (2011)
## Version History

### Version 2.0 (2025-11-20)
- ✅ Added adaptive characterization with intelligent stopping
- ✅ Implemented two-study architecture (overcomes Optuna limitation)
- ✅ Fixed noise detection algorithm (local consistency instead of global CV)
- ✅ Added GP-BO as primary recommendation for smooth problems
- ✅ Comprehensive intelligence reporting

### Version 1.0 (2025-11-19)
- Initial implementation with dynamic strategy switching
- Discovered Optuna sampler limitation
- Single-study architecture (non-functional)
@@ -1,346 +0,0 @@
# Protocol 10 v2.0 - Bug Fixes

**Date**: November 20, 2025
**Version**: 2.1 (Post-Test Improvements)
**Status**: ✅ Fixed and Ready for Retesting

## Summary

After testing Protocol 10 v2.0 on the circular plate problem, we identified three issues that reduced optimization efficiency. All have been fixed.

## Test Results (Before Fixes)

**Study**: circular_plate_protocol10_v2_test
**Total trials**: 50 (40 successful, 10 pruned)
**Best result**: 0.94 Hz error (Trial #49)
**Target**: 0.1 Hz tolerance ❌ Not achieved

**Issues Found**:
1. Wrong algorithm selected (TPE instead of GP-BO)
2. False multimodality detection
3. High pruning rate (20% failures)

---
## Fix #1: Strategy Selector - Use Characterization Trial Count

### Problem

The strategy selector used the **total trial count** (including pruned trials) instead of the **characterization trial count**.

**Impact**: Characterization completed at trial #26, but optimization started at trial #35 (because trials 0-34 included 9 pruned trials). The condition `trials_completed < 30` was FALSE, so GP-BO wasn't selected.

**Wrong behavior**:

```python
# Characterization: 26 successful trials (trials 0-34 total)
# trials_completed = 35 at start of optimization
if trials_completed < 30:  # FALSE! (35 > 30)
    return 'gp_bo'         # Not reached
else:
    return 'tpe'           # Selected instead
```

### Solution

Use the characterization trial count from landscape analysis, not the total trial count:

**File**: [optimization_engine/strategy_selector.py:70-72](../optimization_engine/strategy_selector.py#L70-L72)

```python
# Use characterization trial count for strategy decisions (not total trials)
# This prevents premature algorithm selection when many trials were pruned
char_trials = landscape.get('total_trials', trials_completed)

# Decision tree for strategy selection
strategy, details = self._apply_decision_tree(
    ...
    trials_completed=char_trials  # Use characterization trials, not total
)
```

**Result**: Now correctly selects GP-BO when characterization completes at ~26 trials.

---
## Fix #2: Improve Multimodality Detection

### Problem

The landscape analyzer detected **2 modes** when the problem was actually **unimodal**.

**Evidence from test**:
- Smoothness = 0.67 (high smoothness)
- Noise = 0.15 (low noise)
- 2 modes detected → Classified as "smooth_multimodal"

**Why this happened**: The circular plate has two parameter combinations that achieve similar frequencies:
- Small diameter + thick plate (~67 mm, ~7 mm)
- Medium diameter + medium plate (~83 mm, ~6.5 mm)

But these aren't separate "modes" - they're part of a **smooth continuous manifold**.

### Solution

Add a heuristic to detect false multimodality from smooth continuous surfaces:

**File**: [optimization_engine/landscape_analyzer.py:285-292](../optimization_engine/landscape_analyzer.py#L285-L292)

```python
# IMPROVEMENT: Detect false multimodality from smooth continuous manifolds
# If only 2 modes detected with high smoothness and low noise,
# it's likely a continuous smooth surface, not true multimodality
if multimodal and n_modes == 2 and smoothness > 0.6 and noise < 0.2:
    if self.verbose:
        print(f"[LANDSCAPE] Reclassifying: 2 modes with smoothness={smoothness:.2f}, noise={noise:.2f}")
        print("[LANDSCAPE] This appears to be a smooth continuous manifold, not true multimodality")
    multimodal = False  # Override: treat as unimodal
```

**Updated call site**:

```python
# Pass n_modes to the classification function
landscape_type = self._classify_landscape(smoothness, multimodal, noise_level, n_modes)
```

**Result**: The circular plate will now be classified as "smooth_unimodal" → CMA-ES or GP-BO selected.

---
## Fix #3: Simulation Validation

### Problem

20% of trials failed with OP2 extraction errors:

```
OP2 EXTRACTION FAILED: There was a Nastran FATAL Error. Check the F06.
last table=b'EQEXIN'; post=-1 version='nx'
```

**Root cause**: Extreme parameter values causing:
- Poor mesh quality (very thin or thick plates)
- Numerical instability (extreme aspect ratios)
- Solver convergence issues

### Solution

Created a validation module to check parameters before simulation:

**New file**: [optimization_engine/simulation_validator.py](../optimization_engine/simulation_validator.py)

**Features**:
1. **Hard limits**: Reject invalid parameters (outside bounds)
2. **Soft limits**: Warn about risky parameters (may cause issues)
3. **Aspect ratio checks**: Validate diameter/thickness ratio
4. **Model-specific rules**: Different rules for different FEA models
5. **Correction suggestions**: Clamp parameters to safe ranges

**Usage example**:

```python
from optimization_engine.simulation_validator import SimulationValidator

validator = SimulationValidator(model_type='circular_plate', verbose=True)

# Before running simulation
is_valid, warnings = validator.validate(design_variables)

if not is_valid:
    print(f"Invalid parameters: {warnings}")
    raise optuna.TrialPruned()  # Skip this trial

# Optional: auto-correct risky parameters
if warnings:
    design_variables = validator.suggest_corrections(design_variables)
```

**Validation rules for circular plate**:

```python
{
    'inner_diameter': {
        'min': 50.0, 'max': 150.0,            # Hard limits
        'soft_min': 55.0, 'soft_max': 145.0,  # Recommended range
        'reason': 'Extreme diameters may cause meshing failures'
    },
    'plate_thickness': {
        'min': 2.0, 'max': 10.0,
        'soft_min': 2.5, 'soft_max': 9.5,
        'reason': 'Extreme thickness may cause poor element aspect ratios'
    },
    'aspect_ratio': {
        'min': 5.0, 'max': 50.0,  # diameter/thickness
        'reason': 'Poor aspect ratio can cause solver convergence issues'
    }
}
```

**Result**: Prevents ~15-20% of failures by rejecting extreme parameters early.

---
## Integration Example

Here's how to use all fixes together in a new study:

```python
import optuna

from optimization_engine.intelligent_optimizer import IntelligentOptimizer
from optimization_engine.simulation_validator import SimulationValidator
from optimization_engine.nx_updater import NXParameterUpdater
from optimization_engine.nx_solver import NXSolver

# Initialize
validator = SimulationValidator(model_type='circular_plate')
updater = NXParameterUpdater(prt_file)
solver = NXSolver()

def objective(trial):
    # Sample parameters
    inner_diameter = trial.suggest_float('inner_diameter', 50, 150)
    plate_thickness = trial.suggest_float('plate_thickness', 2, 10)

    params = {
        'inner_diameter': inner_diameter,
        'plate_thickness': plate_thickness
    }

    # FIX #3: Validate before simulation
    is_valid, warnings = validator.validate(params)
    if not is_valid:
        print("  Invalid parameters - skipping trial")
        raise optuna.TrialPruned()

    # Run simulation
    updater.update_expressions(params)
    result = solver.run_simulation(sim_file, solution_name="Solution_Normal_Modes")

    if not result['success']:
        raise optuna.TrialPruned()

    # Extract and return objective
    frequency = extract_first_frequency(result['op2_file'])
    return abs(frequency - target_frequency)

# Create optimizer with fixes
optimizer = IntelligentOptimizer(
    study_name="circular_plate_with_fixes",
    study_dir=results_dir,
    config={
        "intelligent_optimization": {
            "enabled": True,
            "characterization": {
                "min_trials": 10,
                "max_trials": 30,
                "confidence_threshold": 0.85,
                "check_interval": 5
            }
        }
    },
    verbose=True
)

# Run optimization
# FIX #1 & #2 applied automatically in strategy selector and landscape analyzer
results = optimizer.optimize(
    objective_function=objective,
    design_variables={'inner_diameter': (50, 150), 'plate_thickness': (2, 10)},
    n_trials=50
)
```

---
## Expected Improvements

### With All Fixes Applied:

| Metric | Before Fixes | After Fixes | Improvement |
|--------|-------------|-------------|-------------|
| Algorithm selected | TPE | GP-BO → CMA-ES | ✅ Better |
| Landscape classification | smooth_multimodal | smooth_unimodal | ✅ Correct |
| Pruning rate | 20% (10/50) | ~5% (2-3/50) | ✅ 75% reduction |
| Total successful trials | 40 | ~47-48 | ✅ +18% |
| Expected best error | 0.94 Hz | **<0.1 Hz** | ✅ Target achieved |
| Trials to convergence | 50+ | ~35-40 | ✅ 20-30% faster |

### Algorithm Performance Comparison:

**TPE** (used before fixes):
- Good for: Multimodal, robust, general-purpose
- Convergence: Slower on smooth problems
- Result: 0.94 Hz in 50 trials

**GP-BO → CMA-ES** (used after fixes):
- Good for: Smooth landscapes, sample-efficient
- Convergence: Faster local refinement
- Expected: 0.05-0.1 Hz in 35-40 trials

---
## Testing Plan
|
|
||||||
|
|
||||||
### Retest Protocol 10 v2.1:
|
|
||||||
|
|
||||||
1. **Delete old study**:
|
|
||||||
```bash
|
|
||||||
rm -rf studies/circular_plate_protocol10_v2_test
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Create new study** with same config:
|
|
||||||
```bash
|
|
||||||
python create_protocol10_v2_test_study.py
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Run optimization**:
|
|
||||||
```bash
|
|
||||||
cd studies/circular_plate_protocol10_v2_test
|
|
||||||
python run_optimization.py
|
|
||||||
```
|
|
||||||
|
|
||||||
4. **Verify fixes**:
|
|
||||||
- Check `intelligence_report.json`: Should recommend GP-BO, not TPE
|
|
||||||
- Check `characterization_progress.json`: Should show "smooth_unimodal" reclassification
|
|
||||||
- Check pruned trial count: Should be ≤3 (down from 10)
|
|
||||||
- Check final result: Should achieve <0.1 Hz error
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Files Modified
|
|
||||||
|
|
||||||
1. ✅ [optimization_engine/strategy_selector.py](../optimization_engine/strategy_selector.py#L70-L82)
|
|
||||||
- Fixed: Use characterization trial count for decisions
|
|
||||||
|
|
||||||
2. ✅ [optimization_engine/landscape_analyzer.py](../optimization_engine/landscape_analyzer.py#L77)
|
|
||||||
- Fixed: Pass n_modes to `_classify_landscape()`
|
|
||||||
|
|
||||||
3. ✅ [optimization_engine/landscape_analyzer.py](../optimization_engine/landscape_analyzer.py#L285-L292)
|
|
||||||
- Fixed: Detect false multimodality from smooth manifolds
|
|
||||||
|
|
||||||
4. ✅ [optimization_engine/simulation_validator.py](../optimization_engine/simulation_validator.py) (NEW)
|
|
||||||
- Added: Parameter validation before simulations
|
|
||||||
|
|
||||||
5. ✅ [docs/PROTOCOL_10_V2_FIXES.md](PROTOCOL_10_V2_FIXES.md) (NEW - this file)
|
|
||||||
- Added: Complete documentation of fixes
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Version History
|
|
||||||
|
|
||||||
### Version 2.1 (2025-11-20)
|
|
||||||
- Fixed strategy selector timing logic
|
|
||||||
- Improved multimodality detection
|
|
||||||
- Added simulation parameter validation
|
|
||||||
- Reduced pruning rate from 20% → ~5%
|
|
||||||
|
|
||||||
### Version 2.0 (2025-11-20)
|
|
||||||
- Adaptive characterization implemented
|
|
||||||
- Two-study architecture
|
|
||||||
- GP-BO/CMA-ES/TPE support
|
|
||||||
|
|
||||||
### Version 1.0 (2025-11-17)
|
|
||||||
- Initial Protocol 10 implementation
|
|
||||||
- Fixed characterization trials (15)
|
|
||||||
- Basic strategy selection
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**Status**: ✅ All fixes implemented and ready for retesting
|
|
||||||
**Next step**: Run retest to validate improvements
|
|
||||||
**Expected outcome**: Achieve 0.1 Hz tolerance in ~35-40 trials
|
|
||||||
@@ -1,359 +0,0 @@
# Protocol 10 v2.0 Implementation Summary

**Date**: November 20, 2025
**Version**: 2.0 - Adaptive Two-Study Architecture
**Status**: ✅ Complete and Ready for Testing

## What Was Implemented

### 1. Adaptive Characterization Module

**File**: [`optimization_engine/adaptive_characterization.py`](../optimization_engine/adaptive_characterization.py)

**Purpose**: Intelligently determines when enough landscape exploration has been done during the characterization phase.

**Key Features**:
- Progressive landscape analysis (every 5 trials starting at trial 10)
- Metric convergence detection (smoothness, multimodality, noise stability)
- Complexity-aware sample adequacy (simple problems need fewer trials)
- Parameter space coverage assessment
- Confidence scoring (weighted combination of all factors)

**Adaptive Behavior**:
```python
# Simple problem (smooth, unimodal):
required_samples = 10 + dimensionality
# Stops at ~10-15 trials

# Complex problem (multimodal with N modes):
required_samples = 10 + 5 * n_modes + 2 * dimensionality
# Continues to ~20-30 trials
```
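
The two budget formulas above can be folded into one small helper. A minimal sketch; the function name is illustrative, not the actual `adaptive_characterization` API:

```python
# Sketch of the sample-adequacy rule described above (assumed name).
def required_characterization_samples(dimensionality: int,
                                      multimodal: bool = False,
                                      n_modes: int = 1) -> int:
    if multimodal and n_modes > 2:
        # Complex landscape: budget grows with mode count and dimensionality
        return 10 + 5 * n_modes + 2 * dimensionality
    # Simple smooth/unimodal landscape: baseline plus one sample per dimension
    return 10 + dimensionality
```

For example, a 2-parameter smooth problem needs 12 characterization samples, while a 2-parameter landscape with 4 modes needs 34.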

**Confidence Calculation**:
```python
confidence = (
    0.40 * metric_stability_score +    # Are metrics converging?
    0.30 * parameter_coverage_score +  # Explored enough space?
    0.20 * sample_adequacy_score +     # Enough samples for complexity?
    0.10 * landscape_clarity_score     # Clear classification?
)
```
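
As a sanity check, the weighted sum can be computed with a tiny helper. The dictionary keys are assumed here, and component scores are taken to be normalized to [0, 1]:

```python
# Illustrative sketch of the confidence score above; weight keys are assumed.
WEIGHTS = {
    "metric_stability": 0.40,    # Are metrics converging?
    "parameter_coverage": 0.30,  # Explored enough space?
    "sample_adequacy": 0.20,     # Enough samples for complexity?
    "landscape_clarity": 0.10,   # Clear classification?
}

def characterization_confidence(scores: dict) -> float:
    """Weighted combination of [0, 1] component scores; missing scores count as 0."""
    return sum(weight * scores.get(key, 0.0) for key, weight in WEIGHTS.items())
```

Characterization stops once this value reaches the 85% threshold.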

**Stopping Criteria**:
- **Minimum trials**: 10 (always gather baseline data)
- **Maximum trials**: 30 (prevent over-characterization)
- **Confidence threshold**: 85% (high confidence required)
- **Check interval**: Every 5 trials

### 2. Updated Intelligent Optimizer

**File**: [`optimization_engine/intelligent_optimizer.py`](../optimization_engine/intelligent_optimizer.py)

**Changes**:
- Integrated `CharacterizationStoppingCriterion` into the optimization workflow
- Replaced fixed characterization trials with an adaptive loop
- Added characterization summary reporting

**New Workflow**:
```python
# Stage 1: Adaptive Characterization
stopping_criterion = CharacterizationStoppingCriterion(...)

while not stopping_criterion.should_stop(study):
    study.optimize(objective, n_trials=check_interval)  # Run batch
    landscape = analyzer.analyze(study)                 # Analyze
    stopping_criterion.update(landscape, n_trials)      # Update confidence

# Stage 2: Strategy Selection (based on final landscape)
strategy = selector.recommend_strategy(landscape)

# Stage 3: Optimization (with recommended strategy)
optimization_study = create_study(recommended_sampler)
optimization_study.optimize(objective, n_trials=remaining)
```

### 3. Comprehensive Documentation

**File**: [`docs/PROTOCOL_10_IMSO.md`](PROTOCOL_10_IMSO.md)

**Contents**:
- Complete Protocol 10 architecture explanation
- Two-study approach rationale
- Adaptive characterization details
- Algorithm recommendations (GP-BO, CMA-ES, TPE)
- Usage examples
- Expected performance (41% reduction vs TPE alone)
- Comparison with Version 1.0

**File**: [`docs/INDEX.md`](INDEX.md) - Updated

**Changes**:
- Added Protocol 10 to Architecture & Design section
- Added to Key Files reference table
- Positioned as advanced optimization technique

### 4. Test Script

**File**: [`test_adaptive_characterization.py`](../test_adaptive_characterization.py)

**Purpose**: Validate that adaptive characterization behaves correctly for different problem types.

**Tests**:
1. **Simple Smooth Quadratic**: Expected ~10-15 trials
2. **Complex Multimodal (Rastrigin)**: Expected ~15-30 trials

**How to Run**:
```bash
python test_adaptive_characterization.py
```

## Configuration

### Old Config (v1.0):
```json
{
  "intelligent_optimization": {
    "enabled": true,
    "characterization_trials": 15,  // Fixed!
    "min_analysis_trials": 10,
    "stagnation_window": 10,
    "min_improvement_threshold": 0.001
  }
}
```

### New Config (v2.0):
```json
{
  "intelligent_optimization": {
    "enabled": true,
    "characterization": {
      "min_trials": 10,
      "max_trials": 30,
      "confidence_threshold": 0.85,
      "check_interval": 5
    },
    "landscape_analysis": {
      "min_trials_for_analysis": 10
    },
    "strategy_selection": {
      "allow_cmaes": true,
      "allow_gpbo": true,
      "allow_tpe": true
    }
  },
  "trials": {
    "n_trials": 50  // For optimization phase
  }
}
```

## Intelligence Added

### Problem: How to determine characterization trial count?

**Old Approach (v1.0)**:
- Fixed 15 trials for all problems
- Wasteful for simple problems (only need ~10 trials)
- Insufficient for complex problems (may need ~25 trials)

**New Approach (v2.0) - Adaptive Intelligence**:

1. **Metric Stability Detection**:
   ```python
   # Track smoothness over last 3 analyses
   smoothness_values = [0.72, 0.68, 0.71]  # Converging!
   smoothness_std = 0.017  # Low variance = stable
   if smoothness_std < 0.05:
       metric_stable = True  # Confident in measurement
   ```

2. **Complexity-Aware Sample Adequacy**:
   ```python
   if multimodal and n_modes > 2:
       # Complex: need to sample multiple regions
       required = 10 + 5 * n_modes + 2 * dims
   elif smooth and unimodal:
       # Simple: quick convergence expected
       required = 10 + dims
   ```

3. **Parameter Coverage Assessment**:
   ```python
   # Check if explored enough of each parameter range
   for param in params:
       coverage = (explored_max - explored_min) / (bound_max - bound_min)
   # Need at least 50% coverage for confidence
   ```

4. **Landscape Clarity**:
   ```python
   # Clear classification = confident stopping
   if smoothness > 0.7 or smoothness < 0.3:  # Very smooth or very rugged
       clarity_high = True
   if noise < 0.3 or noise > 0.7:  # Low noise or high noise
       clarity_high = True
   ```
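
The coverage rule in step 3 can be written as a small runnable helper. A sketch with assumed names and data shapes (parameter name → sampled values, parameter name → (lower, upper) bounds):

```python
# Hypothetical coverage helper illustrating step 3, not the module's real API.
def parameter_coverage(explored: dict, bounds: dict) -> dict:
    """Fraction of each parameter's range spanned by the explored samples."""
    coverage = {}
    for name, (lower, upper) in bounds.items():
        values = explored[name]
        coverage[name] = (max(values) - min(values)) / (upper - lower)
    return coverage
```

Confidence then requires at least 50% coverage on every parameter.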

### Result: Self-Adapting Characterization

**Simple Problem Example** (circular plate frequency tuning):
```
Trial 5:  Landscape = smooth_unimodal (preliminary)
Trial 10: Landscape = smooth_unimodal (confidence 72%)
  - Smoothness stable (0.71 ± 0.02)
  - Unimodal confirmed
  - Coverage adequate (60%)

Trial 15: Landscape = smooth_unimodal (confidence 87%)
  - All metrics converged
  - Clear classification

STOP: Confidence threshold met (87% ≥ 85%)
Total characterization trials: 14
```

**Complex Problem Example** (multimodal with 4 modes):
```
Trial 10: Landscape = multimodal (preliminary, 3 modes)
Trial 15: Landscape = multimodal (confidence 58%, 4 modes detected)
  - Multimodality still evolving
  - Need more coverage

Trial 20: Landscape = rugged_multimodal (confidence 71%, 4 modes)
  - Classification stable
  - Coverage improving (55%)

Trial 25: Landscape = rugged_multimodal (confidence 86%, 4 modes)
  - All metrics converged
  - Adequate coverage (62%)

STOP: Confidence threshold met (86% ≥ 85%)
Total characterization trials: 26
```

## Benefits

### Efficiency
- ✅ **Simple problems**: Stop early (~10-15 trials) → 33% reduction
- ✅ **Complex problems**: Extend as needed (~20-30 trials) → Adequate coverage
- ✅ **No wasted trials**: Only characterize as much as necessary

### Robustness
- ✅ **Adaptive**: Adjusts to problem complexity automatically
- ✅ **Confidence-based**: Only stops when metrics are stable
- ✅ **Bounded**: Min 10, max 30 trials (safety limits)

### Transparency
- ✅ **Detailed reports**: Explains all stopping decisions
- ✅ **Metric tracking**: Full history of convergence
- ✅ **Reproducibility**: All logged to JSON

## Example Usage

```python
from pathlib import Path

from optimization_engine.intelligent_optimizer import IntelligentOptimizer

# Create optimizer with adaptive characterization config
config = {
    "intelligent_optimization": {
        "enabled": True,
        "characterization": {
            "min_trials": 10,
            "max_trials": 30,
            "confidence_threshold": 0.85,
            "check_interval": 5
        }
    },
    "trials": {
        "n_trials": 50  # For optimization phase after characterization
    }
}

optimizer = IntelligentOptimizer(
    study_name="my_optimization",
    study_dir=Path("results"),
    config=config,
    verbose=True
)

# Define design variables
design_vars = {
    'parameter1': (lower1, upper1),
    'parameter2': (lower2, upper2)
}

# Run Protocol 10 with adaptive characterization
results = optimizer.optimize(
    objective_function=my_objective,
    design_variables=design_vars,
    n_trials=50,  # Only for optimization phase
    target_value=115.0,
    tolerance=0.1
)

# Characterization will stop at 10-30 trials automatically
# Then optimization will use the recommended algorithm for remaining trials
```

## Testing Recommendations

1. **Unit Test**: Run `test_adaptive_characterization.py`
   - Validates adaptive behavior on toy problems
   - Expected: Simple problem stops early, complex problem continues

2. **Integration Test**: Run the existing circular plate study
   - Should stop characterization at ~12-15 trials (smooth unimodal)
   - Compare with the fixed 15-trial approach (should be similar or better)

3. **Stress Test**: Create a highly multimodal FEA problem
   - Should extend characterization to ~25-30 trials
   - Verify adequate coverage of multiple modes

## Next Steps

1. **Test on Real FEA Problem**: Use the circular plate frequency tuning study
2. **Validate Stopping Decisions**: Review characterization logs
3. **Benchmark Performance**: Compare v2.0 vs v1.0 trial efficiency
4. **GP-BO Integration**: Add Gaussian Process Bayesian Optimization support
5. **Two-Study Implementation**: Complete the transition to the new optimized study

## Version Comparison

| Feature | v1.0 | v2.0 |
|---------|------|------|
| Characterization trials | Fixed (15) | Adaptive (10-30) |
| Problem complexity aware | ❌ No | ✅ Yes |
| Metric convergence detection | ❌ No | ✅ Yes |
| Confidence scoring | ❌ No | ✅ Yes |
| Simple problem efficiency | 15 trials | ~12 trials (20% reduction) |
| Complex problem adequacy | 15 trials (may be insufficient) | ~25 trials (adequate) |
| Transparency | Basic logs | Comprehensive reports |
| Algorithm recommendation | TPE/CMA-ES | GP-BO/CMA-ES/TPE |

## Files Modified

1. ✅ `optimization_engine/adaptive_characterization.py` (NEW)
2. ✅ `optimization_engine/intelligent_optimizer.py` (UPDATED)
3. ✅ `docs/PROTOCOL_10_IMSO.md` (NEW)
4. ✅ `docs/INDEX.md` (UPDATED)
5. ✅ `test_adaptive_characterization.py` (NEW)
6. ✅ `docs/PROTOCOL_10_V2_IMPLEMENTATION.md` (NEW - this file)

## Success Criteria

- ✅ Adaptive characterization module implemented
- ✅ Integration with intelligent optimizer complete
- ✅ Comprehensive documentation written
- ✅ Test script created
- ✅ Configuration updated
- ✅ All code compiles without errors

**Status**: READY FOR TESTING ✅

---

**Last Updated**: November 20, 2025
**Implementation Time**: ~2 hours
**Lines of Code Added**: ~600 lines (module + docs + tests)

@@ -1,142 +0,0 @@
# Fix Summary: Protocol 11 - Multi-Objective Support

**Date:** 2025-11-21
**Issue:** IntelligentOptimizer crashes on multi-objective optimization studies
**Status:** ✅ FIXED

## Root Cause

The IntelligentOptimizer (Protocol 10) was hardcoded for single-objective optimization only. When used with multi-objective studies:

1. **Trials executed successfully** - All simulations ran and data was saved to `study.db`
2. **Crash during result compilation** - Failed when accessing `study.best_trial/best_params/best_value`
3. **No tracking files generated** - intelligent_optimizer folder remained empty
4. **Silent failure** - Error only visible in console output, not in results

## Files Modified

### 1. `optimization_engine/intelligent_optimizer.py`

**Changes:**
- Added `self.directions` attribute to store study type
- Modified `_compile_results()` to handle both single and multi-objective (lines 327-370)
- Modified `_run_fallback_optimization()` to handle both cases (lines 372-413)
- Modified `_print_final_summary()` to format multi-objective values correctly (lines 427-445)
- Added Protocol 11 initialization message (lines 116-119)

**Key Fix:**
```python
def _compile_results(self) -> Dict[str, Any]:
    is_multi_objective = len(self.study.directions) > 1

    if is_multi_objective:
        best_trials = self.study.best_trials  # Pareto front
        representative_trial = best_trials[0] if best_trials else None
        # ...
    else:
        best_params = self.study.best_params  # Single objective API
        # ...
```

### 2. `optimization_engine/landscape_analyzer.py`

**Changes:**
- Modified `print_landscape_report()` to handle `None` input (lines 346-354)
- Added check for multi-objective studies

**Key Fix:**
```python
def print_landscape_report(landscape: Dict, verbose: bool = True):
    # Handle None (multi-objective studies)
    if landscape is None:
        print("\n [LANDSCAPE ANALYSIS] Skipped for multi-objective optimization")
        return
```

### 3. `optimization_engine/strategy_selector.py`

**Changes:**
- Modified `recommend_strategy()` to handle `None` landscape (lines 58-61)
- Added None check before calling `.get()` on the landscape dict

**Key Fix:**
```python
def recommend_strategy(...):
    # Handle None landscape (multi-objective optimization)
    if landscape is None or not landscape.get('ready', False):
        return self._recommend_random_exploration(trials_completed)
```

### 4. `studies/bracket_stiffness_optimization/run_optimization.py`

**Changes:**
- Fixed landscape_analysis None check in results printing (line 251)

**Key Fix:**
```python
if 'landscape_analysis' in results and results['landscape_analysis'] is not None:
    print(f"  Landscape Type: {results['landscape_analysis'].get('landscape_type', 'N/A')}")
```

### 5. `atomizer-dashboard/frontend/src/pages/Dashboard.tsx`

**Changes:**
- Removed hardcoded "Hz" units from objective values and metrics, making the dashboard generic for all optimization types
- Line 204: Removed " Hz" from Best Value metric
- Line 209: Removed " Hz" from Avg Objective metric
- Line 242: Changed Y-axis label from "Objective (Hz)" to "Objective"
- Line 298: Removed " Hz" from parameter space tooltip
- Line 341: Removed " Hz" from trial feed objective display
- Line 43: Removed " Hz" from new best alert message

### 6. `docs/PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md`

**Created:** Comprehensive documentation explaining:
- The problem and root cause
- The solution pattern
- Implementation checklist
- Testing protocol
- Files that need review

## Testing

Tested with the bracket_stiffness_optimization study:
- **Objectives:** Maximize stiffness, Minimize mass
- **Directions:** `["minimize", "minimize"]` (multi-objective)
- **Expected:** Complete successfully with all tracking files

## Results

❌ **Before Fix:**
- study.db created ✓
- intelligent_optimizer/ EMPTY ✗
- optimization_summary.json MISSING ✗
- RuntimeError in console ✗

✅ **After Fix:**
- study.db created ✓
- intelligent_optimizer/ populated ✓
- optimization_summary.json created ✓
- No errors ✓
- Protocol 11 message displayed ✓

## Lessons Learned

1. **Always test both single and multi-objective cases**
2. **Check for `None` before calling `.get()` on dict-like objects**
3. **Multi-objective support must be baked into the design, not added later**
4. **Silent failures are dangerous - always validate output files exist**

## Future Work

- [ ] Review files listed in Protocol 11 documentation for similar issues
- [ ] Add unit tests for multi-objective support in all optimizers
- [ ] Create helper function `get_best_solution(study)` for both cases
- [ ] Add validation checks in study creation to warn about configuration issues

## Conclusion

Protocol 11 is now **MANDATORY** for all optimization components. Any code that accesses `study.best_trial`, `study.best_params`, or `study.best_value` MUST first check if the study is multi-objective and handle it appropriately.
@@ -1,177 +0,0 @@
# Protocol 11: Multi-Objective Optimization Support

**Status:** MANDATORY
**Applies To:** ALL optimization studies
**Last Updated:** 2025-11-21

## Overview

ALL optimization engines in Atomizer MUST support both single-objective and multi-objective optimization without requiring code changes. This is a **critical requirement** that prevents runtime failures.

## The Problem

Previously, IntelligentOptimizer (Protocol 10) only supported single-objective optimization. When used with multi-objective studies, it would:
1. Successfully run all trials
2. Save trials to the Optuna database (`study.db`)
3. **CRASH** when trying to compile results, causing:
   - No intelligent optimizer tracking files (confidence_history.json, strategy_transitions.json)
   - No optimization_summary.json
   - No final reports
   - Silent failures that are hard to debug

## The Root Cause

Optuna has different APIs for single- vs. multi-objective studies:

### Single-Objective
```python
study.best_trial   # Returns single Trial object
study.best_params  # Returns dict of parameters
study.best_value   # Returns float
```

### Multi-Objective
```python
study.best_trials  # Returns LIST of Pareto-optimal trials
study.best_params  # ❌ RAISES RuntimeError
study.best_value   # ❌ RAISES RuntimeError
study.best_trial   # ❌ RAISES RuntimeError
```

## The Solution

### 1. Always Check Study Type

```python
is_multi_objective = len(study.directions) > 1
```

### 2. Use Conditional Access Patterns

```python
if is_multi_objective:
    best_trials = study.best_trials
    if best_trials:
        # Select representative trial (e.g., first Pareto solution)
        representative_trial = best_trials[0]
        best_params = representative_trial.params
        best_value = representative_trial.values  # Tuple
        best_trial_num = representative_trial.number
    else:
        best_params = {}
        best_value = None
        best_trial_num = None
else:
    # Single-objective: safe to use standard API
    best_params = study.best_params
    best_value = study.best_value
    best_trial_num = study.best_trial.number
```
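
The conditional pattern above can be wrapped in the `get_best_solution(study)` helper proposed under Future Work. A hedged sketch, not yet in the codebase; it works for any object exposing Optuna's `directions`/`best_trials`/`best_params` study attributes:

```python
# Sketch of the proposed get_best_solution() helper (assumed, not implemented).
def get_best_solution(study):
    """Return (best_params, best_value, best_trial_number) for either study type."""
    if len(study.directions) > 1:
        pareto = study.best_trials            # Pareto-optimal trials
        if not pareto:
            return {}, None, None
        rep = pareto[0]                       # representative: first Pareto solution
        return rep.params, tuple(rep.values), rep.number
    # Single-objective: the standard Optuna accessors are safe
    return study.best_params, study.best_value, study.best_trial.number
```

A caller can then write `params, value, trial_num = get_best_solution(study)` without caring which study type it received.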

### 3. Return Rich Metadata

Always include in results:
```python
{
    'best_params': best_params,
    'best_value': best_value,  # float or tuple
    'best_trial': best_trial_num,
    'is_multi_objective': is_multi_objective,
    'pareto_front_size': len(study.best_trials) if is_multi_objective else 1,
    # ... other fields
}
```

## Implementation Checklist

When creating or modifying any optimization component:

- [ ] **Study Creation**: Support `directions` parameter
  ```python
  if directions:
      study = optuna.create_study(directions=directions, ...)
  else:
      study = optuna.create_study(direction='minimize', ...)
  ```
- [ ] **Result Compilation**: Check `len(study.directions) > 1`
- [ ] **Best Trial Access**: Use conditional logic (single vs. multi)
- [ ] **Logging**: Print Pareto front size for multi-objective
- [ ] **Reports**: Handle tuple objectives in visualization
- [ ] **Testing**: Test with BOTH single and multi-objective cases

## Files Fixed

- ✅ `optimization_engine/intelligent_optimizer.py`
  - `_compile_results()` method
  - `_run_fallback_optimization()` method

## Files That Need Review

Check these files for similar issues:

- [ ] `optimization_engine/study_continuation.py` (lines 96, 259-260)
- [ ] `optimization_engine/hybrid_study_creator.py` (line 468)
- [ ] `optimization_engine/intelligent_setup.py` (line 606)
- [ ] `optimization_engine/llm_optimization_runner.py` (line 384)

## Testing Protocol

Before marking any optimization study as complete:

1. **Single-Objective Test**
   ```python
   directions = None  # or ['minimize']
   # Should complete without errors
   ```

2. **Multi-Objective Test**
   ```python
   directions = ['minimize', 'minimize']
   # Should complete without errors
   # Should generate ALL tracking files
   ```

3. **Verify Outputs**
   - `2_results/study.db` exists
   - `2_results/intelligent_optimizer/` has tracking files
   - `2_results/optimization_summary.json` exists
   - No RuntimeError in logs

## Design Principle

**"Write Once, Run Anywhere"**

Any optimization component should:
1. Accept both single and multi-objective problems
2. Automatically detect the study type
3. Handle result compilation appropriately
4. Never raise RuntimeError due to API misuse

## Example: Bracket Study

The bracket_stiffness_optimization study is multi-objective:
- Objective 1: Maximize stiffness (minimize -stiffness)
- Objective 2: Minimize mass
- Constraint: mass ≤ 0.2 kg

This study exposed the bug because:
```python
directions = ["minimize", "minimize"]  # Multi-objective
```

After the fix, it should:
- Run all 50 trials successfully
- Generate a Pareto front with multiple solutions
- Save all intelligent optimizer tracking files
- Create complete reports with tuple objectives

## Future Work

- Add explicit validation in `IntelligentOptimizer.__init__()` to warn about common mistakes
- Create helper function `get_best_solution(study)` that handles both cases
- Add unit tests for multi-objective support in all optimizers

---

**Remember:** Multi-objective support is NOT optional. It's a core requirement for production-ready optimization engines.
@@ -1,386 +0,0 @@
|
|||||||
# Protocol 13: Real-Time Dashboard Tracking
|
|
||||||
|
|
||||||
**Status**: ✅ COMPLETED (Enhanced December 2025)
|
|
||||||
**Date**: November 21, 2025 (Last Updated: December 3, 2025)
|
|
||||||
**Priority**: P1 (Critical)
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
Protocol 13 implements a comprehensive real-time web dashboard for monitoring multi-objective optimization studies. It provides live visualization of optimizer state, Pareto fronts, parallel coordinates, and trial history.
|
|
||||||
|
|
||||||
## Architecture
|
|
||||||
|
|
||||||
### Backend Components
|
|
||||||
|
|
||||||
#### 1. Real-Time Tracking System
|
|
||||||
**File**: `optimization_engine/realtime_tracking.py`
|
|
||||||
|
|
||||||
- **Per-Trial JSON Writes**: Writes `optimizer_state.json` after every trial completion
|
|
||||||
- **Optimizer State Tracking**: Captures current phase, strategy, trial progress
|
|
||||||
- **Multi-Objective Support**: Tracks study directions and Pareto front status
|
|
||||||
|
|
||||||
```python
|
|
||||||
def create_realtime_callback(tracking_dir, optimizer_ref, verbose=False):
|
|
||||||
"""Creates Optuna callback for per-trial JSON writes"""
|
|
||||||
# Writes to: {study_dir}/2_results/intelligent_optimizer/optimizer_state.json
|
|
||||||
```
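The mechanism can be sketched as a plain function that returns an Optuna-style `callback(study, trial)` closure. This is a simplified stand-in for the real implementation: the `total_trials` parameter and the exact fields written are assumptions based on the state file format described in this document.

```python
import json
from datetime import datetime
from pathlib import Path

def create_realtime_callback(tracking_dir, optimizer_ref=None, total_trials=None, verbose=False):
    """Return an Optuna-compatible callback(study, trial) that dumps state after each trial.

    `optimizer_ref` is kept for signature parity; the real callback would also
    read the current phase/strategy from it.
    """
    tracking_dir = Path(tracking_dir)
    tracking_dir.mkdir(parents=True, exist_ok=True)

    def callback(study, trial):
        # Capture a minimal snapshot of optimizer progress for the dashboard
        state = {
            "timestamp": datetime.now().isoformat(),
            "trial_number": trial.number,
            "total_trials": total_trials,
            "is_multi_objective": len(study.directions) > 1,
            "study_directions": [str(d) for d in study.directions],
        }
        # Overwrite the state file so the dashboard always sees the latest trial
        (tracking_dir / "optimizer_state.json").write_text(json.dumps(state, indent=2))
        if verbose:
            print(f"[tracking] wrote state for trial {trial.number}")

    return callback
```

The closure duck-types against Optuna's callback interface (`study.directions`, `trial.number`), so it needs no direct Optuna import.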
**Data Structure**:
```json
{
  "timestamp": "2025-11-21T15:27:28.828930",
  "trial_number": 29,
  "total_trials": 50,
  "current_phase": "adaptive_optimization",
  "current_strategy": "GP_UCB",
  "is_multi_objective": true,
  "study_directions": ["maximize", "minimize"]
}
```

#### 2. REST API Endpoints
**File**: `atomizer-dashboard/backend/api/routes/optimization.py`

**New Protocol 13 Endpoints**:

1. **GET `/api/optimization/studies/{study_id}/metadata`**
   - Returns objectives, design variables, constraints with units
   - Implements unit inference from descriptions
   - Supports Protocol 11 multi-objective format

2. **GET `/api/optimization/studies/{study_id}/optimizer-state`**
   - Returns real-time optimizer state from JSON
   - Shows current phase and strategy
   - Updates every trial

3. **GET `/api/optimization/studies/{study_id}/pareto-front`**
   - Returns Pareto-optimal solutions for multi-objective studies
   - Uses Optuna's `study.best_trials` API
   - Includes constraint satisfaction status

**Unit Inference Function**:
```python
def _infer_objective_unit(objective: Dict) -> str:
    """Infer unit from objective name and description"""
    # Pattern matching: frequency→Hz, stiffness→N/mm, mass→kg
    # Regex extraction: "(N/mm)" from description
```
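A minimal sketch of how such inference could work. The keyword table and the function body here are illustrative assumptions, not the actual backend code:

```python
import re

# Hypothetical keyword→unit table; the real mapping lives in the backend route module.
_UNIT_PATTERNS = {
    "frequency": "Hz",
    "stiffness": "N/mm",
    "mass": "kg",
    "stress": "MPa",
    "displacement": "mm",
}

def infer_objective_unit(objective: dict) -> str:
    """Best-effort unit inference from an objective's name and description."""
    description = objective.get("description", "")
    # 1) Explicit unit in parentheses, e.g. "Tip load response (N/mm)"
    match = re.search(r"\(([A-Za-z/%°²³·^0-9]+)\)", description)
    if match:
        return match.group(1)
    # 2) Keyword lookup on the objective name
    name = objective.get("name", "").lower()
    for keyword, unit in _UNIT_PATTERNS.items():
        if keyword in name:
            return unit
    return ""
```

Explicit parenthesized units win over keyword guesses, which matches the troubleshooting advice later in this document (add a `unit` field or a "(N/mm)"-style pattern to the description).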
### Frontend Components

#### 1. OptimizerPanel Component
**File**: `atomizer-dashboard/frontend/src/components/OptimizerPanel.tsx`

**Features**:
- Real-time phase display (Characterization, Exploration, Exploitation, Adaptive)
- Current strategy indicator (TPE, GP, NSGA-II, etc.)
- Progress bar with trial count
- Multi-objective study detection
- Auto-refresh every 2 seconds

**Visual Design**:
```
┌─────────────────────────────────┐
│ Intelligent Optimizer Status    │
├─────────────────────────────────┤
│ Phase: [Adaptive Optimization]  │
│ Strategy: [GP_UCB]              │
│ Progress: [████████░░] 29/50    │
│ Multi-Objective: ✓              │
└─────────────────────────────────┘
```

#### 2. ParetoPlot Component
**File**: `atomizer-dashboard/frontend/src/components/ParetoPlot.tsx`

**Features**:
- Scatter plot of Pareto-optimal solutions
- Pareto front line connecting optimal points
- **3 Normalization Modes**:
  - **Raw**: Original engineering values
  - **Min-Max**: Scales to [0, 1] for equal comparison
  - **Z-Score**: Standardizes to mean=0, std=1
- Tooltip shows raw values regardless of normalization
- Color-coded feasibility (green=feasible, red=infeasible)
- Dynamic axis labels with units

**Normalization Math**:
```typescript
// Min-Max: (x - min) / (max - min) → [0, 1]
// Z-Score: (x - mean) / std → standardized
```
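Both modes are straightforward to express. A small Python sketch for illustration (the dashboard itself implements this client-side in TypeScript):

```python
from statistics import mean, pstdev

def min_max(values):
    """Scale values to [0, 1]; a constant series maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def z_score(values):
    """Standardize to mean 0 and (population) std 1."""
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return [0.0 for _ in values]
    return [(v - mu) / sigma for v in values]
```

The degenerate-series guards matter in practice: early in a study all trials may share an objective value, and a division by zero would blank the plot.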
#### 3. ParallelCoordinatesPlot Component
**File**: `atomizer-dashboard/frontend/src/components/ParallelCoordinatesPlot.tsx`

**Features**:
- High-dimensional visualization (objectives + design variables)
- Interactive trial selection (click to toggle, hover to highlight)
- Normalized [0, 1] axes for all dimensions
- Color coding: green (feasible), red (infeasible), yellow (selected)
- Opacity management: non-selected fade to 10% when selection active
- Clear selection button

**Visualization Structure**:
```
Stiffness      Mass     support_angle   tip_thickness
    |           |             |               |
    |      ╱─────╲            ╱               |
    |     ╱       ╲─────────╱                 |
    |    ╱         ╲                          |
```

#### 4. ConvergencePlot Component (NEW - December 2025)
**File**: `atomizer-dashboard/frontend/src/components/ConvergencePlot.tsx`

**Features**:
- Dual-line visualization: trial values + running best
- Area fill gradient under trial curve
- Statistics header: Best value, Improvement %, 90% convergence trial
- Summary footer: First value, Mean, Std Dev, Total trials
- Step-after interpolation for running best line
- Reference line at best value
#### 5. ParameterImportanceChart Component (NEW - December 2025)
**File**: `atomizer-dashboard/frontend/src/components/ParameterImportanceChart.tsx`

**Features**:
- Pearson correlation between parameters and objectives
- Horizontal bar chart sorted by absolute importance
- Color coding: Green (negative correlation), Red (positive correlation)
- Tooltip with percentage and raw correlation coefficient
- Requires minimum 3 trials for statistical analysis
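A plain-Python sketch of this correlation-based ranking. `rank_importance` is a hypothetical helper for illustration, not a function from the codebase:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient; 0.0 for a constant series."""
    n = len(xs)
    if n < 3:
        raise ValueError("need at least 3 samples")  # mirrors the 3-trial minimum above
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0
    return cov / sqrt(vx * vy)

def rank_importance(param_histories, objective_history):
    """Sort parameters by |correlation| with the objective, descending."""
    scores = {name: pearson(vals, objective_history)
              for name, vals in param_histories.items()}
    return sorted(scores.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

Sorting by absolute value matches the chart's behavior: a strong negative correlation is just as "important" as a strong positive one; the sign only drives the bar color.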
#### 6. StudyReportViewer Component (NEW - December 2025)
**File**: `atomizer-dashboard/frontend/src/components/StudyReportViewer.tsx`

**Features**:
- Full-screen modal for viewing STUDY_REPORT.md
- KaTeX math equation rendering (`$...$` inline, `$$...$$` block)
- GitHub-flavored markdown (tables, code blocks, task lists)
- Custom dark theme styling for all markdown elements
- Refresh button for live updates
- External link to open in system editor

#### 7. Dashboard Integration
**File**: `atomizer-dashboard/frontend/src/pages/Dashboard.tsx`

**Layout Structure**:
```
┌──────────────────────────────────────────────────┐
│ Study Selection                                  │
├──────────────────────────────────────────────────┤
│ Metrics Grid (Best, Avg, Trials, Pruned)         │
├──────────────────────────────────────────────────┤
│ [OptimizerPanel]        [ParetoPlot]             │
├──────────────────────────────────────────────────┤
│ [ParallelCoordinatesPlot - Full Width]           │
├──────────────────────────────────────────────────┤
│ [Convergence]           [Parameter Space]        │
├──────────────────────────────────────────────────┤
│ [Recent Trials Table]                            │
└──────────────────────────────────────────────────┘
```

**Dynamic Units**:
- `getParamLabel()` helper function looks up units from metadata
- Applied to Parameter Space chart axes
- Format: `"support_angle (degrees)"`, `"tip_thickness (mm)"`
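The lookup behind `getParamLabel()` amounts to a unit search over the metadata's design variables. Sketched here in Python for brevity (the real helper is TypeScript, and its exact shape is an assumption):

```python
def get_param_label(name: str, metadata: dict) -> str:
    """Format an axis label as 'name (unit)' using study metadata, else just the name."""
    for var in metadata.get("design_variables", []):
        if var.get("name") == name and var.get("unit"):
            return f"{name} ({var['unit']})"
    return name
```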
## Integration with Existing Protocols

### Protocol 10: Intelligent Optimizer
- Real-time callback integrated into `IntelligentOptimizer.optimize()`
- Tracks phase transitions (characterization → adaptive optimization)
- Reports strategy changes
- Location: `optimization_engine/intelligent_optimizer.py:117-121`

### Protocol 11: Multi-Objective Support
- Pareto front endpoint checks `len(study.directions) > 1`
- Dashboard conditionally renders Pareto plots
- Handles both single and multi-objective studies gracefully
- Uses Optuna's `study.best_trials` for Pareto front
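Conceptually, `study.best_trials` returns the non-dominated trials. The underlying dominance filter can be sketched as follows (assuming objectives are expressed in minimization form; a maximization objective would be sign-flipped first):

```python
def dominates(a, b):
    """True if objective tuple a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

This naive filter is O(n²) in the number of trials, which is fine at dashboard scale (tens of trials); Optuna's own implementation should be preferred in production.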
### Protocol 12: Unified Extraction Library
- Extractors provide objective values for dashboard visualization
- Units defined in extractor classes flow to dashboard
- Consistent data format across all studies

## Data Flow

```
Trial Completion (Optuna)
        ↓
Realtime Callback (optimization_engine/realtime_tracking.py)
        ↓
Write optimizer_state.json
        ↓
Backend API /optimizer-state endpoint
        ↓
Frontend OptimizerPanel (2s polling)
        ↓
User sees live updates
```

## Testing

### Tested With
- **Study**: `bracket_stiffness_optimization_V2`
- **Trials**: 50 (30 completed in testing)
- **Objectives**: 2 (stiffness maximize, mass minimize)
- **Design Variables**: 2 (support_angle, tip_thickness)
- **Pareto Solutions**: 20 identified
- **Dashboard Port**: 3001 (frontend) + 8000 (backend)

### Verified Features
✅ Real-time optimizer state updates
✅ Pareto front visualization with line
✅ Normalization toggle (Raw, Min-Max, Z-Score)
✅ Parallel coordinates with selection
✅ Dynamic units from config
✅ Multi-objective detection
✅ Constraint satisfaction coloring

## File Structure

```
atomizer-dashboard/
├── backend/
│   └── api/
│       └── routes/
│           └── optimization.py          (Protocol 13 endpoints)
└── frontend/
    └── src/
        ├── components/
        │   ├── OptimizerPanel.tsx               (NEW)
        │   ├── ParetoPlot.tsx                   (NEW)
        │   └── ParallelCoordinatesPlot.tsx      (NEW)
        └── pages/
            └── Dashboard.tsx                    (updated with Protocol 13)

optimization_engine/
├── realtime_tracking.py        (NEW - per-trial JSON writes)
└── intelligent_optimizer.py    (updated with realtime callback)

studies/
└── {study_name}/
    └── 2_results/
        └── intelligent_optimizer/
            └── optimizer_state.json    (written every trial)
```
## Configuration

### Backend Setup
```bash
cd atomizer-dashboard/backend
python -m uvicorn api.main:app --reload --port 8000
```

### Frontend Setup
```bash
cd atomizer-dashboard/frontend
npm run dev  # Runs on port 3001
```

### Study Requirements
- Must use Protocol 10 (IntelligentOptimizer)
- Must have `optimization_config.json` with objectives and design_variables
- Real-time tracking enabled by default in IntelligentOptimizer

## Usage

1. **Start Dashboard**:
   ```bash
   # Terminal 1: Backend
   cd atomizer-dashboard/backend
   python -m uvicorn api.main:app --reload --port 8000

   # Terminal 2: Frontend
   cd atomizer-dashboard/frontend
   npm run dev
   ```

2. **Start Optimization**:
   ```bash
   cd studies/my_study
   python run_optimization.py --trials 50
   ```

3. **View Dashboard**:
   - Open browser to `http://localhost:3001`
   - Select study from dropdown
   - Watch real-time updates every trial

4. **Interact with Plots**:
   - Toggle normalization on the Pareto plot
   - Click lines in parallel coordinates to select trials
   - Hover for detailed trial information

## Performance

- **Backend**: ~10ms per endpoint (SQLite queries cached)
- **Frontend**: 2s polling interval (configurable)
- **Real-time writes**: <5ms per trial (JSON serialization)
- **Dashboard load time**: <500ms initial render
## December 2025 Enhancements

### Completed
- [x] **ConvergencePlot**: Enhanced with running best, statistics panel, gradient fill
- [x] **ParameterImportanceChart**: Pearson correlation analysis with color-coded bars
- [x] **StudyReportViewer**: Full markdown rendering with KaTeX math equation support
- [x] **Pruning endpoint**: Now queries Optuna SQLite directly instead of a JSON file
- [x] **Report endpoint**: New `/studies/{id}/report` endpoint for STUDY_REPORT.md
- [x] **Chart data fix**: Proper `values` array transformation for single/multi-objective

### API Endpoint Additions (December 2025)

4. **GET `/api/optimization/studies/{study_id}/pruning`** (Enhanced)
   - Now queries the Optuna database directly for PRUNED trials
   - Returns params, timing, and pruning cause for each trial
   - Falls back to the legacy JSON file if the database is unavailable

5. **GET `/api/optimization/studies/{study_id}/report`** (NEW)
   - Returns STUDY_REPORT.md content as JSON
   - Searches in 2_results/, 3_results/, and the study root
   - Returns 404 if no report is found

## Future Enhancements (P3)

- [ ] WebSocket support for instant updates (currently polling)
- [ ] Export Pareto front as CSV/JSON
- [ ] 3D Pareto plot for 3+ objectives
- [ ] Strategy performance comparison charts
- [ ] Historical phase duration analysis
- [ ] Mobile-responsive design

## Troubleshooting

### Dashboard shows "No Pareto front data yet"
- Study must have multiple objectives
- At least 2 trials must complete
- Check the `/api/optimization/studies/{id}/pareto-front` endpoint

### OptimizerPanel shows "Not available"
- Study must use IntelligentOptimizer (Protocol 10)
- Check that `2_results/intelligent_optimizer/optimizer_state.json` exists
- Verify the realtime_callback is registered in the optimize() call

### Units not showing
- Add a `unit` field to objectives in `optimization_config.json`
- Or ensure the description contains a unit pattern: "(N/mm)", "Hz", etc.
- The backend will infer units from common patterns

## Related Documentation

- [Protocol 10: Intelligent Optimizer](PROTOCOL_10_V2_IMPLEMENTATION.md)
- [Protocol 11: Multi-Objective Support](PROTOCOL_10_IMSO.md)
- [Protocol 12: Unified Extraction](HOW_TO_EXTEND_OPTIMIZATION.md)
- [Dashboard React Implementation](DASHBOARD_REACT_IMPLEMENTATION.md)

---

**Implementation Complete**: All P1 and P2 features delivered
**Ready for Production**: Yes
**Tested**: Yes (50-trial multi-objective study)
@@ -1,425 +0,0 @@
# Implementation Guide: Protocol 13 - Real-Time Tracking

**Date:** 2025-11-21
**Status:** 🚧 IN PROGRESS
**Priority:** P0 - CRITICAL

## What's Done ✅

1. **Created [`realtime_tracking.py`](../optimization_engine/realtime_tracking.py)**
   - `RealtimeTrackingCallback` class
   - Writes JSON files after EVERY trial (atomic writes)
   - Files: optimizer_state.json, strategy_history.json, trial_log.json, landscape_snapshot.json, confidence_history.json
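An atomic write is typically done by writing to a temporary file in the same directory and renaming it over the target, so a dashboard polling the file never reads a half-written JSON. A sketch of the pattern (not the actual `RealtimeTrackingCallback` code):

```python
import json
import os
import tempfile

def atomic_write_json(path, data):
    """Write JSON via a temp file + os.replace so readers never see a partial file."""
    directory = os.path.dirname(os.path.abspath(path))
    # Temp file must be on the same filesystem as the target for the rename to be atomic
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f, indent=2)
        os.replace(tmp, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise
```

`os.replace` (rather than `os.rename`) is the portable choice here, since plain `rename` fails on Windows when the destination already exists.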
2. **Fixed Multi-Objective Strategy (Protocol 12)**
   - Modified [`strategy_selector.py`](../optimization_engine/strategy_selector.py)
   - Added `_recommend_multiobjective_strategy()` method
   - Multi-objective: Random (8 trials) → TPE with multivariate

## What's Needed ⚠️

### Step 1: Integrate Callback into IntelligentOptimizer

**File:** [`optimization_engine/intelligent_optimizer.py`](../optimization_engine/intelligent_optimizer.py)

**Line 48 - Add import:**
```python
from optimization_engine.adaptive_characterization import CharacterizationStoppingCriterion
from optimization_engine.realtime_tracking import create_realtime_callback  # ADD THIS
```

**Line ~90 in `__init__()` - Create callback:**
```python
def __init__(self, study_name: str, study_dir: Path, config: Dict, verbose: bool = True):
    # ... existing init code ...

    # Create realtime tracking callback (Protocol 13)
    self.realtime_callback = create_realtime_callback(
        tracking_dir=self.tracking_dir,
        optimizer_ref=self,
        verbose=self.verbose
    )
```

**Find ALL `study.optimize()` calls and add the callback:**

Search for: `self.study.optimize(`

Replace pattern:
```python
# BEFORE:
self.study.optimize(objective_function, n_trials=check_interval)

# AFTER:
self.study.optimize(
    objective_function,
    n_trials=check_interval,
    callbacks=[self.realtime_callback]
)
```

**Locations to fix (approximate line numbers):**
- Line ~190: Characterization phase
- Line ~230: Optimization phase (multiple locations)
- Line ~260: Refinement phase
- Line ~380: Fallback optimization

**CRITICAL:** EVERY `study.optimize()` call must include `callbacks=[self.realtime_callback]`

### Step 2: Test Realtime Tracking

```bash
# Clear old results
cd studies/bracket_stiffness_optimization_V2
del /Q 2_results\study.db
rd /S /Q 2_results\intelligent_optimizer

# Run with new code
python -B run_optimization.py --trials 10

# Verify files appear IMMEDIATELY after each trial
dir 2_results\intelligent_optimizer
# Should see:
# - optimizer_state.json
# - strategy_history.json
# - trial_log.json
# - landscape_snapshot.json
# - confidence_history.json

# Check file updates in real-time
python -c "import json; print(json.load(open('2_results/intelligent_optimizer/trial_log.json'))[-1])"
```
---

## Dashboard Implementation Plan

### Backend API Endpoints (Python/FastAPI)

**File:** [`atomizer-dashboard/backend/api/routes/optimization.py`](../atomizer-dashboard/backend/api/routes/optimization.py)

**Add new endpoints:**

```python
@router.get("/studies/{study_id}/metadata")
async def get_study_metadata(study_id: str):
    """Read optimization_config.json for objectives, design vars, units."""
    study_dir = find_study_dir(study_id)
    config_file = study_dir / "optimization_config.json"

    with open(config_file) as f:
        config = json.load(f)

    return {
        "objectives": config["objectives"],
        "design_variables": config["design_variables"],
        "constraints": config.get("constraints", []),
        "study_name": config["study_name"]
    }


@router.get("/studies/{study_id}/optimizer-state")
async def get_optimizer_state(study_id: str):
    """Read realtime optimizer state from intelligent_optimizer/."""
    study_dir = find_study_dir(study_id)
    state_file = study_dir / "2_results/intelligent_optimizer/optimizer_state.json"

    if not state_file.exists():
        return {"available": False}

    with open(state_file) as f:
        state = json.load(f)

    return {"available": True, **state}


@router.get("/studies/{study_id}/pareto-front")
async def get_pareto_front(study_id: str):
    """Get Pareto-optimal solutions for multi-objective studies."""
    study_dir = find_study_dir(study_id)
    db_path = study_dir / "2_results/study.db"

    storage = optuna.storages.RDBStorage(f"sqlite:///{db_path}")
    study = optuna.load_study(study_name=study_id, storage=storage)

    if len(study.directions) == 1:
        return {"is_multi_objective": False}

    pareto_trials = study.best_trials

    return {
        "is_multi_objective": True,
        "pareto_front": [
            {
                "trial_number": t.number,
                "values": t.values,
                "params": t.params,
                "user_attrs": dict(t.user_attrs)
            }
            for t in pareto_trials
        ]
    }
```
### Frontend Components (React/TypeScript)

**1. Optimizer Panel Component**

**File:** `atomizer-dashboard/frontend/src/components/OptimizerPanel.tsx` (CREATE NEW)

```typescript
import { useEffect, useState } from 'react';
import { Card } from './Card';

interface OptimizerState {
  available: boolean;
  current_phase?: string;
  current_strategy?: string;
  trial_number?: number;
  total_trials?: number;
  latest_recommendation?: {
    strategy: string;
    confidence: number;
    reasoning: string;
  };
}

export function OptimizerPanel({ studyId }: { studyId: string }) {
  const [state, setState] = useState<OptimizerState | null>(null);

  useEffect(() => {
    const fetchState = async () => {
      const res = await fetch(`/api/optimization/studies/${studyId}/optimizer-state`);
      const data = await res.json();
      setState(data);
    };

    fetchState();
    const interval = setInterval(fetchState, 1000); // Update every second
    return () => clearInterval(interval);
  }, [studyId]);

  if (!state?.available) {
    return null;
  }

  return (
    <Card title="Intelligent Optimizer Status">
      <div className="space-y-4">
        {/* Phase */}
        <div>
          <div className="text-sm text-dark-300">Phase</div>
          <div className="text-lg font-semibold text-primary-400">
            {state.current_phase || 'Unknown'}
          </div>
        </div>

        {/* Strategy */}
        <div>
          <div className="text-sm text-dark-300">Current Strategy</div>
          <div className="text-lg font-semibold text-blue-400">
            {state.current_strategy?.toUpperCase() || 'Unknown'}
          </div>
        </div>

        {/* Progress */}
        <div>
          <div className="text-sm text-dark-300">Progress</div>
          <div className="text-lg">
            {state.trial_number} / {state.total_trials} trials
          </div>
          <div className="w-full bg-dark-500 rounded-full h-2 mt-2">
            <div
              className="bg-primary-400 h-2 rounded-full transition-all"
              style={{
                width: `${((state.trial_number || 0) / (state.total_trials || 1)) * 100}%`
              }}
            />
          </div>
        </div>

        {/* Confidence */}
        {state.latest_recommendation && (
          <div>
            <div className="text-sm text-dark-300">Confidence</div>
            <div className="flex items-center gap-2">
              <div className="flex-1 bg-dark-500 rounded-full h-2">
                <div
                  className="bg-green-400 h-2 rounded-full transition-all"
                  style={{
                    width: `${state.latest_recommendation.confidence * 100}%`
                  }}
                />
              </div>
              <span className="text-sm font-mono">
                {(state.latest_recommendation.confidence * 100).toFixed(0)}%
              </span>
            </div>
          </div>
        )}

        {/* Reasoning */}
        {state.latest_recommendation && (
          <div>
            <div className="text-sm text-dark-300">Reasoning</div>
            <div className="text-sm text-dark-100 mt-1">
              {state.latest_recommendation.reasoning}
            </div>
          </div>
        )}
      </div>
    </Card>
  );
}
```
**2. Pareto Front Plot**

**File:** `atomizer-dashboard/frontend/src/components/ParetoPlot.tsx` (CREATE NEW)

```typescript
import { ScatterChart, Scatter, XAxis, YAxis, CartesianGrid, Tooltip, Cell, ResponsiveContainer } from 'recharts';

interface ParetoData {
  trial_number: number;
  values: [number, number];
  params: Record<string, number>;
  constraint_satisfied?: boolean;
}

export function ParetoPlot({ paretoData, objectives }: {
  paretoData: ParetoData[];
  objectives: Array<{ name: string; unit?: string }>;
}) {
  if (paretoData.length === 0) {
    return (
      <div className="h-64 flex items-center justify-center text-dark-300">
        No Pareto front data yet
      </div>
    );
  }

  const data = paretoData.map(trial => ({
    x: trial.values[0],
    y: trial.values[1],
    trial_number: trial.trial_number,
    feasible: trial.constraint_satisfied !== false
  }));

  return (
    <ResponsiveContainer width="100%" height={400}>
      <ScatterChart>
        <CartesianGrid strokeDasharray="3 3" stroke="#334155" />
        <XAxis
          type="number"
          dataKey="x"
          name={objectives[0]?.name || 'Objective 1'}
          stroke="#94a3b8"
          label={{
            value: `${objectives[0]?.name || 'Objective 1'} ${objectives[0]?.unit || ''}`.trim(),
            position: 'insideBottom',
            offset: -5,
            fill: '#94a3b8'
          }}
        />
        <YAxis
          type="number"
          dataKey="y"
          name={objectives[1]?.name || 'Objective 2'}
          stroke="#94a3b8"
          label={{
            value: `${objectives[1]?.name || 'Objective 2'} ${objectives[1]?.unit || ''}`.trim(),
            angle: -90,
            position: 'insideLeft',
            fill: '#94a3b8'
          }}
        />
        <Tooltip
          contentStyle={{ backgroundColor: '#1e293b', border: 'none', borderRadius: '8px' }}
          labelStyle={{ color: '#e2e8f0' }}
        />
        <Scatter name="Pareto Front" data={data}>
          {data.map((entry, index) => (
            <Cell
              key={`cell-${index}`}
              fill={entry.feasible ? '#10b981' : '#ef4444'}
            />
          ))}
        </Scatter>
      </ScatterChart>
    </ResponsiveContainer>
  );
}
```
**3. Update Dashboard.tsx**

**File:** [`atomizer-dashboard/frontend/src/pages/Dashboard.tsx`](../atomizer-dashboard/frontend/src/pages/Dashboard.tsx)

Add imports at top:
```typescript
import { OptimizerPanel } from '../components/OptimizerPanel';
import { ParetoPlot } from '../components/ParetoPlot';
```

Add new state:
```typescript
const [studyMetadata, setStudyMetadata] = useState(null);
const [paretoFront, setParetoFront] = useState([]);
```

Fetch metadata when a study is selected:
```typescript
useEffect(() => {
  if (selectedStudyId) {
    fetch(`/api/optimization/studies/${selectedStudyId}/metadata`)
      .then(res => res.json())
      .then(setStudyMetadata);

    fetch(`/api/optimization/studies/${selectedStudyId}/pareto-front`)
      .then(res => res.json())
      .then(data => {
        if (data.is_multi_objective) {
          setParetoFront(data.pareto_front);
        }
      });
  }
}, [selectedStudyId]);
```

Add components to the layout:
```typescript
{/* Add after metrics grid */}
<div className="grid grid-cols-2 gap-6 mb-6">
  <OptimizerPanel studyId={selectedStudyId} />
  {paretoFront.length > 0 && (
    <Card title="Pareto Front">
      <ParetoPlot
        paretoData={paretoFront}
        objectives={studyMetadata?.objectives || []}
      />
    </Card>
  )}
</div>
```
---

## Testing Checklist

- [ ] Realtime callback writes files after EVERY trial
- [ ] optimizer_state.json updates in real-time
- [ ] Dashboard shows optimizer panel with live updates
- [ ] Pareto front appears for multi-objective studies
- [ ] Units are dynamic (read from config)
- [ ] Multi-objective strategy switches from random → TPE after 8 trials

---

## Next Steps

1. Integrate the callback into IntelligentOptimizer (steps above)
2. Implement the backend API endpoints
3. Create the frontend components
4. Test end-to-end with the bracket study
5. Document as Protocol 13
@@ -64,6 +64,7 @@ Core technical specifications:
 - **SYS_12**: Extractor Library
 - **SYS_13**: Real-Time Dashboard Tracking
 - **SYS_14**: Neural Network Acceleration
+- **SYS_15**: Method Selector

 ### Layer 4: Extensions (`extensions/`)
 Guides for extending Atomizer:
@@ -140,6 +141,7 @@ LOAD_WITH: [{dependencies}]
 | 12 | Extractors | [System](system/SYS_12_EXTRACTOR_LIBRARY.md) |
 | 13 | Dashboard | [System](system/SYS_13_DASHBOARD_TRACKING.md) |
 | 14 | Neural | [System](system/SYS_14_NEURAL_ACCELERATION.md) |
+| 15 | Method Selector | [System](system/SYS_15_METHOD_SELECTOR.md) |

 ---
@@ -1,42 +0,0 @@
@echo off
REM Atomizer Dashboard Launcher
REM Starts both backend and frontend, then opens browser

echo ============================================
echo   ATOMIZER DASHBOARD LAUNCHER
echo ============================================
echo.

REM Set paths
set ATOMIZER_ROOT=%~dp0
set BACKEND_DIR=%ATOMIZER_ROOT%atomizer-dashboard\backend
set FRONTEND_DIR=%ATOMIZER_ROOT%atomizer-dashboard\frontend
set CONDA_ENV=atomizer

echo Starting Backend API (port 8000)...
start "Atomizer Backend" cmd /k "cd /d %BACKEND_DIR% && conda activate %CONDA_ENV% && python -m uvicorn api.main:app --host 0.0.0.0 --port 8000 --reload"

echo Starting Frontend (port 3003)...
start "Atomizer Frontend" cmd /k "cd /d %FRONTEND_DIR% && npm run dev"

echo.
echo Waiting for services to start...
timeout /t 5 /nobreak > nul

echo.
echo ============================================
echo   DASHBOARD READY
echo ============================================
echo.
echo Frontend: http://localhost:3003
echo Backend:  http://localhost:8000
echo API Docs: http://localhost:8000/docs
echo.
echo ============================================

REM Open browser
start http://localhost:3003

echo.
echo Press any key to close this window (servers will keep running)...
pause > nul
@@ -1,8 +0,0 @@
"""
Atomizer MCP Server

Model Context Protocol server for LLM-driven NX optimization configuration.
"""

__version__ = "0.1.0"
__author__ = "Atomaste"
@@ -1,450 +0,0 @@
# Atomizer MCP Assistant - System Prompt

## Role

You are an expert FEA optimization assistant for Siemens NX. Your role is to help users configure and run optimizations using the Atomizer platform through natural language conversation.

## Core Responsibilities

1. **Discover FEA Models**: Parse .sim files to extract solutions, expressions, and mesh information
2. **Configure Optimizations**: Build optimization configs from natural language requirements
3. **Monitor Progress**: Query optimization status and present insights
4. **Analyze Results**: Extract metrics from FEA results and recommend next steps
5. **Provide NXOpen Guidance**: Help users write NX automation scripts

---

## Available MCP Tools

### Model Discovery
- **`discover_fea_model`**: Analyze .sim file structure, extract expressions, solutions, and FEM info
- **`search_nxopen_docs`**: Search official Siemens NXOpen API documentation

### Optimization Control
- **`build_optimization_config`**: Create optimization_config.json from user requirements
- **`start_optimization`**: Launch optimization run
- **`query_optimization_status`**: Get current iteration status and metrics

### Result Analysis
- **`extract_results`**: Parse OP2/F06/XDB files for stress, displacement, mass, etc.
- **`run_nx_journal`**: Execute custom NXOpen scripts

---

## Workflow Guidelines

### 1. Model Discovery Phase

When a user provides a .sim file:

```
User: "My FEA is in C:\Projects\Bracket\analysis.sim, please import its features."

Assistant Actions:
1. Call discover_fea_model(sim_file_path="C:/Projects/Bracket/analysis.sim")
2. Parse results to identify:
   - Available solutions
   - Parametric expressions (potential design variables)
   - FEM characteristics (mesh size, element types)
   - Linked part files
3. Present summary to user in clear, structured format
4. Ask clarifying questions about optimization goals
```

**Example Response**:
> I've analyzed your FEA model. Here's what I found:
>
> **Simulation**: `analysis.sim`
> - **Solution**: Static Analysis (NX Nastran)
> - **Expressions** (8 found):
>   - `wall_thickness` = 5.0 mm
>   - `hole_diameter` = 10.0 mm
>   - `material_density` = 2.7 g/cm³
>   - ...
> - **Mesh**: 8,234 nodes, 4,521 CQUAD4 elements
>
> Which parameters would you like to optimize?

### 2. Configuration Building Phase

When defining optimization objectives:

```
User: "Minimize maximum stress while keeping mass under 0.5 kg"

Assistant Actions:
1. Identify objectives:
   - Primary: Minimize max_stress
   - Constraint: mass <= 0.5 kg
2. Determine appropriate extractors:
   - max_stress → nastran_stress extractor
   - mass → nx_mass extractor
3. Suggest reasonable targets based on engineering judgment
4. Ask for design variable bounds
5. Call build_optimization_config with complete specification
6. Present config summary for user approval
```

**Key Principles**:
- **Always validate paths**: Check that files exist before proceeding
- **Suggest realistic targets**: Base on baseline FEA results when available
- **Confirm before running**: Show config summary and ask for confirmation
- **Use domain knowledge**: Apply FEA best practices (e.g., stress concentration considerations)

### 3. Optimization Execution Phase

When starting optimization:

```
User: "Start the optimization!"

Assistant Actions:
1. Verify config is complete and valid
2. Call start_optimization(config_path=..., resume=False)
3. Provide session info to user:
   - Session ID
   - Dashboard URL
   - Log file location
4. Set up monitoring (if requested)
```

### 4. Monitoring Phase

Proactively check status and alert user:

```
Assistant Actions (periodic):
1. Call query_optimization_status(session_id=...)
2. Check for:
   - Significant improvements (>10% better objective)
   - Convergence stalling (no improvement in 20 iterations)
   - Errors or failures
3. Provide concise updates with key metrics
```

---

## NXOpen Development Guidance

### Resource Hierarchy

When helping users write NXOpen code, consult resources in this order:

1. **Official Siemens NXOpen API Documentation**
   - URL: https://docs.sw.siemens.com/en-US/doc/209349590/
   - Use for: Method signatures, parameter types, official API reference
   - Tool: `search_nxopen_docs`

2. **NXOpenTSE Documentation** (Reference Only)
   - URL: https://nxopentsedocumentation.thescriptingengineer.com/
   - GitHub: https://github.com/theScriptingEngineer/nxopentse
   - Use for: Real-world usage patterns, best practices, design patterns
   - **Important**: Reference for learning, NOT for copying code

3. **Atomizer's NXOpen Resources Guide**
   - File: `docs/NXOPEN_RESOURCES.md`
   - Use for: Project-specific patterns and conventions

### Code Generation Workflow

When generating NXOpen code:

**Step 1**: Understand the requirement
```
User: "I need to update all expressions in a sim file"
```

**Step 2**: Check official API
- Search NXOpen docs for `Part.Expressions` methods
- Identify: `CreateExpression`, `FindObject`, `Edit`

**Step 3**: Reference NXOpenTSE for patterns
- Look at expression handling examples
- Note: Error handling, session management, cleanup patterns

**Step 4**: Generate Atomizer-specific code
- Adapt pattern to our architecture
- Add Atomizer-specific error handling
- Include JSON serialization (for MCP responses)
- Add comments explaining NX-specific requirements
**Example Output**:
```python
from pathlib import Path
from typing import Any, Dict

def update_expressions(sim_path: Path, updates: Dict[str, float]) -> Dict[str, Any]:
    """
    Update expressions in a .sim file.

    Pattern inspired by NXOpenTSE expression handling.
    See: https://nxopentsedocumentation.thescriptingengineer.com/expressions.html
    """
    import NXOpen

    session = NXOpen.Session.GetSession()

    # Load sim file
    part, status = session.Parts.OpenActiveDisplay(str(sim_path))
    if status != NXOpen.PartLoadStatus.Success:
        raise ValueError(f"Failed to load {sim_path}")

    results = {}
    try:
        for expr_name, new_value in updates.items():
            expr = part.Expressions.FindObject(expr_name)

            if expr:
                # Capture the old value before editing, then update
                old_value = expr.Value
                expr.Edit(str(new_value))
                results[expr_name] = {
                    "status": "updated",
                    "old_value": old_value,
                    "new_value": new_value
                }
            else:
                # Create if doesn't exist
                unit = part.UnitCollection.FindObject("MilliMeter")
                new_expr = part.Expressions.CreateExpression(
                    unit, expr_name, str(new_value)
                )
                results[expr_name] = {
                    "status": "created",
                    "value": new_value
                }

        # Commit changes
        part.Save(NXOpen.BasePart.SaveComponents.TrueValue)

    except Exception as e:
        results["error"] = str(e)
    finally:
        # Always clean up
        session.Parts.SetWork(None)

    return results
```

### NXOpen Best Practices (from NXOpenTSE)

1. **Session Management**
   ```python
   # Always get session first
   session = NXOpen.Session.GetSession()

   # Check work part exists
   if session.Parts.Work is None:
       raise ValueError("No work part loaded")
   ```

2. **Error Handling**
   ```python
   # Always use try-finally for cleanup
   try:
       # NX operations
       result = do_something()
   finally:
       # Cleanup even on error
       session.Parts.SetWork(None)
   ```

3. **Expression Updates**
   ```python
   # Find before creating
   expr = part.Expressions.FindObject("param_name")
   if expr:
       expr.Edit(new_value)
   else:
       # Create new
       part.Expressions.CreateExpression(...)
   ```

4. **Simulation Access**
   ```python
   # Safely access simulation objects
   if hasattr(sim_part, 'Simulation') and sim_part.Simulation:
       solutions = sim_part.Simulation.Solutions
   ```

---

## Domain Knowledge

### FEA Optimization Best Practices

1. **Stress Analysis**:
   - Target values: Typically 0.5-0.7 × yield strength for safety factor 1.5-2.0
   - Watch for: Stress concentrations, von Mises vs. principal stress
   - Suggest: Finer mesh around holes, fillets, load application points

2. **Mass Optimization**:
   - Typical constraints: ±20% of baseline
   - Watch for: Minimum wall thickness for manufacturability
   - Suggest: Material density checks, volume calculations

3. **Multi-Objective**:
   - Use weighted sum for initial exploration
   - Suggest Pareto front analysis if objectives conflict
   - Typical weights: Stress (10×), Displacement (5×), Mass (1×)

4. **Convergence**:
   - Early phase: Latin Hypercube for exploration (15-20 trials)
   - Mid phase: TPE sampler for exploitation
   - Late phase: Gaussian Process for fine-tuning
   - Stalling criteria: <1% improvement over 20 iterations
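The weighted-sum and stalling heuristics above can be sketched in plain Python (helper names are illustrative, not Atomizer APIs):

```python
from typing import Dict, List

def weighted_score(metrics: Dict[str, float], weights: Dict[str, float]) -> float:
    """Combine objectives into one scalar, e.g. stress 10x, displacement 5x, mass 1x."""
    return sum(weights[k] * v for k, v in metrics.items())

def is_stalled(history: List[float], window: int = 20, rel_tol: float = 0.01) -> bool:
    """True if the best (minimized) value improved by <1% over the last `window` trials."""
    if len(history) <= window:
        return False
    best_before = min(history[:-window])
    best_now = min(history)
    return (best_before - best_now) < rel_tol * abs(best_before)

# 10*150 + 5*0.8 + 1*0.45
score = weighted_score({"stress": 150.0, "disp": 0.8, "mass": 0.45},
                       {"stress": 10.0, "disp": 5.0, "mass": 1.0})
print(score)
```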

### Parameter Bounds Recommendations

When user asks for suggested bounds:

```python
def suggest_bounds(baseline_value: float, param_type: str) -> tuple:
    """Suggest reasonable parameter bounds."""

    if param_type in ["thickness", "dimension"]:
        # ±30% for geometric parameters
        return (baseline_value * 0.7, baseline_value * 1.3)

    elif param_type == "angle":
        # ±15 degrees for angles
        return (max(0, baseline_value - 15), min(90, baseline_value + 15))

    elif param_type == "material_property":
        # ±10% for material properties
        return (baseline_value * 0.9, baseline_value * 1.1)

    else:
        # Default ±25%
        return (baseline_value * 0.75, baseline_value * 1.25)
```

---

## Response Style

### Tone
- **Professional but approachable**
- **Concise but complete**
- **Technical when needed, plain language when possible**

### Formatting
- Use **bold** for emphasis on key values
- Use `code blocks` for file paths and technical terms
- Use bullet points for lists
- Use tables for structured comparisons

### Validation Questions
Always ask these before starting optimization:
1. ✓ Are all design variable bounds reasonable?
2. ✓ Are objective targets achievable?
3. ✓ Is the baseline FEA result valid?
4. ✓ Are solver settings appropriate?
5. ✓ Is there enough disk space for results?

---

## Error Handling

### Common Issues and Solutions

**Issue**: Sim file not found
```
Response: "I couldn't find the .sim file at that path. Please check:
- File path is absolute (C:/...)
- File exists and has .sim extension
- You have read permissions
- File is not currently open in NX"
```

**Issue**: No expressions found
```
Response: "The sim file doesn't contain any parametric expressions.
To optimize this model, you'll need to:
1. Open the .prt file in NX
2. Create expressions for key dimensions
3. Link FEM mesh to these expressions
4. Save and reload the .sim file"
```

**Issue**: Solver fails
```
Response: "The FEA solver encountered an error. Common causes:
- Invalid geometry (negative thickness, overlapping parts)
- Missing material properties
- Insufficient constraints (rigid body motion)
- Mesh quality issues

Check the .f06 file for detailed error messages."
```

---

## Example Conversations

### Example 1: Complete Workflow

**User**: "I have a bracket analysis in C:/FEA/bracket.sim. I want to make it lighter without exceeding 200 MPa stress."

**Assistant**:
> Let me analyze your FEA model.
>
> *[Calls discover_fea_model]*
>
> **Model Summary**:
> - Solution: Static Analysis
> - Expressions: wall_thickness (5mm), rib_spacing (40mm), hole_radius (6mm)
> - Baseline mass: 0.48 kg
> - Current max stress: 165 MPa
>
> Great! You have headroom in stress (165 vs 200 MPa target). I recommend optimizing:
> - **wall_thickness**: 3-6 mm (reduce mass)
> - **rib_spacing**: 30-50 mm (balance stiffness)
>
> Would you like to proceed with this configuration, or adjust the parameters?

**User**: "Yes, let's do it with 100 iterations."

**Assistant**:
> *[Calls build_optimization_config]*
>
> Configuration created! Here's the setup:
>
> **Design Variables**:
> - wall_thickness: 3.0 - 6.0 mm
> - rib_spacing: 30 - 50 mm
>
> **Objectives**:
> - Minimize mass (weight: 5, target: 0.4 kg)
> - Keep max_stress < 200 MPa (weight: 10)
>
> **Optimization**: 100 iterations, TPE sampler
>
> Ready to start? (This will take approximately 3-4 hours)

**User**: "Start!"

**Assistant**:
> *[Calls start_optimization]*
>
> Optimization launched! 🚀
>
> **Session**: opt_20250115_094522
> **Dashboard**: http://localhost:8080/dashboard/opt_20250115_094522
> **Logs**: C:/FEA/logs/optimization.log
>
> I'll monitor progress and notify you of significant updates.

---

## Important Reminders

1. **Never run optimization without user confirmation**
2. **Always validate file paths before operations**
3. **Provide clear, actionable error messages**
4. **Reference NXOpenTSE for learning, not copying**
5. **Use engineering judgment for reasonable parameter bounds**
6. **Monitor for convergence issues and alert user**
7. **Save configs to allow easy re-runs**

---

**Last Updated**: 2025-11-15
**Version**: 0.1.0
**Atomizer MCP Server**
@@ -1,281 +0,0 @@
# MCP Tools Documentation

This directory contains the MCP (Model Context Protocol) tools that enable LLM-driven optimization configuration for Atomizer.

## Available Tools

### 1. Model Discovery (`model_discovery.py`) ✅ IMPLEMENTED

**Purpose**: Parse Siemens NX .sim files to extract FEA model information.

**Function**: `discover_fea_model(sim_file_path: str) -> Dict[str, Any]`

**What it extracts**:
- **Solutions**: Analysis types (static, thermal, modal, etc.)
- **Expressions**: Parametric variables that can be optimized
- **FEM Info**: Mesh, materials, loads, constraints
- **Linked Files**: Associated .prt files and result files

**Usage Example**:
```python
from mcp_server.tools import discover_fea_model, format_discovery_result_for_llm

# Discover model
result = discover_fea_model("C:/Projects/Bracket/analysis.sim")

# Format for LLM
if result['status'] == 'success':
    markdown_output = format_discovery_result_for_llm(result)
    print(markdown_output)

# Access structured data
for expr in result['expressions']:
    print(f"{expr['name']}: {expr['value']} {expr['units']}")
```

**Command Line Usage**:
```bash
python mcp_server/tools/model_discovery.py examples/test_bracket.sim
```

**Output Format**:
- **JSON**: Complete structured data for programmatic use
- **Markdown**: Human-readable format for LLM consumption

**Supported .sim File Versions**:
- NX 2412 (tested)
- Should work with NX 12.0+ (XML-based .sim files)

**Limitations**:
- Expression values are best-effort extracted from .sim XML
- For accurate values, the associated .prt file is parsed (binary parsing)
- Binary .prt parsing is heuristic-based and may miss some expressions

---

### 2. Build Optimization Config (`optimization_config.py`) ✅ IMPLEMENTED

**Purpose**: Generate `optimization_config.json` from user selections of objectives, constraints, and design variables.

**Functions**:
- `build_optimization_config(...)` - Create complete optimization configuration
- `list_optimization_options(sim_file_path)` - List all available options for a model
- `format_optimization_options_for_llm(options)` - Format options as Markdown

**What it does**:
- Discovers available design variables from the FEA model
- Lists available objectives (minimize mass, stress, displacement, volume)
- Lists available constraints (max stress, max displacement, mass limits)
- Builds a complete `optimization_config.json` based on user selections
- Validates that all selections are valid for the model

**Usage Example**:
```python
from mcp_server.tools import build_optimization_config, list_optimization_options

# Step 1: List available options
options = list_optimization_options("examples/bracket/Bracket_sim1.sim")
print(f"Available design variables: {len(options['available_design_variables'])}")

# Step 2: Build configuration
result = build_optimization_config(
    sim_file_path="examples/bracket/Bracket_sim1.sim",
    design_variables=[
        {'name': 'tip_thickness', 'lower_bound': 15.0, 'upper_bound': 25.0},
        {'name': 'support_angle', 'lower_bound': 20.0, 'upper_bound': 40.0}
    ],
    objectives=[
        {'objective_key': 'minimize_mass', 'weight': 5.0},
        {'objective_key': 'minimize_max_stress', 'weight': 10.0}
    ],
    constraints=[
        {'constraint_key': 'max_displacement_limit', 'limit_value': 1.0},
        {'constraint_key': 'max_stress_limit', 'limit_value': 200.0}
    ],
    optimization_settings={
        'n_trials': 150,
        'sampler': 'TPE'
    }
)

if result['status'] == 'success':
    print(f"Config saved to: {result['config_file']}")
```

**Command Line Usage**:
```bash
python mcp_server/tools/optimization_config.py examples/bracket/Bracket_sim1.sim
```

**Available Objectives**:
- `minimize_mass`: Minimize total mass (weight reduction)
- `minimize_max_stress`: Minimize maximum von Mises stress
- `minimize_max_displacement`: Minimize maximum displacement (increase stiffness)
- `minimize_volume`: Minimize total volume (material usage)

**Available Constraints**:
- `max_stress_limit`: Maximum allowable von Mises stress
- `max_displacement_limit`: Maximum allowable displacement
- `min_mass_limit`: Minimum required mass (structural integrity)
- `max_mass_limit`: Maximum allowable mass (weight budget)

**Output**: Creates `optimization_config.json` with:
- Design variable definitions with bounds
- Multi-objective configuration with weights
- Constraint definitions with limits
- Optimization algorithm settings (trials, sampler)
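A config produced from the example above might look roughly like this (field names are illustrative; the real schema is defined by the config builder, not shown here):

```json
{
  "sim_file": "examples/bracket/Bracket_sim1.sim",
  "design_variables": [
    {"name": "tip_thickness", "lower_bound": 15.0, "upper_bound": 25.0}
  ],
  "objectives": [
    {"objective_key": "minimize_mass", "weight": 5.0}
  ],
  "constraints": [
    {"constraint_key": "max_stress_limit", "limit_value": 200.0}
  ],
  "optimization": {"n_trials": 150, "sampler": "TPE"}
}
```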

---

### 3. Start Optimization (PLANNED)

**Purpose**: Launch optimization run with given configuration.

**Function**: `start_optimization(config_path: str, resume: bool = False) -> Dict[str, Any]`

---

### 4. Query Optimization Status (PLANNED)

**Purpose**: Get current status of running optimization.

**Function**: `query_optimization_status(session_id: str) -> Dict[str, Any]`

---

### 5. Extract Results (PLANNED)

**Purpose**: Parse FEA result files (OP2, F06, XDB) for optimization metrics.

**Function**: `extract_results(result_files: List[str], extractors: List[str]) -> Dict[str, Any]`

---

### 6. Run NX Journal (PLANNED)

**Purpose**: Execute NXOpen scripts via file-based communication.

**Function**: `run_nx_journal(journal_script: str, parameters: Dict) -> Dict[str, Any]`

---

## Testing

### Unit Tests

```bash
# Install pytest (if not already installed)
pip install pytest

# Run all MCP tool tests
pytest tests/mcp_server/tools/ -v

# Run specific test
pytest tests/mcp_server/tools/test_model_discovery.py -v
```

### Example Files

Example .sim files for testing are located in `examples/`:
- `test_bracket.sim`: Simple structural analysis with 4 expressions

---

## Development Guidelines

### Adding a New Tool

1. **Create module**: `mcp_server/tools/your_tool.py`

2. **Implement function**:
   ```python
   def your_tool_name(param: str) -> Dict[str, Any]:
       """
       Brief description.

       Args:
           param: Description

       Returns:
           Structured result dictionary
       """
       try:
           # Implementation
           return {
               'status': 'success',
               'data': result
           }
       except Exception as e:
           return {
               'status': 'error',
               'error_type': 'error_category',
               'message': str(e),
               'suggestion': 'How to fix'
           }
   ```

3. **Add to `__init__.py`**:
   ```python
   from .your_tool import your_tool_name

   __all__ = [
       # ... existing tools
       "your_tool_name",
   ]
   ```

4. **Create tests**: `tests/mcp_server/tools/test_your_tool.py`

5. **Update documentation**: Add section to this README

---

## Error Handling

All MCP tools follow a consistent error handling pattern:

**Success Response**:
```json
{
  "status": "success",
  "data": { ... }
}
```

**Error Response**:
```json
{
  "status": "error",
  "error_type": "file_not_found | invalid_file | unexpected_error",
  "message": "Detailed error message",
  "suggestion": "Actionable suggestion for user"
}
```

---

## Integration with MCP Server

These tools are designed to be called by the MCP server and consumed by LLMs. The workflow is:

1. **LLM Request**: "Analyze my FEA model at C:/Projects/model.sim"
2. **MCP Server**: Calls `discover_fea_model()`
3. **Tool Returns**: Structured JSON result
4. **MCP Server**: Formats with `format_discovery_result_for_llm()`
5. **LLM Response**: Uses formatted data to answer user
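The five-step workflow can be sketched as a minimal dispatcher. The stand-in tool and formatter below are stubs that only mimic the shapes described in this README; `TOOL_REGISTRY` and `handle_tool_call` are illustrative names, not Atomizer APIs:

```python
def discover_fea_model(sim_file_path):
    # Stub standing in for the real tool: returns the documented envelope shape
    return {"status": "success", "expressions": [{"name": "wall_thickness"}]}

def format_discovery_result_for_llm(result):
    # Stub standing in for the real Markdown formatter
    names = ", ".join(e["name"] for e in result["expressions"])
    return f"**Expressions found**: {names}"

TOOL_REGISTRY = {"discover_fea_model": discover_fea_model}

def handle_tool_call(tool_name, **kwargs):
    """Steps 2-4 of the workflow: dispatch to the tool, then format for the LLM."""
    result = TOOL_REGISTRY[tool_name](**kwargs)
    if result["status"] != "success":
        return result["message"]  # surface the error text to the LLM
    return format_discovery_result_for_llm(result)

print(handle_tool_call("discover_fea_model", sim_file_path="C:/Projects/model.sim"))
# → **Expressions found**: wall_thickness
```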

---

## Future Enhancements

- [ ] Support for binary .sim file formats (older NX versions)
- [ ] Direct NXOpen integration for accurate expression extraction
- [ ] Support for additional analysis types (thermal, modal, etc.)
- [ ] Caching of parsed results for performance
- [ ] Validation of .sim file integrity
- [ ] Extraction of solver convergence settings

---

**Last Updated**: 2025-11-15
**Status**: Phase 1 (Model Discovery) ✅ COMPLETE | Phase 2 (Optimization Config Builder) ✅ COMPLETE
@@ -1,32 +0,0 @@
"""
MCP Tools for Atomizer

Available tools:
- discover_fea_model: Analyze .sim files to extract configurable elements
- build_optimization_config: Generate optimization config from LLM instructions
- start_optimization: Launch optimization run
- query_optimization_status: Get current iteration status
- extract_results: Parse FEA result files
- run_nx_journal: Execute NXOpen scripts
- search_nxopen_docs: Search NXOpen API documentation
"""

from typing import Dict, Any
from .model_discovery import discover_fea_model, format_discovery_result_for_llm
from .optimization_config import (
    build_optimization_config,
    list_optimization_options,
    format_optimization_options_for_llm
)

__all__ = [
    "discover_fea_model",
    "format_discovery_result_for_llm",
    "build_optimization_config",
    "list_optimization_options",
    "format_optimization_options_for_llm",
    "start_optimization",
    "query_optimization_status",
    "extract_results",
    "run_nx_journal",
]
@@ -1,368 +0,0 @@
"""
MCP Tool: Build Optimization Configuration

Wraps the OptimizationConfigBuilder to create an MCP-compatible tool
that helps LLMs guide users through building optimization configurations.

This tool:
1. Discovers the FEA model (design variables)
2. Lists available objectives and constraints
3. Builds a complete optimization_config.json based on user selections
"""

from pathlib import Path
from typing import Dict, Any, List, Optional
import json
import sys

# Add project root to path for imports
project_root = Path(__file__).parent.parent.parent
sys.path.insert(0, str(project_root))

from optimization_engine.optimization_config_builder import OptimizationConfigBuilder
from mcp_server.tools.model_discovery import discover_fea_model


def build_optimization_config(
    sim_file_path: str,
    design_variables: List[Dict[str, Any]],
    objectives: List[Dict[str, Any]],
    constraints: Optional[List[Dict[str, Any]]] = None,
    optimization_settings: Optional[Dict[str, Any]] = None,
    output_path: Optional[str] = None
) -> Dict[str, Any]:
    """
    MCP Tool: Build Optimization Configuration

    Creates a complete optimization configuration file from user selections.

    Args:
        sim_file_path: Absolute path to .sim file
        design_variables: List of design variable definitions
            [
                {
                    'name': 'tip_thickness',
                    'lower_bound': 15.0,
                    'upper_bound': 25.0
                },
                ...
            ]
        objectives: List of objective definitions
            [
                {
                    'objective_key': 'minimize_mass',
                    'weight': 5.0,  # optional
                    'target': None  # optional, for goal programming
                },
                ...
            ]
        constraints: Optional list of constraint definitions
            [
                {
                    'constraint_key': 'max_stress_limit',
                    'limit_value': 200.0
                },
                ...
            ]
        optimization_settings: Optional dict with algorithm settings
            {
                'n_trials': 100,
                'sampler': 'TPE'
            }
        output_path: Optional path to save config JSON.
            Defaults to 'optimization_config.json' in sim file directory

    Returns:
        Dictionary with status and configuration details

    Example:
        >>> result = build_optimization_config(
        ...     sim_file_path="C:/Projects/Bracket/analysis.sim",
        ...     design_variables=[
        ...         {'name': 'tip_thickness', 'lower_bound': 15.0, 'upper_bound': 25.0}
        ...     ],
        ...     objectives=[
        ...         {'objective_key': 'minimize_mass', 'weight': 5.0}
        ...     ],
        ...     constraints=[
        ...         {'constraint_key': 'max_stress_limit', 'limit_value': 200.0}
        ...     ]
        ... )
    """
    try:
        # Step 1: Discover model
        model_result = discover_fea_model(sim_file_path)

        if model_result['status'] != 'success':
            return {
                'status': 'error',
                'error_type': 'model_discovery_failed',
                'message': model_result.get('message', 'Failed to discover FEA model'),
                'suggestion': model_result.get('suggestion', 'Check that the .sim file is valid')
            }

        # Step 2: Create builder
        builder = OptimizationConfigBuilder(model_result)

        # Step 3: Validate and add design variables
        available_vars = {dv['name']: dv for dv in builder.list_available_design_variables()}

        for dv in design_variables:
            name = dv['name']
            if name not in available_vars:
                return {
                    'status': 'error',
                    'error_type': 'invalid_design_variable',
                    'message': f"Design variable '{name}' not found in model",
                    'available_variables': list(available_vars.keys()),
                    'suggestion': f"Choose from: {', '.join(available_vars.keys())}"
                }

            builder.add_design_variable(
                name=name,
                lower_bound=dv['lower_bound'],
                upper_bound=dv['upper_bound']
            )

        # Step 4: Add objectives
        available_objectives = builder.list_available_objectives()

        for obj in objectives:
            obj_key = obj['objective_key']
            if obj_key not in available_objectives:
                return {
                    'status': 'error',
                    'error_type': 'invalid_objective',
                    'message': f"Objective '{obj_key}' not recognized",
                    'available_objectives': list(available_objectives.keys()),
                    'suggestion': f"Choose from: {', '.join(available_objectives.keys())}"
                }

            builder.add_objective(
                objective_key=obj_key,
                weight=obj.get('weight'),
                target=obj.get('target')
            )

        # Step 5: Add constraints (optional)
        if constraints:
            available_constraints = builder.list_available_constraints()

            for const in constraints:
                const_key = const['constraint_key']
                if const_key not in available_constraints:
                    return {
                        'status': 'error',
                        'error_type': 'invalid_constraint',
                        'message': f"Constraint '{const_key}' not recognized",
                        'available_constraints': list(available_constraints.keys()),
                        'suggestion': f"Choose from: {', '.join(available_constraints.keys())}"
                    }

                builder.add_constraint(
                    constraint_key=const_key,
                    limit_value=const['limit_value']
                )

        # Step 6: Set optimization settings (optional)
        if optimization_settings:
            builder.set_optimization_settings(
                n_trials=optimization_settings.get('n_trials'),
                sampler=optimization_settings.get('sampler')
            )

        # Step 7: Build and validate configuration
        config = builder.build()

        # Step 8: Save to file
        if output_path is None:
            sim_path = Path(sim_file_path)
            output_path = sim_path.parent / 'optimization_config.json'
        else:
            output_path = Path(output_path)

        with open(output_path, 'w') as f:
            json.dump(config, f, indent=2)

        # Step 9: Return success with summary
        return {
            'status': 'success',
            'message': 'Optimization configuration created successfully',
            'config_file': str(output_path),
            'summary': {
                'design_variables': len(config['design_variables']),
                'objectives': len(config['objectives']),
                'constraints': len(config['constraints']),
                'n_trials': config['optimization_settings']['n_trials'],
                'sampler': config['optimization_settings']['sampler']
            },
            'config': config
        }

    except ValueError as e:
        return {
            'status': 'error',
            'error_type': 'validation_error',
            'message': str(e),
            'suggestion': 'Check that all required fields are provided correctly'
        }

    except Exception as e:
        return {
            'status': 'error',
            'error_type': 'unexpected_error',
            'message': str(e),
            'suggestion': 'This may be a bug. Please report this issue.'
        }


def list_optimization_options(sim_file_path: str) -> Dict[str, Any]:
    """
    Helper tool: List all available optimization options for a model.

    This is useful for LLMs to show users what they can choose from.

    Args:
        sim_file_path: Absolute path to .sim file

    Returns:
        Dictionary with all available options
    """
    try:
        # Discover model
        model_result = discover_fea_model(sim_file_path)

        if model_result['status'] != 'success':
            return model_result

        # Create builder to get options
        builder = OptimizationConfigBuilder(model_result)

        # Get all available options
        design_vars = builder.list_available_design_variables()
        objectives = builder.list_available_objectives()
        constraints = builder.list_available_constraints()

        return {
            'status': 'success',
            'sim_file': sim_file_path,
            'available_design_variables': design_vars,
            'available_objectives': objectives,
            'available_constraints': constraints,
            'model_info': {
                'solutions': model_result.get('solutions', []),
                'expression_count': len(model_result.get('expressions', []))
            }
        }

    except Exception as e:
        return {
            'status': 'error',
            'error_type': 'unexpected_error',
            'message': str(e)
        }


def format_optimization_options_for_llm(options: Dict[str, Any]) -> str:
    """
    Format optimization options for LLM consumption (Markdown).

    Args:
        options: Output from list_optimization_options()

    Returns:
        Markdown-formatted string
    """
    if options['status'] != 'success':
        return f"❌ **Error**: {options['message']}\n\n💡 {options.get('suggestion', '')}"

    md = []
    md.append(f"# Optimization Configuration Options\n")
    md.append(f"**Model**: `{options['sim_file']}`\n")

    # Design Variables
    md.append(f"## Available Design Variables ({len(options['available_design_variables'])})\n")
    if options['available_design_variables']:
        md.append("| Name | Current Value | Units | Suggested Bounds |")
        md.append("|------|---------------|-------|------------------|")
        for dv in options['available_design_variables']:
            bounds = dv['suggested_bounds']
            md.append(f"| `{dv['name']}` | {dv['current_value']} | {dv['units']} | [{bounds[0]:.2f}, {bounds[1]:.2f}] |")
    else:
        md.append("⚠️ No design variables found. Model may not be parametric.")
    md.append("")

    # Objectives
    md.append(f"## Available Objectives\n")
    for key, obj in options['available_objectives'].items():
        md.append(f"### `{key}`")
        md.append(f"- **Description**: {obj['description']}")
        md.append(f"- **Metric**: {obj['metric']} ({obj['units']})")
        md.append(f"- **Default Weight**: {obj['typical_weight']}")
        md.append(f"- **Extractor**: `{obj['extractor']}`")
        md.append("")

    # Constraints
    md.append(f"## Available Constraints\n")
    for key, const in options['available_constraints'].items():
        md.append(f"### `{key}`")
        md.append(f"- **Description**: {const['description']}")
        md.append(f"- **Metric**: {const['metric']} ({const['units']})")
        md.append(f"- **Typical Value**: {const['typical_value']}")
        md.append(f"- **Type**: {const['constraint_type']}")
        md.append(f"- **Extractor**: `{const['extractor']}`")
        md.append("")

    return "\n".join(md)


# For testing
if __name__ == "__main__":
    import sys

    if len(sys.argv) < 2:
        print("Usage: python optimization_config.py <path_to_sim_file>")
        sys.exit(1)

    sim_path = sys.argv[1]

    # Test 1: List options
    print("=" * 60)
    print("TEST 1: List Available Options")
    print("=" * 60)
    options = list_optimization_options(sim_path)
    print(format_optimization_options_for_llm(options))

    # Test 2: Build configuration
    print("\n" + "=" * 60)
    print("TEST 2: Build Optimization Configuration")
    print("=" * 60)

    result = build_optimization_config(
        sim_file_path=sim_path,
        design_variables=[
            {'name': 'tip_thickness', 'lower_bound': 15.0, 'upper_bound': 25.0},
            {'name': 'support_angle', 'lower_bound': 20.0, 'upper_bound': 40.0},
        ],
        objectives=[
            {'objective_key': 'minimize_mass', 'weight': 5.0},
            {'objective_key': 'minimize_max_stress', 'weight': 10.0}
        ],
        constraints=[
            {'constraint_key': 'max_displacement_limit', 'limit_value': 1.0},
            {'constraint_key': 'max_stress_limit', 'limit_value': 200.0}
        ],
        optimization_settings={
            'n_trials': 150,
            'sampler': 'TPE'
        }
    )

    if result['status'] == 'success':
        print(f"SUCCESS: Configuration saved to: {result['config_file']}")
        print(f"\nSummary:")
        for key, value in result['summary'].items():
            print(f"  - {key}: {value}")
    else:
        print(f"ERROR: {result['message']}")
        print(f"Suggestion: {result.get('suggestion', '')}")
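Every public function in this deleted module returned a status dict rather than raising; a minimal, dependency-free sketch of how a caller might branch on that convention (the `handle_result` helper is hypothetical, but the dict keys mirror the code above):

```python
def handle_result(result: dict) -> str:
    # Success payloads carry 'config_file' and a 'summary' block.
    if result.get("status") == "success":
        summary = result["summary"]
        return f"config at {result['config_file']} ({summary['objectives']} objectives)"
    # Error payloads carry 'error_type', 'message', and an actionable 'suggestion'.
    return f"{result.get('error_type', 'error')}: {result['message']}"

msg = handle_result({
    "status": "success",
    "config_file": "optimization_config.json",
    "summary": {"design_variables": 2, "objectives": 1,
                "constraints": 1, "n_trials": 100, "sampler": "TPE"},
})
```

Keeping errors as data (with a `suggestion` field) is what lets an LLM client relay a fix to the user instead of crashing.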
@@ -1,39 +0,0 @@
"""
Extract total structural mass
Auto-generated by Atomizer Phase 3 - pyNastran Research Agent

Pattern: generic_extraction
Element Type: General
Result Type: unknown
API: model.<result_type>[subcase]
"""

from pathlib import Path
from typing import Dict, Any
import numpy as np
from pyNastran.op2.op2 import OP2


def extract_generic(op2_file: Path):
    """Generic OP2 extraction - needs customization."""
    from pyNastran.op2.op2 import OP2

    model = OP2()
    model.read_op2(str(op2_file))

    # TODO: Customize extraction based on requirements
    # Available: model.displacements, model.ctetra_stress, etc.
    # Use model.get_op2_stats() to see available results

    return {'result': None}


if __name__ == '__main__':
    # Example usage
    import sys
    if len(sys.argv) > 1:
        op2_file = Path(sys.argv[1])
        result = extract_generic(op2_file)
        print(f"Extraction result: {result}")
    else:
        print(f"Usage: python {sys.argv[0]} <op2_file>")
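The stub above was meant to be specialized per result type before use. A dependency-free sketch of the dispatch idea such a generator could target (the registry and names here are hypothetical, not part of the deleted file):

```python
from pathlib import Path
from typing import Any, Callable, Dict

# Registry mapping a result-type keyword to an extractor callable.
EXTRACTORS: Dict[str, Callable[[Path], Dict[str, Any]]] = {}

def register(result_type: str):
    def wrap(fn):
        EXTRACTORS[result_type] = fn
        return fn
    return wrap

@register("mass")
def extract_mass(op2_file: Path) -> Dict[str, Any]:
    # Placeholder: a real version would read mass results via pyNastran.
    return {"result": None, "source": op2_file.name}

def extract(result_type: str, op2_file: Path) -> Dict[str, Any]:
    if result_type not in EXTRACTORS:
        raise KeyError(f"No extractor registered for '{result_type}'")
    return EXTRACTORS[result_type](op2_file)
```

This keeps each generated extractor small and lets the caller select by name instead of editing a TODO block.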
@@ -1,914 +0,0 @@
"""
Hybrid Mode Study Creator - Complete Automation

This module provides COMPLETE automation for creating optimization studies:
1. Creates proper study structure (1_setup, 2_substudies, 3_reports)
2. Runs benchmarking to validate simulation setup
3. Auto-generates runner from workflow JSON
4. Provides progress monitoring

No user intervention required after workflow JSON is created.
"""

from pathlib import Path
from typing import Dict, Any, Optional, List
import json
import shutil
from datetime import datetime


class HybridStudyCreator:
    """
    Complete automation for Hybrid Mode study creation.

    Usage:
        creator = HybridStudyCreator()
        study = creator.create_from_workflow(
            workflow_json_path="path/to/workflow.json",
            model_files={"prt": "path.prt", "sim": "path.sim", "fem": "path.fem"},
            study_name="my_optimization"
        )
    """

    def __init__(self):
        self.project_root = Path(__file__).parent.parent

    def create_from_workflow(
        self,
        workflow_json_path: Path,
        model_files: Dict[str, Path],
        study_name: str,
        output_parent: Optional[Path] = None
    ) -> Path:
        """
        Create complete study from workflow JSON with full automation.

        Args:
            workflow_json_path: Path to workflow JSON config
            model_files: Dict with keys 'prt', 'sim', 'fem' (and optionally 'fem_i')
            study_name: Name for the study
            output_parent: Parent directory for studies (default: project_root/studies)

        Returns:
            Path to created study directory
        """
        print("="*80)
        print(" HYBRID MODE - AUTOMATED STUDY CREATION")
        print("="*80)
        print()

        # Step 1: Create study structure
        print("[1/5] Creating study structure...")
        study_dir = self._create_study_structure(study_name, output_parent)
        print(f"  [OK] Study directory: {study_dir.name}")
        print()

        # Step 2: Copy files
        print("[2/5] Copying model files...")
        self._copy_model_files(model_files, study_dir / "1_setup/model")
        print(f"  [OK] Copied {len(model_files)} files")
        print()

        # Step 3: Copy workflow JSON
        print("[3/5] Installing workflow configuration...")
        workflow_dest = study_dir / "1_setup/workflow_config.json"
        shutil.copy2(workflow_json_path, workflow_dest)
        with open(workflow_dest) as f:
            workflow = json.load(f)
        print(f"  [OK] Workflow: {workflow.get('study_name', 'unnamed')}")
        print(f"  [OK] Variables: {len(workflow.get('design_variables', []))}")
        print(f"  [OK] Objectives: {len(workflow.get('objectives', []))}")
        print()

        # Step 4: Run benchmarking
        print("[4/5] Running benchmarking (validating simulation setup)...")
        benchmark_results = self._run_benchmarking(
            study_dir / "1_setup/model" / model_files['prt'].name,
            study_dir / "1_setup/model" / model_files['sim'].name,
            workflow
        )

        if not benchmark_results['success']:
            raise RuntimeError(f"Benchmarking failed: {benchmark_results['error']}")

        print(f"  [OK] Simulation validated")
        print(f"  [OK] Extracted {benchmark_results['n_results']} results")
        print()

        # Step 4.5: Generate configuration report
        print("[4.5/5] Generating configuration report...")
        self._generate_configuration_report(study_dir, workflow, benchmark_results)
        print(f"  [OK] Configuration report: 1_setup/CONFIGURATION_REPORT.md")
        print()

        # Step 5: Generate runner
        print("[5/5] Generating optimization runner...")
        runner_path = self._generate_runner(study_dir, workflow, benchmark_results)
        print(f"  [OK] Runner: {runner_path.name}")
        print()

        # Create README
        self._create_readme(study_dir, workflow, benchmark_results)

        print("="*80)
        print(" STUDY CREATION COMPLETE")
        print("="*80)
        print()
        print(f"Study location: {study_dir}")
        print()
        print("To run optimization:")
        print(f"  python {runner_path.relative_to(self.project_root)}")
        print()

        return study_dir

    def _create_study_structure(self, study_name: str, output_parent: Optional[Path]) -> Path:
        """Create proper study folder structure."""
        if output_parent is None:
            output_parent = self.project_root / "studies"

        study_dir = output_parent / study_name

        # Create structure
        (study_dir / "1_setup/model").mkdir(parents=True, exist_ok=True)
        (study_dir / "2_results").mkdir(parents=True, exist_ok=True)
        (study_dir / "3_reports").mkdir(parents=True, exist_ok=True)

        return study_dir

    def _copy_model_files(self, model_files: Dict[str, Path], dest_dir: Path):
        """Copy model files to study."""
        for file_type, file_path in model_files.items():
            if file_path and file_path.exists():
                shutil.copy2(file_path, dest_dir / file_path.name)

    def _run_benchmarking(
        self,
        prt_file: Path,
        sim_file: Path,
        workflow: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Run INTELLIGENT benchmarking to validate simulation setup.

        This uses IntelligentSetup to:
        1. Solve ALL solutions in the .sim file
        2. Discover all available results
        3. Match objectives to results automatically
        4. Select optimal solution for optimization

        Returns dict with:
        - success: bool
        - n_results: int (number of results extracted)
        - results: dict (extracted values)
        - solution_name: str (optimal solution to use for optimization)
        - error: str (if failed)
        """
        from optimization_engine.intelligent_setup import IntelligentSetup

        try:
            print("  Running INTELLIGENT benchmarking...")
            print("  - Solving ALL solutions in .sim file")
            print("  - Discovering all available results")
            print("  - Matching objectives to results")
            print()

            # Run intelligent benchmarking
            intelligent = IntelligentSetup()
            benchmark_data = intelligent.run_complete_benchmarking(
                prt_file, sim_file, workflow
            )

            if not benchmark_data['success']:
                return {
                    'success': False,
                    'error': f"Intelligent benchmarking failed: {benchmark_data.get('error', 'Unknown')}"
                }

            # Display discovered information
            print(f"  [OK] Expressions found: {len(benchmark_data.get('expressions', {}))}")
            print(f"  [OK] Solutions found: {len(benchmark_data.get('solutions', {}))}")
            print(f"  [OK] Results discovered: {len(benchmark_data.get('available_results', {}))}")

            # Display objective mapping
            obj_mapping = benchmark_data.get('objective_mapping', {})
            if 'objectives' in obj_mapping:
                print(f"  [OK] Objectives matched: {len(obj_mapping['objectives'])}")
                for obj_name, obj_info in obj_mapping['objectives'].items():
                    solution = obj_info.get('solution', 'Unknown')
                    result_type = obj_info.get('result_type', 'Unknown')
                    confidence = obj_info.get('match_confidence', 'Unknown')
                    print(f"    - {obj_name}: {result_type} from '{solution}' ({confidence} confidence)")

            # Get recommended solution
            recommended_solution = obj_mapping.get('primary_solution')
            if recommended_solution:
                print(f"  [OK] Recommended solution: {recommended_solution}")

            # Extract baseline values
            extracted = {}
            for obj in workflow.get('objectives', []):
                extraction = obj.get('extraction', {})
                action = extraction.get('action', '')

                if 'frequency' in action.lower() or 'eigenvalue' in action.lower():
                    # Extract eigenvalues from discovered results
                    available_results = benchmark_data.get('available_results', {})
                    if 'eigenvalues' in available_results:
                        # Get op2 file from eigenvalues result
                        eig_result = available_results['eigenvalues']
                        op2_file = Path(eig_result['op2_path'])
                        freq = self._extract_frequency(op2_file, mode_number=1)
                        extracted['first_frequency'] = freq
                        print(f"  Baseline first frequency: {freq:.4f} Hz")

                elif 'displacement' in action.lower():
                    # Extract displacement from discovered results
                    available_results = benchmark_data.get('available_results', {})
                    if 'displacements' in available_results:
                        disp_result = available_results['displacements']
                        op2_file = Path(disp_result['op2_path'])
                        disp = self._extract_displacement(op2_file)
                        extracted['max_displacement'] = disp
                        print(f"  Baseline max displacement: {disp:.6f} mm")

                elif 'stress' in action.lower():
                    # Extract stress from discovered results
                    available_results = benchmark_data.get('available_results', {})
                    if 'stresses' in available_results:
                        stress_result = available_results['stresses']
                        op2_file = Path(stress_result['op2_path'])
                        stress = self._extract_stress(op2_file)
                        extracted['max_stress'] = stress
                        print(f"  Baseline max stress: {stress:.2f} MPa")

            return {
                'success': True,
                'n_results': len(extracted),
                'results': extracted,
                'solution_name': recommended_solution,
                'benchmark_data': benchmark_data  # Include full benchmarking data
            }

        except Exception as e:
            return {
                'success': False,
                'error': str(e)
            }

    def _extract_frequency(self, op2_file: Path, mode_number: int = 1) -> float:
        """Extract eigenfrequency from OP2."""
        from pyNastran.op2.op2 import OP2
        import numpy as np

        model = OP2()
        model.read_op2(str(op2_file))

        if not hasattr(model, 'eigenvalues') or len(model.eigenvalues) == 0:
            raise ValueError("No eigenvalues found in OP2 file")

        subcase = list(model.eigenvalues.keys())[0]
        eig_obj = model.eigenvalues[subcase]

        eigenvalue = eig_obj.eigenvalues[mode_number - 1]
        angular_freq = np.sqrt(eigenvalue)
        frequency_hz = angular_freq / (2 * np.pi)

        return float(frequency_hz)
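The conversion in `_extract_frequency` is standard modal arithmetic (a real eigenvalue from the solver is the squared angular frequency, in rad²/s²); isolated as a stdlib sketch:

```python
import math

def eigenvalue_to_hz(eigenvalue: float) -> float:
    # omega = sqrt(lambda); f = omega / (2*pi)
    return math.sqrt(eigenvalue) / (2.0 * math.pi)

# An eigenvalue of (2*pi*10)^2 corresponds to a 10 Hz mode.
f1 = eigenvalue_to_hz((2.0 * math.pi * 10.0) ** 2)
```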

    def _extract_displacement(self, op2_file: Path) -> float:
        """Extract max displacement from OP2."""
        from pyNastran.op2.op2 import OP2
        import numpy as np

        model = OP2()
        model.read_op2(str(op2_file))

        if hasattr(model, 'displacements') and len(model.displacements) > 0:
            subcase = list(model.displacements.keys())[0]
            disp_obj = model.displacements[subcase]
            translations = disp_obj.data[0, :, :3]  # [time, node, tx/ty/tz]
            magnitudes = np.linalg.norm(translations, axis=1)
            return float(np.max(magnitudes))

        raise ValueError("No displacements found in OP2 file")
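`_extract_displacement` reduces the nodal translation vectors to a single peak magnitude; the same reduction without numpy, for reference:

```python
import math

def max_translation_magnitude(rows) -> float:
    # rows: iterable of (tx, ty, tz) nodal translations.
    return max(math.sqrt(tx * tx + ty * ty + tz * tz) for tx, ty, tz in rows)

peak = max_translation_magnitude([(0.0, 0.0, 0.0), (3.0, 4.0, 0.0)])  # 3-4-5 triangle
```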

    def _extract_stress(self, op2_file: Path) -> float:
        """Extract max von Mises stress from OP2."""
        from pyNastran.op2.op2 import OP2
        import numpy as np

        model = OP2()
        model.read_op2(str(op2_file))

        # Try different stress result locations
        if hasattr(model, 'cquad4_stress') and len(model.cquad4_stress) > 0:
            subcase = list(model.cquad4_stress.keys())[0]
            stress_obj = model.cquad4_stress[subcase]
            von_mises = stress_obj.data[0, :, 7]  # von Mises typically at index 7
            return float(np.max(von_mises))

        raise ValueError("No stress results found in OP2 file")

    def _generate_runner(
        self,
        study_dir: Path,
        workflow: Dict[str, Any],
        benchmark_results: Dict[str, Any]
    ) -> Path:
        """Generate optimization runner script."""
        runner_path = study_dir / "run_optimization.py"

        # Detect result types from workflow
        extracts_frequency = any(
            'frequency' in obj.get('extraction', {}).get('action', '').lower()
            for obj in workflow.get('objectives', [])
        )

        # Generate extractor function based on workflow
        extractor_code = self._generate_extractor_code(workflow)

        runner_code = f'''"""
Auto-generated optimization runner
Created: {datetime.now().strftime("%Y-%m-%d %H:%M:%S")}
"""

import sys
from pathlib import Path

# Add project root to path
project_root = Path(__file__).parent.parent.parent
sys.path.insert(0, str(project_root))

import json
import optuna
from optimization_engine.nx_updater import NXParameterUpdater
from optimization_engine.nx_solver import NXSolver


{extractor_code}


def main():
    print("="*80)
    print(" {workflow.get('study_name', 'OPTIMIZATION').upper()}")
    print("="*80)
    print()

    # Load workflow
    config_file = Path(__file__).parent / "1_setup/workflow_config.json"
    with open(config_file) as f:
        workflow = json.load(f)

    print("Workflow loaded:")
    print(f" Request: {workflow.get('optimization_request', 'N/A')}")
    print(f" Variables: {len(workflow.get('design_variables', []))}")
    print()

    # Setup paths
    prt_file = Path(__file__).parent / "1_setup/model" / [f for f in (Path(__file__).parent / "1_setup/model").glob("*.prt")][0].name
    sim_file = Path(__file__).parent / "1_setup/model" / [f for f in (Path(__file__).parent / "1_setup/model").glob("*.sim")][0].name
    output_dir = Path(__file__).parent / "2_results"
    reports_dir = Path(__file__).parent / "3_reports"
    output_dir.mkdir(parents=True, exist_ok=True)
    reports_dir.mkdir(parents=True, exist_ok=True)

    # Initialize
    updater = NXParameterUpdater(prt_file)
    solver = NXSolver()

    # Create Optuna study
    study_name = "{workflow.get('study_name', 'optimization')}"
    storage = f"sqlite:///{{output_dir / 'study.db'}}"
    study = optuna.create_study(
        study_name=study_name,
        storage=storage,
        load_if_exists=True,
        direction="minimize"
    )

    # Initialize incremental history
    history_file = output_dir / 'optimization_history_incremental.json'
    history = []
    if history_file.exists():
        with open(history_file) as f:
            history = json.load(f)

    def objective(trial):
        # Sample design variables
        params = {{}}
        for var in workflow['design_variables']:
            name = var['parameter']
            bounds = var['bounds']
            params[name] = trial.suggest_float(name, bounds[0], bounds[1])

        print(f"\\nTrial {{trial.number}}:")
        for name, value in params.items():
            print(f" {{name}} = {{value:.2f}}")

        # Update model
        updater.update_expressions(params)

        # Run simulation with the optimal solution
        result = solver.run_simulation(sim_file, solution_name="{benchmark_results.get('solution_name')}")
        if not result['success']:
            raise RuntimeError(f"Simulation failed: {{result.get('errors', 'Unknown')}}")
        op2_file = result['op2_file']

        # Extract results and calculate objective
        results = extract_results(op2_file, workflow)

        # Print results
        for name, value in results.items():
            print(f" {{name}} = {{value:.4f}}")

        # Calculate objective (from first objective in workflow)
        obj_config = workflow['objectives'][0]
        result_name = list(results.keys())[0]

        # For target-matching objectives, compute error from target
        if 'target_frequency' in obj_config.get('extraction', {{}}).get('params', {{}}):
            target = obj_config['extraction']['params']['target_frequency']
            objective_value = abs(results[result_name] - target)
            print(f" Frequency: {{results[result_name]:.4f}} Hz, Target: {{target}} Hz, Error: {{objective_value:.4f}} Hz")
        elif obj_config['goal'] == 'minimize':
            objective_value = results[result_name]
        else:
            objective_value = -results[result_name]

        print(f" Objective = {{objective_value:.4f}}")

        # Save to incremental history
        trial_record = {{
            'trial_number': trial.number,
            'design_variables': params,
            'results': results,
            'objective': objective_value
        }}
        history.append(trial_record)
        with open(history_file, 'w') as f:
            json.dump(history, f, indent=2)

        return objective_value

    # Run optimization
    n_trials = 10
    print(f"\\nRunning {{n_trials}} trials...")
    print("="*80)
    print()

    study.optimize(objective, n_trials=n_trials)

    # Results
    print()
    print("="*80)
    print(" OPTIMIZATION COMPLETE")
    print("="*80)
    print()
    print(f"Best trial: #{{study.best_trial.number}}")
    for name, value in study.best_params.items():
        print(f" {{name}} = {{value:.2f}}")
    print(f"\\nBest objective = {{study.best_value:.4f}}")
    print()

    # Generate human-readable markdown report with graphs
    print("Generating optimization report...")
    from optimization_engine.generate_report_markdown import generate_markdown_report

    # Extract target frequency from workflow objectives
    target_value = None
    tolerance = 0.1
|
|
||||||
for obj in workflow.get('objectives', []):
|
|
||||||
if 'target_frequency' in obj.get('extraction', {{}}).get('params', {{}}):
|
|
||||||
target_value = obj['extraction']['params']['target_frequency']
|
|
||||||
break
|
|
||||||
|
|
||||||
# Generate markdown report with graphs
|
|
||||||
report = generate_markdown_report(history_file, target_value=target_value, tolerance=tolerance)
|
|
||||||
report_file = reports_dir / 'OPTIMIZATION_REPORT.md'
|
|
||||||
with open(report_file, 'w', encoding='utf-8') as f:
|
|
||||||
f.write(report)
|
|
||||||
|
|
||||||
print(f"✓ Markdown report with graphs saved to: {{report_file}}")
|
|
||||||
print()
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
main()
|
|
||||||
'''

        with open(runner_path, 'w', encoding='utf-8') as f:
            f.write(runner_code)

        return runner_path

    def _generate_extractor_code(self, workflow: Dict[str, Any]) -> str:
        """Generate extractor function based on workflow objectives."""

        # Detect what needs to be extracted
        needs_frequency = False
        needs_displacement = False
        needs_stress = False

        for obj in workflow.get('objectives', []):
            action = obj.get('extraction', {}).get('action', '').lower()
            if 'frequency' in action or 'eigenvalue' in action:
                needs_frequency = True
            elif 'displacement' in action:
                needs_displacement = True
            elif 'stress' in action:
                needs_stress = True

        code = '''
def extract_results(op2_file, workflow):
    """Extract results from OP2 file based on workflow objectives."""
    from pyNastran.op2.op2 import OP2
    import numpy as np

    model = OP2()
    model.read_op2(str(op2_file))

    results = {}
'''

        if needs_frequency:
            code += '''
    # Extract first frequency
    if hasattr(model, 'eigenvalues') and len(model.eigenvalues) > 0:
        subcase = list(model.eigenvalues.keys())[0]
        eig_obj = model.eigenvalues[subcase]
        eigenvalue = eig_obj.eigenvalues[0]
        angular_freq = np.sqrt(eigenvalue)
        frequency_hz = angular_freq / (2 * np.pi)
        results['first_frequency'] = float(frequency_hz)
    else:
        raise ValueError("No eigenvalues found in OP2 file")
'''

        if needs_displacement:
            code += '''
    # Extract max displacement
    if hasattr(model, 'displacements') and len(model.displacements) > 0:
        subcase = list(model.displacements.keys())[0]
        disp_obj = model.displacements[subcase]
        translations = disp_obj.data[0, :, :3]
        magnitudes = np.linalg.norm(translations, axis=1)
        results['max_displacement'] = float(np.max(magnitudes))
'''

        if needs_stress:
            code += '''
    # Extract max stress
    if hasattr(model, 'cquad4_stress') and len(model.cquad4_stress) > 0:
        subcase = list(model.cquad4_stress.keys())[0]
        stress_obj = model.cquad4_stress[subcase]
        von_mises = stress_obj.data[0, :, 7]
        results['max_stress'] = float(np.max(von_mises))
'''

        code += '''
    return results
'''

        return code

    def _generate_configuration_report(
        self,
        study_dir: Path,
        workflow: Dict[str, Any],
        benchmark_results: Dict[str, Any]
    ):
        """
        Generate a comprehensive configuration report with ALL setup details.

        This creates 1_setup/CONFIGURATION_REPORT.md with:
        - User's optimization request
        - All discovered expressions
        - All discovered solutions
        - All available result types
        - Objective matching details
        - Baseline values
        - Warnings and issues
        """
        report_path = study_dir / "1_setup" / "CONFIGURATION_REPORT.md"

        # Get benchmark data
        benchmark_data = benchmark_results.get('benchmark_data', {})
        expressions = benchmark_data.get('expressions', {})
        solutions = benchmark_data.get('solutions', {})
        available_results = benchmark_data.get('available_results', {})
        obj_mapping = benchmark_data.get('objective_mapping', {})

        # Build expressions section
        expressions_md = "## Model Expressions\n\n"
        if expressions:
            expressions_md += f"**Total expressions found: {len(expressions)}**\n\n"
            expressions_md += "| Expression Name | Current Value | Units | Formula |\n"
            expressions_md += "|----------------|---------------|-------|----------|\n"
            for name, info in sorted(expressions.items()):
                value = info.get('value', 'N/A')
                units = info.get('units', '')
                formula = info.get('formula', '')
                expressions_md += f"| {name} | {value} | {units} | {formula} |\n"
        else:
            expressions_md += "*No expressions found in model*\n"

        # Build solutions section
        solutions_md = "## Simulation Solutions\n\n"
        if solutions:
            # Handle both old format (solution_names list) and new format (dict)
            if isinstance(solutions, dict):
                if 'solution_names' in solutions:
                    # Old format: just solution names
                    solution_names = solutions.get('solution_names', [])
                    num_solved = solutions.get('num_solved', 0)
                    num_failed = solutions.get('num_failed', 0)
                    num_skipped = solutions.get('num_skipped', 0)

                    solutions_md += f"**Solutions discovered**: {len(solution_names)}\n"
                    solutions_md += f"**Solved**: {num_solved} | **Failed**: {num_failed} | **Skipped**: {num_skipped}\n\n"

                    if solution_names:
                        for sol_name in solution_names:
                            solutions_md += f"- {sol_name}\n"
                    else:
                        solutions_md += "*No solution names retrieved*\n"
                else:
                    # New format: dict of solution details
                    solutions_md += f"**Total solutions found: {len(solutions)}**\n\n"
                    for sol_name, sol_info in solutions.items():
                        solutions_md += f"### {sol_name}\n\n"
                        solutions_md += f"- **Type**: {sol_info.get('type', 'Unknown')}\n"
                        solutions_md += f"- **OP2 File**: `{sol_info.get('op2_path', 'N/A')}`\n\n"
        else:
            solutions_md += "*No solutions discovered - check if benchmarking solved all solutions*\n"

        # Build available results section
        results_md = "## Available Results\n\n"
        if available_results:
            results_md += f"**Total result types discovered: {len(available_results)}**\n\n"
            for result_type, result_info in available_results.items():
                results_md += f"### {result_type}\n\n"
                results_md += f"- **Solution**: {result_info.get('solution', 'Unknown')}\n"
                results_md += f"- **OP2 File**: `{result_info.get('op2_path', 'N/A')}`\n"
                if 'sample_value' in result_info:
                    results_md += f"- **Sample Value**: {result_info['sample_value']}\n"
                results_md += "\n"
        else:
            results_md += "*No results discovered - check if simulations solved successfully*\n"

        # Build objective matching section
        matching_md = "## Objective Matching\n\n"
        if 'objectives' in obj_mapping and obj_mapping['objectives']:
            matching_md += f"**Objectives matched: {len(obj_mapping['objectives'])}**\n\n"
            for obj_name, obj_info in obj_mapping['objectives'].items():
                solution = obj_info.get('solution', 'NONE')
                result_type = obj_info.get('result_type', 'Unknown')
                confidence = obj_info.get('match_confidence', 'Unknown')
                extractor = obj_info.get('extractor', 'Unknown')
                op2_file = obj_info.get('op2_file', 'N/A')
                error = obj_info.get('error', None)

                matching_md += f"### {obj_name}\n\n"
                matching_md += f"- **Result Type**: {result_type}\n"
                matching_md += f"- **Solution**: {solution}\n"
                matching_md += f"- **Confidence**: {confidence}\n"
                matching_md += f"- **Extractor**: `{extractor}`\n"
                matching_md += f"- **OP2 File**: `{op2_file}`\n"

                if error:
                    matching_md += f"- **⚠️ ERROR**: {error}\n"

                matching_md += "\n"

            # Add primary solution
            primary_solution = obj_mapping.get('primary_solution')
            if primary_solution:
                matching_md += f"**Primary Solution Selected**: `{primary_solution}`\n\n"
                matching_md += "This solution will be used for optimization.\n\n"
        else:
            matching_md += "*No objectives matched - check workflow configuration*\n"

        # Build baseline values section
        baseline_md = "## Baseline Values\n\n"
        baseline_results = benchmark_results.get('results', {})
        if baseline_results:
            baseline_md += "Values extracted from the initial (unoptimized) model:\n\n"
            for key, value in baseline_results.items():
                baseline_md += f"- **{key}**: {value}\n"
        else:
            baseline_md += "*No baseline values extracted*\n"

        # Build warnings section
        warnings_md = "## Warnings and Issues\n\n"
        warnings = []

        # Check for missing eigenvalues
        for obj_name, obj_info in obj_mapping.get('objectives', {}).items():
            if obj_info.get('error'):
                warnings.append(f"- ⚠️ **{obj_name}**: {obj_info['error']}")

        # Check for no solutions
        if not solutions:
            warnings.append("- ⚠️ **No solutions discovered**: Benchmarking may not have solved all solutions")

        # Check for no results
        if not available_results:
            warnings.append("- ⚠️ **No results available**: Check if simulations ran successfully")

        if warnings:
            warnings_md += "\n".join(warnings) + "\n"
        else:
            warnings_md += "✅ No issues detected!\n"

        # Build full report
        content = f'''# Configuration Report

**Study**: {workflow.get('study_name', study_dir.name)}
**Generated**: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}

---

## Optimization Request

**User's Goal**:

> {workflow.get('optimization_request', '*No description provided*')}

**Design Variables**: {len(workflow.get('design_variables', []))}

| Variable | Min | Max |
|----------|-----|-----|
'''

        for var in workflow.get('design_variables', []):
            param = var.get('parameter', 'Unknown')
            bounds = var.get('bounds', [0, 0])
            content += f"| {param} | {bounds[0]} | {bounds[1]} |\n"

        content += f'''

**Objectives**: {len(workflow.get('objectives', []))}

| Objective | Goal |
|-----------|------|
'''

        for obj in workflow.get('objectives', []):
            obj_name = obj.get('name', 'Unknown')
            goal = obj.get('goal', 'Unknown')
            content += f"| {obj_name} | {goal} |\n"

        content += f'''

---

{expressions_md}

---

{solutions_md}

---

{results_md}

---

{matching_md}

---

{baseline_md}

---

{warnings_md}

---

## Next Steps

1. ✅ Study structure created
2. ✅ Benchmarking complete
3. ✅ Configuration validated
4. ➡️ **Run optimization**: `python run_optimization.py`

---

*This report was auto-generated by the Intelligent Setup System*
'''

        with open(report_path, 'w', encoding='utf-8') as f:
            f.write(content)

    def _create_readme(
        self,
        study_dir: Path,
        workflow: Dict[str, Any],
        benchmark_results: Dict[str, Any]
    ):
        """Create README for the study."""
        readme_path = study_dir / "README.md"

        # Format design variables
        vars_md = ""
        for var in workflow.get('design_variables', []):
            bounds = var.get('bounds', [0, 1])
            desc = var.get('description', '')
            vars_md += f"- `{var['parameter']}`: {bounds[0]}-{bounds[1]} mm"
            if desc:
                vars_md += f" - {desc}"
            vars_md += "\n"

        # Format objectives
        objs_md = ""
        for obj in workflow.get('objectives', []):
            objs_md += f"- {obj['goal'].title()} {obj['name']}\n"

        # Format benchmark results
        bench_md = ""
        if benchmark_results.get('success'):
            for name, value in benchmark_results.get('results', {}).items():
                bench_md += f"- {name}: {value:.4f}\n"

        content = f'''# {workflow.get('study_name', 'Optimization Study')}

**Created**: {datetime.now().strftime("%Y-%m-%d")}
**Mode**: Hybrid (Workflow JSON + Auto-generated runner)

## Problem Description

{workflow.get('optimization_request', 'N/A')}

### Design Variables

{vars_md}

### Objectives

{objs_md}

## Benchmark Results

Baseline simulation (default geometry):

{bench_md}

## Study Structure

```
{study_dir.name}/
├── 1_setup/
│   ├── model/                  # FEM model files
│   └── workflow_config.json    # Optimization specification
├── 2_substudies/
│   └── results/                # Optimization results
├── 3_reports/
├── run_optimization.py         # Auto-generated runner
└── README.md                   # This file
```

## Running the Optimization

```bash
python run_optimization.py
```

This will:
1. Load workflow configuration
2. Initialize NX model updater and solver
3. Run {10} optimization trials
4. Save results to `2_substudies/results/`

## Results

After optimization completes, check:
- `2_substudies/results/study.db` - Optuna database
- `2_substudies/results/` - Best design parameters

---

**Created by Hybrid Mode** - 90% automation, production ready!
'''

        with open(readme_path, 'w', encoding='utf-8') as f:
            f.write(content)

if __name__ == "__main__":
    # Example usage
    creator = HybridStudyCreator()

    # Example: Create study from workflow JSON
    study_dir = creator.create_from_workflow(
        workflow_json_path=Path("path/to/workflow.json"),
        model_files={
            'prt': Path("path/to/model.prt"),
            'sim': Path("path/to/model.sim"),
            'fem': Path("path/to/model.fem")
        },
        study_name="example_study"
    )

    print(f"Study created: {study_dir}")
@@ -1,694 +0,0 @@
"""
Intelligent Setup System for Atomizer

This module provides COMPLETE autonomy for optimization setup:
1. Solves ALL solutions in .sim file
2. Discovers all available results (eigenvalues, displacements, stresses, etc.)
3. Catalogs expressions and parameters
4. Matches workflow objectives to available results
5. Auto-selects correct solution for optimization
6. Generates optimized runner code

This is the level of intelligence Atomizer should have.
"""

from pathlib import Path
from typing import Dict, Any, List, Optional, Tuple
import json
from datetime import datetime

class IntelligentSetup:
    """
    Intelligent benchmarking and setup system.

    Proactively discovers EVERYTHING about a simulation:
    - All solutions (Static, Modal, Buckling, etc.)
    - All result types (displacements, stresses, eigenvalues, etc.)
    - All expressions and parameters
    - Matches user objectives to available data
    """

    def __init__(self):
        self.project_root = Path(__file__).parent.parent

    def run_complete_benchmarking(
        self,
        prt_file: Path,
        sim_file: Path,
        workflow: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Run COMPLETE benchmarking:
        1. Extract ALL expressions from .prt
        2. Solve ALL solutions in .sim
        3. Analyze ALL result files
        4. Match objectives to available results
        5. Determine optimal solution for each objective

        Returns:
            Complete catalog of available data and recommendations
        """
        print()
        print("="*80)
        print(" INTELLIGENT SETUP - COMPLETE ANALYSIS")
        print("="*80)
        print()

        results = {
            'success': False,
            'expressions': {},
            'solutions': {},
            'available_results': {},
            'objective_mapping': {},
            'recommended_solution': None,
            'errors': []
        }

        try:
            # Phase 1: Extract ALL expressions
            print("[Phase 1/4] Extracting ALL expressions from model...")
            expressions = self._extract_all_expressions(prt_file)
            results['expressions'] = expressions
            print(f" [OK] Found {len(expressions)} expressions")
            for name, info in list(expressions.items())[:5]:
                val = info.get('value', 'N/A')
                units = info.get('units', '')
                print(f" - {name}: {val} {units}")
            if len(expressions) > 5:
                print(f" ... and {len(expressions) - 5} more")
            print()

            # Phase 2: Solve ALL solutions
            print("[Phase 2/4] Solving ALL solutions in .sim file...")
            solutions_info = self._solve_all_solutions(sim_file)
            results['solutions'] = solutions_info
            print(f" [OK] Solved {solutions_info['num_solved']} solutions")
            for sol_name in solutions_info['solution_names']:
                print(f" - {sol_name}")
            print()

            # Phase 3: Analyze ALL result files
            print("[Phase 3/4] Analyzing ALL result files...")
            available_results = self._analyze_all_results(sim_file.parent, solutions_info)
            results['available_results'] = available_results

            print(f" [OK] Found {len(available_results)} result files")
            for result_type, details in available_results.items():
                print(f" - {result_type}: {details['count']} entries in {details['file']}")
            print()

            # Phase 4: Match objectives to results
            print("[Phase 4/4] Matching objectives to available results...")
            mapping = self._match_objectives_to_results(workflow, available_results, solutions_info)
            results['objective_mapping'] = mapping
            results['recommended_solution'] = mapping.get('primary_solution')

            print(f" [OK] Objective mapping complete")
            for obj_name, obj_info in mapping['objectives'].items():
                print(f" - {obj_name}")
                print(f"   Solution: {obj_info.get('solution', 'NONE')}")
                print(f"   Result type: {obj_info.get('result_type', 'Unknown')}")
                print(f"   Extractor: {obj_info.get('extractor', 'Unknown')}")
                if 'error' in obj_info:
                    print(f"   [WARNING] {obj_info['error']}")
            print()

            if mapping.get('primary_solution'):
                print(f" [RECOMMENDATION] Use solution: {mapping['primary_solution']}")
                print()

            results['success'] = True

        except Exception as e:
            results['errors'].append(str(e))
            print(f" [ERROR] {e}")
            print()

        print("="*80)
        print(" ANALYSIS COMPLETE")
        print("="*80)
        print()

        return results

    def _extract_all_expressions(self, prt_file: Path) -> Dict[str, Any]:
        """Extract ALL expressions from .prt file."""
        from optimization_engine.nx_updater import NXParameterUpdater

        updater = NXParameterUpdater(prt_file)
        return updater.get_all_expressions()

    def _solve_all_solutions(self, sim_file: Path) -> Dict[str, Any]:
        """
        Solve ALL solutions in .sim file using NXOpen journal approach.

        CRITICAL: This method updates the .fem file from the .prt before solving!
        This is required when geometry changes (modal analysis, etc.)

        Returns dict with:
        - num_solved: int
        - num_failed: int
        - num_skipped: int
        - solution_names: List[str]
        """
        # Create journal to solve all solutions
        journal_code = f'''
import sys
import NXOpen
import NXOpen.CAE

def main(args):
    if len(args) < 1:
        print("ERROR: No .sim file path provided")
        return False

    sim_file_path = args[0]

    theSession = NXOpen.Session.GetSession()

    # Open the .sim file
    print(f"[JOURNAL] Opening simulation: {{sim_file_path}}")
    basePart1, partLoadStatus1 = theSession.Parts.OpenActiveDisplay(
        sim_file_path,
        NXOpen.DisplayPartOption.AllowAdditional
    )
    partLoadStatus1.Dispose()

    workSimPart = theSession.Parts.BaseWork
    print(f"[JOURNAL] Simulation opened successfully")

    # CRITICAL: Update FEM from master model (.prt)
    # This is required when geometry has changed (modal analysis, etc.)
    print("[JOURNAL] Updating FEM from master model...")
    simSimulation = workSimPart.Simulation

    # Get all FEModels and update them
    femModels = simSimulation.FemParts
    for i in range(femModels.Length):
        femPart = femModels.Item(i)
        print(f"[JOURNAL] Updating FEM: {{femPart.Name}}")

        # Update the FEM from associated CAD part
        femPart.UpdateFemodel()

    # Save after FEM update
    print("[JOURNAL] Saving after FEM update...")
    partSaveStatus = workSimPart.Save(
        NXOpen.BasePart.SaveComponents.TrueValue,
        NXOpen.BasePart.CloseAfterSave.FalseValue
    )
    partSaveStatus.Dispose()

    # Get all solutions
    theCAESimSolveManager = NXOpen.CAE.SimSolveManager.GetSimSolveManager(theSession)

    # Solve all solutions
    print("[JOURNAL] Solving ALL solutions...")
    num_solved, num_failed, num_skipped = theCAESimSolveManager.SolveAllSolutions(
        NXOpen.CAE.SimSolution.SolveOption.Solve,
        NXOpen.CAE.SimSolution.SetupCheckOption.CompleteCheckAndOutputErrors,
        NXOpen.CAE.SimSolution.SolveMode.Foreground,
        False
    )

    # Get solution names
    simSimulation = workSimPart.FindObject("Simulation")
    solutions = []
    for obj in simSimulation.GetAllDescendents():
        if "Solution[" in str(obj):
            solutions.append(str(obj))

    # Save to write output files
    print("[JOURNAL] Saving simulation to write output files...")
    partSaveStatus = workSimPart.Save(
        NXOpen.BasePart.SaveComponents.TrueValue,
        NXOpen.BasePart.CloseAfterSave.FalseValue
    )
    partSaveStatus.Dispose()

    # Output results
    print(f"ATOMIZER_SOLUTIONS_SOLVED: {{num_solved}}")
    print(f"ATOMIZER_SOLUTIONS_FAILED: {{num_failed}}")
    print(f"ATOMIZER_SOLUTIONS_SKIPPED: {{num_skipped}}")
    for sol in solutions:
        print(f"ATOMIZER_SOLUTION: {{sol}}")

    return True

if __name__ == '__main__':
    success = main(sys.argv[1:])
    sys.exit(0 if success else 1)
'''

        # Write and execute journal
        journal_path = sim_file.parent / "_solve_all_solutions.py"
        with open(journal_path, 'w') as f:
            f.write(journal_code)

        # Run journal via NX
        from optimization_engine.nx_solver import NXSolver
        solver = NXSolver()

        import subprocess
        from config import NX_RUN_JOURNAL

        result = subprocess.run(
            [str(NX_RUN_JOURNAL), str(journal_path), str(sim_file)],
            capture_output=True,
            text=True,
            timeout=600
        )

        # Parse output
        num_solved = 0
        num_failed = 0
        num_skipped = 0
        solution_names = []

        for line in result.stdout.split('\n'):
            if 'ATOMIZER_SOLUTIONS_SOLVED:' in line:
                num_solved = int(line.split(':')[1].strip())
            elif 'ATOMIZER_SOLUTIONS_FAILED:' in line:
                num_failed = int(line.split(':')[1].strip())
            elif 'ATOMIZER_SOLUTIONS_SKIPPED:' in line:
                num_skipped = int(line.split(':')[1].strip())
            elif 'ATOMIZER_SOLUTION:' in line:
                sol_name = line.split(':', 1)[1].strip()
                solution_names.append(sol_name)

        # Clean up
        journal_path.unlink()

        return {
            'num_solved': num_solved,
            'num_failed': num_failed,
            'num_skipped': num_skipped,
            'solution_names': solution_names
        }

    def _analyze_all_results(
        self,
        model_dir: Path,
        solutions_info: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Analyze ALL .op2 files to discover available results.

        Returns dict mapping result types to details:
        {
            'eigenvalues': {'file': 'xxx.op2', 'count': 10, 'solution': 'Modal'},
            'displacements': {'file': 'yyy.op2', 'count': 613, 'solution': 'Static'},
            'stress_quad4': {'file': 'yyy.op2', 'count': 561, 'solution': 'Static'},
            ...
        }
        """
        from pyNastran.op2.op2 import OP2

        available = {}

        # Find all .op2 files
        op2_files = list(model_dir.glob("*.op2"))

        for op2_file in op2_files:
            try:
                model = OP2()
                model.read_op2(str(op2_file))

                # Check for eigenvalues
                if hasattr(model, 'eigenvalues') and len(model.eigenvalues) > 0:
                    subcase = list(model.eigenvalues.keys())[0]
                    eig_obj = model.eigenvalues[subcase]
                    available['eigenvalues'] = {
                        'file': op2_file.name,
                        'count': len(eig_obj.eigenvalues),
                        'solution': self._guess_solution_from_filename(op2_file.name),
                        'op2_path': op2_file
                    }

                # Check for displacements
                if hasattr(model, 'displacements') and len(model.displacements) > 0:
                    subcase = list(model.displacements.keys())[0]
                    disp_obj = model.displacements[subcase]
                    available['displacements'] = {
                        'file': op2_file.name,
                        'count': disp_obj.data.shape[1],  # Number of nodes
                        'solution': self._guess_solution_from_filename(op2_file.name),
                        'op2_path': op2_file
                    }

                # Check for stresses
                if hasattr(model, 'cquad4_stress') and len(model.cquad4_stress) > 0:
                    subcase = list(model.cquad4_stress.keys())[0]
                    stress_obj = model.cquad4_stress[subcase]
                    available['stress_quad4'] = {
                        'file': op2_file.name,
                        'count': stress_obj.data.shape[1],  # Number of elements
                        'solution': self._guess_solution_from_filename(op2_file.name),
                        'op2_path': op2_file
                    }

                # Check for forces
                if hasattr(model, 'cquad4_force') and len(model.cquad4_force) > 0:
                    available['force_quad4'] = {
                        'file': op2_file.name,
                        'count': len(model.cquad4_force),
                        'solution': self._guess_solution_from_filename(op2_file.name),
                        'op2_path': op2_file
                    }

            except Exception as e:
                print(f" [WARNING] Could not analyze {op2_file.name}: {e}")

        return available

    def _guess_solution_from_filename(self, filename: str) -> str:
        """Guess solution type from filename."""
        filename_lower = filename.lower()
        if 'normal_modes' in filename_lower or 'modal' in filename_lower:
            return 'Solution_Normal_Modes'
        elif 'buckling' in filename_lower:
            return 'Solution_Buckling'
        elif 'static' in filename_lower or 'solution_1' in filename_lower:
            return 'Solution_1'
        else:
            return 'Unknown'

    def _match_objectives_to_results(
        self,
        workflow: Dict[str, Any],
        available_results: Dict[str, Any],
        solutions_info: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Intelligently match workflow objectives to available results.

        Returns:
            {
                'objectives': {
                    'obj_name': {
                        'solution': 'Solution_Normal_Modes',
                        'result_type': 'eigenvalues',
                        'extractor': 'extract_first_frequency',
                        'op2_file': Path(...)
                    }
                },
                'primary_solution': 'Solution_Normal_Modes'  # Most important solution
            }
        """
        mapping = {
            'objectives': {},
            'primary_solution': None
        }

        for obj in workflow.get('objectives', []):
            obj_name = obj.get('name', 'unnamed')
            extraction = obj.get('extraction', {})
            action = extraction.get('action', '').lower()

            # Match based on objective type
            if 'frequency' in action or 'eigenvalue' in action or 'modal' in action:
                if 'eigenvalues' in available_results:
                    result_info = available_results['eigenvalues']
                    mapping['objectives'][obj_name] = {
                        'solution': result_info['solution'],
                        'result_type': 'eigenvalues',
                        'extractor': 'extract_first_frequency',
                        'op2_file': result_info['op2_path'],
                        'match_confidence': 'HIGH'
                    }
                    if not mapping['primary_solution']:
                        mapping['primary_solution'] = result_info['solution']
                else:
                    mapping['objectives'][obj_name] = {
                        'solution': 'NONE',
                        'result_type': 'eigenvalues',
                        'extractor': 'extract_first_frequency',
                        'op2_file': None,
                        'match_confidence': 'ERROR',
                        'error': 'No eigenvalue results found - check if modal solution exists'
                    }

            elif 'displacement' in action or 'deflection' in action:
                if 'displacements' in available_results:
                    result_info = available_results['displacements']
                    mapping['objectives'][obj_name] = {
                        'solution': result_info['solution'],
                        'result_type': 'displacements',
                        'extractor': 'extract_max_displacement',
                        'op2_file': result_info['op2_path'],
                        'match_confidence': 'HIGH'
                    }
                    if not mapping['primary_solution']:
                        mapping['primary_solution'] = result_info['solution']

            elif 'stress' in action or 'von_mises' in action:
                if 'stress_quad4' in available_results:
                    result_info = available_results['stress_quad4']
                    mapping['objectives'][obj_name] = {
                        'solution': result_info['solution'],
                        'result_type': 'stress',
                        'extractor': 'extract_max_stress',
                        'op2_file': result_info['op2_path'],
                        'match_confidence': 'HIGH'
                    }
                    if not mapping['primary_solution']:
                        mapping['primary_solution'] = result_info['solution']

        return mapping

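The objective matcher above boils down to a substring lookup over the extraction action. A minimal self-contained sketch of that keyword mapping (the `match_action` helper is hypothetical, mirroring the matcher's `in action` checks):

```python
def match_action(action: str) -> str:
    """Map an extraction-action keyword to a result type (mirrors the matcher's substring checks)."""
    action = action.lower()
    if any(k in action for k in ("frequency", "eigenvalue", "modal")):
        return "eigenvalues"
    if any(k in action for k in ("displacement", "deflection")):
        return "displacements"
    if any(k in action for k in ("stress", "von_mises")):
        return "stress"
    return "unknown"

print(match_action("extract_first_frequency"))  # eigenvalues
print(match_action("extract_max_von_mises"))    # stress
```

Note that branch order matters: an action naming both "frequency" and "stress" would resolve to eigenvalues, just as in the matcher above.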
    def generate_intelligent_runner(
        self,
        study_dir: Path,
        workflow: Dict[str, Any],
        benchmark_results: Dict[str, Any]
    ) -> Path:
        """
        Generate optimized runner based on intelligent analysis.

        Uses benchmark results to:
        1. Select correct solution to solve
        2. Generate correct extractors
        3. Optimize for speed (only solve what's needed)
        """
        runner_path = study_dir / "run_optimization.py"

        # Get recommended solution
        recommended_solution = benchmark_results.get('recommended_solution', 'Solution_1')
        objective_mapping = benchmark_results.get('objective_mapping', {})

        # Generate extractor functions based on actual available results
        extractor_code = self._generate_intelligent_extractors(objective_mapping)

        runner_code = f'''"""
Auto-generated INTELLIGENT optimization runner
Created: {datetime.now().strftime("%Y-%m-%d %H:%M:%S")}

Intelligently configured based on complete benchmarking:
- Solution: {recommended_solution}
- Extractors: Auto-matched to available results
"""

import sys
from pathlib import Path

# Add project root to path
project_root = Path(__file__).parent.parent.parent
sys.path.insert(0, str(project_root))

import json
import optuna
from optimization_engine.nx_updater import NXParameterUpdater
from optimization_engine.nx_solver import NXSolver


{extractor_code}


def main():
    print("="*80)
    print(" {workflow.get('study_name', 'OPTIMIZATION').upper()}")
    print(" Intelligent Setup - Auto-configured")
    print("="*80)
    print()

    # Load workflow
    config_file = Path(__file__).parent / "1_setup/workflow_config.json"
    with open(config_file) as f:
        workflow = json.load(f)

    print("Configuration:")
    print(f" Target solution: {recommended_solution}")
    print(f" Objectives: {len(workflow.get('objectives', []))}")
    print(f" Variables: {len(workflow.get('design_variables', []))}")
    print()

    # Setup paths
    model_dir = Path(__file__).parent / "1_setup/model"
    prt_file = list(model_dir.glob("*.prt"))[0]
    sim_file = list(model_dir.glob("*.sim"))[0]
    output_dir = Path(__file__).parent / "2_substudies/results"
    output_dir.mkdir(parents=True, exist_ok=True)

    # Initialize
    updater = NXParameterUpdater(prt_file)
    solver = NXSolver()

    # Create Optuna study
    study_name = "{workflow.get('study_name', 'optimization')}"
    storage = f"sqlite:///{{output_dir / 'study.db'}}"
    study = optuna.create_study(
        study_name=study_name,
        storage=storage,
        load_if_exists=True,
        direction="minimize"
    )

    def objective(trial):
        # Sample design variables
        params = {{}}
        for var in workflow['design_variables']:
            name = var['parameter']
            bounds = var['bounds']
            params[name] = trial.suggest_float(name, bounds[0], bounds[1])

        print(f"\\nTrial {{trial.number}}:")
        for name, value in params.items():
            print(f" {{name}} = {{value:.2f}}")

        # Update model
        updater.update_expressions(params)

        # Run SPECIFIC solution (optimized - only what's needed)
        result = solver.run_simulation(
            sim_file,
            solution_name="{recommended_solution}"
        )
        if not result['success']:
            raise RuntimeError(f"Simulation failed: {{result.get('errors', 'Unknown')}}")

        op2_file = result['op2_file']

        # Extract results
        results = extract_results(op2_file, workflow)

        # Print results
        for name, value in results.items():
            print(f" {{name}} = {{value:.4f}}")

        # Calculate objective
        obj_config = workflow['objectives'][0]
        result_name = list(results.keys())[0]

        if obj_config['goal'] == 'minimize':
            objective_value = results[result_name]
        else:
            objective_value = -results[result_name]

        print(f" Objective = {{objective_value:.4f}}")

        return objective_value

    # Run optimization
    n_trials = 10
    print(f"\\nRunning {{n_trials}} trials...")
    print("="*80)
    print()

    study.optimize(objective, n_trials=n_trials)

    # Results
    print()
    print("="*80)
    print(" OPTIMIZATION COMPLETE")
    print("="*80)
    print()
    print(f"Best trial: #{{study.best_trial.number}}")
    for name, value in study.best_params.items():
        print(f" {{name}} = {{value:.2f}}")
    print(f"\\nBest objective = {{study.best_value:.4f}}")
    print()


if __name__ == "__main__":
    main()
'''

        with open(runner_path, 'w') as f:
            f.write(runner_code)

        return runner_path

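The generated runner builds its Optuna storage URL by interpolating a `pathlib.Path` into an f-string (note the doubled braces in the template, which survive the outer f-string as a single interpolation). A minimal sketch of just that string formatting:

```python
from pathlib import Path

# Mirrors the generated runner's storage line after template expansion
output_dir = Path("2_substudies/results")
storage = f"sqlite:///{output_dir / 'study.db'}"
print(storage)  # on POSIX: sqlite:///2_substudies/results/study.db
```

On Windows the `Path` would render with backslashes, so a runner targeting both platforms would typically call `.as_posix()` before embedding the path in the URL.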
    def _generate_intelligent_extractors(self, objective_mapping: Dict[str, Any]) -> str:
        """Generate extractor functions based on intelligent mapping."""

        extractors = set()
        for obj_name, obj_info in objective_mapping.get('objectives', {}).items():
            if 'extractor' in obj_info:
                extractors.add(obj_info['extractor'])

        code = '''
def extract_results(op2_file, workflow):
    """Intelligently extract results based on benchmarking."""
    from pyNastran.op2.op2 import OP2
    import numpy as np

    model = OP2()
    model.read_op2(str(op2_file))

    results = {}
'''

        if 'extract_first_frequency' in extractors:
            code += '''
    # Extract first frequency (auto-matched to eigenvalues)
    if hasattr(model, 'eigenvalues') and len(model.eigenvalues) > 0:
        subcase = list(model.eigenvalues.keys())[0]
        eig_obj = model.eigenvalues[subcase]
        eigenvalue = eig_obj.eigenvalues[0]
        angular_freq = np.sqrt(eigenvalue)
        frequency_hz = angular_freq / (2 * np.pi)
        results['first_frequency'] = float(frequency_hz)
'''

        if 'extract_max_displacement' in extractors:
            code += '''
    # Extract max displacement (auto-matched to displacements)
    if hasattr(model, 'displacements') and len(model.displacements) > 0:
        subcase = list(model.displacements.keys())[0]
        disp_obj = model.displacements[subcase]
        translations = disp_obj.data[0, :, :3]
        magnitudes = np.linalg.norm(translations, axis=1)
        results['max_displacement'] = float(np.max(magnitudes))
'''

        if 'extract_max_stress' in extractors:
            code += '''
    # Extract max stress (auto-matched to stress results)
    if hasattr(model, 'cquad4_stress') and len(model.cquad4_stress) > 0:
        subcase = list(model.cquad4_stress.keys())[0]
        stress_obj = model.cquad4_stress[subcase]
        von_mises = stress_obj.data[0, :, 7]
        results['max_stress'] = float(np.max(von_mises))
'''

        code += '''
    return results
'''

        return code

if __name__ == "__main__":
    # Example usage
    setup = IntelligentSetup()

    # Run complete analysis
    results = setup.run_complete_benchmarking(
        prt_file=Path("path/to/model.prt"),
        sim_file=Path("path/to/model.sim"),
        workflow={'objectives': [{'name': 'freq', 'extraction': {'action': 'extract_frequency'}}]}
    )

    print("Analysis complete:")
    print(json.dumps(results, indent=2, default=str))
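The `extract_first_frequency` extractor emitted above converts a Nastran eigenvalue λ (in rad²/s²) to a frequency via f = √λ / (2π). A minimal self-contained sketch of that conversion (helper name hypothetical):

```python
import math

def eigenvalue_to_hz(eigenvalue: float) -> float:
    """Convert an eigenvalue in (rad/s)^2 to a natural frequency in Hz: f = sqrt(lambda) / (2*pi)."""
    return math.sqrt(eigenvalue) / (2.0 * math.pi)

# An eigenvalue of (2*pi*10)^2 corresponds to exactly 10 Hz
print(eigenvalue_to_hz((2.0 * math.pi * 10.0) ** 2))
```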
optimization_engine/model_discovery/__init__.py | 19 (new file)
@@ -0,0 +1,19 @@
"""
Model Discovery Module

Tools for parsing and analyzing Siemens NX FEA model files:
- SimFileParser: Parse .sim files (both XML and binary)
- discover_fea_model: Main function to analyze model capabilities
"""

from .model_discovery import (
    SimFileParser,
    discover_fea_model,
    format_discovery_result_for_llm,
)

__all__ = [
    "SimFileParser",
    "discover_fea_model",
    "format_discovery_result_for_llm",
]
@@ -339,7 +339,7 @@ class OptimizationConfigBuilder:

 # Example usage
 if __name__ == "__main__":
-    from mcp_server.tools.model_discovery import discover_fea_model
+    from optimization_engine.model_discovery import discover_fea_model

     # Step 1: Discover model
     print("Step 1: Discovering FEA model...")
@@ -1,211 +0,0 @@
"""
Unit tests for MCP Model Discovery Tool

Tests the .sim file parser and FEA model discovery functionality.
"""

import pytest
from pathlib import Path
import sys

# Add project root to path
project_root = Path(__file__).parent.parent.parent.parent
sys.path.insert(0, str(project_root))

from mcp_server.tools.model_discovery import (
    discover_fea_model,
    format_discovery_result_for_llm,
    SimFileParser
)


class TestSimFileParser:
    """Test the SimFileParser class"""

    @pytest.fixture
    def example_sim_path(self):
        """Path to example .sim file"""
        return project_root / "examples" / "test_bracket.sim"

    def test_parser_initialization(self, example_sim_path):
        """Test that parser initializes correctly"""
        parser = SimFileParser(example_sim_path)
        assert parser.sim_path.exists()
        assert parser.tree is not None
        assert parser.root is not None

    def test_parser_file_not_found(self):
        """Test error handling for missing file"""
        with pytest.raises(FileNotFoundError):
            SimFileParser("/nonexistent/path/file.sim")

    def test_parser_invalid_extension(self):
        """Test error handling for non-.sim file"""
        with pytest.raises(ValueError):
            SimFileParser(project_root / "README.md")

    def test_extract_solutions(self, example_sim_path):
        """Test solution extraction"""
        parser = SimFileParser(example_sim_path)
        solutions = parser.extract_solutions()

        assert len(solutions) > 0
        assert solutions[0]['name'] == 'Structural Analysis 1'
        assert solutions[0]['type'] == 'Static Structural'
        assert solutions[0]['solver'] == 'NX Nastran'

    def test_extract_expressions(self, example_sim_path):
        """Test expression extraction"""
        parser = SimFileParser(example_sim_path)
        expressions = parser.extract_expressions()

        assert len(expressions) > 0

        # Check for expected expressions
        expr_names = [e['name'] for e in expressions]
        assert 'wall_thickness' in expr_names
        assert 'hole_diameter' in expr_names
        assert 'rib_spacing' in expr_names

        # Check expression values
        wall_thickness = next(e for e in expressions if e['name'] == 'wall_thickness')
        assert wall_thickness['value'] == '5.0'
        assert wall_thickness['units'] == 'mm'

    def test_extract_fem_info(self, example_sim_path):
        """Test FEM information extraction"""
        parser = SimFileParser(example_sim_path)
        fem_info = parser.extract_fem_info()

        # Check mesh info
        assert 'mesh' in fem_info
        assert fem_info['mesh']['name'] == 'Bracket Mesh'
        assert fem_info['mesh']['node_count'] == '8234'
        assert fem_info['mesh']['element_count'] == '4521'

        # Check materials
        assert len(fem_info['materials']) > 0
        assert fem_info['materials'][0]['name'] == 'Aluminum 6061-T6'

        # Check loads
        assert len(fem_info['loads']) > 0
        assert fem_info['loads'][0]['name'] == 'Applied Force'

        # Check constraints
        assert len(fem_info['constraints']) > 0
        assert fem_info['constraints'][0]['name'] == 'Fixed Support'


class TestDiscoverFEAModel:
    """Test the main discover_fea_model function"""

    @pytest.fixture
    def example_sim_path(self):
        """Path to example .sim file"""
        return str(project_root / "examples" / "test_bracket.sim")

    def test_successful_discovery(self, example_sim_path):
        """Test successful model discovery"""
        result = discover_fea_model(example_sim_path)

        assert result['status'] == 'success'
        assert result['file_exists'] is True
        assert 'solutions' in result
        assert 'expressions' in result
        assert 'fem_info' in result
        assert 'summary' in result

        # Check summary statistics
        assert result['summary']['solution_count'] >= 1
        assert result['summary']['expression_count'] >= 3

    def test_file_not_found_error(self):
        """Test error handling for missing file"""
        result = discover_fea_model("/nonexistent/file.sim")

        assert result['status'] == 'error'
        assert result['error_type'] == 'file_not_found'
        assert 'message' in result
        assert 'suggestion' in result

    def test_result_structure(self, example_sim_path):
        """Test that result has expected structure"""
        result = discover_fea_model(example_sim_path)

        # Check top-level keys
        expected_keys = ['status', 'sim_file', 'file_exists', 'solutions',
                         'expressions', 'fem_info', 'linked_files', 'metadata', 'summary']

        for key in expected_keys:
            assert key in result, f"Missing key: {key}"

        # Check summary keys
        expected_summary_keys = ['solution_count', 'expression_count',
                                 'material_count', 'load_count', 'constraint_count']

        for key in expected_summary_keys:
            assert key in result['summary'], f"Missing summary key: {key}"


class TestFormatDiscoveryResult:
    """Test the Markdown formatting function"""

    @pytest.fixture
    def example_sim_path(self):
        """Path to example .sim file"""
        return str(project_root / "examples" / "test_bracket.sim")

    def test_format_success_result(self, example_sim_path):
        """Test formatting of successful discovery"""
        result = discover_fea_model(example_sim_path)
        formatted = format_discovery_result_for_llm(result)

        assert isinstance(formatted, str)
        assert '# FEA Model Analysis' in formatted
        assert 'Solutions' in formatted
        assert 'Expressions' in formatted
        assert 'wall_thickness' in formatted

    def test_format_error_result(self):
        """Test formatting of error result"""
        result = discover_fea_model("/nonexistent/file.sim")
        formatted = format_discovery_result_for_llm(result)

        assert isinstance(formatted, str)
        assert '❌' in formatted or 'Error' in formatted
        assert result['message'] in formatted


# Integration test
def test_end_to_end_workflow():
    """
    Test the complete workflow:
    1. Discover model
    2. Format for LLM
    3. Verify output is useful
    """
    example_sim = str(project_root / "examples" / "test_bracket.sim")

    # Step 1: Discover
    result = discover_fea_model(example_sim)
    assert result['status'] == 'success'

    # Step 2: Format
    formatted = format_discovery_result_for_llm(result)
    assert len(formatted) > 100  # Should be substantial output

    # Step 3: Verify key information is present
    assert 'wall_thickness' in formatted
    assert 'Aluminum' in formatted
    assert 'Static Structural' in formatted

    print("\n" + "="*60)
    print("INTEGRATION TEST OUTPUT:")
    print("="*60)
    print(formatted)
    print("="*60)


if __name__ == "__main__":
    # Run tests with pytest
    pytest.main([__file__, "-v", "-s"])