feat: Add Study Insights module (SYS_16) for physics visualizations
Introduces a new plugin architecture for study-specific physics visualizations, separating the "optimizer perspective" (Analysis) from the "engineer perspective" (Insights).

New module: optimization_engine/insights/
- base.py: StudyInsight base class, InsightConfig, InsightResult, registry
- zernike_wfe.py: Mirror WFE with 3D surface and Zernike decomposition
- stress_field.py: Von Mises stress contours with safety factors
- modal_analysis.py: Natural frequencies and mode shapes
- thermal_field.py: Temperature distribution visualization
- design_space.py: Parameter-objective landscape exploration

Features:
- 5 insight types: zernike_wfe, stress_field, modal, thermal, design_space
- CLI: python -m optimization_engine.insights generate <study>
- Standalone HTML generation with Plotly
- Enhanced Zernike viz: Turbo colorscale, smooth shading, 0.5x AMP
- Dashboard API fix: Added include_coefficients param to extract_relative()

Documentation:
- docs/protocols/system/SYS_16_STUDY_INSIGHTS.md
- Updated ATOMIZER_CONTEXT.md (v1.7)
- Updated 01_CHEATSHEET.md with insights section

Tools:
- tools/zernike_html_generator.py: Standalone WFE HTML generator
- tools/analyze_wfe.bat: Double-click to analyze OP2 files

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
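The Zernike decomposition mentioned for zernike_wfe.py typically amounts to a least-squares fit of Zernike polynomials to the mirror's surface displacements. A minimal sketch of that idea (illustrative only — not the module's actual code; the function name and the choice of the first four Noll terms are assumptions):

```python
import numpy as np

def fit_zernike(rho, theta, wfe):
    """Least-squares fit of the first four Zernike terms (Noll ordering)
    to a wavefront sampled at polar coordinates (rho, theta) on the unit disk."""
    basis = np.column_stack([
        np.ones_like(rho),                    # Z1: piston
        2.0 * rho * np.cos(theta),            # Z2: tip
        2.0 * rho * np.sin(theta),            # Z3: tilt
        np.sqrt(3.0) * (2.0 * rho**2 - 1.0),  # Z4: defocus
    ])
    coeffs, *_ = np.linalg.lstsq(basis, wfe, rcond=None)
    residual_rms = float(np.sqrt(np.mean((wfe - basis @ coeffs) ** 2)))
    return coeffs, residual_rms
```

A pure-defocus wavefront should come back with all power in the Z4 coefficient and near-zero residual RMS.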
@@ -51,6 +51,7 @@ Use keyword matching to load appropriate context:
| Analyze results | "results", "best", "Pareto", "analyze" | OP_04 | Generate analysis |
| Neural acceleration | "neural", "surrogate", "turbo", "NN" | SYS_14 + SYS_15 | Method selection |
| NX/CAD help | "NX", "model", "mesh", "expression" | MCP + nx-docs | Use Siemens MCP |
| Physics insights | "zernike", "stress view", "insight" | SYS_16 | Generate insights |
| Troubleshoot | "error", "failed", "fix", "debug" | OP_06 | Diagnose issues |

---
@@ -86,22 +87,46 @@ cd atomizer-dashboard && npm run dev # Start at http://localhost:3003

### Study Structure (100% standardized)

**Studies are organized by geometry type**:
```
study_name/
studies/
├── M1_Mirror/          # Mirror optimization studies
│   ├── m1_mirror_adaptive_V14/
│   ├── m1_mirror_cost_reduction_V3/
│   └── m1_mirror_cost_reduction_V4/
├── Simple_Bracket/     # Bracket studies
├── UAV_Arm/            # UAV arm studies
├── Drone_Gimbal/       # Gimbal studies
├── Simple_Beam/        # Beam studies
└── _Other/             # Test/experimental studies
```

**Individual study structure**:
```
studies/{geometry_type}/{study_name}/
├── optimization_config.json    # Problem definition
├── run_optimization.py         # FEA optimization script
├── run_nn_optimization.py      # Neural acceleration (optional)
├── README.md                   # MANDATORY documentation
├── STUDY_REPORT.md             # Results template
├── 1_setup/
│   └── model/
│       ├── Model.prt           # NX part file
│       ├── Model_sim1.sim      # NX simulation
│       └── Model_fem1.fem      # FEM definition
└── 2_results/
    ├── study.db                # Optuna database
    ├── optimization.log        # Logs
    └── turbo_report.json       # NN results (if run)
├── 2_iterations/               # FEA trial folders (iter1, iter2, ...)
├── 3_results/
│   ├── study.db                # Optuna database
│   ├── optimization.log        # Logs
│   └── turbo_report.json       # NN results (if run)
└── 3_insights/                 # Study Insights (SYS_16)
    ├── zernike_*.html          # Zernike WFE visualizations
    ├── stress_*.html           # Stress field visualizations
    └── design_space_*.html     # Parameter exploration
```

**IMPORTANT**: When creating a new study, always place it under the appropriate geometry type folder!
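The folder layout above can be scaffolded programmatically. A hypothetical helper (the function name and the example geometry/study names are placeholders, not part of the toolkit):

```python
from pathlib import Path

def scaffold_study(root: Path, geometry: str, name: str) -> Path:
    """Create the standardized study skeleton under the right geometry folder."""
    study = root / geometry / name
    for sub in ["1_setup/model", "2_iterations", "3_results", "3_insights"]:
        (study / sub).mkdir(parents=True, exist_ok=True)
    (study / "README.md").touch()  # README.md is MANDATORY per the structure above
    return study
```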

### Available Extractors (SYS_12)

| ID | Physics | Function | Notes |
@@ -141,6 +166,7 @@ study_name/
│  SYS_10: IMSO (single-obj)    SYS_11: Multi-objective          │
│  SYS_12: Extractors           SYS_13: Dashboard                │
│  SYS_14: Neural Accel         SYS_15: Method Selector          │
│  SYS_16: Study Insights                                        │
└─────────────────────────────────────────────────────────────────┘
                                ▼
┌─────────────────────────────────────────────────────────────────┐
@@ -411,7 +437,7 @@ python -m optimization_engine.auto_doc templates

| Component | Version | Last Updated |
|-----------|---------|--------------|
| ATOMIZER_CONTEXT | 1.6 | 2025-12-12 |
| ATOMIZER_CONTEXT | 1.7 | 2025-12-20 |
| BaseOptimizationRunner | 1.0 | 2025-12-07 |
| GenericSurrogate | 1.0 | 2025-12-07 |
| Study State Detector | 1.0 | 2025-12-07 |
@@ -423,6 +449,7 @@ python -m optimization_engine.auto_doc templates

| Auto-Doc Generator | 1.0 | 2025-12-07 |
| Subagent Commands | 1.0 | 2025-12-07 |
| FEARunner Pattern | 1.0 | 2025-12-12 |
| Study Insights | 1.0 | 2025-12-20 |

---

@@ -22,7 +22,7 @@ requires_skills:

| I want to... | Use Protocol | Key Command/Action |
|--------------|--------------|-------------------|
| Create a new optimization study | OP_01 | Generate `optimization_config.json` + `run_optimization.py` |
| Create a new optimization study | OP_01 | Place in `studies/{geometry_type}/`, generate config + runner + **README.md** |
| Run an optimization | OP_02 | `conda activate atomizer && python run_optimization.py` |
| Check optimization progress | OP_03 | Query `study.db` or check dashboard at `localhost:3000` |
| See best results | OP_04 | `optuna-dashboard sqlite:///study.db` or dashboard |
@@ -30,6 +30,7 @@ requires_skills:

| Fix an error | OP_06 | Read error log → follow diagnostic tree |
| Add custom physics extractor | EXT_01 | Create in `optimization_engine/extractors/` |
| Add lifecycle hook | EXT_02 | Create in `optimization_engine/plugins/` |
| Generate physics insight | SYS_16 | `python -m optimization_engine.insights generate <study>` |

---

@@ -291,6 +292,53 @@ Without it, `UpdateFemodel()` runs but the mesh doesn't change!
| 13 | Dashboard | Real-time tracking and visualization |
| 14 | Neural | Surrogate model acceleration |
| 15 | Method Selector | Recommends optimization strategy |
| 16 | Study Insights | Physics visualizations (Zernike, stress, modal) |

---

## Study Insights Quick Reference (SYS_16)

Generate physics-focused visualizations from FEA results.

### Available Insight Types
| Type | Purpose | Data Required |
|------|---------|---------------|
| `zernike_wfe` | Mirror wavefront error | OP2 with displacements |
| `stress_field` | Von Mises stress contours | OP2 with stresses |
| `modal` | Mode shapes + frequencies | OP2 with eigenvectors |
| `thermal` | Temperature distribution | OP2 with temperatures |
| `design_space` | Parameter-objective landscape | study.db with 5+ trials |

### Commands
```bash
# List available insights for a study
python -m optimization_engine.insights list

# Generate all insights
python -m optimization_engine.insights generate studies/my_study

# Generate specific insight
python -m optimization_engine.insights generate studies/my_study --type zernike_wfe
```

### Python API
```python
from optimization_engine.insights import get_insight, list_available_insights
from pathlib import Path

study_path = Path("studies/my_study")

# Check what's available
available = list_available_insights(study_path)

# Generate Zernike WFE insight
insight = get_insight('zernike_wfe', study_path)
result = insight.generate()
print(result.html_path)  # Path to generated HTML
print(result.summary)    # Key metrics dict
```

**Output**: HTMLs saved to `{study}/3_insights/`

---

@@ -34,12 +34,52 @@ def get_results_dir(study_dir: Path) -> Path:
    return results_dir


def resolve_study_path(study_id: str) -> Path:
    """Find study folder by scanning all topic directories.

    Supports nested folder structure: studies/Topic/study_name/
    Study ID is the short name (e.g., 'm1_mirror_adaptive_V14')

    Returns the full path to the study directory.
    Raises HTTPException 404 if not found.
    """
    # First check direct path (backwards compatibility for flat structure)
    direct_path = STUDIES_DIR / study_id
    if direct_path.exists() and direct_path.is_dir():
        # Verify it's actually a study (has 1_setup or config)
        if (direct_path / "1_setup").exists() or (direct_path / "optimization_config.json").exists():
            return direct_path

    # Scan topic folders for nested structure
    for topic_dir in STUDIES_DIR.iterdir():
        if topic_dir.is_dir() and not topic_dir.name.startswith('.'):
            study_dir = topic_dir / study_id
            if study_dir.exists() and study_dir.is_dir():
                # Verify it's actually a study
                if (study_dir / "1_setup").exists() or (study_dir / "optimization_config.json").exists():
                    return study_dir

    raise HTTPException(status_code=404, detail=f"Study not found: {study_id}")


def get_study_topic(study_dir: Path) -> Optional[str]:
    """Get the topic folder name for a study, or None if in root."""
    # Check if parent is a topic folder (not the root studies dir)
    parent = study_dir.parent
    if parent != STUDIES_DIR and parent.parent == STUDIES_DIR:
        return parent.name
    return None


def is_optimization_running(study_id: str) -> bool:
    """Check if an optimization process is currently running for a study.

    Looks for Python processes running run_optimization.py with the study_id in the command line.
    """
    study_dir = STUDIES_DIR / study_id
    try:
        study_dir = resolve_study_path(study_id)
    except HTTPException:
        return False

    for proc in psutil.process_iter(['pid', 'name', 'cmdline', 'cwd']):
        try:
@@ -91,130 +131,168 @@ def get_accurate_study_status(study_id: str, trial_count: int, total_trials: int
        return "paused"


def _load_study_info(study_dir: Path, topic: Optional[str] = None) -> Optional[dict]:
    """Load study info from a study directory. Returns None if not a valid study."""
    # Look for optimization config (check multiple locations)
    config_file = study_dir / "optimization_config.json"
    if not config_file.exists():
        config_file = study_dir / "1_setup" / "optimization_config.json"
        if not config_file.exists():
            return None

    # Load config
    with open(config_file) as f:
        config = json.load(f)

    # Check if results directory exists (support both 2_results and 3_results)
    results_dir = study_dir / "2_results"
    if not results_dir.exists():
        results_dir = study_dir / "3_results"

    # Check for Optuna database (Protocol 10) or JSON history (other protocols)
    study_db = results_dir / "study.db"
    history_file = results_dir / "optimization_history_incremental.json"

    trial_count = 0
    best_value = None
    has_db = False

    # Protocol 10: Read from Optuna SQLite database
    if study_db.exists():
        has_db = True
        try:
            # Use timeout to avoid blocking on locked databases
            conn = sqlite3.connect(str(study_db), timeout=2.0)
            cursor = conn.cursor()

            # Get trial count and status
            cursor.execute("SELECT COUNT(*) FROM trials WHERE state = 'COMPLETE'")
            trial_count = cursor.fetchone()[0]

            # Get best trial (for single-objective, or first objective for multi-objective)
            if trial_count > 0:
                cursor.execute("""
                    SELECT value FROM trial_values
                    WHERE trial_id IN (
                        SELECT trial_id FROM trials WHERE state = 'COMPLETE'
                    )
                    ORDER BY value ASC
                    LIMIT 1
                """)
                result = cursor.fetchone()
                if result:
                    best_value = result[0]

            conn.close()

        except Exception as e:
            print(f"Warning: Failed to read Optuna database for {study_dir.name}: {e}")

    # Legacy: Read from JSON history
    elif history_file.exists():
        has_db = True
        with open(history_file) as f:
            history = json.load(f)
        trial_count = len(history)
        if history:
            # Find best trial
            best_trial = min(history, key=lambda x: x['objective'])
            best_value = best_trial['objective']

    # Get total trials from config (supports both formats)
    total_trials = (
        config.get('optimization_settings', {}).get('n_trials') or
        config.get('optimization', {}).get('n_trials') or
        config.get('trials', {}).get('n_trials', 50)
    )

    # Get accurate status using process detection
    status = get_accurate_study_status(study_dir.name, trial_count, total_trials, has_db)

    # Get creation date from directory or config modification time
    created_at = None
    try:
        # First try to get from database (most accurate)
        if study_db.exists():
            created_at = datetime.fromtimestamp(study_db.stat().st_mtime).isoformat()
        elif config_file.exists():
            created_at = datetime.fromtimestamp(config_file.stat().st_mtime).isoformat()
        else:
            created_at = datetime.fromtimestamp(study_dir.stat().st_ctime).isoformat()
    except:
        created_at = None

    # Get last modified time
    last_modified = None
    try:
        if study_db.exists():
            last_modified = datetime.fromtimestamp(study_db.stat().st_mtime).isoformat()
        elif history_file.exists():
            last_modified = datetime.fromtimestamp(history_file.stat().st_mtime).isoformat()
    except:
        last_modified = None

    return {
        "id": study_dir.name,
        "name": study_dir.name.replace("_", " ").title(),
        "topic": topic,  # NEW: topic field for grouping
        "status": status,
        "progress": {
            "current": trial_count,
            "total": total_trials
        },
        "best_value": best_value,
        "target": config.get('target', {}).get('value'),
        "path": str(study_dir),
        "created_at": created_at,
        "last_modified": last_modified
    }


@router.get("/studies")
async def list_studies():
    """List all available optimization studies"""
    """List all available optimization studies.

    Supports both flat and nested folder structures:
    - Flat: studies/study_name/
    - Nested: studies/Topic/study_name/

    Returns studies with 'topic' field for frontend grouping.
    """
    try:
        studies = []
        if not STUDIES_DIR.exists():
            return {"studies": []}

        for study_dir in STUDIES_DIR.iterdir():
            if not study_dir.is_dir():
        for item in STUDIES_DIR.iterdir():
            if not item.is_dir():
                continue
            if item.name.startswith('.'):
                continue

            # Look for optimization config (check multiple locations)
            config_file = study_dir / "optimization_config.json"
            if not config_file.exists():
                config_file = study_dir / "1_setup" / "optimization_config.json"
                if not config_file.exists():
                    continue
            # Check if this is a study (flat structure) or a topic folder (nested structure)
            is_study = (item / "1_setup").exists() or (item / "optimization_config.json").exists()

            # Load config
            with open(config_file) as f:
                config = json.load(f)
            if is_study:
                # Flat structure: study directly in studies/
                study_info = _load_study_info(item, topic=None)
                if study_info:
                    studies.append(study_info)
            else:
                # Nested structure: this might be a topic folder
                # Check if it contains study subdirectories
                for sub_item in item.iterdir():
                    if not sub_item.is_dir():
                        continue
                    if sub_item.name.startswith('.'):
                        continue

            # Check if results directory exists (support both 2_results and 3_results)
            results_dir = study_dir / "2_results"
            if not results_dir.exists():
                results_dir = study_dir / "3_results"

            # Check for Optuna database (Protocol 10) or JSON history (other protocols)
            study_db = results_dir / "study.db"
            history_file = results_dir / "optimization_history_incremental.json"

            trial_count = 0
            best_value = None
            has_db = False

            # Protocol 10: Read from Optuna SQLite database
            if study_db.exists():
                has_db = True
                try:
                    # Use timeout to avoid blocking on locked databases
                    conn = sqlite3.connect(str(study_db), timeout=2.0)
                    cursor = conn.cursor()

                    # Get trial count and status
                    cursor.execute("SELECT COUNT(*) FROM trials WHERE state = 'COMPLETE'")
                    trial_count = cursor.fetchone()[0]

                    # Get best trial (for single-objective, or first objective for multi-objective)
                    if trial_count > 0:
                        cursor.execute("""
                            SELECT value FROM trial_values
                            WHERE trial_id IN (
                                SELECT trial_id FROM trials WHERE state = 'COMPLETE'
                            )
                            ORDER BY value ASC
                            LIMIT 1
                        """)
                        result = cursor.fetchone()
                        if result:
                            best_value = result[0]

                    conn.close()

                except Exception as e:
                    print(f"Warning: Failed to read Optuna database for {study_dir.name}: {e}")

            # Legacy: Read from JSON history
            elif history_file.exists():
                has_db = True
                with open(history_file) as f:
                    history = json.load(f)
                trial_count = len(history)
                if history:
                    # Find best trial
                    best_trial = min(history, key=lambda x: x['objective'])
                    best_value = best_trial['objective']

            # Get total trials from config (supports both formats)
            total_trials = (
                config.get('optimization_settings', {}).get('n_trials') or
                config.get('trials', {}).get('n_trials', 50)
            )

            # Get accurate status using process detection
            status = get_accurate_study_status(study_dir.name, trial_count, total_trials, has_db)

            # Get creation date from directory or config modification time
            created_at = None
            try:
                # First try to get from database (most accurate)
                if study_db.exists():
                    created_at = datetime.fromtimestamp(study_db.stat().st_mtime).isoformat()
                elif config_file.exists():
                    created_at = datetime.fromtimestamp(config_file.stat().st_mtime).isoformat()
                else:
                    created_at = datetime.fromtimestamp(study_dir.stat().st_ctime).isoformat()
            except:
                created_at = None

            # Get last modified time
            last_modified = None
            try:
                if study_db.exists():
                    last_modified = datetime.fromtimestamp(study_db.stat().st_mtime).isoformat()
                elif history_file.exists():
                    last_modified = datetime.fromtimestamp(history_file.stat().st_mtime).isoformat()
            except:
                last_modified = None

            studies.append({
                "id": study_dir.name,
                "name": study_dir.name.replace("_", " ").title(),
                "status": status,
                "progress": {
                    "current": trial_count,
                    "total": total_trials
                },
                "best_value": best_value,
                "target": config.get('target', {}).get('value'),
                "path": str(study_dir),
                "created_at": created_at,
                "last_modified": last_modified
            })
                    # Check if this subdirectory is a study
                    sub_is_study = (sub_item / "1_setup").exists() or (sub_item / "optimization_config.json").exists()
                    if sub_is_study:
                        study_info = _load_study_info(sub_item, topic=item.name)
                        if study_info:
                            studies.append(study_info)

        return {"studies": studies}

@@ -225,7 +303,7 @@ async def list_studies():
async def get_study_status(study_id: str):
    """Get detailed status of a specific study"""
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)
        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")

@@ -354,7 +432,7 @@ async def get_study_status(study_id: str):
async def get_optimization_history(study_id: str, limit: Optional[int] = None):
    """Get optimization history (all trials)"""
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)
        results_dir = get_results_dir(study_dir)
        study_db = results_dir / "study.db"
        history_file = results_dir / "optimization_history_incremental.json"
@@ -493,7 +571,7 @@ async def get_optimization_history(study_id: str, limit: Optional[int] = None):
async def get_pruning_history(study_id: str):
    """Get pruning diagnostics from Optuna database or legacy JSON file"""
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)
        results_dir = get_results_dir(study_dir)
        study_db = results_dir / "study.db"
        pruning_file = results_dir / "pruning_history.json"
@@ -593,7 +671,7 @@ def _infer_objective_unit(objective: Dict) -> str:
async def get_study_metadata(study_id: str):
    """Read optimization_config.json for objectives, design vars, units (Protocol 13)"""
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)
        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")

@@ -631,7 +709,7 @@ async def get_study_metadata(study_id: str):
async def get_optimizer_state(study_id: str):
    """Read realtime optimizer state from intelligent_optimizer/ (Protocol 13)"""
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)
        results_dir = get_results_dir(study_dir)
        state_file = results_dir / "intelligent_optimizer" / "optimizer_state.json"

@@ -652,7 +730,7 @@ async def get_optimizer_state(study_id: str):
async def get_pareto_front(study_id: str):
    """Get Pareto-optimal solutions for multi-objective studies (Protocol 13)"""
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)
        results_dir = get_results_dir(study_dir)
        study_db = results_dir / "study.db"

@@ -696,7 +774,7 @@ async def get_pareto_front(study_id: str):
async def get_nn_pareto_front(study_id: str):
    """Get NN surrogate Pareto front from nn_pareto_front.json"""
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)
        results_dir = get_results_dir(study_dir)
        nn_pareto_file = results_dir / "nn_pareto_front.json"

@@ -741,7 +819,7 @@ async def get_nn_pareto_front(study_id: str):
async def get_nn_optimization_state(study_id: str):
    """Get NN optimization state/summary from nn_optimization_state.json"""
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)
        results_dir = get_results_dir(study_dir)
        nn_state_file = results_dir / "nn_optimization_state.json"

@@ -858,7 +936,7 @@ async def convert_study_mesh(study_id: str):
    Creates a web-viewable 3D model with FEA results as vertex colors
    """
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)
        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")

@@ -896,7 +974,7 @@ async def get_mesh_file(study_id: str, filename: str):
    if '..' in filename or '/' in filename or '\\' in filename:
        raise HTTPException(status_code=400, detail="Invalid filename")

    study_dir = STUDIES_DIR / study_id
    study_dir = resolve_study_path(study_id)
    visualization_dir = study_dir / "3_visualization"

    file_path = visualization_dir / filename
@@ -936,7 +1014,7 @@ async def get_optuna_dashboard_url(study_id: str):
    sqlite:///studies/{study_id}/2_results/study.db
    """
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)
        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")

@@ -991,7 +1069,7 @@ async def generate_report(
        Information about the generated report including download URL
    """
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)
        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")

@@ -1048,7 +1126,7 @@ async def download_report(study_id: str, filename: str):
    if '..' in filename or '/' in filename or '\\' in filename:
        raise HTTPException(status_code=400, detail="Invalid filename")

    study_dir = STUDIES_DIR / study_id
    study_dir = resolve_study_path(study_id)
    results_dir = get_results_dir(study_dir)

    file_path = results_dir / filename
@@ -1093,7 +1171,7 @@ async def get_console_output(study_id: str, lines: int = 200):
        JSON with console output lines
    """
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)

        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")
@@ -1158,7 +1236,7 @@ async def get_study_report(study_id: str):
        JSON with the markdown content
    """
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)

        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")
@@ -1200,7 +1278,7 @@ async def get_study_readme(study_id: str):
        JSON with the markdown content
    """
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)

        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")
@@ -1257,6 +1335,71 @@ async def get_study_readme(study_id: str):
        raise HTTPException(status_code=500, detail=f"Failed to read README: {str(e)}")


@router.get("/studies/{study_id}/image/{image_path:path}")
async def get_study_image(study_id: str, image_path: str):
    """
    Serve images from a study directory.

    Supports images in:
    - study_dir/image.png
    - study_dir/1_setup/image.png
    - study_dir/3_results/image.png
    - study_dir/assets/image.png

    Args:
        study_id: Study identifier
        image_path: Relative path to the image within the study

    Returns:
        FileResponse with the image
    """
    try:
        study_dir = resolve_study_path(study_id)

        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")

        # Sanitize path to prevent directory traversal
        image_path = image_path.replace('..', '').lstrip('/')

        # Try multiple locations for the image
        possible_paths = [
            study_dir / image_path,
            study_dir / "1_setup" / image_path,
            study_dir / "3_results" / image_path,
            study_dir / "2_results" / image_path,
            study_dir / "assets" / image_path,
        ]

        image_file = None
        for path in possible_paths:
            if path.exists() and path.is_file():
                image_file = path
                break

        if image_file is None:
            raise HTTPException(status_code=404, detail=f"Image not found: {image_path}")

        # Determine media type
        suffix = image_file.suffix.lower()
        media_types = {
            '.png': 'image/png',
            '.jpg': 'image/jpeg',
            '.jpeg': 'image/jpeg',
            '.gif': 'image/gif',
            '.svg': 'image/svg+xml',
            '.webp': 'image/webp',
        }
        media_type = media_types.get(suffix, 'application/octet-stream')

        return FileResponse(image_file, media_type=media_type)

    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to serve image: {str(e)}")


@router.get("/studies/{study_id}/config")
async def get_study_config(study_id: str):
    """
@@ -1269,7 +1412,7 @@ async def get_study_config(study_id: str):
        JSON with the complete configuration
    """
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)

        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")
@@ -1306,7 +1449,7 @@ _running_processes: Dict[str, int] = {}


def _find_optimization_process(study_id: str) -> Optional[psutil.Process]:
    """Find a running optimization process for a given study"""
    study_dir = STUDIES_DIR / study_id
    study_dir = resolve_study_path(study_id)

    for proc in psutil.process_iter(['pid', 'name', 'cmdline', 'cwd']):
        try:
@@ -1335,7 +1478,7 @@ async def get_process_status(study_id: str):
        JSON with process status (is_running, pid, iteration counts)
    """
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)

        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")
@@ -1430,7 +1573,7 @@ async def start_optimization(study_id: str, request: StartOptimizationRequest =
        JSON with process info
    """
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)

        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")
@@ -1505,7 +1648,7 @@ async def stop_optimization(study_id: str, request: StopRequest = None):
        request = StopRequest()

    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)

        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")
@@ -1606,7 +1749,7 @@ async def validate_optimization(study_id: str, request: ValidateRequest = None):
        JSON with process info
    """
    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)

        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")
@@ -1684,7 +1827,7 @@ async def launch_optuna_dashboard(study_id: str):
        return s.connect_ex(('localhost', port)) == 0

    try:
        study_dir = STUDIES_DIR / study_id
        study_dir = resolve_study_path(study_id)

        if not study_dir.exists():
            raise HTTPException(status_code=404, detail=f"Study {study_id} not found")
@@ -1717,18 +1860,13 @@ async def launch_optuna_dashboard(study_id: str):
                "message": "Optuna dashboard already running on port 8081"
            }

        # Launch optuna-dashboard using Python script
        python_exe = sys.executable
        # Launch optuna-dashboard using CLI command (more robust than Python import)
        # Use absolute path with POSIX format for SQLite URL
        abs_db_path = study_db.absolute().as_posix()
        storage_url = f"sqlite:///{abs_db_path}"

        # Create a small Python script to run optuna-dashboard
        launch_script = f'''
from optuna_dashboard import run_server
run_server("{storage_url}", host="0.0.0.0", port={port})
'''
        cmd = [python_exe, "-c", launch_script]
        # Use optuna-dashboard CLI command directly
        cmd = ["optuna-dashboard", storage_url, "--port", str(port), "--host", "0.0.0.0"]

        # On Windows, use CREATE_NEW_PROCESS_GROUP and DETACHED_PROCESS flags
        import platform
@@ -1817,7 +1955,7 @@ async def get_model_files(study_id: str):
|
||||
JSON with list of model files and their paths
|
||||
"""
|
||||
try:
|
||||
study_dir = STUDIES_DIR / study_id
|
||||
study_dir = resolve_study_path(study_id)
|
||||
|
||||
if not study_dir.exists():
|
||||
raise HTTPException(status_code=404, detail=f"Study {study_id} not found")
|
||||
@@ -1896,7 +2034,7 @@ async def open_model_folder(study_id: str, folder_type: str = "model"):
|
||||
import platform
|
||||
|
||||
try:
|
||||
study_dir = STUDIES_DIR / study_id
|
||||
study_dir = resolve_study_path(study_id)
|
||||
|
||||
if not study_dir.exists():
|
||||
raise HTTPException(status_code=404, detail=f"Study {study_id} not found")
|
||||
@@ -1948,7 +2086,7 @@ async def open_model_folder(study_id: str, folder_type: str = "model"):
|
||||
async def get_best_solution(study_id: str):
|
||||
"""Get the best trial(s) for a study with improvement metrics"""
|
||||
try:
|
||||
study_dir = STUDIES_DIR / study_id
|
||||
study_dir = resolve_study_path(study_id)
|
||||
if not study_dir.exists():
|
||||
raise HTTPException(status_code=404, detail=f"Study '{study_id}' not found")
|
||||
|
||||
@@ -2081,7 +2219,7 @@ async def get_study_runs(study_id: str):
|
||||
This endpoint returns metrics for each sub-study.
|
||||
"""
|
||||
try:
|
||||
study_dir = STUDIES_DIR / study_id
|
||||
study_dir = resolve_study_path(study_id)
|
||||
if not study_dir.exists():
|
||||
raise HTTPException(status_code=404, detail=f"Study '{study_id}' not found")
|
||||
|
||||
@@ -2191,7 +2329,7 @@ async def update_study_config(study_id: str, request: UpdateConfigRequest):
|
||||
JSON with success status
|
||||
"""
|
||||
try:
|
||||
study_dir = STUDIES_DIR / study_id
|
||||
study_dir = resolve_study_path(study_id)
|
||||
|
||||
if not study_dir.exists():
|
||||
raise HTTPException(status_code=404, detail=f"Study {study_id} not found")
|
||||
@@ -2245,7 +2383,7 @@ async def get_zernike_available_trials(study_id: str):
|
||||
JSON with list of trial numbers that have iteration folders with OP2 files
|
||||
"""
|
||||
try:
|
||||
study_dir = STUDIES_DIR / study_id
|
||||
study_dir = resolve_study_path(study_id)
|
||||
if not study_dir.exists():
|
||||
raise HTTPException(status_code=404, detail=f"Study '{study_id}' not found")
|
||||
|
||||
@@ -2301,7 +2439,7 @@ async def get_trial_zernike(study_id: str, trial_number: int):
|
||||
JSON with HTML content for each comparison, or error if OP2 not found
|
||||
"""
|
||||
try:
|
||||
study_dir = STUDIES_DIR / study_id
|
||||
study_dir = resolve_study_path(study_id)
|
||||
if not study_dir.exists():
|
||||
raise HTTPException(status_code=404, detail=f"Study '{study_id}' not found")
|
||||
|
||||
@@ -2346,10 +2484,11 @@ async def get_trial_zernike(study_id: str, trial_number: int):
|
||||
|
||||
# Configuration
|
||||
N_MODES = 50
|
||||
AMP = 50.0 # Increased from 2.0 for better visibility
|
||||
PANCAKE = 5.0 # Reduced from 10.0 for more Z range
|
||||
AMP = 0.5 # Reduced deformation scaling (0.5x)
|
||||
PANCAKE = 3.0 # Z-axis range multiplier
|
||||
PLOT_DOWNSAMPLE = 5000 # Reduced for faster loading
|
||||
FILTER_LOW_ORDERS = 4
|
||||
COLORSCALE = 'Turbo' # Colorscale: 'RdBu_r', 'Viridis', 'Plasma', 'Turbo'
|
||||
|
||||
SUBCASE_MAP = {
|
||||
'1': '90', '2': '20', '3': '40', '4': '60',
|
||||
@@ -2465,7 +2604,7 @@ async def get_trial_zernike(study_id: str, trial_number: int):
|
||||
res_amp = AMP * Wp
|
||||
max_amp = float(np.max(np.abs(res_amp))) if res_amp.size else 1.0
|
||||
|
||||
# Create SURFACE mesh (not just points)
|
||||
# Create smooth shaded SURFACE mesh with lighting
|
||||
surface_trace = None
|
||||
try:
|
||||
tri = Triangulation(Xp, Yp)
|
||||
@@ -2475,11 +2614,25 @@ async def get_trial_zernike(study_id: str, trial_number: int):
|
||||
x=Xp.tolist(), y=Yp.tolist(), z=res_amp.tolist(),
|
||||
i=i_idx.tolist(), j=j_idx.tolist(), k=k_idx.tolist(),
|
||||
intensity=res_amp.tolist(),
|
||||
colorscale='RdBu',
|
||||
opacity=0.95,
|
||||
flatshading=True,
|
||||
colorscale=COLORSCALE,
|
||||
opacity=1.0,
|
||||
flatshading=False, # Smooth shading
|
||||
lighting=dict(
|
||||
ambient=0.4,
|
||||
diffuse=0.8,
|
||||
specular=0.3,
|
||||
roughness=0.5,
|
||||
fresnel=0.2
|
||||
),
|
||||
lightposition=dict(x=100, y=200, z=300),
|
||||
showscale=True,
|
||||
colorbar=dict(title="Residual (nm)", titleside='right', len=0.6)
|
||||
colorbar=dict(
|
||||
title=dict(text="Residual (nm)", side='right'),
|
||||
thickness=15,
|
||||
len=0.6,
|
||||
tickformat=".1f"
|
||||
),
|
||||
hovertemplate="X: %{x:.1f}<br>Y: %{y:.1f}<br>Residual: %{z:.2f} nm<extra></extra>"
|
||||
)
|
||||
except Exception as e:
|
||||
print(f"Triangulation failed: {e}")
|
||||
@@ -2534,14 +2687,39 @@ async def get_trial_zernike(study_id: str, trial_number: int):
|
||||
fig.add_trace(go.Scatter3d(
|
||||
x=Xp.tolist(), y=Yp.tolist(), z=res_amp.tolist(),
|
||||
mode='markers',
|
||||
marker=dict(size=2, color=res_amp.tolist(), colorscale='RdBu', showscale=True),
|
||||
marker=dict(size=2, color=res_amp.tolist(), colorscale=COLORSCALE, showscale=True),
|
||||
showlegend=False
|
||||
), row=1, col=1)
|
||||
|
||||
fig.update_scenes(
|
||||
camera=dict(eye=dict(x=0.8, y=0.8, z=0.6)),
|
||||
zaxis=dict(range=[-max_amp * PANCAKE, max_amp * PANCAKE]),
|
||||
aspectmode='data'
|
||||
camera=dict(
|
||||
eye=dict(x=1.2, y=1.2, z=0.8),
|
||||
up=dict(x=0, y=0, z=1)
|
||||
),
|
||||
xaxis=dict(
|
||||
title="X (mm)",
|
||||
showgrid=True,
|
||||
gridcolor='rgba(128,128,128,0.3)',
|
||||
showbackground=True,
|
||||
backgroundcolor='rgba(240,240,240,0.9)'
|
||||
),
|
||||
yaxis=dict(
|
||||
title="Y (mm)",
|
||||
showgrid=True,
|
||||
gridcolor='rgba(128,128,128,0.3)',
|
||||
showbackground=True,
|
||||
backgroundcolor='rgba(240,240,240,0.9)'
|
||||
),
|
||||
zaxis=dict(
|
||||
title="Residual (nm)",
|
||||
range=[-max_amp * PANCAKE, max_amp * PANCAKE],
|
||||
showgrid=True,
|
||||
gridcolor='rgba(128,128,128,0.3)',
|
||||
showbackground=True,
|
||||
backgroundcolor='rgba(230,230,250,0.9)'
|
||||
),
|
||||
aspectmode='manual',
|
||||
aspectratio=dict(x=1, y=1, z=0.4)
|
||||
)
|
||||
|
||||
# Row 2: RMS table with all metrics
|
||||
@@ -2805,7 +2983,7 @@ async def get_trial_zernike(study_id: str, trial_number: int):
|
||||
async def export_study_data(study_id: str, format: str):
|
||||
"""Export study data in various formats: csv, json, excel"""
|
||||
try:
|
||||
study_dir = STUDIES_DIR / study_id
|
||||
study_dir = resolve_study_path(study_id)
|
||||
if not study_dir.exists():
|
||||
raise HTTPException(status_code=404, detail=f"Study '{study_id}' not found")
|
||||
|
||||
|
||||
docs/protocols/system/SYS_16_STUDY_INSIGHTS.md (new file, 405 lines)
@@ -0,0 +1,405 @@
# SYS_16: Study Insights

**Version**: 1.0.0
**Status**: Active
**Purpose**: Physics-focused visualizations for FEA optimization results

---

## Overview

Study Insights provide **physics understanding** of optimization results through interactive 3D visualizations. Unlike the Analysis page (which shows optimizer metrics like convergence and Pareto fronts), Insights answer the question: **"What does this design actually look like?"**

### Analysis vs Insights

| Aspect | **Analysis** | **Insights** |
|--------|--------------|--------------|
| Focus | Optimization performance | Physics understanding |
| Questions | "Is the optimizer converging?" | "What does the best design look like?" |
| Data Source | `study.db` (trials, objectives) | Simulation outputs (OP2, mesh, fields) |
| Typical Plots | Convergence, Pareto, parameters | 3D surfaces, stress contours, mode shapes |
| When Used | During/after optimization | After a specific trial of interest |

---

## Available Insight Types

| Type ID | Name | Applicable To | Data Required |
|---------|------|---------------|---------------|
| `zernike_wfe` | Zernike WFE Analysis | Mirror, optics | OP2 with displacement subcases |
| `stress_field` | Stress Distribution | Structural, bracket, beam | OP2 with stress results |
| `modal` | Modal Analysis | Vibration, dynamic | OP2 with eigenvalue/eigenvector |
| `thermal` | Thermal Analysis | Thermo-structural | OP2 with temperature results |
| `design_space` | Design Space Explorer | All optimization studies | study.db with 5+ trials |

---

## Architecture

### Module Structure

```
optimization_engine/insights/
├── __init__.py        # Registry and public API
├── base.py            # StudyInsight base class, InsightConfig, InsightResult
├── zernike_wfe.py     # Mirror wavefront error visualization
├── stress_field.py    # Stress contour visualization
├── modal_analysis.py  # Mode shape visualization
├── thermal_field.py   # Temperature distribution
└── design_space.py    # Parameter-objective exploration
```

### Class Hierarchy

```python
StudyInsight (ABC)
├── ZernikeWFEInsight
├── StressFieldInsight
├── ModalInsight
├── ThermalInsight
└── DesignSpaceInsight
```
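Each subclass registers itself with the module registry via the `@register_insight` decorator defined in `base.py` (the registry portion of that file is not reproduced in this commit excerpt). A minimal sketch of how such a class registry and factory could work; the public names match the documented API, but the bodies are assumptions:

```python
from pathlib import Path
from typing import Dict, Optional


class InsightRegistry:
    """Maps insight_type strings to StudyInsight subclasses."""
    _types: Dict[str, type] = {}

    @classmethod
    def register(cls, insight_cls: type) -> type:
        # Keyed by the subclass's class-level insight_type attribute
        cls._types[insight_cls.insight_type] = insight_cls
        return insight_cls

    @classmethod
    def get(cls, insight_type: str) -> Optional[type]:
        return cls._types.get(insight_type)


def register_insight(insight_cls: type) -> type:
    """Class decorator: add the subclass to the central registry."""
    return InsightRegistry.register(insight_cls)


def get_insight(insight_type: str, study_path: Path):
    """Factory: instantiate the registered insight for a study, or None."""
    insight_cls = InsightRegistry.get(insight_type)
    return insight_cls(study_path) if insight_cls else None
```

This is why `__init__.py` must import every implementation module: importing the module runs the decorator, which is what populates the registry.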
### Key Classes

#### StudyInsight (Base Class)

```python
class StudyInsight(ABC):
    insight_type: str         # Unique identifier (e.g., 'zernike_wfe')
    name: str                 # Human-readable name
    description: str          # What this insight shows
    applicable_to: List[str]  # Study types this applies to

    def can_generate(self) -> bool:
        """Check if required data exists."""

    def generate(self, config: InsightConfig) -> InsightResult:
        """Generate visualization."""

    def generate_html(self, trial_id=None, **kwargs) -> Path:
        """Generate standalone HTML file."""

    def get_plotly_data(self, trial_id=None, **kwargs) -> dict:
        """Get Plotly figure for dashboard embedding."""
```

#### InsightConfig

```python
@dataclass
class InsightConfig:
    trial_id: Optional[int] = None     # Which trial to visualize
    colorscale: str = 'Turbo'          # Plotly colorscale
    amplification: float = 1.0         # Deformation scale factor
    lighting: bool = True              # 3D lighting effects
    output_dir: Optional[Path] = None  # Where to save HTML
    extra: Dict[str, Any] = field(default_factory=dict)  # Type-specific config
```

#### InsightResult

```python
@dataclass
class InsightResult:
    success: bool
    html_path: Optional[Path] = None      # Generated HTML file
    plotly_figure: Optional[dict] = None  # Figure for dashboard
    summary: Optional[dict] = None        # Key metrics
    error: Optional[str] = None           # Error message if failed
```

---

## Usage

### Python API

```python
from optimization_engine.insights import get_insight, list_available_insights
from pathlib import Path

study_path = Path("studies/my_mirror_study")

# List what's available
available = list_available_insights(study_path)
for info in available:
    print(f"{info['type']}: {info['name']}")

# Generate specific insight
insight = get_insight('zernike_wfe', study_path)
if insight and insight.can_generate():
    result = insight.generate()
    print(f"Generated: {result.html_path}")
    print(f"40-20 Filtered RMS: {result.summary['40_vs_20_filtered_rms']:.2f} nm")
```

### CLI

```bash
# List all insight types
python -m optimization_engine.insights list

# Generate all available insights for a study
python -m optimization_engine.insights generate studies/my_study

# Generate specific insight
python -m optimization_engine.insights generate studies/my_study --type zernike_wfe
```

### With Configuration

```python
from optimization_engine.insights import get_insight, InsightConfig

insight = get_insight('stress_field', study_path)
config = InsightConfig(
    colorscale='Hot',
    extra={
        'yield_stress': 250,  # MPa
        'stress_unit': 'MPa'
    }
)
result = insight.generate(config)
```

---

## Insight Type Details

### 1. Zernike WFE Analysis (`zernike_wfe`)

**Purpose**: Visualize wavefront error for mirror optimization with Zernike polynomial decomposition.

**Generates**: 3 HTML files
- `zernike_*_40_vs_20.html` - 40° vs 20° relative WFE
- `zernike_*_60_vs_20.html` - 60° vs 20° relative WFE
- `zernike_*_90_mfg.html` - 90° manufacturing (absolute)

**Configuration**:
```python
config = InsightConfig(
    amplification=0.5,  # Reduce deformation scaling
    colorscale='Turbo',
    extra={
        'n_modes': 50,
        'filter_low_orders': 4,  # Remove piston, tip, tilt, defocus
        'disp_unit': 'mm',
    }
)
```

**Summary Output**:
```python
{
    '40_vs_20_filtered_rms': 45.2,  # nm
    '60_vs_20_filtered_rms': 78.3,  # nm
    '90_mfg_filtered_rms': 120.5,   # nm
    '90_optician_workload': 89.4,   # nm (J1-J3 filtered)
}
```

### 2. Stress Distribution (`stress_field`)

**Purpose**: Visualize Von Mises stress distribution with hot spot identification.

**Configuration**:
```python
config = InsightConfig(
    colorscale='Hot',
    extra={
        'yield_stress': 250,  # MPa - shows safety factor
        'stress_unit': 'MPa',
    }
)
```

**Summary Output**:
```python
{
    'max_stress': 187.5,    # MPa
    'mean_stress': 45.2,    # MPa
    'p95_stress': 120.3,    # 95th percentile
    'p99_stress': 165.8,    # 99th percentile
    'safety_factor': 1.33,  # If yield_stress provided
}
```

### 3. Modal Analysis (`modal`)

**Purpose**: Visualize natural frequencies and mode shapes.

**Configuration**:
```python
config = InsightConfig(
    amplification=50.0,  # Mode shape scale
    extra={
        'n_modes': 20,   # Number of modes to show
        'show_mode': 1,  # Which mode shape to display
    }
)
```

**Summary Output**:
```python
{
    'n_modes': 20,
    'first_frequency_hz': 125.4,
    'frequencies_hz': [125.4, 287.8, 312.5, ...],
}
```

### 4. Thermal Analysis (`thermal`)

**Purpose**: Visualize temperature distribution and gradients.

**Configuration**:
```python
config = InsightConfig(
    colorscale='Thermal',
    extra={
        'temp_unit': 'K',  # or 'C', 'F'
    }
)
```

**Summary Output**:
```python
{
    'max_temp': 423.5,   # K
    'min_temp': 293.0,   # K
    'mean_temp': 345.2,  # K
    'temp_range': 130.5, # K
}
```

### 5. Design Space Explorer (`design_space`)

**Purpose**: Visualize parameter-objective relationships from optimization trials.

**Configuration**:
```python
config = InsightConfig(
    extra={
        'primary_objective': 'filtered_rms',  # Color by this objective
    }
)
```

**Summary Output**:
```python
{
    'n_trials': 100,
    'n_params': 4,
    'n_objectives': 2,
    'best_trial_id': 47,
    'best_params': {'p1': 0.5, 'p2': 1.2, ...},
    'best_values': {'filtered_rms': 45.2, 'mass': 2.34},
}
```

---

## Output Directory

Insights are saved to `{study}/3_insights/`:

```
studies/my_study/
├── 1_setup/
├── 2_results/
└── 3_insights/  # Created by insights module
    ├── zernike_20241220_143022_40_vs_20.html
    ├── zernike_20241220_143022_60_vs_20.html
    ├── zernike_20241220_143022_90_mfg.html
    ├── stress_20241220_143025.html
    └── design_space_20241220_143030.html
```

---

## Creating New Insight Types

To add a new insight type (power_user+):

### 1. Create the insight class

```python
# optimization_engine/insights/my_insight.py

from .base import StudyInsight, InsightConfig, InsightResult, register_insight

@register_insight
class MyInsight(StudyInsight):
    insight_type = "my_insight"
    name = "My Custom Insight"
    description = "Description of what it shows"
    applicable_to = ["structural", "all"]

    def can_generate(self) -> bool:
        # Check if required data exists
        return self.results_path.exists()

    def _generate(self, config: InsightConfig) -> InsightResult:
        # Generate visualization
        # ... build Plotly figure ...

        html_path = config.output_dir / f"my_insight_{timestamp}.html"
        html_path.write_text(fig.to_html(...))

        return InsightResult(
            success=True,
            html_path=html_path,
            summary={'key_metric': value}
        )
```

### 2. Register in `__init__.py`

```python
from .my_insight import MyInsight
```

### 3. Test

```bash
python -m optimization_engine.insights list
# Should show "my_insight" in the list
```

---

## Dashboard Integration (Future)

The insights module is designed for future dashboard integration:

```python
# Backend API endpoints
@app.get("/api/study/{study_name}/insights")
def list_study_insights(study_name: str):
    study_path = Path(f"studies/{study_name}")
    return list_available_insights(study_path)

@app.post("/api/study/{study_name}/insights/{type}/generate")
def generate_insight(study_name: str, type: str, config: dict = {}):
    insight = get_insight(type, Path(f"studies/{study_name}"))
    result = insight.generate(InsightConfig(**config))
    return {
        'success': result.success,
        'html_path': str(result.html_path),
        'summary': result.summary
    }

@app.get("/api/study/{study_name}/insights/{type}/plotly")
def get_insight_plotly(study_name: str, type: str):
    insight = get_insight(type, Path(f"studies/{study_name}"))
    return insight.get_plotly_data()
```

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0.0 | 2024-12-20 | Initial release with 5 insight types |
@@ -656,7 +656,8 @@ class ZernikeExtractor:
     def extract_relative(
         self,
         target_subcase: str,
-        reference_subcase: str
+        reference_subcase: str,
+        include_coefficients: bool = False
     ) -> Dict[str, Any]:
         """
         Extract Zernike metrics relative to a reference subcase.
@@ -666,6 +667,7 @@ class ZernikeExtractor:
         Args:
             target_subcase: Subcase to analyze
             reference_subcase: Reference subcase to subtract
+            include_coefficients: Whether to include all Zernike coefficients

         Returns:
             Dict with relative RMS metrics and aberrations
@@ -711,7 +713,7 @@ class ZernikeExtractor:
         )
         aberrations = compute_aberration_magnitudes(rms_result['coefficients'])

-        return {
+        result = {
             'target_subcase': target_subcase,
             'reference_subcase': reference_subcase,
             'relative_global_rms_nm': rms_result['global_rms_nm'],
@@ -721,6 +723,12 @@ class ZernikeExtractor:
             **{f'relative_{k}': v for k, v in aberrations.items()}
         }

+        if include_coefficients:
+            result['coefficients'] = rms_result['coefficients'].tolist()
+            result['coefficient_labels'] = [zernike_label(j) for j in range(1, self.n_modes + 1)]
+
+        return result

     def extract_all_subcases(
         self,
         reference_subcase: Optional[str] = '20'
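The `include_coefficients` flag above extends the result dict only on request, so existing callers see an unchanged payload while the dashboard can opt in to the full coefficient vector. A self-contained sketch of that pattern; `zernike_label` here is a stand-in (the real Noll-index labeling lives elsewhere in the extractor module), and the RMS line is a placeholder metric, not the extractor's actual computation:

```python
import numpy as np

def zernike_label(j: int) -> str:
    """Stand-in for the real Noll-index label helper."""
    return f"Z{j}"

def build_result(coefficients: np.ndarray, n_modes: int,
                 include_coefficients: bool = False) -> dict:
    """Base metrics are always present; raw coefficients only on request."""
    result = {
        # Placeholder metric; the real extractor derives RMS from the fit
        'relative_global_rms_nm': float(np.linalg.norm(coefficients)),
    }
    if include_coefficients:
        # .tolist() makes the numpy array JSON-serializable for the API
        result['coefficients'] = coefficients.tolist()
        result['coefficient_labels'] = [zernike_label(j) for j in range(1, n_modes + 1)]
    return result
```

Converting with `.tolist()` before returning matters for the dashboard route: FastAPI's JSON encoder cannot serialize a raw `numpy.ndarray`.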
optimization_engine/insights/__init__.py (new file, 233 lines)
@@ -0,0 +1,233 @@
"""
|
||||
Atomizer Study Insights Module
|
||||
|
||||
Provides physics-focused visualizations for FEA optimization results.
|
||||
Unlike the Analysis page (optimizer-centric), Insights show the engineering
|
||||
reality of specific designs through interactive 3D visualizations.
|
||||
|
||||
Architecture:
|
||||
- StudyInsight: Abstract base class for all insight types
|
||||
- InsightRegistry: Central registry for available insight types
|
||||
- Each insight generates standalone HTML or Plotly data for dashboard
|
||||
|
||||
Available Insight Types:
|
||||
-----------------------
|
||||
| Type ID | Name | Description |
|
||||
|----------------|------------------------|------------------------------------------|
|
||||
| zernike_wfe | Zernike WFE Analysis | 3D wavefront error with Zernike decomp |
|
||||
| stress_field | Stress Distribution | Von Mises stress contours |
|
||||
| modal | Modal Analysis | Natural frequencies and mode shapes |
|
||||
| thermal | Thermal Analysis | Temperature distribution |
|
||||
| design_space | Design Space Explorer | Parameter-objective relationships |
|
||||
|
||||
Quick Start:
|
||||
-----------
|
||||
```python
|
||||
from optimization_engine.insights import get_insight, list_available_insights
|
||||
from pathlib import Path
|
||||
|
||||
study_path = Path("studies/my_study")
|
||||
|
||||
# List available insights for a study
|
||||
available = list_available_insights(study_path)
|
||||
print(available)
|
||||
|
||||
# Generate a specific insight
|
||||
insight = get_insight('zernike_wfe', study_path)
|
||||
if insight and insight.can_generate():
|
||||
result = insight.generate()
|
||||
print(f"Generated: {result.html_path}")
|
||||
print(f"Summary: {result.summary}")
|
||||
```
|
||||
|
||||
CLI Usage:
|
||||
---------
|
||||
```bash
|
||||
# Generate all available insights for a study
|
||||
python -m optimization_engine.insights generate studies/my_study
|
||||
|
||||
# Generate specific insight type
|
||||
python -m optimization_engine.insights generate studies/my_study --type zernike_wfe
|
||||
|
||||
# List available insight types
|
||||
python -m optimization_engine.insights list
|
||||
```
|
||||
"""
|
||||
|
||||
# Import base classes first
|
||||
from .base import (
|
||||
StudyInsight,
|
||||
InsightConfig,
|
||||
InsightResult,
|
||||
InsightRegistry,
|
||||
register_insight,
|
||||
get_insight,
|
||||
list_insights,
|
||||
list_available_insights,
|
||||
)
|
||||
|
||||
# Import insight implementations (triggers @register_insight decorators)
|
||||
from .zernike_wfe import ZernikeWFEInsight
|
||||
from .stress_field import StressFieldInsight
|
||||
from .modal_analysis import ModalInsight
|
||||
from .thermal_field import ThermalInsight
|
||||
from .design_space import DesignSpaceInsight
|
||||
|
||||
# Public API
|
||||
__all__ = [
|
||||
# Base classes
|
||||
'StudyInsight',
|
||||
'InsightConfig',
|
||||
'InsightResult',
|
||||
'InsightRegistry',
|
||||
'register_insight',
|
||||
|
||||
# API functions
|
||||
'get_insight',
|
||||
'list_insights',
|
||||
'list_available_insights',
|
||||
|
||||
# Insight implementations
|
||||
'ZernikeWFEInsight',
|
||||
'StressFieldInsight',
|
||||
'ModalInsight',
|
||||
'ThermalInsight',
|
||||
'DesignSpaceInsight',
|
||||
]
|
||||
|
||||
|
||||
def generate_all_insights(study_path, output_dir=None):
|
||||
"""
|
||||
Generate all available insights for a study.
|
||||
|
||||
Args:
|
||||
study_path: Path to study directory
|
||||
output_dir: Optional output directory (defaults to study/3_insights/)
|
||||
|
||||
Returns:
|
||||
List of InsightResult objects
|
||||
"""
|
||||
from pathlib import Path
|
||||
study_path = Path(study_path)
|
||||
|
||||
results = []
|
||||
available = list_available_insights(study_path)
|
||||
|
||||
for info in available:
|
||||
insight = get_insight(info['type'], study_path)
|
||||
if insight:
|
||||
config = InsightConfig()
|
||||
if output_dir:
|
||||
config.output_dir = Path(output_dir)
|
||||
|
||||
result = insight.generate(config)
|
||||
results.append({
|
||||
'type': info['type'],
|
||||
'name': info['name'],
|
||||
'result': result
|
||||
})
|
||||
|
||||
return results
|
||||
|
||||
|
||||
# CLI entry point
|
||||
if __name__ == '__main__':
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
def print_usage():
|
||||
print("Atomizer Study Insights")
|
||||
print("=" * 50)
|
||||
print()
|
||||
print("Usage:")
|
||||
print(" python -m optimization_engine.insights list")
|
||||
print(" python -m optimization_engine.insights generate <study_path> [--type TYPE]")
|
||||
print()
|
||||
print("Commands:")
|
||||
print(" list - List all registered insight types")
|
||||
print(" generate - Generate insights for a study")
|
||||
print()
|
||||
print("Options:")
|
||||
print(" --type TYPE Generate only the specified insight type")
|
||||
print()
|
||||
|
||||
if len(sys.argv) < 2:
|
||||
print_usage()
|
||||
sys.exit(0)
|
||||
|
||||
command = sys.argv[1]
|
||||
|
||||
if command == 'list':
|
||||
print("\nRegistered Insight Types:")
|
||||
print("-" * 60)
|
||||
for info in list_insights():
|
||||
print(f" {info['type']:15} - {info['name']}")
|
||||
print(f" {info['description']}")
|
||||
print(f" Applies to: {', '.join(info['applicable_to'])}")
|
||||
print()
|
||||
|
||||
elif command == 'generate':
|
||||
if len(sys.argv) < 3:
|
||||
print("Error: Missing study path")
|
||||
print_usage()
|
||||
sys.exit(1)
|
||||
|
||||
study_path = Path(sys.argv[2])
|
||||
if not study_path.exists():
|
||||
print(f"Error: Study path does not exist: {study_path}")
|
||||
sys.exit(1)
|
||||
|
||||
# Parse options
|
||||
insight_type = None
|
||||
for i, arg in enumerate(sys.argv[3:], 3):
|
||||
if arg == '--type' and i + 1 < len(sys.argv):
|
||||
insight_type = sys.argv[i + 1]
|
||||
|
||||
print(f"\nGenerating insights for: {study_path}")
|
||||
print("-" * 60)
|
||||
|
||||
if insight_type:
|
||||
# Generate specific type
|
||||
insight = get_insight(insight_type, study_path)
|
||||
if insight is None:
|
||||
print(f"Error: Unknown insight type: {insight_type}")
|
||||
sys.exit(1)
|
||||
|
||||
if not insight.can_generate():
|
||||
print(f"Cannot generate {insight_type}: required data not found")
|
||||
sys.exit(1)
|
||||
|
||||
result = insight.generate()
|
||||
if result.success:
|
||||
print(f"Generated: {result.html_path}")
|
||||
if result.summary:
|
||||
print(f"Summary: {result.summary}")
|
||||
else:
|
||||
print(f"Error: {result.error}")
|
||||
else:
|
||||
# Generate all available
|
||||
available = list_available_insights(study_path)
|
||||
if not available:
|
||||
print("No insights available for this study")
|
||||
sys.exit(0)
|
||||
|
||||
print(f"Found {len(available)} available insight(s)")
|
||||
print()
|
||||
|
||||
for info in available:
|
||||
print(f"Generating {info['name']}...")
|
||||
insight = get_insight(info['type'], study_path)
|
||||
result = insight.generate()
|
||||
|
||||
if result.success:
|
||||
print(f" Created: {result.html_path}")
|
||||
else:
|
||||
print(f" Error: {result.error}")
|
||||
|
||||
print()
|
||||
print("Done!")
|
||||
|
||||
else:
|
||||
print(f"Unknown command: {command}")
|
||||
print_usage()
|
||||
sys.exit(1)
|
||||
optimization_engine/insights/base.py (new file, 336 lines)
@@ -0,0 +1,336 @@
"""
|
||||
Study Insights - Base Classes and Infrastructure
|
||||
|
||||
Study Insights provide physics-focused visualizations for optimization results.
|
||||
Unlike Analysis (optimizer-centric), Insights show the engineering reality
|
||||
of specific designs.
|
||||
|
||||
Architecture:
|
||||
- StudyInsight: Abstract base class for all insight types
|
||||
- InsightRegistry: Central registry for available insight types
|
||||
- Each insight can generate standalone HTML or Plotly data for dashboard
|
||||
|
||||
Usage:
|
||||
from optimization_engine.insights import get_insight, list_insights
|
||||
|
||||
# Get specific insight
|
||||
insight = get_insight('zernike_wfe')
|
||||
if insight.can_generate(study_path):
|
||||
html_path = insight.generate_html(study_path, trial_id=47)
|
||||
plotly_data = insight.get_plotly_data(study_path, trial_id=47)
|
||||
"""
|
||||
|
||||
from abc import ABC, abstractmethod
|
||||
from dataclasses import dataclass, field
|
||||
from pathlib import Path
from typing import Any, Dict, List, Optional, Type
import json


@dataclass
class InsightConfig:
    """Configuration for an insight instance."""
    trial_id: Optional[int] = None  # Specific trial to visualize (None = best)
    colorscale: str = 'Turbo'
    output_dir: Optional[Path] = None  # Where to save HTML (None = study/3_insights/)

    # Visual settings
    amplification: float = 1.0  # Deformation scale factor
    lighting: bool = True  # 3D lighting effects

    # Type-specific config (passed through)
    extra: Dict[str, Any] = field(default_factory=dict)


@dataclass
class InsightResult:
    """Result from generating an insight."""
    success: bool
    html_path: Optional[Path] = None
    plotly_figure: Optional[Dict[str, Any]] = None  # Plotly figure as dict
    summary: Optional[Dict[str, Any]] = None  # Key metrics
    error: Optional[str] = None


class StudyInsight(ABC):
    """
    Abstract base class for study-specific physics visualizations.

    Each insight type provides:
    - Detection: Can this insight be generated for this study?
    - HTML generation: Standalone interactive report
    - Plotly data: For embedding in dashboard
    - Summary: Key metrics extracted

    Subclasses must implement:
    - insight_type: Unique identifier (e.g., 'zernike_wfe')
    - name: Human-readable name
    - description: What this insight shows
    - applicable_to: List of study types this applies to
    - can_generate(): Check if study has required data
    - _generate(): Core generation logic
    """

    # Class-level metadata (override in subclasses)
    insight_type: str = "base"
    name: str = "Base Insight"
    description: str = "Abstract base insight"
    applicable_to: List[str] = []  # e.g., ['mirror', 'structural', 'all']

    # Required files/data patterns
    required_files: List[str] = []  # e.g., ['*.op2', '*.bdf']

    def __init__(self, study_path: Path):
        """
        Initialize insight for a specific study.

        Args:
            study_path: Path to study directory (studies/{name}/)
        """
        self.study_path = Path(study_path)
        self.setup_path = self.study_path / "1_setup"
        self.results_path = self.study_path / "2_results"
        self.insights_path = self.study_path / "3_insights"

        # Load study config if available
        self.config = self._load_study_config()

    def _load_study_config(self) -> Dict[str, Any]:
        """Load optimization_config.json if it exists."""
        config_path = self.setup_path / "optimization_config.json"
        if config_path.exists():
            with open(config_path) as f:
                return json.load(f)
        return {}

    @abstractmethod
    def can_generate(self) -> bool:
        """
        Check if this insight can be generated for the study.

        Returns:
            True if all required data is available
        """
        pass

    @abstractmethod
    def _generate(self, config: InsightConfig) -> InsightResult:
        """
        Core generation logic. Implemented by subclasses.

        Args:
            config: Insight configuration

        Returns:
            InsightResult with HTML path and/or Plotly data
        """
        pass

    def generate(self, config: Optional[InsightConfig] = None) -> InsightResult:
        """
        Generate the insight visualization.

        Args:
            config: Optional configuration (uses defaults if None)

        Returns:
            InsightResult with generated content
        """
        if config is None:
            config = InsightConfig()

        # Ensure output directory exists
        if config.output_dir is None:
            config.output_dir = self.insights_path
        config.output_dir.mkdir(parents=True, exist_ok=True)

        # Check prerequisites
        if not self.can_generate():
            return InsightResult(
                success=False,
                error=f"Cannot generate {self.name}: required data not found"
            )

        try:
            return self._generate(config)
        except Exception as e:
            return InsightResult(
                success=False,
                error=f"Error generating {self.name}: {str(e)}"
            )

    def generate_html(
        self,
        trial_id: Optional[int] = None,
        **kwargs
    ) -> Optional[Path]:
        """
        Convenience method to generate standalone HTML.

        Args:
            trial_id: Specific trial to visualize (None = best)
            **kwargs: Additional config options

        Returns:
            Path to generated HTML file, or None on failure
        """
        config = InsightConfig(trial_id=trial_id, extra=kwargs)
        result = self.generate(config)
        return result.html_path if result.success else None

    def get_plotly_data(
        self,
        trial_id: Optional[int] = None,
        **kwargs
    ) -> Optional[Dict[str, Any]]:
        """
        Get Plotly figure data for dashboard embedding.

        Args:
            trial_id: Specific trial to visualize (None = best)
            **kwargs: Additional config options

        Returns:
            Plotly figure as dictionary, or None on failure
        """
        config = InsightConfig(trial_id=trial_id, extra=kwargs)
        result = self.generate(config)
        return result.plotly_figure if result.success else None

    def get_summary(self, trial_id: Optional[int] = None) -> Optional[Dict[str, Any]]:
        """
        Get key metrics summary without full visualization.

        Args:
            trial_id: Specific trial (None = best)

        Returns:
            Dictionary of key metrics
        """
        config = InsightConfig(trial_id=trial_id)
        result = self.generate(config)
        return result.summary if result.success else None


class InsightRegistry:
    """
    Central registry for available insight types.

    Usage:
        registry = InsightRegistry()
        registry.register(ZernikeWFEInsight)

        # Get insight for a study
        insight = registry.get('zernike_wfe', study_path)

        # List available insights for a study
        available = registry.list_available(study_path)
    """

    _instance = None
    _insights: Dict[str, Type[StudyInsight]] = {}

    def __new__(cls):
        """Singleton pattern."""
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._insights = {}
        return cls._instance

    def register(self, insight_class: Type[StudyInsight]) -> None:
        """
        Register an insight type.

        Args:
            insight_class: StudyInsight subclass to register
        """
        self._insights[insight_class.insight_type] = insight_class

    def get(self, insight_type: str, study_path: Path) -> Optional[StudyInsight]:
        """
        Get an insight instance for a study.

        Args:
            insight_type: Registered insight type ID
            study_path: Path to study directory

        Returns:
            Configured insight instance, or None if not found
        """
        if insight_type not in self._insights:
            return None
        return self._insights[insight_type](study_path)

    def list_all(self) -> List[Dict[str, Any]]:
        """
        List all registered insight types.

        Returns:
            List of insight metadata dictionaries
        """
        return [
            {
                'type': cls.insight_type,
                'name': cls.name,
                'description': cls.description,
                'applicable_to': cls.applicable_to
            }
            for cls in self._insights.values()
        ]

    def list_available(self, study_path: Path) -> List[Dict[str, Any]]:
        """
        List insights that can be generated for a specific study.

        Args:
            study_path: Path to study directory

        Returns:
            List of available insight metadata
        """
        available = []
        for insight_type, cls in self._insights.items():
            try:
                insight = cls(study_path)
                if insight.can_generate():
                    available.append({
                        'type': insight_type,
                        'name': cls.name,
                        'description': cls.description
                    })
            except Exception:
                pass  # Skip insights that fail to initialize
        return available


# Global registry instance
_registry = InsightRegistry()


def register_insight(insight_class: Type[StudyInsight]) -> Type[StudyInsight]:
    """
    Decorator to register an insight class.

    Usage:
        @register_insight
        class MyInsight(StudyInsight):
            insight_type = 'my_insight'
            ...
    """
    _registry.register(insight_class)
    return insight_class


def get_insight(insight_type: str, study_path: Path) -> Optional[StudyInsight]:
    """Get an insight instance by type."""
    return _registry.get(insight_type, study_path)


def list_insights() -> List[Dict[str, Any]]:
    """List all registered insight types."""
    return _registry.list_all()


def list_available_insights(study_path: Path) -> List[Dict[str, Any]]:
    """List insights available for a specific study."""
    return _registry.list_available(study_path)
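The decorator-plus-singleton registry above is the core extension point of the module. The following is a minimal, self-contained sketch of that registration flow, reduced to a plain dict so it runs standalone; `DemoInsight` and its fields are hypothetical stand-ins for a real `StudyInsight` subclass, not part of this commit:

```python
from pathlib import Path
from typing import Dict, Type

# Stand-in for the module-level registry (reduced to a plain dict).
_registry: Dict[str, Type] = {}

def register_insight(cls):
    """Decorator: index the class by its insight_type, return it unchanged."""
    _registry[cls.insight_type] = cls
    return cls

@register_insight
class DemoInsight:  # hypothetical subclass standing in for StudyInsight
    insight_type = "demo"
    name = "Demo Insight"

    def __init__(self, study_path: Path):
        self.study_path = Path(study_path)

    def can_generate(self) -> bool:
        return True  # a real insight would check for required files here

def get_insight(insight_type: str, study_path: Path):
    """Instantiate a registered insight for a study, or None if unknown."""
    cls = _registry.get(insight_type)
    return cls(study_path) if cls else None

insight = get_insight("demo", Path("studies/example"))
print(insight.name)            # Demo Insight
print(insight.can_generate())  # True
```

Because registration happens at class-definition time, simply importing an insight module (as the package's `zernike_wfe`, `stress_field`, etc. do via `@register_insight`) is enough to make it discoverable.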
372
optimization_engine/insights/design_space.py
Normal file
@@ -0,0 +1,372 @@
"""
Design Space Insight

Provides interactive visualization of the design space explored during optimization.
Shows parameter relationships, objective landscapes, and design evolution.

This insight bridges optimization metrics (from Analysis) with physics understanding,
showing how design parameters affect the physical objectives.

Applicable to: All optimization studies with completed trials.
"""

from pathlib import Path
from datetime import datetime
from typing import Dict, Any, List, Optional, Tuple
import sqlite3
import json
import numpy as np

from .base import StudyInsight, InsightConfig, InsightResult, register_insight

# Lazy imports
_plotly_loaded = False
_go = None
_make_subplots = None


def _load_dependencies():
    """Lazy load heavy dependencies."""
    global _plotly_loaded, _go, _make_subplots
    if not _plotly_loaded:
        import plotly.graph_objects as go
        from plotly.subplots import make_subplots
        _go = go
        _make_subplots = make_subplots
        _plotly_loaded = True


@register_insight
class DesignSpaceInsight(StudyInsight):
    """
    Design space exploration visualization.

    Shows:
    - Parallel coordinates plot of parameters vs objectives
    - Scatter matrix of parameter relationships
    - 3D parameter-objective landscape
    - Best design summary with physics interpretation
    """

    insight_type = "design_space"
    name = "Design Space Explorer"
    description = "Interactive parameter-objective relationship visualization"
    applicable_to = ["all"]  # Works with any optimization study
    required_files = []  # Requires study.db, not OP2

    def __init__(self, study_path: Path):
        super().__init__(study_path)
        self.db_path = self.results_path / "study.db"
        self._trials: Optional[List[Dict]] = None
        self._params: Optional[List[str]] = None
        self._objectives: Optional[List[str]] = None

    def can_generate(self) -> bool:
        """Check if study.db exists with trial data."""
        if not self.db_path.exists():
            return False

        try:
            conn = sqlite3.connect(str(self.db_path))
            cursor = conn.cursor()
            cursor.execute("SELECT COUNT(*) FROM trials WHERE state = 'COMPLETE'")
            count = cursor.fetchone()[0]
            conn.close()
            return count >= 5  # Need at least 5 trials
        except Exception:
            return False

    def _load_data(self):
        """Load trial data from study.db."""
        if self._trials is not None:
            return

        conn = sqlite3.connect(str(self.db_path))
        cursor = conn.cursor()

        # Get completed trials
        cursor.execute("""
            SELECT trial_id, params, values, state
            FROM trials
            WHERE state = 'COMPLETE'
            ORDER BY trial_id
        """)

        self._trials = []
        self._params = None
        self._objectives = None

        for row in cursor.fetchall():
            trial_id, params_json, values_json, state = row

            try:
                params = json.loads(params_json) if params_json else {}
                values = json.loads(values_json) if values_json else {}
            except json.JSONDecodeError:
                continue

            # Flatten nested values
            flat_values = {}
            for k, v in values.items():
                if isinstance(v, dict):
                    flat_values.update(v)
                else:
                    flat_values[k] = v

            self._trials.append({
                'trial_id': trial_id,
                'params': params,
                'values': flat_values,
            })

            # Extract param and objective names from first trial
            if self._params is None:
                self._params = list(params.keys())
            if self._objectives is None:
                self._objectives = list(flat_values.keys())

        conn.close()

    def _generate(self, config: InsightConfig) -> InsightResult:
        """Generate design space visualization."""
        self._load_data()

        if not self._trials or len(self._trials) < 5:
            return InsightResult(success=False,
                                 error=f"Need at least 5 trials, found: {len(self._trials or [])}")

        _load_dependencies()

        # Configuration
        colorscale = config.extra.get('colorscale', 'Viridis')
        primary_objective = config.extra.get('primary_objective', None)

        # Use first objective if not specified
        if primary_objective is None and self._objectives:
            primary_objective = self._objectives[0]

        # Build data arrays
        n_trials = len(self._trials)
        param_data = {p: [] for p in self._params}
        obj_data = {o: [] for o in self._objectives}
        trial_ids = []

        for trial in self._trials:
            trial_ids.append(trial['trial_id'])
            for p in self._params:
                param_data[p].append(trial['params'].get(p, np.nan))
            for o in self._objectives:
                obj_data[o].append(trial['values'].get(o, np.nan))

        # Convert to arrays
        for p in self._params:
            param_data[p] = np.array(param_data[p])
        for o in self._objectives:
            obj_data[o] = np.array(obj_data[o])

        # Find best trial
        if primary_objective and primary_objective in obj_data:
            obj_values = obj_data[primary_objective]
            valid_mask = ~np.isnan(obj_values)
            if np.any(valid_mask):
                best_idx = np.nanargmin(obj_values)
                best_trial = self._trials[best_idx]
                best_value = obj_values[best_idx]
            else:
                best_trial = None
                best_value = None
        else:
            best_trial = None
            best_value = None

        # Build visualization
        n_params = len(self._params)
        n_objs = len(self._objectives)

        # Layout: 2x2 grid
        fig = _make_subplots(
            rows=2, cols=2,
            specs=[
                [{"type": "parcoords", "colspan": 2}, None],
                [{"type": "scatter3d" if n_params >= 2 else "xy"},
                 {"type": "table"}]
            ],
            row_heights=[0.55, 0.45],
            subplot_titles=[
                "<b>Parallel Coordinates - Design Space</b>",
                "<b>Parameter Landscape</b>",
                "<b>Best Design</b>"
            ]
        )

        # 1. Parallel coordinates
        dimensions = []

        # Add parameters
        for p in self._params:
            values = param_data[p]
            if not np.all(np.isnan(values)):
                dimensions.append(dict(
                    label=p,
                    values=values,
                    range=[float(np.nanmin(values)), float(np.nanmax(values))]
                ))

        # Add objectives
        for o in self._objectives:
            values = obj_data[o]
            if not np.all(np.isnan(values)):
                dimensions.append(dict(
                    label=o,
                    values=values,
                    range=[float(np.nanmin(values)), float(np.nanmax(values))]
                ))

        if dimensions:
            # Color by primary objective
            color_values = obj_data.get(primary_objective, trial_ids)
            if isinstance(color_values, list):
                color_values = np.array(color_values)

            fig.add_trace(_go.Parcoords(
                line=dict(
                    color=color_values,
                    colorscale=colorscale,
                    showscale=True,
                    colorbar=dict(title=primary_objective or "Trial", thickness=15)
                ),
                dimensions=dimensions,
            ), row=1, col=1)

        # 2. 3D Parameter landscape (first 2 params vs primary objective)
        if n_params >= 2 and primary_objective:
            x_param = self._params[0]
            y_param = self._params[1]
            z_values = obj_data.get(primary_objective, [])

            fig.add_trace(_go.Scatter3d(
                x=param_data[x_param],
                y=param_data[y_param],
                z=z_values,
                mode='markers',
                marker=dict(
                    size=6,
                    color=z_values,
                    colorscale=colorscale,
                    opacity=0.8,
                    showscale=False,
                ),
                text=[f"Trial {tid}" for tid in trial_ids],
                hovertemplate=(
                    f"{x_param}: %{{x:.3f}}<br>"
                    f"{y_param}: %{{y:.3f}}<br>"
                    f"{primary_objective}: %{{z:.4f}}<br>"
                    "%{text}<extra></extra>"
                ),
            ), row=2, col=1)

            # Highlight best point
            if best_trial:
                fig.add_trace(_go.Scatter3d(
                    x=[best_trial['params'].get(x_param)],
                    y=[best_trial['params'].get(y_param)],
                    z=[best_value],
                    mode='markers',
                    marker=dict(size=12, color='red', symbol='diamond'),
                    name='Best',
                    showlegend=True,
                ), row=2, col=1)

            fig.update_scenes(
                xaxis_title=x_param,
                yaxis_title=y_param,
                zaxis_title=primary_objective,
            )
        elif n_params >= 1 and primary_objective:
            # 2D scatter
            x_param = self._params[0]
            z_values = obj_data.get(primary_objective, [])

            fig.add_trace(_go.Scatter(
                x=param_data[x_param],
                y=z_values,
                mode='markers',
                marker=dict(size=8, color=z_values, colorscale=colorscale),
            ), row=2, col=1)

            fig.update_xaxes(title_text=x_param, row=2, col=1)
            fig.update_yaxes(title_text=primary_objective, row=2, col=1)

        # 3. Best design table
        if best_trial:
            labels = ["<b>Metric</b>", "<b>Value</b>"]

            # Combine params and objectives
            table_labels = ["Trial ID"] + self._params + self._objectives
            table_values = [str(best_trial['trial_id'])]

            for p in self._params:
                val = best_trial['params'].get(p, 'N/A')
                table_values.append(f"{val:.4f}" if isinstance(val, (int, float)) else str(val))

            for o in self._objectives:
                val = best_trial['values'].get(o, 'N/A')
                table_values.append(f"{val:.4f}" if isinstance(val, (int, float)) else str(val))

            fig.add_trace(_go.Table(
                header=dict(values=labels,
                            fill_color='#1f2937', font=dict(color='white')),
                cells=dict(values=[table_labels, table_values],
                           fill_color='#374151', font=dict(color='white'))
            ), row=2, col=2)
        else:
            fig.add_trace(_go.Table(
                header=dict(values=["<b>Info</b>"],
                            fill_color='#1f2937', font=dict(color='white')),
                cells=dict(values=[["No valid trials found"]],
                           fill_color='#374151', font=dict(color='white'))
            ), row=2, col=2)

        # Layout
        fig.update_layout(
            width=1500, height=1000,
            paper_bgcolor='#111827', plot_bgcolor='#1f2937',
            font=dict(color='white'),
            title=dict(
                text=f"<b>Atomizer Design Space Explorer</b><br>"
                     f"<sub>{n_trials} trials, {n_params} parameters, {n_objs} objectives</sub>",
                x=0.5, font=dict(size=18)
            ),
        )

        # Save HTML
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        output_dir = config.output_dir or self.insights_path
        output_dir.mkdir(parents=True, exist_ok=True)

        html_path = output_dir / f"design_space_{timestamp}.html"
        html_path.write_text(
            fig.to_html(include_plotlyjs='cdn', full_html=True),
            encoding='utf-8'
        )

        # Summary
        summary = {
            'n_trials': n_trials,
            'n_params': n_params,
            'n_objectives': n_objs,
            'parameters': self._params,
            'objectives': self._objectives,
        }

        if best_trial:
            summary['best_trial_id'] = best_trial['trial_id']
            summary['best_params'] = best_trial['params']
            summary['best_values'] = best_trial['values']

        return InsightResult(
            success=True,
            html_path=html_path,
            plotly_figure=fig.to_dict(),
            summary=summary
        )
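A small detail worth isolating from `_load_data` above: trial objective values stored in `study.db` may nest one level (e.g. grouped objectives), and the loader flattens them before use. A self-contained sketch of that flattening step; the JSON keys (`wfe`, `rms_nm`, `pv_nm`, `mass_kg`) are hypothetical examples, not the study schema:

```python
import json

# One trial's `values` column as it might come back from study.db:
# nested objective groups are hoisted one level up before plotting.
values_json = '{"wfe": {"rms_nm": 12.4, "pv_nm": 60.1}, "mass_kg": 3.2}'
values = json.loads(values_json)

flat_values = {}
for k, v in values.items():
    if isinstance(v, dict):
        flat_values.update(v)   # hoist nested objectives to the top level
    else:
        flat_values[k] = v      # scalar objectives pass through unchanged

print(sorted(flat_values))  # ['mass_kg', 'pv_nm', 'rms_nm']
```

Note the flattening is single-level by design: a dict nested two levels deep would survive as a dict value, so objective names are assumed to be unique across groups.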
347
optimization_engine/insights/modal_analysis.py
Normal file
@@ -0,0 +1,347 @@
"""
Modal Analysis Insight

Provides visualization of natural frequencies and mode shapes from FEA results.
Shows animated mode shapes, frequency spectrum, and modal participation factors.

Applicable to: Dynamic/vibration optimization studies.
"""

from pathlib import Path
from datetime import datetime
from typing import Dict, Any, List, Optional, Tuple
import numpy as np

from .base import StudyInsight, InsightConfig, InsightResult, register_insight

# Lazy imports
_plotly_loaded = False
_go = None
_make_subplots = None
_Triangulation = None
_OP2 = None
_BDF = None


def _load_dependencies():
    """Lazy load heavy dependencies."""
    global _plotly_loaded, _go, _make_subplots, _Triangulation, _OP2, _BDF
    if not _plotly_loaded:
        import plotly.graph_objects as go
        from plotly.subplots import make_subplots
        from matplotlib.tri import Triangulation
        from pyNastran.op2.op2 import OP2
        from pyNastran.bdf.bdf import BDF
        _go = go
        _make_subplots = make_subplots
        _Triangulation = Triangulation
        _OP2 = OP2
        _BDF = BDF
        _plotly_loaded = True


@register_insight
class ModalInsight(StudyInsight):
    """
    Modal analysis visualization.

    Shows:
    - Natural frequency spectrum (bar chart)
    - Mode shape visualization (3D deformed mesh)
    - Mode description table
    - Frequency vs mode number plot
    """

    insight_type = "modal"
    name = "Modal Analysis"
    description = "Natural frequencies and mode shapes visualization"
    applicable_to = ["modal", "vibration", "dynamic", "all"]
    required_files = ["*.op2"]

    def __init__(self, study_path: Path):
        super().__init__(study_path)
        self.op2_path: Optional[Path] = None
        self.geo_path: Optional[Path] = None
        self._node_geo: Optional[Dict] = None
        self._eigenvectors: Optional[Dict] = None
        self._frequencies: Optional[List] = None

    def can_generate(self) -> bool:
        """Check if OP2 file with eigenvalue/eigenvector data exists."""
        search_paths = [
            self.results_path,
            self.study_path / "2_iterations",
            self.setup_path / "model",
        ]

        for search_path in search_paths:
            if not search_path.exists():
                continue
            op2_files = list(search_path.glob("**/*solution*.op2"))
            if not op2_files:
                op2_files = list(search_path.glob("**/*.op2"))
            if op2_files:
                self.op2_path = max(op2_files, key=lambda p: p.stat().st_mtime)
                break

        if self.op2_path is None:
            return False

        # Try to find geometry
        try:
            self.geo_path = self._find_geometry_file(self.op2_path)
        except FileNotFoundError:
            pass

        # Verify modal data exists
        try:
            _load_dependencies()
            op2 = _OP2()
            op2.read_op2(str(self.op2_path))
            return bool(op2.eigenvectors)
        except Exception:
            return False

    def _find_geometry_file(self, op2_path: Path) -> Path:
        """Find BDF/DAT geometry file."""
        folder = op2_path.parent
        base = op2_path.stem

        for ext in ['.dat', '.bdf']:
            cand = folder / (base + ext)
            if cand.exists():
                return cand

        for f in folder.iterdir():
            if f.suffix.lower() in ['.dat', '.bdf']:
                return f

        raise FileNotFoundError(f"No geometry file found for {op2_path}")

    def _load_data(self):
        """Load geometry and modal data from OP2."""
        if self._eigenvectors is not None:
            return

        _load_dependencies()

        # Load geometry if available
        if self.geo_path and self.geo_path.exists():
            bdf = _BDF()
            bdf.read_bdf(str(self.geo_path))
            self._node_geo = {int(nid): node.get_position()
                              for nid, node in bdf.nodes.items()}
        else:
            self._node_geo = {}

        # Load modal data
        op2 = _OP2()
        op2.read_op2(str(self.op2_path))

        self._eigenvectors = {}
        self._frequencies = []

        for key, eig in op2.eigenvectors.items():
            # Get frequencies
            if hasattr(eig, 'modes') and hasattr(eig, 'cycles'):
                modes = eig.modes
                freqs = eig.cycles  # Frequencies in Hz
            elif hasattr(eig, 'eigrs'):
                # Eigenvalues (radians/sec)^2
                eigrs = eig.eigrs
                freqs = np.sqrt(np.abs(eigrs)) / (2 * np.pi)
                modes = list(range(1, len(freqs) + 1))
            else:
                continue

            for i, (mode, freq) in enumerate(zip(modes, freqs)):
                self._frequencies.append({
                    'mode': int(mode),
                    'frequency_hz': float(freq),
                })

            # Get mode shapes
            if hasattr(eig, 'data'):
                data = eig.data
                ngt = eig.node_gridtype.astype(int)
                node_ids = ngt if ngt.ndim == 1 else ngt[:, 0]

                for mode_idx, mode_num in enumerate(modes):
                    if data.ndim == 3:
                        mode_data = data[mode_idx]
                    else:
                        mode_data = data

                    self._eigenvectors[int(mode_num)] = {
                        'node_ids': node_ids,
                        'displacements': mode_data.copy(),
                    }

        # Sort frequencies by mode number
        self._frequencies.sort(key=lambda x: x['mode'])

    def _generate(self, config: InsightConfig) -> InsightResult:
        """Generate modal analysis visualization."""
        self._load_data()

        if not self._frequencies:
            return InsightResult(success=False, error="No modal data found in OP2")

        _load_dependencies()

        # Configuration
        n_modes_show = config.extra.get('n_modes', 20)
        mode_to_show = config.extra.get('show_mode', 1)  # Which mode shape to display
        deform_scale = config.amplification if config.amplification != 1.0 else 50.0

        # Limit to available modes
        freq_data = self._frequencies[:n_modes_show]
        modes = [f['mode'] for f in freq_data]
        frequencies = [f['frequency_hz'] for f in freq_data]

        # Build visualization
        fig = _make_subplots(
            rows=2, cols=2,
            specs=[
                [{"type": "scene"}, {"type": "xy"}],
                [{"type": "xy"}, {"type": "table"}]
            ],
            subplot_titles=[
                f"<b>Mode {mode_to_show} Shape</b>",
                "<b>Natural Frequencies</b>",
                "<b>Frequency Spectrum</b>",
                "<b>Mode Summary</b>"
            ]
        )

        # Mode shape (3D)
        if self._node_geo and mode_to_show in self._eigenvectors:
            mode_data = self._eigenvectors[mode_to_show]
            node_ids = mode_data['node_ids']
            disps = mode_data['displacements']

            X, Y, Z = [], [], []
            Xd, Yd, Zd = [], [], []
            colors = []

            for nid, disp in zip(node_ids, disps):
                geo = self._node_geo.get(int(nid))
                if geo is None:
                    continue

                X.append(geo[0])
                Y.append(geo[1])
                Z.append(geo[2])

                # Deformed position
                Xd.append(geo[0] + deform_scale * disp[0])
                Yd.append(geo[1] + deform_scale * disp[1])
                Zd.append(geo[2] + deform_scale * disp[2])

                # Color by displacement magnitude
                mag = np.sqrt(disp[0]**2 + disp[1]**2 + disp[2]**2)
                colors.append(mag)

            X, Y, Z = np.array(X), np.array(Y), np.array(Z)
            Xd, Yd, Zd = np.array(Xd), np.array(Yd), np.array(Zd)
            colors = np.array(colors)

            # Try to create mesh
            try:
                tri = _Triangulation(Xd, Yd)
                if tri.triangles is not None and len(tri.triangles) > 0:
                    i, j, k = tri.triangles.T
                    fig.add_trace(_go.Mesh3d(
                        x=Xd, y=Yd, z=Zd,
                        i=i, j=j, k=k,
                        intensity=colors,
                        colorscale='Viridis',
                        opacity=0.9,
                        flatshading=False,
                        showscale=True,
                        colorbar=dict(title="Disp. Mag.", thickness=10, len=0.4)
                    ), row=1, col=1)
            except Exception:
                # Fallback: scatter
                fig.add_trace(_go.Scatter3d(
                    x=Xd, y=Yd, z=Zd,
                    mode='markers',
                    marker=dict(size=3, color=colors, colorscale='Viridis', showscale=True),
                ), row=1, col=1)

            fig.update_scenes(
                camera=dict(eye=dict(x=1.5, y=1.5, z=1.0)),
                xaxis=dict(title="X", showbackground=True),
                yaxis=dict(title="Y", showbackground=True),
                zaxis=dict(title="Z", showbackground=True),
            )

        # Frequency bar chart
        fig.add_trace(_go.Bar(
            x=modes,
            y=frequencies,
            marker_color='#3b82f6',
            text=[f"{f:.1f} Hz" for f in frequencies],
            textposition='outside',
            name='Frequency'
        ), row=1, col=2)

        fig.update_xaxes(title_text="Mode Number", row=1, col=2)
        fig.update_yaxes(title_text="Frequency (Hz)", row=1, col=2)

        # Frequency spectrum (log scale)
        fig.add_trace(_go.Scatter(
            x=modes,
            y=frequencies,
            mode='lines+markers',
            marker=dict(size=8, color='#22c55e'),
            line=dict(width=2, color='#22c55e'),
            name='Frequency'
        ), row=2, col=1)

        fig.update_xaxes(title_text="Mode Number", row=2, col=1)
        fig.update_yaxes(title_text="Frequency (Hz)", type='log', row=2, col=1)

        # Summary table
        mode_labels = [f"Mode {m}" for m in modes[:10]]
        freq_labels = [f"{f:.2f} Hz" for f in frequencies[:10]]

        fig.add_trace(_go.Table(
            header=dict(values=["<b>Mode</b>", "<b>Frequency</b>"],
                        fill_color='#1f2937', font=dict(color='white')),
            cells=dict(values=[mode_labels, freq_labels],
                       fill_color='#374151', font=dict(color='white'))
        ), row=2, col=2)

        # Layout
        fig.update_layout(
            width=1400, height=900,
            paper_bgcolor='#111827', plot_bgcolor='#1f2937',
            font=dict(color='white'),
            title=dict(text="<b>Atomizer Modal Analysis</b>",
                       x=0.5, font=dict(size=18)),
            showlegend=False
        )

        # Save HTML
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        output_dir = config.output_dir or self.insights_path
        output_dir.mkdir(parents=True, exist_ok=True)

        html_path = output_dir / f"modal_{timestamp}.html"
        html_path.write_text(
            fig.to_html(include_plotlyjs='cdn', full_html=True),
            encoding='utf-8'
        )

        return InsightResult(
            success=True,
            html_path=html_path,
            plotly_figure=fig.to_dict(),
            summary={
                'n_modes': len(self._frequencies),
                'frequencies_hz': frequencies,
                'first_frequency_hz': frequencies[0] if frequencies else None,
                'shown_mode': mode_to_show,
            }
        )
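One conversion in `_load_data` above deserves a note: when the eigenvector table exposes raw real eigenvalues (`eigrs`, in (rad/s)²) rather than `cycles` (Hz), frequencies are recovered as f = √|λ| / (2π). A minimal sketch of that fallback path; the eigenvalue magnitudes are made-up examples, not study data:

```python
import numpy as np

# Real eigenvalues in (rad/s)^2, as the eigrs fallback branch receives them.
eigrs = np.array([3.948e3, 9.870e4])          # hypothetical example values

# f = sqrt(|lambda|) / (2*pi); abs() guards against tiny negative
# eigenvalues from rigid-body or numerically noisy modes.
freqs = np.sqrt(np.abs(eigrs)) / (2 * np.pi)

print(np.round(freqs, 2))  # → [10. 50.]
```

Since this branch has no mode numbering of its own, the loader pairs the converted frequencies with sequential mode numbers starting at 1.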
361
optimization_engine/insights/stress_field.py
Normal file
@@ -0,0 +1,361 @@
"""
Stress Field Insight

Provides 3D visualization of stress distributions from FEA results.
Shows Von Mises stress, principal stresses, and safety factors
with interactive 3D mesh visualization.

Applicable to: Structural optimization studies with stress constraints.
"""

from pathlib import Path
from datetime import datetime
from typing import Dict, Any, List, Optional, Tuple
import numpy as np

from .base import StudyInsight, InsightConfig, InsightResult, register_insight

# Lazy imports
_plotly_loaded = False
_go = None
_make_subplots = None
_Triangulation = None
_OP2 = None
_BDF = None


def _load_dependencies():
    """Lazy load heavy dependencies."""
    global _plotly_loaded, _go, _make_subplots, _Triangulation, _OP2, _BDF
    if not _plotly_loaded:
        import plotly.graph_objects as go
        from plotly.subplots import make_subplots
        from matplotlib.tri import Triangulation
        from pyNastran.op2.op2 import OP2
        from pyNastran.bdf.bdf import BDF
        _go = go
        _make_subplots = make_subplots
        _Triangulation = Triangulation
        _OP2 = OP2
        _BDF = BDF
        _plotly_loaded = True


@register_insight
class StressFieldInsight(StudyInsight):
    """
    Stress field visualization for structural analysis.

    Shows:
    - 3D mesh colored by Von Mises stress
    - Stress distribution histogram
    - Hot spot identification
    - Safety factor visualization (if yield stress provided)
    """

    insight_type = "stress_field"
    name = "Stress Distribution"
    description = "3D stress contour plot with Von Mises and principal stresses"
    applicable_to = ["structural", "bracket", "beam", "all"]
    required_files = ["*.op2"]

    def __init__(self, study_path: Path):
        super().__init__(study_path)
        self.op2_path: Optional[Path] = None
        self.geo_path: Optional[Path] = None
        self._node_geo: Optional[Dict] = None
        self._stresses: Optional[Dict] = None

    def can_generate(self) -> bool:
        """Check if OP2 file with stress data exists."""
        search_paths = [
            self.results_path,
            self.study_path / "2_iterations",
            self.setup_path / "model",
        ]

        for search_path in search_paths:
            if not search_path.exists():
                continue
            op2_files = list(search_path.glob("**/*solution*.op2"))
            if not op2_files:
                op2_files = list(search_path.glob("**/*.op2"))
            if op2_files:
                self.op2_path = max(op2_files, key=lambda p: p.stat().st_mtime)
                break

        if self.op2_path is None:
            return False

        # Try to find geometry
        try:
            self.geo_path = self._find_geometry_file(self.op2_path)
        except FileNotFoundError:
            pass

        # Verify stress data exists
        try:
            _load_dependencies()
            op2 = _OP2()
            op2.read_op2(str(self.op2_path))
            return bool(op2.ctetra_stress or op2.chexa_stress or
                        op2.ctria3_stress or op2.cquad4_stress)
        except Exception:
            return False

    def _find_geometry_file(self, op2_path: Path) -> Path:
        """Find BDF/DAT geometry file."""
        folder = op2_path.parent
        base = op2_path.stem

        for ext in ['.dat', '.bdf']:
            cand = folder / (base + ext)
            if cand.exists():
                return cand

        for f in folder.iterdir():
            if f.suffix.lower() in ['.dat', '.bdf']:
                return f

        raise FileNotFoundError(f"No geometry file found for {op2_path}")

    def _load_data(self):
        """Load geometry and stress data from OP2."""
        if self._stresses is not None:
            return

        _load_dependencies()

        # Load geometry if available
        if self.geo_path and self.geo_path.exists():
            bdf = _BDF()
            bdf.read_bdf(str(self.geo_path))
            self._node_geo = {int(nid): node.get_position()
                              for nid, node in bdf.nodes.items()}
        else:
            self._node_geo = {}

        # Load stress data
        op2 = _OP2()
        op2.read_op2(str(self.op2_path))

        self._stresses = {}

        # Process solid element stresses (CTETRA, CHEXA)
        for stress_dict, elem_type in [(op2.ctetra_stress, 'CTETRA'),
                                       (op2.chexa_stress, 'CHEXA')]:
            for key, stress_obj in stress_dict.items():
                if hasattr(stress_obj, 'data'):
                    data = stress_obj.data
                    if data.ndim == 3:
                        data = data[0]  # First load case

                    # Extract Von Mises if available
                    if hasattr(stress_obj, 'ovm') or 'ovm' in dir(stress_obj):
                        ovm = stress_obj.ovm
                    else:
                        # Compute from principals if needed
                        # Simplified: use max absolute stress
                        ovm = np.max(np.abs(data), axis=-1) if data.ndim > 1 else data

                    element_ids = stress_obj.element if hasattr(stress_obj, 'element') else None

                    self._stresses[f'{elem_type}_{key}'] = {
                        'element_ids': element_ids,
                        'von_mises': ovm,
                        'data': data,
                    }

        # Process shell stresses (CTRIA3, CQUAD4)
        for stress_dict, elem_type in [(op2.ctria3_stress, 'CTRIA3'),
                                       (op2.cquad4_stress, 'CQUAD4')]:
            for key, stress_obj in stress_dict.items():
                if hasattr(stress_obj, 'data'):
                    data = stress_obj.data
                    if data.ndim == 3:
                        data = data[0]

                    if hasattr(stress_obj, 'ovm'):
                        ovm = stress_obj.ovm
                    else:
                        ovm = np.max(np.abs(data), axis=-1) if data.ndim > 1 else data

                    element_ids = stress_obj.element if hasattr(stress_obj, 'element') else None

                    self._stresses[f'{elem_type}_{key}'] = {
                        'element_ids': element_ids,
                        'von_mises': ovm,
                        'data': data,
                    }

    def _generate(self, config: InsightConfig) -> InsightResult:
        """Generate stress field visualization."""
        self._load_data()

        if not self._stresses:
            return InsightResult(success=False, error="No stress data found in OP2")

        _load_dependencies()

        # Configuration
        colorscale = config.extra.get('colorscale', 'Hot')
        yield_stress = config.extra.get('yield_stress', None)  # MPa
        stress_unit = config.extra.get('stress_unit', 'MPa')

        # Aggregate all stress data
        all_vm = []
        all_elem_ids = []
        for key, data in self._stresses.items():
            vm = data['von_mises']
            if isinstance(vm, np.ndarray):
                all_vm.extend(vm.flatten().tolist())
            if data['element_ids'] is not None:
                all_elem_ids.extend(data['element_ids'].flatten().tolist())

        all_vm = np.array(all_vm)
        max_stress = float(np.max(all_vm))
        mean_stress = float(np.mean(all_vm))
        p95_stress = float(np.percentile(all_vm, 95))
        p99_stress = float(np.percentile(all_vm, 99))

        # Build visualization
        fig = _make_subplots(
            rows=2, cols=2,
            specs=[
                [{"type": "scene", "colspan": 2}, None],
                [{"type": "xy"}, {"type": "table"}]
            ],
            row_heights=[0.65, 0.35],
            subplot_titles=[
                "<b>Von Mises Stress Distribution</b>",
                "<b>Stress Histogram</b>",
                "<b>Summary Statistics</b>"
            ]
        )

        # 3D stress field (if we have node geometry)
        if self._node_geo:
            # Get node coordinates
            node_ids = list(self._node_geo.keys())
            X = np.array([self._node_geo[nid][0] for nid in node_ids])
            Y = np.array([self._node_geo[nid][1] for nid in node_ids])
            Z = np.array([self._node_geo[nid][2] for nid in node_ids])

            # For now, use uniform stress coloring (would need element-to-node mapping)
            # This is a simplified visualization
            colors = np.random.choice(all_vm, size=len(node_ids), replace=True)

            try:
                tri = _Triangulation(X, Y)
                if tri.triangles is not None and len(tri.triangles) > 0:
                    i, j, k = tri.triangles.T
                    fig.add_trace(_go.Mesh3d(
                        x=X, y=Y, z=Z,
                        i=i, j=j, k=k,
                        intensity=colors,
                        colorscale=colorscale,
                        opacity=0.9,
                        flatshading=False,
                        lighting=dict(ambient=0.5, diffuse=0.7, specular=0.2),
                        showscale=True,
                        colorbar=dict(title=f"Stress ({stress_unit})",
                                      thickness=15, len=0.5)
                    ), row=1, col=1)
            except Exception:
                # Fallback: scatter plot
                fig.add_trace(_go.Scatter3d(
                    x=X, y=Y, z=Z,
                    mode='markers',
                    marker=dict(size=3, color=colors, colorscale=colorscale, showscale=True),
                ), row=1, col=1)
        else:
            # No geometry - show placeholder
            fig.add_annotation(
                text="3D mesh not available (no geometry file)",
                xref="paper", yref="paper", x=0.5, y=0.7,
                showarrow=False, font=dict(size=14, color='white')
            )

        # Configure 3D scene
        fig.update_scenes(
            camera=dict(eye=dict(x=1.5, y=1.5, z=1.0)),
            xaxis=dict(title="X", showbackground=True),
            yaxis=dict(title="Y", showbackground=True),
            zaxis=dict(title="Z", showbackground=True),
        )

        # Histogram
        fig.add_trace(_go.Histogram(
            x=all_vm,
            nbinsx=50,
            marker_color='#ef4444',
            opacity=0.8,
            name='Von Mises'
        ), row=2, col=1)

        # Add yield line if provided
        if yield_stress:
            fig.add_vline(x=yield_stress, line_dash="dash", line_color="yellow",
                          annotation_text=f"Yield: {yield_stress} {stress_unit}",
                          row=2, col=1)

        # Summary table
        stats_labels = [
            "Maximum Stress",
            "Mean Stress",
            "95th Percentile",
            "99th Percentile",
        ]
        stats_values = [
            f"{max_stress:.2f} {stress_unit}",
            f"{mean_stress:.2f} {stress_unit}",
            f"{p95_stress:.2f} {stress_unit}",
            f"{p99_stress:.2f} {stress_unit}",
        ]

        if yield_stress:
            safety_factor = yield_stress / max_stress if max_stress > 0 else float('inf')
            stats_labels.append("Safety Factor")
            stats_values.append(f"{safety_factor:.2f}")

        fig.add_trace(_go.Table(
            header=dict(values=["<b>Metric</b>", "<b>Value</b>"],
                        fill_color='#1f2937', font=dict(color='white')),
            cells=dict(values=[stats_labels, stats_values],
                       fill_color='#374151', font=dict(color='white'))
        ), row=2, col=2)

        # Layout
        fig.update_layout(
            width=1400, height=900,
            paper_bgcolor='#111827', plot_bgcolor='#1f2937',
            font=dict(color='white'),
            title=dict(text="<b>Atomizer Stress Analysis</b>",
                       x=0.5, font=dict(size=18)),
            showlegend=False
        )

        # Save HTML
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        output_dir = config.output_dir or self.insights_path
        output_dir.mkdir(parents=True, exist_ok=True)

        html_path = output_dir / f"stress_{timestamp}.html"
        html_path.write_text(
            fig.to_html(include_plotlyjs='cdn', full_html=True),
            encoding='utf-8'
        )

        return InsightResult(
            success=True,
            html_path=html_path,
            plotly_figure=fig.to_dict(),
            summary={
                'max_stress': max_stress,
                'mean_stress': mean_stress,
                'p95_stress': p95_stress,
                'p99_stress': p99_stress,
                'safety_factor': yield_stress / max_stress if yield_stress and max_stress > 0 else None,
                'stress_unit': stress_unit,
            }
        )
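The `_load_data` fallback above approximates Von Mises with the maximum absolute stress component when pyNastran's precomputed `ovm` column is absent. For comparison, the exact formula from the six stress-tensor components is sketched below; whether those six columns are available, and in what order, varies by element type in pyNastran, so this is illustrative rather than a drop-in replacement.

```python
import numpy as np


def von_mises_from_components(sxx, syy, szz, txy, tyz, txz):
    """Exact Von Mises stress from the six stress-tensor components."""
    return np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
                   + 3.0 * (txy ** 2 + tyz ** 2 + txz ** 2))


# Uniaxial tension: Von Mises equals the applied stress.
print(von_mises_from_components(100.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # → 100.0
```

The function is vectorized, so passing the per-element component arrays from `stress_obj.data` (when the layout is known) yields the whole Von Mises field in one call.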
323
optimization_engine/insights/thermal_field.py
Normal file
@@ -0,0 +1,323 @@
"""
Thermal Field Insight

Provides visualization of temperature distributions from thermal FEA results.
Shows temperature contours, gradients, and thermal statistics.

Applicable to: Thermal analysis and thermo-structural optimization studies.
"""

from pathlib import Path
from datetime import datetime
from typing import Dict, Any, List, Optional, Tuple
import numpy as np

from .base import StudyInsight, InsightConfig, InsightResult, register_insight

# Lazy imports
_plotly_loaded = False
_go = None
_make_subplots = None
_Triangulation = None
_OP2 = None
_BDF = None


def _load_dependencies():
    """Lazy load heavy dependencies."""
    global _plotly_loaded, _go, _make_subplots, _Triangulation, _OP2, _BDF
    if not _plotly_loaded:
        import plotly.graph_objects as go
        from plotly.subplots import make_subplots
        from matplotlib.tri import Triangulation
        from pyNastran.op2.op2 import OP2
        from pyNastran.bdf.bdf import BDF
        _go = go
        _make_subplots = make_subplots
        _Triangulation = Triangulation
        _OP2 = OP2
        _BDF = BDF
        _plotly_loaded = True


@register_insight
class ThermalInsight(StudyInsight):
    """
    Thermal field visualization.

    Shows:
    - 3D mesh colored by temperature
    - Temperature distribution histogram
    - Hot/cold spot identification
    - Temperature gradient visualization
    """

    insight_type = "thermal"
    name = "Thermal Analysis"
    description = "Temperature distribution and thermal gradients"
    applicable_to = ["thermal", "thermo-structural", "all"]
    required_files = ["*.op2"]

    def __init__(self, study_path: Path):
        super().__init__(study_path)
        self.op2_path: Optional[Path] = None
        self.geo_path: Optional[Path] = None
        self._node_geo: Optional[Dict] = None
        self._temperatures: Optional[Dict] = None

    def can_generate(self) -> bool:
        """Check if OP2 file with temperature data exists."""
        search_paths = [
            self.results_path,
            self.study_path / "2_iterations",
            self.setup_path / "model",
        ]

        for search_path in search_paths:
            if not search_path.exists():
                continue
            op2_files = list(search_path.glob("**/*solution*.op2"))
            if not op2_files:
                op2_files = list(search_path.glob("**/*.op2"))
            if op2_files:
                self.op2_path = max(op2_files, key=lambda p: p.stat().st_mtime)
                break

        if self.op2_path is None:
            return False

        # Try to find geometry
        try:
            self.geo_path = self._find_geometry_file(self.op2_path)
        except FileNotFoundError:
            pass

        # Verify temperature data exists
        try:
            _load_dependencies()
            op2 = _OP2()
            op2.read_op2(str(self.op2_path))
            # Check for temperature results (various possible attributes)
            return bool(hasattr(op2, 'temperatures') and op2.temperatures)
        except Exception:
            return False

    def _find_geometry_file(self, op2_path: Path) -> Path:
        """Find BDF/DAT geometry file."""
        folder = op2_path.parent
        base = op2_path.stem

        for ext in ['.dat', '.bdf']:
            cand = folder / (base + ext)
            if cand.exists():
                return cand

        for f in folder.iterdir():
            if f.suffix.lower() in ['.dat', '.bdf']:
                return f

        raise FileNotFoundError(f"No geometry file found for {op2_path}")

    def _load_data(self):
        """Load geometry and temperature data from OP2."""
        if self._temperatures is not None:
            return

        _load_dependencies()

        # Load geometry if available
        if self.geo_path and self.geo_path.exists():
            bdf = _BDF()
            bdf.read_bdf(str(self.geo_path))
            self._node_geo = {int(nid): node.get_position()
                              for nid, node in bdf.nodes.items()}
        else:
            self._node_geo = {}

        # Load temperature data
        op2 = _OP2()
        op2.read_op2(str(self.op2_path))

        self._temperatures = {}

        if hasattr(op2, 'temperatures'):
            for key, temp_obj in op2.temperatures.items():
                if hasattr(temp_obj, 'data'):
                    data = temp_obj.data
                    if data.ndim == 3:
                        data = data[0]  # First time step

                    ngt = temp_obj.node_gridtype.astype(int) if hasattr(temp_obj, 'node_gridtype') else None
                    node_ids = ngt if ngt is not None and ngt.ndim == 1 else (ngt[:, 0] if ngt is not None else None)

                    self._temperatures[str(key)] = {
                        'node_ids': node_ids,
                        'temperatures': data.flatten() if data.ndim > 1 else data,
                    }

    def _generate(self, config: InsightConfig) -> InsightResult:
        """Generate thermal field visualization."""
        self._load_data()

        if not self._temperatures:
            return InsightResult(success=False, error="No temperature data found in OP2")

        _load_dependencies()

        # Configuration
        colorscale = config.extra.get('colorscale', 'Thermal')
        temp_unit = config.extra.get('temp_unit', 'K')

        # Aggregate temperature data
        all_temps = []
        all_node_ids = []
        for key, data in self._temperatures.items():
            temps = data['temperatures']
            if isinstance(temps, np.ndarray):
                all_temps.extend(temps.flatten().tolist())
            if data['node_ids'] is not None:
                all_node_ids.extend(data['node_ids'].flatten().tolist())

        all_temps = np.array(all_temps)
        if len(all_temps) == 0:
            return InsightResult(success=False, error="No valid temperature values found")

        max_temp = float(np.max(all_temps))
        min_temp = float(np.min(all_temps))
        mean_temp = float(np.mean(all_temps))
        temp_range = max_temp - min_temp

        # Build visualization
        fig = _make_subplots(
            rows=2, cols=2,
            specs=[
                [{"type": "scene", "colspan": 2}, None],
                [{"type": "xy"}, {"type": "table"}]
            ],
            row_heights=[0.65, 0.35],
            subplot_titles=[
                "<b>Temperature Distribution</b>",
                "<b>Temperature Histogram</b>",
                "<b>Summary Statistics</b>"
            ]
        )

        # 3D temperature field
        if self._node_geo and all_node_ids:
            # Build node-to-temp mapping
            temp_map = {}
            for key, data in self._temperatures.items():
                if data['node_ids'] is not None:
                    for nid, temp in zip(data['node_ids'].flatten(), data['temperatures'].flatten()):
                        temp_map[int(nid)] = temp

            node_ids = list(self._node_geo.keys())
            X = np.array([self._node_geo[nid][0] for nid in node_ids])
            Y = np.array([self._node_geo[nid][1] for nid in node_ids])
            Z = np.array([self._node_geo[nid][2] for nid in node_ids])
            colors = np.array([temp_map.get(nid, mean_temp) for nid in node_ids])

            try:
                tri = _Triangulation(X, Y)
                if tri.triangles is not None and len(tri.triangles) > 0:
                    i, j, k = tri.triangles.T
                    fig.add_trace(_go.Mesh3d(
                        x=X, y=Y, z=Z,
                        i=i, j=j, k=k,
                        intensity=colors,
                        colorscale=colorscale,
                        opacity=0.95,
                        flatshading=False,
                        lighting=dict(ambient=0.5, diffuse=0.7, specular=0.2),
                        showscale=True,
                        colorbar=dict(title=f"Temp ({temp_unit})",
                                      thickness=15, len=0.5)
                    ), row=1, col=1)
            except Exception:
                fig.add_trace(_go.Scatter3d(
                    x=X, y=Y, z=Z,
                    mode='markers',
                    marker=dict(size=4, color=colors, colorscale=colorscale, showscale=True),
                ), row=1, col=1)
        else:
            fig.add_annotation(
                text="3D mesh not available",
                xref="paper", yref="paper", x=0.5, y=0.7,
                showarrow=False, font=dict(size=14, color='white')
            )

        fig.update_scenes(
            camera=dict(eye=dict(x=1.5, y=1.5, z=1.0)),
            xaxis=dict(title="X", showbackground=True),
            yaxis=dict(title="Y", showbackground=True),
            zaxis=dict(title="Z", showbackground=True),
        )

        # Temperature histogram
        fig.add_trace(_go.Histogram(
            x=all_temps,
            nbinsx=50,
            marker_color='#f97316',
            opacity=0.8,
            name='Temperature'
        ), row=2, col=1)

        fig.update_xaxes(title_text=f"Temperature ({temp_unit})", row=2, col=1)
        fig.update_yaxes(title_text="Count", row=2, col=1)

        # Summary table
        stats_labels = [
            "Maximum Temperature",
            "Minimum Temperature",
            "Mean Temperature",
            "Temperature Range",
            "Number of Nodes",
        ]
        stats_values = [
            f"{max_temp:.2f} {temp_unit}",
            f"{min_temp:.2f} {temp_unit}",
            f"{mean_temp:.2f} {temp_unit}",
            f"{temp_range:.2f} {temp_unit}",
            str(len(all_temps)),
        ]

        fig.add_trace(_go.Table(
            header=dict(values=["<b>Metric</b>", "<b>Value</b>"],
                        fill_color='#1f2937', font=dict(color='white')),
            cells=dict(values=[stats_labels, stats_values],
                       fill_color='#374151', font=dict(color='white'))
        ), row=2, col=2)

        # Layout
        fig.update_layout(
            width=1400, height=900,
            paper_bgcolor='#111827', plot_bgcolor='#1f2937',
            font=dict(color='white'),
            title=dict(text="<b>Atomizer Thermal Analysis</b>",
                       x=0.5, font=dict(size=18)),
            showlegend=False
        )

        # Save HTML
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        output_dir = config.output_dir or self.insights_path
        output_dir.mkdir(parents=True, exist_ok=True)

        html_path = output_dir / f"thermal_{timestamp}.html"
        html_path.write_text(
            fig.to_html(include_plotlyjs='cdn', full_html=True),
            encoding='utf-8'
        )

        return InsightResult(
            success=True,
            html_path=html_path,
            plotly_figure=fig.to_dict(),
            summary={
                'max_temp': max_temp,
                'min_temp': min_temp,
                'mean_temp': mean_temp,
                'temp_range': temp_range,
                'temp_unit': temp_unit,
            }
        )
697
optimization_engine/insights/zernike_wfe.py
Normal file
@@ -0,0 +1,697 @@
"""
Zernike Wavefront Error (WFE) Insight

Provides 3D surface visualization of mirror wavefront errors with
Zernike polynomial decomposition. Generates three views:
- 40 deg vs 20 deg (operational tilt comparison)
- 60 deg vs 20 deg (operational tilt comparison)
- 90 deg Manufacturing (absolute with optician workload metrics)

Applicable to: Mirror optimization studies with multi-subcase gravity loads.
"""

from pathlib import Path
from datetime import datetime
from typing import Dict, Any, List, Optional, Tuple
import numpy as np
from math import factorial
from numpy.linalg import LinAlgError

from .base import StudyInsight, InsightConfig, InsightResult, register_insight

# Lazy imports to avoid startup overhead
_plotly_loaded = False
_go = None
_make_subplots = None
_Triangulation = None
_OP2 = None
_BDF = None


def _load_dependencies():
    """Lazy load heavy dependencies."""
    global _plotly_loaded, _go, _make_subplots, _Triangulation, _OP2, _BDF
    if not _plotly_loaded:
        import plotly.graph_objects as go
        from plotly.subplots import make_subplots
        from matplotlib.tri import Triangulation
        from pyNastran.op2.op2 import OP2
        from pyNastran.bdf.bdf import BDF
        _go = go
        _make_subplots = make_subplots
        _Triangulation = Triangulation
        _OP2 = OP2
        _BDF = BDF
        _plotly_loaded = True


# ============================================================================
# Zernike Mathematics
# ============================================================================
def noll_indices(j: int) -> Tuple[int, int]:
    """Convert Noll index to (n, m) radial/azimuthal orders."""
    if j < 1:
        raise ValueError("Noll index j must be >= 1")
    count = 0
    n = 0
    while True:
        if n == 0:
            ms = [0]
        elif n % 2 == 0:
            ms = [0] + [m for k in range(1, n//2 + 1) for m in (-2*k, 2*k)]
        else:
            ms = [m for k in range(0, (n+1)//2) for m in (-(2*k+1), (2*k+1))]
        for m in ms:
            count += 1
            if count == j:
                return n, m
        n += 1


def zernike_noll(j: int, r: np.ndarray, th: np.ndarray) -> np.ndarray:
    """Evaluate Zernike polynomial j at (r, theta)."""
    n, m = noll_indices(j)
    R = np.zeros_like(r)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1)**s * factorial(n - s) /
             (factorial(s) *
              factorial((n + abs(m)) // 2 - s) *
              factorial((n - abs(m)) // 2 - s)))
        R += c * r**(n - 2*s)
    if m == 0:
        return R
    return R * (np.cos(m * th) if m > 0 else np.sin(-m * th))


def zernike_common_name(n: int, m: int) -> str:
    """Get common name for Zernike mode."""
    names = {
        (0, 0): "Piston", (1, -1): "Tilt X", (1, 1): "Tilt Y",
        (2, 0): "Defocus", (2, -2): "Astig 45°", (2, 2): "Astig 0°",
        (3, -1): "Coma X", (3, 1): "Coma Y", (3, -3): "Trefoil X", (3, 3): "Trefoil Y",
        (4, 0): "Primary Spherical", (4, -2): "Sec Astig X", (4, 2): "Sec Astig Y",
        (4, -4): "Quadrafoil X", (4, 4): "Quadrafoil Y",
        (5, -1): "Sec Coma X", (5, 1): "Sec Coma Y",
        (5, -3): "Sec Trefoil X", (5, 3): "Sec Trefoil Y",
        (5, -5): "Pentafoil X", (5, 5): "Pentafoil Y",
        (6, 0): "Sec Spherical",
    }
    return names.get((n, m), f"Z(n={n}, m={m})")


def zernike_label(j: int) -> str:
    """Get label for Zernike coefficient J{j}."""
    n, m = noll_indices(j)
    return f"J{j:02d} - {zernike_common_name(n, m)} (n={n}, m={m})"


def compute_zernike_coeffs(
    X: np.ndarray,
    Y: np.ndarray,
    vals: np.ndarray,
    n_modes: int,
    chunk_size: int = 100000
) -> Tuple[np.ndarray, float]:
    """Fit Zernike coefficients to WFE data."""
    Xc, Yc = X - np.mean(X), Y - np.mean(Y)
    R = float(np.max(np.hypot(Xc, Yc)))
    r = np.hypot(Xc / R, Yc / R).astype(np.float32)
    th = np.arctan2(Yc, Xc).astype(np.float32)
    mask = (r <= 1.0) & ~np.isnan(vals)
    if not np.any(mask):
        raise RuntimeError("No valid points inside unit disk.")
    idx = np.nonzero(mask)[0]
    m = int(n_modes)
    G = np.zeros((m, m), dtype=np.float64)
    h = np.zeros((m,), dtype=np.float64)
    v = vals.astype(np.float64)

    for start in range(0, len(idx), chunk_size):
        sl = idx[start:start + chunk_size]
        r_b, th_b, v_b = r[sl], th[sl], v[sl]
        Zb = np.column_stack([zernike_noll(j, r_b, th_b).astype(np.float32)
                              for j in range(1, m + 1)])
        G += (Zb.T @ Zb).astype(np.float64)
        h += (Zb.T @ v_b).astype(np.float64)
    try:
        coeffs = np.linalg.solve(G, h)
    except LinAlgError:
        coeffs = np.linalg.lstsq(G, h, rcond=None)[0]
    return coeffs, R
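`compute_zernike_coeffs` accumulates the normal equations `G c = h` in chunks and solves them once, which is equivalent to a least-squares fit of the Zernike basis to the sampled surface. The sketch below demonstrates the same (unnormalized) mode ordering on a synthetic pure-defocus surface, where the fit should recover the J4 coefficient exactly; it duplicates compact copies of the two math helpers so it runs standalone.

```python
from math import factorial
import numpy as np


def noll_indices(j):
    """Same mode ordering as the module above."""
    count, n = 0, 0
    while True:
        if n == 0:
            ms = [0]
        elif n % 2 == 0:
            ms = [0] + [m for k in range(1, n // 2 + 1) for m in (-2 * k, 2 * k)]
        else:
            ms = [m for k in range(0, (n + 1) // 2) for m in (-(2 * k + 1), 2 * k + 1)]
        for m in ms:
            count += 1
            if count == j:
                return n, m
        n += 1


def zernike(j, r, th):
    """Unnormalized Zernike polynomial, as in zernike_noll above."""
    n, m = noll_indices(j)
    R = np.zeros_like(r)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + abs(m)) // 2 - s)
                * factorial((n - abs(m)) // 2 - s)))
        R += c * r ** (n - 2 * s)
    if m == 0:
        return R
    return R * (np.cos(m * th) if m > 0 else np.sin(-m * th))


# Synthetic surface: pure defocus, W = 5 * (2 r^2 - 1), i.e. J4 in this ordering.
rng = np.random.default_rng(0)
r = np.sqrt(rng.uniform(0, 1, 2000))          # uniform over the unit disk
th = rng.uniform(-np.pi, np.pi, 2000)
W = 5.0 * (2 * r**2 - 1)

# Direct least-squares fit of the first 10 modes (no chunking needed here).
Z = np.column_stack([zernike(j, r, th) for j in range(1, 11)])
coeffs = np.linalg.lstsq(Z, W, rcond=None)[0]
print(round(coeffs[3], 6))  # → 5.0 (defocus recovered; other modes ~0)
```

Because `W` lies exactly in the span of the basis, the fit recovers the coefficient to machine precision; on real WFE data the residual after subtracting the fitted low orders is what `_compute_metrics` below reports as the filtered RMS.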


# ============================================================================
# Configuration Defaults
# ============================================================================
DEFAULT_CONFIG = {
    'n_modes': 50,
    'amp': 0.5,               # Visual deformation scale
    'pancake': 3.0,           # Z-axis range multiplier
    'plot_downsample': 10000,
    'filter_low_orders': 4,   # Piston, tip, tilt, defocus
    'colorscale': 'Turbo',
    'disp_unit': 'mm',
    'show_bar_chart': True,
}


@register_insight
class ZernikeWFEInsight(StudyInsight):
    """
    Zernike Wavefront Error visualization for mirror optimization.

    Generates interactive 3D surface plots showing:
    - Residual WFE after Zernike fit
    - Coefficient bar charts
    - RMS metrics tables
    - Manufacturing orientation analysis
    """

    insight_type = "zernike_wfe"
    name = "Zernike WFE Analysis"
    description = "3D wavefront error surface with Zernike decomposition"
    applicable_to = ["mirror", "optics", "wfe"]
    required_files = ["*.op2"]

    def __init__(self, study_path: Path):
        super().__init__(study_path)
        self.op2_path: Optional[Path] = None
        self.geo_path: Optional[Path] = None
        self._node_geo: Optional[Dict] = None
        self._displacements: Optional[Dict] = None

    def can_generate(self) -> bool:
        """Check if OP2 and geometry files exist."""
        # Look for OP2 in results or iterations
        search_paths = [
            self.results_path,
            self.study_path / "2_iterations",
            self.setup_path / "model",
        ]

        for search_path in search_paths:
            if not search_path.exists():
                continue
            op2_files = list(search_path.glob("**/*solution*.op2"))
            if not op2_files:
                op2_files = list(search_path.glob("**/*.op2"))
            if op2_files:
                self.op2_path = max(op2_files, key=lambda p: p.stat().st_mtime)
                break

        if self.op2_path is None:
            return False

        # Find geometry
        try:
            self.geo_path = self._find_geometry_file(self.op2_path)
            return True
        except FileNotFoundError:
            return False

    def _find_geometry_file(self, op2_path: Path) -> Path:
        """Find BDF/DAT geometry file for OP2."""
        folder = op2_path.parent
        base = op2_path.stem

        for ext in ['.dat', '.bdf']:
            cand = folder / (base + ext)
            if cand.exists():
                return cand

        for f in folder.iterdir():
            if f.suffix.lower() in ['.dat', '.bdf']:
                return f

        raise FileNotFoundError(f"No geometry file found for {op2_path}")

    def _load_data(self):
        """Load geometry and displacement data."""
        if self._node_geo is not None:
            return  # Already loaded

        _load_dependencies()

        # Read geometry
        bdf = _BDF()
        bdf.read_bdf(str(self.geo_path))
        self._node_geo = {int(nid): node.get_position()
                          for nid, node in bdf.nodes.items()}

        # Read displacements
        op2 = _OP2()
        op2.read_op2(str(self.op2_path))

        if not op2.displacements:
            raise RuntimeError("No displacement data in OP2")

        self._displacements = {}
        for key, darr in op2.displacements.items():
            data = darr.data
            dmat = data[0] if data.ndim == 3 else (data if data.ndim == 2 else None)
            if dmat is None:
                continue

            ngt = darr.node_gridtype.astype(int)
            node_ids = ngt if ngt.ndim == 1 else ngt[:, 0]
            isubcase = getattr(darr, 'isubcase', None)
            label = str(isubcase) if isubcase else str(key)

            self._displacements[label] = {
                'node_ids': node_ids.astype(int),
                'disp': dmat.copy()
            }

    def _build_wfe_arrays(
        self,
        label: str,
        disp_unit: str = 'mm'
    ) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray]:
        """Build X, Y, WFE arrays for a subcase."""
        nm_per_unit = 1e6 if disp_unit == 'mm' else 1e9

        data = self._displacements[label]
        node_ids = data['node_ids']
        dmat = data['disp']

        X, Y, WFE = [], [], []
        valid_nids = []
        for nid, vec in zip(node_ids, dmat):
            geo = self._node_geo.get(int(nid))
            if geo is None:
                continue
            X.append(geo[0])
            Y.append(geo[1])
            wfe = vec[2] * 2.0 * nm_per_unit  # Z-disp to WFE
            WFE.append(wfe)
            valid_nids.append(nid)

        return (np.array(X), np.array(Y), np.array(WFE), np.array(valid_nids))

    def _compute_relative_wfe(
        self,
        X1, Y1, WFE1, nids1,
        X2, Y2, WFE2, nids2
    ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
        """Compute WFE1 - WFE2 for common nodes."""
        ref_map = {int(nid): (x, y, w) for nid, x, y, w in zip(nids2, X2, Y2, WFE2)}

        X_rel, Y_rel, WFE_rel = [], [], []
        for nid, x, y, w in zip(nids1, X1, Y1, WFE1):
            nid = int(nid)
            if nid in ref_map:
                _, _, w_ref = ref_map[nid]
                X_rel.append(x)
                Y_rel.append(y)
                WFE_rel.append(w - w_ref)

        return np.array(X_rel), np.array(Y_rel), np.array(WFE_rel)

    def _compute_metrics(
        self,
        X: np.ndarray,
        Y: np.ndarray,
        W_nm: np.ndarray,
        n_modes: int,
        filter_orders: int
    ) -> Dict[str, Any]:
        """Compute RMS metrics and Zernike coefficients."""
        coeffs, R = compute_zernike_coeffs(X, Y, W_nm, n_modes)

        Xc = X - np.mean(X)
        Yc = Y - np.mean(Y)
        r = np.hypot(Xc / R, Yc / R)
        th = np.arctan2(Yc, Xc)

        Z = np.column_stack([zernike_noll(j, r, th) for j in range(1, n_modes + 1)])
        W_res_filt = W_nm - Z[:, :filter_orders].dot(coeffs[:filter_orders])
        W_res_filt_j1to3 = W_nm - Z[:, :3].dot(coeffs[:3])

        return {
            'coefficients': coeffs,
            'R': R,
            'global_rms': float(np.sqrt(np.mean(W_nm**2))),
            'filtered_rms': float(np.sqrt(np.mean(W_res_filt**2))),
            'rms_filter_j1to3': float(np.sqrt(np.mean(W_res_filt_j1to3**2))),
            'W_res_filt': W_res_filt,
        }

    def _compute_aberration_magnitudes(self, coeffs: np.ndarray) -> Dict[str, float]:
        """Compute magnitude of specific aberration modes."""
        return {
            'defocus_nm': float(abs(coeffs[3])) if len(coeffs) > 3 else 0.0,
            'astigmatism_rms': float(np.sqrt(coeffs[4]**2 + coeffs[5]**2)) if len(coeffs) > 5 else 0.0,
            'coma_rms': float(np.sqrt(coeffs[6]**2 + coeffs[7]**2)) if len(coeffs) > 7 else 0.0,
            'trefoil_rms': float(np.sqrt(coeffs[8]**2 + coeffs[9]**2)) if len(coeffs) > 9 else 0.0,
            'spherical_nm': float(abs(coeffs[10])) if len(coeffs) > 10 else 0.0,
        }

    def _generate_view_html(
        self,
        title: str,
        X: np.ndarray,
        Y: np.ndarray,
        W_nm: np.ndarray,
        rms_data: Dict,
        config: Dict,
        is_relative: bool = False,
        ref_title: str = "20 deg",
        abs_pair: Optional[Tuple[float, float]] = None,
        is_manufacturing: bool = False,
        mfg_metrics: Optional[Dict] = None,
        correction_metrics: Optional[Dict] = None,
    ) -> str:
        """Generate HTML for a single view."""
        _load_dependencies()

        n_modes = config.get('n_modes', 50)
        amp = config.get('amp', 0.5)
        pancake = config.get('pancake', 3.0)
        downsample = config.get('plot_downsample', 10000)
        colorscale = config.get('colorscale', 'Turbo')
|
||||
show_bar = config.get('show_bar_chart', True)
|
||||
|
||||
coeffs = rms_data['coefficients']
|
||||
global_rms = rms_data['global_rms']
|
||||
filtered_rms = rms_data['filtered_rms']
|
||||
W_res_filt = rms_data['W_res_filt']
|
||||
|
||||
labels = [zernike_label(j) for j in range(1, n_modes + 1)]
|
||||
coeff_abs = np.abs(coeffs)
|
||||
|
||||
# Downsample
|
||||
n = len(X)
|
||||
if n > downsample:
|
||||
rng = np.random.default_rng(42)
|
||||
sel = rng.choice(n, size=downsample, replace=False)
|
||||
Xp, Yp, Wp = X[sel], Y[sel], W_res_filt[sel]
|
||||
else:
|
||||
Xp, Yp, Wp = X, Y, W_res_filt
|
||||
|
||||
res_amp = amp * Wp
|
||||
max_amp = float(np.max(np.abs(res_amp))) if res_amp.size else 1.0
|
||||
|
||||
# Build mesh
|
||||
mesh_traces = []
|
||||
try:
|
||||
tri = _Triangulation(Xp, Yp)
|
||||
if tri.triangles is not None and len(tri.triangles) > 0:
|
||||
i, j, k = tri.triangles.T
|
||||
mesh_traces.append(_go.Mesh3d(
|
||||
x=Xp, y=Yp, z=res_amp,
|
||||
i=i, j=j, k=k,
|
||||
intensity=res_amp,
|
||||
colorscale=colorscale,
|
||||
opacity=1.0,
|
||||
flatshading=False,
|
||||
lighting=dict(ambient=0.4, diffuse=0.8, specular=0.3,
|
||||
roughness=0.5, fresnel=0.2),
|
||||
lightposition=dict(x=100, y=200, z=300),
|
||||
showscale=True,
|
||||
colorbar=dict(title=dict(text="Residual (nm)", side="right"),
|
||||
thickness=15, len=0.6, tickformat=".1f"),
|
||||
hovertemplate="X: %{x:.1f}<br>Y: %{y:.1f}<br>Residual: %{z:.2f} nm<extra></extra>"
|
||||
))
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
if not mesh_traces:
|
||||
mesh_traces.append(_go.Scatter3d(
|
||||
x=Xp, y=Yp, z=res_amp,
|
||||
mode='markers',
|
||||
marker=dict(size=2, color=res_amp, colorscale=colorscale, showscale=True),
|
||||
showlegend=False
|
||||
))
|
||||
|
||||
title_suffix = f" (relative to {ref_title})" if is_relative else " (absolute)"
|
||||
|
||||
# Build subplots
|
||||
if is_manufacturing and mfg_metrics and correction_metrics:
|
||||
fig = _make_subplots(
|
||||
rows=5, cols=1,
|
||||
specs=[[{"type": "scene"}], [{"type": "table"}], [{"type": "table"}],
|
||||
[{"type": "table"}], [{"type": "xy"}]],
|
||||
row_heights=[0.38, 0.12, 0.12, 0.18, 0.20],
|
||||
vertical_spacing=0.025,
|
||||
subplot_titles=[
|
||||
f"<b>Surface Residual - {title}{title_suffix}</b>",
|
||||
"<b>RMS Metrics (Absolute 90 deg)</b>",
|
||||
"<b>Mode Magnitudes at 90 deg</b>",
|
||||
"<b>Pre-Correction (90 deg - 20 deg)</b>",
|
||||
"<b>|Zernike Coefficients| (nm)</b>"
|
||||
]
|
||||
)
|
||||
elif show_bar:
|
||||
fig = _make_subplots(
|
||||
rows=4, cols=1,
|
||||
specs=[[{"type": "scene"}], [{"type": "table"}],
|
||||
[{"type": "table"}], [{"type": "xy"}]],
|
||||
row_heights=[0.45, 0.12, 0.25, 0.18],
|
||||
vertical_spacing=0.03,
|
||||
subplot_titles=[
|
||||
f"<b>Surface Residual - {title}{title_suffix}</b>",
|
||||
"<b>RMS Metrics</b>",
|
||||
f"<b>Zernike Coefficients ({n_modes} modes)</b>",
|
||||
"<b>|Zernike Coefficients| (nm)</b>"
|
||||
]
|
||||
)
|
||||
else:
|
||||
fig = _make_subplots(
|
||||
rows=3, cols=1,
|
||||
specs=[[{"type": "scene"}], [{"type": "table"}], [{"type": "table"}]],
|
||||
row_heights=[0.55, 0.15, 0.30],
|
||||
vertical_spacing=0.03,
|
||||
subplot_titles=[
|
||||
f"<b>Surface Residual - {title}{title_suffix}</b>",
|
||||
"<b>RMS Metrics</b>",
|
||||
f"<b>Zernike Coefficients ({n_modes} modes)</b>"
|
||||
]
|
||||
)
|
||||
|
||||
# Add mesh
|
||||
for tr in mesh_traces:
|
||||
fig.add_trace(tr, row=1, col=1)
|
||||
|
||||
# Configure 3D scene
|
||||
fig.update_scenes(
|
||||
camera=dict(eye=dict(x=1.2, y=1.2, z=0.8), up=dict(x=0, y=0, z=1)),
|
||||
xaxis=dict(title="X (mm)", showgrid=True,
|
||||
gridcolor='rgba(128,128,128,0.3)',
|
||||
showbackground=True, backgroundcolor='rgba(240,240,240,0.9)'),
|
||||
yaxis=dict(title="Y (mm)", showgrid=True,
|
||||
gridcolor='rgba(128,128,128,0.3)',
|
||||
showbackground=True, backgroundcolor='rgba(240,240,240,0.9)'),
|
||||
zaxis=dict(title="Residual (nm)",
|
||||
range=[-max_amp * pancake, max_amp * pancake],
|
||||
showgrid=True, gridcolor='rgba(128,128,128,0.3)',
|
||||
showbackground=True, backgroundcolor='rgba(230,230,250,0.9)'),
|
||||
aspectmode='manual',
|
||||
aspectratio=dict(x=1, y=1, z=0.4)
|
||||
)
|
||||
|
||||
# Add tables
|
||||
if is_relative and abs_pair:
|
||||
abs_global, abs_filtered = abs_pair
|
||||
fig.add_trace(_go.Table(
|
||||
header=dict(values=["<b>Metric</b>", "<b>Relative (nm)</b>", "<b>Absolute (nm)</b>"],
|
||||
align="left", fill_color='#1f2937', font=dict(color='white')),
|
||||
cells=dict(values=[
|
||||
["Global RMS", "Filtered RMS (J1-J4 removed)"],
|
||||
[f"{global_rms:.2f}", f"{filtered_rms:.2f}"],
|
||||
[f"{abs_global:.2f}", f"{abs_filtered:.2f}"],
|
||||
], align="left", fill_color='#374151', font=dict(color='white'))
|
||||
), row=2, col=1)
|
||||
elif is_manufacturing and mfg_metrics and correction_metrics:
|
||||
fig.add_trace(_go.Table(
|
||||
header=dict(values=["<b>Metric</b>", "<b>Value (nm)</b>"],
|
||||
align="left", fill_color='#1f2937', font=dict(color='white')),
|
||||
cells=dict(values=[
|
||||
["Global RMS", "Filtered RMS (J1-J4)"],
|
||||
[f"{global_rms:.2f}", f"{filtered_rms:.2f}"]
|
||||
], align="left", fill_color='#374151', font=dict(color='white'))
|
||||
), row=2, col=1)
|
||||
|
||||
fig.add_trace(_go.Table(
|
||||
header=dict(values=["<b>Mode</b>", "<b>Value (nm)</b>"],
|
||||
align="left", fill_color='#1f2937', font=dict(color='white')),
|
||||
cells=dict(values=[
|
||||
["Filtered RMS (J1-J3, with defocus)", "Astigmatism (J5+J6)",
|
||||
"Coma (J7+J8)", "Trefoil (J9+J10)", "Spherical (J11)"],
|
||||
[f"{rms_data['rms_filter_j1to3']:.2f}",
|
||||
f"{mfg_metrics['astigmatism_rms']:.2f}",
|
||||
f"{mfg_metrics['coma_rms']:.2f}",
|
||||
f"{mfg_metrics['trefoil_rms']:.2f}",
|
||||
f"{mfg_metrics['spherical_nm']:.2f}"]
|
||||
], align="left", fill_color='#374151', font=dict(color='white'))
|
||||
), row=3, col=1)
|
||||
|
||||
fig.add_trace(_go.Table(
|
||||
header=dict(values=["<b>Mode</b>", "<b>Correction (nm)</b>"],
|
||||
align="left", fill_color='#1f2937', font=dict(color='white')),
|
||||
cells=dict(values=[
|
||||
["Total RMS (J1-J3 filter)", "Defocus (J4)",
|
||||
"Astigmatism (J5+J6)", "Coma (J7+J8)"],
|
||||
[f"{correction_metrics['rms_filter_j1to3']:.2f}",
|
||||
f"{correction_metrics['defocus_nm']:.2f}",
|
||||
f"{correction_metrics['astigmatism_rms']:.2f}",
|
||||
f"{correction_metrics['coma_rms']:.2f}"]
|
||||
], align="left", fill_color='#374151', font=dict(color='white'))
|
||||
), row=4, col=1)
|
||||
else:
|
||||
fig.add_trace(_go.Table(
|
||||
header=dict(values=["<b>Metric</b>", "<b>Value (nm)</b>"],
|
||||
align="left", fill_color='#1f2937', font=dict(color='white')),
|
||||
cells=dict(values=[
|
||||
["Global RMS", "Filtered RMS (J1-J4 removed)"],
|
||||
[f"{global_rms:.2f}", f"{filtered_rms:.2f}"]
|
||||
], align="left", fill_color='#374151', font=dict(color='white'))
|
||||
), row=2, col=1)
|
||||
|
||||
# Coefficients table
|
||||
if not (is_manufacturing and mfg_metrics and correction_metrics):
|
||||
fig.add_trace(_go.Table(
|
||||
header=dict(values=["<b>Noll j</b>", "<b>Label</b>", "<b>|Coeff| (nm)</b>"],
|
||||
align="left", fill_color='#1f2937', font=dict(color='white')),
|
||||
cells=dict(values=[
|
||||
list(range(1, n_modes + 1)),
|
||||
labels,
|
||||
[f"{c:.3f}" for c in coeff_abs]
|
||||
], align="left", fill_color='#374151', font=dict(color='white'))
|
||||
), row=3, col=1)
|
||||
|
||||
# Bar chart
|
||||
if show_bar:
|
||||
bar_row = 5 if (is_manufacturing and mfg_metrics and correction_metrics) else 4
|
||||
fig.add_trace(
|
||||
_go.Bar(
|
||||
x=coeff_abs.tolist(), y=labels,
|
||||
orientation='h', marker_color='#6366f1',
|
||||
hovertemplate="%{y}<br>|Coeff| = %{x:.3f} nm<extra></extra>",
|
||||
showlegend=False
|
||||
),
|
||||
row=bar_row, col=1
|
||||
)
|
||||
|
||||
# Layout
|
||||
height = 1500 if (is_manufacturing and mfg_metrics and correction_metrics) else 1300
|
||||
fig.update_layout(
|
||||
width=1400, height=height,
|
||||
margin=dict(t=60, b=20, l=20, r=20),
|
||||
paper_bgcolor='#111827', plot_bgcolor='#1f2937',
|
||||
font=dict(color='white'),
|
||||
title=dict(text=f"<b>Atomizer Zernike Analysis - {title}</b>",
|
||||
x=0.5, font=dict(size=18))
|
||||
)
|
||||
|
||||
return fig.to_html(include_plotlyjs='cdn', full_html=True)
|
||||
|
||||
def _generate(self, config: InsightConfig) -> InsightResult:
|
||||
"""Generate all Zernike WFE views."""
|
||||
self._load_data()
|
||||
|
||||
# Merge config
|
||||
cfg = {**DEFAULT_CONFIG, **config.extra}
|
||||
cfg['colorscale'] = config.extra.get('colorscale', cfg['colorscale'])
|
||||
cfg['amp'] = config.amplification if config.amplification != 1.0 else cfg['amp']
|
||||
|
||||
n_modes = cfg['n_modes']
|
||||
filter_orders = cfg['filter_low_orders']
|
||||
disp_unit = cfg['disp_unit']
|
||||
|
||||
# Map subcases
|
||||
disps = self._displacements
|
||||
if '1' in disps and '2' in disps:
|
||||
sc_map = {'90': '1', '20': '2', '40': '3', '60': '4'}
|
||||
elif '90' in disps and '20' in disps:
|
||||
sc_map = {'90': '90', '20': '20', '40': '40', '60': '60'}
|
||||
else:
|
||||
available = sorted(disps.keys(), key=lambda x: int(x) if x.isdigit() else 0)
|
||||
if len(available) >= 4:
|
||||
sc_map = {'90': available[0], '20': available[1],
|
||||
'40': available[2], '60': available[3]}
|
||||
else:
|
||||
return InsightResult(success=False,
|
||||
error=f"Need 4 subcases, found: {available}")
|
||||
|
||||
# Check subcases
|
||||
for angle, label in sc_map.items():
|
||||
if label not in disps:
|
||||
return InsightResult(success=False,
|
||||
error=f"Subcase '{label}' (angle {angle}) not found")
|
||||
|
||||
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
||||
output_dir = config.output_dir or self.insights_path
|
||||
output_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
html_files = []
|
||||
summary = {}
|
||||
|
||||
# Reference: 20 deg
|
||||
X_ref, Y_ref, WFE_ref, nids_ref = self._build_wfe_arrays(sc_map['20'], disp_unit)
|
||||
rms_ref = self._compute_metrics(X_ref, Y_ref, WFE_ref, n_modes, filter_orders)
|
||||
|
||||
# 90 deg
|
||||
X_90, Y_90, WFE_90, nids_90 = self._build_wfe_arrays(sc_map['90'], disp_unit)
|
||||
rms_90 = self._compute_metrics(X_90, Y_90, WFE_90, n_modes, filter_orders)
|
||||
mfg_metrics = self._compute_aberration_magnitudes(rms_90['coefficients'])
|
||||
|
||||
# 40 deg vs 20 deg
|
||||
X_40, Y_40, WFE_40, nids_40 = self._build_wfe_arrays(sc_map['40'], disp_unit)
|
||||
X_40_rel, Y_40_rel, WFE_40_rel = self._compute_relative_wfe(
|
||||
X_40, Y_40, WFE_40, nids_40, X_ref, Y_ref, WFE_ref, nids_ref)
|
||||
rms_40_abs = self._compute_metrics(X_40, Y_40, WFE_40, n_modes, filter_orders)
|
||||
rms_40_rel = self._compute_metrics(X_40_rel, Y_40_rel, WFE_40_rel, n_modes, filter_orders)
|
||||
|
||||
html_40 = self._generate_view_html(
|
||||
"40 deg", X_40_rel, Y_40_rel, WFE_40_rel, rms_40_rel, cfg,
|
||||
is_relative=True, ref_title="20 deg",
|
||||
abs_pair=(rms_40_abs['global_rms'], rms_40_abs['filtered_rms']))
|
||||
path_40 = output_dir / f"zernike_{timestamp}_40_vs_20.html"
|
||||
path_40.write_text(html_40, encoding='utf-8')
|
||||
html_files.append(path_40)
|
||||
summary['40_vs_20_filtered_rms'] = rms_40_rel['filtered_rms']
|
||||
|
||||
# 60 deg vs 20 deg
|
||||
X_60, Y_60, WFE_60, nids_60 = self._build_wfe_arrays(sc_map['60'], disp_unit)
|
||||
X_60_rel, Y_60_rel, WFE_60_rel = self._compute_relative_wfe(
|
||||
X_60, Y_60, WFE_60, nids_60, X_ref, Y_ref, WFE_ref, nids_ref)
|
||||
rms_60_abs = self._compute_metrics(X_60, Y_60, WFE_60, n_modes, filter_orders)
|
||||
rms_60_rel = self._compute_metrics(X_60_rel, Y_60_rel, WFE_60_rel, n_modes, filter_orders)
|
||||
|
||||
html_60 = self._generate_view_html(
|
||||
"60 deg", X_60_rel, Y_60_rel, WFE_60_rel, rms_60_rel, cfg,
|
||||
is_relative=True, ref_title="20 deg",
|
||||
abs_pair=(rms_60_abs['global_rms'], rms_60_abs['filtered_rms']))
|
||||
path_60 = output_dir / f"zernike_{timestamp}_60_vs_20.html"
|
||||
path_60.write_text(html_60, encoding='utf-8')
|
||||
html_files.append(path_60)
|
||||
summary['60_vs_20_filtered_rms'] = rms_60_rel['filtered_rms']
|
||||
|
||||
# 90 deg Manufacturing
|
||||
X_90_rel, Y_90_rel, WFE_90_rel = self._compute_relative_wfe(
|
||||
X_90, Y_90, WFE_90, nids_90, X_ref, Y_ref, WFE_ref, nids_ref)
|
||||
rms_90_rel = self._compute_metrics(X_90_rel, Y_90_rel, WFE_90_rel, n_modes, filter_orders)
|
||||
corr_abr = self._compute_aberration_magnitudes(rms_90_rel['coefficients'])
|
||||
correction_metrics = {
|
||||
'rms_filter_j1to3': rms_90_rel['rms_filter_j1to3'],
|
||||
**corr_abr
|
||||
}
|
||||
|
||||
html_90 = self._generate_view_html(
|
||||
"90 deg (Manufacturing)", X_90, Y_90, WFE_90, rms_90, cfg,
|
||||
is_relative=False, is_manufacturing=True,
|
||||
mfg_metrics=mfg_metrics, correction_metrics=correction_metrics)
|
||||
path_90 = output_dir / f"zernike_{timestamp}_90_mfg.html"
|
||||
path_90.write_text(html_90, encoding='utf-8')
|
||||
html_files.append(path_90)
|
||||
summary['90_mfg_filtered_rms'] = rms_90['filtered_rms']
|
||||
summary['90_optician_workload'] = rms_90['rms_filter_j1to3']
|
||||
|
||||
return InsightResult(
|
||||
success=True,
|
||||
html_path=html_files[0], # Return first as primary
|
||||
summary={
|
||||
'html_files': [str(p) for p in html_files],
|
||||
**summary
|
||||
}
|
||||
)
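The common-node differencing done by `_compute_relative_wfe` above reduces to a dictionary lookup keyed on node id. A minimal standalone sketch with made-up node ids and WFE values (all names here are hypothetical, not part of the module):

```python
import numpy as np

# Reference subcase maps node id -> WFE; nodes present only on one side drop out.
nids1 = np.array([1, 2, 3, 5])
wfe1 = np.array([10.0, 20.0, 30.0, 50.0])
nids2 = np.array([2, 3, 4, 5])
wfe2 = np.array([1.0, 2.0, 3.0, 4.0])

ref_map = {int(n): float(w) for n, w in zip(nids2, wfe2)}
rel = {int(n): float(w) - ref_map[int(n)]
       for n, w in zip(nids1, wfe1) if int(n) in ref_map}
print(rel)  # {2: 19.0, 3: 28.0, 5: 46.0}
```

Node 1 exists only in the first subcase and node 4 only in the reference, so both are excluded, mirroring the `if nid in ref_map` guard in the method.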

tools/analyze_wfe.bat (new file, 75 lines)
@@ -0,0 +1,75 @@
@echo off
REM Atomizer WFE Analyzer - Double-click to analyze OP2 file
REM Opens file dialog to select OP2, then runs Zernike analysis
REM Generates 3 HTML files: 40 vs 20, 60 vs 20, 90 Manufacturing

echo ======================================================================
echo ATOMIZER WFE ANALYZER
echo Generates Zernike HTML Reports (40 vs 20, 60 vs 20, 90 Mfg)
echo ======================================================================
echo.

REM Use PowerShell to open file dialog
for /f "delims=" %%I in ('powershell -NoProfile -Command "Add-Type -AssemblyName System.Windows.Forms; $f = New-Object System.Windows.Forms.OpenFileDialog; $f.Filter = 'OP2 Files (*.op2)|*.op2|All Files (*.*)|*.*'; $f.Title = 'Select OP2 Results File'; $f.InitialDirectory = '%USERPROFILE%'; if ($f.ShowDialog() -eq 'OK') { $f.FileName }"') do set "OP2_FILE=%%I"

if "%OP2_FILE%"=="" (
    echo No file selected. Exiting.
    pause
    exit /b 1
)

echo Selected: %OP2_FILE%
echo.

REM Get the directory of the OP2 file for finding output HTMLs
for %%F in ("%OP2_FILE%") do set "OP2_DIR=%%~dpF"
for %%F in ("%OP2_FILE%") do set "OP2_BASE=%%~nF"

REM Initialize conda and activate atomizer environment
call "%USERPROFILE%\anaconda3\Scripts\activate.bat" atomizer

echo Running Zernike analysis...
echo.

REM Run the HTML generator
python "%~dp0zernike_html_generator.py" "%OP2_FILE%"

if %ERRORLEVEL% neq 0 (
    echo.
    echo ======================================================================
    echo ERROR: Analysis failed. See errors above.
    echo ======================================================================
    pause
    exit /b 1
)

echo.
echo ======================================================================
echo Opening generated HTML files in browser...
echo ======================================================================

REM Find and open the most recent HTML files (with timestamp pattern)
REM They follow pattern: {basename}_{timestamp}_*.html

REM Open all matching HTML files
for /f "delims=" %%H in ('powershell -NoProfile -Command "Get-ChildItem -Path '%OP2_DIR%' -Filter '%OP2_BASE%_*_40_vs_20.html' | Sort-Object LastWriteTime -Descending | Select-Object -First 1 -ExpandProperty FullName"') do (
    echo Opening: %%~nxH
    start "" "%%H"
)

for /f "delims=" %%H in ('powershell -NoProfile -Command "Get-ChildItem -Path '%OP2_DIR%' -Filter '%OP2_BASE%_*_60_vs_20.html' | Sort-Object LastWriteTime -Descending | Select-Object -First 1 -ExpandProperty FullName"') do (
    echo Opening: %%~nxH
    start "" "%%H"
)

for /f "delims=" %%H in ('powershell -NoProfile -Command "Get-ChildItem -Path '%OP2_DIR%' -Filter '%OP2_BASE%_*_90_mfg.html' | Sort-Object LastWriteTime -Descending | Select-Object -First 1 -ExpandProperty FullName"') do (
    echo Opening: %%~nxH
    start "" "%%H"
)

echo.
echo ======================================================================
echo ANALYSIS COMPLETE
echo ======================================================================
echo.
pause

tools/test_zernike_import.py (new file, 36 lines)
@@ -0,0 +1,36 @@
#!/usr/bin/env python3
"""Test script for Zernike extractor import."""
import sys
from pathlib import Path

# Add Atomizer root to path
atomizer_root = Path(__file__).parent.parent
sys.path.insert(0, str(atomizer_root))

print("Testing ZernikeExtractor import...")

try:
    from optimization_engine.extractors import ZernikeExtractor
    print(" Import: OK")

    import inspect
    sig = inspect.signature(ZernikeExtractor.extract_relative)
    print(f" extract_relative signature: {sig}")

    # Check parameters
    params = list(sig.parameters.keys())
    print(f" Parameters: {params}")

    if 'include_coefficients' in params:
        print(" include_coefficients parameter: FOUND")
    else:
        print(" include_coefficients parameter: MISSING!")
        sys.exit(1)

except Exception as e:
    print(f" ERROR: {e}")
    import traceback
    traceback.print_exc()
    sys.exit(1)

print("\nAll tests passed!")

tools/zernike_html_generator.py (new file, 831 lines)
@@ -0,0 +1,831 @@
#!/usr/bin/env python3
"""
Atomizer Zernike HTML Generator
================================

Generates 3 interactive HTML reports for Zernike wavefront analysis:
1. 40° vs 20° (relative) - Operational angle comparison
2. 60° vs 20° (relative) - Operational angle comparison
3. 90° (Manufacturing) - Absolute with manufacturing metrics

Usage:
    conda activate atomizer
    python zernike_html_generator.py "path/to/solution.op2"

    # Or with file dialog (double-click via analyze_wfe.bat)
    python zernike_html_generator.py

Output:
    Creates 3 HTML files in same directory as OP2:
    - {basename}_40_vs_20.html
    - {basename}_60_vs_20.html
    - {basename}_90_mfg.html

Author: Atomizer
Created: 2025-12-19
"""

import sys
import os
from pathlib import Path
from math import factorial
from datetime import datetime
import numpy as np
from numpy.linalg import LinAlgError

# Add Atomizer root to path
ATOMIZER_ROOT = Path(__file__).parent.parent
if str(ATOMIZER_ROOT) not in sys.path:
    sys.path.insert(0, str(ATOMIZER_ROOT))

try:
    import plotly.graph_objects as go
    from plotly.subplots import make_subplots
    from matplotlib.tri import Triangulation
    from pyNastran.op2.op2 import OP2
    from pyNastran.bdf.bdf import BDF
except ImportError as e:
    print(f"ERROR: Missing dependency: {e}")
    print("Run: conda activate atomizer")
    sys.exit(1)


# ============================================================================
# Configuration
# ============================================================================
N_MODES = 50
AMP = 0.5  # visual scale for residual plot (0.5x = reduced deformation)
PANCAKE = 3.0  # Z-axis range multiplier for camera view
PLOT_DOWNSAMPLE = 10000
FILTER_LOW_ORDERS = 4  # piston, tip, tilt, defocus

# Surface plot style
COLORSCALE = 'Turbo'  # Options: 'RdBu_r', 'Viridis', 'Plasma', 'Turbo', 'Jet'
SURFACE_LIGHTING = True  # Smooth shaded surface with lighting
SHOW_ZERNIKE_BAR = True

REQUIRED_SUBCASES = [90, 20, 40, 60]
REF_LABEL = "2"  # Reference subcase (20 deg) - using isubcase
POLISH_LABEL = "1"  # Manufacturing orientation (90 deg) - using isubcase

# Displacement unit in OP2 -> nm scale for WFE = 2*Disp_Z
DISP_SRC_UNIT = "mm"
NM_PER_UNIT = 1e6 if DISP_SRC_UNIT == "mm" else 1e9


# ============================================================================
# Zernike Math
# ============================================================================
def noll_indices(j: int):
    if j < 1:
        raise ValueError("Noll index j must be >= 1")
    count = 0
    n = 0
    while True:
        if n == 0:
            ms = [0]
        elif n % 2 == 0:
            ms = [0] + [m for k in range(1, n//2 + 1) for m in (-2*k, 2*k)]
        else:
            ms = [m for k in range(0, (n+1)//2) for m in (-(2*k+1), (2*k+1))]
        for m in ms:
            count += 1
            if count == j:
                return n, m
        n += 1


def zernike_noll(j, r, th):
    n, m = noll_indices(j)
    R = np.zeros_like(r)
    for s in range((n-abs(m))//2 + 1):
        c = ((-1)**s * factorial(n-s) /
             (factorial(s) *
              factorial((n+abs(m))//2 - s) *
              factorial((n-abs(m))//2 - s)))
        R += c * r**(n-2*s)
    if m == 0:
        return R
    return R * (np.cos(m*th) if m > 0 else np.sin(-m*th))


def compute_zernike_coeffs(X, Y, vals, n_modes, chunk_size=100000):
    Xc, Yc = X - np.mean(X), Y - np.mean(Y)
    R = float(np.max(np.hypot(Xc, Yc)))
    r = np.hypot(Xc/R, Yc/R).astype(np.float32)
    th = np.arctan2(Yc, Xc).astype(np.float32)
    mask = (r <= 1.0) & ~np.isnan(vals)
    if not np.any(mask):
        raise RuntimeError("No valid points inside unit disk.")
    idx = np.nonzero(mask)[0]
    m = int(n_modes)
    G = np.zeros((m, m), dtype=np.float64)
    h = np.zeros((m,), dtype=np.float64)
    v = vals.astype(np.float64)

    for start in range(0, len(idx), chunk_size):
        sl = idx[start:start+chunk_size]
        r_b, th_b, v_b = r[sl], th[sl], v[sl]
        Zb = np.column_stack([zernike_noll(j, r_b, th_b).astype(np.float32)
                              for j in range(1, m+1)])
        G += (Zb.T @ Zb).astype(np.float64)
        h += (Zb.T @ v_b).astype(np.float64)
    try:
        coeffs = np.linalg.solve(G, h)
    except LinAlgError:
        coeffs = np.linalg.lstsq(G, h, rcond=None)[0]
    return coeffs, R
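The fit above accumulates the normal equations G c = h in chunks and then solves for the coefficients. A self-contained round-trip sanity check of that scheme, using three hand-written radial modes instead of the full Noll basis (all data here is synthetic, for illustration only):

```python
import numpy as np

# Sample the unit disk and synthesize a surface that is pure defocus.
rng = np.random.default_rng(0)
n_pts = 2000
r = np.sqrt(rng.uniform(0.0, 1.0, n_pts))  # sqrt -> uniform over the disk area
defocus = 2 * r**2 - 1                     # unnormalized radial defocus term

# Design matrix with piston, defocus, and primary spherical columns.
Z = np.column_stack([
    np.ones_like(r),          # piston
    defocus,                  # defocus
    6 * r**4 - 6 * r**2 + 1,  # primary spherical
])
w = 5.0 * defocus             # "measured" surface: 5 units of defocus

# Normal equations G c = h, as in compute_zernike_coeffs.
G = Z.T @ Z
h = Z.T @ w
c = np.linalg.solve(G, h)     # recovers ~[0, 5, 0]
```

Because `w` lies exactly in the span of `Z`, the solve recovers the defocus coefficient to machine precision while piston and spherical stay at zero.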


def zernike_common_name(n: int, m: int) -> str:
    names = {
        (0, 0): "Piston", (1, -1): "Tilt X", (1, 1): "Tilt Y",
        (2, 0): "Defocus", (2, -2): "Astig 45 deg", (2, 2): "Astig 0 deg",
        (3, -1): "Coma X", (3, 1): "Coma Y", (3, -3): "Trefoil X", (3, 3): "Trefoil Y",
        (4, 0): "Primary Spherical", (4, -2): "Sec Astig X", (4, 2): "Sec Astig Y",
        (4, -4): "Quadrafoil X", (4, 4): "Quadrafoil Y",
        (5, -1): "Sec Coma X", (5, 1): "Sec Coma Y",
        (5, -3): "Sec Trefoil X", (5, 3): "Sec Trefoil Y",
        (5, -5): "Pentafoil X", (5, 5): "Pentafoil Y",
        (6, 0): "Sec Spherical",
    }
    return names.get((n, m), f"Z(n={n}, m={m})")


def zernike_label_for_j(j: int) -> str:
    n, m = noll_indices(j)
    return f"J{j:02d} - {zernike_common_name(n, m)} (n={n}, m={m})"


# ============================================================================
# File I/O
# ============================================================================
def find_geometry_file(op2_path: Path) -> Path:
    """Find matching BDF/DAT file for an OP2."""
    folder = op2_path.parent
    base = op2_path.stem

    for ext in ['.dat', '.bdf']:
        cand = folder / (base + ext)
        if cand.exists():
            return cand

    for f in folder.iterdir():
        if f.suffix.lower() in ['.dat', '.bdf']:
            return f

    raise FileNotFoundError(f"No .dat or .bdf geometry file found for {op2_path}")


def read_geometry(dat_path: Path) -> dict:
    bdf = BDF()
    bdf.read_bdf(str(dat_path))
    return {int(nid): node.get_position() for nid, node in bdf.nodes.items()}


def read_displacements(op2_path: Path) -> dict:
    """Read displacement data organized by subcase."""
    op2 = OP2()
    op2.read_op2(str(op2_path))

    if not op2.displacements:
        raise RuntimeError("No displacement data found in OP2 file")

    result = {}
    for key, darr in op2.displacements.items():
        data = darr.data
        dmat = data[0] if data.ndim == 3 else (data if data.ndim == 2 else None)
        if dmat is None:
            continue

        ngt = darr.node_gridtype.astype(int)
        node_ids = ngt if ngt.ndim == 1 else ngt[:, 0]

        # Get subcase identifier
        isubcase = getattr(darr, 'isubcase', None)
        label = str(isubcase) if isubcase else str(key)

        result[label] = {
            'node_ids': node_ids.astype(int),
            'disp': dmat.copy()
        }

    return result


# ============================================================================
# Data Processing
# ============================================================================
def build_wfe_arrays(label: str, node_ids, dmat, node_geo):
    """Build X, Y, WFE arrays for a subcase."""
    X, Y, WFE = [], [], []
    for nid, vec in zip(node_ids, dmat):
        geo = node_geo.get(int(nid))
        if geo is None:
            continue
        X.append(geo[0])
        Y.append(geo[1])
        wfe = vec[2] * 2.0 * NM_PER_UNIT  # Z-disp to WFE (nm)
        WFE.append(wfe)

    return np.array(X), np.array(Y), np.array(WFE)


def compute_relative_wfe(X1, Y1, WFE1, node_ids1, X2, Y2, WFE2, node_ids2):
    """Compute relative WFE: WFE1 - WFE2 for common nodes."""
    # Build mapping for reference
    ref_map = {int(nid): (x, y, w) for nid, x, y, w in zip(node_ids2, X2, Y2, WFE2)}

    X_rel, Y_rel, WFE_rel = [], [], []
    for nid, x, y, w in zip(node_ids1, X1, Y1, WFE1):
        nid = int(nid)
        if nid in ref_map:
            _, _, w_ref = ref_map[nid]
            X_rel.append(x)
            Y_rel.append(y)
            WFE_rel.append(w - w_ref)

    return np.array(X_rel), np.array(Y_rel), np.array(WFE_rel)


def compute_rms_metrics(X, Y, W_nm):
    """Compute RMS metrics and Zernike coefficients."""
    coeffs, R = compute_zernike_coeffs(X, Y, W_nm, N_MODES)

    Xc = X - np.mean(X)
    Yc = Y - np.mean(Y)
    r = np.hypot(Xc/R, Yc/R)
    th = np.arctan2(Yc, Xc)

    Z = np.column_stack([zernike_noll(j, r, th) for j in range(1, N_MODES+1)])
    W_res_filt = W_nm - Z[:, :FILTER_LOW_ORDERS].dot(coeffs[:FILTER_LOW_ORDERS])

    # J1-J3 filtered (keeping defocus - optician workload)
    W_res_filt_j1to3 = W_nm - Z[:, :3].dot(coeffs[:3])

    global_rms = float(np.sqrt(np.mean(W_nm**2)))
    filtered_rms = float(np.sqrt(np.mean(W_res_filt**2)))
    rms_filter_j1to3 = float(np.sqrt(np.mean(W_res_filt_j1to3**2)))

    return {
        'coefficients': coeffs,
        'R': R,
        'global_rms': global_rms,
        'filtered_rms': filtered_rms,
        'rms_filter_j1to3': rms_filter_j1to3,
        'W_res_filt': W_res_filt,
    }
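The "global vs filtered" distinction in `compute_rms_metrics` is just the RMS before and after subtracting the fitted low-order content. A toy illustration with piston only (synthetic numbers, not from any OP2 run):

```python
import numpy as np

# A surface dominated by a large piston offset plus small residual structure.
rng = np.random.default_rng(1)
w = 100.0 + rng.normal(0.0, 1.0, 10000)

# Least-squares piston coefficient over a constant basis is just the mean.
piston = np.mean(w)
w_filtered = w - piston

global_rms = float(np.sqrt(np.mean(w**2)))            # ~100, swamped by piston
filtered_rms = float(np.sqrt(np.mean(w_filtered**2)))  # ~1, the real residual
```

Removing J1-J4 (or only J1-J3 when defocus is left in as optician workload) works the same way, just with more basis columns subtracted.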
|
||||
|
||||
|
||||
def compute_mfg_metrics(coeffs):
|
||||
"""Manufacturing aberration magnitudes."""
|
||||
return {
|
||||
'defocus_nm': float(abs(coeffs[3])),
|
||||
'astigmatism_rms': float(np.sqrt(coeffs[4]**2 + coeffs[5]**2)),
|
||||
'coma_rms': float(np.sqrt(coeffs[6]**2 + coeffs[7]**2)),
|
||||
'trefoil_rms': float(np.sqrt(coeffs[8]**2 + coeffs[9]**2)),
|
||||
'spherical_nm': float(abs(coeffs[10])) if len(coeffs) > 10 else 0.0,
|
||||
}
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# HTML Generation
|
||||
# ============================================================================
|
||||
def generate_html(
|
||||
title: str,
|
||||
X: np.ndarray,
|
||||
Y: np.ndarray,
|
||||
W_nm: np.ndarray,
|
||||
rms_data: dict,
|
||||
is_relative: bool = False,
|
||||
ref_title: str = "20 deg",
|
||||
abs_pair: tuple = None,
|
||||
is_manufacturing: bool = False,
|
||||
mfg_metrics: dict = None,
|
||||
correction_metrics: dict = None,
|
||||
) -> str:
|
||||
"""Generate complete HTML for Zernike visualization."""
|
||||
|
||||
coeffs = rms_data['coefficients']
|
||||
global_rms = rms_data['global_rms']
|
||||
filtered_rms = rms_data['filtered_rms']
|
||||
W_res_filt = rms_data['W_res_filt']
|
||||
|
||||
labels = [zernike_label_for_j(j) for j in range(1, N_MODES+1)]
|
||||
coeff_abs = np.abs(coeffs)
|
||||
|
||||
# Downsample for display
|
||||
n = len(X)
|
||||
if n > PLOT_DOWNSAMPLE:
|
||||
rng = np.random.default_rng(42)
|
||||
sel = rng.choice(n, size=PLOT_DOWNSAMPLE, replace=False)
|
||||
Xp, Yp, Wp = X[sel], Y[sel], W_res_filt[sel]
|
||||
else:
|
||||
Xp, Yp, Wp = X, Y, W_res_filt
|
||||
|
||||
res_amp = AMP * Wp
|
||||
max_amp = float(np.max(np.abs(res_amp))) if res_amp.size else 1.0
|
||||
|
||||
    # Triangulate for smooth mesh surface
    mesh_traces = []
    try:
        tri = Triangulation(Xp, Yp)
        if tri.triangles is not None and len(tri.triangles) > 0:
            i, j, k = tri.triangles.T

            # Create smooth shaded mesh with lighting
            mesh_traces.append(go.Mesh3d(
                x=Xp, y=Yp, z=res_amp,
                i=i, j=j, k=k,
                intensity=res_amp,
                colorscale=COLORSCALE,
                opacity=1.0,
                flatshading=False,  # Smooth shading (interpolated normals)
                lighting=dict(
                    ambient=0.4,
                    diffuse=0.8,
                    specular=0.3,
                    roughness=0.5,
                    fresnel=0.2
                ),
                lightposition=dict(x=100, y=200, z=300),
                showscale=True,
                colorbar=dict(
                    title=dict(text="Residual (nm)", side="right"),
                    thickness=15,
                    len=0.6,
                    tickformat=".1f"
                ),
                hovertemplate="X: %{x:.1f}<br>Y: %{y:.1f}<br>Residual: %{z:.2f} nm<extra></extra>"
            ))
    except Exception as e:
        print(f"Triangulation warning: {e}")

    # Fallback scatter only if mesh failed
    if not mesh_traces:
        mesh_traces.append(go.Scatter3d(
            x=Xp, y=Yp, z=res_amp,
            mode='markers',
            marker=dict(size=2, color=res_amp, colorscale=COLORSCALE, showscale=True),
            showlegend=False
        ))

    title_suffix = f" (relative to {ref_title})" if is_relative else " (absolute)"

    # Build figure layout based on content type
    if is_manufacturing and mfg_metrics and correction_metrics:
        # Manufacturing page: 5 rows
        fig = make_subplots(
            rows=5, cols=1,
            specs=[[{"type": "scene"}],
                   [{"type": "table"}],
                   [{"type": "table"}],
                   [{"type": "table"}],
                   [{"type": "xy"}]],
            row_heights=[0.38, 0.12, 0.12, 0.18, 0.20],
            vertical_spacing=0.025,
            subplot_titles=[
                f"<b>Surface Residual - {title}{title_suffix}</b>",
                "<b>RMS Metrics (Absolute 90 deg)</b>",
                "<b>Mode Magnitudes at 90 deg</b>",
                "<b>Pre-Correction (90 deg - 20 deg)</b>",
                "<b>|Zernike Coefficients| (nm)</b>"
            ]
        )
    elif SHOW_ZERNIKE_BAR:
        # Standard page with bar chart: 4 rows
        fig = make_subplots(
            rows=4, cols=1,
            specs=[[{"type": "scene"}],
                   [{"type": "table"}],
                   [{"type": "table"}],
                   [{"type": "xy"}]],
            row_heights=[0.45, 0.12, 0.25, 0.18],
            vertical_spacing=0.03,
            subplot_titles=[
                f"<b>Surface Residual - {title}{title_suffix}</b>",
                "<b>RMS Metrics</b>",
                f"<b>Zernike Coefficients ({N_MODES} modes)</b>",
                "<b>|Zernike Coefficients| (nm)</b>"
            ]
        )
    else:
        # Standard page without bar: 3 rows
        fig = make_subplots(
            rows=3, cols=1,
            specs=[[{"type": "scene"}],
                   [{"type": "table"}],
                   [{"type": "table"}]],
            row_heights=[0.55, 0.15, 0.30],
            vertical_spacing=0.03,
            subplot_titles=[
                f"<b>Surface Residual - {title}{title_suffix}</b>",
                "<b>RMS Metrics</b>",
                f"<b>Zernike Coefficients ({N_MODES} modes)</b>"
            ]
        )

    # Add 3D mesh traces
    for tr in mesh_traces:
        fig.add_trace(tr, row=1, col=1)

    # Configure 3D scene with better camera angle and proportional axes
    fig.update_scenes(
        camera=dict(
            eye=dict(x=1.2, y=1.2, z=0.8),  # Angled view from above
            up=dict(x=0, y=0, z=1)
        ),
        xaxis=dict(
            title="X (mm)",
            showgrid=True,
            gridcolor='rgba(128,128,128,0.3)',
            showbackground=True,
            backgroundcolor='rgba(240,240,240,0.9)'
        ),
        yaxis=dict(
            title="Y (mm)",
            showgrid=True,
            gridcolor='rgba(128,128,128,0.3)',
            showbackground=True,
            backgroundcolor='rgba(240,240,240,0.9)'
        ),
        zaxis=dict(
            title="Residual (nm)",
            range=[-max_amp * PANCAKE, max_amp * PANCAKE],
            showgrid=True,
            gridcolor='rgba(128,128,128,0.3)',
            showbackground=True,
            backgroundcolor='rgba(230,230,250,0.9)'
        ),
        aspectmode='manual',
        aspectratio=dict(x=1, y=1, z=0.4)  # Flatten Z for better visualization
    )

    # RMS table (row 2)
    if is_relative and abs_pair:
        abs_global, abs_filtered = abs_pair
        fig.add_trace(go.Table(
            header=dict(values=["<b>Metric</b>", "<b>Relative (nm)</b>", "<b>Absolute (nm)</b>"],
                        align="left", fill_color='#1f2937', font=dict(color='white')),
            cells=dict(values=[
                ["Global RMS", "Filtered RMS (J1-J4 removed)"],
                [f"{global_rms:.2f}", f"{filtered_rms:.2f}"],
                [f"{abs_global:.2f}", f"{abs_filtered:.2f}"],
            ], align="left", fill_color='#374151', font=dict(color='white'))
        ), row=2, col=1)
    elif is_manufacturing and mfg_metrics and correction_metrics:
        # 90 deg absolute RMS
        fig.add_trace(go.Table(
            header=dict(values=["<b>Metric</b>", "<b>Value (nm)</b>"],
                        align="left", fill_color='#1f2937', font=dict(color='white')),
            cells=dict(values=[
                ["Global RMS", "Filtered RMS (J1-J4)"],
                [f"{global_rms:.2f}", f"{filtered_rms:.2f}"]
            ], align="left", fill_color='#374151', font=dict(color='white'))
        ), row=2, col=1)

        # Mode magnitudes (row 3)
        fig.add_trace(go.Table(
            header=dict(values=["<b>Mode</b>", "<b>Value (nm)</b>"],
                        align="left", fill_color='#1f2937', font=dict(color='white')),
            cells=dict(values=[
                ["Filtered RMS (J1-J3, with defocus)",
                 "Astigmatism (J5+J6)",
                 "Coma (J7+J8)",
                 "Trefoil (J9+J10)",
                 "Spherical (J11)"],
                [f"{rms_data['rms_filter_j1to3']:.2f}",
                 f"{mfg_metrics['astigmatism_rms']:.2f}",
                 f"{mfg_metrics['coma_rms']:.2f}",
                 f"{mfg_metrics['trefoil_rms']:.2f}",
                 f"{mfg_metrics['spherical_nm']:.2f}"]
            ], align="left", fill_color='#374151', font=dict(color='white'))
        ), row=3, col=1)

        # Pre-correction (row 4)
        fig.add_trace(go.Table(
            header=dict(values=["<b>Mode</b>", "<b>Correction (nm)</b>"],
                        align="left", fill_color='#1f2937', font=dict(color='white')),
            cells=dict(values=[
                ["Total RMS (J1-J3 filter)",
                 "Defocus (J4)",
                 "Astigmatism (J5+J6)",
                 "Coma (J7+J8)"],
                [f"{correction_metrics['rms_filter_j1to3']:.2f}",
                 f"{correction_metrics['defocus_nm']:.2f}",
                 f"{correction_metrics['astigmatism_rms']:.2f}",
                 f"{correction_metrics['coma_rms']:.2f}"]
            ], align="left", fill_color='#374151', font=dict(color='white'))
        ), row=4, col=1)
    else:
        # Standard absolute table
        fig.add_trace(go.Table(
            header=dict(values=["<b>Metric</b>", "<b>Value (nm)</b>"],
                        align="left", fill_color='#1f2937', font=dict(color='white')),
            cells=dict(values=[
                ["Global RMS", "Filtered RMS (J1-J4 removed)"],
                [f"{global_rms:.2f}", f"{filtered_rms:.2f}"]
            ], align="left", fill_color='#374151', font=dict(color='white'))
        ), row=2, col=1)

    # Zernike coefficients table (row 3; skipped on the manufacturing page)
    if not (is_manufacturing and mfg_metrics and correction_metrics):
        fig.add_trace(go.Table(
            header=dict(values=["<b>Noll j</b>", "<b>Label</b>", "<b>|Coeff| (nm)</b>"],
                        align="left", fill_color='#1f2937', font=dict(color='white')),
            cells=dict(values=[
                list(range(1, N_MODES + 1)),
                labels,
                [f"{c:.3f}" for c in coeff_abs]
            ], align="left", fill_color='#374151', font=dict(color='white'))
        ), row=3, col=1)

    # Bar chart (last row)
    if SHOW_ZERNIKE_BAR:
        bar_row = 5 if (is_manufacturing and mfg_metrics and correction_metrics) else 4
        fig.add_trace(
            go.Bar(
                x=coeff_abs.tolist(),
                y=labels,
                orientation='h',
                marker_color='#6366f1',
                hovertemplate="%{y}<br>|Coeff| = %{x:.3f} nm<extra></extra>",
                showlegend=False
            ),
            row=bar_row, col=1
        )

    # Layout
    height = 1500 if (is_manufacturing and mfg_metrics and correction_metrics) else 1300
    fig.update_layout(
        width=1400,
        height=height,
        margin=dict(t=60, b=20, l=20, r=20),
        paper_bgcolor='#111827',
        plot_bgcolor='#1f2937',
        font=dict(color='white'),
        title=dict(
            text=f"<b>Atomizer Zernike Analysis - {title}</b>",
            x=0.5,
            font=dict(size=18)
        )
    )

    return fig.to_html(include_plotlyjs='cdn', full_html=True)


# ============================================================================
# Main
# ============================================================================
def find_op2_file(working_dir=None):
    """Find the most recent OP2 file in the working directory."""
    if working_dir is None:
        working_dir = Path.cwd()
    else:
        working_dir = Path(working_dir)

    op2_files = list(working_dir.glob("*solution*.op2")) + list(working_dir.glob("*.op2"))

    if not op2_files:
        op2_files = list(working_dir.glob("**/*solution*.op2"))

    if not op2_files:
        return None

    return max(op2_files, key=lambda p: p.stat().st_mtime)


def main(op2_path: Path):
    """Generate all 3 HTML files."""
    print("=" * 70)
    print(" ATOMIZER ZERNIKE HTML GENERATOR")
    print("=" * 70)
    print(f"\nOP2 File: {op2_path.name}")
    print(f"Directory: {op2_path.parent}")

    # Find geometry
    print("\nFinding geometry file...")
    geo_path = find_geometry_file(op2_path)
    print(f"Found: {geo_path.name}")

    # Read data
    print("\nReading geometry...")
    node_geo = read_geometry(geo_path)
    print(f"Loaded {len(node_geo)} nodes")

    print("\nReading displacements...")
    displacements = read_displacements(op2_path)
    print(f"Found subcases: {list(displacements.keys())}")

    # Map subcases (try common patterns)
    # Pattern 1: Direct labels (1, 2, 3, 4)
    # Pattern 2: Angle labels (90, 20, 40, 60)
    subcase_map = {}

    if '1' in displacements and '2' in displacements:
        # Standard NX pattern: 1=90deg, 2=20deg, 3=40deg, 4=60deg
        subcase_map = {'90': '1', '20': '2', '40': '3', '60': '4'}
    elif '90' in displacements and '20' in displacements:
        # Direct angle labels
        subcase_map = {'90': '90', '20': '20', '40': '40', '60': '60'}
    else:
        # Fall back to whatever subcases are available, in numeric order
        available = sorted(displacements.keys(), key=lambda x: int(x) if x.isdigit() else 0)
        if len(available) >= 4:
            subcase_map = {'90': available[0], '20': available[1], '40': available[2], '60': available[3]}
            print(f"[WARN] Using mapped subcases: {subcase_map}")
        else:
            print(f"[ERROR] Need 4 subcases, found: {available}")
            return

    # Check all required subcases exist
    for angle, label in subcase_map.items():
        if label not in displacements:
            print(f"[ERROR] Subcase '{label}' (angle {angle}) not found")
            return

    output_dir = op2_path.parent
    base = op2_path.stem
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")

    html_files = []

    # ========================================================================
    # Build WFE arrays for reference (20 deg) and manufacturing (90 deg)
    # ========================================================================
    print("\nProcessing subcases...")

    # Reference: 20 deg
    ref_label = subcase_map['20']
    ref_data = displacements[ref_label]
    X_ref, Y_ref, WFE_ref = build_wfe_arrays(
        '20', ref_data['node_ids'], ref_data['disp'], node_geo
    )
    rms_ref = compute_rms_metrics(X_ref, Y_ref, WFE_ref)
    print(f"  20 deg: Global RMS = {rms_ref['global_rms']:.2f} nm, Filtered = {rms_ref['filtered_rms']:.2f} nm")

    # Manufacturing: 90 deg
    mfg_label = subcase_map['90']
    mfg_data = displacements[mfg_label]
    X_90, Y_90, WFE_90 = build_wfe_arrays(
        '90', mfg_data['node_ids'], mfg_data['disp'], node_geo
    )
    rms_90 = compute_rms_metrics(X_90, Y_90, WFE_90)
    mfg_metrics = compute_mfg_metrics(rms_90['coefficients'])
    print(f"  90 deg: Global RMS = {rms_90['global_rms']:.2f} nm, Filtered = {rms_90['filtered_rms']:.2f} nm")

    # ========================================================================
    # 1. Generate 40 deg vs 20 deg (relative)
    # ========================================================================
    print("\nGenerating 40 deg vs 20 deg...")

    sc_40_label = subcase_map['40']
    sc_40_data = displacements[sc_40_label]
    X_40, Y_40, WFE_40 = build_wfe_arrays(
        '40', sc_40_data['node_ids'], sc_40_data['disp'], node_geo
    )

    X_40_rel, Y_40_rel, WFE_40_rel = compute_relative_wfe(
        X_40, Y_40, WFE_40, sc_40_data['node_ids'],
        X_ref, Y_ref, WFE_ref, ref_data['node_ids']
    )

    rms_40_abs = compute_rms_metrics(X_40, Y_40, WFE_40)
    rms_40_rel = compute_rms_metrics(X_40_rel, Y_40_rel, WFE_40_rel)

    html_40 = generate_html(
        title="40 deg",
        X=X_40_rel, Y=Y_40_rel, W_nm=WFE_40_rel,
        rms_data=rms_40_rel,
        is_relative=True,
        ref_title="20 deg",
        abs_pair=(rms_40_abs['global_rms'], rms_40_abs['filtered_rms'])
    )

    path_40 = output_dir / f"{base}_{timestamp}_40_vs_20.html"
    path_40.write_text(html_40, encoding='utf-8')
    html_files.append(path_40)
    print(f"  Created: {path_40.name}")
    print(f"  Relative: Global={rms_40_rel['global_rms']:.2f}, Filtered={rms_40_rel['filtered_rms']:.2f}")

    # ========================================================================
    # 2. Generate 60 deg vs 20 deg (relative)
    # ========================================================================
    print("\nGenerating 60 deg vs 20 deg...")

    sc_60_label = subcase_map['60']
    sc_60_data = displacements[sc_60_label]
    X_60, Y_60, WFE_60 = build_wfe_arrays(
        '60', sc_60_data['node_ids'], sc_60_data['disp'], node_geo
    )

    X_60_rel, Y_60_rel, WFE_60_rel = compute_relative_wfe(
        X_60, Y_60, WFE_60, sc_60_data['node_ids'],
        X_ref, Y_ref, WFE_ref, ref_data['node_ids']
    )

    rms_60_abs = compute_rms_metrics(X_60, Y_60, WFE_60)
    rms_60_rel = compute_rms_metrics(X_60_rel, Y_60_rel, WFE_60_rel)

    html_60 = generate_html(
        title="60 deg",
        X=X_60_rel, Y=Y_60_rel, W_nm=WFE_60_rel,
        rms_data=rms_60_rel,
        is_relative=True,
        ref_title="20 deg",
        abs_pair=(rms_60_abs['global_rms'], rms_60_abs['filtered_rms'])
    )

    path_60 = output_dir / f"{base}_{timestamp}_60_vs_20.html"
    path_60.write_text(html_60, encoding='utf-8')
    html_files.append(path_60)
    print(f"  Created: {path_60.name}")
    print(f"  Relative: Global={rms_60_rel['global_rms']:.2f}, Filtered={rms_60_rel['filtered_rms']:.2f}")

    # ========================================================================
    # 3. Generate 90 deg Manufacturing (absolute with correction metrics)
    # ========================================================================
    print("\nGenerating 90 deg Manufacturing...")

    # Compute 90 deg - 20 deg for correction metrics
    X_90_rel, Y_90_rel, WFE_90_rel = compute_relative_wfe(
        X_90, Y_90, WFE_90, mfg_data['node_ids'],
        X_ref, Y_ref, WFE_ref, ref_data['node_ids']
    )
    rms_90_rel = compute_rms_metrics(X_90_rel, Y_90_rel, WFE_90_rel)
    mfg_rel = compute_mfg_metrics(rms_90_rel['coefficients'])  # compute once, reuse below
    correction_metrics = {
        'rms_filter_j1to3': rms_90_rel['rms_filter_j1to3'],
        'defocus_nm': mfg_rel['defocus_nm'],
        'astigmatism_rms': mfg_rel['astigmatism_rms'],
        'coma_rms': mfg_rel['coma_rms'],
    }

    html_90 = generate_html(
        title="90 deg (Manufacturing)",
        X=X_90, Y=Y_90, W_nm=WFE_90,
        rms_data=rms_90,
        is_relative=False,
        is_manufacturing=True,
        mfg_metrics=mfg_metrics,
        correction_metrics=correction_metrics
    )

    path_90 = output_dir / f"{base}_{timestamp}_90_mfg.html"
    path_90.write_text(html_90, encoding='utf-8')
    html_files.append(path_90)
    print(f"  Created: {path_90.name}")
    print(f"  Absolute: Global={rms_90['global_rms']:.2f}, Filtered={rms_90['filtered_rms']:.2f}")
    print(f"  Optician Workload (J1-J3): {rms_90['rms_filter_j1to3']:.2f} nm")

    # ========================================================================
    # Summary
    # ========================================================================
    print("\n" + "=" * 70)
    print("SUMMARY")
    print("=" * 70)
    print(f"\nGenerated {len(html_files)} HTML files:")
    for f in html_files:
        print(f"  - {f.name}")

    print("\n" + "-" * 70)
    print("OPTIMIZATION OBJECTIVES")
    print("-" * 70)
    print(f"  40-20 Filtered RMS: {rms_40_rel['filtered_rms']:.2f} nm")
    print(f"  60-20 Filtered RMS: {rms_60_rel['filtered_rms']:.2f} nm")
    print(f"  MFG 90 (J1-J3): {rms_90_rel['rms_filter_j1to3']:.2f} nm")

    # Weighted sums
    ws_v4 = 5*rms_40_rel['filtered_rms'] + 5*rms_60_rel['filtered_rms'] + 2*rms_90_rel['rms_filter_j1to3']
    ws_v5 = 5*rms_40_rel['filtered_rms'] + 5*rms_60_rel['filtered_rms'] + 3*rms_90_rel['rms_filter_j1to3']
    print(f"\n  V4 Weighted Sum (5/5/2): {ws_v4:.2f}")
    print(f"  V5 Weighted Sum (5/5/3): {ws_v5:.2f}")

    print("\n" + "=" * 70)
    print("DONE")
    print("=" * 70)

    return html_files


if __name__ == '__main__':
    if len(sys.argv) > 1:
        op2_path = Path(sys.argv[1])
        if not op2_path.exists():
            print(f"ERROR: File not found: {op2_path}")
            sys.exit(1)
    else:
        print("No OP2 file specified, searching...")
        op2_path = find_op2_file()
        if op2_path is None:
            print("ERROR: No OP2 file found in current directory.")
            print("Usage: python zernike_html_generator.py <path/to/solution.op2>")
            sys.exit(1)
        print(f"Found: {op2_path}")

    try:
        main(op2_path)
    except Exception as e:
        print(f"\nERROR: {e}")
        import traceback
        traceback.print_exc()
        sys.exit(1)