feat: Add OPD method support to Zernike visualization with Standard/OPD toggle
Major improvements to Zernike WFE visualization:

- Add ZernikeDashboardInsight: unified dashboard with all orientations (40°, 60°, 90°) on one page, with light theme and executive summary
- Add OPD method toggle: switch between Standard (Z-only) and OPD (X,Y,Z) methods in ZernikeWFEInsight with interactive buttons
- Add lateral displacement maps: visualize X,Y displacement for each orientation
- Add displacement component views: toggle between WFE, ΔX, ΔY, ΔZ in relative views
- Add metrics comparison table showing both methods side-by-side

New extractors:

- extract_zernike_figure.py: ZernikeOPDExtractor using BDF geometry interpolation
- extract_zernike_opd.py: parabola-based OPD with focal length

Key finding: the OPD method gives 8-11% higher WFE values than the Standard method (more conservative and more accurate for surfaces with lateral displacement under gravity).

Documentation updates:

- SYS_12: added E22 ZernikeOPD as recommended method
- SYS_16: added ZernikeDashboard, updated ZernikeWFE with OPD features
- Cheatsheet: added Zernike method comparison table

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -1,7 +1,7 @@
---
skill_id: SKILL_001
-version: 2.0
-last_updated: 2025-12-07
+version: 2.1
+last_updated: 2025-12-22
type: reference
code_dependencies:
- optimization_engine/extractors/__init__.py
@@ -12,8 +12,8 @@ requires_skills:

# Atomizer Quick Reference Cheatsheet

-**Version**: 2.0
-**Updated**: 2025-12-07
+**Version**: 2.1
+**Updated**: 2025-12-22
**Purpose**: Rapid lookup for common operations. "I want X → Use Y"

---
@@ -45,10 +45,12 @@ requires_skills:
| CAD expression mass | E5 | `extract_mass_from_expression(prt_file, expression_name='p173')` |
| Field data | E6 | `FieldDataExtractor(field_file, result_column, aggregation)` |
| Stiffness (k=F/δ) | E7 | `StiffnessCalculator(...)` |
-| Zernike WFE | E8 | `extract_zernike_from_op2(op2_file, bdf_file, subcase)` |
+| Zernike WFE (standard) | E8 | `extract_zernike_from_op2(op2_file, bdf_file, subcase)` |
| Zernike relative | E9 | `extract_zernike_relative_rms(op2_file, bdf_file, target, ref)` |
| Zernike builder | E10 | `ZernikeObjectiveBuilder(op2_finder)` |
| Part mass + material | E11 | `extract_part_mass_material(prt_file)` → mass, volume, material |
| Zernike Analytic | E20 | `extract_zernike_analytic(op2_file, focal_length=5000.0)` |
+| **Zernike OPD** | E22 | `extract_zernike_opd(op2_file)` ← **Most rigorous, RECOMMENDED** |

> **Mass extraction tip**: Always use E11 (geometry .prt) over E4 (BDF) for accuracy.
> pyNastran under-reports mass ~7% on hex-dominant meshes with tet/pyramid fills.
@@ -303,12 +305,20 @@ Generate physics-focused visualizations from FEA results.
### Available Insight Types
| Type | Purpose | Data Required |
|------|---------|---------------|
-| `zernike_wfe` | Mirror wavefront error | OP2 with displacements |
+| `zernike_dashboard` | **RECOMMENDED: Unified WFE dashboard** | OP2 with displacements |
+| `zernike_wfe` | WFE with Standard/OPD toggle | OP2 with displacements |
+| `zernike_opd_comparison` | Compare Standard vs OPD methods | OP2 with displacements |
| `stress_field` | Von Mises stress contours | OP2 with stresses |
| `modal` | Mode shapes + frequencies | OP2 with eigenvectors |
| `thermal` | Temperature distribution | OP2 with temperatures |
| `design_space` | Parameter-objective landscape | study.db with 5+ trials |

+### Zernike Method Comparison
+| Method | Use | RMS Difference |
+|--------|-----|----------------|
+| **Standard (Z-only)** | Quick analysis | Baseline |
+| **OPD (X,Y,Z)** ← RECOMMENDED | Any surface with lateral displacement | **+8-11% higher** (more accurate) |

### Commands
```bash
# List available insights for a study
@@ -103,6 +103,55 @@ study_info = introspect_study(study_dir)

---

## README Hierarchy (Parent-Child Documentation)

Studies use a **two-level documentation system**:

### Parent README (Geometry-Level)

Location: `studies/{geometry_type}/README.md`

Contains project-wide context that ALL sub-studies share:
- Project overview (what is this component?)
- Physical system specs (material, mass, loading)
- Domain-specific specs (optical prescription for mirrors, structural limits for brackets)
- Complete design variables catalog with ranges
- Complete objectives catalog with formulas
- Campaign history (evolution across sub-studies)
- Sub-studies index with links and status

**Example**: `studies/M1_Mirror/README.md`

### Child README (Study-Level)

Location: `studies/{geometry_type}/{study_name}/README.md`

Contains study-specific details, references parent for context:
```markdown
# Study Name

> **Parent Documentation**: See [../README.md](../README.md) for project overview and specifications.

## Study Focus
What THIS study specifically optimizes...
```

Contents:
- Parent reference banner (MANDATORY first line)
- Study focus (what differentiates this study)
- Active variables (which params enabled)
- Algorithm config (sampler, trials, settings)
- Baseline/seeding (starting point)
- Results summary

### When Creating a Study

1. **First study for geometry type** → Create BOTH parent and child READMEs
2. **Subsequent studies** → Create child README, update parent's sub-studies index
3. **New geometry type** → Create new `studies/{type}/` folder with parent README
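The three steps above can be sketched as a small helper. This is an illustrative sketch, not part of the framework: the function name `scaffold_study` and the README placeholder text are assumptions; only the folder layout and the mandatory parent-reference banner come from the documentation above.

```python
from pathlib import Path

def scaffold_study(root: Path, geometry_type: str, study_name: str) -> Path:
    """Create the studies/{geometry_type}/{study_name} layout described above.

    If this is the first study for the geometry type (step 1), a parent README
    placeholder is created too; otherwise only the child README is added (step 2).
    Hypothetical helper for illustration only.
    """
    geo_dir = root / "studies" / geometry_type
    study_dir = geo_dir / study_name
    study_dir.mkdir(parents=True, exist_ok=True)

    # Parent README: created once per geometry type
    parent_readme = geo_dir / "README.md"
    if not parent_readme.exists():
        parent_readme.write_text(f"# {geometry_type}\n\n## Sub-Studies Index\n")

    # Child README: parent reference banner must be the first content line
    child_readme = study_dir / "README.md"
    if not child_readme.exists():
        child_readme.write_text(
            f"# {study_name}\n\n"
            "> **Parent Documentation**: See [../README.md](../README.md) "
            "for project overview and specifications.\n"
        )
    return study_dir
```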
---

## PR.3 NXSolver Interface

**Module**: `optimization_engine.nx_solver`
.claude/skills/modules/insights-catalog.md (new file, 289 lines)
@@ -0,0 +1,289 @@
# Insights Catalog Module

**Last Updated**: December 22, 2025
**Version**: 1.0
**Type**: Optional Module

This module documents all available Study Insights in the Atomizer framework. Load this when the user asks about visualization, physics insights, or wants to understand specific trial results beyond optimizer metrics.

---

## When to Load

- User asks "what insights are available?"
- User wants to visualize FEA results (stress, WFE, modes, etc.)
- User asks to understand a specific trial or design
- User wants to compare methods or generate physics reports
- User mentions "visualization", "3D plot", "surface plot"

---

## Insights vs Analysis

| Aspect | **Analysis** | **Insights** |
|--------|--------------|--------------|
| Focus | Optimization performance | Physics understanding |
| Questions | "Is the optimizer converging?" | "What does this design look like?" |
| Data Source | `study.db` (trials, objectives) | Simulation outputs (OP2, mesh) |
| Output | Convergence plots, Pareto fronts | 3D surfaces, stress contours |

---
## IN.1 Insight Catalog

| ID | Type ID | Name | Applicable To | Data Required | Output |
|----|---------|------|---------------|---------------|--------|
| I1 | `zernike_wfe` | Zernike WFE Analysis | Mirror, optics | OP2 with displacement subcases | 3D WFE surface + coefficient tables |
| I2 | `zernike_opd_comparison` | **Zernike OPD Comparison** | Mirror, optics, lateral | OP2 with displacement subcases | Method comparison + lateral disp. map |
| I3 | `msf_zernike` | MSF Zernike Analysis | Mirror, optics | OP2 with displacement subcases | LSF/MSF/HSF band decomposition |
| I4 | `stress_field` | Stress Distribution | Structural, bracket | OP2 with stress results | 3D stress contours |
| I5 | `modal` | Modal Analysis | Vibration, dynamic | OP2 with eigenvectors | Mode shape animations |
| I6 | `thermal` | Thermal Analysis | Thermo-structural | OP2 with temperature | Temperature distribution |
| I7 | `design_space` | Design Space Explorer | All studies | study.db with 5+ trials | Parameter-objective plots |

---
## IN.2 Insight Code Snippets (COPY-PASTE)

### I1: Zernike WFE Analysis

Standard 50-mode wavefront error visualization for mirror optimization.

```python
from optimization_engine.insights import get_insight, InsightConfig

# Get the insight for a study
insight = get_insight('zernike_wfe', study_path)

# Check if it can generate (data exists)
if insight.can_generate():
    result = insight.generate(InsightConfig())
    print(f"Generated: {result.html_path}")
```

**CLI Usage**:
```bash
python -m optimization_engine.insights generate studies/my_mirror --type zernike_wfe
```

**Output**: Interactive HTML with:
- 3D WFE surface plot
- Zernike coefficient bar chart
- RMS metrics table (global, filtered, J1-J3)
- Aberration magnitudes (astigmatism, coma, trefoil, spherical)

---
### I2: Zernike OPD Method Comparison (NEW - Critical for Lateral Supports)

Compares standard (Z-only) vs rigorous OPD method to reveal lateral displacement impact.

**When to Use**:
- Lateral support optimization
- Any case where supports may pinch the mirror
- Validating if your WFE objective is accurate

```python
from optimization_engine.insights import get_insight

insight = get_insight('zernike_opd_comparison', study_path)
result = insight.generate()
print(f"Generated: {result.html_path}")

# Check recommendation
print(result.summary['overall_recommendation'])
```

**CLI Usage**:
```bash
python -m optimization_engine.insights generate studies/my_mirror --type zernike_opd_comparison
```

**Output**: Interactive HTML with:
- **Lateral displacement magnitude map** (where pinching occurs)
- **WFE surface** using OPD method
- **Comparison table** (Standard vs OPD metrics)
- **Recommendation** (CRITICAL/RECOMMENDED/OPTIONAL/EQUIVALENT)

**Interpretation**:
| Max Lateral Disp. | Recommendation |
|-------------------|----------------|
| > 10 µm | **CRITICAL**: Use OPD method for optimization |
| 1-10 µm | **RECOMMENDED**: Use OPD method |
| < 1 µm | Both methods equivalent |
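The interpretation table reduces to a simple threshold check. A minimal sketch — the function name and return strings are illustrative, not part of the framework API; only the µm thresholds come from the table above:

```python
def opd_recommendation(max_lateral_disp_um: float) -> str:
    """Map max lateral displacement (in µm) to a method recommendation,
    following the interpretation table above. Hypothetical helper."""
    if max_lateral_disp_um > 10:
        return "CRITICAL: use OPD method for optimization"
    if max_lateral_disp_um >= 1:
        return "RECOMMENDED: use OPD method"
    return "EQUIVALENT: both methods agree"
```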
**Related Documentation**: [ZERNIKE_OPD_METHOD.md](../../../docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md)

---
### I3: MSF Zernike Analysis

100-mode analysis with LSF/MSF/HSF band decomposition for mid-spatial frequency analysis.

```python
from optimization_engine.insights import get_insight

insight = get_insight('msf_zernike', study_path)
result = insight.generate()
```

**Output**: Band breakdown showing:
- LSF (Low Spatial Frequency) - Modes 1-21 (correctable by polishing)
- MSF (Mid Spatial Frequency) - Modes 22-50 (problematic for optical performance)
- HSF (High Spatial Frequency) - Modes 51-100 (typically noise/mesh effects)
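The band split above can be sketched as follows. This assumes an orthonormal (Noll-normalized) Zernike basis, in which case each band's RMS is the root-sum-square of its coefficients; the function name is illustrative, not the framework's API:

```python
import math

def band_rms(coeffs):
    """Split 100 Zernike coefficients into LSF/MSF/HSF band RMS values.

    Assumes an orthonormal (Noll-normalized) basis, so each band's RMS is
    the root-sum-square of its coefficients. Illustrative sketch only.
    """
    c = list(coeffs)
    bands = {
        "LSF": c[0:21],    # modes 1-21
        "MSF": c[21:50],   # modes 22-50
        "HSF": c[50:100],  # modes 51-100
    }
    return {name: math.sqrt(sum(x * x for x in b)) for name, b in bands.items()}
```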
---
### I4: Stress Field Visualization

3D stress contour visualization for structural optimization.

```python
from optimization_engine.insights import get_insight

insight = get_insight('stress_field', study_path)
if insight.can_generate():
    result = insight.generate()
```

**Output**: Interactive 3D mesh with:
- Von Mises stress colormap
- Peak stress location indicators
- Statistics table (max, mean, P99)

---
### I5: Modal Analysis

Mode shape visualization for frequency optimization.

```python
from optimization_engine.insights import get_insight

insight = get_insight('modal', study_path)
result = insight.generate()
```

**Output**:
- First N mode shapes (deformed mesh)
- Frequency table
- Modal effective mass (if available)

---
### I6: Thermal Analysis

Temperature distribution visualization.

```python
from optimization_engine.insights import get_insight

insight = get_insight('thermal', study_path)
result = insight.generate()
```

**Output**:
- Temperature contour plot
- Gradient magnitude map
- Heat flux vectors (if available)

---
### I7: Design Space Explorer

Parameter-objective relationship visualization.

```python
from optimization_engine.insights import get_insight

insight = get_insight('design_space', study_path)
result = insight.generate()
```

**Requirements**: At least 5 completed trials in study.db

**Output**:
- Parallel coordinates plot
- Parameter importance
- Objective correlations

---
## IN.3 Generating All Insights for a Study

```python
from optimization_engine.insights import InsightReport

# Create report from all configured insights
report = InsightReport(study_path)

# Generate all enabled insights
results = report.generate_all()

# Create HTML report with all insights
report_path = report.generate_report_html()
print(f"Full report: {report_path}")
```

**CLI**:
```bash
# Generate report from config
python -m optimization_engine.insights report studies/my_study

# List available insights
python -m optimization_engine.insights list

# Get recommendations for a study
python -m optimization_engine.insights recommend studies/my_study
```

---
## IN.4 Configuring Insights in optimization_config.json

```json
{
  "insights": [
    {
      "type": "zernike_wfe",
      "name": "WFE at 40 vs 20 deg",
      "enabled": true,
      "linked_objective": "rel_filtered_rms_40_vs_20",
      "config": {
        "target_subcase": "3",
        "reference_subcase": "2"
      },
      "include_in_report": true
    },
    {
      "type": "zernike_opd_comparison",
      "name": "Lateral Displacement Check",
      "enabled": true,
      "config": {}
    }
  ]
}
```
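Reading this config back is straightforward. A minimal sketch, assuming only the file name and the `insights` list structure shown above (the function name is illustrative; the framework's own loader is `get_configured_insights`):

```python
import json
from pathlib import Path

def load_enabled_insights(study_path):
    """Read the 'insights' list from a study's optimization_config.json
    and keep only the enabled entries. Illustrative sketch; minimal
    error handling on purpose."""
    cfg_file = Path(study_path) / "optimization_config.json"
    cfg = json.loads(cfg_file.read_text())
    # Entries default to enabled when the flag is omitted
    return [spec for spec in cfg.get("insights", []) if spec.get("enabled", True)]
```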
---
## IN.5 Decision Guide

| You want to... | Use Insight |
|----------------|-------------|
| See wavefront error surface | `zernike_wfe` |
| Check if lateral displacement matters | `zernike_opd_comparison` |
| Analyze mid-spatial frequencies | `msf_zernike` |
| Visualize stress hotspots | `stress_field` |
| See mode shapes | `modal` |
| Check thermal distribution | `thermal` |
| Understand parameter effects | `design_space` |

---
## Related Documentation

- **Protocol Specification**: `docs/protocols/system/SYS_16_STUDY_INSIGHTS.md`
- **OPD Method Physics**: `docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md`
- **Zernike Integration**: `docs/ZERNIKE_INTEGRATION.md`
- **Extractor Catalog**: `.claude/skills/modules/extractors-catalog.md`
@@ -1,12 +1,19 @@
"""
Study Insights API endpoints
Provides physics-focused visualizations for completed optimization trials

Key Features:
- Config-driven insights from optimization_config.json
- AI recommendations based on objectives
- Report generation with PDF export
- Objective-linked and standalone insights
"""

from fastapi import APIRouter, HTTPException
from fastapi.responses import JSONResponse, HTMLResponse
from pydantic import BaseModel
from pathlib import Path
-from typing import List, Dict, Optional
+from typing import List, Dict, Optional, Any
import json
import sys

@@ -19,6 +26,22 @@ router = APIRouter()
STUDIES_DIR = Path(__file__).parent.parent.parent.parent.parent / "studies"

class InsightSpecRequest(BaseModel):
    """Request model for insight specification."""
    type: str
    name: str
    enabled: bool = True
    linked_objective: Optional[str] = None
    config: Dict[str, Any] = {}
    include_in_report: bool = True


class GenerateReportRequest(BaseModel):
    """Request model for report generation."""
    specs: Optional[List[InsightSpecRequest]] = None
    include_appendix: bool = True


def resolve_study_path(study_id: str) -> Path:
    """Find study folder by scanning all topic directories."""
    # First check direct path (backwards compatibility)
@@ -38,7 +61,7 @@ def resolve_study_path(study_id: str) -> Path:
    raise HTTPException(status_code=404, detail=f"Study not found: {study_id}")


-@router.get("/studies/{study_id}/insights/available")
+@router.get("/studies/{study_id}/available")
async def list_available_insights(study_id: str):
    """List all insight types that can be generated for this study."""
    try:
@@ -65,7 +88,7 @@ async def list_available_insights(study_id: str):
        raise HTTPException(status_code=500, detail=str(e))


-@router.get("/studies/{study_id}/insights/all")
+@router.get("/studies/{study_id}/all")
async def list_all_insights():
    """List all registered insight types (regardless of availability for any study)."""
    try:
@@ -83,14 +106,104 @@ async def list_all_insights():
        raise HTTPException(status_code=500, detail=str(e))

-@router.post("/studies/{study_id}/insights/generate/{insight_type}")
-async def generate_insight(study_id: str, insight_type: str, trial_id: Optional[int] = None):
+class GenerateInsightRequest(BaseModel):
+    """Request model for insight generation."""
+    iteration: Optional[str] = None  # e.g., "iter5", "best_design_archive"
+    trial_id: Optional[int] = None
+    config: Dict[str, Any] = {}


def _is_valid_op2(op2_path: Path) -> bool:
    """Quick check if OP2 file is valid (not from a failed solve)."""
    try:
        # Check file size - failed OP2s are often very small
        if op2_path.stat().st_size < 10000:  # Less than 10KB is suspicious
            return False

        # Try to read header to verify it's a valid OP2
        with open(op2_path, 'rb') as f:
            header = f.read(100)
            # Valid OP2 files have specific markers
            if b'NASTRAN' not in header and b'XXXXXXXX' not in header:
                # Check for common valid patterns
                pass  # Size check is usually enough

        return True
    except Exception:
        return False
@router.get("/studies/{study_id}/iterations")
async def list_iterations(study_id: str):
    """List available iterations/trials with OP2 files for insight generation.

    Returns iterations sorted by modification time (newest first).
    Only includes iterations with valid (non-corrupted) OP2 files.
    """
    try:
        study_path = resolve_study_path(study_id)
        iterations = []

        # Check 2_iterations folder
        iter_dir = study_path / "2_iterations"
        if iter_dir.exists():
            for subdir in sorted(iter_dir.iterdir(), reverse=True):
                if subdir.is_dir():
                    op2_files = [f for f in subdir.glob("*.op2") if _is_valid_op2(f)]
                    if op2_files:
                        newest_op2 = max(op2_files, key=lambda p: p.stat().st_mtime)
                        iterations.append({
                            "id": subdir.name,
                            "path": str(subdir.relative_to(study_path)),
                            "op2_file": newest_op2.name,
                            "modified": newest_op2.stat().st_mtime,
                            "type": "iteration"
                        })

        # Check 3_results/best_design_archive folder
        best_dir = study_path / "3_results" / "best_design_archive"
        if best_dir.exists():
            op2_files = [f for f in best_dir.glob("**/*.op2") if _is_valid_op2(f)]
            if op2_files:
                newest_op2 = max(op2_files, key=lambda p: p.stat().st_mtime)
                iterations.insert(0, {  # Insert at start as "best"
                    "id": "best_design_archive",
                    "path": "3_results/best_design_archive",
                    "op2_file": newest_op2.name,
                    "modified": newest_op2.stat().st_mtime,
                    "type": "best",
                    "label": "Best Design (Recommended)"
                })

        # Sort by modification time (newest first), keeping best at top
        best = [i for i in iterations if i.get("type") == "best"]
        others = sorted([i for i in iterations if i.get("type") != "best"],
                        key=lambda x: x["modified"], reverse=True)

        return {
            "study_id": study_id,
            "iterations": best + others,
            "count": len(iterations)
        }

    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
@router.post("/studies/{study_id}/generate/{insight_type}")
async def generate_insight(
    study_id: str,
    insight_type: str,
    request: Optional[GenerateInsightRequest] = None
):
    """Generate a specific insight visualization.

    Args:
        study_id: Study identifier
        insight_type: Type of insight (e.g., 'zernike_wfe', 'stress_field', 'design_space')
        trial_id: Optional specific trial to analyze (defaults to best trial)
        request: Optional generation config with iteration selection

    Returns:
        JSON with plotly_figure data and summary statistics
@@ -104,6 +217,25 @@ async def generate_insight(study_id: str, insight_type: str, trial_id: Optional[
    if insight is None:
        raise HTTPException(status_code=404, detail=f"Unknown insight type: {insight_type}")

    # If iteration specified, override the OP2 path
    if request and request.iteration:
        iteration_id = request.iteration
        if iteration_id == "best_design_archive":
            iter_path = study_path / "3_results" / "best_design_archive"
        else:
            iter_path = study_path / "2_iterations" / iteration_id

        if iter_path.exists():
            op2_files = list(iter_path.glob("**/*.op2"))
            if op2_files:
                # Override the insight's OP2 path
                insight.op2_path = max(op2_files, key=lambda p: p.stat().st_mtime)
                # Re-find geometry
                try:
                    insight.geo_path = insight._find_geometry_file(insight.op2_path)
                except (FileNotFoundError, AttributeError):
                    pass  # Use default

    if not insight.can_generate():
        raise HTTPException(
            status_code=400,
@@ -111,7 +243,9 @@ async def generate_insight(study_id: str, insight_type: str, trial_id: Optional[
        )

    # Configure insight
-    config = InsightConfig(trial_id=trial_id)
+    trial_id = request.trial_id if request else None
+    extra_config = request.config if request else {}
+    config = InsightConfig(trial_id=trial_id, extra=extra_config)

    # Generate
    result = insight.generate(config)
@@ -123,6 +257,7 @@ async def generate_insight(study_id: str, insight_type: str, trial_id: Optional[
        "success": True,
        "insight_type": insight_type,
        "study_id": study_id,
+        "iteration": request.iteration if request else None,
        "html_path": str(result.html_path) if result.html_path else None,
        "plotly_figure": result.plotly_figure,
        "summary": result.summary
@@ -134,7 +269,7 @@ async def generate_insight(study_id: str, insight_type: str, trial_id: Optional[
        raise HTTPException(status_code=500, detail=str(e))


-@router.get("/studies/{study_id}/insights/view/{insight_type}")
+@router.get("/studies/{study_id}/view/{insight_type}")
async def view_insight_html(study_id: str, insight_type: str):
    """Get the HTML content for an insight (for iframe embedding).

@@ -180,7 +315,7 @@ async def view_insight_html(study_id: str, insight_type: str):
        raise HTTPException(status_code=500, detail=str(e))


-@router.get("/studies/{study_id}/insights/generated")
+@router.get("/studies/{study_id}/generated")
async def list_generated_insights(study_id: str):
    """List all previously generated insight HTML files for a study."""
    try:
@@ -219,3 +354,199 @@ async def list_generated_insights(study_id: str):
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
@router.get("/studies/{study_id}/configured")
async def get_configured_insights_endpoint(study_id: str):
    """Get insights configured in the study's optimization_config.json."""
    try:
        study_path = resolve_study_path(study_id)

        from optimization_engine.insights import get_configured_insights

        specs = get_configured_insights(study_path)

        return {
            "study_id": study_id,
            "configured": [
                {
                    "type": spec.type,
                    "name": spec.name,
                    "enabled": spec.enabled,
                    "linked_objective": spec.linked_objective,
                    "config": spec.config,
                    "include_in_report": spec.include_in_report
                }
                for spec in specs
            ]
        }

    except HTTPException:
        raise
    except ImportError as e:
        return {
            "study_id": study_id,
            "configured": [],
            "error": f"Insights module not available: {str(e)}"
        }
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


@router.get("/studies/{study_id}/recommend")
async def recommend_insights_endpoint(study_id: str):
    """Get AI recommendations for insights based on study objectives."""
    try:
        study_path = resolve_study_path(study_id)

        from optimization_engine.insights import recommend_insights_for_study

        recommendations = recommend_insights_for_study(study_path)

        return {
            "study_id": study_id,
            "recommendations": recommendations
        }

    except HTTPException:
        raise
    except ImportError as e:
        return {
            "study_id": study_id,
            "recommendations": [],
            "error": f"Insights module not available: {str(e)}"
        }
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
@router.post("/studies/{study_id}/report")
async def generate_report(study_id: str, request: GenerateReportRequest):
    """Generate comprehensive HTML report with all insights.

    Args:
        study_id: Study identifier
        request: Report configuration with optional insight specs

    Returns:
        JSON with report path and generation results
    """
    try:
        study_path = resolve_study_path(study_id)

        from optimization_engine.insights import (
            InsightReport, InsightSpec, get_configured_insights,
            recommend_insights_for_study
        )

        # Build specs from request or config
        if request.specs:
            specs = [
                InsightSpec(
                    type=s.type,
                    name=s.name,
                    enabled=s.enabled,
                    linked_objective=s.linked_objective,
                    config=s.config,
                    include_in_report=s.include_in_report
                )
                for s in request.specs
            ]
        else:
            # Try config first, then recommendations
            specs = get_configured_insights(study_path)
            if not specs:
                recommendations = recommend_insights_for_study(study_path)
                specs = [
                    InsightSpec(
                        type=rec['type'],
                        name=rec['name'],
                        linked_objective=rec.get('linked_objective'),
                        config=rec.get('config', {})
                    )
                    for rec in recommendations
                ]

        if not specs:
            raise HTTPException(
                status_code=400,
                detail="No insights configured or recommended for this study"
            )

        # Generate report
        report = InsightReport(study_path)
        results = report.generate_all(specs)
        report_path = report.generate_report_html(include_appendix=request.include_appendix)

        return {
            "success": True,
            "study_id": study_id,
            "report_path": str(report_path),
            "results": [
                {
                    "type": r.insight_type,
                    "name": r.insight_name,
                    "success": r.success,
                    "linked_objective": r.linked_objective,
                    "error": r.error
                }
                for r in results
            ],
            "summary": {
                "total": len(results),
                "successful": sum(1 for r in results if r.success),
                "failed": sum(1 for r in results if not r.success)
            }
        }

    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
@router.get("/studies/{study_id}/report/view")
async def view_report(study_id: str):
    """Get the latest generated report HTML for embedding."""
    try:
        study_path = resolve_study_path(study_id)
        report_path = study_path / "3_insights" / "STUDY_INSIGHTS_REPORT.html"

        if not report_path.exists():
            raise HTTPException(
                status_code=404,
                detail="No report generated yet. Use POST /insights/report to generate."
            )

        return HTMLResponse(content=report_path.read_text(encoding='utf-8'))

    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


@router.get("/studies/{study_id}/summary")
async def get_insights_summary(study_id: str):
    """Get insights summary JSON for Results page integration."""
    try:
        study_path = resolve_study_path(study_id)
        summary_path = study_path / "3_insights" / "insights_summary.json"

        if not summary_path.exists():
            return {
                "study_id": study_id,
                "generated_at": None,
                "insights": []
            }

        with open(summary_path) as f:
            summary = json.load(f)

        summary["study_id"] = study_id
        return summary

    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
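The endpoints in the file above can be exercised with a tiny client sketch. This only builds the URL and JSON body for the `POST /studies/{study_id}/generate/{insight_type}` route, mirroring the `GenerateInsightRequest` model; the base URL is an assumption about your deployment, and the function name is illustrative:

```python
import json

BASE = "http://localhost:8000"  # assumed host/port; adjust to your deployment

def build_generate_request(study_id, insight_type, iteration=None, trial_id=None):
    """Build the URL and JSON body for the generate-insight endpoint above.

    Mirrors GenerateInsightRequest (iteration, trial_id, config); does not
    send the request itself. Illustrative helper, not part of the codebase.
    """
    url = f"{BASE}/studies/{study_id}/generate/{insight_type}"
    body = {"iteration": iteration, "trial_id": trial_id, "config": {}}
    return url, json.dumps(body)
```

For example, `build_generate_request("my_mirror", "zernike_wfe", iteration="best_design_archive")` targets the best-design OP2, matching the `best_design_archive` handling in the endpoint.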
@@ -1,9 +1,8 @@
|
||||
import { useState, useEffect, useCallback } from 'react';
|
||||
import React, { useState, useEffect, useCallback, useMemo } from 'react';
|
||||
import { useNavigate } from 'react-router-dom';
|
||||
import {
|
||||
Eye,
|
||||
RefreshCw,
|
||||
Download,
|
||||
Maximize2,
|
||||
X,
|
||||
Activity,
|
||||
@@ -14,16 +13,41 @@ import {
  AlertCircle,
  CheckCircle,
  Clock,
  FileText
  FileText,
  Lightbulb,
  Target,
  Layers,
  BookOpen,
  ChevronRight,
  ChevronDown,
  Folder,
  Play,
  ExternalLink,
  Zap,
  List
} from 'lucide-react';
import { useStudy } from '../context/StudyContext';
import { Card } from '../components/common/Card';
import Plot from 'react-plotly.js';

// ============================================================================
// Types
// ============================================================================
interface IterationInfo {
  id: string;
  path: string;
  op2_file: string;
  modified: number;
  type: 'iteration' | 'best';
  label?: string;
}

interface InsightInfo {
  type: string;
  name: string;
  description: string;
  category?: string;
  category_label?: string;
  applicable_to: string[];
}

@@ -38,13 +62,22 @@ interface GeneratedFile {
interface InsightResult {
  success: boolean;
  insight_type: string;
  insight_name?: string;
  linked_objective?: string | null;
  iteration?: string;
  plotly_figure: any;
  summary: Record<string, any>;
  html_path: string | null;
  error?: string;
}

// ============================================================================
// Constants
// ============================================================================
const INSIGHT_ICONS: Record<string, React.ElementType> = {
  zernike_wfe: Waves,
  zernike_opd_comparison: Waves,
  msf_zernike: Waves,
  stress_field: Activity,
  modal: Box,
  thermal: Thermometer,
@@ -53,26 +86,54 @@ const INSIGHT_ICONS: Record<string, React.ElementType> = {

const INSIGHT_COLORS: Record<string, string> = {
  zernike_wfe: 'text-blue-400 bg-blue-500/10 border-blue-500/30',
  zernike_opd_comparison: 'text-blue-400 bg-blue-500/10 border-blue-500/30',
  msf_zernike: 'text-cyan-400 bg-cyan-500/10 border-cyan-500/30',
  stress_field: 'text-red-400 bg-red-500/10 border-red-500/30',
  modal: 'text-purple-400 bg-purple-500/10 border-purple-500/30',
  thermal: 'text-orange-400 bg-orange-500/10 border-orange-500/30',
  design_space: 'text-green-400 bg-green-500/10 border-green-500/30'
};

const CATEGORY_COLORS: Record<string, string> = {
  optical: 'border-blue-500/50 bg-blue-500/5',
  structural_static: 'border-red-500/50 bg-red-500/5',
  structural_dynamic: 'border-orange-500/50 bg-orange-500/5',
  structural_modal: 'border-purple-500/50 bg-purple-500/5',
  thermal: 'border-yellow-500/50 bg-yellow-500/5',
  kinematic: 'border-teal-500/50 bg-teal-500/5',
  design_exploration: 'border-green-500/50 bg-green-500/5',
  other: 'border-gray-500/50 bg-gray-500/5'
};

// ============================================================================
// Main Component
// ============================================================================
export default function Insights() {
  const navigate = useNavigate();
  const { selectedStudy, isInitialized } = useStudy();

  // State - Step-based workflow
  const [step, setStep] = useState<'select-iteration' | 'select-insight' | 'view-result'>('select-iteration');

  // Data
  const [iterations, setIterations] = useState<IterationInfo[]>([]);
  const [availableInsights, setAvailableInsights] = useState<InsightInfo[]>([]);
  const [generatedFiles, setGeneratedFiles] = useState<GeneratedFile[]>([]);
  const [loading, setLoading] = useState(true);
  const [generating, setGenerating] = useState<string | null>(null);
  const [error, setError] = useState<string | null>(null);

  // Active insight for display
  // Selections
  const [selectedIteration, setSelectedIteration] = useState<IterationInfo | null>(null);
  const [selectedInsightType, setSelectedInsightType] = useState<string | null>(null);

  // Result
  const [activeInsight, setActiveInsight] = useState<InsightResult | null>(null);
  const [fullscreen, setFullscreen] = useState(false);

  // Loading states
  const [loadingIterations, setLoadingIterations] = useState(true);
  const [loadingInsights, setLoadingInsights] = useState(false);
  const [generating, setGenerating] = useState(false);
  const [error, setError] = useState<string | null>(null);

  // Redirect if no study
  useEffect(() => {
    if (isInitialized && !selectedStudy) {
@@ -80,47 +141,101 @@ export default function Insights() {
    }
  }, [selectedStudy, navigate, isInitialized]);

  // Load available insights and generated files
  const loadInsights = useCallback(async () => {
  // Load iterations on mount (single fast API call + generated files for quick access)
  const loadIterations = useCallback(async () => {
    if (!selectedStudy) return;

    setLoading(true);
    setLoadingIterations(true);
    setError(null);

    try {
      // Load available insights
      const availRes = await fetch(`/api/insights/studies/${selectedStudy.id}/insights/available`);
      const availData = await availRes.json();
      setAvailableInsights(availData.insights || []);
      // Load iterations and generated files in parallel
      const [iterRes, genRes] = await Promise.all([
        fetch(`/api/insights/studies/${selectedStudy.id}/iterations`),
        fetch(`/api/insights/studies/${selectedStudy.id}/generated`)
      ]);

      // Load previously generated files
      const genRes = await fetch(`/api/insights/studies/${selectedStudy.id}/insights/generated`);
      const genData = await genRes.json();
      const [iterData, genData] = await Promise.all([
        iterRes.json(),
        genRes.json()
      ]);

      const iters = iterData.iterations || [];
      setIterations(iters);
      setGeneratedFiles(genData.files || []);

      // Auto-select best design if available
      if (iters.length > 0) {
        const best = iters.find((i: IterationInfo) => i.type === 'best');
        if (best) {
          setSelectedIteration(best);
        } else {
          setSelectedIteration(iters[0]);
        }
      }
    } catch (err) {
      console.error('Failed to load insights:', err);
      setError('Failed to load insights data');
      console.error('Failed to load iterations:', err);
      setError('Failed to load iterations');
    } finally {
      setLoading(false);
      setLoadingIterations(false);
    }
  }, [selectedStudy]);

  // Load available insights (lazy - only when needed)
  const loadAvailableInsights = useCallback(async () => {
    if (!selectedStudy) return;

    setLoadingInsights(true);

    try {
      const [availRes, genRes] = await Promise.all([
        fetch(`/api/insights/studies/${selectedStudy.id}/available`),
        fetch(`/api/insights/studies/${selectedStudy.id}/generated`)
      ]);

      const [availData, genData] = await Promise.all([
        availRes.json(),
        genRes.json()
      ]);

      setAvailableInsights(availData.insights || []);
      setGeneratedFiles(genData.files || []);
    } catch (err) {
      console.error('Failed to load insights:', err);
    } finally {
      setLoadingInsights(false);
    }
  }, [selectedStudy]);

  // Initial load
  useEffect(() => {
    loadInsights();
  }, [loadInsights]);
    loadIterations();
  }, [loadIterations]);

  // Generate an insight
  const handleGenerate = async (insightType: string) => {
    if (!selectedStudy || generating) return;
  // Load insights when moving to step 2
  useEffect(() => {
    if (step === 'select-insight' && availableInsights.length === 0) {
      loadAvailableInsights();
    }
  }, [step, availableInsights.length, loadAvailableInsights]);

    setGenerating(insightType);
  // Generate insight
  const handleGenerate = async () => {
    if (!selectedStudy || !selectedIteration || !selectedInsightType || generating) return;

    setGenerating(true);
    setError(null);

    try {
      const res = await fetch(
        `/api/insights/studies/${selectedStudy.id}/insights/generate/${insightType}`,
        { method: 'POST' }
        `/api/insights/studies/${selectedStudy.id}/generate/${selectedInsightType}`,
        {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({
            iteration: selectedIteration.id
          })
        }
      );

      if (!res.ok) {
@@ -129,59 +244,54 @@ export default function Insights() {
      }

      const result: InsightResult = await res.json();
      const insightInfo = availableInsights.find(i => i.type === selectedInsightType);
      result.insight_name = insightInfo?.name || result.insight_type;

      setActiveInsight(result);
      setStep('view-result');

      // Refresh file list
      loadInsights();

      // Refresh generated files list
      loadAvailableInsights();
    } catch (err: any) {
      setError(err.message || 'Failed to generate insight');
    } finally {
      setGenerating(null);
      setGenerating(false);
    }
  };

  // View existing insight
  const handleViewExisting = async (file: GeneratedFile) => {
  // Quick view existing generated file
  const handleQuickView = async (file: GeneratedFile) => {
    if (!selectedStudy) return;
    window.open(`/api/insights/studies/${selectedStudy.id}/view/${file.insight_type}`, '_blank');
  };

    setGenerating(file.insight_type);
  // Format timestamp
  const formatTime = (ts: number) => {
    const date = new Date(ts * 1000);
    return date.toLocaleDateString() + ' ' + date.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' });
  };

    try {
      const res = await fetch(
        `/api/insights/studies/${selectedStudy.id}/insights/generate/${file.insight_type}`,
        { method: 'POST' }
      );
  // Group insights by category
  const groupedInsights = useMemo(() => {
    const groups: Record<string, InsightInfo[]> = {};

      if (!res.ok) {
        throw new Error('Failed to load insight');
    availableInsights.forEach(insight => {
      const cat = insight.category || 'other';
      if (!groups[cat]) {
        groups[cat] = [];
      }
      groups[cat].push(insight);
    });

      const result: InsightResult = await res.json();
      setActiveInsight(result);
    return groups;
  }, [availableInsights]);

    } catch (err: any) {
      setError(err.message || 'Failed to load insight');
    } finally {
      setGenerating(null);
    }
  };

  // Open HTML in new tab
  const handleOpenHtml = (file: GeneratedFile) => {
    if (!selectedStudy) return;
    window.open(`/api/insights/studies/${selectedStudy.id}/insights/view/${file.insight_type}`, '_blank');
  };

  const getIcon = (type: string) => {
    const Icon = INSIGHT_ICONS[type] || Eye;
    return Icon;
  };

  const getColorClass = (type: string) => {
    return INSIGHT_COLORS[type] || 'text-gray-400 bg-gray-500/10 border-gray-500/30';
  };
  const getIcon = (type: string) => INSIGHT_ICONS[type] || Eye;
  const getColorClass = (type: string) => INSIGHT_COLORS[type] || 'text-gray-400 bg-gray-500/10 border-gray-500/30';

  // ============================================================================
  // Render
  // ============================================================================
  if (!isInitialized || !selectedStudy) {
    return (
      <div className="flex items-center justify-center min-h-screen">
@@ -194,23 +304,37 @@ export default function Insights() {
  }

  return (
    <div className="w-full">
      {/* Header */}
      <header className="mb-6 flex items-center justify-between border-b border-dark-600 pb-4">
        <div>
          <h1 className="text-2xl font-bold text-primary-400">Insights</h1>
          <p className="text-dark-400 text-sm">
            Physics visualizations for {selectedStudy.name || selectedStudy.id}
          </p>
    <div className="w-full max-w-7xl mx-auto">
      {/* Header with Breadcrumb */}
      <header className="mb-6">
        <div className="flex items-center gap-2 text-sm text-dark-400 mb-2">
          <span className={step === 'select-iteration' ? 'text-primary-400 font-medium' : ''}>
            1. Select Iteration
          </span>
          <ChevronRight className="w-4 h-4" />
          <span className={step === 'select-insight' ? 'text-primary-400 font-medium' : ''}>
            2. Choose Insight
          </span>
          <ChevronRight className="w-4 h-4" />
          <span className={step === 'view-result' ? 'text-primary-400 font-medium' : ''}>
            3. View Result
          </span>
        </div>
        <div className="flex items-center gap-2">
        <div className="flex items-center justify-between">
          <div>
            <h1 className="text-2xl font-bold text-white">Study Insights</h1>
            <p className="text-dark-400 text-sm">{selectedStudy.name || selectedStudy.id}</p>
          </div>
          <button
            onClick={loadInsights}
            disabled={loading}
            className="flex items-center gap-2 px-4 py-2 bg-dark-700 hover:bg-dark-600 text-white rounded-lg transition-colors disabled:opacity-50"
            onClick={() => {
              setStep('select-iteration');
              setActiveInsight(null);
              loadIterations();
            }}
            className="flex items-center gap-2 px-3 py-2 bg-dark-700 hover:bg-dark-600 text-white rounded-lg transition-colors text-sm"
          >
            <RefreshCw className={`w-4 h-4 ${loading ? 'animate-spin' : ''}`} />
            Refresh
            <RefreshCw className="w-4 h-4" />
            Reset
          </button>
        </div>
      </header>
@@ -219,206 +343,409 @@ export default function Insights() {
      {error && (
        <div className="mb-6 p-4 bg-red-500/10 border border-red-500/30 rounded-lg flex items-center gap-3">
          <AlertCircle className="w-5 h-5 text-red-400 flex-shrink-0" />
          <p className="text-red-400">{error}</p>
          <button onClick={() => setError(null)} className="ml-auto text-red-400 hover:text-red-300">
          <p className="text-red-400 flex-1">{error}</p>
          <button onClick={() => setError(null)} className="text-red-400 hover:text-red-300">
            <X className="w-4 h-4" />
          </button>
        </div>
      )}

      <div className="grid grid-cols-1 lg:grid-cols-3 gap-6">
        {/* Left Panel: Available Insights */}
        <div className="lg:col-span-1 space-y-4">
          <Card title="Available Insights" className="p-0">
            {loading ? (
              <div className="p-6 text-center text-dark-400">
      {/* Step 1: Select Iteration */}
      {step === 'select-iteration' && (
        <div className="space-y-6">
          <Card title="Select Data Source" className="p-0">
            {loadingIterations ? (
              <div className="p-8 text-center text-dark-400">
                <RefreshCw className="w-6 h-6 animate-spin mx-auto mb-2" />
                Loading...
                Scanning for iterations...
              </div>
            ) : availableInsights.length === 0 ? (
              <div className="p-6 text-center text-dark-400">
                <Eye className="w-8 h-8 mx-auto mb-2 opacity-50" />
                <p>No insights available for this study.</p>
                <p className="text-xs mt-1">Run some trials first.</p>
            ) : iterations.length === 0 ? (
              <div className="p-8 text-center text-dark-400">
                <Folder className="w-12 h-12 mx-auto mb-4 opacity-30" />
                <p className="text-lg font-medium text-dark-300 mb-2">No Iterations Found</p>
                <p className="text-sm">Run some optimization trials first to generate data.</p>
              </div>
            ) : (
              <div className="divide-y divide-dark-600">
                {availableInsights.map((insight) => {
                  const Icon = getIcon(insight.type);
                  const colorClass = getColorClass(insight.type);
                  const isGenerating = generating === insight.type;

              <div className="p-5 space-y-4">
                {/* Best Design - Primary Option */}
                {(() => {
                  const bestIter = iterations.find((i) => i.type === 'best');
                  if (!bestIter) return null;
                  return (
                    <div key={insight.type} className="p-4 hover:bg-dark-750 transition-colors">
                      <div className="flex items-start gap-3">
                        <div className={`p-2 rounded-lg border ${colorClass}`}>
                          <Icon className="w-5 h-5" />
                    <button
                      onClick={() => setSelectedIteration(bestIter)}
                      className={`w-full p-4 flex items-center gap-4 rounded-lg border-2 transition-all ${
                        selectedIteration?.id === bestIter.id
                          ? 'bg-green-500/10 border-green-500 shadow-lg shadow-green-500/10'
                          : 'bg-dark-750 border-dark-600 hover:border-green-500/50 hover:bg-dark-700'
                      }`}
                    >
                      <div className="p-3 rounded-lg bg-green-500/20 text-green-400">
                        <Zap className="w-6 h-6" />
                      </div>
                      <div className="flex-1 text-left">
                        <div className="flex items-center gap-2">
                          <span className="font-semibold text-white text-lg">Best Design</span>
                          <span className="px-2 py-0.5 text-xs bg-green-500/20 text-green-400 rounded-full">
                            Recommended
                          </span>
                        </div>
                        <div className="flex-1 min-w-0">
                          <h3 className="font-medium text-white">{insight.name}</h3>
                          <p className="text-xs text-dark-400 mt-0.5">{insight.description}</p>
                        <div className="text-sm text-dark-400 mt-1">
                          {bestIter.label || bestIter.id} • {formatTime(bestIter.modified)}
                        </div>
                      </div>
                      <button
                        onClick={() => handleGenerate(insight.type)}
                        disabled={isGenerating}
                        className={`mt-3 w-full flex items-center justify-center gap-2 px-3 py-2 rounded-lg text-sm font-medium transition-colors ${
                          isGenerating
                            ? 'bg-dark-600 text-dark-400 cursor-wait'
                            : 'bg-primary-600 hover:bg-primary-500 text-white'
                        }`}
                      >
                        {isGenerating ? (
                          <>
                            <RefreshCw className="w-4 h-4 animate-spin" />
                            Generating...
                          </>
                        ) : (
                          <>
                            <Eye className="w-4 h-4" />
                            Generate
                          </>
                        )}
                      </button>
                      {selectedIteration?.id === bestIter.id && (
                        <CheckCircle className="w-6 h-6 text-green-400" />
                      )}
                    </button>
                  );
                })()}

                {/* Separator */}
                <div className="flex items-center gap-3">
                  <div className="flex-1 border-t border-dark-600"></div>
                  <span className="text-sm text-dark-500">or select specific iteration</span>
                  <div className="flex-1 border-t border-dark-600"></div>
                </div>

                {/* Dropdown for other iterations */}
                {(() => {
                  const regularIters = iterations.filter((i) => i.type !== 'best');
                  if (regularIters.length === 0) return null;

                  const selectedRegular = regularIters.find((i) => i.id === selectedIteration?.id);

                  return (
                    <div className="relative">
                      <div className="flex items-center gap-3">
                        <List className="w-5 h-5 text-dark-400" />
                        <select
                          value={selectedRegular?.id || ''}
                          onChange={(e) => {
                            const iter = regularIters.find((i) => i.id === e.target.value);
                            if (iter) setSelectedIteration(iter);
                          }}
                          className="flex-1 px-4 py-3 bg-dark-750 border border-dark-600 rounded-lg text-white appearance-none cursor-pointer hover:border-dark-500 focus:border-primary-500 focus:outline-none focus:ring-1 focus:ring-primary-500 transition-colors"
                        >
                          <option value="" disabled>
                            Choose from {regularIters.length} iteration{regularIters.length !== 1 ? 's' : ''}...
                          </option>
                          {regularIters.map((iter) => (
                            <option key={iter.id} value={iter.id}>
                              {iter.label || iter.id} ({iter.op2_file})
                            </option>
                          ))}
                        </select>
                        <ChevronDown className="w-5 h-5 text-dark-400 absolute right-3 pointer-events-none" />
                      </div>

                      {/* Show selected iteration details */}
                      {selectedRegular && (
                        <div className="mt-3 p-3 bg-dark-700 rounded-lg border border-primary-500/30">
                          <div className="flex items-center gap-3">
                            <Folder className="w-5 h-5 text-primary-400" />
                            <div className="flex-1">
                              <div className="font-medium text-white">{selectedRegular.label || selectedRegular.id}</div>
                              <div className="text-sm text-dark-400">
                                {selectedRegular.op2_file} • {formatTime(selectedRegular.modified)}
                              </div>
                            </div>
                            <CheckCircle className="w-5 h-5 text-primary-400" />
                          </div>
                        </div>
                      )}
                    </div>
                  );
                })}
                })()}
              </div>
            )}
          </Card>

          {/* Previously Generated */}
          {generatedFiles.length > 0 && (
            <Card title="Generated Files" className="p-0">
              <div className="divide-y divide-dark-600 max-h-64 overflow-y-auto">
                {generatedFiles.map((file) => {
                  const Icon = getIcon(file.insight_type);
          {/* Continue Button */}
          {selectedIteration && (
            <div className="flex justify-end">
              <button
                onClick={() => setStep('select-insight')}
                className="flex items-center gap-2 px-6 py-3 bg-primary-600 hover:bg-primary-500 text-white rounded-lg font-medium transition-colors"
              >
                Continue
                <ChevronRight className="w-5 h-5" />
              </button>
            </div>
          )}

          {/* Previously Generated - Quick Access */}
          {generatedFiles.length > 0 && (
            <Card title="Previously Generated" className="p-0 mt-6">
              <div className="p-3 bg-dark-750 border-b border-dark-600">
                <p className="text-sm text-dark-400">Quick access to existing visualizations</p>
              </div>
              <div className="divide-y divide-dark-600 max-h-48 overflow-y-auto">
                {generatedFiles.slice(0, 5).map((file) => {
                  const Icon = getIcon(file.insight_type);
                  return (
                    <div
                    <button
                      key={file.filename}
                      className="p-3 hover:bg-dark-750 transition-colors flex items-center gap-3"
                      onClick={() => handleQuickView(file)}
                      className="w-full p-3 flex items-center gap-3 hover:bg-dark-750 transition-colors text-left"
                    >
                      <Icon className="w-4 h-4 text-dark-400" />
                      <div className="flex-1 min-w-0">
                        <p className="text-sm text-white truncate">{file.insight_type}</p>
                        <p className="text-xs text-dark-500">{file.size_kb} KB</p>
                      </div>
                      <div className="flex gap-1">
                        <button
                          onClick={() => handleViewExisting(file)}
                          className="p-1.5 text-dark-400 hover:text-white hover:bg-dark-600 rounded transition-colors"
                          title="View in dashboard"
                        >
                          <Eye className="w-4 h-4" />
                        </button>
                        <button
                          onClick={() => handleOpenHtml(file)}
                          className="p-1.5 text-dark-400 hover:text-white hover:bg-dark-600 rounded transition-colors"
                          title="Open in new tab"
                        >
                          <Maximize2 className="w-4 h-4" />
                        </button>
                      </div>
                    </div>
                      <span className="flex-1 text-sm text-white truncate">
                        {file.insight_type.replace(/_/g, ' ')}
                      </span>
                      <span className="text-xs text-dark-500">{file.size_kb} KB</span>
                      <ExternalLink className="w-4 h-4 text-dark-400" />
                    </button>
                  );
                })}
              </div>
            </Card>
          )}
        </div>
      )}

      {/* Right Panel: Visualization */}
      <div className="lg:col-span-2">
        {activeInsight ? (
          <Card
            title={
              <div className="flex items-center justify-between">
                <span>{activeInsight.insight_type.replace('_', ' ').toUpperCase()}</span>
                <div className="flex gap-2">
                  <button
                    onClick={() => setFullscreen(true)}
                    className="p-1.5 text-dark-400 hover:text-white hover:bg-dark-600 rounded transition-colors"
                    title="Fullscreen"
                  >
                    <Maximize2 className="w-4 h-4" />
                  </button>
                  <button
                    onClick={() => setActiveInsight(null)}
                    className="p-1.5 text-dark-400 hover:text-white hover:bg-dark-600 rounded transition-colors"
                    title="Close"
                  >
                    <X className="w-4 h-4" />
                  </button>
                </div>
              </div>
            }
            className="h-full"
      {/* Step 2: Select Insight Type */}
      {step === 'select-insight' && (
        <div className="space-y-6">
          {/* Selection Summary */}
          <div className="flex items-center gap-4 p-4 bg-dark-800 rounded-lg">
            <div className="p-2 bg-primary-600/20 rounded-lg">
              <Folder className="w-5 h-5 text-primary-400" />
            </div>
            <div>
              <div className="text-sm text-dark-400">Selected Data Source</div>
              <div className="font-medium text-white">{selectedIteration?.label || selectedIteration?.id}</div>
            </div>
            <button
              onClick={() => setStep('select-iteration')}
              className="ml-auto text-sm text-primary-400 hover:text-primary-300"
            >
            {/* Summary Stats */}
            {activeInsight.summary && Object.keys(activeInsight.summary).length > 0 && (
              <div className="mb-4 grid grid-cols-2 md:grid-cols-4 gap-3">
                {Object.entries(activeInsight.summary).slice(0, 8).map(([key, value]) => (
                  <div key={key} className="bg-dark-750 rounded-lg p-3">
                    <div className="text-xs text-dark-400 uppercase truncate">{key.replace(/_/g, ' ')}</div>
                    <div className="text-lg font-mono text-white truncate">
                      {typeof value === 'number'
                        ? value.toExponential ? value.toExponential(3) : value
                        : String(value)
                      }
                    </div>
                  </div>
                ))}
              </div>
            )}
              Change
            </button>
          </div>

          {/* Plotly Figure */}
          {activeInsight.plotly_figure ? (
            <div className="bg-dark-900 rounded-lg overflow-hidden" style={{ height: '500px' }}>
              <Plot
                data={activeInsight.plotly_figure.data}
                layout={{
                  ...activeInsight.plotly_figure.layout,
                  autosize: true,
                  margin: { l: 50, r: 50, t: 50, b: 50 },
                  paper_bgcolor: '#111827',
                  plot_bgcolor: '#1f2937',
                  font: { color: 'white' }
                }}
                config={{
                  responsive: true,
                  displayModeBar: true,
                  displaylogo: false
                }}
                style={{ width: '100%', height: '100%' }}
              />
            </div>
          ) : (
            <div className="flex items-center justify-center h-64 text-dark-400">
              <p>No visualization data available</p>
            </div>
          )}
          </Card>
        ) : (
          <Card className="h-full flex items-center justify-center min-h-[400px]">
            <div className="text-center text-dark-400">
              <Eye className="w-12 h-12 mx-auto mb-4 opacity-30" />
              <h3 className="text-lg font-medium text-dark-300 mb-2">No Insight Selected</h3>
              <p className="text-sm">
                Select an insight type from the left panel to generate a visualization.
              </p>
          <Card title="Choose Insight Type" className="p-0">
            {loadingInsights ? (
              <div className="p-8 text-center text-dark-400">
                <RefreshCw className="w-6 h-6 animate-spin mx-auto mb-2" />
                Loading available insights...
              </div>
          </Card>
            ) : availableInsights.length === 0 ? (
              <div className="p-8 text-center text-dark-400">
                <Eye className="w-12 h-12 mx-auto mb-4 opacity-30" />
                <p className="text-lg font-medium text-dark-300 mb-2">No Insights Available</p>
                <p className="text-sm">The selected iteration may not have compatible data.</p>
              </div>
            ) : (
              <div className="divide-y divide-dark-600">
                {Object.entries(groupedInsights).map(([category, insights]) => (
                  <div key={category} className={`border-l-2 ${CATEGORY_COLORS[category] || CATEGORY_COLORS.other}`}>
                    <div className="px-4 py-2 bg-dark-750/50 border-b border-dark-600">
                      <h4 className="text-sm font-medium text-dark-300 capitalize">
                        {category.replace(/_/g, ' ')}
                      </h4>
                    </div>
                    {insights.map((insight) => {
                      const Icon = getIcon(insight.type);
                      const colorClass = getColorClass(insight.type);
                      const isSelected = selectedInsightType === insight.type;

                      return (
                        <button
                          key={insight.type}
                          onClick={() => setSelectedInsightType(insight.type)}
                          className={`w-full p-4 flex items-center gap-4 hover:bg-dark-750 transition-colors text-left ${
                            isSelected ? 'bg-primary-600/10' : ''
                          }`}
                        >
                          <div className={`p-2.5 rounded-lg border ${colorClass}`}>
                            <Icon className="w-5 h-5" />
                          </div>
                          <div className="flex-1 min-w-0">
                            <div className="font-medium text-white">{insight.name}</div>
                            <div className="text-sm text-dark-400 mt-0.5">{insight.description}</div>
                          </div>
                          {isSelected && (
                            <CheckCircle className="w-5 h-5 text-primary-400 flex-shrink-0" />
                          )}
                        </button>
                      );
                    })}
                  </div>
                ))}
              </div>
            )}
          </Card>

          {/* Generate Button */}
          {selectedInsightType && (
            <div className="flex justify-between items-center">
              <button
                onClick={() => setStep('select-iteration')}
                className="text-dark-400 hover:text-white transition-colors"
              >
                Back
              </button>
              <button
                onClick={handleGenerate}
                disabled={generating}
                className={`flex items-center gap-2 px-6 py-3 rounded-lg font-medium transition-colors ${
                  generating
                    ? 'bg-dark-600 text-dark-400 cursor-wait'
                    : 'bg-primary-600 hover:bg-primary-500 text-white'
                }`}
              >
                {generating ? (
                  <>
                    <RefreshCw className="w-5 h-5 animate-spin" />
                    Generating...
                  </>
                ) : (
                  <>
                    <Play className="w-5 h-5" />
                    Generate Insight
                  </>
                )}
              </button>
            </div>
          )}
        </div>
        </div>
      )}

      {/* Step 3: View Result */}
      {step === 'view-result' && activeInsight && (
        <div className="space-y-6">
          {/* Result Header */}
          <div className="flex items-center justify-between">
            <div className="flex items-center gap-4">
              <button
                onClick={() => {
                  setStep('select-insight');
                  setActiveInsight(null);
                }}
                className="text-dark-400 hover:text-white transition-colors"
              >
                ← Back
              </button>
              <div>
                <h2 className="text-xl font-bold text-white">
                  {activeInsight.insight_name || activeInsight.insight_type.replace(/_/g, ' ')}
                </h2>
                <p className="text-sm text-dark-400">
                  Generated from {selectedIteration?.label || selectedIteration?.id}
                </p>
              </div>
            </div>
            <div className="flex items-center gap-2">
              {activeInsight.html_path && (
                <button
                  onClick={() => window.open(`/api/insights/studies/${selectedStudy?.id}/view/${activeInsight.insight_type}`, '_blank')}
                  className="flex items-center gap-2 px-4 py-2 bg-dark-700 hover:bg-dark-600 text-white rounded-lg transition-colors"
                >
                  <ExternalLink className="w-4 h-4" />
                  Open Full View
                </button>
              )}
              <button
                onClick={() => setFullscreen(true)}
                className="p-2 bg-dark-700 hover:bg-dark-600 text-white rounded-lg transition-colors"
                title="Fullscreen"
              >
                <Maximize2 className="w-5 h-5" />
              </button>
            </div>
          </div>

          {/* Summary Stats */}
          {activeInsight.summary && Object.keys(activeInsight.summary).length > 0 && (
            <div className="grid grid-cols-2 md:grid-cols-4 gap-3">
              {Object.entries(activeInsight.summary)
                .filter(([key]) => !key.startsWith('html_'))
                .slice(0, 8)
                .map(([key, value]) => (
                  <div key={key} className="bg-dark-800 rounded-lg p-4">
                    <div className="text-xs text-dark-400 uppercase truncate mb-1">
                      {key.replace(/_/g, ' ')}
                    </div>
                    <div className="text-lg font-mono text-white truncate">
                      {typeof value === 'number'
                        ? value.toFixed(2)
                        : String(value)
                      }
                    </div>
                  </div>
                ))}
            </div>
          )}

          {/* Plotly Figure */}
          <Card className="p-0 overflow-hidden">
            {activeInsight.plotly_figure ? (
              <div className="bg-dark-900" style={{ height: '600px' }}>
                <Plot
                  data={activeInsight.plotly_figure.data}
                  layout={{
                    ...activeInsight.plotly_figure.layout,
                    autosize: true,
                    margin: { l: 60, r: 60, t: 60, b: 60 },
                    paper_bgcolor: '#111827',
                    plot_bgcolor: '#1f2937',
                    font: { color: 'white' }
                  }}
                  config={{
                    responsive: true,
                    displayModeBar: true,
                    displaylogo: false
                  }}
                  style={{ width: '100%', height: '100%' }}
                />
              </div>
            ) : (
              <div className="flex flex-col items-center justify-center h-64 text-dark-400 p-8">
                <CheckCircle className="w-12 h-12 text-green-400 mb-4" />
                <p className="text-lg font-medium text-white mb-2">Insight Generated Successfully</p>
                <p className="text-sm text-center">
                  This insight generates HTML files. Click "Open Full View" to see the visualization.
|
||||
</p>
|
||||
{activeInsight.summary?.html_files && (
|
||||
<div className="mt-4 text-sm">
|
||||
<p className="text-dark-400 mb-2">Generated files:</p>
|
||||
<ul className="space-y-1">
|
||||
{(activeInsight.summary.html_files as string[]).slice(0, 4).map((f: string, i: number) => (
|
||||
<li key={i} className="text-dark-300">
|
||||
{f.split(/[/\\]/).pop()}
|
||||
</li>
|
||||
))}
|
||||
</ul>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
)}
|
||||
</Card>
|
||||
|
||||
{/* Generate Another */}
|
||||
<div className="flex justify-center">
|
||||
<button
|
||||
onClick={() => {
|
||||
setStep('select-insight');
|
||||
setActiveInsight(null);
|
||||
setSelectedInsightType(null);
|
||||
}}
|
||||
className="flex items-center gap-2 px-4 py-2 bg-dark-700 hover:bg-dark-600 text-white rounded-lg transition-colors"
|
||||
>
|
||||
<Lightbulb className="w-4 h-4" />
|
||||
Generate Another Insight
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{/* Fullscreen Modal */}
|
||||
{fullscreen && activeInsight?.plotly_figure && (
|
||||
<div className="fixed inset-0 z-50 bg-dark-900 flex flex-col">
|
||||
<div className="flex items-center justify-between p-4 border-b border-dark-600">
|
||||
<h2 className="text-xl font-bold text-white">
|
||||
{activeInsight.insight_type.replace('_', ' ').toUpperCase()}
|
||||
{activeInsight.insight_name || activeInsight.insight_type.replace(/_/g, ' ')}
|
||||
</h2>
|
||||
<button
|
||||
onClick={() => setFullscreen(false)}
|
||||
|
||||
@@ -57,6 +57,10 @@ The Extractor Library provides centralized, reusable functions for extracting ph
| E18 | Modal Mass | `extract_modal_mass()` | .f06 | kg |
| **Phase 4 (2025-12-19)** | | | | |
| E19 | Part Introspection | `introspect_part()` | .prt | dict |
| **Phase 5 (2025-12-22)** | | | | |
| E20 | Zernike Analytic (Parabola) | `extract_zernike_analytic()` | .op2 + .bdf | nm |
| E21 | Zernike Method Comparison | `compare_zernike_methods()` | .op2 + .bdf | dict |
| E22 | **Zernike OPD (RECOMMENDED)** | `extract_zernike_opd()` | .op2 + .bdf | nm |

---

@@ -326,6 +330,161 @@ results = builder.evaluate_all()
# Returns: {'rel_40_vs_20': 4.2, 'rel_60_vs_20': 8.7, 'rms_90': 15.3}
```

### E20: Zernike Analytic (Parabola-Based with Lateral Correction)

**Module**: `optimization_engine.extractors.extract_zernike_opd`

Uses an analytical parabola formula to account for lateral (X, Y) displacements. Requires knowing the focal length.

**Use when**: You know the optical prescription and want to compare against theoretical parabola.

```python
from optimization_engine.extractors import extract_zernike_analytic, ZernikeAnalyticExtractor

# Full extraction with lateral displacement diagnostics
result = extract_zernike_analytic(
    op2_file,
    subcase="20",
    focal_length=5000.0,  # Required for analytic method
)

# Class-based usage
extractor = ZernikeAnalyticExtractor(op2_file, focal_length=5000.0)
result = extractor.extract_subcase('20')
```

### E21: Zernike Method Comparison

**Module**: `optimization_engine.extractors.extract_zernike_opd`

Compare standard (Z-only) vs analytic (parabola) methods.

```python
from optimization_engine.extractors import compare_zernike_methods

comparison = compare_zernike_methods(op2_file, subcase="20", focal_length=5000.0)
print(comparison['recommendation'])
```

### E22: Zernike OPD (RECOMMENDED - Most Rigorous)

**Module**: `optimization_engine.extractors.extract_zernike_figure`

**MOST RIGOROUS METHOD** for computing WFE. Uses the actual BDF geometry (filtered to OP2 nodes) as the reference surface instead of assuming a parabolic shape.

**Advantages over E20 (Analytic)**:
- No need to know focal length or optical prescription
- Works with **any surface shape**: parabola, hyperbola, asphere, freeform
- Uses the actual mesh geometry as the "ideal" surface reference
- Interpolates `z_figure` at deformed `(x+dx, y+dy)` position for true OPD

**How it works**:
1. Load BDF geometry for nodes present in OP2 (figure surface nodes)
2. Build 2D interpolator `z_figure(x, y)` from undeformed coordinates
3. For each deformed node at `(x0+dx, y0+dy, z0+dz)`:
   - Interpolate `z_figure` at the deformed (x,y) position
   - Surface error = `(z0 + dz) - z_interpolated`
4. Fit Zernike polynomials to the surface error map

```python
from optimization_engine.extractors import (
    ZernikeOPDExtractor,
    extract_zernike_opd,
    extract_zernike_opd_filtered_rms,
)

# Full extraction with diagnostics
result = extract_zernike_opd(op2_file, subcase="20")
# Returns: {
#   'global_rms_nm': float,
#   'filtered_rms_nm': float,
#   'max_lateral_displacement_um': float,
#   'rms_lateral_displacement_um': float,
#   'coefficients': list,  # 50 Zernike coefficients
#   'method': 'opd',
#   'figure_file': 'BDF (filtered to OP2)',
#   ...
# }

# Simple usage for optimization objective
rms = extract_zernike_opd_filtered_rms(op2_file, subcase="20")

# Class-based for multi-subcase analysis
extractor = ZernikeOPDExtractor(op2_file)
results = extractor.extract_all_subcases()
```

#### Relative WFE (CRITICAL for Optimization)

**Use `extract_relative()` for computing relative WFE between subcases!**

> **BUG WARNING (V10 Fix - 2025-12-22)**: The WRONG way to compute relative WFE is:
> ```python
> # ❌ WRONG: Difference of RMS values
> result_40 = extractor.extract_subcase("3")
> result_ref = extractor.extract_subcase("2")
> rel_40 = abs(result_40['filtered_rms_nm'] - result_ref['filtered_rms_nm'])  # WRONG!
> ```
>
> This computes `|RMS(WFE_40) - RMS(WFE_20)|`, which is NOT the same as `RMS(WFE_40 - WFE_20)`.
> The difference can be **3-4x lower** than the correct value, leading to false "too good to be true" results.

**The CORRECT approach uses `extract_relative()`:**

```python
# ✅ CORRECT: Computes node-by-node WFE difference, then fits Zernike, then RMS
extractor = ZernikeOPDExtractor(op2_file)

rel_40 = extractor.extract_relative("3", "2")  # 40 deg vs 20 deg
rel_60 = extractor.extract_relative("4", "2")  # 60 deg vs 20 deg
rel_90 = extractor.extract_relative("1", "2")  # 90 deg vs 20 deg

# Returns: {
#   'target_subcase': '3',
#   'reference_subcase': '2',
#   'method': 'figure_opd_relative',
#   'relative_global_rms_nm': float,     # RMS of the difference field
#   'relative_filtered_rms_nm': float,   # Use this for optimization!
#   'relative_rms_filter_j1to3': float,  # For manufacturing/optician workload
#   'max_lateral_displacement_um': float,
#   'rms_lateral_displacement_um': float,
#   'delta_coefficients': list,          # Zernike coeffs of difference
# }

# Use in optimization objectives:
objectives = {
    'rel_filtered_rms_40_vs_20': rel_40['relative_filtered_rms_nm'],
    'rel_filtered_rms_60_vs_20': rel_60['relative_filtered_rms_nm'],
    'mfg_90_optician_workload': rel_90['relative_rms_filter_j1to3'],
}
```

**Mathematical Difference**:
```
WRONG:   |RMS(WFE_40) - RMS(WFE_20)| = |6.14 - 8.13| = 1.99 nm  ← FALSE!
CORRECT: RMS(WFE_40 - WFE_20) = RMS(diff_field) = 6.59 nm  ← TRUE!
```
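The inequality behind this bug is easy to reproduce with a few lines of NumPy (synthetic fields, not data from any mirror model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic WFE maps (nm) sharing a large common component,
# mimicking two orientations of the same mirror
common = rng.normal(0.0, 5.0, 10_000)
wfe_20 = common + rng.normal(0.0, 3.0, 10_000)
wfe_40 = common + rng.normal(0.0, 3.0, 10_000)

def rms(a):
    return float(np.sqrt(np.mean(a ** 2)))

wrong = abs(rms(wfe_40) - rms(wfe_20))  # difference of two RMS values
right = rms(wfe_40 - wfe_20)            # RMS of the node-by-node difference

# Reverse triangle inequality: |RMS(a) - RMS(b)| <= RMS(a - b),
# so the "wrong" number can only understate the true relative WFE
assert wrong <= right
```

Here `wrong` comes out far below `right` because the shared component cancels in the two RMS values but not in the difference field; `extract_relative()` computes the `right`-style quantity.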

The Standard `ZernikeExtractor` also has `extract_relative()` if you don't need the OPD method:
```python
from optimization_engine.extractors import ZernikeExtractor

extractor = ZernikeExtractor(op2_file, n_modes=50, filter_orders=4)
rel_40 = extractor.extract_relative("3", "2")  # Z-only method
```

**Backwards compatibility**: The old names (`ZernikeFigureExtractor`, `extract_zernike_figure`, `extract_zernike_figure_rms`) still work but are deprecated.

**When to use which Zernike method**:

| Method | Class | When to Use | Assumptions |
|--------|-------|-------------|-------------|
| Standard (E8) | `ZernikeExtractor` | Quick analysis, negligible lateral displacement | Z-only at original (x,y) |
| Analytic (E20) | `ZernikeAnalyticExtractor` | Known focal length, parabolic surface | Parabola shape |
| **OPD (E22)** | `ZernikeOPDExtractor` | **Any surface, most rigorous** | None - uses actual geometry |

**IMPORTANT**: Do NOT provide a figure.dat file unless you're certain it matches your BDF geometry exactly. The default behavior (using BDF geometry filtered to OP2 nodes) is the safest option.

---

## Code Reuse Protocol
@@ -698,7 +857,9 @@ optimization_engine/extractors/
├── extract_mass_from_expression.py   # E5
├── field_data_extractor.py           # E6
├── stiffness_calculator.py           # E7
├── extract_zernike.py                # E8, E9
├── extract_zernike.py                # E8, E9 (Standard Z-only)
├── extract_zernike_opd.py            # E20, E21 (Parabola OPD)
├── extract_zernike_figure.py         # E22 (Figure OPD - most rigorous)
├── zernike_helpers.py                # E10
├── extract_part_mass_material.py     # E11 (Part mass & material)
├── extract_zernike_surface.py        # Surface utilities
@@ -728,3 +889,4 @@ nx_journals/
| 1.2 | 2025-12-06 | Added Phase 3: E15-E17 (thermal), E18 (modal mass) |
| 1.3 | 2025-12-07 | Added Element Type Selection Guide; documented shell vs solid stress columns |
| 1.4 | 2025-12-19 | Added Phase 4: E19 (comprehensive part introspection) |
| 1.5 | 2025-12-22 | Added Phase 5: E20 (Parabola OPD), E21 (comparison), E22 (Figure OPD - most rigorous) |

@@ -26,12 +26,32 @@ Study Insights provide **physics understanding** of optimization results through

| Type ID | Name | Applicable To | Data Required |
|---------|------|---------------|---------------|
| `zernike_dashboard` | **Zernike Dashboard (RECOMMENDED)** | Mirror, optics | OP2 with displacement subcases |
| `zernike_wfe` | Zernike WFE Analysis | Mirror, optics | OP2 with displacement subcases |
| `zernike_opd_comparison` | Zernike OPD Method Comparison | Mirror, optics, lateral | OP2 with displacement subcases |
| `msf_zernike` | MSF Zernike Analysis | Mirror, optics | OP2 with displacement subcases |
| `stress_field` | Stress Distribution | Structural, bracket, beam | OP2 with stress results |
| `modal` | Modal Analysis | Vibration, dynamic | OP2 with eigenvalue/eigenvector |
| `thermal` | Thermal Analysis | Thermo-structural | OP2 with temperature results |
| `design_space` | Design Space Explorer | All optimization studies | study.db with 5+ trials |

### Zernike Method Comparison: Standard vs OPD

The Zernike insights now support **two WFE computation methods**:

| Method | Description | When to Use |
|--------|-------------|-------------|
| **Standard (Z-only)** | Uses only Z-displacement at original (x,y) coordinates | Quick analysis, negligible lateral displacement |
| **OPD (X,Y,Z)** ← RECOMMENDED | Accounts for lateral (X,Y) displacement via interpolation | Any surface with gravity loads, most rigorous |

**How OPD method works**:
1. Builds interpolator from undeformed BDF mesh geometry
2. For each deformed node at `(x+dx, y+dy, z+dz)`, interpolates `Z_ideal` at new XY position
3. Computes `WFE = z_deformed - Z_ideal(x_def, y_def)`
4. Fits Zernike polynomials to the surface error map

**Typical difference**: OPD method gives **8-11% higher** WFE values than Standard (more conservative/accurate).
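The steps above can be sketched in a few lines of NumPy. This is a toy illustration, not the insight's implementation: an analytic parabola with a hypothetical `focal_length` stands in for the interpolator built from the mesh, and the final Zernike fit (step 4) is omitted.

```python
import numpy as np

# Stand-in for the interpolator built from the undeformed BDF mesh:
# an analytic parabola z = r^2 / (4 f) plays the role of Z_ideal(x, y).
focal_length = 5000.0  # mm (hypothetical value for illustration)

def z_ideal(x, y):
    return (x ** 2 + y ** 2) / (4.0 * focal_length)

# Synthetic undeformed node positions and small displacements
rng = np.random.default_rng(1)
x0 = rng.uniform(-600.0, 600.0, 1000)
y0 = rng.uniform(-600.0, 600.0, 1000)
z0 = z_ideal(x0, y0)
dx, dy = rng.normal(0.0, 1e-4, (2, 1000))  # lateral displacement (mm)
dz = rng.normal(0.0, 1e-4, 1000)           # axial displacement (mm)

# Steps 2-3: evaluate the ideal figure at the DEFORMED (x, y) position
wfe = (z0 + dz) - z_ideal(x0 + dx, y0 + dy)

# The Standard method ignores the lateral terms entirely:
wfe_std = dz

# Lateral terms shift the comparison point, so OPD and Standard WFE differ
assert np.max(np.abs(wfe - wfe_std)) > 0.0
```

The gap between `wfe` and `wfe_std` grows with surface slope and lateral displacement, which is consistent with the 8-11% difference reported above.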

---

## Architecture
@@ -40,11 +60,13 @@ Study Insights provide **physics understanding** of optimization results through

```
optimization_engine/insights/
├── __init__.py               # Registry and public API
├── base.py                   # StudyInsight base class, InsightConfig, InsightResult
├── zernike_wfe.py            # Mirror wavefront error visualization
├── stress_field.py           # Stress contour visualization
├── modal_analysis.py         # Mode shape visualization
├── __init__.py               # Registry and public API
├── base.py                   # StudyInsight base class, InsightConfig, InsightResult
├── zernike_wfe.py            # Mirror wavefront error visualization (50 modes)
├── zernike_opd_comparison.py # OPD vs Standard method comparison (lateral disp. analysis)
├── msf_zernike.py            # MSF band decomposition (100 modes, LSF/MSF/HSF)
├── stress_field.py           # Stress contour visualization
├── modal_analysis.py         # Mode shape visualization
├── thermal_field.py          # Temperature distribution
└── design_space.py           # Parameter-objective exploration
```
@@ -53,7 +75,10 @@ optimization_engine/insights/

```python
StudyInsight (ABC)
├── ZernikeWFEInsight
├── ZernikeDashboardInsight      # RECOMMENDED: Unified dashboard with all views
├── ZernikeWFEInsight            # Standard 50-mode WFE analysis (with OPD toggle)
├── ZernikeOPDComparisonInsight  # OPD method comparison (lateral displacement)
├── MSFZernikeInsight            # 100-mode MSF band analysis
├── StressFieldInsight
├── ModalInsight
├── ThermalInsight
@@ -167,14 +192,54 @@ result = insight.generate(config)

## Insight Type Details

### 0. Zernike Dashboard (`zernike_dashboard`) - RECOMMENDED

**Purpose**: Unified dashboard with all orientations (40°, 60°, 90°) and MSF band analysis on one page. Light theme, executive summary, and method comparison.

**Generates**: 1 comprehensive HTML file with:
- Executive summary with metric cards (40-20, 60-20, MFG workload)
- MSF band analysis (LSF/MSF/HSF decomposition)
- 3D surface plots for each orientation
- Zernike coefficient bar charts color-coded by band

**Configuration**:
```python
config = InsightConfig(
    extra={
        'n_modes': 50,
        'filter_low_orders': 4,
        'theme': 'light',  # Light theme for reports
    }
)
```

**Summary Output**:
```python
{
    '40_vs_20_filtered_rms': 6.53,   # nm (OPD method)
    '60_vs_20_filtered_rms': 14.21,  # nm (OPD method)
    '90_optician_workload': 26.34,   # nm (J1-J3 filtered)
    'msf_rss_40': 2.1,               # nm (MSF band contribution)
}
```

### 1. Zernike WFE Analysis (`zernike_wfe`)

**Purpose**: Visualize wavefront error for mirror optimization with Zernike polynomial decomposition.
**Purpose**: Visualize wavefront error for mirror optimization with Zernike polynomial decomposition. **Now includes Standard/OPD method toggle and lateral displacement maps**.

**Generates**: 3 HTML files
- `zernike_*_40_vs_20.html` - 40° vs 20° relative WFE
- `zernike_*_60_vs_20.html` - 60° vs 20° relative WFE
- `zernike_*_90_mfg.html` - 90° manufacturing (absolute)
**Generates**: 6 HTML files
- `zernike_*_40_vs_20.html` - 40° vs 20° relative WFE (with method toggle)
- `zernike_*_40_lateral.html` - Lateral displacement map for 40°
- `zernike_*_60_vs_20.html` - 60° vs 20° relative WFE (with method toggle)
- `zernike_*_60_lateral.html` - Lateral displacement map for 60°
- `zernike_*_90_mfg.html` - 90° manufacturing (with method toggle)
- `zernike_*_90_mfg_lateral.html` - Lateral displacement map for 90°

**Features**:
- Toggle buttons to switch between **Standard (Z-only)** and **OPD (X,Y,Z)** methods
- Toggle between WFE view and **ΔX, ΔY, ΔZ displacement components**
- Metrics comparison table showing both methods side-by-side
- Lateral displacement statistics (Max, RMS in µm)

**Configuration**:
```python
@@ -192,14 +257,81 @@ config = InsightConfig(
**Summary Output**:
```python
{
    '40_vs_20_filtered_rms': 45.2,  # nm
    '60_vs_20_filtered_rms': 78.3,  # nm
    '90_mfg_filtered_rms': 120.5,   # nm
    '90_optician_workload': 89.4,   # nm (J1-J3 filtered)
    '40_vs_20_filtered_rms_std': 6.01,  # nm (Standard method)
    '40_vs_20_filtered_rms_opd': 6.53,  # nm (OPD method)
    '60_vs_20_filtered_rms_std': 12.81, # nm
    '60_vs_20_filtered_rms_opd': 14.21, # nm
    '90_mfg_filtered_rms_std': 24.5,    # nm
    '90_mfg_filtered_rms_opd': 26.34,   # nm
    '90_optician_workload': 26.34,      # nm (J1-J3 filtered)
    'lateral_40_max_um': 0.234,  # µm max lateral displacement
    'lateral_60_max_um': 0.312,  # µm
    'lateral_90_max_um': 0.089,  # µm
}
```

### 2. Stress Distribution (`stress_field`)
### 2. MSF Zernike Analysis (`msf_zernike`)

**Purpose**: Detailed mid-spatial frequency analysis for telescope mirrors with gravity-induced support print-through.

**Generates**: 1 comprehensive HTML file with:
- Band decomposition table (LSF/MSF/HSF RSS metrics)
- MSF-only 3D surface visualization
- Coefficient bar chart color-coded by band
- Dominant MSF mode identification
- Mesh resolution analysis

**Band Definitions** (for 1.2m class mirror):

| Band | Zernike Order | Feature Size | Physical Meaning |
|------|---------------|--------------|------------------|
| LSF | n ≤ 10 | > 120 mm | M2 hexapod correctable |
| MSF | n = 11-50 | 24-109 mm | Support print-through |
| HSF | n > 50 | < 24 mm | Near mesh resolution limit |
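Given a vector of Zernike coefficients indexed by Noll `j`, the band split above reduces to grouping by radial order `n` and taking a root-sum-square per group. A minimal sketch (the helper names are illustrative, not the module's API; it assumes `coeffs_nm[k]` holds the coefficient of Noll index `j = k + 1`):

```python
import numpy as np

def noll_radial_order(j: int) -> int:
    """Radial order n of Noll index j (j=1 is piston, n=0).

    Uses the fact that orders 0..n contain (n+1)(n+2)/2 terms in total.
    """
    n = 0
    while (n + 1) * (n + 2) // 2 < j:
        n += 1
    return n

def band_rss(coeffs_nm, lsf_max=10, msf_max=50):
    """Split coefficients (nm) into LSF/MSF/HSF root-sum-square totals."""
    bands = {'lsf': 0.0, 'msf': 0.0, 'hsf': 0.0}
    for k, c in enumerate(coeffs_nm):
        n = noll_radial_order(k + 1)
        key = 'lsf' if n <= lsf_max else ('msf' if n <= msf_max else 'hsf')
        bands[key] += c ** 2
    return {key: float(np.sqrt(v)) for key, v in bands.items()}

print(band_rss([3.0, 4.0]))  # → {'lsf': 5.0, 'msf': 0.0, 'hsf': 0.0}
```

The RSS of a band approximates that band's RMS contribution only for orthonormal (Noll-normalized) coefficients; check the convention your fit uses before comparing numbers.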

**Configuration**:
```python
config = InsightConfig(
    extra={
        'n_modes': 100,   # Higher than zernike_wfe (100 vs 50)
        'lsf_max': 10,    # n ≤ 10 is LSF
        'msf_max': 50,    # n = 11-50 is MSF
        'disp_unit': 'mm',
    }
)
```

**Analyses Performed**:
- Absolute WFE at each orientation (40°, 60°, 90°)
- Relative to 20° (operational reference)
- Relative to 90° (manufacturing/polishing reference)

**Summary Output**:
```python
{
    'n_modes': 100,
    'lsf_max_order': 10,
    'msf_max_order': 50,
    'mesh_nodes': 78290,
    'mesh_spacing_mm': 4.1,
    'max_resolvable_order': 157,
    '40deg_vs_20deg_lsf_rss': 12.3,   # nm
    '40deg_vs_20deg_msf_rss': 8.7,    # nm - KEY METRIC
    '40deg_vs_20deg_total_rss': 15.2, # nm
    '40deg_vs_20deg_msf_pct': 33.0,   # % of total in MSF band
    # ... similar for 60deg, 90deg
}
```

**When to Use**:
- Analyzing support structure print-through
- Quantifying gravity-induced MSF content
- Comparing MSF at different orientations
- Validating mesh resolution is adequate for MSF capture

---

### 3. Stress Distribution (`stress_field`)

**Purpose**: Visualize Von Mises stress distribution with hot spot identification.

@@ -225,7 +357,7 @@ config = InsightConfig(
}
```

### 3. Modal Analysis (`modal`)
### 4. Modal Analysis (`modal`)

**Purpose**: Visualize natural frequencies and mode shapes.

@@ -249,7 +381,7 @@ config = InsightConfig(
}
```

### 4. Thermal Analysis (`thermal`)
### 5. Thermal Analysis (`thermal`)

**Purpose**: Visualize temperature distribution and gradients.

@@ -273,7 +405,7 @@ config = InsightConfig(
}
```

### 5. Design Space Explorer (`design_space`)
### 6. Design Space Explorer (`design_space`)

**Purpose**: Visualize parameter-objective relationships from optimization trials.

@@ -369,31 +501,44 @@ python -m optimization_engine.insights list

---

## Dashboard Integration (Future)
## Dashboard Integration

The insights module is designed for future dashboard integration:
The Insights tab in the Atomizer Dashboard provides a 3-step workflow:

```python
# Backend API endpoint
@app.get("/api/study/{study_name}/insights")
def list_study_insights(study_name: str):
    study_path = Path(f"studies/{study_name}")
    return list_available_insights(study_path)
### Step 1: Select Iteration
- Lists all available iterations (iter1, iter2, etc.) and best_design_archive
- Shows OP2 file name and modification timestamp
- Auto-selects "Best Design (Recommended)" if available

@app.post("/api/study/{study_name}/insights/{type}/generate")
def generate_insight(study_name: str, type: str, config: dict = {}):
    insight = get_insight(type, Path(f"studies/{study_name}"))
    result = insight.generate(InsightConfig(**config))
    return {
        'success': result.success,
        'html_path': str(result.html_path),
        'summary': result.summary
    }
### Step 2: Choose Insight Type
- Groups insights by category (Optical, Structural, Thermal, etc.)
- Shows insight name and description
- Click to select, then "Generate Insight"

@app.get("/api/study/{study_name}/insights/{type}/plotly")
def get_insight_plotly(study_name: str, type: str):
    insight = get_insight(type, Path(f"studies/{study_name}"))
    return insight.get_plotly_data()
### Step 3: View Result
- Displays summary metrics (RMS values, etc.)
- Embedded Plotly visualization (if available)
- "Open Full View" button for multi-file insights (like Zernike WFE)
- Fullscreen mode for detailed analysis

### API Endpoints

```
GET  /api/insights/studies/{id}/iterations       # List available iterations
GET  /api/insights/studies/{id}/available        # List available insight types
GET  /api/insights/studies/{id}/generated        # List previously generated files
POST /api/insights/studies/{id}/generate/{type}  # Generate insight for iteration
GET  /api/insights/studies/{id}/view/{type}      # View generated HTML
```

### Generate Request Body

```json
{
  "iteration": "best_design_archive",  // or "iter5", etc.
  "trial_id": null,                    // Optional specific trial
  "config": {}                         // Insight-specific config
}
```

---
@@ -402,4 +547,7 @@ def get_insight_plotly(study_name: str, type: str):

| Version | Date | Changes |
|---------|------|---------|
| 1.3.0 | 2025-12-22 | Added ZernikeDashboardInsight (unified view), OPD method toggle, lateral displacement maps |
| 1.2.0 | 2024-12-22 | Dashboard overhaul: 3-step workflow, iteration selection, faster loading |
| 1.1.0 | 2024-12-21 | Added MSF Zernike Analysis insight (6 insight types) |
| 1.0.0 | 2024-12-20 | Initial release with 5 insight types |

@@ -27,7 +27,7 @@ Phase 4 Extractors (2025-12-19):
- Part Introspection (E12): Comprehensive .prt analysis (expressions, mass, materials, attributes, groups, features)
"""

# Zernike extractor for telescope mirror optimization
# Zernike extractor for telescope mirror optimization (standard Z-only method)
from optimization_engine.extractors.extract_zernike import (
    ZernikeExtractor,
    extract_zernike_from_op2,
@@ -35,6 +35,29 @@ from optimization_engine.extractors.extract_zernike import (
    extract_zernike_relative_rms,
)

# Analytic (parabola-based) Zernike extractor (accounts for lateral X/Y displacement)
# Uses parabola formula - requires knowing focal length
from optimization_engine.extractors.extract_zernike_opd import (
    ZernikeAnalyticExtractor,
    extract_zernike_analytic,
    extract_zernike_analytic_filtered_rms,
    compare_zernike_methods,
    # Backwards compatibility (deprecated)
    ZernikeOPDExtractor as _ZernikeOPDExtractor_deprecated,
)

# OPD-based Zernike extractor (uses actual mesh geometry, no shape assumption)
# MOST RIGOROUS method - RECOMMENDED for telescope mirror optimization
from optimization_engine.extractors.extract_zernike_figure import (
    ZernikeOPDExtractor,
    extract_zernike_opd,
    extract_zernike_opd_filtered_rms,
    # Backwards compatibility (deprecated)
    ZernikeFigureExtractor,
    extract_zernike_figure,
    extract_zernike_figure_rms,
)

# Part mass and material extractor (from NX .prt files)
from optimization_engine.extractors.extract_part_mass_material import (
    extract_part_mass_material,
@@ -114,11 +137,24 @@ __all__ = [
    'extract_total_reaction_force',
    'extract_reaction_component',
    'check_force_equilibrium',
    # Zernike (telescope mirrors)
    # Zernike (telescope mirrors) - Standard Z-only method
    'ZernikeExtractor',
    'extract_zernike_from_op2',
    'extract_zernike_filtered_rms',
    'extract_zernike_relative_rms',
    # Zernike OPD (RECOMMENDED - uses actual geometry, no shape assumption)
    'ZernikeOPDExtractor',
    'extract_zernike_opd',
    'extract_zernike_opd_filtered_rms',
    # Zernike Analytic (parabola-based with lateral displacement correction)
    'ZernikeAnalyticExtractor',
    'extract_zernike_analytic',
    'extract_zernike_analytic_filtered_rms',
    'compare_zernike_methods',
    # Backwards compatibility (deprecated)
    'ZernikeFigureExtractor',
    'extract_zernike_figure',
    'extract_zernike_figure_rms',
    # Temperature (Phase 3 - thermal)
    'extract_temperature',
    'extract_temperature_gradient',

852 optimization_engine/extractors/extract_zernike_figure.py Normal file
@@ -0,0 +1,852 @@
|
||||
"""
|
||||
OPD Zernike Extractor (Most Rigorous Method)
|
||||
=============================================
|
||||
|
||||
This is the RECOMMENDED Zernike extractor for telescope mirror optimization.
|
||||
It computes surface error using the ACTUAL undeformed mesh geometry as the
|
||||
reference surface, rather than assuming any analytical shape.
|
||||
|
||||
WHY THIS IS THE MOST ROBUST:
|
||||
----------------------------
|
||||
1. Works with ANY surface shape (parabola, hyperbola, asphere, freeform)
|
||||
2. No need to know/estimate focal length or conic constant
|
||||
3. Uses the actual mesh geometry as ground truth
|
||||
4. Eliminates errors from prescription/shape approximations
|
||||
5. Properly accounts for lateral (X, Y) displacement via interpolation
|
||||
|
||||
HOW IT WORKS:
|
||||
-------------
|
||||
The key insight: The BDF geometry for nodes present in OP2 IS the undeformed
|
||||
reference surface (i.e., the optical figure before deformation).
|
||||
|
||||
1. Load BDF geometry for nodes that have displacements in OP2
|
||||
2. Build 2D interpolator z_figure(x, y) from undeformed coordinates
|
||||
3. For each deformed node at (x0+dx, y0+dy, z0+dz):
|
||||
- Interpolate z_figure at the deformed (x,y) position
|
||||
- Surface error = (z0 + dz) - z_interpolated
|
||||
4. Fit Zernike polynomials to the surface error map
|
||||
|
||||
The interpolation accounts for the fact that when a node moves laterally,
|
||||
it should be compared against the figure height at its NEW position.
|
||||
|
||||
USAGE:
|
||||
------
|
||||
from optimization_engine.extractors import ZernikeOPDExtractor
|
||||
|
||||
extractor = ZernikeOPDExtractor(op2_file)
|
||||
result = extractor.extract_subcase('20')
|
||||
|
||||
# Simple convenience function
|
||||
from optimization_engine.extractors import extract_zernike_opd
|
||||
result = extract_zernike_opd(op2_file, subcase='20')
|
||||
|
||||
Author: Atomizer Framework
|
||||
Date: 2024
|
||||
"""
|
||||
|
||||
from pathlib import Path
|
||||
from typing import Dict, Any, Optional, List, Tuple, Union
|
||||
import numpy as np
|
||||
from scipy.interpolate import LinearNDInterpolator, CloughTocher2DInterpolator
|
||||
import warnings
|
||||
|
||||
# Import base Zernike functionality
|
||||
from .extract_zernike import (
|
||||
compute_zernike_coefficients,
|
||||
compute_aberration_magnitudes,
|
||||
zernike_noll,
|
||||
zernike_label,
|
||||
read_node_geometry,
|
||||
find_geometry_file,
|
||||
extract_displacements_by_subcase,
|
||||
UNIT_TO_NM,
|
||||
DEFAULT_N_MODES,
|
||||
DEFAULT_FILTER_ORDERS,
|
||||
)
|
||||
|
||||
try:
|
||||
from pyNastran.bdf.bdf import BDF
|
||||
except ImportError:
|
||||
BDF = None
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Figure Geometry Parser
|
||||
# ============================================================================
|
||||
|
||||
def parse_nastran_grid_large(line: str, continuation: str) -> Tuple[int, float, float, float]:
    """
    Parse a GRID* (large field format) card from Nastran BDF.

    Large field format (16-char fields):
        GRID*   ID              CP              X               Y               +
        *       Z               CD

    Args:
        line: First line of GRID* card
        continuation: Continuation line starting with *

    Returns:
        Tuple of (node_id, x, y, z)
    """
    # GRID* line has: GRID* | ID | CP | X | Y | continuation marker
    # Fields are 16 characters each after the 8-char GRID* identifier

    # Parse first line
    node_id = int(line[8:24].strip())
    # CP (coordinate system) at 24:40 - skip
    x = float(line[40:56].strip())
    y = float(line[56:72].strip())

    # Parse continuation line for Z
    # Continuation line: * | Z | CD
    z = float(continuation[8:24].strip())

    return node_id, x, y, z


def parse_nastran_grid_small(line: str) -> Tuple[int, float, float, float]:
    """
    Parse a GRID (small field format) card from Nastran BDF.

    Small field format (8-char fields):
        GRID    ID      CP      X       Y       Z       CD      PS      SEID

    Args:
        line: GRID card line

    Returns:
        Tuple of (node_id, x, y, z)
    """
    parts = line.split()
    node_id = int(parts[1])
    # parts[2] is CP
    x = float(parts[3]) if len(parts) > 3 else 0.0
    y = float(parts[4]) if len(parts) > 4 else 0.0
    z = float(parts[5]) if len(parts) > 5 else 0.0

    return node_id, x, y, z

def load_figure_geometry(figure_path: Union[str, Path]) -> Dict[int, np.ndarray]:
    """
    Load figure node geometry from a Nastran DAT/BDF file.

    Supports both GRID (small field) and GRID* (large field) formats.

    Args:
        figure_path: Path to figure.dat file

    Returns:
        Dict mapping node_id to (x, y, z) coordinates
    """
    figure_path = Path(figure_path)

    if not figure_path.exists():
        raise FileNotFoundError(f"Figure file not found: {figure_path}")

    geometry = {}

    with open(figure_path, 'r', encoding='utf-8', errors='ignore') as f:
        lines = f.readlines()

    i = 0
    while i < len(lines):
        line = lines[i]

        # Skip comments and empty lines
        if line.startswith('$') or line.strip() == '':
            i += 1
            continue

        # Large field format: GRID*
        if line.startswith('GRID*'):
            if i + 1 < len(lines):
                continuation = lines[i + 1]
                try:
                    node_id, x, y, z = parse_nastran_grid_large(line, continuation)
                    geometry[node_id] = np.array([x, y, z])
                except (ValueError, IndexError) as e:
                    warnings.warn(f"Failed to parse GRID* at line {i}: {e}")
            i += 2
            continue

        # Small field format: GRID
        elif line.startswith('GRID') and not line.startswith('GRID*'):
            try:
                node_id, x, y, z = parse_nastran_grid_small(line)
                geometry[node_id] = np.array([x, y, z])
            except (ValueError, IndexError) as e:
                warnings.warn(f"Failed to parse GRID at line {i}: {e}")
            i += 1
            continue

        # Check for END
        if 'ENDDATA' in line:
            break

        i += 1

    if not geometry:
        raise ValueError(f"No GRID cards found in {figure_path}")

    return geometry

def build_figure_interpolator(
    figure_geometry: Dict[int, np.ndarray],
    method: str = 'linear'
) -> LinearNDInterpolator:
    """
    Build a 2D interpolator for the figure surface z(x, y).

    Args:
        figure_geometry: Dict mapping node_id to (x, y, z)
        method: Interpolation method ('linear' or 'cubic')

    Returns:
        Interpolator function that takes (x, y) and returns z
    """
    # Extract coordinates
    points = np.array(list(figure_geometry.values()))
    x = points[:, 0]
    y = points[:, 1]
    z = points[:, 2]

    # Build interpolator
    xy_points = np.column_stack([x, y])

    if method == 'cubic':
        # Clough-Tocher gives smoother results but is slower
        interpolator = CloughTocher2DInterpolator(xy_points, z)
    else:
        # Linear is faster and usually sufficient
        interpolator = LinearNDInterpolator(xy_points, z)

    return interpolator

# ============================================================================
# Figure-Based OPD Calculation
# ============================================================================

def compute_figure_opd(
    x0: np.ndarray,
    y0: np.ndarray,
    z0: np.ndarray,
    dx: np.ndarray,
    dy: np.ndarray,
    dz: np.ndarray,
    figure_interpolator
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray]:
    """
    Compute surface error using actual figure geometry as reference.

    The key insight: after lateral displacement, compare the deformed Z
    against what the ACTUAL figure surface Z is at that (x, y) position.

    Args:
        x0, y0, z0: Original node coordinates
        dx, dy, dz: Displacement components from FEA
        figure_interpolator: Interpolator for figure z(x, y)

    Returns:
        Tuple of:
        - x_def: Deformed X coordinates
        - y_def: Deformed Y coordinates
        - surface_error: Difference from ideal figure surface
        - lateral_magnitude: Magnitude of lateral displacement
    """
    # Deformed coordinates
    x_def = x0 + dx
    y_def = y0 + dy
    z_def = z0 + dz

    # Get ideal figure Z at deformed (x, y) positions
    z_figure_at_deformed = figure_interpolator(x_def, y_def)

    # Handle any NaN values (outside interpolation domain)
    nan_mask = np.isnan(z_figure_at_deformed)
    if np.any(nan_mask):
        # For points outside figure, use original Z as fallback
        z_figure_at_deformed[nan_mask] = z0[nan_mask]
        warnings.warn(
            f"{np.sum(nan_mask)} nodes outside figure interpolation domain, "
            "using original Z as fallback"
        )

    # Surface error = deformed Z - ideal figure Z at deformed position
    surface_error = z_def - z_figure_at_deformed

    # Lateral displacement magnitude
    lateral_magnitude = np.sqrt(dx**2 + dy**2)

    return x_def, y_def, surface_error, lateral_magnitude

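A quick self-check of the sign convention above, as a hedged sketch (the tilted plane, grid, and displacement values are illustrative assumptions, not part of the extractor): a node that slides laterally with no Z motion should show a negative surface error when the reference surface rises under it.

```python
# Sketch: verify the surface-error convention of compute_figure_opd on a
# synthetic plane z = 0.1 * x. A node sliding +0.5 in x with dz = 0 should
# read an error of -0.05, because the reference rises by 0.1 * 0.5 there.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Reference surface: a tilted plane sampled on a coarse grid
xs, ys = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))
pts = np.column_stack([xs.ravel(), ys.ravel()])
interp = LinearNDInterpolator(pts, 0.1 * pts[:, 0])

# One node at the origin, displaced laterally by +0.5 in x, dz = 0
x0, y0, z0 = np.array([0.0]), np.array([0.0]), np.array([0.0])
dx, dy, dz = np.array([0.5]), np.array([0.0]), np.array([0.0])

z_ref = interp(x0 + dx, y0 + dy)      # ideal Z at the NEW (x, y)
surface_error = (z0 + dz) - z_ref     # same formula as compute_figure_opd
print(round(float(surface_error[0]), 6))  # -0.05
```

Linear interpolation reproduces a plane exactly, so the check is tight; on a curved figure the same comparison holds to interpolation accuracy.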
# ============================================================================
# Main OPD Extractor Class (Most Rigorous Method)
# ============================================================================

class ZernikeOPDExtractor:
    """
    Zernike extractor using actual mesh geometry as the reference surface.

    THIS IS THE RECOMMENDED EXTRACTOR for telescope mirror optimization.

    This is the most rigorous approach for computing WFE because it:
    1. Uses the actual mesh geometry (not an analytical approximation)
    2. Accounts for lateral displacement via interpolation
    3. Works with any surface shape (parabola, hyperbola, asphere, freeform)
    4. Needs no focal length or optical prescription

    The extractor works in two modes:
    1. Default: Uses BDF geometry for nodes present in OP2 (RECOMMENDED)
    2. With figure_path: Uses explicit figure.dat for reference geometry

    Example:
        # Simple usage - BDF geometry filtered to OP2 nodes (RECOMMENDED)
        extractor = ZernikeOPDExtractor(op2_file)
        results = extractor.extract_all_subcases()

        # With explicit figure file (advanced - ensure coordinates match!)
        extractor = ZernikeOPDExtractor(op2_file, figure_path='figure.dat')
    """

    def __init__(
        self,
        op2_path: Union[str, Path],
        figure_path: Optional[Union[str, Path]] = None,
        bdf_path: Optional[Union[str, Path]] = None,
        displacement_unit: str = 'mm',
        n_modes: int = DEFAULT_N_MODES,
        filter_orders: int = DEFAULT_FILTER_ORDERS,
        interpolation_method: str = 'linear'
    ):
        """
        Initialize figure-based Zernike extractor.

        Args:
            op2_path: Path to OP2 results file
            figure_path: Path to figure.dat with undeformed figure geometry (OPTIONAL)
                If None, uses BDF geometry for nodes present in OP2
            bdf_path: Path to BDF geometry (auto-detected if None)
            displacement_unit: Unit of displacement in OP2
            n_modes: Number of Zernike modes to fit
            filter_orders: Number of low-order modes to filter for RMS
            interpolation_method: 'linear' or 'cubic' for figure interpolation
        """
        self.op2_path = Path(op2_path)
        self.figure_path = Path(figure_path) if figure_path else None
        self.bdf_path = Path(bdf_path) if bdf_path else find_geometry_file(self.op2_path)
        self.displacement_unit = displacement_unit
        self.n_modes = n_modes
        self.filter_orders = filter_orders
        self.interpolation_method = interpolation_method
        self.use_explicit_figure = figure_path is not None

        # Unit conversion
        self.nm_scale = UNIT_TO_NM.get(displacement_unit.lower(), 1e6)
        self.um_scale = self.nm_scale / 1000.0
        self.wfe_factor = 2.0 * self.nm_scale  # WFE = 2 * surface error

        # Lazy-loaded data
        self._figure_geo = None
        self._figure_interp = None
        self._node_geo = None
        self._displacements = None

    @property
    def node_geometry(self) -> Dict[int, np.ndarray]:
        """Lazy-load FEM node geometry from BDF."""
        if self._node_geo is None:
            self._node_geo = read_node_geometry(self.bdf_path)
        return self._node_geo

    @property
    def displacements(self) -> Dict[str, Dict[str, np.ndarray]]:
        """Lazy-load displacements from OP2."""
        if self._displacements is None:
            self._displacements = extract_displacements_by_subcase(self.op2_path)
        return self._displacements

    @property
    def figure_geometry(self) -> Dict[int, np.ndarray]:
        """
        Lazy-load figure geometry.

        If explicit figure_path provided, load from that file.
        Otherwise, use BDF geometry filtered to nodes present in OP2.
        """
        if self._figure_geo is None:
            if self.use_explicit_figure and self.figure_path:
                # Load from explicit figure.dat
                self._figure_geo = load_figure_geometry(self.figure_path)
            else:
                # Use BDF geometry filtered to OP2 nodes
                # Get all node IDs from OP2 (any subcase)
                first_subcase = next(iter(self.displacements.values()))
                op2_node_ids = set(int(nid) for nid in first_subcase['node_ids'])

                # Filter BDF geometry to only OP2 nodes
                self._figure_geo = {
                    nid: geo for nid, geo in self.node_geometry.items()
                    if nid in op2_node_ids
                }
        return self._figure_geo

    @property
    def figure_interpolator(self):
        """Lazy-build figure interpolator."""
        if self._figure_interp is None:
            self._figure_interp = build_figure_interpolator(
                self.figure_geometry,
                method=self.interpolation_method
            )
        return self._figure_interp

    def _build_figure_opd_data(self, subcase_label: str) -> Dict[str, np.ndarray]:
        """
        Build OPD data using figure geometry as reference.

        Uses the figure geometry (either from explicit file or BDF filtered to OP2 nodes)
        as the undeformed reference surface.
        """
        if subcase_label not in self.displacements:
            available = list(self.displacements.keys())
            raise ValueError(f"Subcase '{subcase_label}' not found. Available: {available}")

        data = self.displacements[subcase_label]
        node_ids = data['node_ids']
        disp = data['disp']

        # Get figure node IDs
        figure_node_ids = set(self.figure_geometry.keys())

        # Build arrays - only for nodes in figure
        x0, y0, z0 = [], [], []
        dx_arr, dy_arr, dz_arr = [], [], []
        matched_ids = []

        for nid, vec in zip(node_ids, disp):
            nid_int = int(nid)

            # Check if node is in figure
            if nid_int not in figure_node_ids:
                continue

            # Use figure geometry as reference (undeformed surface)
            fig_geo = self.figure_geometry[nid_int]

            x0.append(fig_geo[0])
            y0.append(fig_geo[1])
            z0.append(fig_geo[2])
            dx_arr.append(vec[0])
            dy_arr.append(vec[1])
            dz_arr.append(vec[2])
            matched_ids.append(nid_int)

        if not x0:
            raise ValueError(
                f"No nodes matched between figure ({len(figure_node_ids)} nodes) "
                f"and displacement data ({len(node_ids)} nodes)"
            )

        x0 = np.array(x0)
        y0 = np.array(y0)
        z0 = np.array(z0)
        dx_arr = np.array(dx_arr)
        dy_arr = np.array(dy_arr)
        dz_arr = np.array(dz_arr)

        # Compute figure-based OPD
        x_def, y_def, surface_error, lateral_disp = compute_figure_opd(
            x0, y0, z0, dx_arr, dy_arr, dz_arr,
            self.figure_interpolator
        )

        # Convert to WFE
        wfe_nm = surface_error * self.wfe_factor

        return {
            'node_ids': np.array(matched_ids),
            'x_original': x0,
            'y_original': y0,
            'z_original': z0,
            'dx': dx_arr,
            'dy': dy_arr,
            'dz': dz_arr,
            'x_deformed': x_def,
            'y_deformed': y_def,
            'surface_error': surface_error,
            'wfe_nm': wfe_nm,
            'lateral_disp': lateral_disp,
            'n_figure_nodes': len(self.figure_geometry),
            'n_matched_nodes': len(matched_ids)
        }

    def extract_subcase(
        self,
        subcase_label: str,
        include_coefficients: bool = True,
        include_diagnostics: bool = True
    ) -> Dict[str, Any]:
        """
        Extract Zernike metrics using figure-based OPD method.

        Args:
            subcase_label: Subcase identifier
            include_coefficients: Include all Zernike coefficients
            include_diagnostics: Include lateral displacement diagnostics

        Returns:
            Dict with RMS metrics, coefficients, and diagnostics
        """
        opd_data = self._build_figure_opd_data(subcase_label)

        # Use deformed coordinates for Zernike fitting
        X = opd_data['x_deformed']
        Y = opd_data['y_deformed']
        WFE = opd_data['wfe_nm']

        # Fit Zernike coefficients
        coeffs, R_max = compute_zernike_coefficients(X, Y, WFE, self.n_modes)

        # Compute RMS metrics
        x_c = X - np.mean(X)
        y_c = Y - np.mean(Y)
        r = np.hypot(x_c / R_max, y_c / R_max)
        theta = np.arctan2(y_c, x_c)

        # Low-order Zernike basis
        Z_low = np.column_stack([
            zernike_noll(j, r, theta) for j in range(1, self.filter_orders + 1)
        ])

        # Filtered WFE (J5+)
        wfe_filtered = WFE - Z_low @ coeffs[:self.filter_orders]

        global_rms = float(np.sqrt(np.mean(WFE**2)))
        filtered_rms = float(np.sqrt(np.mean(wfe_filtered**2)))

        # J1-J3 filtered (for manufacturing/optician)
        Z_j1to3 = np.column_stack([
            zernike_noll(j, r, theta) for j in range(1, 4)
        ])
        wfe_j1to3 = WFE - Z_j1to3 @ coeffs[:3]
        rms_j1to3 = float(np.sqrt(np.mean(wfe_j1to3**2)))

        # Aberration magnitudes
        aberrations = compute_aberration_magnitudes(coeffs)

        result = {
            'subcase': subcase_label,
            'method': 'figure_opd',
            'rms_wfe_nm': filtered_rms,  # Primary metric (J5+)
            'global_rms_nm': global_rms,
            'filtered_rms_nm': filtered_rms,
            'rms_filter_j1to3_nm': rms_j1to3,
            'n_nodes': len(X),
            'n_figure_nodes': opd_data['n_figure_nodes'],
            'figure_file': str(self.figure_path.name) if self.figure_path else 'BDF (filtered to OP2)',
            **aberrations
        }

        if include_diagnostics:
            lateral = opd_data['lateral_disp']
            result.update({
                'max_lateral_displacement_um': float(np.max(lateral) * self.um_scale),
                'rms_lateral_displacement_um': float(np.sqrt(np.mean(lateral**2)) * self.um_scale),
                'mean_lateral_displacement_um': float(np.mean(lateral) * self.um_scale),
            })

        if include_coefficients:
            result['coefficients'] = coeffs.tolist()
            result['coefficient_labels'] = [
                zernike_label(j) for j in range(1, self.n_modes + 1)
            ]

        return result

    def extract_all_subcases(self) -> Dict[Union[int, str], Dict[str, Any]]:
        """Extract metrics for all subcases."""
        results = {}
        for label in self.displacements.keys():
            try:
                # Convert label to int if possible for consistent keys
                key = int(label) if label.isdigit() else label
                results[key] = self.extract_subcase(label)
            except Exception as e:
                warnings.warn(f"Failed to extract subcase {label}: {e}")
        return results

    def extract_relative(
        self,
        target_subcase: str,
        reference_subcase: str,
        include_coefficients: bool = False
    ) -> Dict[str, Any]:
        """
        Extract Zernike metrics relative to a reference subcase using OPD method.

        Computes: WFE_relative = WFE_target(node_i) - WFE_reference(node_i)
        for each node, then fits Zernike to the difference field.

        This is the CORRECT way to compute relative WFE for optimization.
        It properly accounts for lateral (X,Y) displacement via OPD interpolation.

        Args:
            target_subcase: Subcase to analyze (e.g., "3" for 40 deg)
            reference_subcase: Reference subcase to subtract (e.g., "2" for 20 deg)
            include_coefficients: Whether to include all Zernike coefficients

        Returns:
            Dict with relative metrics: relative_filtered_rms_nm, relative_rms_filter_j1to3, etc.
        """
        # Build OPD data for both subcases
        target_data = self._build_figure_opd_data(target_subcase)
        ref_data = self._build_figure_opd_data(reference_subcase)

        # Build node ID -> WFE map for reference
        ref_node_to_wfe = {
            int(nid): wfe for nid, wfe in zip(ref_data['node_ids'], ref_data['wfe_nm'])
        }

        # Compute node-by-node relative WFE for common nodes
        X_rel, Y_rel, WFE_rel = [], [], []
        lateral_rel = []

        for i, nid in enumerate(target_data['node_ids']):
            nid = int(nid)
            if nid not in ref_node_to_wfe:
                continue

            # Use target's deformed coordinates for Zernike fitting
            X_rel.append(target_data['x_deformed'][i])
            Y_rel.append(target_data['y_deformed'][i])

            # Relative WFE = target WFE - reference WFE
            target_wfe = target_data['wfe_nm'][i]
            ref_wfe = ref_node_to_wfe[nid]
            WFE_rel.append(target_wfe - ref_wfe)

            lateral_rel.append(target_data['lateral_disp'][i])

        X_rel = np.array(X_rel)
        Y_rel = np.array(Y_rel)
        WFE_rel = np.array(WFE_rel)
        lateral_rel = np.array(lateral_rel)

        if len(X_rel) == 0:
            raise ValueError(f"No common nodes between subcases {target_subcase} and {reference_subcase}")

        # Fit Zernike coefficients to the relative WFE
        coeffs, R_max = compute_zernike_coefficients(X_rel, Y_rel, WFE_rel, self.n_modes)

        # Compute RMS metrics
        x_c = X_rel - np.mean(X_rel)
        y_c = Y_rel - np.mean(Y_rel)
        r = np.hypot(x_c / R_max, y_c / R_max)
        theta = np.arctan2(y_c, x_c)

        # Low-order Zernike basis (for filtering)
        Z_low = np.column_stack([
            zernike_noll(j, r, theta) for j in range(1, self.filter_orders + 1)
        ])

        # Filtered WFE (J5+) - this is the primary optimization metric
        wfe_filtered = WFE_rel - Z_low @ coeffs[:self.filter_orders]

        global_rms = float(np.sqrt(np.mean(WFE_rel**2)))
        filtered_rms = float(np.sqrt(np.mean(wfe_filtered**2)))

        # J1-J3 filtered (for manufacturing/optician workload)
        Z_j1to3 = np.column_stack([
            zernike_noll(j, r, theta) for j in range(1, 4)
        ])
        wfe_j1to3 = WFE_rel - Z_j1to3 @ coeffs[:3]
        rms_j1to3 = float(np.sqrt(np.mean(wfe_j1to3**2)))

        # Aberration magnitudes
        aberrations = compute_aberration_magnitudes(coeffs)

        result = {
            'target_subcase': target_subcase,
            'reference_subcase': reference_subcase,
            'method': 'figure_opd_relative',
            'relative_global_rms_nm': global_rms,
            'relative_filtered_rms_nm': filtered_rms,
            'relative_rms_filter_j1to3': rms_j1to3,
            'n_common_nodes': len(X_rel),
            'max_lateral_displacement_um': float(np.max(lateral_rel) * self.um_scale),
            'rms_lateral_displacement_um': float(np.sqrt(np.mean(lateral_rel**2)) * self.um_scale),
            **{f'relative_{k}': v for k, v in aberrations.items()}
        }

        if include_coefficients:
            result['coefficients'] = coeffs.tolist()
            result['coefficient_labels'] = [
                zernike_label(j) for j in range(1, self.n_modes + 1)
            ]

        return result

    def extract_comparison(self, subcase_label: str) -> Dict[str, Any]:
        """
        Compare figure-based method vs standard Z-only method.
        """
        from .extract_zernike_wfe import ZernikeExtractor

        # Figure-based (this extractor)
        figure_result = self.extract_subcase(subcase_label)

        # Standard Z-only
        standard_extractor = ZernikeExtractor(self.op2_path, self.bdf_path)
        standard_result = standard_extractor.extract_subcase(subcase_label)

        # Compute deltas
        delta_filtered = figure_result['filtered_rms_nm'] - standard_result['filtered_rms_nm']
        pct_diff = 100.0 * delta_filtered / standard_result['filtered_rms_nm'] if standard_result['filtered_rms_nm'] > 0 else 0.0

        return {
            'subcase': subcase_label,
            'figure_method': {
                'filtered_rms_nm': figure_result['filtered_rms_nm'],
                'global_rms_nm': figure_result['global_rms_nm'],
                'n_nodes': figure_result['n_nodes'],
            },
            'standard_method': {
                'filtered_rms_nm': standard_result['filtered_rms_nm'],
                'global_rms_nm': standard_result['global_rms_nm'],
                'n_nodes': standard_result['n_nodes'],
            },
            'delta': {
                'filtered_rms_nm': delta_filtered,
                'percent_difference': pct_diff,
            },
            'lateral_displacement': {
                'max_um': figure_result.get('max_lateral_displacement_um', 0),
                'rms_um': figure_result.get('rms_lateral_displacement_um', 0),
            },
            'figure_file': str(self.figure_path.name) if self.figure_path else 'BDF (filtered to OP2)',
        }


# ============================================================================
# Convenience Functions
# ============================================================================

def extract_zernike_opd(
    op2_file: Union[str, Path],
    figure_file: Optional[Union[str, Path]] = None,
    subcase: Union[int, str] = 1,
    **kwargs
) -> Dict[str, Any]:
    """
    Convenience function for OPD-based Zernike extraction (most rigorous method).

    THIS IS THE RECOMMENDED FUNCTION for telescope mirror optimization.

    Args:
        op2_file: Path to OP2 results
        figure_file: Path to explicit figure.dat (uses BDF geometry if None - RECOMMENDED)
        subcase: Subcase identifier
        **kwargs: Additional args for ZernikeOPDExtractor

    Returns:
        Dict with Zernike metrics, aberrations, and lateral displacement info
    """
    extractor = ZernikeOPDExtractor(op2_file, figure_path=figure_file, **kwargs)
    return extractor.extract_subcase(str(subcase))


def extract_zernike_opd_filtered_rms(
    op2_file: Union[str, Path],
    subcase: Union[int, str] = 1,
    **kwargs
) -> float:
    """
    Extract filtered RMS WFE using OPD method (most rigorous).

    Primary metric for mirror optimization.
    """
    result = extract_zernike_opd(op2_file, subcase=subcase, **kwargs)
    return result['filtered_rms_nm']


# Backwards compatibility aliases
ZernikeFigureExtractor = ZernikeOPDExtractor  # Deprecated: use ZernikeOPDExtractor
extract_zernike_figure = extract_zernike_opd  # Deprecated: use extract_zernike_opd
extract_zernike_figure_rms = extract_zernike_opd_filtered_rms  # Deprecated


# ============================================================================
# Module Exports
# ============================================================================

__all__ = [
    # Primary exports (new names)
    'ZernikeOPDExtractor',
    'extract_zernike_opd',
    'extract_zernike_opd_filtered_rms',

    # Utility functions
    'load_figure_geometry',
    'build_figure_interpolator',
    'compute_figure_opd',

    # Backwards compatibility (deprecated)
    'ZernikeFigureExtractor',
    'extract_zernike_figure',
    'extract_zernike_figure_rms',
]


if __name__ == '__main__':
    import sys

    if len(sys.argv) > 1:
        op2_file = Path(sys.argv[1])
        figure_file = Path(sys.argv[2]) if len(sys.argv) > 2 else None

        print(f"Analyzing: {op2_file}")
        print("=" * 60)

        try:
            extractor = ZernikeOPDExtractor(op2_file, figure_path=figure_file)
            print(f"Figure file: {extractor.figure_path}")
            print(f"Figure nodes: {len(extractor.figure_geometry)}")
            print()

            for label in extractor.displacements.keys():
                print(f"\n{'=' * 60}")
                print(f"Subcase {label}")
                print('=' * 60)

                comparison = extractor.extract_comparison(label)

                print("\nStandard (Z-only) method:")
                print(f"  Filtered RMS: {comparison['standard_method']['filtered_rms_nm']:.2f} nm")
                print(f"  Nodes: {comparison['standard_method']['n_nodes']}")

                print("\nFigure-based OPD method:")
                print(f"  Filtered RMS: {comparison['figure_method']['filtered_rms_nm']:.2f} nm")
                print(f"  Nodes: {comparison['figure_method']['n_nodes']}")

                print("\nDifference:")
                print(f"  Delta: {comparison['delta']['filtered_rms_nm']:+.2f} nm ({comparison['delta']['percent_difference']:+.1f}%)")

                print("\nLateral Displacement:")
                print(f"  Max: {comparison['lateral_displacement']['max_um']:.3f} um")
                print(f"  RMS: {comparison['lateral_displacement']['rms_um']:.3f} um")

        except Exception as e:
            print(f"Error: {e}")
            import traceback
            traceback.print_exc()
            sys.exit(1)
    else:
        print("Figure-Based Zernike Extractor")
        print("=" * 40)
        print("\nUsage: python extract_zernike_figure.py <op2_file> [figure.dat]")
        print("\nThis extractor uses actual figure geometry instead of parabola approximation.")
optimization_engine/extractors/extract_zernike_opd.py (new file, 746 lines)
@@ -0,0 +1,746 @@
"""
Analytic (Parabola-Based) Zernike Extractor for Telescope Mirror Optimization
==============================================================================

This extractor computes OPD using an ANALYTICAL parabola formula, accounting for
lateral (X, Y) displacements in addition to axial (Z) displacements.

NOTE: For the most robust OPD analysis, prefer ZernikeOPDExtractor from extract_zernike_figure.py,
which uses actual mesh geometry as reference (no shape assumptions).

This analytic extractor is useful when:
- You know the optical prescription (focal length)
- The surface is a parabola (or close to it)
- You want to compare against the theoretical parabola

WHY LATERAL CORRECTION MATTERS:
-------------------------------
The standard Zernike approach uses original (x, y) coordinates with only Z-displacement.
This is INCORRECT when lateral displacements are significant because:

1. A node at (x₀, y₀) moves to (x₀+Δx, y₀+Δy, z₀+Δz)
2. The parabola surface at the NEW position has a different expected Z
3. The true surface error is: (z₀+Δz) - z_parabola(x₀+Δx, y₀+Δy)

PARABOLA EQUATION:
------------------
For a paraboloid with vertex at origin and optical axis along Z:
    z = (x² + y²) / (4 * f)

where f = focal length = R / 2 (R = radius of curvature at vertex)

For a concave mirror (like telescope primary):
    z = -r² / (4 * f)    (negative because surface curves away from +Z)

We support both conventions via the `concave` parameter.

Author: Atomizer Framework
Date: 2024
"""

from pathlib import Path
from typing import Dict, Any, Optional, List, Tuple, Union
import numpy as np
from numpy.linalg import LinAlgError

# Import base Zernike functionality from existing module
from .extract_zernike import (
    compute_zernike_coefficients,
    compute_aberration_magnitudes,
    zernike_noll,
    zernike_label,
    read_node_geometry,
    find_geometry_file,
    extract_displacements_by_subcase,
    UNIT_TO_NM,
    DEFAULT_N_MODES,
    DEFAULT_FILTER_ORDERS,
)

try:
    from pyNastran.op2.op2 import OP2
    from pyNastran.bdf.bdf import BDF
except ImportError:
    raise ImportError("pyNastran is required. Install with: pip install pyNastran")


# ============================================================================
# OPD Calculation Functions
# ============================================================================

def compute_parabola_z(
    x: np.ndarray,
    y: np.ndarray,
    focal_length: float,
    concave: bool = True
) -> np.ndarray:
    """
    Compute the Z coordinate on a paraboloid surface.

    For a paraboloid: z = ±(x² + y²) / (4 * f)

    Args:
        x, y: Coordinates on the surface
        focal_length: Focal length of the parabola (f = R_vertex / 2)
        concave: If True, mirror curves toward -Z (typical telescope primary)

    Returns:
        Z coordinates on the parabola surface
    """
    r_squared = x**2 + y**2
    z = r_squared / (4.0 * focal_length)

    if concave:
        z = -z  # Concave mirror curves toward -Z

    return z


def compute_true_opd(
    x0: np.ndarray,
    y0: np.ndarray,
    z0: np.ndarray,
    dx: np.ndarray,
    dy: np.ndarray,
    dz: np.ndarray,
    focal_length: float,
    concave: bool = True
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray]:
    """
    Compute true Optical Path Difference accounting for lateral displacement.

    The key insight: when a node moves laterally, we must compare its new Z
    position against what the parabola Z SHOULD BE at that new (x, y) location.

    The CORRECT formulation:
    - Original surface: z0 = parabola(x0, y0) + manufacturing_errors
    - Deformed surface: z0 + dz = parabola(x0 + dx, y0 + dy) + total_errors

    The surface error we want is:
        error = (z0 + dz) - parabola(x0 + dx, y0 + dy)

    But since z0 ≈ parabola(x0, y0), we can compute the DIFFERENTIAL:
        error ≈ dz - [parabola(x0 + dx, y0 + dy) - parabola(x0, y0)]
        error ≈ dz - Δz_parabola

    where Δz_parabola is the change in ideal parabola Z due to lateral movement.

    Args:
        x0, y0, z0: Original node coordinates (undeformed)
        dx, dy, dz: Displacement components from FEA
        focal_length: Parabola focal length
        concave: Whether mirror is concave (typical)

    Returns:
        Tuple of:
        - x_def: Deformed X coordinates (for Zernike fitting)
        - y_def: Deformed Y coordinates (for Zernike fitting)
        - surface_error: True surface error at deformed positions
        - lateral_magnitude: Magnitude of lateral displacement (diagnostic)
    """
    # Deformed coordinates
    x_def = x0 + dx
    y_def = y0 + dy

    # Compute the CHANGE in parabola Z due to lateral displacement.
    # For parabola: z = ±r²/(4f), so:
    #   Δz_parabola = z(x_def, y_def) - z(x0, y0)
    #               = [(x0+dx)² + (y0+dy)² - x0² - y0²] / (4f)
    #               = [2*x0*dx + dx² + 2*y0*dy + dy²] / (4f)

    # Original r² and deformed r²
    r0_sq = x0**2 + y0**2
    r_def_sq = x_def**2 + y_def**2

    # Change in r² due to lateral displacement
    delta_r_sq = r_def_sq - r0_sq

    # Change in parabola Z (the Z shift that would keep the node on the ideal parabola)
    delta_z_parabola = delta_r_sq / (4.0 * focal_length)
    if concave:
        delta_z_parabola = -delta_z_parabola

    # TRUE surface error = actual dz - expected dz (to stay on the parabola).
    # If a node moves laterally outward (larger r), the ideal parabola Z changes;
    # the error is the difference between actual and expected Z movement.
    surface_error = dz - delta_z_parabola

    # Diagnostic: lateral displacement magnitude
    lateral_magnitude = np.sqrt(dx**2 + dy**2)

    return x_def, y_def, surface_error, lateral_magnitude


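# Illustrative check of the differential above (hypothetical numbers, not
# called anywhere in this module): a node at x0 = 100 mm on a concave
# f = 1000 mm parabola slides dx = 0.01 mm laterally while its dz exactly
# tracks the ideal surface. A Z-only view would report |dz| ≈ 0.0005 mm of
# "error"; the OPD differential correctly reports zero.
def _demo_true_opd_zero_error():
    import numpy as np

    x0, y0 = np.array([100.0]), np.array([0.0])
    dx, dy, f = np.array([0.01]), np.array([0.0]), 1000.0
    # Parabola Z change implied by the lateral move (concave sign convention)
    delta_z_parabola = -(((x0 + dx)**2 + (y0 + dy)**2) - (x0**2 + y0**2)) / (4.0 * f)
    dz = delta_z_parabola  # node stays exactly on the ideal parabola
    surface_error = dz - delta_z_parabola  # same differential as compute_true_opd
    return float(abs(dz[0])), float(abs(surface_error[0]))

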
def estimate_focal_length_from_geometry(
    x: np.ndarray,
    y: np.ndarray,
    z: np.ndarray,
    concave: bool = True
) -> float:
    """
    Estimate focal length from node geometry by fitting a paraboloid.

    Uses least-squares fit: z = a * (x² + y²) + b
    where focal_length = 1 / (4 * |a|)

    Args:
        x, y, z: Node coordinates
        concave: Whether mirror is concave (the |a| below makes the fit
            insensitive to the sign convention)

    Returns:
        Estimated focal length
    """
    r_squared = x**2 + y**2

    # Fit z = a * r² + b
    A = np.column_stack([r_squared, np.ones_like(r_squared)])

    try:
        coeffs, _, _, _ = np.linalg.lstsq(A, z, rcond=None)
        a = coeffs[0]
    except np.linalg.LinAlgError:
        # Fallback: use simple per-node ratio
        mask = r_squared > 0
        a = np.median(z[mask] / r_squared[mask])

    # For a concave mirror, a is negative; focal_length = 1 / (4 * |a|)
    if abs(a) < 1e-12:
        raise ValueError("Cannot estimate focal length - surface appears flat")

    focal_length = 1.0 / (4.0 * abs(a))

    return focal_length


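# Hypothetical round-trip check (not called anywhere in this module): sampling
# a perfect concave parabola with f = 2500 and fitting z = a*r² + b, as
# estimate_focal_length_from_geometry does, recovers f = 1 / (4|a|).
def _demo_focal_length_fit() -> float:
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-500.0, 500.0, 200)
    y = rng.uniform(-500.0, 500.0, 200)
    f_true = 2500.0
    z = -(x**2 + y**2) / (4.0 * f_true)  # ideal concave parabola, no noise
    A = np.column_stack([x**2 + y**2, np.ones_like(x)])
    a = np.linalg.lstsq(A, z, rcond=None)[0][0]
    return float(1.0 / (4.0 * abs(a)))

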
# ============================================================================
# Main Analytic (Parabola-Based) Extractor Class
# ============================================================================


class ZernikeAnalyticExtractor:
    """
    Analytic (parabola-based) Zernike extractor for telescope mirror optimization.

    This extractor accounts for lateral (X, Y) displacements when computing
    wavefront error, using an analytical parabola formula as reference.

    NOTE: For the most robust analysis, prefer ZernikeOPDExtractor from
    extract_zernike_figure.py, which uses actual mesh geometry (no shape assumptions).

    Use this extractor when:
    - You know the focal length / optical prescription
    - The surface is parabolic (or near-parabolic)
    - You want to compare against the theoretical parabola shape

    Example:
        extractor = ZernikeAnalyticExtractor(
            op2_file, bdf_file,
            focal_length=5000.0  # mm
        )
        result = extractor.extract_subcase('20')

        # Metrics available:
        print(f"Max lateral displacement: {result['max_lateral_disp_um']:.2f} µm")
        print(f"RMS lateral displacement: {result['rms_lateral_disp_um']:.2f} µm")
    """

    def __init__(
        self,
        op2_path: Union[str, Path],
        bdf_path: Optional[Union[str, Path]] = None,
        focal_length: Optional[float] = None,
        concave: bool = True,
        displacement_unit: str = 'mm',
        n_modes: int = DEFAULT_N_MODES,
        filter_orders: int = DEFAULT_FILTER_ORDERS,
        auto_estimate_focal: bool = True
    ):
        """
        Initialize the analytic (parabola-based) Zernike extractor.

        Args:
            op2_path: Path to OP2 results file
            bdf_path: Path to BDF/DAT geometry file (auto-detected if None)
            focal_length: Parabola focal length in the same units as the geometry.
                If None and auto_estimate_focal=True, it is estimated from the mesh.
            concave: Whether mirror is concave (curves toward -Z)
            displacement_unit: Unit of displacement in OP2 ('mm', 'm', 'um', 'nm')
            n_modes: Number of Zernike modes to fit
            filter_orders: Number of low-order modes to filter
            auto_estimate_focal: If True, estimate focal length from geometry
        """
        self.op2_path = Path(op2_path)
        self.bdf_path = Path(bdf_path) if bdf_path else find_geometry_file(self.op2_path)
        self.focal_length = focal_length
        self.concave = concave
        self.displacement_unit = displacement_unit
        self.n_modes = n_modes
        self.filter_orders = filter_orders
        self.auto_estimate_focal = auto_estimate_focal

        # Unit conversion factor
        self.nm_scale = UNIT_TO_NM.get(displacement_unit.lower(), 1e6)
        self.um_scale = self.nm_scale / 1000.0  # For lateral displacement reporting

        # WFE = 2 * surface error (optical convention for reflection)
        self.wfe_factor = 2.0 * self.nm_scale

        # Lazy-loaded data
        self._node_geo = None
        self._displacements = None
        self._estimated_focal = None

    @property
    def node_geometry(self) -> Dict[int, np.ndarray]:
        """Lazy-load node geometry from BDF."""
        if self._node_geo is None:
            self._node_geo = read_node_geometry(self.bdf_path)
        return self._node_geo

    @property
    def displacements(self) -> Dict[str, Dict[str, np.ndarray]]:
        """Lazy-load displacements from OP2."""
        if self._displacements is None:
            self._displacements = extract_displacements_by_subcase(self.op2_path)
        return self._displacements

    def get_focal_length(self, x: np.ndarray, y: np.ndarray, z: np.ndarray) -> float:
        """Get the focal length, estimating it from geometry if not provided."""
        if self.focal_length is not None:
            return self.focal_length

        if self._estimated_focal is not None:
            return self._estimated_focal

        if self.auto_estimate_focal:
            self._estimated_focal = estimate_focal_length_from_geometry(
                x, y, z, self.concave
            )
            return self._estimated_focal

        raise ValueError(
            "focal_length must be provided or auto_estimate_focal must be True"
        )

    def _build_opd_data(
        self,
        subcase_label: str
    ) -> Dict[str, np.ndarray]:
        """
        Build OPD data arrays for a subcase using the rigorous method.

        Returns:
            Dict with:
            - x_original, y_original, z_original: Original coordinates
            - dx, dy, dz: Displacement components
            - x_deformed, y_deformed: Deformed coordinates (for Zernike fitting)
            - surface_error: True surface error (model units)
            - wfe_nm: Wavefront error in nanometers
            - lateral_disp: Lateral displacement magnitude
        """
        if subcase_label not in self.displacements:
            available = list(self.displacements.keys())
            raise ValueError(f"Subcase '{subcase_label}' not found. Available: {available}")

        data = self.displacements[subcase_label]
        node_ids = data['node_ids']
        disp = data['disp']

        # Build arrays, skipping nodes without geometry
        x0, y0, z0 = [], [], []
        dx_arr, dy_arr, dz_arr = [], [], []

        for nid, vec in zip(node_ids, disp):
            geo = self.node_geometry.get(int(nid))
            if geo is None:
                continue

            x0.append(geo[0])
            y0.append(geo[1])
            z0.append(geo[2])
            dx_arr.append(vec[0])
            dy_arr.append(vec[1])
            dz_arr.append(vec[2])

        x0 = np.array(x0)
        y0 = np.array(y0)
        z0 = np.array(z0)
        dx_arr = np.array(dx_arr)
        dy_arr = np.array(dy_arr)
        dz_arr = np.array(dz_arr)

        # Get focal length
        focal = self.get_focal_length(x0, y0, z0)

        # Compute true OPD
        x_def, y_def, surface_error, lateral_disp = compute_true_opd(
            x0, y0, z0, dx_arr, dy_arr, dz_arr, focal, self.concave
        )

        # Convert to WFE (nm)
        wfe_nm = surface_error * self.wfe_factor

        return {
            'x_original': x0,
            'y_original': y0,
            'z_original': z0,
            'dx': dx_arr,
            'dy': dy_arr,
            'dz': dz_arr,
            'x_deformed': x_def,
            'y_deformed': y_def,
            'surface_error': surface_error,
            'wfe_nm': wfe_nm,
            'lateral_disp': lateral_disp,
            'focal_length': focal
        }

    def extract_subcase(
        self,
        subcase_label: str,
        include_coefficients: bool = False,
        include_diagnostics: bool = True
    ) -> Dict[str, Any]:
        """
        Extract Zernike metrics for a single subcase using the rigorous OPD method.

        Args:
            subcase_label: Subcase identifier (e.g., '20', '90')
            include_coefficients: Whether to include all Zernike coefficients
            include_diagnostics: Whether to include lateral displacement diagnostics

        Returns:
            Dict with RMS metrics, aberrations, and lateral displacement info
        """
        opd_data = self._build_opd_data(subcase_label)

        # Use DEFORMED coordinates for Zernike fitting
        X = opd_data['x_deformed']
        Y = opd_data['y_deformed']
        WFE = opd_data['wfe_nm']

        # Fit Zernike coefficients
        coeffs, R_max = compute_zernike_coefficients(X, Y, WFE, self.n_modes)

        # Compute RMS metrics, centered on the deformed coordinates
        x_c = X - np.mean(X)
        y_c = Y - np.mean(Y)
        r = np.hypot(x_c / R_max, y_c / R_max)
        theta = np.arctan2(y_c, x_c)

        # Build low-order Zernike basis
        Z_low = np.column_stack([
            zernike_noll(j, r, theta) for j in range(1, self.filter_orders + 1)
        ])

        # Filtered WFE
        wfe_filtered = WFE - Z_low @ coeffs[:self.filter_orders]

        global_rms = float(np.sqrt(np.mean(WFE**2)))
        filtered_rms = float(np.sqrt(np.mean(wfe_filtered**2)))

        # J1-J3 filtered (optician workload)
        Z_j1to3 = np.column_stack([
            zernike_noll(j, r, theta) for j in range(1, 4)
        ])
        wfe_j1to3 = WFE - Z_j1to3 @ coeffs[:3]
        rms_j1to3 = float(np.sqrt(np.mean(wfe_j1to3**2)))

        # Aberration magnitudes
        aberrations = compute_aberration_magnitudes(coeffs)

        result = {
            'subcase': subcase_label,
            'method': 'opd_rigorous',
            'global_rms_nm': global_rms,
            'filtered_rms_nm': filtered_rms,
            'rms_filter_j1to3_nm': rms_j1to3,
            'n_nodes': len(X),
            'focal_length_used': opd_data['focal_length'],
            **aberrations
        }

        if include_diagnostics:
            lateral = opd_data['lateral_disp']
            result.update({
                'max_lateral_disp_um': float(np.max(lateral) * self.um_scale),
                'rms_lateral_disp_um': float(np.sqrt(np.mean(lateral**2)) * self.um_scale),
                'mean_lateral_disp_um': float(np.mean(lateral) * self.um_scale),
                'p99_lateral_disp_um': float(np.percentile(lateral, 99) * self.um_scale),
            })

        if include_coefficients:
            result['coefficients'] = coeffs.tolist()
            result['coefficient_labels'] = [
                zernike_label(j) for j in range(1, self.n_modes + 1)
            ]

        return result

    def extract_comparison(
        self,
        subcase_label: str
    ) -> Dict[str, Any]:
        """
        Extract metrics using BOTH methods for comparison.

        Useful for understanding the impact of the rigorous method
        vs the standard Z-only approach.

        Returns:
            Dict with metrics from both methods and the delta
        """
        from .extract_zernike import ZernikeExtractor

        # Rigorous OPD method
        opd_result = self.extract_subcase(subcase_label, include_diagnostics=True)

        # Standard Z-only method
        standard_extractor = ZernikeExtractor(
            self.op2_path, self.bdf_path,
            self.displacement_unit, self.n_modes, self.filter_orders
        )
        standard_result = standard_extractor.extract_subcase(subcase_label)

        # Compute deltas
        delta_global = opd_result['global_rms_nm'] - standard_result['global_rms_nm']
        delta_filtered = opd_result['filtered_rms_nm'] - standard_result['filtered_rms_nm']

        std_filtered = standard_result['filtered_rms_nm']
        percent_filtered = 100.0 * delta_filtered / std_filtered if std_filtered > 0 else 0.0

        return {
            'subcase': subcase_label,
            'opd_method': {
                'global_rms_nm': opd_result['global_rms_nm'],
                'filtered_rms_nm': opd_result['filtered_rms_nm'],
            },
            'standard_method': {
                'global_rms_nm': standard_result['global_rms_nm'],
                'filtered_rms_nm': standard_result['filtered_rms_nm'],
            },
            'delta': {
                'global_rms_nm': delta_global,
                'filtered_rms_nm': delta_filtered,
                'percent_difference_filtered': percent_filtered,
            },
            'lateral_displacement': {
                'max_um': opd_result['max_lateral_disp_um'],
                'rms_um': opd_result['rms_lateral_disp_um'],
                'p99_um': opd_result['p99_lateral_disp_um'],
            },
            'recommendation': _get_method_recommendation(opd_result)
        }

    def extract_all_subcases(
        self,
        reference_subcase: Optional[str] = None
    ) -> Dict[str, Dict[str, Any]]:
        """
        Extract metrics for all available subcases.

        Args:
            reference_subcase: Accepted for API parity with other extractors;
                relative metrics are not computed here (pass None)

        Returns:
            Dict mapping subcase label to metrics dict
        """
        results = {}

        for label in self.displacements.keys():
            results[label] = self.extract_subcase(label)

        return results

    def get_diagnostic_data(
        self,
        subcase_label: str
    ) -> Dict[str, np.ndarray]:
        """
        Get raw data for visualization/debugging.

        Returns arrays suitable for plotting lateral displacement maps,
        comparing methods, etc.
        """
        return self._build_opd_data(subcase_label)


def _get_method_recommendation(opd_result: Dict[str, Any]) -> str:
    """Generate a recommendation based on lateral displacement magnitude."""
    max_lateral = opd_result.get('max_lateral_disp_um', 0)

    if max_lateral > 10.0:  # > 10 µm max lateral
        return "CRITICAL: Large lateral displacements detected. OPD method strongly recommended."
    elif max_lateral > 1.0:  # > 1 µm
        return "RECOMMENDED: Significant lateral displacements. OPD method provides more accurate results."
    elif max_lateral > 0.1:  # > 0.1 µm
        return "OPTIONAL: Small lateral displacements. OPD method provides minor improvement."
    else:
        return "EQUIVALENT: Negligible lateral displacements. Both methods give similar results."


# ============================================================================
# Convenience Functions
# ============================================================================


def extract_zernike_analytic(
    op2_file: Union[str, Path],
    bdf_file: Optional[Union[str, Path]] = None,
    subcase: Union[int, str] = 1,
    focal_length: Optional[float] = None,
    displacement_unit: str = 'mm',
    **kwargs
) -> Dict[str, Any]:
    """
    Convenience function to extract Zernike metrics using the analytic parabola method.

    NOTE: For the most robust analysis, prefer extract_zernike_opd() from
    extract_zernike_figure.py, which uses actual mesh geometry.

    Args:
        op2_file: Path to OP2 results file
        bdf_file: Path to BDF geometry (auto-detected if None)
        subcase: Subcase identifier
        focal_length: Parabola focal length (auto-estimated if None)
        displacement_unit: Unit of displacement in OP2
        **kwargs: Additional arguments for ZernikeAnalyticExtractor

    Returns:
        Dict with RMS metrics, aberrations, and lateral displacement info
    """
    extractor = ZernikeAnalyticExtractor(
        op2_file, bdf_file,
        focal_length=focal_length,
        displacement_unit=displacement_unit,
        **kwargs
    )
    return extractor.extract_subcase(str(subcase))


def extract_zernike_analytic_filtered_rms(
    op2_file: Union[str, Path],
    subcase: Union[int, str] = 1,
    **kwargs
) -> float:
    """
    Extract filtered RMS WFE using the analytic parabola method.

    NOTE: For the most robust analysis, prefer extract_zernike_opd_filtered_rms()
    from extract_zernike_figure.py.
    """
    result = extract_zernike_analytic(op2_file, subcase=subcase, **kwargs)
    return result['filtered_rms_nm']


def compare_zernike_methods(
    op2_file: Union[str, Path],
    bdf_file: Optional[Union[str, Path]] = None,
    subcase: Union[int, str] = 1,
    focal_length: Optional[float] = None,
    **kwargs
) -> Dict[str, Any]:
    """
    Compare the standard (Z-only) vs analytic (parabola) Zernike methods.

    Use this to understand how much lateral displacement affects your results.
    """
    extractor = ZernikeAnalyticExtractor(
        op2_file, bdf_file,
        focal_length=focal_length,
        **kwargs
    )
    return extractor.extract_comparison(str(subcase))


# Backwards-compatibility aliases
ZernikeOPDExtractor = ZernikeAnalyticExtractor  # Deprecated: use ZernikeAnalyticExtractor
extract_zernike_opd = extract_zernike_analytic  # Deprecated: use extract_zernike_analytic
extract_zernike_opd_filtered_rms = extract_zernike_analytic_filtered_rms  # Deprecated


# ============================================================================
# Module Exports
# ============================================================================

__all__ = [
    # Main extractor class (new name)
    'ZernikeAnalyticExtractor',

    # Convenience functions (new names)
    'extract_zernike_analytic',
    'extract_zernike_analytic_filtered_rms',
    'compare_zernike_methods',

    # Utility functions
    'compute_true_opd',
    'compute_parabola_z',
    'estimate_focal_length_from_geometry',

    # Backwards compatibility (deprecated)
    'ZernikeOPDExtractor',
    'extract_zernike_opd',
    'extract_zernike_opd_filtered_rms',
]


if __name__ == '__main__':
    import sys

    if len(sys.argv) > 1:
        op2_file = Path(sys.argv[1])
        focal = float(sys.argv[2]) if len(sys.argv) > 2 else None

        print(f"Analyzing: {op2_file}")
        print(f"Focal length: {focal if focal else 'auto-estimate'}")
        print("=" * 60)

        try:
            extractor = ZernikeAnalyticExtractor(op2_file, focal_length=focal)

            print(f"\nAvailable subcases: {list(extractor.displacements.keys())}")

            for label in extractor.displacements.keys():
                print(f"\n{'=' * 60}")
                print(f"Subcase {label}")
                print('=' * 60)

                # Compare methods
                comparison = extractor.extract_comparison(label)

                print("\nStandard (Z-only) method:")
                print(f"  Global RMS:   {comparison['standard_method']['global_rms_nm']:.2f} nm")
                print(f"  Filtered RMS: {comparison['standard_method']['filtered_rms_nm']:.2f} nm")

                print("\nRigorous OPD method:")
                print(f"  Global RMS:   {comparison['opd_method']['global_rms_nm']:.2f} nm")
                print(f"  Filtered RMS: {comparison['opd_method']['filtered_rms_nm']:.2f} nm")

                print("\nDifference (OPD - Standard):")
                print(f"  Filtered RMS: {comparison['delta']['filtered_rms_nm']:+.2f} nm "
                      f"({comparison['delta']['percent_difference_filtered']:+.1f}%)")

                print("\nLateral Displacement:")
                print(f"  Max: {comparison['lateral_displacement']['max_um']:.3f} µm")
                print(f"  RMS: {comparison['lateral_displacement']['rms_um']:.3f} µm")
                print(f"  P99: {comparison['lateral_displacement']['p99_um']:.3f} µm")

                print(f"\n→ {comparison['recommendation']}")

        except Exception as e:
            print(f"Error: {e}")
            import traceback
            traceback.print_exc()
            sys.exit(1)
    else:
        print("OPD-Based Zernike Extractor")
        print("=" * 40)
        print("\nUsage: python extract_zernike_opd.py <op2_file> [focal_length]")
        print("\nThis module provides rigorous OPD-based Zernike extraction")
        print("that accounts for lateral (X, Y) displacements.")
        print("\nKey features:")
        print("  - True optical path difference calculation")
        print("  - Accounts for node lateral shift due to pinching/clamping")
        print("  - Lateral displacement diagnostics")
        print("  - Method comparison (standard vs OPD)")
@@ -8,36 +8,57 @@ reality of specific designs through interactive 3D visualizations.
Architecture:
- StudyInsight: Abstract base class for all insight types
- InsightRegistry: Central registry for available insight types
- InsightSpec: Config-driven insight specification (from optimization_config.json)
- InsightReport: Aggregates insights into HTML/PDF report with appendix
- Each insight generates standalone HTML or Plotly data for dashboard

Available Insight Types:
-----------------------
| Type ID           | Name                   | Description                              |
|-------------------|------------------------|------------------------------------------|
| zernike_dashboard | Zernike Dashboard      | RECOMMENDED: Unified WFE dashboard       |
| zernike_wfe       | Zernike WFE Analysis   | 3D wavefront error with Zernike decomp   |
| msf_zernike       | MSF Zernike Analysis   | Mid-spatial frequency band decomposition |
| stress_field      | Stress Distribution    | Von Mises stress contours                |
| modal             | Modal Analysis         | Natural frequencies and mode shapes      |
| thermal           | Thermal Analysis       | Temperature distribution                 |
| design_space      | Design Space Explorer  | Parameter-objective relationships        |

Config Schema (in optimization_config.json):
-------------------------------------------
```json
"insights": [
    {
        "type": "zernike_wfe",
        "name": "WFE at 40 vs 20 deg",
        "enabled": true,
        "linked_objective": "rel_filtered_rms_40_vs_20",
        "config": { "target_subcase": "3", "reference_subcase": "2" },
        "include_in_report": true
    }
]
```

Quick Start:
-----------
```python
from optimization_engine.insights import (
    get_insight, list_available_insights,
    get_configured_insights, recommend_insights_for_study, InsightReport
)
from pathlib import Path

study_path = Path("studies/my_study")

# List available insights for a study
available = list_available_insights(study_path)
print(available)

# Get recommended insights based on objectives
recommendations = recommend_insights_for_study(study_path)
for rec in recommendations:
    print(f"{rec['type']}: {rec['reason']}")

# Generate a specific insight
insight = get_insight('zernike_wfe', study_path)
if insight and insight.can_generate():
    result = insight.generate()
    print(f"Generated: {result.html_path}")
    print(f"Summary: {result.summary}")

# Generate report from configured insights
report = InsightReport(study_path)
results = report.generate_all()  # Uses specs from optimization_config.json
report_path = report.generate_report_html()  # Creates STUDY_INSIGHTS_REPORT.html
```

CLI Usage:
@@ -49,6 +70,12 @@ python -m optimization_engine.insights generate studies/my_study
# Generate specific insight type
python -m optimization_engine.insights generate studies/my_study --type zernike_wfe

# Generate full report from config
python -m optimization_engine.insights report studies/my_study

# Get recommendations for a study
python -m optimization_engine.insights recommend studies/my_study

# List available insight types
python -m optimization_engine.insights list
```
@@ -59,15 +86,22 @@ from .base import (
    StudyInsight,
    InsightConfig,
    InsightResult,
    InsightSpec,
    InsightReport,
    InsightRegistry,
    register_insight,
    get_insight,
    list_insights,
    list_available_insights,
    get_configured_insights,
    recommend_insights_for_study,
)

# Import insight implementations (triggers @register_insight decorators)
from .zernike_wfe import ZernikeWFEInsight
from .zernike_opd_comparison import ZernikeOPDComparisonInsight
from .msf_zernike import MSFZernikeInsight
from .zernike_dashboard import ZernikeDashboardInsight  # NEW: Unified dashboard
from .stress_field import StressFieldInsight
from .modal_analysis import ModalInsight
from .thermal_field import ThermalInsight
@@ -79,6 +113,8 @@ __all__ = [
    'StudyInsight',
    'InsightConfig',
    'InsightResult',
    'InsightSpec',
    'InsightReport',
    'InsightRegistry',
    'register_insight',
@@ -86,9 +122,14 @@ __all__ = [
    'get_insight',
    'list_insights',
    'list_available_insights',
    'get_configured_insights',
    'recommend_insights_for_study',

    # Insight implementations
    'ZernikeWFEInsight',
    'ZernikeOPDComparisonInsight',
    'MSFZernikeInsight',
    'ZernikeDashboardInsight',  # NEW: Unified dashboard
    'StressFieldInsight',
    'ModalInsight',
    'ThermalInsight',
@@ -142,10 +183,14 @@ if __name__ == '__main__':
        print("Usage:")
        print("  python -m optimization_engine.insights list")
        print("  python -m optimization_engine.insights generate <study_path> [--type TYPE]")
        print("  python -m optimization_engine.insights report <study_path>")
        print("  python -m optimization_engine.insights recommend <study_path>")
        print()
        print("Commands:")
        print("  list      - List all registered insight types")
        print("  generate  - Generate insights for a study")
        print("  report    - Generate full report from config")
        print("  recommend - Get AI recommendations for insights")
        print()
        print("Options:")
        print("  --type TYPE   Generate only the specified insight type")
@@ -166,6 +211,84 @@ if __name__ == '__main__':
            print(f"  Applies to: {', '.join(info['applicable_to'])}")
        print()

    elif command == 'recommend':
        if len(sys.argv) < 3:
            print("Error: Missing study path")
            print_usage()
            sys.exit(1)

        study_path = Path(sys.argv[2])
        if not study_path.exists():
            print(f"Error: Study path does not exist: {study_path}")
            sys.exit(1)

        print(f"\nRecommended Insights for: {study_path.name}")
        print("-" * 60)

        recommendations = recommend_insights_for_study(study_path)
        if not recommendations:
            print("No recommendations available (no objectives found)")
            sys.exit(0)

        for rec in recommendations:
            linked = f" -> {rec['linked_objective']}" if rec.get('linked_objective') else ""
            print(f"\n  {rec['type']}{linked}")
            print(f"    Name:   {rec['name']}")
            print(f"    Reason: {rec['reason']}")

        print()
        print("Add these to optimization_config.json under 'insights' key")

    elif command == 'report':
        if len(sys.argv) < 3:
            print("Error: Missing study path")
            print_usage()
            sys.exit(1)

        study_path = Path(sys.argv[2])
        if not study_path.exists():
            print(f"Error: Study path does not exist: {study_path}")
            sys.exit(1)

        print(f"\nGenerating Insight Report for: {study_path.name}")
        print("-" * 60)

        # Check for configured insights
        specs = get_configured_insights(study_path)
        if not specs:
            print("No insights configured in optimization_config.json")
            print("Using recommendations based on objectives...")
            recommendations = recommend_insights_for_study(study_path)
            specs = [
                InsightSpec(
                    type=rec['type'],
                    name=rec['name'],
                    linked_objective=rec.get('linked_objective'),
                    config=rec.get('config', {})
                )
                for rec in recommendations
            ]

        if not specs:
            print("No insights to generate")
            sys.exit(0)

        print(f"Generating {len(specs)} insight(s)...")

        report = InsightReport(study_path)
        results = report.generate_all(specs)

        for result in results:
            status = "OK" if result.success else f"FAIL: {result.error}"
            print(f"  {result.insight_name}: {status}")

        if any(r.success for r in results):
            report_path = report.generate_report_html()
            print(f"\nReport generated: {report_path}")
            print("Open in browser and use 'Save as PDF' button to export")
        else:
            print("\nNo insights generated successfully")

    elif command == 'generate':
        if len(sys.argv) < 3:
            print("Error: Missing study path")
@@ -8,25 +8,86 @@ of specific designs.

Architecture:
- StudyInsight: Abstract base class for all insight types
- InsightRegistry: Central registry for available insight types
- InsightSpec: Config-defined insight specification from optimization_config.json
- InsightReport: Aggregates multiple insights into a study report with PDF export
- Each insight can generate standalone HTML or Plotly data for dashboard

Config Schema (in optimization_config.json):
    "insights": [
        {
            "type": "zernike_wfe",
            "name": "WFE at 40 vs 20 deg",
            "enabled": true,
            "linked_objective": "rel_filtered_rms_40_vs_20",
            "config": { "target_subcase": "3", "reference_subcase": "2" },
            "include_in_report": true
        },
        {
            "type": "design_space",
            "name": "Parameter Exploration",
            "enabled": true,
            "linked_objective": null,
            "config": {},
            "include_in_report": true
        }
    ]

Usage:
    from optimization_engine.insights import get_insight, list_insights

    # Get specific insight
-    insight = get_insight('zernike_wfe')
-    if insight.can_generate(study_path):
-        html_path = insight.generate_html(study_path, trial_id=47)
-        plotly_data = insight.get_plotly_data(study_path, trial_id=47)
+    insight = get_insight('zernike_wfe', study_path)
+    if insight.can_generate():
+        html_path = insight.generate_html(trial_id=47)
+        plotly_data = insight.get_plotly_data(trial_id=47)

    # Get insights from config
    specs = get_configured_insights(study_path)
    for spec in specs:
        insight = get_insight(spec.type, study_path)
        result = insight.generate(spec.to_config())
"""

from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from pathlib import Path
-from typing import Any, Dict, List, Optional, Type
+from typing import Any, Dict, List, Optional, Type, Union
from datetime import datetime
import json


@dataclass
class InsightSpec:
    """
    Insight specification from optimization_config.json.

    Defines what insights the user wants for their study, optionally
    linking them to specific objectives.
    """
    type: str  # Insight type ID (e.g., 'zernike_wfe')
    name: str  # User-defined display name
    enabled: bool = True  # Whether to generate this insight
    linked_objective: Optional[str] = None  # Objective name this visualizes (or None)
    config: Dict[str, Any] = field(default_factory=dict)  # Type-specific config
    include_in_report: bool = True  # Include in study report

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> 'InsightSpec':
        """Create InsightSpec from config dictionary."""
        return cls(
            type=data['type'],
            name=data.get('name', data['type']),
            enabled=data.get('enabled', True),
            linked_objective=data.get('linked_objective'),
            config=data.get('config', {}),
            include_in_report=data.get('include_in_report', True)
        )

    def to_config(self) -> 'InsightConfig':
        """Convert spec to InsightConfig for generation."""
        return InsightConfig(extra=self.config)


@dataclass
class InsightConfig:
    """Configuration for an insight instance."""
@@ -46,10 +107,14 @@ class InsightConfig:
class InsightResult:
    """Result from generating an insight."""
    success: bool
    insight_type: str = ""  # Which insight generated this
    insight_name: str = ""  # Display name
    html_path: Optional[Path] = None
    plotly_figure: Optional[Dict[str, Any]] = None  # Plotly figure as dict
    summary: Optional[Dict[str, Any]] = None  # Key metrics
    error: Optional[str] = None
    linked_objective: Optional[str] = None  # Objective this relates to (if any)
    generated_at: Optional[str] = None  # ISO timestamp


class StudyInsight(ABC):
@@ -66,15 +131,29 @@ class StudyInsight:
    - insight_type: Unique identifier (e.g., 'zernike_wfe')
    - name: Human-readable name
    - description: What this insight shows
    - category: Physics domain (see INSIGHT_CATEGORIES)
    - applicable_to: List of study types this applies to
    - can_generate(): Check if study has required data
    - _generate(): Core generation logic
    """

    # Physics categories for grouping
    INSIGHT_CATEGORIES = {
        'optical': 'Optical',
        'structural_static': 'Structural (Static)',
        'structural_dynamic': 'Structural (Dynamic)',
        'structural_modal': 'Structural (Modal)',
        'thermal': 'Thermal',
        'kinematic': 'Kinematic',
        'design_exploration': 'Design Exploration',
        'other': 'Other'
    }

    # Class-level metadata (override in subclasses)
    insight_type: str = "base"
    name: str = "Base Insight"
    description: str = "Abstract base insight"
    category: str = "other"  # Physics domain (key from INSIGHT_CATEGORIES)
    applicable_to: List[str] = []  # e.g., ['mirror', 'structural', 'all']

    # Required files/data patterns
@@ -273,21 +352,57 @@ class InsightRegistry:
                'type': cls.insight_type,
                'name': cls.name,
                'description': cls.description,
                'category': cls.category,
                'category_label': StudyInsight.INSIGHT_CATEGORIES.get(cls.category, 'Other'),
                'applicable_to': cls.applicable_to
            }
            for cls in self._insights.values()
        ]

-    def list_available(self, study_path: Path) -> List[Dict[str, Any]]:
+    def list_available(self, study_path: Path, fast_mode: bool = True) -> List[Dict[str, Any]]:
        """
        List insights that can be generated for a specific study.

        Args:
            study_path: Path to study directory
            fast_mode: If True, use quick file existence checks instead of full can_generate()

        Returns:
            List of available insight metadata
        """
        study_path = Path(study_path)

        # Fast mode: check file patterns once, then filter insights by what they need
        if fast_mode:
            # Quick scan for data files (non-recursive, just check known locations)
            has_op2 = self._quick_has_op2(study_path)
            has_db = (study_path / "2_results" / "study.db").exists()

            available = []
            for insight_type, cls in self._insights.items():
                # Quick filter based on required_files attribute
                required = getattr(cls, 'required_files', [])

                can_gen = True
                if '*.op2' in required and not has_op2:
                    can_gen = False
                if 'study.db' in required and not has_db:
                    can_gen = False

                if can_gen:
                    available.append({
                        'type': insight_type,
                        'name': cls.name,
                        'description': cls.description,
                        'category': getattr(cls, 'category', 'other'),
                        'category_label': StudyInsight.INSIGHT_CATEGORIES.get(
                            getattr(cls, 'category', 'other'), 'Other'
                        ),
                        'applicable_to': getattr(cls, 'applicable_to', [])
                    })
            return available

        # Slow mode: full can_generate() check (for accuracy)
        available = []
        for insight_type, cls in self._insights.items():
            try:
@@ -296,12 +411,43 @@ class InsightRegistry:
                available.append({
                    'type': insight_type,
                    'name': cls.name,
-                    'description': cls.description
+                    'description': cls.description,
+                    'category': getattr(cls, 'category', 'other'),
+                    'category_label': StudyInsight.INSIGHT_CATEGORIES.get(
+                        getattr(cls, 'category', 'other'), 'Other'
+                    ),
+                    'applicable_to': getattr(cls, 'applicable_to', [])
                })
            except Exception:
                pass  # Skip insights that fail to initialize
        return available

    def _quick_has_op2(self, study_path: Path) -> bool:
        """Quick check if study has OP2 files without recursive search."""
        # Check common locations only (non-recursive)
        check_dirs = [
            study_path / "3_results" / "best_design_archive",
            study_path / "2_iterations",
            study_path / "1_setup" / "model",
        ]

        for check_dir in check_dirs:
            if not check_dir.exists():
                continue
            # Non-recursive check of immediate children
            try:
                for item in check_dir.iterdir():
                    if item.suffix.lower() == '.op2':
                        return True
                    # Check one level down for iterations
                    if item.is_dir():
                        for subitem in item.iterdir():
                            if subitem.suffix.lower() == '.op2':
                                return True
            except (PermissionError, OSError):
                continue
        return False


# Global registry instance
_registry = InsightRegistry()
@@ -334,3 +480,458 @@ def list_insights() -> List[Dict[str, Any]]:
def list_available_insights(study_path: Path) -> List[Dict[str, Any]]:
    """List insights available for a specific study."""
    return _registry.list_available(study_path)


def get_configured_insights(study_path: Path) -> List[InsightSpec]:
    """
    Get insight specifications from study's optimization_config.json.

    Returns:
        List of InsightSpec objects defined in the config, or empty list
    """
    study_path = Path(study_path)
    config_path = study_path / "1_setup" / "optimization_config.json"

    if not config_path.exists():
        return []

    with open(config_path) as f:
        config = json.load(f)

    insights_config = config.get('insights', [])
    return [InsightSpec.from_dict(spec) for spec in insights_config]
def recommend_insights_for_study(study_path: Path) -> List[Dict[str, Any]]:
    """
    Recommend insights based on study type and objectives.

    Analyzes the optimization_config.json to suggest appropriate insights.
    Returns recommendations with reasoning.

    Returns:
        List of recommendation dicts with 'type', 'name', 'reason', 'linked_objective'
    """
    study_path = Path(study_path)
    config_path = study_path / "1_setup" / "optimization_config.json"

    if not config_path.exists():
        return []

    with open(config_path) as f:
        config = json.load(f)

    recommendations = []
    objectives = config.get('objectives', [])

    # Check for Zernike-related objectives
    for obj in objectives:
        obj_name = obj.get('name', '')
        extractor = obj.get('extractor_config', {})

        # WFE objectives -> recommend zernike_wfe
        if any(kw in obj_name.lower() for kw in ['rms', 'wfe', 'zernike', 'filtered']):
            target_sc = extractor.get('target_subcase', '')
            ref_sc = extractor.get('reference_subcase', '')
            recommendations.append({
                'type': 'zernike_wfe',
                'name': f"WFE: {obj.get('description', obj_name)[:50]}",
                'reason': f"Visualizes Zernike WFE for objective '{obj_name}'",
                'linked_objective': obj_name,
                'config': {
                    'target_subcase': target_sc,
                    'reference_subcase': ref_sc
                }
            })

        # Mass objectives -> no direct insight, but note for design space
        elif 'mass' in obj_name.lower():
            pass  # Covered by design_space

        # Stress-related
        elif any(kw in obj_name.lower() for kw in ['stress', 'von_mises', 'strain']):
            recommendations.append({
                'type': 'stress_field',
                'name': f"Stress: {obj.get('description', obj_name)[:50]}",
                'reason': f"Visualizes stress distribution for objective '{obj_name}'",
                'linked_objective': obj_name,
                'config': {}
            })

        # Frequency/modal
        elif any(kw in obj_name.lower() for kw in ['freq', 'modal', 'eigen', 'vibration']):
            recommendations.append({
                'type': 'modal',
                'name': f"Modal: {obj.get('description', obj_name)[:50]}",
                'reason': f"Shows mode shapes for objective '{obj_name}'",
                'linked_objective': obj_name,
                'config': {}
            })

        # Temperature/thermal
        elif any(kw in obj_name.lower() for kw in ['temp', 'thermal', 'heat']):
            recommendations.append({
                'type': 'thermal',
                'name': f"Thermal: {obj.get('description', obj_name)[:50]}",
                'reason': f"Shows temperature distribution for objective '{obj_name}'",
                'linked_objective': obj_name,
                'config': {}
            })

    # Always recommend design_space for any optimization
    if objectives:
        recommendations.append({
            'type': 'design_space',
            'name': 'Design Space Exploration',
            'reason': 'Shows parameter-objective relationships across all trials',
            'linked_objective': None,
            'config': {}
        })

    return recommendations
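The keyword routing above can be sketched in isolation. This is a minimal stand-alone version, not the module's actual code: `KEYWORDS` and `route` are hypothetical names mirroring the `any(kw in obj_name.lower() ...)` checks, and the objective names in the assertions are illustrative.

```python
# Hypothetical stand-alone sketch of the keyword routing in
# recommend_insights_for_study (first matching insight type wins).
KEYWORDS = {
    'zernike_wfe': ['rms', 'wfe', 'zernike', 'filtered'],
    'stress_field': ['stress', 'von_mises', 'strain'],
    'modal': ['freq', 'modal', 'eigen', 'vibration'],
    'thermal': ['temp', 'thermal', 'heat'],
}

def route(obj_name: str):
    low = obj_name.lower()
    for insight_type, kws in KEYWORDS.items():
        if any(kw in low for kw in kws):
            return insight_type
    return None  # e.g. mass objectives: covered by design_space instead

assert route('rel_filtered_rms_40_vs_20') == 'zernike_wfe'
assert route('max_von_mises_stress') == 'stress_field'
assert route('total_mass') is None
```

Note the ordering matters: dict insertion order decides which insight wins when an objective name matches several keyword lists.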
class InsightReport:
    """
    Aggregates multiple insights into a comprehensive study report.

    Generates:
    - Full HTML report with all insights embedded
    - Individual insight HTMLs in 3_insights/
    - PDF export capability (via browser print)
    - Summary JSON for Results page

    Usage:
        report = InsightReport(study_path)
        report.add_insight(result1)
        report.add_insight(result2)
        report.generate_html()  # Creates 3_insights/STUDY_INSIGHTS_REPORT.html
    """

    def __init__(self, study_path: Path):
        self.study_path = Path(study_path)
        self.insights_path = self.study_path / "3_insights"
        self.results_path = self.study_path / "3_results"
        self.results: List[InsightResult] = []

        # Load study config for metadata
        config_path = self.study_path / "1_setup" / "optimization_config.json"
        if config_path.exists():
            with open(config_path) as f:
                self.config = json.load(f)
        else:
            self.config = {}

    def add_insight(self, result: InsightResult) -> None:
        """Add an insight result to the report."""
        if result.success:
            self.results.append(result)

    def generate_all(self, specs: Optional[List[InsightSpec]] = None) -> List[InsightResult]:
        """
        Generate all configured or specified insights.

        Args:
            specs: List of InsightSpecs to generate (or None to use config)

        Returns:
            List of InsightResult objects
        """
        if specs is None:
            specs = get_configured_insights(self.study_path)

        results = []
        for spec in specs:
            if not spec.enabled:
                continue

            insight = get_insight(spec.type, self.study_path)
            if insight is None:
                results.append(InsightResult(
                    success=False,
                    insight_type=spec.type,
                    insight_name=spec.name,
                    error=f"Unknown insight type: {spec.type}"
                ))
                continue

            if not insight.can_generate():
                results.append(InsightResult(
                    success=False,
                    insight_type=spec.type,
                    insight_name=spec.name,
                    error=f"Cannot generate {spec.name}: required data not found"
                ))
                continue

            config = spec.to_config()
            result = insight.generate(config)
            result.insight_type = spec.type
            result.insight_name = spec.name
            result.linked_objective = spec.linked_objective
            result.generated_at = datetime.now().isoformat()

            results.append(result)
            self.add_insight(result)

        return results
    def generate_report_html(self, include_appendix: bool = True) -> Path:
        """
        Generate comprehensive HTML report with all insights.

        Args:
            include_appendix: Whether to include full-size insight appendix

        Returns:
            Path to generated HTML report
        """
        self.insights_path.mkdir(parents=True, exist_ok=True)

        study_name = self.config.get('study_name', self.study_path.name)
        timestamp = datetime.now().strftime("%Y-%m-%d %H:%M")

        # Build HTML
        html_parts = [self._report_header(study_name, timestamp)]

        # Executive summary
        html_parts.append(self._executive_summary())

        # Objective-linked insights
        linked = [r for r in self.results if r.linked_objective]
        if linked:
            html_parts.append('<h2>Objective Insights</h2>')
            for result in linked:
                html_parts.append(self._insight_section(result, compact=True))

        # Standalone insights
        standalone = [r for r in self.results if not r.linked_objective]
        if standalone:
            html_parts.append('<h2>General Insights</h2>')
            for result in standalone:
                html_parts.append(self._insight_section(result, compact=True))

        # Appendix with full-size figures
        if include_appendix and self.results:
            html_parts.append(self._appendix_section())

        html_parts.append(self._report_footer())

        # Write report
        report_path = self.insights_path / "STUDY_INSIGHTS_REPORT.html"
        report_path.write_text('\n'.join(html_parts), encoding='utf-8')

        # Also write summary JSON for Results page
        self._write_summary_json()

        return report_path
    def _report_header(self, study_name: str, timestamp: str) -> str:
        """Generate HTML header."""
        return f'''<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Study Insights Report - {study_name}</title>
    <script src="https://cdn.plot.ly/plotly-2.27.0.min.js"></script>
    <style>
        :root {{
            --bg-dark: #111827;
            --bg-card: #1f2937;
            --border: #374151;
            --text: #f9fafb;
            --text-muted: #9ca3af;
            --primary: #3b82f6;
            --success: #22c55e;
        }}
        * {{ box-sizing: border-box; margin: 0; padding: 0; }}
        body {{
            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
            background: var(--bg-dark);
            color: var(--text);
            line-height: 1.6;
            padding: 2rem;
        }}
        .container {{ max-width: 1200px; margin: 0 auto; }}
        h1 {{ font-size: 2rem; margin-bottom: 0.5rem; color: var(--primary); }}
        h2 {{ font-size: 1.5rem; margin: 2rem 0 1rem; border-bottom: 1px solid var(--border); padding-bottom: 0.5rem; }}
        h3 {{ font-size: 1.1rem; margin-bottom: 0.5rem; }}
        .meta {{ color: var(--text-muted); font-size: 0.9rem; margin-bottom: 2rem; }}
        .card {{
            background: var(--bg-card);
            border: 1px solid var(--border);
            border-radius: 8px;
            padding: 1.5rem;
            margin-bottom: 1.5rem;
        }}
        .summary-grid {{
            display: grid;
            grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
            gap: 1rem;
            margin-bottom: 2rem;
        }}
        .stat {{
            background: var(--bg-card);
            padding: 1rem;
            border-radius: 8px;
            text-align: center;
        }}
        .stat-value {{ font-size: 1.5rem; font-weight: bold; color: var(--success); }}
        .stat-label {{ font-size: 0.8rem; color: var(--text-muted); text-transform: uppercase; }}
        .objective-tag {{
            display: inline-block;
            background: var(--primary);
            color: white;
            padding: 0.2rem 0.5rem;
            border-radius: 4px;
            font-size: 0.75rem;
            margin-left: 0.5rem;
        }}
        .plot-container {{ width: 100%; height: 400px; margin: 1rem 0; }}
        .appendix-plot {{ height: 600px; page-break-inside: avoid; }}
        table {{ width: 100%; border-collapse: collapse; margin: 1rem 0; }}
        th, td {{ padding: 0.5rem; text-align: left; border-bottom: 1px solid var(--border); }}
        th {{ color: var(--text-muted); font-weight: normal; font-size: 0.85rem; }}
        .print-btn {{
            position: fixed;
            top: 1rem;
            right: 1rem;
            background: var(--primary);
            color: white;
            border: none;
            padding: 0.75rem 1.5rem;
            border-radius: 6px;
            cursor: pointer;
            font-size: 0.9rem;
        }}
        .print-btn:hover {{ opacity: 0.9; }}
        @media print {{
            .print-btn {{ display: none; }}
            body {{ background: white; color: black; }}
            .card {{ border: 1px solid #ccc; }}
        }}
    </style>
</head>
<body>
    <button class="print-btn" onclick="window.print()">Save as PDF</button>
    <div class="container">
        <h1>Study Insights Report</h1>
        <p class="meta">{study_name} | Generated: {timestamp}</p>
'''
    def _executive_summary(self) -> str:
        """Generate executive summary section."""
        n_objectives = len(self.config.get('objectives', []))
        n_insights = len(self.results)
        linked = len([r for r in self.results if r.linked_objective])

        return f'''
        <div class="summary-grid">
            <div class="stat">
                <div class="stat-value">{n_insights}</div>
                <div class="stat-label">Total Insights</div>
            </div>
            <div class="stat">
                <div class="stat-value">{linked}</div>
                <div class="stat-label">Objective-Linked</div>
            </div>
            <div class="stat">
                <div class="stat-value">{n_objectives}</div>
                <div class="stat-label">Study Objectives</div>
            </div>
        </div>
        '''
    def _insight_section(self, result: InsightResult, compact: bool = False) -> str:
        """Generate HTML section for a single insight."""
        objective_tag = ""
        if result.linked_objective:
            objective_tag = f'<span class="objective-tag">{result.linked_objective}</span>'

        # Summary table
        summary_html = ""
        if result.summary:
            rows = ""
            for key, value in list(result.summary.items())[:6]:
                if isinstance(value, float):
                    value = f"{value:.4g}"
                rows += f"<tr><td>{key}</td><td>{value}</td></tr>"
            summary_html = f"<table><tbody>{rows}</tbody></table>"

        # Compact plot or placeholder
        plot_html = ""
        if result.plotly_figure and compact:
            plot_id = f"plot_{result.insight_type}_{id(result)}"
            plot_json = json.dumps(result.plotly_figure)
            plot_html = f'''
            <div id="{plot_id}" class="plot-container"></div>
            <script>
                var fig = {plot_json};
                fig.layout.height = 350;
                fig.layout.margin = {{l:50,r:50,t:30,b:50}};
                Plotly.newPlot("{plot_id}", fig.data, fig.layout, {{responsive:true}});
            </script>
            '''

        return f'''
        <div class="card">
            <h3>{result.insight_name}{objective_tag}</h3>
            {summary_html}
            {plot_html}
        </div>
        '''
    def _appendix_section(self) -> str:
        """Generate appendix with full-size figures."""
        html = '<h2>Appendix: Full-Size Visualizations</h2>'

        for i, result in enumerate(self.results):
            if not result.plotly_figure:
                continue

            plot_id = f"appendix_plot_{i}"
            plot_json = json.dumps(result.plotly_figure)

            html += f'''
            <div class="card" style="page-break-before: always;">
                <h3>Appendix {i+1}: {result.insight_name}</h3>
                <div id="{plot_id}" class="appendix-plot"></div>
                <script>
                    var fig = {plot_json};
                    fig.layout.height = 550;
                    Plotly.newPlot("{plot_id}", fig.data, fig.layout, {{responsive:true}});
                </script>
            </div>
            '''

        return html
    def _report_footer(self) -> str:
        """Generate HTML footer."""
        return '''
    </div>
</body>
</html>'''

    def _write_summary_json(self) -> None:
        """Write summary JSON for Results page integration."""
        summary = {
            'generated_at': datetime.now().isoformat(),
            'study_name': self.config.get('study_name', self.study_path.name),
            'insights': []
        }

        for result in self.results:
            summary['insights'].append({
                'type': result.insight_type,
                'name': result.insight_name,
                'success': result.success,
                'linked_objective': result.linked_objective,
                'html_path': str(result.html_path) if result.html_path else None,
                'summary': result.summary
            })

        summary_path = self.insights_path / "insights_summary.json"
        with open(summary_path, 'w') as f:
            json.dump(summary, f, indent=2)
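For reference, the file written by `_write_summary_json` has this shape. The field names below match the code above; every value is hypothetical illustration data.

```python
import json

# Hypothetical insights_summary.json payload, matching the fields
# _write_summary_json emits per result (values are made up).
summary = {
    "generated_at": "2025-12-22T10:00:00",
    "study_name": "demo_study",
    "insights": [
        {
            "type": "zernike_wfe",
            "name": "WFE at 40 vs 20 deg",
            "success": True,
            "linked_objective": "rel_filtered_rms_40_vs_20",
            "html_path": None,
            "summary": {"filtered_rms_nm": 42.0},
        }
    ],
}

# A Results page can round-trip it through JSON and filter on success.
loaded = json.loads(json.dumps(summary))
ok = [i["name"] for i in loaded["insights"] if i["success"]]
assert ok == ["WFE at 40 vs 20 deg"]
```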
875
optimization_engine/insights/msf_zernike.py
Normal file
@@ -0,0 +1,875 @@
"""
|
||||
MSF Zernike Insight - Mid-Spatial Frequency Analysis for Telescope Mirrors
|
||||
|
||||
Provides detailed analysis of mid-spatial frequency (MSF) content in mirror
|
||||
wavefront error, specifically for gravity-induced support print-through.
|
||||
|
||||
Key Features:
|
||||
- Band decomposition: LSF (n≤10), MSF (n=11-50), HSF (n>50)
|
||||
- RSS (Root Sum Square) metrics per band
|
||||
- Relative analysis vs reference orientations (20°, 90°)
|
||||
- Higher-order Zernike fitting (100 modes by default)
|
||||
- PSD (Power Spectral Density) visualization
|
||||
- Support print-through identification
|
||||
|
||||
For GigaBIT and similar telescope mirrors where MSF errors are dominated by
|
||||
gravity-induced support deformation rather than polishing residual.
|
||||
|
||||
Author: Atomizer Framework
|
||||
"""
|
||||
|
||||
from pathlib import Path
|
||||
from datetime import datetime
|
||||
from typing import Dict, Any, List, Optional, Tuple
|
||||
import numpy as np
|
||||
from math import factorial
|
||||
from numpy.linalg import LinAlgError
|
||||
|
||||
from .base import StudyInsight, InsightConfig, InsightResult, register_insight
|
||||
|
||||
# Lazy imports
|
||||
_plotly_loaded = False
|
||||
_go = None
|
||||
_make_subplots = None
|
||||
_Triangulation = None
|
||||
_OP2 = None
|
||||
_BDF = None
|
||||
|
||||
|
||||
def _load_dependencies():
|
||||
"""Lazy load heavy dependencies."""
|
||||
global _plotly_loaded, _go, _make_subplots, _Triangulation, _OP2, _BDF
|
||||
if not _plotly_loaded:
|
||||
import plotly.graph_objects as go
|
||||
from plotly.subplots import make_subplots
|
||||
from matplotlib.tri import Triangulation
|
||||
from pyNastran.op2.op2 import OP2
|
||||
from pyNastran.bdf.bdf import BDF
|
||||
_go = go
|
||||
_make_subplots = make_subplots
|
||||
_Triangulation = Triangulation
|
||||
_OP2 = OP2
|
||||
_BDF = BDF
|
||||
_plotly_loaded = True
|
||||
|
||||
|
||||
# ============================================================================
# Band Definitions
# ============================================================================

# Default band boundaries (Zernike radial order n)
DEFAULT_LSF_MAX = 10   # LSF: n = 1-10 (M2 hexapod correctable)
DEFAULT_MSF_MAX = 50   # MSF: n = 11-50 (support print-through)
DEFAULT_N_MODES = 100  # Total modes to fit (reaches radial order n ≈ 13)

# Band colors for visualization
BAND_COLORS = {
    'lsf': '#3b82f6',  # Blue - low spatial frequency
    'msf': '#f59e0b',  # Amber - mid spatial frequency
    'hsf': '#ef4444',  # Red - high spatial frequency
}


# ============================================================================
# Zernike Mathematics (same as zernike_wfe.py)
# ============================================================================
def noll_indices(j: int) -> Tuple[int, int]:
    """Convert Noll index to (n, m) radial/azimuthal orders."""
    if j < 1:
        raise ValueError("Noll index j must be >= 1")
    count = 0
    n = 0
    while True:
        if n == 0:
            ms = [0]
        elif n % 2 == 0:
            ms = [0] + [m for k in range(1, n//2 + 1) for m in (-2*k, 2*k)]
        else:
            ms = [m for k in range(0, (n+1)//2) for m in (-(2*k+1), (2*k+1))]
        for m in ms:
            count += 1
            if count == j:
                return n, m
        n += 1


def noll_to_radial_order(j: int) -> int:
    """Get radial order n for Noll index j."""
    n, _ = noll_indices(j)
    return n


def radial_order_to_first_noll(n: int) -> int:
    """Get first Noll index for radial order n."""
    # Sum of (k+1) for k=0 to n-1, plus 1
    return n * (n + 1) // 2 + 1
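A quick check of the index bookkeeping above: radial orders 0..n-1 contain n(n+1)/2 modes in total, so mode j sits in the smallest order n with j ≤ (n+1)(n+2)/2. This stand-alone sketch (function names here are illustrative, not the module's) verifies that against the first-index formula:

```python
def radial_order(j: int) -> int:
    # Smallest n with j <= (n+1)(n+2)/2, i.e. mode j sits in radial order n
    n = 0
    while (n + 1) * (n + 2) // 2 < j:
        n += 1
    return n

def first_noll(n: int) -> int:
    # Same closed form as radial_order_to_first_noll above
    return n * (n + 1) // 2 + 1

assert first_noll(4) == 11       # J11 is primary spherical (n=4, m=0)
assert radial_order(11) == 4
assert radial_order(66) == 10    # last mode of the LSF band for lsf_max=10
assert radial_order(67) == 11    # first MSF mode
```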
def zernike_noll(j: int, r: np.ndarray, th: np.ndarray) -> np.ndarray:
    """Evaluate Zernike polynomial j at (r, theta)."""
    n, m = noll_indices(j)
    R = np.zeros_like(r)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1)**s * factorial(n - s) /
             (factorial(s) *
              factorial((n + abs(m)) // 2 - s) *
              factorial((n - abs(m)) // 2 - s)))
        R += c * r**(n - 2*s)
    if m == 0:
        return R
    return R * (np.cos(m * th) if m > 0 else np.sin(-m * th))


def zernike_common_name(n: int, m: int) -> str:
    """Get common name for Zernike mode."""
    names = {
        (0, 0): "Piston", (1, -1): "Tilt X", (1, 1): "Tilt Y",
        (2, 0): "Defocus", (2, -2): "Astig 45°", (2, 2): "Astig 0°",
        (3, -1): "Coma X", (3, 1): "Coma Y", (3, -3): "Trefoil X", (3, 3): "Trefoil Y",
        (4, 0): "Primary Spherical", (4, -2): "Sec Astig X", (4, 2): "Sec Astig Y",
        (4, -4): "Quadrafoil X", (4, 4): "Quadrafoil Y",
        (5, -1): "Sec Coma X", (5, 1): "Sec Coma Y",
        (5, -3): "Sec Trefoil X", (5, 3): "Sec Trefoil Y",
        (5, -5): "Pentafoil X", (5, 5): "Pentafoil Y",
        (6, 0): "Sec Spherical",
    }
    return names.get((n, m), f"Z(n={n}, m={m})")


def zernike_label(j: int) -> str:
    """Get label for Zernike coefficient J{j}."""
    n, m = noll_indices(j)
    return f"J{j:02d} - {zernike_common_name(n, m)} (n={n}, m={m})"


def compute_zernike_coeffs(
    X: np.ndarray,
    Y: np.ndarray,
    vals: np.ndarray,
    n_modes: int,
    chunk_size: int = 100000
) -> Tuple[np.ndarray, float]:
    """Fit Zernike coefficients to WFE data."""
    Xc, Yc = X - np.mean(X), Y - np.mean(Y)
    R = float(np.max(np.hypot(Xc, Yc)))
    r = np.hypot(Xc / R, Yc / R).astype(np.float32)
    th = np.arctan2(Yc, Xc).astype(np.float32)
    mask = (r <= 1.0) & ~np.isnan(vals)
    if not np.any(mask):
        raise RuntimeError("No valid points inside unit disk.")
    idx = np.nonzero(mask)[0]
    m = int(n_modes)
    G = np.zeros((m, m), dtype=np.float64)
    h = np.zeros((m,), dtype=np.float64)
    v = vals.astype(np.float64)

    for start in range(0, len(idx), chunk_size):
        sl = idx[start:start + chunk_size]
        r_b, th_b, v_b = r[sl], th[sl], v[sl]
        Zb = np.column_stack([zernike_noll(j, r_b, th_b).astype(np.float32)
                              for j in range(1, m + 1)])
        G += (Zb.T @ Zb).astype(np.float64)
        h += (Zb.T @ v_b).astype(np.float64)
    try:
        coeffs = np.linalg.solve(G, h)
    except LinAlgError:
        coeffs = np.linalg.lstsq(G, h, rcond=None)[0]
    return coeffs, R
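The chunked accumulation in `compute_zernike_coeffs` builds the least-squares normal equations G = ZᵀZ, h = Zᵀw and then solves G c = h. A minimal sketch of that pattern on a synthetic tilted plane (the three-column basis and the data here are toy stand-ins, not the Zernike basis):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 500)
y = rng.uniform(-1.0, 1.0, 500)

true = np.array([0.5, 2.0, -1.0])        # piston, tilt-x, tilt-y
w = true[0] + true[1] * x + true[2] * y  # noise-free synthetic "WFE"

Z = np.column_stack([np.ones_like(x), x, y])
G = Z.T @ Z                              # accumulated as Zb.T @ Zb per chunk
h = Z.T @ w                              # accumulated as Zb.T @ v_b per chunk
coeffs = np.linalg.solve(G, h)

assert np.allclose(coeffs, true)         # exact recovery without noise
```

The chunking only changes how G and h are summed, not the solution, which is why the function can handle millions of nodes with a fixed m-by-m system.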
# ============================================================================
# Band Analysis
# ============================================================================

def classify_modes_by_band(
    n_modes: int,
    lsf_max: int = DEFAULT_LSF_MAX,
    msf_max: int = DEFAULT_MSF_MAX
) -> Dict[str, List[int]]:
    """
    Classify Zernike modes into LSF/MSF/HSF bands.

    Returns:
        Dict with 'lsf', 'msf', 'hsf' keys containing lists of Noll indices
    """
    bands = {'lsf': [], 'msf': [], 'hsf': []}

    for j in range(1, n_modes + 1):
        n = noll_to_radial_order(j)
        if n <= lsf_max:
            bands['lsf'].append(j)
        elif n <= msf_max:
            bands['msf'].append(j)
        else:
            bands['hsf'].append(j)

    return bands
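With the defaults (100 modes, lsf_max=10, msf_max=50) the split works out as follows. This self-contained sketch re-derives the radial order from the index count rather than importing the module:

```python
def radial_order(j: int) -> int:
    # Smallest n with j <= (n+1)(n+2)/2 (see noll_to_radial_order above)
    n = 0
    while (n + 1) * (n + 2) // 2 < j:
        n += 1
    return n

bands = {'lsf': [], 'msf': [], 'hsf': []}
for j in range(1, 101):                  # DEFAULT_N_MODES = 100
    n = radial_order(j)
    key = 'lsf' if n <= 10 else ('msf' if n <= 50 else 'hsf')
    bands[key].append(j)

assert len(bands['lsf']) == 66   # all modes through radial order 10
assert len(bands['msf']) == 34   # orders 11-13, as far as 100 modes reach
assert bands['hsf'] == []        # n > 50 would need far more than 100 modes
```

This makes explicit that with the default 100-mode fit the HSF band is always empty; capturing n > 50 content would require on the order of a thousand modes.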
def compute_band_rss(
    coeffs: np.ndarray,
    bands: Dict[str, List[int]],
    filter_modes: int = 4
) -> Dict[str, float]:
    """
    Compute RSS (Root Sum Square) of coefficients in each band.

    RSS = sqrt(sum(c_j^2)) for j in band

    Args:
        coeffs: Zernike coefficients
        bands: Dict with band name -> list of Noll indices
        filter_modes: Number of low-order modes to filter for 'filtered' metric (default 4 = J1-J4)

    Returns:
        Dict with 'lsf', 'msf', 'hsf', 'total', and 'filtered' (J5+) RSS values
    """
    rss = {}
    for band_name, indices in bands.items():
        # Convert to 0-indexed
        band_coeffs = [coeffs[j - 1] for j in indices if j - 1 < len(coeffs)]
        rss[band_name] = float(np.sqrt(np.sum(np.array(band_coeffs)**2)))

    rss['total'] = float(np.sqrt(np.sum(coeffs**2)))

    # Filtered RSS: exclude first filter_modes (J1-J4 = piston, tip, tilt, defocus)
    # This is the engineering-relevant metric after M2 alignment
    filtered_coeffs = coeffs[filter_modes:] if len(coeffs) > filter_modes else np.array([])
    rss['filtered'] = float(np.sqrt(np.sum(filtered_coeffs**2)))

    return rss

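A worked example of the RSS metrics, with toy coefficients chosen so every value is exact; the `filtered` metric drops J1-J4 exactly as `compute_band_rss` does:

```python
import numpy as np

# Toy coefficients and bands (Noll indices are 1-based, coeffs are 0-based).
coeffs = np.array([3.0, 4.0, 0.0, 0.0, 12.0])
bands = {'lsf': [1, 2], 'msf': [3, 4], 'hsf': [5]}

rss = {name: float(np.sqrt(sum(coeffs[j - 1]**2 for j in idx)))
       for name, idx in bands.items()}
rss['total'] = float(np.sqrt(np.sum(coeffs**2)))      # sqrt(9 + 16 + 144) = 13
rss['filtered'] = float(np.sqrt(np.sum(coeffs[4:]**2)))  # J5+ only
```

Note the bands combine in quadrature: total² = lsf² + msf² + hsf².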
def compute_band_residual(
    X: np.ndarray,
    Y: np.ndarray,
    W_nm: np.ndarray,
    coeffs: np.ndarray,
    R: float,
    bands: Dict[str, List[int]],
    keep_bands: List[str]
) -> np.ndarray:
    """
    Compute WFE residual keeping only specified bands.

    Args:
        keep_bands: List of band names to keep (e.g., ['msf'] for MSF-only)

    Returns:
        WFE array with only specified band content
    """
    Xc = X - np.mean(X)
    Yc = Y - np.mean(Y)
    r = np.hypot(Xc / R, Yc / R)
    th = np.arctan2(Yc, Xc)

    # Build Zernike basis for modes to keep
    keep_indices = []
    for band in keep_bands:
        keep_indices.extend(bands.get(band, []))

    if not keep_indices:
        return np.zeros_like(W_nm)

    # Reconstruct only the kept bands
    W_band = np.zeros_like(W_nm)
    for j in keep_indices:
        if j - 1 < len(coeffs):
            Z_j = zernike_noll(j, r, th)
            W_band += coeffs[j - 1] * Z_j

    return W_band

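The keep-band reconstruction amounts to: fit coefficients once, then rebuild the field from a chosen subset of modes. A self-contained sketch with a sine basis standing in for the Zernike basis (the sines are orthogonal on this uniform grid, so the fit separates the "bands" exactly):

```python
import numpy as np

# Synthetic field: low-frequency "LSF" content plus a k=7 "MSF" ripple.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
signal = 2.0 * np.sin(t) + 0.5 * np.sin(7 * t)

# Fit all 9 modes once, as compute_zernike_coeffs does for the Zernike basis.
basis = np.column_stack([np.sin(k * t) for k in range(1, 10)])
coeffs, *_ = np.linalg.lstsq(basis, signal, rcond=None)

# Reconstruct keeping only the k=7 "band", as compute_band_residual does.
keep = [7]
band_only = sum(coeffs[k - 1] * np.sin(k * t) for k in keep)
```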
def find_dominant_msf_modes(
    coeffs: np.ndarray,
    bands: Dict[str, List[int]],
    top_n: int = 5
) -> List[Dict[str, Any]]:
    """
    Find the dominant MSF modes by coefficient magnitude.

    Returns:
        List of dicts with 'j', 'label', 'coeff_nm', 'n', 'm'
    """
    msf_indices = bands.get('msf', [])

    mode_data = []
    for j in msf_indices:
        if j - 1 < len(coeffs):
            n, m = noll_indices(j)
            mode_data.append({
                'j': j,
                'label': zernike_label(j),
                'coeff_nm': float(abs(coeffs[j - 1])),
                'n': n,
                'm': m
            })

    # Sort by magnitude
    mode_data.sort(key=lambda x: x['coeff_nm'], reverse=True)

    return mode_data[:top_n]

def estimate_mesh_resolution(X: np.ndarray, Y: np.ndarray) -> Dict[str, float]:
    """
    Estimate mesh spatial resolution.

    Returns:
        Dict with 'node_count', 'diameter_mm', 'avg_spacing_mm',
        'max_resolvable_order', 'min_feature_mm'
    """
    n_nodes = len(X)

    Xc = X - np.mean(X)
    Yc = Y - np.mean(Y)
    R = np.max(np.hypot(Xc, Yc))
    diameter = 2 * R

    # Approximate spacing from area
    area = np.pi * R**2
    avg_spacing = np.sqrt(area / n_nodes)

    # Nyquist: can resolve features > 2x spacing
    min_feature = 2 * avg_spacing

    # Max Zernike order
    max_order = int(diameter / min_feature)

    return {
        'node_count': n_nodes,
        'diameter_mm': float(diameter),
        'avg_spacing_mm': float(avg_spacing),
        'min_feature_mm': float(min_feature),
        'max_resolvable_order': max_order
    }

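The resolution estimate reduces to three formulas: spacing ≈ √(πR²/N), minimum resolvable feature = 2 × spacing (Nyquist), and maximum radial order ≈ diameter / feature. With round illustrative numbers:

```python
import numpy as np

# R = 500 mm aperture radius, ~31416 nodes -> ~5 mm average node spacing.
R = 500.0
n_nodes = 31416
avg_spacing = np.sqrt(np.pi * R**2 / n_nodes)  # area-based estimate, ~5.0 mm
min_feature = 2 * avg_spacing                  # Nyquist limit, ~10 mm
max_order = int(2 * R / min_feature)           # highest meaningful Zernike order
```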
# ============================================================================
# Default Configuration
# ============================================================================

DEFAULT_CONFIG = {
    'n_modes': 100,
    'lsf_max': 10,            # n ≤ 10 is LSF
    'msf_max': 50,            # n = 11-50 is MSF
    'amp': 0.5,               # Visual deformation scale
    'pancake': 3.0,           # Z-axis range multiplier
    'plot_downsample': 15000,
    'colorscale': 'Turbo',
    'disp_unit': 'mm',
    'show_psd': True,
    'show_all_orientations': True,
}

@register_insight
class MSFZernikeInsight(StudyInsight):
    """
    Mid-Spatial Frequency Zernike analysis for telescope mirror optimization.

    Provides detailed breakdown of spatial frequency content:
    - LSF (Low): n ≤ 10, correctable by M2 hexapod
    - MSF (Mid): n = 11-50, support print-through
    - HSF (High): n > 50, near mesh resolution limit

    Generates:
    - Band decomposition table with RSS metrics
    - MSF-only 3D surface visualization
    - Coefficient bar chart color-coded by band
    - Dominant MSF mode identification
    - Mesh resolution analysis
    """

    insight_type = "msf_zernike"
    name = "MSF Zernike Analysis"
    description = "Mid-spatial frequency analysis with band decomposition for support print-through"
    category = "optical"
    applicable_to = ["mirror", "optics", "wfe", "msf"]
    required_files = ["*.op2"]

    def __init__(self, study_path: Path):
        super().__init__(study_path)
        self.op2_path: Optional[Path] = None
        self.geo_path: Optional[Path] = None
        self._node_geo: Optional[Dict] = None
        self._displacements: Optional[Dict] = None

    def can_generate(self) -> bool:
        """Check if OP2 and geometry files exist."""
        search_paths = [
            self.results_path,
            self.study_path / "2_iterations",
            self.setup_path / "model",
        ]

        for search_path in search_paths:
            if not search_path.exists():
                continue
            op2_files = list(search_path.glob("**/*solution*.op2"))
            if not op2_files:
                op2_files = list(search_path.glob("**/*.op2"))
            if op2_files:
                self.op2_path = max(op2_files, key=lambda p: p.stat().st_mtime)
                break

        if self.op2_path is None:
            return False

        try:
            self.geo_path = self._find_geometry_file(self.op2_path)
            return True
        except FileNotFoundError:
            return False

    def _find_geometry_file(self, op2_path: Path) -> Path:
        """Find BDF/DAT geometry file for OP2."""
        folder = op2_path.parent
        base = op2_path.stem

        for ext in ['.dat', '.bdf']:
            cand = folder / (base + ext)
            if cand.exists():
                return cand

        for f in folder.iterdir():
            if f.suffix.lower() in ['.dat', '.bdf']:
                return f

        raise FileNotFoundError(f"No geometry file found for {op2_path}")

    def _load_data(self):
        """Load geometry and displacement data."""
        if self._node_geo is not None:
            return

        _load_dependencies()

        bdf = _BDF()
        bdf.read_bdf(str(self.geo_path))
        self._node_geo = {int(nid): node.get_position()
                          for nid, node in bdf.nodes.items()}

        op2 = _OP2()
        op2.read_op2(str(self.op2_path))

        if not op2.displacements:
            raise RuntimeError("No displacement data in OP2")

        self._displacements = {}
        for key, darr in op2.displacements.items():
            data = darr.data
            dmat = data[0] if data.ndim == 3 else (data if data.ndim == 2 else None)
            if dmat is None:
                continue

            ngt = darr.node_gridtype.astype(int)
            node_ids = ngt if ngt.ndim == 1 else ngt[:, 0]
            isubcase = getattr(darr, 'isubcase', None)
            label = str(isubcase) if isubcase else str(key)

            self._displacements[label] = {
                'node_ids': node_ids.astype(int),
                'disp': dmat.copy()
            }

    def _build_wfe_arrays(
        self,
        label: str,
        disp_unit: str = 'mm'
    ) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray]:
        """Build X, Y, WFE arrays for a subcase."""
        nm_per_unit = 1e6 if disp_unit == 'mm' else 1e9

        data = self._displacements[label]
        node_ids = data['node_ids']
        dmat = data['disp']

        X, Y, WFE = [], [], []
        valid_nids = []
        for nid, vec in zip(node_ids, dmat):
            geo = self._node_geo.get(int(nid))
            if geo is None:
                continue
            X.append(geo[0])
            Y.append(geo[1])
            wfe = vec[2] * 2.0 * nm_per_unit  # WFE = 2x surface error (reflection)
            WFE.append(wfe)
            valid_nids.append(nid)

        return (np.array(X), np.array(Y), np.array(WFE), np.array(valid_nids))

    def _compute_relative_wfe(
        self,
        X1, Y1, WFE1, nids1,
        X2, Y2, WFE2, nids2
    ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
        """Compute WFE1 - WFE2 for common nodes."""
        ref_map = {int(nid): (x, y, w) for nid, x, y, w in zip(nids2, X2, Y2, WFE2)}

        X_rel, Y_rel, WFE_rel = [], [], []
        for nid, x, y, w in zip(nids1, X1, Y1, WFE1):
            nid = int(nid)
            if nid in ref_map:
                _, _, w_ref = ref_map[nid]
                X_rel.append(x)
                Y_rel.append(y)
                WFE_rel.append(w - w_ref)

        return np.array(X_rel), np.array(Y_rel), np.array(WFE_rel)

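The node-ID-keyed differencing above can be shown on toy data (the node IDs here are hypothetical): only nodes present in both subcases contribute to the relative map, so a node missing from the reference simply drops out.

```python
import numpy as np

# Subcase A and reference subcase B share nodes 11 and 12; node 99 has no partner.
nids_a = np.array([10, 11, 12, 13])
wfe_a = np.array([5.0, 6.0, 7.0, 8.0])
nids_b = np.array([11, 12, 99])
wfe_b = np.array([1.0, 2.5, 4.0])

# Build a reference lookup, then difference only the common nodes.
ref = {int(n): w for n, w in zip(nids_b, wfe_b)}
common = [(int(n), w - ref[int(n)])
          for n, w in zip(nids_a, wfe_a) if int(n) in ref]
```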
    def _analyze_subcase(
        self,
        X: np.ndarray,
        Y: np.ndarray,
        W_nm: np.ndarray,
        cfg: Dict
    ) -> Dict[str, Any]:
        """
        Full MSF analysis for a single subcase.

        Returns:
            Dict with coefficients, bands, RSS, dominant modes, mesh info
        """
        n_modes = cfg['n_modes']
        lsf_max = cfg['lsf_max']
        msf_max = cfg['msf_max']

        # Fit Zernike
        coeffs, R = compute_zernike_coeffs(X, Y, W_nm, n_modes)

        # Band classification
        bands = classify_modes_by_band(n_modes, lsf_max, msf_max)

        # RSS per band
        rss = compute_band_rss(coeffs, bands)

        # Band percentages
        total_rss = rss['total']
        pct = {
            'lsf': 100 * rss['lsf'] / total_rss if total_rss > 0 else 0,
            'msf': 100 * rss['msf'] / total_rss if total_rss > 0 else 0,
            'hsf': 100 * rss['hsf'] / total_rss if total_rss > 0 else 0,
        }

        # Dominant MSF modes
        dominant_msf = find_dominant_msf_modes(coeffs, bands, top_n=5)

        # MSF-only residual
        W_msf = compute_band_residual(X, Y, W_nm, coeffs, R, bands, ['msf'])

        # Mesh resolution
        mesh_info = estimate_mesh_resolution(X, Y)

        return {
            'coefficients': coeffs,
            'R': R,
            'bands': bands,
            'rss': rss,
            'rss_pct': pct,
            'dominant_msf': dominant_msf,
            'W_msf': W_msf,
            'mesh_info': mesh_info,
            'n_modes': n_modes,
            'lsf_max': lsf_max,
            'msf_max': msf_max,
        }

    def _generate_html(
        self,
        analyses: Dict[str, Dict],
        cfg: Dict,
        X: np.ndarray,
        Y: np.ndarray
    ) -> str:
        """Generate comprehensive HTML report."""
        _load_dependencies()

        n_modes = cfg['n_modes']
        amp = cfg.get('amp', 0.5)
        pancake = cfg.get('pancake', 3.0)
        downsample = cfg.get('plot_downsample', 15000)
        colorscale = cfg.get('colorscale', 'Turbo')

        # Use first available analysis for visualization
        primary_key = list(analyses.keys())[0]
        primary = analyses[primary_key]

        coeffs = primary['coefficients']
        bands = primary['bands']
        W_msf = primary['W_msf']
        mesh_info = primary['mesh_info']

        # Prepare band-colored bar chart data
        bar_colors = []
        for j in range(1, n_modes + 1):
            if j in bands['lsf']:
                bar_colors.append(BAND_COLORS['lsf'])
            elif j in bands['msf']:
                bar_colors.append(BAND_COLORS['msf'])
            else:
                bar_colors.append(BAND_COLORS['hsf'])

        labels = [zernike_label(j) for j in range(1, n_modes + 1)]
        coeff_abs = np.abs(coeffs)

        # Downsample for 3D plot
        n = len(X)
        if n > downsample:
            rng = np.random.default_rng(42)
            sel = rng.choice(n, size=downsample, replace=False)
            Xp, Yp, Wp = X[sel], Y[sel], W_msf[sel]
        else:
            Xp, Yp, Wp = X, Y, W_msf

        res_amp = amp * Wp
        max_amp = float(np.max(np.abs(res_amp))) if res_amp.size else 1.0

        # Create figure with subplots
        fig = _make_subplots(
            rows=4, cols=1,
            specs=[[{"type": "scene"}], [{"type": "table"}],
                   [{"type": "table"}], [{"type": "xy"}]],
            row_heights=[0.40, 0.20, 0.15, 0.25],
            vertical_spacing=0.03,
            subplot_titles=[
                "<b>MSF Content Only (n=11-50)</b>",
                "<b>Band Decomposition (RSS)</b>",
                "<b>Dominant MSF Modes</b>",
                "<b>Coefficient Magnitude by Band</b>"
            ]
        )

        # 3D MSF surface
        try:
            tri = _Triangulation(Xp, Yp)
            if tri.triangles is not None and len(tri.triangles) > 0:
                i, j, k = tri.triangles.T
                fig.add_trace(_go.Mesh3d(
                    x=Xp, y=Yp, z=res_amp,
                    i=i, j=j, k=k,
                    intensity=res_amp,
                    colorscale=colorscale,
                    opacity=1.0,
                    flatshading=False,
                    lighting=dict(ambient=0.4, diffuse=0.8, specular=0.3,
                                  roughness=0.5, fresnel=0.2),
                    lightposition=dict(x=100, y=200, z=300),
                    showscale=True,
                    colorbar=dict(title=dict(text="MSF WFE (nm)", side="right"),
                                  thickness=15, len=0.5, tickformat=".1f"),
                    hovertemplate="X: %{x:.1f}<br>Y: %{y:.1f}<br>MSF: %{z:.2f} nm<extra></extra>"
                ), row=1, col=1)
        except Exception:
            fig.add_trace(_go.Scatter3d(
                x=Xp, y=Yp, z=res_amp,
                mode='markers',
                marker=dict(size=2, color=res_amp, colorscale=colorscale, showscale=True),
                showlegend=False
            ), row=1, col=1)

        # Configure 3D scene
        fig.update_scenes(
            camera=dict(eye=dict(x=1.2, y=1.2, z=0.8), up=dict(x=0, y=0, z=1)),
            xaxis=dict(title="X (mm)", showgrid=True,
                       gridcolor='rgba(128,128,128,0.3)',
                       showbackground=True, backgroundcolor='rgba(240,240,240,0.9)'),
            yaxis=dict(title="Y (mm)", showgrid=True,
                       gridcolor='rgba(128,128,128,0.3)',
                       showbackground=True, backgroundcolor='rgba(240,240,240,0.9)'),
            zaxis=dict(title="MSF WFE (nm)",
                       range=[-max_amp * pancake, max_amp * pancake],
                       showgrid=True, gridcolor='rgba(128,128,128,0.3)',
                       showbackground=True, backgroundcolor='rgba(230,230,250,0.9)'),
            aspectmode='manual',
            aspectratio=dict(x=1, y=1, z=0.4)
        )

        # Band decomposition table
        band_headers = ["<b>Band</b>", "<b>Order Range</b>", "<b>Feature Size</b>"]
        band_values = [
            ["LSF (Low)", "MSF (Mid)", "HSF (High)", "<b>Total</b>", "<b>Filtered (J5+)</b>"],
            [f"n ≤ {cfg['lsf_max']}", f"n = {cfg['lsf_max']+1}-{cfg['msf_max']}",
             f"n > {cfg['msf_max']}", "All", "J5+ (excl. piston/tip/tilt/defocus)"],
            [f"> {int(mesh_info['diameter_mm']/cfg['lsf_max'])} mm",
             f"{int(mesh_info['diameter_mm']/cfg['msf_max'])}-{int(mesh_info['diameter_mm']/(cfg['lsf_max']+1))} mm",
             f"< {int(mesh_info['diameter_mm']/cfg['msf_max'])} mm", "-", "M2 correctable removed"],
        ]

        # Add columns for each analysis
        for key, analysis in analyses.items():
            band_headers.append(f"<b>{key} RSS (nm)</b>")
            rss = analysis['rss']
            pct = analysis['rss_pct']
            # Calculate filtered percentage
            filtered_pct = 100 * rss['filtered'] / rss['total'] if rss['total'] > 0 else 0
            band_values.append([
                f"{rss['lsf']:.2f} ({pct['lsf']:.1f}%)",
                f"{rss['msf']:.2f} ({pct['msf']:.1f}%)",
                f"{rss['hsf']:.2f} ({pct['hsf']:.1f}%)",
                f"<b>{rss['total']:.2f}</b>",
                f"<b>{rss['filtered']:.2f}</b> ({filtered_pct:.1f}%)",
            ])

        fig.add_trace(_go.Table(
            header=dict(values=band_headers,
                        align="left", fill_color='#1f2937', font=dict(color='white')),
            cells=dict(values=band_values,
                       align="left", fill_color='#374151', font=dict(color='white'))
        ), row=2, col=1)

        # Dominant MSF modes table
        dom_modes = primary['dominant_msf']
        if dom_modes:
            fig.add_trace(_go.Table(
                header=dict(values=["<b>Rank</b>", "<b>Mode</b>", "<b>|Coeff| (nm)</b>", "<b>Order n</b>"],
                            align="left", fill_color='#1f2937', font=dict(color='white')),
                cells=dict(values=[
                    [f"#{i+1}" for i in range(len(dom_modes))],
                    [m['label'] for m in dom_modes],
                    [f"{m['coeff_nm']:.3f}" for m in dom_modes],
                    [str(m['n']) for m in dom_modes],
                ], align="left", fill_color='#374151', font=dict(color='white'))
            ), row=3, col=1)

        # Bar chart with band colors
        fig.add_trace(
            _go.Bar(
                x=coeff_abs.tolist(), y=labels,
                orientation='h',
                marker_color=bar_colors,
                hovertemplate="%{y}<br>|Coeff| = %{x:.3f} nm<extra></extra>",
                showlegend=False
            ),
            row=4, col=1
        )

        # Add legend for bands
        for band_name, color in BAND_COLORS.items():
            fig.add_trace(_go.Scatter(
                x=[None], y=[None],
                mode='markers',
                marker=dict(size=10, color=color),
                name=band_name.upper(),
                showlegend=True
            ))

        # Layout
        fig.update_layout(
            width=1400, height=1600,
            margin=dict(t=60, b=20, l=20, r=20),
            paper_bgcolor='#111827', plot_bgcolor='#1f2937',
            font=dict(color='white'),
            title=dict(
                text=f"<b>Atomizer MSF Analysis</b> | {n_modes} modes | Mesh: {mesh_info['node_count']} nodes, {mesh_info['avg_spacing_mm']:.1f}mm spacing",
                x=0.5, font=dict(size=16)
            ),
            legend=dict(
                orientation="h",
                yanchor="bottom",
                y=1.02,
                xanchor="right",
                x=1
            )
        )

        return fig.to_html(include_plotlyjs='cdn', full_html=True)

    def _generate(self, config: InsightConfig) -> InsightResult:
        """Generate MSF analysis."""
        self._load_data()

        cfg = {**DEFAULT_CONFIG, **config.extra}
        disp_unit = cfg['disp_unit']

        # Map subcases
        disps = self._displacements
        if '1' in disps and '2' in disps:
            sc_map = {'90°': '1', '20°': '2', '40°': '3', '60°': '4'}
        elif '90' in disps and '20' in disps:
            sc_map = {'90°': '90', '20°': '20', '40°': '40', '60°': '60'}
        else:
            available = sorted(disps.keys(), key=lambda x: int(x) if x.isdigit() else 0)
            if len(available) >= 4:
                sc_map = {'90°': available[0], '20°': available[1],
                          '40°': available[2], '60°': available[3]}
            else:
                return InsightResult(success=False,
                                     error=f"Need 4 subcases, found: {available}")

        # Validate subcases
        for angle, label in sc_map.items():
            if label not in disps:
                return InsightResult(success=False,
                                     error=f"Subcase '{label}' (angle {angle}) not found")

        # Load reference data
        X_20, Y_20, WFE_20, nids_20 = self._build_wfe_arrays(sc_map['20°'], disp_unit)
        X_90, Y_90, WFE_90, nids_90 = self._build_wfe_arrays(sc_map['90°'], disp_unit)

        analyses = {}

        # Analyze each key orientation
        for angle in ['40°', '60°']:
            label = sc_map[angle]
            X, Y, WFE, nids = self._build_wfe_arrays(label, disp_unit)

            # Absolute
            analyses[f"{angle} Abs"] = self._analyze_subcase(X, Y, WFE, cfg)

            # Relative to 20°
            X_rel, Y_rel, WFE_rel = self._compute_relative_wfe(
                X, Y, WFE, nids, X_20, Y_20, WFE_20, nids_20)
            analyses[f"{angle} vs 20°"] = self._analyze_subcase(X_rel, Y_rel, WFE_rel, cfg)

            # Relative to 90° (manufacturing)
            X_rel90, Y_rel90, WFE_rel90 = self._compute_relative_wfe(
                X, Y, WFE, nids, X_90, Y_90, WFE_90, nids_90)
            analyses[f"{angle} vs 90°"] = self._analyze_subcase(X_rel90, Y_rel90, WFE_rel90, cfg)

        # Also analyze 90° absolute (manufacturing baseline)
        analyses["90° Mfg"] = self._analyze_subcase(X_90, Y_90, WFE_90, cfg)

        # Generate HTML using 40° vs 20° as primary visualization
        primary_analysis = analyses["40° vs 20°"]
        X_40, Y_40, _, _ = self._build_wfe_arrays(sc_map['40°'], disp_unit)

        html = self._generate_html(analyses, cfg, X_40, Y_40)

        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        output_dir = config.output_dir or self.insights_path
        output_dir.mkdir(parents=True, exist_ok=True)

        html_path = output_dir / f"msf_zernike_{timestamp}.html"
        html_path.write_text(html, encoding='utf-8')

        # Build summary
        summary = {
            'n_modes': cfg['n_modes'],
            'lsf_max_order': cfg['lsf_max'],
            'msf_max_order': cfg['msf_max'],
            'mesh_nodes': primary_analysis['mesh_info']['node_count'],
            'mesh_spacing_mm': primary_analysis['mesh_info']['avg_spacing_mm'],
            'max_resolvable_order': primary_analysis['mesh_info']['max_resolvable_order'],
        }

        # Add key metrics for each analysis
        for key, analysis in analyses.items():
            safe_key = key.replace('°', 'deg').replace(' ', '_')
            summary[f'{safe_key}_lsf_rss'] = analysis['rss']['lsf']
            summary[f'{safe_key}_msf_rss'] = analysis['rss']['msf']
            summary[f'{safe_key}_hsf_rss'] = analysis['rss']['hsf']
            summary[f'{safe_key}_total_rss'] = analysis['rss']['total']
            summary[f'{safe_key}_filtered_rss'] = analysis['rss']['filtered']
            summary[f'{safe_key}_msf_pct'] = analysis['rss_pct']['msf']

        return InsightResult(
            success=True,
            html_path=html_path,
            summary=summary
        )
optimization_engine/insights/zernike_dashboard.py (new file, 1304 lines; diff suppressed because it is too large)

optimization_engine/insights/zernike_opd_comparison.py (new file, 625 lines)
@@ -0,0 +1,625 @@
"""
Zernike OPD Method Comparison Insight
=====================================

Compares the standard Z-only Zernike method against a rigorous OPD method that
accounts for lateral (X, Y) displacements. Critical for understanding
whether your optimization objective is capturing true optical performance.

WHY THIS MATTERS:
-----------------
When supports pinch or laterally deform the mirror, nodes shift in X and Y.
The standard Zernike method ignores this, potentially leading to:
- Optimized designs that "cheat" by distorting laterally
- False low WFE when the actual optical surface is degraded
- Optimizer convergence to non-optimal solutions

This insight visualizes:
1. The difference between methods (where they disagree)
2. Lateral displacement magnitude map (where pinching occurs)
3. Which method you should be using for your study

Applicable to: all mirror/optics studies; CRITICAL for lateral support optimization.
"""
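The differential OPD idea described above can be checked on a single node. Under the parabola convention z = -r²/(4f) used later in `_compute_opd_comparison`, a node that merely slides along the ideal surface has nonzero dz but zero true surface error, which the Z-only method would misreport as WFE. A sketch with illustrative numbers:

```python
# Single-node check of the differential OPD formulation (illustrative values).
f = 5000.0                  # focal length, mm
x0, y0 = 100.0, 0.0         # undeformed node on the parabola z = -r^2/(4f)
dx, dy = 0.1, 0.0           # lateral shift, e.g. from support pinching

# Z change required to remain on the ideal parabola after the lateral shift
delta_r_sq = (x0 + dx)**2 + (y0 + dy)**2 - (x0**2 + y0**2)
delta_z_parabola = -delta_r_sq / (4.0 * f)

# Case 1: node slides along the parabola -> true surface error is zero
dz_sliding = delta_z_parabola
err_opd = dz_sliding - delta_z_parabola      # rigorous OPD: exactly 0

# Case 2: the Z-only method charges the full dz as surface error
err_z_only = dz_sliding                      # nonzero: spurious WFE
```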
from pathlib import Path
from datetime import datetime
from typing import Dict, Any, Optional, Tuple
import numpy as np

from .base import StudyInsight, InsightConfig, InsightResult, register_insight

# Lazy imports
_plotly_loaded = False
_go = None
_make_subplots = None
_Triangulation = None


def _load_dependencies():
    """Lazy load heavy dependencies."""
    global _plotly_loaded, _go, _make_subplots, _Triangulation
    if not _plotly_loaded:
        import plotly.graph_objects as go
        from plotly.subplots import make_subplots
        from matplotlib.tri import Triangulation
        _go = go
        _make_subplots = make_subplots
        _Triangulation = Triangulation
        _plotly_loaded = True


# Default configuration
DEFAULT_CONFIG = {
    'amp': 0.5,                   # Visual deformation scale
    'pancake': 3.0,               # Z-axis range multiplier
    'plot_downsample': 15000,
    'colorscale_wfe': 'RdBu',     # Diverging for WFE difference
    'colorscale_lateral': 'Hot',  # Sequential for lateral magnitude
    'disp_unit': 'mm',
    'n_modes': 50,
    'filter_orders': 4,
}

@register_insight
class ZernikeOPDComparisonInsight(StudyInsight):
    """
    Compare standard vs OPD-rigorous Zernike methods.

    Generates visualizations showing:
    - Lateral displacement magnitude map (where pinching occurs)
    - WFE difference map (where methods disagree)
    - Quantitative comparison tables
    - Recommendation for which method to use
    """

    insight_type = "zernike_opd_comparison"
    name = "Zernike Method Comparison (OPD vs Standard)"
    description = "Compare Z-only vs rigorous OPD Zernike methods - reveals lateral displacement impact"
    category = "optical"
    applicable_to = ["mirror", "optics", "wfe", "lateral"]
    required_files = ["*.op2"]

    def __init__(self, study_path: Path):
        super().__init__(study_path)
        self.op2_path: Optional[Path] = None
        self.geo_path: Optional[Path] = None
        self._node_geo: Optional[Dict] = None
        self._displacements: Optional[Dict] = None

    def can_generate(self) -> bool:
        """Check if OP2 and geometry files exist."""
        search_paths = [
            self.results_path,
            self.study_path / "2_iterations",
            self.setup_path / "model",
        ]

        for search_path in search_paths:
            if not search_path.exists():
                continue
            op2_files = list(search_path.glob("**/*solution*.op2"))
            if not op2_files:
                op2_files = list(search_path.glob("**/*.op2"))
            if op2_files:
                self.op2_path = max(op2_files, key=lambda p: p.stat().st_mtime)
                break

        if self.op2_path is None:
            return False

        try:
            self.geo_path = self._find_geometry_file(self.op2_path)
            return True
        except FileNotFoundError:
            return False

    def _find_geometry_file(self, op2_path: Path) -> Path:
        """Find BDF/DAT geometry file for OP2."""
        folder = op2_path.parent
        base = op2_path.stem

        for ext in ['.dat', '.bdf']:
            cand = folder / (base + ext)
            if cand.exists():
                return cand

        for f in folder.iterdir():
            if f.suffix.lower() in ['.dat', '.bdf']:
                return f

        raise FileNotFoundError(f"No geometry file found for {op2_path}")

    def _load_data(self):
        """Load geometry and displacement data."""
        if self._node_geo is not None:
            return

        _load_dependencies()

        from pyNastran.op2.op2 import OP2
        from pyNastran.bdf.bdf import BDF

        # Read geometry
        bdf = BDF()
        bdf.read_bdf(str(self.geo_path))
        self._node_geo = {int(nid): node.get_position()
                          for nid, node in bdf.nodes.items()}

        # Read displacements
        op2 = OP2()
        op2.read_op2(str(self.op2_path))

        if not op2.displacements:
            raise RuntimeError("No displacement data in OP2")

        self._displacements = {}
        for key, darr in op2.displacements.items():
            data = darr.data
            dmat = data[0] if data.ndim == 3 else (data if data.ndim == 2 else None)
            if dmat is None:
                continue

            ngt = darr.node_gridtype.astype(int)
            node_ids = ngt if ngt.ndim == 1 else ngt[:, 0]
            isubcase = getattr(darr, 'isubcase', None)
            label = str(isubcase) if isubcase else str(key)

            self._displacements[label] = {
                'node_ids': node_ids.astype(int),
                'disp': dmat.copy()
            }

    def _build_full_data(
        self,
        label: str,
        disp_unit: str = 'mm'
    ) -> Dict[str, np.ndarray]:
        """Build complete data arrays including X, Y, Z displacements."""
        nm_per_unit = 1e6 if disp_unit == 'mm' else 1e9
        um_per_unit = nm_per_unit / 1000.0

        data = self._displacements[label]
        node_ids = data['node_ids']
        dmat = data['disp']

        x0, y0, z0 = [], [], []
        dx, dy, dz = [], [], []
        valid_nids = []

        for nid, vec in zip(node_ids, dmat):
            geo = self._node_geo.get(int(nid))
            if geo is None:
                continue

            x0.append(geo[0])
            y0.append(geo[1])
            z0.append(geo[2])
            dx.append(vec[0])
            dy.append(vec[1])
            dz.append(vec[2])
            valid_nids.append(nid)

        return {
            'x0': np.array(x0),
            'y0': np.array(y0),
            'z0': np.array(z0),
            'dx': np.array(dx),
            'dy': np.array(dy),
            'dz': np.array(dz),
            'nids': np.array(valid_nids),
            'nm_scale': nm_per_unit,
            'um_scale': um_per_unit,
        }

    def _estimate_focal_length(self, x: np.ndarray, y: np.ndarray, z: np.ndarray) -> float:
        """Estimate parabola focal length from geometry."""
        r_squared = x**2 + y**2
        mask = r_squared > 0
        if not np.any(mask):
            return 5000.0  # Default fallback

        # Fit z = a * r² (assume centered parabola)
        A = r_squared[mask].reshape(-1, 1)
        b = z[mask]

        try:
            a, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
            a = a[0]
        except np.linalg.LinAlgError:
            a = np.median(z[mask] / r_squared[mask])

        if abs(a) < 1e-12:
            return 5000.0

        return 1.0 / (4.0 * abs(a))

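`_estimate_focal_length` reduces to a one-parameter least-squares fit of sag against r². On ideal parabola data the focal length is recovered essentially exactly (sign convention aside), a useful sanity check of the formula f = 1/(4|a|):

```python
import numpy as np

# Recover f from sag samples of an ideal parabola z = r^2/(4f).
f_true = 5000.0                       # mm, illustrative
r = np.linspace(50.0, 500.0, 20)
z = r**2 / (4.0 * f_true)

# One-parameter fit: z = a * r^2, then f = 1/(4|a|)
a, *_ = np.linalg.lstsq((r**2).reshape(-1, 1), z, rcond=None)
f_est = 1.0 / (4.0 * abs(a[0]))
```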
    def _compute_opd_comparison(
        self,
        data: Dict[str, np.ndarray],
        n_modes: int,
        filter_orders: int
    ) -> Dict[str, Any]:
        """Compute both methods and return comparison data."""
        from ..extractors.extract_zernike import compute_zernike_coefficients

        x0 = data['x0']
        y0 = data['y0']
        z0 = data['z0']
        dx = data['dx']
        dy = data['dy']
        dz = data['dz']
        nm_scale = data['nm_scale']
        um_scale = data['um_scale']

        # Estimate focal length
        focal = self._estimate_focal_length(x0, y0, z0)

        # === STANDARD METHOD (Z-only) ===
        # Uses original (x0, y0) with Z-displacement
        wfe_standard = dz * 2.0 * nm_scale  # WFE = 2 * surface error

        coeffs_std, R_std = compute_zernike_coefficients(x0, y0, wfe_standard, n_modes)

        # Compute filtered RMS
        x_c = x0 - np.mean(x0)
        y_c = y0 - np.mean(y0)
        r = np.hypot(x_c / R_std, y_c / R_std)
        th = np.arctan2(y_c, x_c)

        from ..extractors.extract_zernike import zernike_noll
        Z_low = np.column_stack([zernike_noll(j, r, th) for j in range(1, filter_orders + 1)])
        wfe_std_filtered = wfe_standard - Z_low @ coeffs_std[:filter_orders]
        rms_std_filtered = float(np.sqrt(np.mean(wfe_std_filtered**2)))
        rms_std_global = float(np.sqrt(np.mean(wfe_standard**2)))

        # === RIGOROUS OPD METHOD ===
        # Uses deformed (x0+dx, y0+dy) and computes the true surface error.
        # The CORRECT formulation uses a DIFFERENTIAL approach:
        #   surface_error = dz - delta_z_parabola
        # where delta_z_parabola is the Z change needed to stay on the ideal parabola.

        x_def = x0 + dx
        y_def = y0 + dy

        # Compute the CHANGE in parabola Z due to lateral displacement.
        # For parabola z = -r²/(4f):
        #   Δz_parabola = z(x_def, y_def) - z(x0, y0)
        #               = -[(x0+dx)² + (y0+dy)² - x0² - y0²] / (4f)
        r0_sq = x0**2 + y0**2
        r_def_sq = x_def**2 + y_def**2
        delta_r_sq = r_def_sq - r0_sq

        # For concave mirror, z = -r²/(4f), so Δz = -Δr²/(4f)
        delta_z_parabola = -delta_r_sq / (4.0 * focal)

        # TRUE surface error = actual dz - expected dz (to stay on parabola)
        surface_error = dz - delta_z_parabola
        wfe_opd = surface_error * 2.0 * nm_scale

        coeffs_opd, R_opd = compute_zernike_coefficients(x_def, y_def, wfe_opd, n_modes)

        x_c_def = x_def - np.mean(x_def)
        y_c_def = y_def - np.mean(y_def)
        r_def = np.hypot(x_c_def / R_opd, y_c_def / R_opd)
        th_def = np.arctan2(y_c_def, x_c_def)

        Z_low_opd = np.column_stack([zernike_noll(j, r_def, th_def) for j in range(1, filter_orders + 1)])
        wfe_opd_filtered = wfe_opd - Z_low_opd @ coeffs_opd[:filter_orders]
        rms_opd_filtered = float(np.sqrt(np.mean(wfe_opd_filtered**2)))
        rms_opd_global = float(np.sqrt(np.mean(wfe_opd**2)))

        # === LATERAL DISPLACEMENT ===
        lateral_disp = np.sqrt(dx**2 + dy**2) * um_scale  # in µm

        # === DIFFERENCE ANALYSIS ===
        # Where methods disagree (use original coords for visualization)
        wfe_difference = wfe_opd - wfe_standard  # Could differ due to coordinate system
        # More meaningful: compare the residuals

        return {
            # Coordinates for plotting
            'x_plot': x0,
            'y_plot': y0,

            # Standard method
            'wfe_standard': wfe_standard,
|
||||
'wfe_std_filtered': wfe_std_filtered,
|
||||
'rms_std_global': rms_std_global,
|
||||
'rms_std_filtered': rms_std_filtered,
|
||||
|
||||
# OPD method
|
||||
'wfe_opd': wfe_opd,
|
||||
'wfe_opd_filtered': wfe_opd_filtered,
|
||||
'rms_opd_global': rms_opd_global,
|
||||
'rms_opd_filtered': rms_opd_filtered,
|
||||
'x_deformed': x_def,
|
||||
'y_deformed': y_def,
|
||||
|
||||
# Lateral displacement
|
||||
'lateral_disp_um': lateral_disp,
|
||||
'max_lateral_um': float(np.max(lateral_disp)),
|
||||
'rms_lateral_um': float(np.sqrt(np.mean(lateral_disp**2))),
|
||||
'p99_lateral_um': float(np.percentile(lateral_disp, 99)),
|
||||
|
||||
# Deltas
|
||||
'delta_global': rms_opd_global - rms_std_global,
|
||||
'delta_filtered': rms_opd_filtered - rms_std_filtered,
|
||||
'pct_diff_filtered': 100.0 * (rms_opd_filtered - rms_std_filtered) / rms_std_filtered if rms_std_filtered > 0 else 0.0,
|
||||
|
||||
# Parabola info
|
||||
'focal_length': focal,
|
||||
}
|
||||
|
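The differential OPD formulation above can be checked in isolation: if a node slides along the ideal parabola, its actual dz equals the dz required to stay on the surface, so the surface error must vanish. A hedged sketch (the helper name `opd_surface_error` is illustrative, not the module's API):

```python
import numpy as np

# Sketch of the differential OPD formulation under the parabola assumption
# z = -r^2/(4f): surface error is the actual dz minus the dz needed to
# remain on the ideal parabola after lateral motion.
def opd_surface_error(x0, y0, dx, dy, dz, focal):
    delta_r_sq = (x0 + dx)**2 + (y0 + dy)**2 - (x0**2 + y0**2)
    delta_z_parabola = -delta_r_sq / (4.0 * focal)
    return dz - delta_z_parabola

# A node that slides along the ideal parabola has (numerically) zero error:
f = 1000.0
x0, y0 = np.array([50.0]), np.array([0.0])
dx, dy = np.array([0.1]), np.array([0.0])
# dz chosen so the node lands exactly back on z = -r^2/(4f)
dz = -((x0 + dx)**2 + y0**2) / (4 * f) - (-(x0**2 + y0**2) / (4 * f))
err = opd_surface_error(x0, y0, dx, dy, dz, f)
print(abs(float(err[0])) < 1e-9)  # → True
```

The Standard method would instead report this dz as pure surface error, which is why it diverges from the OPD method when lateral displacement is significant.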
    def _get_recommendation(self, comparison: Dict[str, Any]) -> Tuple[str, str]:
        """Generate recommendation based on comparison results."""
        max_lateral = comparison['max_lateral_um']
        pct_diff = abs(comparison['pct_diff_filtered'])

        if max_lateral > 10.0:
            severity = "CRITICAL"
            msg = (f"Large lateral displacements detected (max {max_lateral:.1f} µm). "
                   f"Methods differ by {pct_diff:.1f}%. "
                   f"You MUST use OPD method for accurate optimization.")
        elif max_lateral > 1.0:
            severity = "RECOMMENDED"
            msg = (f"Significant lateral displacements (max {max_lateral:.2f} µm). "
                   f"Methods differ by {pct_diff:.1f}%. "
                   f"OPD method recommended for better accuracy.")
        elif max_lateral > 0.1:
            severity = "OPTIONAL"
            msg = (f"Small lateral displacements (max {max_lateral:.3f} µm). "
                   f"Methods differ by {pct_diff:.2f}%. "
                   f"OPD method provides minor improvement.")
        else:
            severity = "EQUIVALENT"
            msg = (f"Negligible lateral displacements (max {max_lateral:.4f} µm). "
                   f"Both methods give essentially identical results.")

        return severity, msg

    def _generate_html(
        self,
        title: str,
        comparison: Dict[str, Any],
        config: Dict[str, Any]
    ) -> str:
        """Generate comparison HTML with multiple views."""
        _load_dependencies()

        downsample = config.get('plot_downsample', 15000)
        colorscale_wfe = config.get('colorscale_wfe', 'RdBu')
        colorscale_lateral = config.get('colorscale_lateral', 'Hot')
        amp = config.get('amp', 0.5)

        x = comparison['x_plot']
        y = comparison['y_plot']
        lateral = comparison['lateral_disp_um']
        wfe_std = comparison['wfe_std_filtered']
        wfe_opd = comparison['wfe_opd_filtered']

        # Downsample if needed
        n = len(x)
        if n > downsample:
            rng = np.random.default_rng(42)
            sel = rng.choice(n, size=downsample, replace=False)
            x, y = x[sel], y[sel]
            lateral = lateral[sel]
            wfe_std = wfe_std[sel]
            wfe_opd = wfe_opd[sel]

        # Get recommendation
        severity, recommendation = self._get_recommendation(comparison)
        severity_color = {
            'CRITICAL': '#ef4444',
            'RECOMMENDED': '#f59e0b',
            'OPTIONAL': '#3b82f6',
            'EQUIVALENT': '#10b981'
        }.get(severity, '#6b7280')

        # Create subplots: 2 rows, 2 cols
        # Row 1: Lateral displacement map, WFE difference map
        # Row 2: Comparison table, Recommendation
        fig = _make_subplots(
            rows=2, cols=2,
            specs=[
                [{"type": "scene"}, {"type": "scene"}],
                [{"type": "table", "colspan": 2}, None]
            ],
            row_heights=[0.65, 0.35],
            column_widths=[0.5, 0.5],
            vertical_spacing=0.08,
            horizontal_spacing=0.05,
            subplot_titles=[
                "<b>Lateral Displacement Magnitude (µm)</b>",
                "<b>Filtered WFE Comparison (nm)</b>",
                "<b>Method Comparison</b>"
            ]
        )

        # === Plot 1: Lateral displacement map ===
        try:
            tri = _Triangulation(x, y)
            if tri.triangles is not None and len(tri.triangles) > 0:
                i, j, k = tri.triangles.T
                fig.add_trace(_go.Mesh3d(
                    x=x, y=y, z=lateral * amp * 100,  # Scale for visibility
                    i=i, j=j, k=k,
                    intensity=lateral,
                    colorscale=colorscale_lateral,
                    opacity=1.0,
                    flatshading=False,
                    lighting=dict(ambient=0.4, diffuse=0.8, specular=0.3),
                    showscale=True,
                    colorbar=dict(
                        title=dict(text="Lateral (µm)", side="right"),
                        x=0.42, thickness=15, len=0.5
                    ),
                    hovertemplate="X: %{x:.1f}<br>Y: %{y:.1f}<br>Lateral: %{intensity:.3f} µm<extra></extra>"
                ), row=1, col=1)
        except Exception:
            fig.add_trace(_go.Scatter3d(
                x=x, y=y, z=lateral * amp * 100,
                mode='markers',
                marker=dict(size=2, color=lateral, colorscale=colorscale_lateral, showscale=True)
            ), row=1, col=1)

        # === Plot 2: WFE comparison (show OPD method) ===
        try:
            tri = _Triangulation(x, y)
            if tri.triangles is not None and len(tri.triangles) > 0:
                i, j, k = tri.triangles.T
                fig.add_trace(_go.Mesh3d(
                    x=x, y=y, z=wfe_opd * amp,
                    i=i, j=j, k=k,
                    intensity=wfe_opd,
                    colorscale='Turbo',
                    opacity=1.0,
                    flatshading=False,
                    lighting=dict(ambient=0.4, diffuse=0.8, specular=0.3),
                    showscale=True,
                    colorbar=dict(
                        title=dict(text="WFE (nm)", side="right"),
                        x=1.0, thickness=15, len=0.5
                    ),
                    hovertemplate="X: %{x:.1f}<br>Y: %{y:.1f}<br>WFE: %{intensity:.2f} nm<extra></extra>"
                ), row=1, col=2)
        except Exception:
            fig.add_trace(_go.Scatter3d(
                x=x, y=y, z=wfe_opd * amp,
                mode='markers',
                marker=dict(size=2, color=wfe_opd, colorscale='Turbo', showscale=True)
            ), row=1, col=2)

        # Configure 3D scenes
        for scene_name in ['scene', 'scene2']:
            fig.update_layout(**{
                scene_name: dict(
                    camera=dict(eye=dict(x=1.2, y=1.2, z=0.8)),
                    xaxis=dict(title="X (mm)", showgrid=True, gridcolor='rgba(128,128,128,0.3)'),
                    yaxis=dict(title="Y (mm)", showgrid=True, gridcolor='rgba(128,128,128,0.3)'),
                    zaxis=dict(title="", showgrid=True, gridcolor='rgba(128,128,128,0.3)'),
                    aspectmode='data',
                )
            })

        # === Comparison Table ===
        delta_sign = "+" if comparison['delta_filtered'] > 0 else ""

        fig.add_trace(_go.Table(
            header=dict(
                values=["<b>Metric</b>", "<b>Standard (Z-only)</b>", "<b>OPD (Rigorous)</b>", "<b>Difference</b>"],
                align="left",
                fill_color='#1f2937',
                font=dict(color='white', size=12),
                height=30
            ),
            cells=dict(
                values=[
                    # Metric names
                    ["Global RMS (nm)", "Filtered RMS (nm)", "Max Lateral Disp (µm)",
                     "RMS Lateral Disp (µm)", "P99 Lateral Disp (µm)", "Focal Length (mm)",
                     f"<b>RECOMMENDATION: {severity}</b>"],
                    # Standard method
                    [f"{comparison['rms_std_global']:.2f}", f"{comparison['rms_std_filtered']:.2f}",
                     "N/A", "N/A", "N/A", f"{comparison['focal_length']:.1f}", ""],
                    # OPD method
                    [f"{comparison['rms_opd_global']:.2f}", f"{comparison['rms_opd_filtered']:.2f}",
                     f"{comparison['max_lateral_um']:.3f}", f"{comparison['rms_lateral_um']:.3f}",
                     f"{comparison['p99_lateral_um']:.3f}", f"{comparison['focal_length']:.1f}", ""],
                    # Difference
                    [f"{delta_sign}{comparison['delta_global']:.2f}",
                     f"{delta_sign}{comparison['delta_filtered']:.2f} ({delta_sign}{comparison['pct_diff_filtered']:.1f}%)",
                     "-", "-", "-", "-", recommendation],
                ],
                align="left",
                fill_color=[['#374151']*7, ['#374151']*7, ['#374151']*7,
                            ['#374151']*6 + [severity_color]],
                font=dict(color='white', size=11),
                height=25
            )
        ), row=2, col=1)

        # Layout
        fig.update_layout(
            width=1600,
            height=1000,
            margin=dict(t=80, b=20, l=20, r=20),
            paper_bgcolor='#111827',
            plot_bgcolor='#1f2937',
            font=dict(color='white'),
            title=dict(
                text=f"<b>Atomizer Zernike Method Comparison - {title}</b><br>"
                     f"<span style='font-size:14px;color:{severity_color}'>{severity}: {recommendation[:80]}...</span>",
                x=0.5,
                font=dict(size=18)
            )
        )

        return fig.to_html(include_plotlyjs='cdn', full_html=True)

    def _generate(self, config: InsightConfig) -> InsightResult:
        """Generate method comparison visualization."""
        self._load_data()

        cfg = {**DEFAULT_CONFIG, **config.extra}
        n_modes = cfg['n_modes']
        filter_orders = cfg['filter_orders']
        disp_unit = cfg['disp_unit']

        # Find subcases
        disps = self._displacements
        available = list(disps.keys())

        if not available:
            return InsightResult(success=False, error="No subcases found in OP2")

        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        output_dir = config.output_dir or self.insights_path
        output_dir.mkdir(parents=True, exist_ok=True)

        html_files = []
        all_comparisons = {}

        # Process each subcase
        for label in available:
            data = self._build_full_data(label, disp_unit)
            comparison = self._compute_opd_comparison(data, n_modes, filter_orders)

            title = f"Subcase {label}"
            html = self._generate_html(title, comparison, cfg)

            path = output_dir / f"zernike_opd_comparison_{timestamp}_{label}.html"
            path.write_text(html, encoding='utf-8')
            html_files.append(path)

            severity, _ = self._get_recommendation(comparison)
            all_comparisons[label] = {
                'rms_std_filtered': comparison['rms_std_filtered'],
                'rms_opd_filtered': comparison['rms_opd_filtered'],
                'pct_diff': comparison['pct_diff_filtered'],
                'max_lateral_um': comparison['max_lateral_um'],
                'severity': severity,
            }

        # Determine overall recommendation
        severities = [c['severity'] for c in all_comparisons.values()]
        if 'CRITICAL' in severities:
            overall = "CRITICAL: Use OPD method for all optimization"
        elif 'RECOMMENDED' in severities:
            overall = "RECOMMENDED: Switch to OPD method for better accuracy"
        elif 'OPTIONAL' in severities:
            overall = "OPTIONAL: OPD method provides minor improvement"
        else:
            overall = "EQUIVALENT: Standard method is fine for this study"

        return InsightResult(
            success=True,
            html_path=html_files[0],
            summary={
                'html_files': [str(p) for p in html_files],
                'comparisons': all_comparisons,
                'overall_recommendation': overall,
            }
        )

@@ -7,6 +7,10 @@ Zernike polynomial decomposition. Generates three views:
- 60 deg vs 20 deg (operational tilt comparison)
- 90 deg Manufacturing (absolute with optician workload metrics)

Supports two WFE computation methods:
- Standard: Z-displacement only at original (x, y) coordinates
- OPD: Accounts for lateral (X, Y) displacement via interpolation (RECOMMENDED)

Applicable to: Mirror optimization studies with multi-subcase gravity loads.
"""

@@ -19,6 +23,9 @@ from numpy.linalg import LinAlgError

from .base import StudyInsight, InsightConfig, InsightResult, register_insight

# Lazy import for OPD extractor
_ZernikeOPDExtractor = None

# Lazy imports to avoid startup overhead
_plotly_loaded = False
_go = None
@@ -30,18 +37,20 @@ _BDF = None

def _load_dependencies():
    """Lazy load heavy dependencies."""
    global _plotly_loaded, _go, _make_subplots, _Triangulation, _OP2, _BDF
    global _plotly_loaded, _go, _make_subplots, _Triangulation, _OP2, _BDF, _ZernikeOPDExtractor
    if not _plotly_loaded:
        import plotly.graph_objects as go
        from plotly.subplots import make_subplots
        from matplotlib.tri import Triangulation
        from pyNastran.op2.op2 import OP2
        from pyNastran.bdf.bdf import BDF
        from ..extractors.extract_zernike_figure import ZernikeOPDExtractor
        _go = go
        _make_subplots = make_subplots
        _Triangulation = Triangulation
        _OP2 = OP2
        _BDF = BDF
        _ZernikeOPDExtractor = ZernikeOPDExtractor
        _plotly_loaded = True

@@ -144,10 +153,10 @@ def compute_zernike_coeffs(
# Configuration Defaults
# ============================================================================
DEFAULT_CONFIG = {
    'n_modes': 50,
    'n_modes': 36,  # Reduced from 50 for faster computation (covers through 7th order)
    'amp': 0.5,  # Visual deformation scale
    'pancake': 3.0,  # Z-axis range multiplier
    'plot_downsample': 10000,
    'plot_downsample': 8000,  # Reduced from 10000 for faster rendering
    'filter_low_orders': 4,  # Piston, tip, tilt, defocus
    'colorscale': 'Turbo',
    'disp_unit': 'mm',
@@ -170,6 +179,7 @@ class ZernikeWFEInsight(StudyInsight):
    insight_type = "zernike_wfe"
    name = "Zernike WFE Analysis"
    description = "3D wavefront error surface with Zernike decomposition"
    category = "optical"
    applicable_to = ["mirror", "optics", "wfe"]
    required_files = ["*.op2"]

@@ -179,29 +189,59 @@ class ZernikeWFEInsight(StudyInsight):
        self.geo_path: Optional[Path] = None
        self._node_geo: Optional[Dict] = None
        self._displacements: Optional[Dict] = None
        self._opd_extractor: Optional[Any] = None  # Cached OPD extractor

    def can_generate(self) -> bool:
        """Check if OP2 and geometry files exist."""
        # Look for OP2 in results or iterations
        search_paths = [
            self.results_path,
            self.study_path / "2_iterations",
            self.setup_path / "model",
        """Check if OP2 and geometry files exist.

        Uses fast non-recursive search to avoid slow glob operations.
        """
        # Fast search order: best_design_archive first, then iterations
        search_locations = [
            (self.study_path / "3_results" / "best_design_archive", False),  # (path, is_iter_parent)
            (self.study_path / "2_iterations", True),  # iterations have subdirs
            (self.setup_path / "model", False),
        ]

        for search_path in search_paths:
        op2_candidates = []

        for search_path, is_iter_parent in search_locations:
            if not search_path.exists():
                continue
            op2_files = list(search_path.glob("**/*solution*.op2"))
            if not op2_files:
                op2_files = list(search_path.glob("**/*.op2"))
            if op2_files:
                self.op2_path = max(op2_files, key=lambda p: p.stat().st_mtime)
                break

        if self.op2_path is None:
            if is_iter_parent:
                # For iterations, check each subdir (one level only)
                try:
                    for subdir in search_path.iterdir():
                        if subdir.is_dir():
                            for f in subdir.iterdir():
                                if f.suffix.lower() == '.op2':
                                    op2_candidates.append(f)
                except (PermissionError, OSError):
                    continue
            else:
                # Direct check (non-recursive)
                try:
                    for f in search_path.iterdir():
                        if f.suffix.lower() == '.op2':
                            op2_candidates.append(f)
                        # Also check one level down for best_design_archive
                        elif f.is_dir():
                            for sub in f.iterdir():
                                if sub.suffix.lower() == '.op2':
                                    op2_candidates.append(sub)
                except (PermissionError, OSError):
                    continue

            if op2_candidates:
                break  # Found some, stop searching

        if not op2_candidates:
            return False

        # Pick newest OP2
        self.op2_path = max(op2_candidates, key=lambda p: p.stat().st_mtime)

        # Find geometry
        try:
            self.geo_path = self._find_geometry_file(self.op2_path)
@@ -262,12 +302,16 @@ class ZernikeWFEInsight(StudyInsight):
                'disp': dmat.copy()
            }

    def _build_wfe_arrays(
    def _build_wfe_arrays_standard(
        self,
        label: str,
        disp_unit: str = 'mm'
    ) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray]:
        """Build X, Y, WFE arrays for a subcase."""
    ) -> Dict[str, np.ndarray]:
        """Build X, Y, WFE arrays using standard Z-only method.

        This is the original method that uses only Z-displacement
        at the original (x, y) coordinates.
        """
        nm_per_unit = 1e6 if disp_unit == 'mm' else 1e9

        data = self._displacements[label]
@@ -276,6 +320,8 @@ class ZernikeWFEInsight(StudyInsight):

        X, Y, WFE = [], [], []
        valid_nids = []
        dx_arr, dy_arr, dz_arr = [], [], []

        for nid, vec in zip(node_ids, dmat):
            geo = self._node_geo.get(int(nid))
            if geo is None:
@@ -285,27 +331,148 @@ class ZernikeWFEInsight(StudyInsight):
            wfe = vec[2] * 2.0 * nm_per_unit  # Z-disp to WFE
            WFE.append(wfe)
            valid_nids.append(nid)
            dx_arr.append(vec[0])
            dy_arr.append(vec[1])
            dz_arr.append(vec[2])

        return (np.array(X), np.array(Y), np.array(WFE), np.array(valid_nids))
        return {
            'X': np.array(X),
            'Y': np.array(Y),
            'WFE': np.array(WFE),
            'node_ids': np.array(valid_nids),
            'dx': np.array(dx_arr),
            'dy': np.array(dy_arr),
            'dz': np.array(dz_arr),
            'lateral_disp': np.sqrt(np.array(dx_arr)**2 + np.array(dy_arr)**2),
            'method': 'standard',
        }

    def _build_wfe_arrays_opd(
        self,
        label: str,
        disp_unit: str = 'mm'
    ) -> Dict[str, np.ndarray]:
        """Build X, Y, WFE arrays using OPD method (accounts for lateral displacement).

        This is the RECOMMENDED method that:
        1. Uses deformed (x+dx, y+dy) coordinates for Zernike fitting
        2. Computes true surface error via interpolation of undeformed geometry

        Uses cached OPD extractor to avoid re-reading OP2/BDF files for each subcase.
        """
        _load_dependencies()

        # Reuse cached extractor to avoid re-reading files
        if self._opd_extractor is None:
            self._opd_extractor = _ZernikeOPDExtractor(
                self.op2_path,
                bdf_path=self.geo_path,
                displacement_unit=disp_unit
            )

        opd_data = self._opd_extractor._build_figure_opd_data(label)

        return {
            'X': opd_data['x_deformed'],
            'Y': opd_data['y_deformed'],
            'WFE': opd_data['wfe_nm'],
            'node_ids': opd_data['node_ids'],
            'dx': opd_data['dx'],
            'dy': opd_data['dy'],
            'dz': opd_data['dz'],
            'lateral_disp': opd_data['lateral_disp'],
            'x_original': opd_data['x_original'],
            'y_original': opd_data['y_original'],
            'method': 'opd',
        }

    def _build_wfe_arrays(
        self,
        label: str,
        disp_unit: str = 'mm',
        method: str = 'opd'
    ) -> Dict[str, np.ndarray]:
        """Build X, Y, WFE arrays for a subcase.

        Args:
            label: Subcase label
            disp_unit: Displacement unit ('mm' or 'm')
            method: 'opd' (recommended) or 'standard'

        Returns:
            Dict with X, Y, WFE, node_ids, dx, dy, dz, lateral_disp arrays
        """
        if method == 'opd':
            return self._build_wfe_arrays_opd(label, disp_unit)
        else:
            return self._build_wfe_arrays_standard(label, disp_unit)

    def _compute_relative_wfe(
        self,
        X1, Y1, WFE1, nids1,
        X2, Y2, WFE2, nids2
    ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
        """Compute WFE1 - WFE2 for common nodes."""
        data1: Dict[str, np.ndarray],
        data2: Dict[str, np.ndarray]
    ) -> Dict[str, np.ndarray]:
        """Compute relative displacement and WFE (target - reference) for common nodes.

        Args:
            data1: Target subcase data dict
            data2: Reference subcase data dict

        Returns:
            Dict with relative WFE arrays AND relative displacement components (dx, dy, dz)
        """
        X1, Y1, WFE1, nids1 = data1['X'], data1['Y'], data1['WFE'], data1['node_ids']
        X2, Y2, WFE2, nids2 = data2['X'], data2['Y'], data2['WFE'], data2['node_ids']

        # Build reference maps for all data
        ref_map = {int(nid): (x, y, w) for nid, x, y, w in zip(nids2, X2, Y2, WFE2)}

        X_rel, Y_rel, WFE_rel = [], [], []
        # Maps for displacement components
        dx1_map = {int(nid): d for nid, d in zip(data1['node_ids'], data1['dx'])}
        dy1_map = {int(nid): d for nid, d in zip(data1['node_ids'], data1['dy'])}
        dz1_map = {int(nid): d for nid, d in zip(data1['node_ids'], data1['dz'])}
        dx2_map = {int(nid): d for nid, d in zip(data2['node_ids'], data2['dx'])}
        dy2_map = {int(nid): d for nid, d in zip(data2['node_ids'], data2['dy'])}
        dz2_map = {int(nid): d for nid, d in zip(data2['node_ids'], data2['dz'])}

        X_rel, Y_rel, WFE_rel, nids_rel = [], [], [], []
        dx_rel, dy_rel, dz_rel = [], [], []
        lateral_rel = []

        for nid, x, y, w in zip(nids1, X1, Y1, WFE1):
            nid = int(nid)
            if nid in ref_map:
                _, _, w_ref = ref_map[nid]
                X_rel.append(x)
                Y_rel.append(y)
                WFE_rel.append(w - w_ref)
            if nid not in ref_map:
                continue
            if nid not in dx2_map:
                continue

        return np.array(X_rel), np.array(Y_rel), np.array(WFE_rel)
            _, _, w_ref = ref_map[nid]
            X_rel.append(x)
            Y_rel.append(y)
            WFE_rel.append(w - w_ref)
            nids_rel.append(nid)

            # Compute relative displacements (target - reference)
            dx_rel.append(dx1_map[nid] - dx2_map[nid])
            dy_rel.append(dy1_map[nid] - dy2_map[nid])
            dz_rel.append(dz1_map[nid] - dz2_map[nid])

            # Relative lateral displacement magnitude
            lat1 = np.sqrt(dx1_map[nid]**2 + dy1_map[nid]**2)
            lat2 = np.sqrt(dx2_map[nid]**2 + dy2_map[nid]**2)
            lateral_rel.append(lat1 - lat2)

        return {
            'X': np.array(X_rel),
            'Y': np.array(Y_rel),
            'WFE': np.array(WFE_rel),
            'node_ids': np.array(nids_rel),
            'dx': np.array(dx_rel),  # Relative X displacement (mm)
            'dy': np.array(dy_rel),  # Relative Y displacement (mm)
            'dz': np.array(dz_rel),  # Relative Z displacement (mm)
            'lateral_disp': np.array(lateral_rel) if lateral_rel else np.zeros(len(X_rel)),
            'method': data1.get('method', 'unknown'),
        }
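The common-node matching in `_compute_relative_wfe` reduces to a dictionary lookup keyed by node ID: build a reference map from the second subcase, then subtract for every node present in both. A minimal sketch (the free function and sample data are illustrative):

```python
import numpy as np

# Illustrative version of the common-node matching: build a reference map
# keyed by node ID, then take target - reference only for shared nodes.
def relative_wfe(nids1, wfe1, nids2, wfe2):
    ref = {int(n): w for n, w in zip(nids2, wfe2)}
    nids_rel, wfe_rel = [], []
    for n, w in zip(nids1, wfe1):
        n = int(n)
        if n in ref:
            nids_rel.append(n)
            wfe_rel.append(w - ref[n])
    return np.array(nids_rel), np.array(wfe_rel)

nids, dw = relative_wfe([1, 2, 3], [10.0, 20.0, 30.0],
                        [2, 3, 4], [5.0, 5.0, 5.0])
print(nids.tolist(), dw.tolist())  # → [2, 3] [15.0, 25.0]
```

Nodes missing from either subcase (here 1 and 4) are simply dropped, which is why relative views can contain fewer points than the absolute views.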

    def _compute_metrics(
        self,
@@ -585,8 +752,430 @@ class ZernikeWFEInsight(StudyInsight):
|
||||
|
||||
return fig.to_html(include_plotlyjs='cdn', full_html=True)
|
||||
|
||||
def _generate_lateral_map_html(
|
||||
self,
|
||||
title: str,
|
||||
data: Dict[str, np.ndarray],
|
||||
config: Dict,
|
||||
) -> str:
|
||||
"""Generate HTML for lateral displacement visualization.
|
||||
|
||||
Shows a 3D surface colored by lateral displacement magnitude,
|
||||
with metrics table showing max/RMS/mean lateral displacement.
|
||||
"""
|
||||
_load_dependencies()
|
||||
|
||||
X = data['X']
|
||||
Y = data['Y']
|
||||
lateral_disp = data['lateral_disp'] # in mm
|
||||
downsample = config.get('plot_downsample', 10000)
|
||||
|
||||
# Convert to µm for display
|
||||
lateral_um = lateral_disp * 1000.0 # mm to µm
|
||||
|
||||
# Downsample
|
||||
n = len(X)
|
||||
if n > downsample:
|
||||
rng = np.random.default_rng(42)
|
||||
sel = rng.choice(n, size=downsample, replace=False)
|
||||
Xp, Yp, Lp = X[sel], Y[sel], lateral_um[sel]
|
||||
else:
|
||||
Xp, Yp, Lp = X, Y, lateral_um
|
||||
|
||||
# Build mesh
|
||||
mesh_traces = []
|
||||
try:
|
||||
tri = _Triangulation(Xp, Yp)
|
||||
if tri.triangles is not None and len(tri.triangles) > 0:
|
||||
i, j, k = tri.triangles.T
|
||||
mesh_traces.append(_go.Mesh3d(
|
||||
x=Xp, y=Yp, z=Lp,
|
||||
i=i, j=j, k=k,
|
||||
intensity=Lp,
|
||||
colorscale='Viridis',
|
||||
opacity=1.0,
|
||||
flatshading=False,
|
||||
lighting=dict(ambient=0.4, diffuse=0.8, specular=0.3),
|
||||
lightposition=dict(x=100, y=200, z=300),
|
||||
showscale=True,
|
||||
colorbar=dict(title=dict(text="Lateral (µm)", side="right"),
|
||||
thickness=15, len=0.6, tickformat=".3f"),
|
||||
hovertemplate="X: %{x:.1f}<br>Y: %{y:.1f}<br>Lateral: %{z:.4f} µm<extra></extra>"
|
||||
))
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
if not mesh_traces:
|
||||
mesh_traces.append(_go.Scatter3d(
|
||||
x=Xp, y=Yp, z=Lp,
|
||||
mode='markers',
|
||||
marker=dict(size=2, color=Lp, colorscale='Viridis', showscale=True),
|
||||
showlegend=False
|
||||
))
|
||||
|
||||
# Create figure with subplots
|
||||
fig = _make_subplots(
|
||||
rows=2, cols=1,
|
||||
specs=[[{"type": "scene"}], [{"type": "table"}]],
|
||||
row_heights=[0.75, 0.25],
|
||||
vertical_spacing=0.03,
|
||||
subplot_titles=[
|
||||
f"<b>Lateral Displacement Map - {title}</b>",
|
||||
"<b>Lateral Displacement Statistics</b>"
|
||||
]
|
||||
)
|
||||
|
||||
for tr in mesh_traces:
|
||||
fig.add_trace(tr, row=1, col=1)
|
||||
|
||||
# Stats
|
||||
max_lat = float(np.max(lateral_um))
|
||||
rms_lat = float(np.sqrt(np.mean(lateral_um**2)))
|
||||
mean_lat = float(np.mean(lateral_um))
|
||||
min_lat = float(np.min(lateral_um))
|
||||
|
||||
fig.add_trace(_go.Table(
|
||||
header=dict(values=["<b>Statistic</b>", "<b>Value (µm)</b>"],
|
||||
align="left", fill_color='#1f2937', font=dict(color='white')),
|
||||
cells=dict(values=[
|
||||
["Max Lateral", "RMS Lateral", "Mean Lateral", "Min Lateral"],
|
||||
[f"{max_lat:.4f}", f"{rms_lat:.4f}", f"{mean_lat:.4f}", f"{min_lat:.4f}"]
|
||||
], align="left", fill_color='#374151', font=dict(color='white'))
|
||||
), row=2, col=1)
|
||||
|
||||
# Configure 3D scene
|
||||
max_z = float(np.max(Lp)) if Lp.size else 1.0
|
||||
fig.update_scenes(
|
||||
camera=dict(eye=dict(x=1.2, y=1.2, z=0.8), up=dict(x=0, y=0, z=1)),
|
||||
xaxis=dict(title="X (mm)", showgrid=True, gridcolor='rgba(128,128,128,0.3)'),
|
||||
yaxis=dict(title="Y (mm)", showgrid=True, gridcolor='rgba(128,128,128,0.3)'),
|
||||
zaxis=dict(title="Lateral (µm)",
|
||||
range=[0, max_z * 1.2],
|
||||
showgrid=True, gridcolor='rgba(128,128,128,0.3)'),
|
||||
aspectmode='manual',
|
||||
aspectratio=dict(x=1, y=1, z=0.4)
|
||||
)
|
||||
|
||||
fig.update_layout(
|
||||
width=1200, height=900,
|
||||
margin=dict(t=60, b=20, l=20, r=20),
|
||||
paper_bgcolor='#111827', plot_bgcolor='#1f2937',
|
||||
font=dict(color='white'),
|
||||
title=dict(text=f"<b>Lateral Displacement Analysis - {title}</b>",
|
||||
x=0.5, font=dict(size=18))
|
||||
)
|
||||
|
||||
return fig.to_html(include_plotlyjs='cdn', full_html=True)
|
||||
|
||||
def _generate_dual_method_view_html(
|
||||
self,
|
||||
title: str,
|
||||
data_std: Dict[str, np.ndarray],
|
||||
data_opd: Dict[str, np.ndarray],
|
||||
rms_std: Dict,
|
||||
rms_opd: Dict,
|
||||
config: Dict,
|
||||
is_relative: bool = False,
|
||||
ref_title: str = "20 deg",
|
||||
) -> str:
|
||||
"""Generate HTML with toggle between Standard/OPD methods AND X/Y/Z displacement components.
|
||||
|
||||
For relative views, provides toggles to see:
|
||||
- WFE (Z): The main wavefront error view (default)
|
||||
- ΔX: Relative X displacement between subcases
|
||||
- ΔY: Relative Y displacement between subcases
|
||||
- ΔZ: Relative Z displacement between subcases
|
||||
"""
|
||||
_load_dependencies()
|
||||
|
||||
n_modes = config.get('n_modes', 50)
|
||||
amp = config.get('amp', 0.5)
|
||||
pancake = config.get('pancake', 3.0)
|
||||
downsample = config.get('plot_downsample', 10000)
|
||||
        colorscale = config.get('colorscale', 'Turbo')

        title_suffix = f" (relative to {ref_title})" if is_relative else " (absolute)"

        # Build traces for both methods (WFE view)
        traces_std_wfe = []
        traces_opd_wfe = []

        # Build displacement component traces (OPD method only, for relative views)
        traces_dx = []
        traces_dy = []
        traces_dz = []

        # Helper to build mesh trace (falls back to a point cloud if triangulation fails)
        def build_mesh_trace(Xp, Yp, Zp, colorscale, label, unit, colorbar_title):
            try:
                tri = _Triangulation(Xp, Yp)
                if tri.triangles is not None and len(tri.triangles) > 0:
                    i, j, k = tri.triangles.T
                    return _go.Mesh3d(
                        x=Xp, y=Yp, z=Zp,
                        i=i, j=j, k=k,
                        intensity=Zp,
                        colorscale=colorscale,
                        opacity=1.0,
                        flatshading=False,
                        lighting=dict(ambient=0.4, diffuse=0.8, specular=0.3),
                        lightposition=dict(x=100, y=200, z=300),
                        showscale=True,
                        colorbar=dict(title=dict(text=colorbar_title, side="right"),
                                      thickness=15, len=0.6, tickformat=".2f" if 'µm' in unit else ".1f"),
                        hovertemplate=f"{label}<br>X: %{{x:.1f}}<br>Y: %{{y:.1f}}<br>Value: %{{z:.3f}} {unit}<extra></extra>"
                    )
            except Exception:
                pass
            return _go.Scatter3d(
                x=Xp, y=Yp, z=Zp,
                mode='markers',
                marker=dict(size=2, color=Zp, colorscale=colorscale, showscale=True),
                showlegend=False
            )

        # Build WFE traces for both methods
        for data, rms_data, traces, method_name in [
            (data_std, rms_std, traces_std_wfe, 'Standard'),
            (data_opd, rms_opd, traces_opd_wfe, 'OPD')
        ]:
            X, Y = data['X'], data['Y']
            W_res_filt = rms_data['W_res_filt']

            # Downsample
            n = len(X)
            if n > downsample:
                rng = np.random.default_rng(42)
                sel = rng.choice(n, size=downsample, replace=False)
                Xp, Yp, Wp = X[sel], Y[sel], W_res_filt[sel]
            else:
                Xp, Yp, Wp = X, Y, W_res_filt

            res_amp = amp * Wp
            traces.append(build_mesh_trace(Xp, Yp, res_amp, colorscale, method_name, 'nm', f'{method_name} WFE (nm)'))
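The fixed-seed downsampling above is repeated verbatim for the displacement views below; it can be factored into one helper. A sketch (the name `downsample_points` is ours, not from the codebase):

```python
import numpy as np

def downsample_points(arrays, limit, seed=42):
    """Pick the same reproducible random subset across parallel arrays."""
    n = len(arrays[0])
    if n <= limit:
        return list(arrays)
    # A fixed seed keeps the selection identical between figure rebuilds
    sel = np.random.default_rng(seed).choice(n, size=limit, replace=False)
    return [a[sel] for a in arrays]
```

Indexing every array with the same `sel` keeps node correspondence between X, Y, and the field values intact.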
# Build displacement component traces (for relative views)
|
||||
has_displacement_data = is_relative and 'dx' in data_opd and len(data_opd.get('dx', [])) > 0
|
||||
if has_displacement_data:
|
||||
X, Y = data_opd['X'], data_opd['Y']
|
||||
dx_mm = data_opd['dx'] # mm
|
||||
dy_mm = data_opd['dy'] # mm
|
||||
dz_mm = data_opd['dz'] # mm
|
||||
|
||||
# Convert to µm for display
|
||||
dx_um = dx_mm * 1000.0
|
||||
dy_um = dy_mm * 1000.0
|
||||
dz_um = dz_mm * 1000.0
|
||||
|
||||
n = len(X)
|
||||
if n > downsample:
|
||||
rng = np.random.default_rng(42)
|
||||
sel = rng.choice(n, size=downsample, replace=False)
|
||||
Xp, Yp = X[sel], Y[sel]
|
||||
dxp, dyp, dzp = dx_um[sel], dy_um[sel], dz_um[sel]
|
||||
else:
|
||||
Xp, Yp = X, Y
|
||||
dxp, dyp, dzp = dx_um, dy_um, dz_um
|
||||
|
||||
# Apply visual amplification for displacement views
|
||||
disp_amp = amp * 1000.0 # Scale factor for µm display
|
||||
|
||||
traces_dx.append(build_mesh_trace(Xp, Yp, dxp * amp, 'RdBu_r', 'ΔX Displacement', 'µm', 'ΔX (µm)'))
|
||||
traces_dy.append(build_mesh_trace(Xp, Yp, dyp * amp, 'RdBu_r', 'ΔY Displacement', 'µm', 'ΔY (µm)'))
|
||||
traces_dz.append(build_mesh_trace(Xp, Yp, dzp * amp, 'RdBu_r', 'ΔZ Displacement', 'µm', 'ΔZ (µm)'))
|
||||
|
||||
        # Create figure
        fig = _make_subplots(
            rows=2, cols=1,
            specs=[[{"type": "scene"}], [{"type": "table"}]],
            row_heights=[0.65, 0.35],
            vertical_spacing=0.05,
            subplot_titles=[
                f"<b>Surface Analysis - {title}{title_suffix}</b>",
                "<b>Metrics Comparison</b>"
            ]
        )

        # Add all traces in order: [std_wfe, opd_wfe, dx, dy, dz, table]
        # Start with OPD WFE visible (default view)
        for tr in traces_std_wfe:
            tr.visible = False
            fig.add_trace(tr, row=1, col=1)
        for tr in traces_opd_wfe:
            tr.visible = True  # Default view
            fig.add_trace(tr, row=1, col=1)
        for tr in traces_dx:
            tr.visible = False
            fig.add_trace(tr, row=1, col=1)
        for tr in traces_dy:
            tr.visible = False
            fig.add_trace(tr, row=1, col=1)
        for tr in traces_dz:
            tr.visible = False
            fig.add_trace(tr, row=1, col=1)

        # Compute lateral stats (from OPD data)
        lateral_um = data_opd.get('lateral_disp', np.zeros(1)) * 1000.0
        max_lat = float(np.max(np.abs(lateral_um)))
        rms_lat = float(np.sqrt(np.mean(lateral_um**2)))

        # Compute % difference
        std_filt = rms_std['filtered_rms']
        opd_filt = rms_opd['filtered_rms']
        pct_diff = 100.0 * (opd_filt - std_filt) / std_filt if std_filt > 0 else 0.0
        # Displacement stats (for relative views)
        disp_stats_rows = []
        if has_displacement_data:
            dx_um = data_opd['dx'] * 1000.0
            dy_um = data_opd['dy'] * 1000.0
            dz_um = data_opd['dz'] * 1000.0
            disp_stats_rows = [
                "ΔX RMS (µm)", "ΔY RMS (µm)", "ΔZ RMS (µm)"
            ]
            disp_stats_values_std = ["—", "—", "—"]
            disp_stats_values_opd = [
                f"{float(np.sqrt(np.mean(dx_um**2))):.4f}",
                f"{float(np.sqrt(np.mean(dy_um**2))):.4f}",
                f"{float(np.sqrt(np.mean(dz_um**2))):.4f}"
            ]
            disp_stats_diff = ["—", "—", "—"]

        # Comparison table
        table_headers = ["<b>Metric</b>", "<b>Standard (Z-only)</b>", "<b>OPD (X,Y,Z)</b>", "<b>Difference</b>"]
        table_rows = ["Global RMS (nm)", "Filtered RMS (nm)", "Method", "Max Lateral (µm)", "RMS Lateral (µm)"]
        table_std = [f"{rms_std['global_rms']:.2f}", f"{std_filt:.2f}", "Z-displacement only", "—", "—"]
        table_opd = [f"{rms_opd['global_rms']:.2f}", f"{opd_filt:.2f}", "Deformed coords + OPD", f"{max_lat:.3f}", f"{rms_lat:.3f}"]
        table_diff = ["—", f"{pct_diff:+.2f}%", "← RECOMMENDED", "—", "—"]

        if has_displacement_data:
            table_rows.extend(disp_stats_rows)
            table_std.extend(disp_stats_values_std)
            table_opd.extend(disp_stats_values_opd)
            table_diff.extend(disp_stats_diff)

        fig.add_trace(_go.Table(
            header=dict(values=table_headers, align="left", fill_color='#1f2937', font=dict(color='white')),
            cells=dict(values=[table_rows, table_std, table_opd, table_diff],
                       align="left", fill_color='#374151', font=dict(color='white'))
        ), row=2, col=1)
        # Build visibility arrays for toggles
        n_std_wfe = len(traces_std_wfe)
        n_opd_wfe = len(traces_opd_wfe)
        n_dx = len(traces_dx)
        n_dy = len(traces_dy)
        n_dz = len(traces_dz)
        # Total traces before table: n_std_wfe + n_opd_wfe + n_dx + n_dy + n_dz
        # Then table is last

        def make_visibility(show_std_wfe=False, show_opd_wfe=False, show_dx=False, show_dy=False, show_dz=False):
            vis = []
            vis.extend([show_std_wfe] * n_std_wfe)
            vis.extend([show_opd_wfe] * n_opd_wfe)
            vis.extend([show_dx] * n_dx)
            vis.extend([show_dy] * n_dy)
            vis.extend([show_dz] * n_dz)
            vis.append(True)  # Table always visible
            return vis
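The `make_visibility` helper above implements a common Plotly-toggle pattern: one boolean per trace, in the order traces were added, with the table pinned visible. The same idea generalizes to any number of trace groups; a standalone sketch (the `counts`/`show` parameter names are illustrative, not from the code):

```python
def group_visibility(counts, show):
    """Build a Plotly-style visibility list: one bool per trace, table last.

    counts: list of (group_name, n_traces) in the order traces were added.
    show:   the single group to display.
    """
    vis = []
    for name, n in counts:
        vis.extend([name == show] * n)
    vis.append(True)  # the metrics table is always the last trace
    return vis
```

Each updatemenu button then passes one such list as `args=[{"visible": ...}]`, so clicking a button shows exactly one group plus the table.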
        # Build button definitions
        buttons_method = [
            dict(label="OPD Method (Recommended)",
                 method="update",
                 args=[{"visible": make_visibility(show_opd_wfe=True)}]),
            dict(label="Standard Method (Z-only)",
                 method="update",
                 args=[{"visible": make_visibility(show_std_wfe=True)}]),
        ]

        buttons_component = [
            dict(label="WFE (Z)",
                 method="update",
                 args=[{"visible": make_visibility(show_opd_wfe=True)}]),
        ]

        if has_displacement_data:
            buttons_component.extend([
                dict(label="ΔX Disp",
                     method="update",
                     args=[{"visible": make_visibility(show_dx=True)}]),
                dict(label="ΔY Disp",
                     method="update",
                     args=[{"visible": make_visibility(show_dy=True)}]),
                dict(label="ΔZ Disp",
                     method="update",
                     args=[{"visible": make_visibility(show_dz=True)}]),
            ])

        # Create update menus
        updatemenus = [
            dict(
                type="buttons",
                direction="right",
                x=0.0, y=1.15,
                xanchor="left",
                showactive=True,
                buttons=buttons_method,
                font=dict(size=11),
                pad=dict(r=10, t=10),
            ),
        ]

        if has_displacement_data:
            updatemenus.append(
                dict(
                    type="buttons",
                    direction="right",
                    x=0.55, y=1.15,
                    xanchor="left",
                    showactive=True,
                    buttons=buttons_component,
                    font=dict(size=11),
                    pad=dict(r=10, t=10),
                )
            )

        fig.update_layout(updatemenus=updatemenus)
        # Configure 3D scene
        max_amp_opd = float(np.max(np.abs(amp * rms_opd['W_res_filt']))) if rms_opd['W_res_filt'].size else 1.0
        fig.update_scenes(
            camera=dict(eye=dict(x=1.2, y=1.2, z=0.8), up=dict(x=0, y=0, z=1)),
            xaxis=dict(title="X (mm)", showgrid=True, gridcolor='rgba(128,128,128,0.3)'),
            yaxis=dict(title="Y (mm)", showgrid=True, gridcolor='rgba(128,128,128,0.3)'),
            zaxis=dict(title="Value",
                       range=[-max_amp_opd * pancake, max_amp_opd * pancake],
                       showgrid=True, gridcolor='rgba(128,128,128,0.3)'),
            aspectmode='manual',
            aspectratio=dict(x=1, y=1, z=0.4)
        )

        # Build annotations conditionally instead of inserting an empty placeholder dict
        annotations = [
            dict(text="<b>Method:</b>", x=0.0, y=1.18, xref="paper", yref="paper",
                 showarrow=False, font=dict(size=12, color='white'), xanchor='left'),
        ]
        if has_displacement_data:
            annotations.append(
                dict(text="<b>View:</b>", x=0.55, y=1.18, xref="paper", yref="paper",
                     showarrow=False, font=dict(size=12, color='white'), xanchor='left')
            )

        fig.update_layout(
            width=1400, height=1100,
            margin=dict(t=100, b=20, l=20, r=20),
            paper_bgcolor='#111827', plot_bgcolor='#1f2937',
            font=dict(color='white'),
            title=dict(text=f"<b>Atomizer Zernike Analysis - {title}</b>",
                       x=0.5, font=dict(size=18)),
            annotations=annotations
        )

        return fig.to_html(include_plotlyjs='cdn', full_html=True)
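The filtered-RMS comparison in the table above reduces to a guarded signed percent difference; isolated as a sketch (the function name is ours):

```python
def percent_difference(std_val, opd_val):
    """Signed % change of the OPD metric relative to the Standard metric."""
    if std_val <= 0:
        # avoid division by zero when the reference RMS is degenerate
        return 0.0
    return 100.0 * (opd_val - std_val) / std_val
```

With the commit's reported finding that OPD runs 8-11% above Standard, `percent_difference(10.0, 10.9)` would land in that band.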
    def _generate(self, config: InsightConfig) -> InsightResult:
        """Generate all Zernike WFE views with Standard/OPD toggle and lateral maps.

        Performance optimizations:
        - Uses cached OPD extractor (reads OP2/BDF only once)
        - Loads all subcase data upfront to minimize I/O
        - Standard method reuses geometry from already-loaded data
        """
        self._load_data()

        # Merge config
@@ -626,66 +1215,110 @@ class ZernikeWFEInsight(StudyInsight):
        html_files = []
        summary = {}

        # Load data for BOTH methods - OPD method shares cached extractor
        # Standard method uses already-loaded _node_geo and _displacements
        # Reference: 20 deg
        data_ref_std = self._build_wfe_arrays_standard(sc_map['20'], disp_unit)
        data_ref_opd = self._build_wfe_arrays_opd(sc_map['20'], disp_unit)

        # 40 deg
        data_40_std = self._build_wfe_arrays_standard(sc_map['40'], disp_unit)
        data_40_opd = self._build_wfe_arrays_opd(sc_map['40'], disp_unit)

        # 60 deg
        data_60_std = self._build_wfe_arrays_standard(sc_map['60'], disp_unit)
        data_60_opd = self._build_wfe_arrays_opd(sc_map['60'], disp_unit)

        # 90 deg
        data_90_std = self._build_wfe_arrays_standard(sc_map['90'], disp_unit)
        data_90_opd = self._build_wfe_arrays_opd(sc_map['90'], disp_unit)
        # =========================================
        # 40 deg vs 20 deg (with dual method toggle)
        # =========================================
        rel_40_std = self._compute_relative_wfe(data_40_std, data_ref_std)
        rel_40_opd = self._compute_relative_wfe(data_40_opd, data_ref_opd)

        rms_40_std = self._compute_metrics(rel_40_std['X'], rel_40_std['Y'], rel_40_std['WFE'],
                                           n_modes, filter_orders)
        rms_40_opd = self._compute_metrics(rel_40_opd['X'], rel_40_opd['Y'], rel_40_opd['WFE'],
                                           n_modes, filter_orders)

        html_40 = self._generate_dual_method_view_html(
            "40 deg", rel_40_std, rel_40_opd, rms_40_std, rms_40_opd, cfg,
            is_relative=True, ref_title="20 deg")
        path_40 = output_dir / f"zernike_{timestamp}_40_vs_20.html"
        path_40.write_text(html_40, encoding='utf-8')
        html_files.append(path_40)
        # Lateral map for 40 deg
        html_40_lat = self._generate_lateral_map_html("40 deg", data_40_opd, cfg)
        path_40_lat = output_dir / f"zernike_{timestamp}_40_lateral.html"
        path_40_lat.write_text(html_40_lat, encoding='utf-8')
        html_files.append(path_40_lat)

        summary['40_vs_20_filtered_rms_std'] = rms_40_std['filtered_rms']
        summary['40_vs_20_filtered_rms_opd'] = rms_40_opd['filtered_rms']
        # =========================================
        # 60 deg vs 20 deg (with dual method toggle)
        # =========================================
        rel_60_std = self._compute_relative_wfe(data_60_std, data_ref_std)
        rel_60_opd = self._compute_relative_wfe(data_60_opd, data_ref_opd)

        rms_60_std = self._compute_metrics(rel_60_std['X'], rel_60_std['Y'], rel_60_std['WFE'],
                                           n_modes, filter_orders)
        rms_60_opd = self._compute_metrics(rel_60_opd['X'], rel_60_opd['Y'], rel_60_opd['WFE'],
                                           n_modes, filter_orders)

        html_60 = self._generate_dual_method_view_html(
            "60 deg", rel_60_std, rel_60_opd, rms_60_std, rms_60_opd, cfg,
            is_relative=True, ref_title="20 deg")
        path_60 = output_dir / f"zernike_{timestamp}_60_vs_20.html"
        path_60.write_text(html_60, encoding='utf-8')
        html_files.append(path_60)
        # Lateral map for 60 deg
        html_60_lat = self._generate_lateral_map_html("60 deg", data_60_opd, cfg)
        path_60_lat = output_dir / f"zernike_{timestamp}_60_lateral.html"
        path_60_lat.write_text(html_60_lat, encoding='utf-8')
        html_files.append(path_60_lat)

        summary['60_vs_20_filtered_rms_std'] = rms_60_std['filtered_rms']
        summary['60_vs_20_filtered_rms_opd'] = rms_60_opd['filtered_rms']
        # =========================================
        # 90 deg Manufacturing (absolute, with dual method toggle)
        # =========================================
        rms_90_std = self._compute_metrics(data_90_std['X'], data_90_std['Y'], data_90_std['WFE'],
                                           n_modes, filter_orders)
        rms_90_opd = self._compute_metrics(data_90_opd['X'], data_90_opd['Y'], data_90_opd['WFE'],
                                           n_modes, filter_orders)

        html_90 = self._generate_dual_method_view_html(
            "90 deg (Manufacturing)", data_90_std, data_90_opd, rms_90_std, rms_90_opd, cfg,
            is_relative=False)
        path_90 = output_dir / f"zernike_{timestamp}_90_mfg.html"
        path_90.write_text(html_90, encoding='utf-8')
        html_files.append(path_90)
        # Lateral map for 90 deg
        html_90_lat = self._generate_lateral_map_html("90 deg (Manufacturing)", data_90_opd, cfg)
        path_90_lat = output_dir / f"zernike_{timestamp}_90_mfg_lateral.html"
        path_90_lat.write_text(html_90_lat, encoding='utf-8')
        html_files.append(path_90_lat)

        summary['90_mfg_filtered_rms_std'] = rms_90_std['filtered_rms']
        summary['90_mfg_filtered_rms_opd'] = rms_90_opd['filtered_rms']
        summary['90_optician_workload'] = rms_90_opd['rms_filter_j1to3']

        # Lateral displacement summary
        lateral_40 = data_40_opd.get('lateral_disp', np.zeros(1)) * 1000.0  # mm to µm
        lateral_60 = data_60_opd.get('lateral_disp', np.zeros(1)) * 1000.0
        lateral_90 = data_90_opd.get('lateral_disp', np.zeros(1)) * 1000.0
        summary['lateral_40_max_um'] = float(np.max(lateral_40))
        summary['lateral_60_max_um'] = float(np.max(lateral_60))
        summary['lateral_90_max_um'] = float(np.max(lateral_90))

        return InsightResult(
            success=True,
150
tests/analyze_v11.py
Normal file
@@ -0,0 +1,150 @@
"""Analyze V11 optimization results."""
import sqlite3
from pathlib import Path
import json
import sys
sys.path.insert(0, '.')

db_path = Path('studies/M1_Mirror/m1_mirror_cost_reduction_V11/3_results/study.db')

conn = sqlite3.connect(db_path)
c = conn.cursor()

# Get all completed trials with their objectives
c.execute('''
    SELECT t.trial_id
    FROM trials t
    WHERE t.state = 'COMPLETE'
    ORDER BY t.trial_id
''')
completed_ids = [row[0] for row in c.fetchall()]
print(f'Completed trials: {len(completed_ids)}')

# Build trial data
trials = []
for tid in completed_ids:
    c.execute('''
        SELECT key, value_json
        FROM trial_user_attributes
        WHERE trial_id = ?
    ''', (tid,))
    attrs = {row[0]: json.loads(row[1]) for row in c.fetchall()}

    if 'rel_filtered_rms_40_vs_20' in attrs:
        trials.append({
            'id': tid,
            'rms_40': attrs.get('rel_filtered_rms_40_vs_20', 999),
            'rms_60': attrs.get('rel_filtered_rms_60_vs_20', 999),
            'mfg_90': attrs.get('mfg_90_optician_workload', 999),
            'ws': attrs.get('weighted_sum', 999),
            'lateral': attrs.get('lateral_rms_um', 0),
        })
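The loop above deserializes per-trial user attributes from `key`/`value_json` rows. The same round-trip can be exercised against an in-memory database; a self-contained sketch (the table layout mirrors the `trial_user_attributes` schema used above, simplified):

```python
import sqlite3
import json

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('CREATE TABLE trial_user_attributes (trial_id INTEGER, key TEXT, value_json TEXT)')
# values are stored JSON-encoded, so json.loads restores the original types
c.execute('INSERT INTO trial_user_attributes VALUES (?, ?, ?)',
          (1, 'rel_filtered_rms_40_vs_20', json.dumps(6.05)))
c.execute('SELECT key, value_json FROM trial_user_attributes WHERE trial_id = ?', (1,))
attrs = {k: json.loads(v) for k, v in c.fetchall()}
conn.close()
```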
# Sort by weighted sum or calculate it
for t in trials:
    if t['ws'] == 999:
        # Calculate weighted sum
        w40 = 100 / 4.0  # Target 4 nm
        w60 = 50 / 10.0  # Target 10 nm
        w90 = 20 / 20.0  # Target 20 nm
        t['ws'] = w40 * t['rms_40'] + w60 * t['rms_60'] + w90 * t['mfg_90']
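The fallback above normalizes each metric by its target before weighting, so a design exactly on all three targets scores the sum of the raw weights. As a standalone sketch (targets 4/10/20 nm and weights 100/50/20 are taken directly from the code above; the function name is ours):

```python
def weighted_sum(rms_40, rms_60, mfg_90):
    # each weight is divided by its target, so a metric exactly on target
    # contributes its raw weight; total on-target score is 100 + 50 + 20 = 170
    return (100 / 4.0) * rms_40 + (50 / 10.0) * rms_60 + (20 / 20.0) * mfg_90
```

This makes the first objective dominate: a 1 nm miss on the 40-20 metric costs 25 points, versus 5 points on 60-20 and 1 point on MFG 90.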
# Find best
trials.sort(key=lambda x: x['ws'])
best = trials[0] if trials else None

print('\nV11 Optimization Summary')
print('=' * 70)
print(f'Completed trials: {len(trials)}')

if trials:
    rms40 = [t['rms_40'] for t in trials]
    rms60 = [t['rms_60'] for t in trials]
    mfg90 = [t['mfg_90'] for t in trials]

    print(f'\n40-20 RMS range: {min(rms40):.2f} - {max(rms40):.2f} nm (target: 4.0 nm)')
    print(f'60-20 RMS range: {min(rms60):.2f} - {max(rms60):.2f} nm (target: 10.0 nm)')
    print(f'90 MFG range: {min(mfg90):.2f} - {max(mfg90):.2f} nm (target: 20.0 nm)')

    print(f'\nBest Trial (#{best["id"]}):')
    print(f'  40-20 RMS: {best["rms_40"]:.2f} nm')
    print(f'  60-20 RMS: {best["rms_60"]:.2f} nm')
    print(f'  MFG 90: {best["mfg_90"]:.2f} nm')
    print(f'  Lateral: {best["lateral"]:.3f} um')
    print(f'  WS: {best["ws"]:.1f}')

    # Check if values make sense (not too good to be true)
    print('\nSanity Check:')
    if best['rms_40'] < 3.0:
        print('  WARNING: 40-20 RMS suspiciously low!')
    else:
        print(f'  40-20 RMS {best["rms_40"]:.2f} nm - looks reasonable (expected ~6-7 nm)')

    if best['rms_60'] < 8.0:
        print('  WARNING: 60-20 RMS suspiciously low!')
    else:
        print(f'  60-20 RMS {best["rms_60"]:.2f} nm - looks reasonable (expected ~13-15 nm)')

    # Compare to V7 baseline and V10 buggy values
    print('\n' + '=' * 70)
    print('Comparison to Known Values:')
    print('=' * 70)
    print('\nV7 Baseline (original Zernike, correct):')
    print('  40-20 RMS: 6.05 nm')
    print('  60-20 RMS: 13.03 nm')
    print('  MFG 90: 26.34 nm')

    print('\nV10 BUGGY (abs(RMS_target - RMS_ref) - WRONG):')
    print('  40-20 RMS: ~1.99 nm <- WAY TOO LOW')
    print('  60-20 RMS: ~6.82 nm <- WAY TOO LOW')

    print('\nV11 Best (extract_relative() - CORRECT):')
    print(f'  40-20 RMS: {best["rms_40"]:.2f} nm')
    print(f'  60-20 RMS: {best["rms_60"]:.2f} nm')
    print(f'  MFG 90: {best["mfg_90"]:.2f} nm')

    # Verdict
    print('\n' + '=' * 70)
    print('VERDICT:')
    print('=' * 70)
    if best['rms_40'] > 5.0 and best['rms_40'] < 10.0:
        print('  40-20: CORRECT - matches expected V7 range')
    else:
        print(f'  40-20: INVESTIGATE - {best["rms_40"]:.2f} nm is outside expected range')

    if best['rms_60'] > 10.0 and best['rms_60'] < 20.0:
        print('  60-20: CORRECT - matches expected V7 range')
    else:
        print(f'  60-20: INVESTIGATE - {best["rms_60"]:.2f} nm is outside expected range')

conn.close()
# Now generate Zernike dashboard for visual verification
print('\n' + '=' * 70)
print('Generating Zernike Dashboard for Best Iteration...')
print('=' * 70)

from optimization_engine.insights import ZernikeDashboardInsight, InsightConfig

study_path = Path('studies/M1_Mirror/m1_mirror_cost_reduction_V11')

# Find the iteration with the best design
best_archive = study_path / '3_results' / 'best_design_archive'
if best_archive.exists():
    op2_files = list(best_archive.glob('*.op2'))
    if op2_files:
        print(f'Found best design archive: {op2_files[0].name}')

        insight = ZernikeDashboardInsight(study_path)
        if insight.can_generate():
            print('Generating dashboard...')
            result = insight.generate(InsightConfig())
            if result.success:
                print(f'\nDashboard generated: {result.html_path}')
                print('\nDashboard Summary:')
                for key, value in result.summary.items():
                    print(f'  {key}: {value}')
            else:
                print(f'Error: {result.error}')
    else:
        print('Cannot generate insight - no OP2 files found')
252
tests/compare_v8_zernike_methods.py
Normal file
@@ -0,0 +1,252 @@
"""
Compare V8 Best Candidate: OPD Method vs Standard Z-Only Method

This script extracts WFE using both methods and compares the results
to quantify the difference between using full X,Y,Z displacement (OPD)
vs just Z displacement (Standard).
"""
import sys
sys.path.insert(0, '.')

import sqlite3
from pathlib import Path
import json
import numpy as np

# Find V8 best trial
db_path = Path('studies/M1_Mirror/m1_mirror_cost_reduction_V8/3_results/study.db')
print(f"V8 Database: {db_path}")
print(f"Exists: {db_path.exists()}")

if not db_path.exists():
    print("ERROR: V8 database not found!")
    sys.exit(1)

conn = sqlite3.connect(db_path)
c = conn.cursor()

# Get all completed trials with their objectives
c.execute('''
    SELECT t.trial_id
    FROM trials t
    WHERE t.state = 'COMPLETE'
    ORDER BY t.trial_id
''')
completed_ids = [row[0] for row in c.fetchall()]
print(f"Completed trials: {len(completed_ids)}")

# Build trial data and find best
trials = []
for tid in completed_ids:
    c.execute('''
        SELECT key, value_json
        FROM trial_user_attributes
        WHERE trial_id = ?
    ''', (tid,))
    attrs = {row[0]: json.loads(row[1]) for row in c.fetchall()}

    # V8 used different objective names - check what's available
    rms_40 = attrs.get('rel_filtered_rms_40_vs_20', attrs.get('filtered_rms_40_20', None))
    rms_60 = attrs.get('rel_filtered_rms_60_vs_20', attrs.get('filtered_rms_60_20', None))
    mfg_90 = attrs.get('mfg_90_optician_workload', attrs.get('optician_workload_90', None))
    ws = attrs.get('weighted_sum', None)

    if rms_40 is not None:
        trials.append({
            'id': tid,
            'rms_40': rms_40,
            'rms_60': rms_60 if rms_60 else 999,
            'mfg_90': mfg_90 if mfg_90 else 999,
            'ws': ws if ws else 999,
        })

conn.close()
if not trials:
    # Check what keys are available
    conn = sqlite3.connect(db_path)
    c = conn.cursor()
    c.execute('SELECT DISTINCT key FROM trial_user_attributes LIMIT 20')
    keys = [row[0] for row in c.fetchall()]
    print(f"\nAvailable attribute keys: {keys}")
    conn.close()
    print("ERROR: No trials found with expected objective names!")
    sys.exit(1)

# Calculate weighted sum if not present
for t in trials:
    if t['ws'] == 999:
        w40 = 100 / 4.0
        w60 = 50 / 10.0
        w90 = 20 / 20.0
        t['ws'] = w40 * t['rms_40'] + w60 * t['rms_60'] + w90 * t['mfg_90']

# Find best
trials.sort(key=lambda x: x['ws'])
best = trials[0]

print(f"\n{'='*70}")
print("V8 Best Trial Summary")
print('='*70)
print(f"Best Trial: #{best['id']}")
print(f"  40-20 RMS: {best['rms_40']:.2f} nm")
print(f"  60-20 RMS: {best['rms_60']:.2f} nm")
print(f"  MFG 90: {best['mfg_90']:.2f} nm")
print(f"  WS: {best['ws']:.1f}")
# Now compare both extraction methods on this trial
iter_path = Path(f'studies/M1_Mirror/m1_mirror_cost_reduction_V8/2_iterations/iter{best["id"]}')
op2_path = iter_path / 'assy_m1_assyfem1_sim1-solution_1.op2'
geo_path = iter_path / 'assy_m1_assyfem1_sim1-solution_1.dat'

print(f"\nIteration path: {iter_path}")
print(f"OP2 exists: {op2_path.exists()}")
print(f"Geometry exists: {geo_path.exists()}")

if not op2_path.exists():
    print("ERROR: OP2 file not found for best trial!")
    sys.exit(1)

print(f"\n{'='*70}")
print("Comparing Zernike Methods: OPD vs Standard")
print('='*70)

# Import extractors
from optimization_engine.extractors.extract_zernike_figure import ZernikeOPDExtractor
from optimization_engine.extractors.extract_zernike import ZernikeExtractor
# Standard method (Z-only)
print("\n1. STANDARD METHOD (Z-only displacement)")
print("-" * 50)
try:
    std_extractor = ZernikeExtractor(
        op2_path,
        bdf_path=geo_path,
        n_modes=50,
        filter_orders=4
    )

    # Extract relative WFE
    std_40_20 = std_extractor.extract_relative(target_subcase='3', reference_subcase='2')
    std_60_20 = std_extractor.extract_relative(target_subcase='4', reference_subcase='2')

    # MFG uses J1-J3 filter - need new extractor instance
    std_extractor_mfg = ZernikeExtractor(
        op2_path,
        bdf_path=geo_path,
        n_modes=50,
        filter_orders=3  # J1-J3 for manufacturing
    )
    std_90 = std_extractor_mfg.extract_subcase(subcase_label='1')

    print(f"  40-20 Relative RMS: {std_40_20['relative_filtered_rms_nm']:.2f} nm")
    print(f"  60-20 Relative RMS: {std_60_20['relative_filtered_rms_nm']:.2f} nm")
    print(f"  90 MFG (J1-J3): {std_90['filtered_rms_nm']:.2f} nm")

    std_results = {
        '40_20': std_40_20['relative_filtered_rms_nm'],
        '60_20': std_60_20['relative_filtered_rms_nm'],
        '90_mfg': std_90['filtered_rms_nm'],
    }
except Exception as e:
    print(f"  ERROR: {e}")
    import traceback
    traceback.print_exc()
    std_results = None
# OPD method (X,Y,Z displacement with mesh interpolation)
print("\n2. OPD METHOD (X,Y,Z displacement with mesh interpolation)")
print("-" * 50)
try:
    opd_extractor = ZernikeOPDExtractor(
        op2_path,
        bdf_path=geo_path,
        n_modes=50,
        filter_orders=4
    )

    # Extract relative WFE using OPD method
    opd_40_20 = opd_extractor.extract_relative(target_subcase='3', reference_subcase='2')
    opd_60_20 = opd_extractor.extract_relative(target_subcase='4', reference_subcase='2')

    # MFG uses J1-J3 filter
    opd_extractor_mfg = ZernikeOPDExtractor(
        op2_path,
        bdf_path=geo_path,
        n_modes=50,
        filter_orders=3  # J1-J3 for manufacturing
    )
    opd_90 = opd_extractor_mfg.extract_subcase(subcase_label='1')

    print(f"  40-20 Relative RMS: {opd_40_20['relative_filtered_rms_nm']:.2f} nm")
    print(f"  60-20 Relative RMS: {opd_60_20['relative_filtered_rms_nm']:.2f} nm")
    print(f"  90 MFG (J1-J3): {opd_90['filtered_rms_nm']:.2f} nm")

    # Also get lateral displacement info
    print(f"\n  Lateral Displacement (40° vs 20°):")
    print(f"    Max: {opd_40_20.get('max_lateral_displacement_um', 'N/A')} µm")
    print(f"    RMS: {opd_40_20.get('rms_lateral_displacement_um', 'N/A')} µm")

    opd_results = {
        '40_20': opd_40_20['relative_filtered_rms_nm'],
        '60_20': opd_60_20['relative_filtered_rms_nm'],
        '90_mfg': opd_90['filtered_rms_nm'],
        'lateral_max': opd_40_20.get('max_lateral_displacement_um', 0),
        'lateral_rms': opd_40_20.get('rms_lateral_displacement_um', 0),
    }
except Exception as e:
    print(f"  ERROR: {e}")
    import traceback
    traceback.print_exc()
    opd_results = None
# Comparison
|
if std_results and opd_results:
    print(f"\n{'='*70}")
    print("COMPARISON: OPD vs Standard Method")
    print('='*70)

    print(f"\n{'Metric':<25} {'Standard':<12} {'OPD':<12} {'Delta':<12} {'Delta %':<10}")
    print("-" * 70)

    for key, label in [('40_20', '40-20 RMS (nm)'), ('60_20', '60-20 RMS (nm)'), ('90_mfg', '90 MFG (nm)')]:
        std_val = std_results[key]
        opd_val = opd_results[key]
        delta = opd_val - std_val
        delta_pct = 100 * delta / std_val if std_val > 0 else 0

        # Note: the sign flag must follow the align spec ('<+'), not precede it
        print(f"{label:<25} {std_val:<12.2f} {opd_val:<12.2f} {delta:<+12.2f} {delta_pct:+.1f}%")

    print(f"\n{'='*70}")
    print("INTERPRETATION")
    print('='*70)

    delta_40 = opd_results['40_20'] - std_results['40_20']
    delta_60 = opd_results['60_20'] - std_results['60_20']

    print(f"""
The OPD method accounts for lateral (X,Y) displacement when computing WFE.

For telescope mirrors with lateral supports:
- Gravity causes the mirror to shift laterally (X,Y) as well as sag (Z)
- The Standard method ignores this lateral shift
- The OPD method interpolates the ideal surface at deformed (x+dx, y+dy) positions

Key observations:
- 40-20 difference: {delta_40:+.2f} nm ({100*delta_40/std_results['40_20']:+.1f}%)
- 60-20 difference: {delta_60:+.2f} nm ({100*delta_60/std_results['60_20']:+.1f}%)
- Lateral displacement: Max {opd_results['lateral_max']:.3f} µm, RMS {opd_results['lateral_rms']:.3f} µm

Significance:
""")

    if abs(delta_40) < 0.5 and abs(delta_60) < 0.5:
        print("  -> SMALL DIFFERENCE: For this design, lateral displacement is minimal.")
        print("     Both methods give similar results.")
    else:
        print("  -> SIGNIFICANT DIFFERENCE: Lateral displacement affects WFE computation.")
        print("     OPD method is more physically accurate for this geometry.")

    if opd_results['lateral_rms'] > 0.1:
        print(f"\n  WARNING: Lateral RMS {opd_results['lateral_rms']:.3f} µm is notable.")
        print("     OPD method recommended for accurate optimization.")
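The interpretation text above hinges on one distinction: the Standard method compares the deformed surface against the ideal surface at the original (x, y), while the OPD method evaluates the ideal surface at the laterally displaced (x+dx, y+dy). A minimal standalone sketch of that distinction on an ideal parabola (the focal length matches the script's f=1445 mm; the node position and displacements are purely illustrative, not M1 results):

```python
# Ideal parabolic sag z = r^2 / (4 f), focal length f in mm
f = 1445.0

def sag(x, y):
    return (x**2 + y**2) / (4.0 * f)

# One surface node and its FE displacements (mm) - illustrative values only
x, y = 300.0, 0.0
dx, dy, dz = 1e-4, 0.0, -2e-4   # lateral shift plus sag under gravity

# Standard method: the surface error is just the Z displacement
err_standard = dz

# OPD-style method: compare the deformed point against the ideal surface
# evaluated at the *displaced* in-plane position (x+dx, y+dy)
err_opd = (sag(x, y) + dz) - sag(x + dx, y + dy)

print(f"Standard: {err_standard * 1e6:.2f} nm")  # Standard: -200.00 nm
print(f"OPD:      {err_opd * 1e6:.2f} nm")       # OPD:      -210.38 nm
```

On the sloped part of the surface the lateral shift adds to the apparent error, which is consistent with the commit's key finding that the OPD method reports somewhat higher WFE than the Standard method.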
tests/test_zernike_methods_comparison.py (new file, 208 lines)
@@ -0,0 +1,208 @@
#!/usr/bin/env python
"""
Compare all Zernike extraction methods on M1 Mirror data.

Methods compared:
1. Standard     - Z-displacement only at original (x,y)
2. Parabola OPD - Uses parabola approximation with prescription focal length
3. Figure OPD   - Uses actual figure.dat geometry (most rigorous)

Run:
    python tests/test_zernike_methods_comparison.py studies/M1_Mirror/m1_mirror_cost_reduction_V9
"""

from pathlib import Path
import sys

sys.path.insert(0, str(Path(__file__).parent.parent))

import numpy as np

def run_full_comparison(study_dir: Path):
    """Run comparison of all three Zernike methods."""
    from optimization_engine.extractors.extract_zernike import ZernikeExtractor
    from optimization_engine.extractors.extract_zernike_opd import ZernikeAnalyticExtractor
    from optimization_engine.extractors.extract_zernike_figure import ZernikeOPDExtractor

    # Find the OP2 result file: prefer the archived best design, fall back to iteration 1
    op2_files = list(study_dir.glob('3_results/best_design_archive/**/*.op2'))
    if not op2_files:
        op2_files = list(study_dir.glob('2_iterations/iter1/*.op2'))
    if not op2_files:
        raise FileNotFoundError(f"No OP2 file found in {study_dir}")

    op2_file = op2_files[0]

    # Figure file is optional - the extractor falls back to BDF geometry filtered to OP2 nodes
    figure_file = study_dir / '1_setup' / 'model' / 'figure.dat'
    use_figure_file = figure_file.exists()

    print("=" * 80)
    print("ZERNIKE METHODS COMPARISON - M1 MIRROR")
    print("=" * 80)
    print(f"\nOP2 file: {op2_file.name}")
    if use_figure_file:
        print(f"Figure file: {figure_file.name}")
    else:
        print("Figure file: Not found (using BDF geometry filtered to OP2 nodes)")

    # Initialize extractors
    print("\nInitializing extractors...")

    std_extractor = ZernikeExtractor(op2_file)
    print(f"  Standard: {len(std_extractor.node_geometry)} nodes in BDF")

    analytic_extractor = ZernikeAnalyticExtractor(op2_file, focal_length=1445.0, concave=True)
    print(f"  Analytic (Parabola f=1445mm): {len(analytic_extractor.node_geometry)} nodes")

    # OPD extractor - ALWAYS use BDF geometry filtered to OP2 nodes (RECOMMENDED:
    # figure.dat may carry mismatched coordinates from a different model state)
    opd_extractor = ZernikeOPDExtractor(
        op2_file,
        figure_path=None  # Force use of BDF geometry
    )
    print(f"  OPD (BDF geometry): {len(opd_extractor.figure_geometry)} figure nodes")

    print()
    # Get available subcases
    subcases = list(std_extractor.displacements.keys())
    print(f"Available subcases: {subcases}")

    # Results table header
    print("\n" + "=" * 80)
    print(f"{'Subcase':<10} {'Standard':<15} {'Analytic':<15} {'OPD':<15} {'Max Lat (um)':<12}")
    print("-" * 80)

    all_results = {}

    for sc in subcases:
        try:
            # Extract with each method
            std_res = std_extractor.extract_subcase(sc)
            analytic_res = analytic_extractor.extract_subcase(sc)
            opd_res = opd_extractor.extract_subcase(sc)

            std_rms = std_res['filtered_rms_nm']
            analytic_rms = analytic_res['filtered_rms_nm']
            opd_rms = opd_res['filtered_rms_nm']
            max_lat = opd_res.get('max_lateral_displacement_um', 0)

            print(f"{sc:<10} {std_rms:<15.2f} {analytic_rms:<15.2f} {opd_rms:<15.2f} {max_lat:<12.3f}")

            all_results[sc] = {
                'standard': std_res,
                'analytic': analytic_res,
                'opd': opd_res
            }

        except Exception as e:
            print(f"{sc:<10} ERROR: {e}")

    print("-" * 80)
    # Detailed comparison for each subcase
    print("\n" + "=" * 80)
    print("DETAILED COMPARISON")
    print("=" * 80)

    for sc, results in all_results.items():
        print(f"\n{'-' * 40}")
        print(f"SUBCASE {sc}")
        print(f"{'-' * 40}")

        std = results['standard']
        analytic = results['analytic']
        opd = results['opd']

        print(f"\n{'Metric':<25} {'Standard':<15} {'Analytic':<15} {'OPD':<15}")
        print("-" * 70)

        # RMS metrics
        print(f"{'Filtered RMS (nm)':<25} {std['filtered_rms_nm']:<15.2f} {analytic['filtered_rms_nm']:<15.2f} {opd['filtered_rms_nm']:<15.2f}")
        print(f"{'Global RMS (nm)':<25} {std['global_rms_nm']:<15.2f} {analytic['global_rms_nm']:<15.2f} {opd['global_rms_nm']:<15.2f}")

        # Aberrations
        for aberr in ['defocus', 'astigmatism', 'coma', 'trefoil', 'spherical']:
            key = f'{aberr}_nm'
            if key in std and key in analytic and key in opd:
                print(f"{aberr.capitalize():<25} {std[key]:<15.2f} {analytic[key]:<15.2f} {opd[key]:<15.2f}")

        # Node count
        print(f"{'Nodes':<25} {std['n_nodes']:<15} {analytic['n_nodes']:<15} {opd['n_nodes']:<15}")

        # Lateral displacement (only reported by the OPD method)
        print()
        print("Lateral displacement (OPD method):")
        print(f"  Max:  {opd.get('max_lateral_displacement_um', 0):.3f} um")
        print(f"  RMS:  {opd.get('rms_lateral_displacement_um', 0):.3f} um")
        print(f"  Mean: {opd.get('mean_lateral_displacement_um', 0):.3f} um")

        # Differences
        print()
        diff_std_opd = opd['filtered_rms_nm'] - std['filtered_rms_nm']
        diff_analytic_opd = opd['filtered_rms_nm'] - analytic['filtered_rms_nm']

        print("Difference from Standard:")
        print(f"  OPD:      {diff_std_opd:+.2f} nm ({100*diff_std_opd/std['filtered_rms_nm']:+.1f}%)")
        print(f"  Analytic: {analytic['filtered_rms_nm'] - std['filtered_rms_nm']:+.2f} nm")
        print()
        print(f"Difference OPD vs Analytic: {diff_analytic_opd:+.2f} nm")
    # Tracking WFE analysis
    if '2' in all_results and '3' in all_results and '4' in all_results:
        print("\n" + "=" * 80)
        print("TRACKING WFE ANALYSIS (elevation changes)")
        print("=" * 80)

        for method_name, method_key in [('Standard', 'standard'), ('Analytic', 'analytic'), ('OPD', 'opd')]:
            print(f"\n{method_name}:")

            z20 = np.array(all_results['2'][method_key].get('coefficients', [0]*36))
            z40 = np.array(all_results['3'][method_key].get('coefficients', [0]*36))
            z60 = np.array(all_results['4'][method_key].get('coefficients', [0]*36))

            if len(z20) > 4:
                # Differential coefficients, filtered to J5+ (piston/tip/tilt/defocus removed)
                diff_40_20 = z40 - z20
                diff_60_20 = z60 - z20

                rms_40_20 = np.sqrt(np.sum(diff_40_20[4:]**2))
                rms_60_20 = np.sqrt(np.sum(diff_60_20[4:]**2))

                print(f"  40-20 deg tracking: {rms_40_20:.2f} nm RMS (J5+ filtered)")
                print(f"  60-20 deg tracking: {rms_60_20:.2f} nm RMS (J5+ filtered)")
            else:
                print("  (coefficients not available)")
    print("\n" + "=" * 80)
    print("SUMMARY")
    print("=" * 80)
    print("""
Key findings:
- Standard method: Uses Z-displacement only at original (x,y) - fast, but ignores lateral shift
- Analytic method: Accounts for lateral shift via the parabola formula (requires focal length)
- OPD method:      Uses the actual mesh geometry - MOST RIGOROUS, no shape assumption

Recommendation: Use the OPD method (ZernikeOPDExtractor) for all mirror optimization.
The Analytic method remains useful as a check against the theoretical parabola.
""")


def main():
    import argparse
    parser = argparse.ArgumentParser(description='Compare Zernike extraction methods')
    parser.add_argument('study_dir', type=str, help='Path to study directory')
    args = parser.parse_args()

    study_dir = Path(args.study_dir).resolve()
    if not study_dir.exists():
        print(f"Study directory not found: {study_dir}")
        sys.exit(1)

    run_full_comparison(study_dir)


if __name__ == '__main__':
    main()
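The J5+ filtered tracking metric used above (RMS of the differential Zernike vector with piston, tip, tilt, and defocus removed) can be exercised on its own. A small sketch with made-up coefficient vectors (the helper name and values are illustrative, not taken from the test):

```python
import numpy as np

def tracking_rms(z_ref, z_el):
    """RMS of the differential Zernike vector between two elevations,
    dropping J1-J4 (piston, tip, tilt, defocus) as in the tracking analysis."""
    diff = np.asarray(z_el, dtype=float) - np.asarray(z_ref, dtype=float)
    return float(np.sqrt(np.sum(diff[4:] ** 2)))

# Illustrative 36-term coefficient vectors in nm (not real M1 data)
z20 = np.zeros(36)
z40 = np.zeros(36)
z40[2] = 50.0   # tilt change    -> filtered out (index < 4)
z40[4] = 3.0    # astigmatism    -> counted
z40[7] = 4.0    # coma           -> counted

print(f"40-20 tracking: {tracking_rms(z20, z40):.2f} nm")  # sqrt(3^2 + 4^2) = 5.00
```

Because the low-order terms are sliced away before the quadrature sum, a large rigid-body tilt change contributes nothing to the reported tracking WFE.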