feat: Add OPD method support to Zernike visualization with Standard/OPD toggle

Major improvements to Zernike WFE visualization:

- Add ZernikeDashboardInsight: Unified dashboard with all orientations (40°, 60°, 90°)
  on one page with light theme and executive summary
- Add OPD method toggle: Switch between Standard (Z-only) and OPD (X,Y,Z) methods
  in ZernikeWFEInsight with interactive buttons
- Add lateral displacement maps: Visualize X,Y displacement for each orientation
- Add displacement component views: Toggle between WFE, ΔX, ΔY, ΔZ in relative views
- Add metrics comparison table showing both methods side-by-side

New extractors:
- extract_zernike_figure.py: ZernikeOPDExtractor using BDF geometry interpolation
- extract_zernike_opd.py: Parabola-based OPD with focal length

Key finding: the OPD method reports 8-11% higher WFE values than the Standard method
(more conservative, and more accurate for surfaces with lateral displacement under gravity)

Documentation updates:
- SYS_12: Added E22 ZernikeOPD as recommended method
- SYS_16: Added ZernikeDashboard, updated ZernikeWFE with OPD features
- Cheatsheet: Added Zernike method comparison table

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-22 21:03:19 -05:00
parent d089003ced
commit d19fc39a2a
19 changed files with 8117 additions and 396 deletions


@@ -27,7 +27,7 @@ Phase 4 Extractors (2025-12-19):
- Part Introspection (E12): Comprehensive .prt analysis (expressions, mass, materials, attributes, groups, features)
"""
# Zernike extractor for telescope mirror optimization
# Zernike extractor for telescope mirror optimization (standard Z-only method)
from optimization_engine.extractors.extract_zernike import (
ZernikeExtractor,
extract_zernike_from_op2,
@@ -35,6 +35,29 @@ from optimization_engine.extractors.extract_zernike import (
extract_zernike_relative_rms,
)
# Analytic (parabola-based) Zernike extractor (accounts for lateral X/Y displacement)
# Uses parabola formula - requires knowing focal length
from optimization_engine.extractors.extract_zernike_opd import (
ZernikeAnalyticExtractor,
extract_zernike_analytic,
extract_zernike_analytic_filtered_rms,
compare_zernike_methods,
# Backwards compatibility (deprecated)
ZernikeOPDExtractor as _ZernikeOPDExtractor_deprecated,
)
# OPD-based Zernike extractor (uses actual mesh geometry, no shape assumption)
# MOST RIGOROUS method - RECOMMENDED for telescope mirror optimization
from optimization_engine.extractors.extract_zernike_figure import (
ZernikeOPDExtractor,
extract_zernike_opd,
extract_zernike_opd_filtered_rms,
# Backwards compatibility (deprecated)
ZernikeFigureExtractor,
extract_zernike_figure,
extract_zernike_figure_rms,
)
# Part mass and material extractor (from NX .prt files)
from optimization_engine.extractors.extract_part_mass_material import (
extract_part_mass_material,
@@ -114,11 +137,24 @@ __all__ = [
'extract_total_reaction_force',
'extract_reaction_component',
'check_force_equilibrium',
# Zernike (telescope mirrors)
# Zernike (telescope mirrors) - Standard Z-only method
'ZernikeExtractor',
'extract_zernike_from_op2',
'extract_zernike_filtered_rms',
'extract_zernike_relative_rms',
# Zernike OPD (RECOMMENDED - uses actual geometry, no shape assumption)
'ZernikeOPDExtractor',
'extract_zernike_opd',
'extract_zernike_opd_filtered_rms',
# Zernike Analytic (parabola-based with lateral displacement correction)
'ZernikeAnalyticExtractor',
'extract_zernike_analytic',
'extract_zernike_analytic_filtered_rms',
'compare_zernike_methods',
# Backwards compatibility (deprecated)
'ZernikeFigureExtractor',
'extract_zernike_figure',
'extract_zernike_figure_rms',
# Temperature (Phase 3 - thermal)
'extract_temperature',
'extract_temperature_gradient',
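The hunk above keeps the old names as plain aliases. If a loud deprecation were wanted instead, a thin warning wrapper is a common alternative — sketched here with hypothetical stub names, not the package's real functions:

```python
import warnings

def extract_zernike_opd_stub(subcase="1"):
    """Stand-in for the new entry point."""
    return {"method": "figure_opd", "subcase": subcase}

def extract_zernike_figure_stub(subcase="1"):
    """Deprecated alias: emit a DeprecationWarning, then forward to the new name."""
    warnings.warn(
        "extract_zernike_figure is deprecated; use extract_zernike_opd",
        DeprecationWarning,
        stacklevel=2,
    )
    return extract_zernike_opd_stub(subcase)

# Calling the old name still works, but now announces the rename
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = extract_zernike_figure_stub("20")
```

The plain-alias approach used in this commit is silent; the wrapper trades that silence for a migration nudge at every call site.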


@@ -0,0 +1,852 @@
"""
OPD Zernike Extractor (Most Rigorous Method)
=============================================
This is the RECOMMENDED Zernike extractor for telescope mirror optimization.
It computes surface error using the ACTUAL undeformed mesh geometry as the
reference surface, rather than assuming any analytical shape.
WHY THIS IS THE MOST ROBUST:
----------------------------
1. Works with ANY surface shape (parabola, hyperbola, asphere, freeform)
2. No need to know/estimate focal length or conic constant
3. Uses the actual mesh geometry as ground truth
4. Eliminates errors from prescription/shape approximations
5. Properly accounts for lateral (X, Y) displacement via interpolation
HOW IT WORKS:
-------------
The key insight: The BDF geometry for nodes present in OP2 IS the undeformed
reference surface (i.e., the optical figure before deformation).
1. Load BDF geometry for nodes that have displacements in OP2
2. Build 2D interpolator z_figure(x, y) from undeformed coordinates
3. For each deformed node at (x0+dx, y0+dy, z0+dz):
- Interpolate z_figure at the deformed (x,y) position
- Surface error = (z0 + dz) - z_interpolated
4. Fit Zernike polynomials to the surface error map
The interpolation accounts for the fact that when a node moves laterally,
it should be compared against the figure height at its NEW position.
USAGE:
------
from optimization_engine.extractors import ZernikeOPDExtractor
extractor = ZernikeOPDExtractor(op2_file)
result = extractor.extract_subcase('20')
# Simple convenience function
from optimization_engine.extractors import extract_zernike_opd
result = extract_zernike_opd(op2_file, subcase='20')
Author: Atomizer Framework
Date: 2024
"""
from pathlib import Path
from typing import Dict, Any, Optional, List, Tuple, Union
import numpy as np
from scipy.interpolate import LinearNDInterpolator, CloughTocher2DInterpolator
import warnings
# Import base Zernike functionality
from .extract_zernike import (
compute_zernike_coefficients,
compute_aberration_magnitudes,
zernike_noll,
zernike_label,
read_node_geometry,
find_geometry_file,
extract_displacements_by_subcase,
UNIT_TO_NM,
DEFAULT_N_MODES,
DEFAULT_FILTER_ORDERS,
)
try:
from pyNastran.bdf.bdf import BDF
except ImportError:
BDF = None
# ============================================================================
# Figure Geometry Parser
# ============================================================================
def parse_nastran_grid_large(line: str, continuation: str) -> Tuple[int, float, float, float]:
"""
Parse a GRID* (large field format) card from Nastran BDF.
Large field format (16-char fields):
GRID* ID CP X Y +
* Z CD
Args:
line: First line of GRID* card
continuation: Continuation line starting with *
Returns:
Tuple of (node_id, x, y, z)
"""
# GRID* line has: GRID* | ID | CP | X | Y | continuation marker
# Fields are 16 characters each after the 8-char GRID* identifier
# Parse first line
node_id = int(line[8:24].strip())
# CP (coordinate system) at 24:40 - skip
x = float(line[40:56].strip())
y = float(line[56:72].strip())
# Parse continuation line for Z
# Continuation line: * | Z | CD
z = float(continuation[8:24].strip())
return node_id, x, y, z
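A quick self-check of the fixed-column layout the two parsers assume (the cards below are fabricated examples, not from any project deck):

```python
# Large field: 8-char "GRID*" name field, then 16-char data fields
line = "GRID*   " + f"{42:>16d}" + f"{0:>16d}" + f"{12.5:>16.7f}" + f"{-3.25:>16.7f}"
cont = "*       " + f"{0.75:>16.7f}"

node_id = int(line[8:24])    # ID  in columns  9-24
x = float(line[40:56])       # X   in columns 41-56 (24:40 is CP, skipped)
y = float(line[56:72])       # Y   in columns 57-72
z = float(cont[8:24])        # Z on the continuation line

# Small field: 8-char fields; whitespace split suffices when fields are populated
small = "GRID          43       0    12.5   -3.25    0.75"
parts = small.split()
small_id, sx, sy, sz = int(parts[1]), float(parts[3]), float(parts[4]), float(parts[5])
```

Note the split-based small-field parse (like the module's) assumes no blank intermediate fields; fully fixed-column slicing would be needed for decks that omit CP.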
def parse_nastran_grid_small(line: str) -> Tuple[int, float, float, float]:
"""
Parse a GRID (small field format) card from Nastran BDF.
Small field format (8-char fields):
GRID ID CP X Y Z CD PS SEID
Args:
line: GRID card line
Returns:
Tuple of (node_id, x, y, z)
"""
parts = line.split()
node_id = int(parts[1])
# parts[2] is CP
x = float(parts[3]) if len(parts) > 3 else 0.0
y = float(parts[4]) if len(parts) > 4 else 0.0
z = float(parts[5]) if len(parts) > 5 else 0.0
return node_id, x, y, z
def load_figure_geometry(figure_path: Union[str, Path]) -> Dict[int, np.ndarray]:
"""
Load figure node geometry from a Nastran DAT/BDF file.
Supports both GRID (small field) and GRID* (large field) formats.
Args:
figure_path: Path to figure.dat file
Returns:
Dict mapping node_id to (x, y, z) coordinates
"""
figure_path = Path(figure_path)
if not figure_path.exists():
raise FileNotFoundError(f"Figure file not found: {figure_path}")
geometry = {}
with open(figure_path, 'r', encoding='utf-8', errors='ignore') as f:
lines = f.readlines()
i = 0
while i < len(lines):
line = lines[i]
# Skip comments and empty lines
if line.startswith('$') or line.strip() == '':
i += 1
continue
# Large field format: GRID*
if line.startswith('GRID*'):
if i + 1 < len(lines):
continuation = lines[i + 1]
try:
node_id, x, y, z = parse_nastran_grid_large(line, continuation)
geometry[node_id] = np.array([x, y, z])
except (ValueError, IndexError) as e:
warnings.warn(f"Failed to parse GRID* at line {i}: {e}")
i += 2
continue
# Small field format: GRID
elif line.startswith('GRID') and not line.startswith('GRID*'):
try:
node_id, x, y, z = parse_nastran_grid_small(line)
geometry[node_id] = np.array([x, y, z])
except (ValueError, IndexError) as e:
warnings.warn(f"Failed to parse GRID at line {i}: {e}")
i += 1
continue
# Check for END
if 'ENDDATA' in line:
break
i += 1
if not geometry:
raise ValueError(f"No GRID cards found in {figure_path}")
return geometry
def build_figure_interpolator(
figure_geometry: Dict[int, np.ndarray],
method: str = 'linear'
) -> LinearNDInterpolator:
"""
Build a 2D interpolator for the figure surface z(x, y).
Args:
figure_geometry: Dict mapping node_id to (x, y, z)
method: Interpolation method ('linear' or 'cubic')
Returns:
Interpolator function that takes (x, y) and returns z
"""
# Extract coordinates
points = np.array(list(figure_geometry.values()))
x = points[:, 0]
y = points[:, 1]
z = points[:, 2]
# Build interpolator
xy_points = np.column_stack([x, y])
if method == 'cubic':
# Clough-Tocher gives smoother results but is slower
interpolator = CloughTocher2DInterpolator(xy_points, z)
else:
# Linear is faster and usually sufficient
interpolator = LinearNDInterpolator(xy_points, z)
return interpolator
# ============================================================================
# Figure-Based OPD Calculation
# ============================================================================
def compute_figure_opd(
x0: np.ndarray,
y0: np.ndarray,
z0: np.ndarray,
dx: np.ndarray,
dy: np.ndarray,
dz: np.ndarray,
figure_interpolator
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray]:
"""
Compute surface error using actual figure geometry as reference.
The key insight: after lateral displacement, compare the deformed Z
against what the ACTUAL figure surface Z is at that (x, y) position.
Args:
x0, y0, z0: Original node coordinates
dx, dy, dz: Displacement components from FEA
figure_interpolator: Interpolator for figure z(x, y)
Returns:
Tuple of:
- x_def: Deformed X coordinates
- y_def: Deformed Y coordinates
- surface_error: Difference from ideal figure surface
- lateral_magnitude: Magnitude of lateral displacement
"""
# Deformed coordinates
x_def = x0 + dx
y_def = y0 + dy
z_def = z0 + dz
# Get ideal figure Z at deformed (x, y) positions
z_figure_at_deformed = figure_interpolator(x_def, y_def)
# Handle any NaN values (outside interpolation domain)
nan_mask = np.isnan(z_figure_at_deformed)
if np.any(nan_mask):
# For points outside figure, use original Z as fallback
z_figure_at_deformed[nan_mask] = z0[nan_mask]
warnings.warn(
f"{np.sum(nan_mask)} nodes outside figure interpolation domain, "
"using original Z as fallback"
)
# Surface error = deformed Z - ideal figure Z at deformed position
surface_error = z_def - z_figure_at_deformed
# Lateral displacement magnitude
lateral_magnitude = np.sqrt(dx**2 + dy**2)
return x_def, y_def, surface_error, lateral_magnitude
# ============================================================================
# Main OPD Extractor Class (Most Rigorous Method)
# ============================================================================
class ZernikeOPDExtractor:
"""
Zernike extractor using actual mesh geometry as the reference surface.
THIS IS THE RECOMMENDED EXTRACTOR for telescope mirror optimization.
This is the most rigorous approach for computing WFE because it:
1. Uses the actual mesh geometry (not an analytical approximation)
2. Accounts for lateral displacement via interpolation
3. Works with any surface shape (parabola, hyperbola, asphere, freeform)
4. No need to know focal length or optical prescription
The extractor works in two modes:
1. Default: Uses BDF geometry for nodes present in OP2 (RECOMMENDED)
2. With figure_file: Uses explicit figure.dat for reference geometry
Example:
# Simple usage - BDF geometry filtered to OP2 nodes (RECOMMENDED)
extractor = ZernikeOPDExtractor(op2_file)
results = extractor.extract_all_subcases()
# With explicit figure file (advanced - ensure coordinates match!)
extractor = ZernikeOPDExtractor(op2_file, figure_file='figure.dat')
"""
def __init__(
self,
op2_path: Union[str, Path],
figure_path: Optional[Union[str, Path]] = None,
bdf_path: Optional[Union[str, Path]] = None,
displacement_unit: str = 'mm',
n_modes: int = DEFAULT_N_MODES,
filter_orders: int = DEFAULT_FILTER_ORDERS,
interpolation_method: str = 'linear'
):
"""
Initialize figure-based Zernike extractor.
Args:
op2_path: Path to OP2 results file
figure_path: Path to figure.dat with undeformed figure geometry (OPTIONAL)
If None, uses BDF geometry for nodes present in OP2
bdf_path: Path to BDF geometry (auto-detected if None)
displacement_unit: Unit of displacement in OP2
n_modes: Number of Zernike modes to fit
filter_orders: Number of low-order modes to filter for RMS
interpolation_method: 'linear' or 'cubic' for figure interpolation
"""
self.op2_path = Path(op2_path)
self.figure_path = Path(figure_path) if figure_path else None
self.bdf_path = Path(bdf_path) if bdf_path else find_geometry_file(self.op2_path)
self.displacement_unit = displacement_unit
self.n_modes = n_modes
self.filter_orders = filter_orders
self.interpolation_method = interpolation_method
self.use_explicit_figure = figure_path is not None
# Unit conversion
self.nm_scale = UNIT_TO_NM.get(displacement_unit.lower(), 1e6)
self.um_scale = self.nm_scale / 1000.0
self.wfe_factor = 2.0 * self.nm_scale # WFE = 2 * surface error
# Lazy-loaded data
self._figure_geo = None
self._figure_interp = None
self._node_geo = None
self._displacements = None
@property
def node_geometry(self) -> Dict[int, np.ndarray]:
"""Lazy-load FEM node geometry from BDF."""
if self._node_geo is None:
self._node_geo = read_node_geometry(self.bdf_path)
return self._node_geo
@property
def displacements(self) -> Dict[str, Dict[str, np.ndarray]]:
"""Lazy-load displacements from OP2."""
if self._displacements is None:
self._displacements = extract_displacements_by_subcase(self.op2_path)
return self._displacements
@property
def figure_geometry(self) -> Dict[int, np.ndarray]:
"""
Lazy-load figure geometry.
If explicit figure_path provided, load from that file.
Otherwise, use BDF geometry filtered to nodes present in OP2.
"""
if self._figure_geo is None:
if self.use_explicit_figure and self.figure_path:
# Load from explicit figure.dat
self._figure_geo = load_figure_geometry(self.figure_path)
else:
# Use BDF geometry filtered to OP2 nodes
# Get all node IDs from OP2 (any subcase)
first_subcase = next(iter(self.displacements.values()))
op2_node_ids = set(int(nid) for nid in first_subcase['node_ids'])
# Filter BDF geometry to only OP2 nodes
self._figure_geo = {
nid: geo for nid, geo in self.node_geometry.items()
if nid in op2_node_ids
}
return self._figure_geo
@property
def figure_interpolator(self):
"""Lazy-build figure interpolator."""
if self._figure_interp is None:
self._figure_interp = build_figure_interpolator(
self.figure_geometry,
method=self.interpolation_method
)
return self._figure_interp
def _build_figure_opd_data(self, subcase_label: str) -> Dict[str, np.ndarray]:
"""
Build OPD data using figure geometry as reference.
Uses the figure geometry (either from explicit file or BDF filtered to OP2 nodes)
as the undeformed reference surface.
"""
if subcase_label not in self.displacements:
available = list(self.displacements.keys())
raise ValueError(f"Subcase '{subcase_label}' not found. Available: {available}")
data = self.displacements[subcase_label]
node_ids = data['node_ids']
disp = data['disp']
# Get figure node IDs
figure_node_ids = set(self.figure_geometry.keys())
# Build arrays - only for nodes in figure
x0, y0, z0 = [], [], []
dx_arr, dy_arr, dz_arr = [], [], []
matched_ids = []
for nid, vec in zip(node_ids, disp):
nid_int = int(nid)
# Check if node is in figure
if nid_int not in figure_node_ids:
continue
# Use figure geometry as reference (undeformed surface)
fig_geo = self.figure_geometry[nid_int]
x0.append(fig_geo[0])
y0.append(fig_geo[1])
z0.append(fig_geo[2])
dx_arr.append(vec[0])
dy_arr.append(vec[1])
dz_arr.append(vec[2])
matched_ids.append(nid_int)
if not x0:
raise ValueError(
f"No nodes matched between figure ({len(figure_node_ids)} nodes) "
f"and displacement data ({len(node_ids)} nodes)"
)
x0 = np.array(x0)
y0 = np.array(y0)
z0 = np.array(z0)
dx_arr = np.array(dx_arr)
dy_arr = np.array(dy_arr)
dz_arr = np.array(dz_arr)
# Compute figure-based OPD
x_def, y_def, surface_error, lateral_disp = compute_figure_opd(
x0, y0, z0, dx_arr, dy_arr, dz_arr,
self.figure_interpolator
)
# Convert to WFE
wfe_nm = surface_error * self.wfe_factor
return {
'node_ids': np.array(matched_ids),
'x_original': x0,
'y_original': y0,
'z_original': z0,
'dx': dx_arr,
'dy': dy_arr,
'dz': dz_arr,
'x_deformed': x_def,
'y_deformed': y_def,
'surface_error': surface_error,
'wfe_nm': wfe_nm,
'lateral_disp': lateral_disp,
'n_figure_nodes': len(self.figure_geometry),
'n_matched_nodes': len(matched_ids)
}
def extract_subcase(
self,
subcase_label: str,
include_coefficients: bool = True,
include_diagnostics: bool = True
) -> Dict[str, Any]:
"""
Extract Zernike metrics using figure-based OPD method.
Args:
subcase_label: Subcase identifier
include_coefficients: Include all Zernike coefficients
include_diagnostics: Include lateral displacement diagnostics
Returns:
Dict with RMS metrics, coefficients, and diagnostics
"""
opd_data = self._build_figure_opd_data(subcase_label)
# Use deformed coordinates for Zernike fitting
X = opd_data['x_deformed']
Y = opd_data['y_deformed']
WFE = opd_data['wfe_nm']
# Fit Zernike coefficients
coeffs, R_max = compute_zernike_coefficients(X, Y, WFE, self.n_modes)
# Compute RMS metrics
x_c = X - np.mean(X)
y_c = Y - np.mean(Y)
r = np.hypot(x_c / R_max, y_c / R_max)
theta = np.arctan2(y_c, x_c)
# Low-order Zernike basis
Z_low = np.column_stack([
zernike_noll(j, r, theta) for j in range(1, self.filter_orders + 1)
])
# Filtered WFE (J5+)
wfe_filtered = WFE - Z_low @ coeffs[:self.filter_orders]
global_rms = float(np.sqrt(np.mean(WFE**2)))
filtered_rms = float(np.sqrt(np.mean(wfe_filtered**2)))
# J1-J3 filtered (for manufacturing/optician)
Z_j1to3 = np.column_stack([
zernike_noll(j, r, theta) for j in range(1, 4)
])
wfe_j1to3 = WFE - Z_j1to3 @ coeffs[:3]
rms_j1to3 = float(np.sqrt(np.mean(wfe_j1to3**2)))
# Aberration magnitudes
aberrations = compute_aberration_magnitudes(coeffs)
result = {
'subcase': subcase_label,
'method': 'figure_opd',
'rms_wfe_nm': filtered_rms, # Primary metric (J5+)
'global_rms_nm': global_rms,
'filtered_rms_nm': filtered_rms,
'rms_filter_j1to3_nm': rms_j1to3,
'n_nodes': len(X),
'n_figure_nodes': opd_data['n_figure_nodes'],
'figure_file': str(self.figure_path.name) if self.figure_path else 'BDF (filtered to OP2)',
**aberrations
}
if include_diagnostics:
lateral = opd_data['lateral_disp']
result.update({
'max_lateral_displacement_um': float(np.max(lateral) * self.um_scale),
'rms_lateral_displacement_um': float(np.sqrt(np.mean(lateral**2)) * self.um_scale),
'mean_lateral_displacement_um': float(np.mean(lateral) * self.um_scale),
})
if include_coefficients:
result['coefficients'] = coeffs.tolist()
result['coefficient_labels'] = [
zernike_label(j) for j in range(1, self.n_modes + 1)
]
return result
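The filtering step used above — fit the low-order modes, subtract them, take the RMS of what remains — can be shown in isolation with plain piston/tilt columns standing in for the first Noll modes (synthetic numbers, not real mirror data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(-1.0, 1.0, n)
y = rng.uniform(-1.0, 1.0, n)
keep = x**2 + y**2 <= 1.0       # sample points on the unit pupil
x, y = x[keep], y[keep]

# Synthetic WFE (nm): large piston + tilt, plus a small astigmatism-like term
wfe = 50.0 + 30.0 * x - 20.0 * y + 5.0 * (2.0 * x * y)

# Least-squares fit of the low-order basis, then subtract it
A = np.column_stack([np.ones_like(x), x, y])   # piston, tilt-x, tilt-y
c, *_ = np.linalg.lstsq(A, wfe, rcond=None)
filtered = wfe - A @ c

rms_global = float(np.sqrt(np.mean(wfe**2)))
rms_filtered = float(np.sqrt(np.mean(filtered**2)))
```

The global RMS is dominated by piston and tilt; the filtered RMS retains only the higher-order content, which is why the extractor uses it as the primary optimization metric.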
def extract_all_subcases(self) -> Dict[int, Dict[str, Any]]:
"""Extract metrics for all subcases."""
results = {}
for label in self.displacements.keys():
try:
# Convert label to int if possible for consistent keys
key = int(label) if label.isdigit() else label
results[key] = self.extract_subcase(label)
except Exception as e:
warnings.warn(f"Failed to extract subcase {label}: {e}")
return results
def extract_relative(
self,
target_subcase: str,
reference_subcase: str,
include_coefficients: bool = False
) -> Dict[str, Any]:
"""
Extract Zernike metrics relative to a reference subcase using OPD method.
Computes: WFE_relative = WFE_target(node_i) - WFE_reference(node_i)
for each node, then fits Zernike to the difference field.
This is the CORRECT way to compute relative WFE for optimization.
It properly accounts for lateral (X,Y) displacement via OPD interpolation.
Args:
target_subcase: Subcase to analyze (e.g., "3" for 40 deg)
reference_subcase: Reference subcase to subtract (e.g., "2" for 20 deg)
include_coefficients: Whether to include all Zernike coefficients
Returns:
Dict with relative metrics: relative_filtered_rms_nm, relative_rms_filter_j1to3, etc.
"""
# Build OPD data for both subcases
target_data = self._build_figure_opd_data(target_subcase)
ref_data = self._build_figure_opd_data(reference_subcase)
# Build node ID -> WFE map for reference
ref_node_to_wfe = {
int(nid): wfe for nid, wfe in zip(ref_data['node_ids'], ref_data['wfe_nm'])
}
# Compute node-by-node relative WFE for common nodes
X_rel, Y_rel, WFE_rel = [], [], []
lateral_rel = []
for i, nid in enumerate(target_data['node_ids']):
nid = int(nid)
if nid not in ref_node_to_wfe:
continue
# Use target's deformed coordinates for Zernike fitting
X_rel.append(target_data['x_deformed'][i])
Y_rel.append(target_data['y_deformed'][i])
# Relative WFE = target WFE - reference WFE
target_wfe = target_data['wfe_nm'][i]
ref_wfe = ref_node_to_wfe[nid]
WFE_rel.append(target_wfe - ref_wfe)
lateral_rel.append(target_data['lateral_disp'][i])
X_rel = np.array(X_rel)
Y_rel = np.array(Y_rel)
WFE_rel = np.array(WFE_rel)
lateral_rel = np.array(lateral_rel)
if len(X_rel) == 0:
raise ValueError(f"No common nodes between subcases {target_subcase} and {reference_subcase}")
# Fit Zernike coefficients to the relative WFE
coeffs, R_max = compute_zernike_coefficients(X_rel, Y_rel, WFE_rel, self.n_modes)
# Compute RMS metrics
x_c = X_rel - np.mean(X_rel)
y_c = Y_rel - np.mean(Y_rel)
r = np.hypot(x_c / R_max, y_c / R_max)
theta = np.arctan2(y_c, x_c)
# Low-order Zernike basis (for filtering)
Z_low = np.column_stack([
zernike_noll(j, r, theta) for j in range(1, self.filter_orders + 1)
])
# Filtered WFE (J5+) - this is the primary optimization metric
wfe_filtered = WFE_rel - Z_low @ coeffs[:self.filter_orders]
global_rms = float(np.sqrt(np.mean(WFE_rel**2)))
filtered_rms = float(np.sqrt(np.mean(wfe_filtered**2)))
# J1-J3 filtered (for manufacturing/optician workload)
Z_j1to3 = np.column_stack([
zernike_noll(j, r, theta) for j in range(1, 4)
])
wfe_j1to3 = WFE_rel - Z_j1to3 @ coeffs[:3]
rms_j1to3 = float(np.sqrt(np.mean(wfe_j1to3**2)))
# Aberration magnitudes
aberrations = compute_aberration_magnitudes(coeffs)
result = {
'target_subcase': target_subcase,
'reference_subcase': reference_subcase,
'method': 'figure_opd_relative',
'relative_global_rms_nm': global_rms,
'relative_filtered_rms_nm': filtered_rms,
'relative_rms_filter_j1to3': rms_j1to3,
'n_common_nodes': len(X_rel),
'max_lateral_displacement_um': float(np.max(lateral_rel) * self.um_scale),
'rms_lateral_displacement_um': float(np.sqrt(np.mean(lateral_rel**2)) * self.um_scale),
**{f'relative_{k}': v for k, v in aberrations.items()}
}
if include_coefficients:
result['coefficients'] = coeffs.tolist()
result['coefficient_labels'] = [
zernike_label(j) for j in range(1, self.n_modes + 1)
]
return result
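The node-by-node differencing above reduces to a dictionary lookup over the common node IDs. A toy version with made-up IDs and WFE values:

```python
import numpy as np

# Target and reference WFE maps share only nodes 2, 3, 4
nodes_t = np.array([1, 2, 3, 4])
wfe_t = np.array([10.0, 12.0, 8.0, 15.0])   # nm
nodes_r = np.array([2, 3, 4, 5])
wfe_r = np.array([11.0, 9.0, 15.0, 4.0])    # nm

# Node ID -> WFE map for the reference, then subtract on common nodes
ref = {int(n): w for n, w in zip(nodes_r, wfe_r)}
rel = np.array([wfe_t[i] - ref[int(n)]
                for i, n in enumerate(nodes_t) if int(n) in ref])
```

Nodes present in only one subcase simply drop out, mirroring the `n_common_nodes` bookkeeping in the method.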
def extract_comparison(self, subcase_label: str) -> Dict[str, Any]:
"""
Compare figure-based method vs standard Z-only method.
"""
from .extract_zernike import ZernikeExtractor
# Figure-based (this extractor)
figure_result = self.extract_subcase(subcase_label)
# Standard Z-only
standard_extractor = ZernikeExtractor(self.op2_path, self.bdf_path)
standard_result = standard_extractor.extract_subcase(subcase_label)
# Compute deltas
delta_filtered = figure_result['filtered_rms_nm'] - standard_result['filtered_rms_nm']
pct_diff = 100.0 * delta_filtered / standard_result['filtered_rms_nm'] if standard_result['filtered_rms_nm'] > 0 else 0
return {
'subcase': subcase_label,
'figure_method': {
'filtered_rms_nm': figure_result['filtered_rms_nm'],
'global_rms_nm': figure_result['global_rms_nm'],
'n_nodes': figure_result['n_nodes'],
},
'standard_method': {
'filtered_rms_nm': standard_result['filtered_rms_nm'],
'global_rms_nm': standard_result['global_rms_nm'],
'n_nodes': standard_result['n_nodes'],
},
'delta': {
'filtered_rms_nm': delta_filtered,
'percent_difference': pct_diff,
},
'lateral_displacement': {
'max_um': figure_result.get('max_lateral_displacement_um', 0),
'rms_um': figure_result.get('rms_lateral_displacement_um', 0),
},
'figure_file': str(self.figure_path.name) if self.figure_path else 'BDF (filtered to OP2)',
}
# ============================================================================
# Convenience Functions
# ============================================================================
def extract_zernike_opd(
op2_file: Union[str, Path],
figure_file: Optional[Union[str, Path]] = None,
subcase: Union[int, str] = 1,
**kwargs
) -> Dict[str, Any]:
"""
Convenience function for OPD-based Zernike extraction (most rigorous method).
THIS IS THE RECOMMENDED FUNCTION for telescope mirror optimization.
Args:
op2_file: Path to OP2 results
figure_file: Path to explicit figure.dat (uses BDF geometry if None - RECOMMENDED)
subcase: Subcase identifier
**kwargs: Additional args for ZernikeOPDExtractor
Returns:
Dict with Zernike metrics, aberrations, and lateral displacement info
"""
extractor = ZernikeOPDExtractor(op2_file, figure_path=figure_file, **kwargs)
return extractor.extract_subcase(str(subcase))
def extract_zernike_opd_filtered_rms(
op2_file: Union[str, Path],
subcase: Union[int, str] = 1,
**kwargs
) -> float:
"""
Extract filtered RMS WFE using OPD method (most rigorous).
Primary metric for mirror optimization.
"""
result = extract_zernike_opd(op2_file, subcase=subcase, **kwargs)
return result['filtered_rms_nm']
# Backwards compatibility aliases
ZernikeFigureExtractor = ZernikeOPDExtractor # Deprecated: use ZernikeOPDExtractor
extract_zernike_figure = extract_zernike_opd # Deprecated: use extract_zernike_opd
extract_zernike_figure_rms = extract_zernike_opd_filtered_rms # Deprecated
# ============================================================================
# Module Exports
# ============================================================================
__all__ = [
# Primary exports (new names)
'ZernikeOPDExtractor',
'extract_zernike_opd',
'extract_zernike_opd_filtered_rms',
# Utility functions
'load_figure_geometry',
'build_figure_interpolator',
'compute_figure_opd',
# Backwards compatibility (deprecated)
'ZernikeFigureExtractor',
'extract_zernike_figure',
'extract_zernike_figure_rms',
]
if __name__ == '__main__':
import sys
if len(sys.argv) > 1:
op2_file = Path(sys.argv[1])
figure_file = Path(sys.argv[2]) if len(sys.argv) > 2 else None
print(f"Analyzing: {op2_file}")
print("=" * 60)
try:
extractor = ZernikeOPDExtractor(op2_file, figure_path=figure_file)
print(f"Figure file: {extractor.figure_path}")
print(f"Figure nodes: {len(extractor.figure_geometry)}")
print()
for label in extractor.displacements.keys():
print(f"\n{'=' * 60}")
print(f"Subcase {label}")
print('=' * 60)
comparison = extractor.extract_comparison(label)
print(f"\nStandard (Z-only) method:")
print(f" Filtered RMS: {comparison['standard_method']['filtered_rms_nm']:.2f} nm")
print(f" Nodes: {comparison['standard_method']['n_nodes']}")
print(f"\nFigure-based OPD method:")
print(f" Filtered RMS: {comparison['figure_method']['filtered_rms_nm']:.2f} nm")
print(f" Nodes: {comparison['figure_method']['n_nodes']}")
print(f"\nDifference:")
print(f" Delta: {comparison['delta']['filtered_rms_nm']:+.2f} nm ({comparison['delta']['percent_difference']:+.1f}%)")
print(f"\nLateral Displacement:")
print(f" Max: {comparison['lateral_displacement']['max_um']:.3f} um")
print(f" RMS: {comparison['lateral_displacement']['rms_um']:.3f} um")
except Exception as e:
print(f"Error: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
else:
print("Figure-Based Zernike Extractor")
print("=" * 40)
print("\nUsage: python extract_zernike_figure.py <op2_file> [figure.dat]")
print("\nThis extractor uses actual figure geometry instead of parabola approximation.")


@@ -0,0 +1,746 @@
"""
Analytic (Parabola-Based) Zernike Extractor for Telescope Mirror Optimization
==============================================================================
This extractor computes OPD using an ANALYTICAL parabola formula, accounting for
lateral (X, Y) displacements in addition to axial (Z) displacements.
NOTE: For the most robust OPD analysis, prefer ZernikeOPDExtractor from
extract_zernike_figure.py, which uses the actual mesh geometry as the reference
(no shape assumptions).
This analytic extractor is useful when:
- You know the optical prescription (focal length)
- The surface is a parabola (or close to it)
- You want to compare against theoretical parabola
WHY LATERAL CORRECTION MATTERS:
-------------------------------
The standard Zernike approach uses original (x, y) coordinates with only Z-displacement.
This is INCORRECT when lateral displacements are significant because:
1. A node at (x₀, y₀) moves to (x₀+Δx, y₀+Δy, z₀+Δz)
2. The parabola surface at the NEW position has a different expected Z
3. The true surface error is: (z₀+Δz) - z_parabola(x₀+Δx, y₀+Δy)
PARABOLA EQUATION:
------------------
For a paraboloid with vertex at origin and optical axis along Z:
z = (x² + y²) / (4 * f)
where f = focal length = R / 2 (R = radius of curvature at vertex)
For a concave mirror (like telescope primary):
z = -r² / (4 * f) (negative because surface curves away from +Z)
We support both conventions via the `concave` parameter.
Author: Atomizer Framework
Date: 2024
"""
from pathlib import Path
from typing import Dict, Any, Optional, List, Tuple, Union
import numpy as np
from numpy.linalg import LinAlgError
# Import base Zernike functionality from existing module
from .extract_zernike import (
compute_zernike_coefficients,
compute_aberration_magnitudes,
zernike_noll,
zernike_label,
read_node_geometry,
find_geometry_file,
extract_displacements_by_subcase,
UNIT_TO_NM,
DEFAULT_N_MODES,
DEFAULT_FILTER_ORDERS,
)
try:
from pyNastran.op2.op2 import OP2
from pyNastran.bdf.bdf import BDF
except ImportError:
raise ImportError("pyNastran is required. Install with: pip install pyNastran")
# ============================================================================
# OPD Calculation Functions
# ============================================================================
def compute_parabola_z(
x: np.ndarray,
y: np.ndarray,
focal_length: float,
concave: bool = True
) -> np.ndarray:
"""
Compute the Z coordinate on a paraboloid surface.
For a paraboloid: z = ±(x² + y²) / (4 * f)
Args:
x, y: Coordinates on the surface
focal_length: Focal length of the parabola (f = R_vertex / 2)
concave: If True, mirror curves toward -Z (typical telescope primary)
Returns:
Z coordinates on the parabola surface
"""
r_squared = x**2 + y**2
z = r_squared / (4.0 * focal_length)
if concave:
z = -z # Concave mirror curves toward -Z
return z
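A numeric sanity check of the sag and sign convention, using an illustrative focal length:

```python
import numpy as np

f = 1000.0                        # mm, assumed focal length
x = np.array([0.0, 100.0])
y = np.zeros(2)

z = -(x**2 + y**2) / (4.0 * f)    # concave=True branch: surface curves toward -Z

# Vertex sits at z = 0; at r = 100 mm the sag is 100^2 / (4 * 1000) = 2.5 mm
```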
def compute_true_opd(
x0: np.ndarray,
y0: np.ndarray,
z0: np.ndarray,
dx: np.ndarray,
dy: np.ndarray,
dz: np.ndarray,
focal_length: float,
concave: bool = True
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray]:
"""
Compute true Optical Path Difference accounting for lateral displacement.
The key insight: when a node moves laterally, we must compare its new Z
position against what the parabola Z SHOULD BE at that new (x, y) location.
The CORRECT formulation:
- Original surface: z0 = parabola(x0, y0) + manufacturing_errors
- Deformed surface: z0 + dz = parabola(x0 + dx, y0 + dy) + total_errors
The surface error we want is:
error = (z0 + dz) - parabola(x0 + dx, y0 + dy)
But since z0 ≈ parabola(x0, y0), we can compute the DIFFERENTIAL:
error ≈ dz - [parabola(x0 + dx, y0 + dy) - parabola(x0, y0)]
error ≈ dz - Δz_parabola
where Δz_parabola is the change in ideal parabola Z due to lateral movement.
Args:
x0, y0, z0: Original node coordinates (undeformed)
dx, dy, dz: Displacement components from FEA
focal_length: Parabola focal length
concave: Whether mirror is concave (typical)
Returns:
Tuple of:
- x_def: Deformed X coordinates (for Zernike fitting)
- y_def: Deformed Y coordinates (for Zernike fitting)
- surface_error: True surface error at deformed positions
- lateral_magnitude: Magnitude of lateral displacement (diagnostic)
"""
# Deformed coordinates
x_def = x0 + dx
y_def = y0 + dy
# Compute the CHANGE in parabola Z due to lateral displacement
# For parabola: z = ±r²/(4f), so:
# Δz_parabola = z(x_def, y_def) - z(x0, y0)
# = [(x0+dx)² + (y0+dy)² - x0² - y0²] / (4f)
# = [2*x0*dx + dx² + 2*y0*dy + dy²] / (4f)
# Original r² and deformed r²
r0_sq = x0**2 + y0**2
r_def_sq = x_def**2 + y_def**2
# Change in r² due to lateral displacement
delta_r_sq = r_def_sq - r0_sq
# Change in parabola Z (the Z shift that would keep the node on the ideal parabola)
delta_z_parabola = delta_r_sq / (4.0 * focal_length)
if concave:
delta_z_parabola = -delta_z_parabola
# TRUE surface error = actual dz - expected dz (to stay on parabola)
# If node moves laterally outward (larger r), parabola Z changes
# The error is the difference between actual Z movement and expected Z movement
surface_error = dz - delta_z_parabola
# Diagnostic: lateral displacement magnitude
lateral_magnitude = np.sqrt(dx**2 + dy**2)
return x_def, y_def, surface_error, lateral_magnitude
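# Hedged, self-contained sketch (illustrative numbers, not tied to any real
# model): a purely lateral shift produces a nonzero surface error under the
# differential above, which a Z-only method would miss entirely.
def _lateral_opd_sketch() -> float:
    """Return the surface error (mm) for a 1 um outward shift at r = 400 mm."""
    f = 5000.0                                   # assumed focal length, mm
    x0, y0 = 400.0, 0.0                          # undeformed node position, mm
    dx, dy, dz = 1e-3, 0.0, 0.0                  # 1 um outward, no Z motion
    delta_r_sq = (x0 + dx)**2 + (y0 + dy)**2 - (x0**2 + y0**2)
    delta_z_parabola = -delta_r_sq / (4.0 * f)   # concave: ideal surface drops
    # surface_error = dz - delta_z_parabola ~ +4.0e-5 mm, i.e. ~80 nm WFE (x2)
    return dz - delta_z_parabola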
def estimate_focal_length_from_geometry(
x: np.ndarray,
y: np.ndarray,
z: np.ndarray,
concave: bool = True
) -> float:
"""
Estimate focal length from node geometry by fitting a paraboloid.
Uses least-squares fit: z = a * (x² + y²) + b
where focal_length = 1 / (4 * |a|)
Args:
x, y, z: Node coordinates
        concave: Accepted for interface symmetry; the fit uses |a|, so the
            curvature sign does not change the estimate
Returns:
Estimated focal length
"""
r_squared = x**2 + y**2
# Fit z = a * r² + b
A = np.column_stack([r_squared, np.ones_like(r_squared)])
try:
coeffs, _, _, _ = np.linalg.lstsq(A, z, rcond=None)
a = coeffs[0]
except LinAlgError:
# Fallback: use simple ratio
mask = r_squared > 0
a = np.median(z[mask] / r_squared[mask])
# For concave mirror, a should be negative
# focal_length = 1 / (4 * |a|)
if abs(a) < 1e-12:
raise ValueError("Cannot estimate focal length - surface appears flat")
focal_length = 1.0 / (4.0 * abs(a))
return focal_length
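# Hedged sanity-check sketch of the fit used above: fitting z = a*r**2 + b to
# synthetic concave-parabola nodes should recover the generating focal length
# via f = 1/(4|a|). All values here are illustrative only.
def _focal_estimate_sketch() -> float:
    """Recover f from a least-squares fit to a synthetic concave parabola."""
    import numpy as np  # local import keeps the sketch self-contained
    f_true = 2500.0                              # mm, assumed
    x = np.linspace(-100.0, 100.0, 41)
    y = np.zeros_like(x)
    z = -(x**2 + y**2) / (4.0 * f_true)          # concave: sag toward -Z
    A = np.column_stack([x**2 + y**2, np.ones_like(x)])
    a, _b = np.linalg.lstsq(A, z, rcond=None)[0]
    return 1.0 / (4.0 * abs(a))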
# ============================================================================
# Main Analytic (Parabola-Based) Extractor Class
# ============================================================================
class ZernikeAnalyticExtractor:
"""
Analytic (parabola-based) Zernike extractor for telescope mirror optimization.
This extractor accounts for lateral (X, Y) displacements when computing
wavefront error, using an analytical parabola formula as reference.
NOTE: For most robust analysis, prefer ZernikeOPDExtractor from
extract_zernike_figure.py which uses actual mesh geometry (no shape assumptions).
Use this extractor when:
- You know the focal length / optical prescription
- The surface is parabolic (or near-parabolic)
- You want to compare against theoretical parabola shape
Example:
extractor = ZernikeAnalyticExtractor(
op2_file, bdf_file,
focal_length=5000.0 # mm
)
result = extractor.extract_subcase('20')
# Metrics available:
print(f"Max lateral displacement: {result['max_lateral_disp_um']:.2f} µm")
print(f"RMS lateral displacement: {result['rms_lateral_disp_um']:.2f} µm")
"""
def __init__(
self,
op2_path: Union[str, Path],
bdf_path: Optional[Union[str, Path]] = None,
focal_length: Optional[float] = None,
concave: bool = True,
displacement_unit: str = 'mm',
n_modes: int = DEFAULT_N_MODES,
filter_orders: int = DEFAULT_FILTER_ORDERS,
auto_estimate_focal: bool = True
):
"""
Initialize OPD-based Zernike extractor.
Args:
op2_path: Path to OP2 results file
bdf_path: Path to BDF/DAT geometry file (auto-detected if None)
focal_length: Parabola focal length in same units as geometry
If None and auto_estimate_focal=True, will estimate from mesh
concave: Whether mirror is concave (curves toward -Z)
displacement_unit: Unit of displacement in OP2 ('mm', 'm', 'um', 'nm')
n_modes: Number of Zernike modes to fit
filter_orders: Number of low-order modes to filter
auto_estimate_focal: If True, estimate focal length from geometry
"""
self.op2_path = Path(op2_path)
self.bdf_path = Path(bdf_path) if bdf_path else find_geometry_file(self.op2_path)
self.focal_length = focal_length
self.concave = concave
self.displacement_unit = displacement_unit
self.n_modes = n_modes
self.filter_orders = filter_orders
self.auto_estimate_focal = auto_estimate_focal
        # Unit conversion factor (falls back to mm -> nm if the unit string is unrecognized)
        self.nm_scale = UNIT_TO_NM.get(displacement_unit.lower(), 1e6)
self.um_scale = self.nm_scale / 1000.0 # For lateral displacement reporting
# WFE = 2 * surface error (optical convention for reflection)
self.wfe_factor = 2.0 * self.nm_scale
# Lazy-loaded data
self._node_geo = None
self._displacements = None
self._estimated_focal = None
@property
def node_geometry(self) -> Dict[int, np.ndarray]:
"""Lazy-load node geometry from BDF."""
if self._node_geo is None:
self._node_geo = read_node_geometry(self.bdf_path)
return self._node_geo
@property
def displacements(self) -> Dict[str, Dict[str, np.ndarray]]:
"""Lazy-load displacements from OP2."""
if self._displacements is None:
self._displacements = extract_displacements_by_subcase(self.op2_path)
return self._displacements
def get_focal_length(self, x: np.ndarray, y: np.ndarray, z: np.ndarray) -> float:
"""
Get focal length, estimating from geometry if not provided.
"""
if self.focal_length is not None:
return self.focal_length
if self._estimated_focal is not None:
return self._estimated_focal
if self.auto_estimate_focal:
self._estimated_focal = estimate_focal_length_from_geometry(
x, y, z, self.concave
)
return self._estimated_focal
raise ValueError(
"focal_length must be provided or auto_estimate_focal must be True"
)
def _build_opd_data(
self,
subcase_label: str
) -> Dict[str, np.ndarray]:
"""
Build OPD data arrays for a subcase using rigorous method.
Returns:
Dict with:
- x_original, y_original, z_original: Original coordinates
- dx, dy, dz: Displacement components
- x_deformed, y_deformed: Deformed coordinates (for Zernike fitting)
- surface_error: True surface error (mm or model units)
- wfe_nm: Wavefront error in nanometers
- lateral_disp: Lateral displacement magnitude
"""
if subcase_label not in self.displacements:
available = list(self.displacements.keys())
raise ValueError(f"Subcase '{subcase_label}' not found. Available: {available}")
data = self.displacements[subcase_label]
node_ids = data['node_ids']
disp = data['disp']
# Build arrays
x0, y0, z0 = [], [], []
dx_arr, dy_arr, dz_arr = [], [], []
for nid, vec in zip(node_ids, disp):
geo = self.node_geometry.get(int(nid))
if geo is None:
continue
x0.append(geo[0])
y0.append(geo[1])
z0.append(geo[2])
dx_arr.append(vec[0])
dy_arr.append(vec[1])
dz_arr.append(vec[2])
x0 = np.array(x0)
y0 = np.array(y0)
z0 = np.array(z0)
dx_arr = np.array(dx_arr)
dy_arr = np.array(dy_arr)
dz_arr = np.array(dz_arr)
# Get focal length
focal = self.get_focal_length(x0, y0, z0)
# Compute true OPD
x_def, y_def, surface_error, lateral_disp = compute_true_opd(
x0, y0, z0, dx_arr, dy_arr, dz_arr, focal, self.concave
)
# Convert to WFE (nm)
wfe_nm = surface_error * self.wfe_factor
return {
'x_original': x0,
'y_original': y0,
'z_original': z0,
'dx': dx_arr,
'dy': dy_arr,
'dz': dz_arr,
'x_deformed': x_def,
'y_deformed': y_def,
'surface_error': surface_error,
'wfe_nm': wfe_nm,
'lateral_disp': lateral_disp,
'focal_length': focal
}
def extract_subcase(
self,
subcase_label: str,
include_coefficients: bool = False,
include_diagnostics: bool = True
) -> Dict[str, Any]:
"""
Extract Zernike metrics for a single subcase using rigorous OPD method.
Args:
subcase_label: Subcase identifier (e.g., '20', '90')
include_coefficients: Whether to include all Zernike coefficients
include_diagnostics: Whether to include lateral displacement diagnostics
Returns:
Dict with RMS metrics, aberrations, and lateral displacement info
"""
opd_data = self._build_opd_data(subcase_label)
# Use DEFORMED coordinates for Zernike fitting
X = opd_data['x_deformed']
Y = opd_data['y_deformed']
WFE = opd_data['wfe_nm']
# Fit Zernike coefficients
coeffs, R_max = compute_zernike_coefficients(X, Y, WFE, self.n_modes)
# Compute RMS metrics
# Center on deformed coordinates
x_c = X - np.mean(X)
y_c = Y - np.mean(Y)
r = np.hypot(x_c / R_max, y_c / R_max)
theta = np.arctan2(y_c, x_c)
# Build low-order Zernike basis
Z_low = np.column_stack([
zernike_noll(j, r, theta) for j in range(1, self.filter_orders + 1)
])
# Filtered WFE
wfe_filtered = WFE - Z_low @ coeffs[:self.filter_orders]
global_rms = float(np.sqrt(np.mean(WFE**2)))
filtered_rms = float(np.sqrt(np.mean(wfe_filtered**2)))
        # RMS with J1-J3 (piston, tip, tilt) removed: the residual "optician workload"
Z_j1to3 = np.column_stack([
zernike_noll(j, r, theta) for j in range(1, 4)
])
wfe_j1to3 = WFE - Z_j1to3 @ coeffs[:3]
rms_j1to3 = float(np.sqrt(np.mean(wfe_j1to3**2)))
# Aberration magnitudes
aberrations = compute_aberration_magnitudes(coeffs)
result = {
'subcase': subcase_label,
'method': 'opd_rigorous',
'global_rms_nm': global_rms,
'filtered_rms_nm': filtered_rms,
'rms_filter_j1to3_nm': rms_j1to3,
'n_nodes': len(X),
'focal_length_used': opd_data['focal_length'],
**aberrations
}
if include_diagnostics:
lateral = opd_data['lateral_disp']
result.update({
'max_lateral_disp_um': float(np.max(lateral) * self.um_scale),
'rms_lateral_disp_um': float(np.sqrt(np.mean(lateral**2)) * self.um_scale),
'mean_lateral_disp_um': float(np.mean(lateral) * self.um_scale),
'p99_lateral_disp_um': float(np.percentile(lateral, 99) * self.um_scale),
})
if include_coefficients:
result['coefficients'] = coeffs.tolist()
result['coefficient_labels'] = [
zernike_label(j) for j in range(1, self.n_modes + 1)
]
return result
def extract_comparison(
self,
subcase_label: str
) -> Dict[str, Any]:
"""
Extract metrics using BOTH methods for comparison.
Useful for understanding the impact of the rigorous method
vs the standard Z-only approach.
Returns:
Dict with metrics from both methods and the delta
"""
from .extract_zernike import ZernikeExtractor
# Rigorous OPD method
opd_result = self.extract_subcase(subcase_label, include_diagnostics=True)
# Standard Z-only method
standard_extractor = ZernikeExtractor(
self.op2_path, self.bdf_path,
self.displacement_unit, self.n_modes, self.filter_orders
)
standard_result = standard_extractor.extract_subcase(subcase_label)
# Compute deltas
delta_global = opd_result['global_rms_nm'] - standard_result['global_rms_nm']
delta_filtered = opd_result['filtered_rms_nm'] - standard_result['filtered_rms_nm']
return {
'subcase': subcase_label,
'opd_method': {
'global_rms_nm': opd_result['global_rms_nm'],
'filtered_rms_nm': opd_result['filtered_rms_nm'],
},
'standard_method': {
'global_rms_nm': standard_result['global_rms_nm'],
'filtered_rms_nm': standard_result['filtered_rms_nm'],
},
'delta': {
'global_rms_nm': delta_global,
'filtered_rms_nm': delta_filtered,
                'percent_difference_filtered': (
                    100.0 * delta_filtered / standard_result['filtered_rms_nm']
                    if standard_result['filtered_rms_nm'] > 0 else 0.0
                ),
},
'lateral_displacement': {
'max_um': opd_result['max_lateral_disp_um'],
'rms_um': opd_result['rms_lateral_disp_um'],
'p99_um': opd_result['p99_lateral_disp_um'],
},
'recommendation': _get_method_recommendation(opd_result)
}
def extract_all_subcases(
self,
reference_subcase: Optional[str] = None
) -> Dict[str, Dict[str, Any]]:
"""
Extract metrics for all available subcases.
        Args:
            reference_subcase: Reserved for future relative-WFE support;
                currently unused
Returns:
Dict mapping subcase label to metrics dict
"""
results = {}
for label in self.displacements.keys():
results[label] = self.extract_subcase(label)
return results
def get_diagnostic_data(
self,
subcase_label: str
) -> Dict[str, np.ndarray]:
"""
Get raw data for visualization/debugging.
Returns arrays suitable for plotting lateral displacement maps,
comparing methods, etc.
"""
return self._build_opd_data(subcase_label)
def _get_method_recommendation(opd_result: Dict[str, Any]) -> str:
"""
Generate recommendation based on lateral displacement magnitude.
"""
max_lateral = opd_result.get('max_lateral_disp_um', 0)
rms_lateral = opd_result.get('rms_lateral_disp_um', 0)
if max_lateral > 10.0: # > 10 µm max lateral
return "CRITICAL: Large lateral displacements detected. OPD method strongly recommended."
elif max_lateral > 1.0: # > 1 µm
return "RECOMMENDED: Significant lateral displacements. OPD method provides more accurate results."
elif max_lateral > 0.1: # > 0.1 µm
return "OPTIONAL: Small lateral displacements. OPD method provides minor improvement."
else:
return "EQUIVALENT: Negligible lateral displacements. Both methods give similar results."
# ============================================================================
# Convenience Functions
# ============================================================================
def extract_zernike_analytic(
op2_file: Union[str, Path],
bdf_file: Optional[Union[str, Path]] = None,
subcase: Union[int, str] = 1,
focal_length: Optional[float] = None,
displacement_unit: str = 'mm',
**kwargs
) -> Dict[str, Any]:
"""
Convenience function to extract Zernike metrics using analytic parabola method.
NOTE: For most robust analysis, prefer extract_zernike_opd() from
extract_zernike_figure.py which uses actual mesh geometry.
Args:
op2_file: Path to OP2 results file
bdf_file: Path to BDF geometry (auto-detected if None)
subcase: Subcase identifier
focal_length: Parabola focal length (auto-estimated if None)
displacement_unit: Unit of displacement in OP2
**kwargs: Additional arguments for ZernikeAnalyticExtractor
Returns:
Dict with RMS metrics, aberrations, and lateral displacement info
"""
extractor = ZernikeAnalyticExtractor(
op2_file, bdf_file,
focal_length=focal_length,
displacement_unit=displacement_unit,
**kwargs
)
return extractor.extract_subcase(str(subcase))
def extract_zernike_analytic_filtered_rms(
op2_file: Union[str, Path],
subcase: Union[int, str] = 1,
**kwargs
) -> float:
"""
Extract filtered RMS WFE using analytic parabola method.
NOTE: For most robust analysis, prefer extract_zernike_opd_filtered_rms()
from extract_zernike_figure.py.
"""
result = extract_zernike_analytic(op2_file, subcase=subcase, **kwargs)
return result['filtered_rms_nm']
def compare_zernike_methods(
op2_file: Union[str, Path],
bdf_file: Optional[Union[str, Path]] = None,
subcase: Union[int, str] = 1,
focal_length: Optional[float] = None,
**kwargs
) -> Dict[str, Any]:
"""
Compare standard vs analytic (parabola) Zernike methods.
Use this to understand how much lateral displacement affects your results.
"""
extractor = ZernikeAnalyticExtractor(
op2_file, bdf_file,
focal_length=focal_length,
**kwargs
)
return extractor.extract_comparison(str(subcase))
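# Typical usage sketch (the file path, subcase label, and focal length below
# are hypothetical):
#   report = compare_zernike_methods('mirror.op2', subcase='20', focal_length=5000.0)
#   print(report['delta']['percent_difference_filtered'])
#   print(report['recommendation'])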
# Backwards compatibility aliases
ZernikeOPDExtractor = ZernikeAnalyticExtractor # Deprecated: use ZernikeAnalyticExtractor
extract_zernike_opd = extract_zernike_analytic # Deprecated: use extract_zernike_analytic
extract_zernike_opd_filtered_rms = extract_zernike_analytic_filtered_rms # Deprecated
# ============================================================================
# Module Exports
# ============================================================================
__all__ = [
# Main extractor class (new name)
'ZernikeAnalyticExtractor',
# Convenience functions (new names)
'extract_zernike_analytic',
'extract_zernike_analytic_filtered_rms',
'compare_zernike_methods',
# Utility functions
'compute_true_opd',
'compute_parabola_z',
'estimate_focal_length_from_geometry',
# Backwards compatibility (deprecated)
'ZernikeOPDExtractor',
'extract_zernike_opd',
'extract_zernike_opd_filtered_rms',
]
if __name__ == '__main__':
import sys
if len(sys.argv) > 1:
op2_file = Path(sys.argv[1])
focal = float(sys.argv[2]) if len(sys.argv) > 2 else None
print(f"Analyzing: {op2_file}")
print(f"Focal length: {focal if focal else 'auto-estimate'}")
print("=" * 60)
try:
            extractor = ZernikeAnalyticExtractor(op2_file, focal_length=focal)
print(f"\nAvailable subcases: {list(extractor.displacements.keys())}")
for label in extractor.displacements.keys():
print(f"\n{'=' * 60}")
print(f"Subcase {label}")
print('=' * 60)
# Compare methods
comparison = extractor.extract_comparison(label)
print(f"\nStandard (Z-only) method:")
print(f" Global RMS: {comparison['standard_method']['global_rms_nm']:.2f} nm")
print(f" Filtered RMS: {comparison['standard_method']['filtered_rms_nm']:.2f} nm")
print(f"\nRigorous OPD method:")
print(f" Global RMS: {comparison['opd_method']['global_rms_nm']:.2f} nm")
print(f" Filtered RMS: {comparison['opd_method']['filtered_rms_nm']:.2f} nm")
print(f"\nDifference (OPD - Standard):")
print(f" Filtered RMS: {comparison['delta']['filtered_rms_nm']:+.2f} nm ({comparison['delta']['percent_difference_filtered']:+.1f}%)")
print(f"\nLateral Displacement:")
print(f" Max: {comparison['lateral_displacement']['max_um']:.3f} µm")
print(f" RMS: {comparison['lateral_displacement']['rms_um']:.3f} µm")
print(f" P99: {comparison['lateral_displacement']['p99_um']:.3f} µm")
print(f"\n{comparison['recommendation']}")
except Exception as e:
print(f"Error: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
else:
        print("Analytic (Parabola-Based) Zernike Extractor")
        print("=" * 40)
        print("\nUsage: python extract_zernike_opd.py <op2_file> [focal_length]")
        print("\nThis module provides parabola-based OPD Zernike extraction")
print("that accounts for lateral (X, Y) displacements.")
print("\nKey features:")
print(" - True optical path difference calculation")
print(" - Accounts for node lateral shift due to pinching/clamping")
print(" - Lateral displacement diagnostics")
print(" - Method comparison (standard vs OPD)")