Atomizer/examples/Zernike_old_reference/zernike_Post_Script_NX.py
Antoine ec5e42d733 feat: Add M1 mirror Zernike optimization with correct RMS calculation
Major improvements to telescope mirror optimization workflow:

Assembly FEM Workflow (solve_simulation.py):
- Fixed multi-part assembly FEM update sequence
- Use ImportFromFile() for reliable expression updates
- Add DuplicateNodesCheckBuilder with MergeOccurrenceNodes=True
- Switch to Foreground solve mode for multi-subcase solutions
- Add detailed logging and diagnostics for node merge operations

Zernike RMS Calculation:
- CRITICAL FIX: Use correct surface-based RMS formula
  - Global RMS = sqrt(mean(W^2)) from actual WFE values
  - Filtered RMS = sqrt(mean(W_residual^2)) after removing low-order fit
  - This matches zernike_Post_Script_NX.py (optical standard)
- Previous WRONG formula was: sqrt(sum(coeffs^2))
- Add compute_rms_filter_j1to3() for optician workload metric
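The distinction above can be illustrated with a minimal sketch (hypothetical, not the repo code): surface RMS is computed from the sampled wavefront-error values themselves, and "filtered" RMS from the residual after subtracting a least-squares low-order fit. `sqrt(sum(coeffs^2))` only equals the surface RMS when the fitted basis columns are orthonormal over the exact sample grid, which a scattered FEM node set does not guarantee.

```python
import numpy as np

# Hypothetical illustration: W is a vector of WFE samples in nm.
rng = np.random.default_rng(0)
W = rng.normal(scale=50.0, size=10_000)

# Correct surface-based RMS: sqrt(mean(W^2)) over the actual samples.
global_rms = np.sqrt(np.mean(W ** 2))

# Filtered RMS: least-squares fit of a low-order basis (here a single
# stand-in "piston" column), then RMS of the residual.
Z = np.ones((W.size, 1))
coeffs, *_ = np.linalg.lstsq(Z, W, rcond=None)
filtered_rms = np.sqrt(np.mean((W - Z @ coeffs) ** 2))

# Removing a fitted term can never increase the residual RMS.
assert 0.0 < filtered_rms <= global_rms
```

The real script fits 50 Zernike modes and subtracts the first 3 or 4 (piston, tip, tilt, defocus); the one-column basis here only demonstrates the formula shape.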

Subcase Mapping:
- Fix subcase mapping to match NX model:
  - Subcase 1 = 90 deg (polishing orientation)
  - Subcase 2 = 20 deg (reference)
  - Subcase 3 = 40 deg
  - Subcase 4 = 60 deg
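The mapping above can be sketched as a plain lookup (hypothetical names; the actual mapping lives in the NX model and the post-processing scripts):

```python
# Hypothetical sketch of the subcase -> gravity-orientation mapping.
SUBCASE_TO_ANGLE_DEG = {1: 90, 2: 20, 3: 40, 4: 60}
ANGLE_ROLE = {90: "polishing orientation", 20: "reference"}

def describe(subcase_id: int) -> str:
    ang = SUBCASE_TO_ANGLE_DEG[subcase_id]
    role = ANGLE_ROLE.get(ang, "load case")
    return f"subcase {subcase_id}: {ang} deg ({role})"
```

For example, `describe(2)` yields `"subcase 2: 20 deg (reference)"`, matching the list above.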

New Study: M1 Mirror Zernike Optimization
- Full optimization config with 11 design variables
- 3 objectives: rel_filtered_rms_40_vs_20, rel_filtered_rms_60_vs_20, mfg_90_optician_workload
- Neural surrogate support for accelerated optimization

Documentation:
- Update ZERNIKE_INTEGRATION.md with correct RMS formula
- Update ASSEMBLY_FEM_WORKFLOW.md with expression import and node merge details
- Add reference scripts from original zernike_Post_Script_NX.py

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 16:30:15 -05:00

1013 lines
39 KiB
Python

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Zernike per Subcase (relative to 20 deg) -> HTML + CSVs (HEADLESS) + dual-reference RMS (vs 20 deg and vs 90 deg)
- Auto-targets OP2:
* default: m1_optimization_baseline_Isogrid_fem1_i_fem1_sim1_test2-solution_1.op2 in CWD
* or: pass --op2 FULLPATH
- Auto-loads matching .dat/.bdf (same-basename preferred; else first found)
- Extracts subcases by label/ID (expects 90,20,40,60)
- Uses 20 deg as the reference for plots/pages and the *_Rel20 fields (vs 20 deg)
- Also computes RMS relative to 90 deg and exports them:
* EXP: RMS_<ang>_Global_Rel20, RMS_<ang>_Filtered_Rel20, RMS_<ang>_Global_Rel90, RMS_<ang>_Filtered_Rel90
* LOG: Rel_GlobalRMS_nm_<ang>_vs20, Rel_FilteredRMS_nm_<ang>_vs20, Rel_GlobalRMS_nm_<ang>_vs90, Rel_FilteredRMS_nm_<ang>_vs90
- Outputs:
* Combined nodal CSV (abs 20 deg + relative others vs 20 deg)
* One HTML for 20 deg (absolute) and one HTML per other subcase (relative to 20 deg),
with each relative page's RMS table also showing the absolute RMS for that angle
* One HTML for 90 deg (absolute) with manufacturing metrics (numeric only)
* Zernike coeff CSVs (|coeff| per page; with labels and optional bar chart)
* RMS summary CSV (ABS for all; REL20 for non-reference)
* EXP file with 90/20/40/60 ABS; REL20 (vs 20 deg); and REL90 (vs 90 deg)
* Appends a row to Results/RMS_log.csv (ABS & REL vs 20 deg & REL vs 90 deg)
- HTML auto-open controlled by OPEN_HTML flag
Requires: numpy pandas plotly pyNastran matplotlib
"""
import os, webbrowser, re, argparse, sys
import numpy as np
import pandas as pd
from math import factorial
from numpy.linalg import LinAlgError
from datetime import datetime
# Best-effort UTF-8 console (ignored if not supported)
try:
    sys.stdout.reconfigure(encoding="utf-8")
    sys.stderr.reconfigure(encoding="utf-8")
except Exception:
    pass
# Try messagebox for friendlier fatal popup if available; otherwise print-only
try:
    from tkinter import messagebox  # optional
except Exception:
    messagebox = None
# Plotly / mesh
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from matplotlib.tri import Triangulation
# Nastran IO
from pyNastran.op2.op2 import OP2
from pyNastran.bdf.bdf import BDF
# ============ MONKEY PATCH FOR UNKNOWN NASTRAN VERSIONS ============
import pyNastran.op2.op2_interface.version as pynastran_version
_original_parse = pynastran_version.parse_nastran_version
def patched_parse_nastran_version(data, version, encoding, log):
    try:
        return _original_parse(data, version, encoding, log)
    except RuntimeError as e:
        if 'unknown version' in str(e):
            version_str = str(e).split('=')[1].strip("')")
            log.warning(f'Unknown Nastran version {version_str}, attempting to parse anyway')
            return 'msc', version_str  # assume MSC Nastran format
        raise
pynastran_version.parse_nastran_version = patched_parse_nastran_version
print("[INFO] Applied pyNastran version compatibility patch", flush=True)
# ===================================================================
# ---------------- Config ----------------
N_MODES = 50
AMP = 2.0 # visual scale for residual plot
PANCAKE = 10.0 # flattens Z range for viewing
PLOT_DOWNSAMPLE = 10000
FILTER_LOW_ORDERS = 4 # piston, tip, tilt, defocus
SHOW_ZERNIKE_BAR = True # adds a horizontal bar chart of |coeff| with labels
REQUIRED_SUBCASES = [90, 20, 40, 60]
# Labels
POLISH_LABEL = "90" # polishing orientation (horizontal)
POLISH_TITLE = "90 deg" # for plot/table titles
REF_LABEL = "20" # reference subcase label as string
REF_TITLE = "20 deg" # for plot/table titles
# Displacement unit in OP2 -> nm scale for WFE = 2*Disp_Z
DISP_SRC_UNIT = "mm" # "mm" or "m"
NM_PER_UNIT = 1e6 if DISP_SRC_UNIT == "mm" else 1e9 # 1 mm = 1e6 nm, 1 m = 1e9 nm
# Name of fixed OP2 (used when --op2 is not supplied)
OP2_FIXED_NAME = "assy_m1_assyfem1_sim1-solution_1.op2"
# Auto-open HTML reports?
OPEN_HTML = False
# ----------------------------------------
# --------- Zernike utilities ----------
def noll_indices(j: int):
    if j < 1:
        raise ValueError("Noll index j must be >= 1")
    count = 0
    n = 0
    while True:
        if n == 0:
            ms = [0]
        elif n % 2 == 0:
            ms = [0] + [m for k in range(1, n//2 + 1) for m in (-2*k, 2*k)]
        else:
            ms = [m for k in range(0, (n+1)//2) for m in (-(2*k+1), (2*k+1))]
        for m in ms:
            count += 1
            if count == j:
                return n, m
        n += 1
def zernike_noll(j, r, th):
    n, m = noll_indices(j)
    R = np.zeros_like(r)
    for s in range((n - abs(m))//2 + 1):
        c = ((-1)**s * factorial(n - s) /
             (factorial(s) *
              factorial((n + abs(m))//2 - s) *
              factorial((n - abs(m))//2 - s)))
        R += c * r**(n - 2*s)
    if m == 0:
        return R
    return R * (np.cos(m*th) if m > 0 else np.sin(-m*th))
def compute_zernike_coeffs_chunked(X, Y, vals, n_modes, chunk_size=100000):
    Xc, Yc = X - np.mean(X), Y - np.mean(Y)
    R = float(np.max(np.hypot(Xc, Yc)))
    r = np.hypot(Xc/R, Yc/R).astype(np.float32)
    th = np.arctan2(Yc, Xc).astype(np.float32)
    mask = (r <= 1.0) & ~np.isnan(vals)
    if not np.any(mask):
        raise RuntimeError("No valid points inside unit disk.")
    idx = np.nonzero(mask)[0]
    m = int(n_modes)
    G = np.zeros((m, m), dtype=np.float64)  # normal-equations matrix Z^T Z
    h = np.zeros((m,), dtype=np.float64)    # right-hand side Z^T v
    v = vals.astype(np.float64)
    for start in range(0, len(idx), chunk_size):
        sl = idx[start:start+chunk_size]
        r_b, th_b, v_b = r[sl], th[sl], v[sl]
        Zb = np.column_stack([zernike_noll(j, r_b, th_b).astype(np.float32)
                              for j in range(1, m+1)])
        G += (Zb.T @ Zb).astype(np.float64)
        h += (Zb.T @ v_b).astype(np.float64)
    try:
        coeffs = np.linalg.solve(G, h)
    except LinAlgError:
        coeffs = np.linalg.lstsq(G, h, rcond=None)[0]
    return coeffs, R
# --------------------------------------
# ------------- IO helpers -------------
def find_dat_or_bdf(op2_path: str):
    folder = os.path.dirname(op2_path)
    base = os.path.splitext(os.path.basename(op2_path))[0]
    for ext in (".dat", ".bdf"):
        cand = os.path.join(folder, base + ext)
        if os.path.exists(cand):
            return cand
    for name in os.listdir(folder):
        if name.lower().endswith((".dat", ".bdf")):
            return os.path.join(folder, name)
    raise FileNotFoundError("No .dat or .bdf found in folder; geometry required.")
def pick_files_cli():
    ap = argparse.ArgumentParser()
    ap.add_argument("--op2", default=None, help="Full path to OP2 (optional)")
    ap.add_argument("--simdir", default=None, help="Directory to search (optional)")
    args = ap.parse_args()
    if args.op2:
        op2_path = args.op2
    else:
        base_dir = args.simdir or os.getcwd()
        op2_path = os.path.join(base_dir, OP2_FIXED_NAME)
    if not os.path.exists(op2_path):
        raise FileNotFoundError(f"OP2 not found: {op2_path}")
    dat_path = find_dat_or_bdf(op2_path)
    return op2_path, dat_path
def read_geometry(dat_path):
    bdf = BDF()
    bdf.read_bdf(dat_path)
    node_geo = {int(nid): node.get_position() for nid, node in bdf.nodes.items()}
    return node_geo
def _extract_first_int(text: str):
    if not isinstance(text, str):
        return None
    m = re.search(r'-?\d+', text)
    return int(m.group(0)) if m else None
def read_displacements_by_subcase(op2_path):
    """
    Return dict with keys {'90','20','40','60'}.
    Priority: subtitle numeric match, else isubcase numeric match.
    If the exact set is not found:
      - If 4+ numeric subcases exist, remap the 4 smallest to 90,20,40,60 (logged)
      - Else: raise with a detailed dump.
    Also writes an index CSV next to the OP2 for inspection.
    """
    want = REQUIRED_SUBCASES[:]  # [90, 20, 40, 60]
    out_exact = {}
    debug_rows = []
    op2 = OP2()
    op2.read_op2(op2_path)
    if not op2.displacements:
        raise RuntimeError("No displacement subcases found in OP2.")
    for key, darr in op2.displacements.items():
        data = darr.data
        dmat = data[0] if data.ndim == 3 else (data if data.ndim == 2 else None)
        if dmat is None:
            continue
        ngt = darr.node_gridtype.astype(int)
        node_ids = ngt if ngt.ndim == 1 else ngt[:, 0]
        subtitle_raw = getattr(darr, 'subtitle', None)
        isub_raw = getattr(darr, 'isubcase', None)
        isub_int = int(isub_raw) if isinstance(isub_raw, int) else None
        sub_from_text = _extract_first_int(subtitle_raw) if isinstance(subtitle_raw, str) else None
        debug_rows.append({
            "table_key": str(key),
            "isubcase": isub_int,
            "subtitle": str(subtitle_raw) if subtitle_raw is not None else "",
            "subtitle_int": sub_from_text if sub_from_text is not None else "",
            "nnodes": int(len(node_ids)),
            "_node_ids": node_ids.astype(int),
            "_dmat": dmat.copy(),
        })
        if isinstance(sub_from_text, int) and sub_from_text in want:
            out_exact[str(sub_from_text)] = {"node_ids": node_ids.astype(int), "disp": dmat.copy()}
            continue
        if isinstance(isub_int, int) and isub_int in want:
            out_exact[str(isub_int)] = {"node_ids": node_ids.astype(int), "disp": dmat.copy()}
    try:
        folder = os.path.dirname(op2_path)
        base = os.path.splitext(os.path.basename(op2_path))[0]
        idx_path = os.path.join(folder, f"{base}_op2_displ_index.csv")
        pd.DataFrame([{k: v for k, v in r.items() if not k.startswith('_')} for r in debug_rows])\
            .to_csv(idx_path, index=False)
        print(f"[DEBUG] Wrote displacement index: {idx_path}", flush=True)
    except Exception as e:
        print(f"[WARN] Could not write displacement index CSV: {e}", flush=True)
    if all(str(x) in out_exact for x in want):
        print("[INFO] Found exact subcases 90, 20, 40 and 60 via labels.", flush=True)
        return out_exact
    numerics = []
    for r in debug_rows:
        n = r.get("subtitle_int")
        if not isinstance(n, int):
            n = r.get("isubcase") if isinstance(r.get("isubcase"), int) else None
        if isinstance(n, int):
            numerics.append((n, r))
    numerics = sorted(numerics, key=lambda t: t[0])
    if len(numerics) >= 4:
        chosen = numerics[:4]
        mapping = dict(zip([c[0] for c in chosen], want))
        out = {}
        for n, r in chosen:
            lbl = str(mapping[n])
            out[lbl] = {"node_ids": r["_node_ids"], "disp": r["_dmat"]}
        print("[WARN] Exact 90/20/40/60 not found; remapped smallest numeric subcases "
              f"{[c[0] for c in chosen]} -> {want}", flush=True)
        return out
    print("\n[DEBUG] Displacement entries found (dump):", flush=True)
    if debug_rows:
        header = f'{"table_key":<18} {"isubcase":>7} {"subtitle":<40} {"subtitle_int":>12} {"nnodes":>8}'
        print(header); print("-" * len(header))
        for r in debug_rows:
            print(f'{r["table_key"]:<18} {str(r["isubcase"]):>7} {r["subtitle"]:<40} '
                  f'{str(r["subtitle_int"]):>12} {r["nnodes"]:>8}', flush=True)
    missing = [x for x in want if str(x) not in out_exact]
    raise RuntimeError(f"Required subcases missing: {missing}. "
                       "Ensure SUBTITLEs (or isubcase IDs) are exactly 90, 20, 40 and 60.")
# --------------------------------------
# ------------- Core logic -------------
def build_dataframe_for_subcase(label, node_ids, dmat, node_geo):
    rows = []
    for nid, vec in zip(node_ids, dmat):
        x0, y0, z0 = node_geo.get(int(nid), (np.nan,) * 3)
        rows.append({
            "Subcase": label,
            "NodeLabel": int(nid),
            "X0": x0, "Y0": y0, "Z0": z0,
            "Disp_Z": float(vec[2]),
            "DispUnit": DISP_SRC_UNIT
        })
    df = pd.DataFrame(rows)
    if df[["X0", "Y0"]].isna().any().any():
        raise RuntimeError("Geometry missing (X0,Y0) for some nodes. Ensure .dat/.bdf matches the OP2.")
    return df
def compute_relative_dz(df_abs_ref, df_abs_ang):
    a = df_abs_ref.set_index("NodeLabel")
    b = df_abs_ang.set_index("NodeLabel")
    common = a.index.intersection(b.index)
    b2 = b.loc[common].copy()
    b2["Rel_Disp_Z"] = b2["Disp_Z"] - a.loc[common, "Disp_Z"]
    b2.reset_index(inplace=True)
    out = df_abs_ang.merge(b2[["NodeLabel", "Rel_Disp_Z"]], on="NodeLabel", how="inner")
    out["Rel_WFE_nm"] = 2.0 * out["Rel_Disp_Z"] * NM_PER_UNIT
    return out
def compute_rms_pair_only(X, Y, W_nm):
    coeffs, Rmax = compute_zernike_coeffs_chunked(X, Y, W_nm, N_MODES)
    Xc = X - np.mean(X); Yc = Y - np.mean(Y)
    r = np.hypot(Xc/Rmax, Yc/Rmax); th = np.arctan2(Yc, Xc)
    Z = np.column_stack([zernike_noll(j, r, th) for j in range(1, N_MODES+1)])
    W_res_filt = W_nm - Z[:, :FILTER_LOW_ORDERS].dot(coeffs[:FILTER_LOW_ORDERS])
    global_rms = float(np.sqrt((W_nm ** 2).mean()))
    filtered_rms = float(np.sqrt((W_res_filt ** 2).mean()))
    return global_rms, filtered_rms
def analyze_manufacturing_metrics(X, Y, W_nm):
    """
    Manufacturing-related metrics at any angle. Returns individual mode magnitudes
    and filtered RMS at different filter levels. (Numeric only.)
    """
    coeffs, Rmax = compute_zernike_coeffs_chunked(X, Y, W_nm, N_MODES)
    Xc = X - np.mean(X); Yc = Y - np.mean(Y)
    r = np.hypot(Xc/Rmax, Yc/Rmax); th = np.arctan2(Yc, Xc)
    Z = np.column_stack([zernike_noll(j, r, th) for j in range(1, N_MODES+1)])
    W_res_filt3 = W_nm - Z[:, :3].dot(coeffs[:3])   # J1-J3 removed; residual still includes defocus
    rms_filt3 = float(np.sqrt((W_res_filt3 ** 2).mean()))
    W_res_filt4 = W_nm - Z[:, :4].dot(coeffs[:4])   # J1-J4 removed; operational metric
    rms_filt4 = float(np.sqrt((W_res_filt4 ** 2).mean()))
    astig_rms = float(np.sqrt(coeffs[4]**2 + coeffs[5]**2))    # J5+J6
    coma_rms = float(np.sqrt(coeffs[6]**2 + coeffs[7]**2))     # J7+J8
    trefoil_rms = float(np.sqrt(coeffs[8]**2 + coeffs[9]**2))  # J9+J10
    return {
        "rms_filter_j1to3": rms_filt3,
        "rms_filter_j1to4": rms_filt4,
        "defocus_nm": float(abs(coeffs[3])),
        "astigmatism_rms": astig_rms,
        "coma_rms": coma_rms,
        "trefoil_rms": trefoil_rms,
        "spherical_nm": float(abs(coeffs[10]))
    }
def zernike_common_name(n: int, m: int) -> str:
    names = {
        (0, 0): "Piston",
        (1, -1): "Tilt X",
        (1, 1): "Tilt Y",
        (2, 0): "Defocus",
        (2, -2): "Astig 45 deg",
        (2, 2): "Astig 0 deg",
        (3, -1): "Coma X",
        (3, 1): "Coma Y",
        (3, -3): "Trefoil X",
        (3, 3): "Trefoil Y",
        (4, 0): "Primary Spherical",
        (4, -2): "Secondary Astig X",
        (4, 2): "Secondary Astig Y",
        (4, -4): "Quadrafoil X",
        (4, 4): "Quadrafoil Y",
        (5, -1): "Secondary Coma X",
        (5, 1): "Secondary Coma Y",
        (5, -3): "Secondary Trefoil X",
        (5, 3): "Secondary Trefoil Y",
        (5, -5): "Pentafoil X",
        (5, 5): "Pentafoil Y",
        (6, 0): "Secondary Spherical",
    }
    return names.get((n, m), f"Z(n={n}, m={m})")

def zernike_label_for_j(j: int) -> str:
    n, m = noll_indices(j)
    return f"J{j:02d} - {zernike_common_name(n, m)} (n={n}, m={m})"
def zernike_report(label, X, Y, W_nm, folder, base, is_relative: bool,
                   abs_pair=None, ref_title="20 deg", mfg_data=None, mfg_data_vs_20=None):
    coeffs, Rmax = compute_zernike_coeffs_chunked(X, Y, W_nm, N_MODES)
    Xc = X - np.mean(X); Yc = Y - np.mean(Y)
    r = np.hypot(Xc/Rmax, Yc/Rmax); th = np.arctan2(Yc, Xc)
    Z = np.column_stack([zernike_noll(j, r, th) for j in range(1, N_MODES+1)])
    W_res_filt = W_nm - Z[:, :FILTER_LOW_ORDERS].dot(coeffs[:FILTER_LOW_ORDERS])
    global_rms = float(np.sqrt((W_nm ** 2).mean()))
    filtered_rms = float(np.sqrt((W_res_filt ** 2).mean()))
    # Downsample for display only
    n = len(X)
    if n > PLOT_DOWNSAMPLE:
        rng = np.random.default_rng(42)
        sel = rng.choice(n, size=PLOT_DOWNSAMPLE, replace=False)
        Xp, Yp, Wp = X[sel], Y[sel], W_res_filt[sel]
    else:
        Xp, Yp, Wp = X, Y, W_res_filt
    res_amp = AMP * Wp
    max_amp = float(np.max(np.abs(res_amp))) if res_amp.size else 1.0
    # Triangulate the downsampled set
    mesh_traces = []
    try:
        tri = Triangulation(Xp, Yp)
        if tri.triangles is not None and len(tri.triangles) > 0:
            i, j, k = tri.triangles.T
            mesh_traces.append(go.Mesh3d(
                x=Xp, y=Yp, z=res_amp,
                i=i, j=j, k=k,
                intensity=res_amp,
                opacity=0.85, showscale=False
            ))
    except Exception:
        pass
    mesh_traces.append(go.Scatter3d(
        x=Xp, y=Yp, z=res_amp,
        mode='markers',
        marker=dict(size=2),
        showlegend=False
    ))
    labels = [zernike_label_for_j(j) for j in range(1, N_MODES+1)]
    coeff_abs = np.abs(coeffs)
    title_rel = f" (relative to {ref_title})" if is_relative else " (absolute)"
    # Layout
    if mfg_data is not None and mfg_data_vs_20 is not None:
        if SHOW_ZERNIKE_BAR:
            fig = make_subplots(
                rows=5, cols=1,
                specs=[[{"type": "scene"}],
                       [{"type": "table"}],
                       [{"type": "table"}],
                       [{"type": "table"}],
                       [{"type": "xy"}]],
                row_heights=[0.40, 0.12, 0.12, 0.18, 0.18],
                vertical_spacing=0.025,
                subplot_titles=[
                    f"<b>Surface Residual & RMS of WFE - Subcase {label}{title_rel}</b>",
                    "<b>RMS Metrics (Absolute 90 deg)</b>",
                    "<b>Mode Magnitudes at 90 deg</b>",
                    "<b>Pre-Correction (90 deg - 20 deg)</b>",
                    "<b>|Zernike Coefficients| (nm)</b>"
                ]
            )
        else:
            fig = make_subplots(
                rows=5, cols=1,
                specs=[[{"type": "scene"}],
                       [{"type": "table"}],
                       [{"type": "table"}],
                       [{"type": "table"}],
                       [{"type": "table"}]],
                row_heights=[0.40, 0.12, 0.12, 0.18, 0.18],
                vertical_spacing=0.025,
                subplot_titles=[
                    f"<b>Surface Residual & RMS of WFE - Subcase {label}{title_rel}</b>",
                    "<b>RMS Metrics (Absolute 90 deg)</b>",
                    "<b>Mode Magnitudes at 90 deg</b>",
                    "<b>Pre-Correction (90 deg - 20 deg)</b>",
                    f"<b>Zernike Coefficients ({N_MODES} modes)</b>"
                ]
            )
    elif SHOW_ZERNIKE_BAR:
        fig = make_subplots(
            rows=4, cols=1,
            specs=[[{"type": "scene"}],
                   [{"type": "table"}],
                   [{"type": "table"}],
                   [{"type": "xy"}]],
            row_heights=[0.52, 0.12, 0.24, 0.12],
            vertical_spacing=0.035,
            subplot_titles=[
                f"<b>Surface Residual & RMS of WFE - Subcase {label}{title_rel}</b>",
                "<b>RMS Metrics</b>",
                f"<b>Zernike Coefficients ({N_MODES} modes)</b>",
                "<b>|Zernike Coefficients| (nm)</b>"
            ]
        )
    else:
        fig = make_subplots(
            rows=3, cols=1,
            specs=[[{"type": "scene"}],
                   [{"type": "table"}],
                   [{"type": "table"}]],
            row_heights=[0.60, 0.14, 0.26],
            vertical_spacing=0.03,
            subplot_titles=[
                f"<b>Surface Residual & RMS of WFE - Subcase {label}{title_rel}</b>",
                "<b>RMS Metrics</b>",
                f"<b>Zernike Coefficients ({N_MODES} modes)</b>"
            ]
        )
    for tr in mesh_traces:
        fig.add_trace(tr, row=1, col=1)
    scene = fig.layout.scene
    scene.camera = dict(eye=dict(x=0.5, y=0.5, z=0.5))
    scene.zaxis = dict(range=[-max_amp * PANCAKE, max_amp * PANCAKE])
    # RMS table
    if is_relative and abs_pair is not None:
        abs_global, abs_filtered = abs_pair
        fig.add_trace(go.Table(
            header=dict(values=["<b>Metric</b>", "<b>Relative (nm)</b>", "<b>Absolute (nm)</b>"], align="left"),
            cells=dict(values=[
                ["Global RMS", "Filtered RMS (modes 1-4 removed)"],
                [f"{global_rms:.2f}", f"{filtered_rms:.2f}"],
                [f"{abs_global:.2f}", f"{abs_filtered:.2f}"],
            ], align="left")
        ), row=2, col=1)
    elif mfg_data is not None and mfg_data_vs_20 is not None:
        # 90 deg absolute RMS (numeric only)
        fig.add_trace(go.Table(
            header=dict(values=["<b>Metric</b>", "<b>Value (nm)</b>"], align="left"),
            cells=dict(values=[
                ["Global RMS", "Filtered RMS (J1-J4)"],
                [f"{global_rms:.2f}", f"{filtered_rms:.2f}"]
            ], align="left")
        ), row=2, col=1)
        # Mode magnitudes at 90 deg (absolute) -- numeric only
        fig.add_trace(go.Table(
            header=dict(values=["<b>Mode</b>", "<b>Value (nm)</b>"], align="left"),
            cells=dict(values=[
                ["Filtered RMS (J1-J3, with defocus)",
                 "Astigmatism (J5+J6)",
                 "Coma (J7+J8)",
                 "Trefoil (J9+J10)",
                 "Spherical (J11)"],
                [f"{mfg_data['rms_filter_j1to3']:.2f}",
                 f"{mfg_data['astigmatism_rms']:.2f}",
                 f"{mfg_data['coma_rms']:.2f}",
                 f"{mfg_data['trefoil_rms']:.2f}",
                 f"{mfg_data['spherical_nm']:.2f}"]
            ], align="left")
        ), row=3, col=1)
        # Pre-correction 90 deg - 20 deg -- numeric only
        fig.add_trace(go.Table(
            header=dict(values=["<b>Mode</b>", "<b>Correction (nm)</b>"], align="left"),
            cells=dict(values=[
                ["Total RMS (J1-J3 filter)",
                 "Defocus (J4)",
                 "Astigmatism (J5+J6)",
                 "Coma (J7+J8)"],
                [f"{mfg_data_vs_20['rms_filter_j1to3']:.2f}",
                 f"{mfg_data_vs_20['defocus_nm']:.2f}",
                 f"{mfg_data_vs_20['astigmatism_rms']:.2f}",
                 f"{mfg_data_vs_20['coma_rms']:.2f}"]
            ], align="left")
        ), row=4, col=1)
    elif mfg_data is not None:
        # Simple manufacturing table (compat) -- numeric only
        fig.add_trace(go.Table(
            header=dict(values=["<b>Metric</b>", "<b>Value (nm)</b>"], align="left"),
            cells=dict(values=[
                ["Global RMS",
                 "Filtered RMS (J1-J4)",
                 "Filtered RMS (J1-J3, with defocus)",
                 "Defocus (J4)",
                 "Astigmatism (J5+J6)",
                 "Coma (J7+J8)",
                 "Trefoil (J9+J10)",
                 "Spherical (J11)"],
                [f"{global_rms:.2f}",
                 f"{filtered_rms:.2f}",
                 f"{mfg_data['rms_filter_j1to3']:.2f}",
                 f"{mfg_data['defocus_nm']:.2f}",
                 f"{mfg_data['astigmatism_rms']:.2f}",
                 f"{mfg_data['coma_rms']:.2f}",
                 f"{mfg_data['trefoil_rms']:.2f}",
                 f"{mfg_data['spherical_nm']:.2f}"]
            ], align="left")
        ), row=2, col=1)
    else:
        # Standard absolute table
        fig.add_trace(go.Table(
            header=dict(values=["<b>Metric</b>", "<b>Value (nm)</b>"], align="left"),
            cells=dict(values=[
                ["Global RMS", "Filtered RMS (modes 1-4 removed)"],
                [f"{global_rms:.2f}", f"{filtered_rms:.2f}"]
            ], align="left")
        ), row=2, col=1)
    # Zernike coefficients table (the manufacturing case skips it and shows the bar chart only)
    if not (mfg_data is not None and mfg_data_vs_20 is not None):
        fig.add_trace(go.Table(
            header=dict(values=["<b>Noll j</b>", "<b>Label</b>", "<b>|Coeff| (nm)</b>"], align="left"),
            cells=dict(values=[
                list(range(1, N_MODES+1)),
                labels,
                list(np.round(coeff_abs, 3))
            ], align="left")
        ), row=3, col=1)
    if SHOW_ZERNIKE_BAR:
        bar_row = 5 if (mfg_data is not None and mfg_data_vs_20 is not None) else 4
        fig.add_trace(
            go.Bar(
                x=coeff_abs.tolist(),
                y=labels,
                orientation='h',
                hovertemplate="%{y}<br>|Coeff| = %{x:.3f} nm<extra></extra>",
                showlegend=False
            ),
            row=bar_row, col=1
        )
    fig.update_layout(
        width=1400,
        height=1400 if (mfg_data is not None and mfg_data_vs_20 is not None)
        else (1200 if SHOW_ZERNIKE_BAR else 1000),
        margin=dict(t=60, b=20)
    )
    # Save
    suffix = f"_sub{label}"
    html_path = os.path.join(folder, f"{base}{suffix}.html")
    fig.write_html(html_path, include_plotlyjs='cdn')
    pd.DataFrame({
        "Noll_j": list(range(1, N_MODES+1)),
        "Label": labels,
        "Coeff_nm": [float(c) for c in coeffs],
        "CoeffAbs_nm": [float(abs(c)) for c in coeffs]
    }).to_csv(os.path.join(folder, f"{base}{suffix}_zcoeffs.csv"), index=False)
    if OPEN_HTML:
        try:
            webbrowser.open('file://' + os.path.abspath(html_path))
        except Exception:
            pass
    return {
        "global_rms_nm": global_rms,
        "filtered_rms_nm": filtered_rms,
        "html_path": html_path
    }
# --------------------------------------
def write_exp_file(folder, abs_by_angle, rel20_by_angle, rel90_by_angle, mfg_90=None, mfg_90_vs_20=None):
    """
    Writes Iteration_results_expression.exp in [MilliMeter].
    For each angle a in {90, 20, 40, 60}, writes:
      - RMS_a_Global, RMS_a_Filtered (absolute)
      - RMS_a_Global_Rel20, RMS_a_Filtered_Rel20 (relative to 20 deg)
      - RMS_a_Global_Rel90, RMS_a_Filtered_Rel90 (relative to 90 deg)
    """
    required = ["90", "20", "40", "60"]
    out_path = os.path.join(folder, "Iteration_results_expression.exp")
    nm_to_mm = lambda x: float(x) * 1e-6
    def _missing(dct):
        return [a for a in required if a not in dct]
    miss_abs = _missing(abs_by_angle)
    miss_r20 = _missing(rel20_by_angle)
    miss_r90 = _missing(rel90_by_angle)
    if miss_abs or miss_r20 or miss_r90:
        raise RuntimeError(f"EXP export aborted. Missing sets -> "
                           f"abs:{miss_abs}, rel20:{miss_r20}, rel90:{miss_r90}.")
    lines = ["// Version: 3"]
    for ang in required:
        gA, fA = abs_by_angle[ang]
        g20, f20 = rel20_by_angle[ang]
        g90, f90 = rel90_by_angle[ang]
        lines.append(f"[MilliMeter]RMS_{ang}_Global={nm_to_mm(gA):.7f}")
        lines.append(f"[MilliMeter]RMS_{ang}_Filtered={nm_to_mm(fA):.7f}")
        lines.append(f"[MilliMeter]RMS_{ang}_Global_Rel20={nm_to_mm(g20):.7f}")
        lines.append(f"[MilliMeter]RMS_{ang}_Filtered_Rel20={nm_to_mm(f20):.7f}")
        lines.append(f"[MilliMeter]RMS_{ang}_Global_Rel90={nm_to_mm(g90):.7f}")
        lines.append(f"[MilliMeter]RMS_{ang}_Filtered_Rel90={nm_to_mm(f90):.7f}")
    # Manufacturing metrics for 90 deg (numeric only)
    if mfg_90_vs_20 and mfg_90:
        lines.append(f"[MilliMeter]RMS_90_Optician_Workload={nm_to_mm(mfg_90_vs_20['rms_filter_j1to3']):.7f}")
        lines.append(f"[MilliMeter]RMS_90_Astig_Abs={nm_to_mm(mfg_90['astigmatism_rms']):.7f}")
        lines.append(f"[MilliMeter]RMS_90_Coma_Abs={nm_to_mm(mfg_90['coma_rms']):.7f}")
    with open(out_path, "w", encoding="utf-8-sig", newline="\n") as f:
        f.write("\n".join(lines) + "\n")
    print(f"[OK] Wrote EXP: {out_path}")
def main():
    op2_path, dat_path = pick_files_cli()
    folder = os.path.dirname(op2_path)
    base = os.path.splitext(os.path.basename(op2_path))[0]
    results_dir = os.environ.get("ZERNIKE_RESULTS_DIR") or os.path.join(folder, "Results")
    os.makedirs(results_dir, exist_ok=True)
    env_run_id = os.environ.get("ZERNIKE_RUN_ID")
    if env_run_id:
        run_id = env_run_id
        try:
            run_dt = datetime.strptime(env_run_id, "%Y%m%d_%H%M%S")
        except Exception:
            run_dt = datetime.now()
    else:
        run_dt = datetime.now()
        run_id = run_dt.strftime("%Y%m%d_%H%M%S")
    base_ts = f"{base}_{run_id}"
    node_geo = read_geometry(dat_path)
    subdisp = read_displacements_by_subcase(op2_path)  # dict: '90','20','40','60'
    # ABS for 20 deg (reference for pages)
    if REF_LABEL not in subdisp:
        raise RuntimeError(f"Reference subcase '{REF_LABEL}' not found after mapping. "
                           "Check *_op2_displ_index.csv for labels.")
    df_ref_abs = build_dataframe_for_subcase(
        REF_LABEL,
        subdisp[REF_LABEL]["node_ids"],
        subdisp[REF_LABEL]["disp"],
        node_geo
    )
    df_ref_abs["WFE_nm"] = 2.0 * df_ref_abs["Disp_Z"] * NM_PER_UNIT
    Xref = df_ref_abs["X0"].to_numpy()
    Yref = df_ref_abs["Y0"].to_numpy()
    Wref = df_ref_abs["WFE_nm"].to_numpy()
    abs_ref_global, abs_ref_filtered = compute_rms_pair_only(Xref, Yref, Wref)
    # ABS for 90 deg (manufacturing reference; used for REL vs 90)
    if POLISH_LABEL not in subdisp:
        raise RuntimeError(f"Subcase '{POLISH_LABEL}' not found after mapping.")
    df90_abs = build_dataframe_for_subcase(
        POLISH_LABEL,
        subdisp[POLISH_LABEL]["node_ids"],
        subdisp[POLISH_LABEL]["disp"],
        node_geo
    )
    df90_abs["WFE_nm"] = 2.0 * df90_abs["Disp_Z"] * NM_PER_UNIT
    X90 = df90_abs["X0"].to_numpy()
    Y90 = df90_abs["Y0"].to_numpy()
    W90 = df90_abs["WFE_nm"].to_numpy()
    abs90_global, abs90_filtered = compute_rms_pair_only(X90, Y90, W90)
    combined_rows = []
    def emit_rows(df, label, is_relative):
        if is_relative:
            for row in df.itertuples(index=False):
                combined_rows.append({
                    "Subcase": label,
                    "NodeLabel": row.NodeLabel,
                    "X0": row.X0, "Y0": row.Y0, "Z0": row.Z0,
                    "Rel_Disp_Z": getattr(row, "Rel_Disp_Z"),
                    "WFE_nm": getattr(row, "Rel_WFE_nm"),
                    "DispUnit": DISP_SRC_UNIT,
                    "RelativeToRef": True,
                    "RefAngleDeg": REF_LABEL
                })
        else:
            for row in df.itertuples(index=False):
                combined_rows.append({
                    "Subcase": label,
                    "NodeLabel": row.NodeLabel,
                    "X0": row.X0, "Y0": row.Y0, "Z0": row.Z0,
                    "Disp_Z": row.Disp_Z,
                    "WFE_nm": row.WFE_nm,
                    "DispUnit": DISP_SRC_UNIT,
                    "RelativeToRef": False,
                    "RefAngleDeg": ""
                })
    # Emit absolute 20 deg
    emit_rows(df_ref_abs, REF_LABEL, is_relative=False)
    metrics_summary = [{
        "Subcase": f"sub{REF_LABEL}",
        "Abs_GlobalRMS_nm": abs_ref_global,
        "Abs_FilteredRMS_nm": abs_ref_filtered,
        "Rel20_GlobalRMS_nm": np.nan,  # reference has no relative (vs 20 deg)
        "Rel20_FilteredRMS_nm": np.nan
    }]
    abs_values_by_angle = {REF_LABEL: (abs_ref_global, abs_ref_filtered), POLISH_LABEL: (abs90_global, abs90_filtered)}
    rel20_values_by_angle = {REF_LABEL: (0.0, 0.0)}    # 20 vs 20 = 0
    rel90_values_by_angle = {POLISH_LABEL: (0.0, 0.0)}  # 90 vs 90 = 0
    # Process all angles (ABS; REL vs 20 deg for pages)
    all_angles = [str(a) for a in REQUIRED_SUBCASES]
    for ang in [a for a in all_angles if a != REF_LABEL]:
        if ang not in subdisp:
            raise RuntimeError(f"Subcase '{ang}' not found after mapping. "
                               "Check *_op2_displ_index.csv for labels.")
        if ang == POLISH_LABEL:
            abs_global, abs_filtered = abs_values_by_angle[ang]
            X_abs, Y_abs = X90, Y90
        else:
            df_abs = build_dataframe_for_subcase(ang, subdisp[ang]["node_ids"], subdisp[ang]["disp"], node_geo)
            W_abs_nm = 2.0 * df_abs["Disp_Z"].to_numpy() * NM_PER_UNIT
            X_abs = df_abs["X0"].to_numpy(); Y_abs = df_abs["Y0"].to_numpy()
            abs_global, abs_filtered = compute_rms_pair_only(X_abs, Y_abs, W_abs_nm)
            abs_values_by_angle[ang] = (abs_global, abs_filtered)
        # REL vs 20
        if ang == POLISH_LABEL:
            df_rel20 = compute_relative_dz(df_ref_abs, df90_abs)
        else:
            df_rel20 = compute_relative_dz(df_ref_abs, df_abs)
            emit_rows(df_rel20, ang, is_relative=True)  # only emit CSV rows for 40/60
        X_rel20 = df_rel20["X0"].to_numpy(); Y_rel20 = df_rel20["Y0"].to_numpy()
        W_rel20 = df_rel20["Rel_WFE_nm"].to_numpy()
        # HTML for 40 and 60 only
        if ang != POLISH_LABEL:
            m_rel20 = zernike_report(
                ang, X_rel20, Y_rel20, W_rel20, results_dir, base_ts,
                is_relative=True, abs_pair=(abs_global, abs_filtered), ref_title=REF_TITLE
            )
            rel20_values_by_angle[ang] = (m_rel20["global_rms_nm"], m_rel20["filtered_rms_nm"])
            metrics_summary.append({
                "Subcase": f"sub{ang}",
                "Abs_GlobalRMS_nm": abs_global,
                "Abs_FilteredRMS_nm": abs_filtered,
                "Rel20_GlobalRMS_nm": m_rel20["global_rms_nm"],
                "Rel20_FilteredRMS_nm": m_rel20["filtered_rms_nm"]
            })
        else:
            g20, f20 = compute_rms_pair_only(X_rel20, Y_rel20, W_rel20)
            rel20_values_by_angle[ang] = (g20, f20)
    # REL vs 90 for all angles (metrics only)
    for ang in all_angles:
        df_abs = build_dataframe_for_subcase(ang, subdisp[ang]["node_ids"], subdisp[ang]["disp"], node_geo)
        df_rel90 = compute_relative_dz(df90_abs, df_abs)
        X_rel90 = df_rel90["X0"].to_numpy(); Y_rel90 = df_rel90["Y0"].to_numpy()
        W_rel90 = df_rel90["Rel_WFE_nm"].to_numpy()
        g90, f90 = compute_rms_pair_only(X_rel90, Y_rel90, W_rel90)
        rel90_values_by_angle[ang] = (g90, f90)
    # Manufacturing metrics for 90
    mfg_metrics_90 = None
    mfg_metrics_90_vs_20 = None
    if POLISH_LABEL in subdisp:
        mfg_metrics_90 = analyze_manufacturing_metrics(X90, Y90, W90)
        df90_rel20 = compute_relative_dz(df_ref_abs, df90_abs)
        X90_rel = df90_rel20["X0"].to_numpy()
        Y90_rel = df90_rel20["Y0"].to_numpy()
        W90_rel = df90_rel20["Rel_WFE_nm"].to_numpy()
        mfg_metrics_90_vs_20 = analyze_manufacturing_metrics(X90_rel, Y90_rel, W90_rel)
        # Special HTML for 90 (numeric only)
        _m90 = zernike_report(
            POLISH_LABEL, X90, Y90, W90, results_dir, base_ts,
            is_relative=False,
            mfg_data=mfg_metrics_90,
            mfg_data_vs_20=mfg_metrics_90_vs_20
        )
    # HTML for 20 deg relative to 90 deg
    df_20_rel90 = compute_relative_dz(df90_abs, df_ref_abs)
    X_20_rel90 = df_20_rel90["X0"].to_numpy()
    Y_20_rel90 = df_20_rel90["Y0"].to_numpy()
    W_20_rel90 = df_20_rel90["Rel_WFE_nm"].to_numpy()
    _m_ref = zernike_report(
        REF_LABEL, X_20_rel90, Y_20_rel90, W_20_rel90, results_dir, base_ts,
        is_relative=True, abs_pair=(abs_ref_global, abs_ref_filtered), ref_title=POLISH_TITLE
    )
    # Combined nodal CSV
    comb_df = pd.DataFrame(combined_rows)
    comb_path = os.path.join(results_dir, f"{base_ts}_all_subcases_nodal.csv")
    comb_df.to_csv(comb_path, index=False)
    # Summary CSV (REL20 naming)
    summary_path = os.path.join(results_dir, f"{base_ts}_RMS_summary.csv")
    pd.DataFrame(metrics_summary, columns=[
        "Subcase",
        "Abs_GlobalRMS_nm", "Abs_FilteredRMS_nm",
        "Rel20_GlobalRMS_nm", "Rel20_FilteredRMS_nm"
    ]).to_csv(summary_path, index=False)
    # EXP export
    write_exp_file(results_dir, abs_values_by_angle, rel20_values_by_angle, rel90_values_by_angle,
                   mfg_90=mfg_metrics_90, mfg_90_vs_20=mfg_metrics_90_vs_20)
    # Persistent RMS log
    log_path = os.path.join(results_dir, "RMS_log.csv")
    row = {
        "RunID": run_id,
        "RunDateTimeLocal": run_dt.strftime("%Y-%m-%d %H:%M:%S"),
        "OP2_Base": base,
        "DispUnit": DISP_SRC_UNIT,
        "ReferenceAngleDeg": REF_LABEL
    }
    for ang in [str(a) for a in REQUIRED_SUBCASES]:
        gA, fA = abs_values_by_angle.get(ang, (np.nan, np.nan))
        gR20, fR20 = rel20_values_by_angle.get(ang, (np.nan, np.nan))
        gR90, fR90 = rel90_values_by_angle.get(ang, (np.nan, np.nan))
        row[f"Abs_GlobalRMS_nm_{ang}"] = gA
        row[f"Abs_FilteredRMS_nm_{ang}"] = fA
        row[f"Rel_GlobalRMS_nm_{ang}_vs20"] = gR20
        row[f"Rel_FilteredRMS_nm_{ang}_vs20"] = fR20
        row[f"Rel_GlobalRMS_nm_{ang}_vs90"] = gR90
        row[f"Rel_FilteredRMS_nm_{ang}_vs90"] = fR90
    if mfg_metrics_90 and mfg_metrics_90_vs_20:
        row["Mfg_90_OpticianWorkload_nm"] = mfg_metrics_90_vs_20['rms_filter_j1to3']
        row["Mfg_90_Astig_Abs_nm"] = mfg_metrics_90['astigmatism_rms']
        row["Mfg_90_Coma_Abs_nm"] = mfg_metrics_90['coma_rms']
    pd.DataFrame([row]).to_csv(
        log_path,
        mode="a",
        header=not os.path.exists(log_path),
        index=False
    )
    # Console output (numeric only; ASCII-safe)
    if mfg_metrics_90 and mfg_metrics_90_vs_20:
        print("\n" + "=" * 60)
        print("MANUFACTURING METRICS (90 deg horizontal polishing)")
        print("=" * 60)
        print("\n90 deg Absolute (surface shape on test stand):")
        print(f"  Filtered RMS (J1-J4): {mfg_metrics_90['rms_filter_j1to4']:.1f} nm")
        print(f"  Astigmatism (J5+J6):  {mfg_metrics_90['astigmatism_rms']:.1f} nm")
        print(f"  Coma (J7+J8):         {mfg_metrics_90['coma_rms']:.1f} nm")
        print(f"  Trefoil (J9+J10):     {mfg_metrics_90['trefoil_rms']:.1f} nm")
        print("\n90 deg - 20 deg (pre-correction to polish in):")
        print(f"  Total RMS (J1-J3):    {mfg_metrics_90_vs_20['rms_filter_j1to3']:.1f} nm")
        print(f"  Defocus (J4):         {mfg_metrics_90_vs_20['defocus_nm']:.1f} nm")
        print(f"  Astigmatism (J5+J6):  {mfg_metrics_90_vs_20['astigmatism_rms']:.1f} nm")
        print(f"  Coma (J7+J8):         {mfg_metrics_90_vs_20['coma_rms']:.1f} nm")
        print("=" * 60 + "\n")
    print("[OK] Wrote:")
    print("  -", comb_path)
    print("  -", summary_path)
    print("  -", log_path, "(appended)")
    for r in metrics_summary:
        tag = r["Subcase"]
        if tag == f"sub{REF_LABEL}":
            print(f"  - {base_ts}_{tag}.html (ABS Global={r['Abs_GlobalRMS_nm']:.2f} nm, "
                  f"ABS Filtered={r['Abs_FilteredRMS_nm']:.2f} nm)")
        else:
            print(f"  - {base_ts}_{tag}.html (REL(vs20) Global={r['Rel20_GlobalRMS_nm']:.2f} nm, "
                  f"REL(vs20) Filtered={r['Rel20_FilteredRMS_nm']:.2f} nm | "
                  f"ABS Global={r['Abs_GlobalRMS_nm']:.2f} nm, "
                  f"ABS Filtered={r['Abs_FilteredRMS_nm']:.2f} nm)")
if __name__ == "__main__":
try:
main()
except Exception as e:
msg = f"Fatal error: {e}"
print(msg)
if messagebox:
try:
messagebox.showerror("Fatal error", str(e))
except Exception:
pass