{"timestamp":"2025-12-22T11:05:00","category":"success_pattern","context":"Organized M1 Mirror documentation with parent-child README hierarchy","insight":"DOCUMENTATION PATTERN: Studies use a two-level README hierarchy. Parent README at studies/{geometry_type}/README.md contains project-wide context (optical specs, design variables catalog, objectives catalog, campaign history, sub-studies index). Child README at studies/{geometry_type}/{study_name}/README.md references parent and contains study-specific details (active variables, algorithm config, results). This eliminates duplication, maintains single source of truth for specs, and makes sub-study docs concise. Pattern documented in OP_01_CREATE_STUDY.md and study-creation-core.md.","confidence":0.95,"tags":["documentation","readme","hierarchy","study-creation","organization"],"rule":"When creating studies for a geometry type: (1) Create parent README with project context if first study, (2) Add reference banner to child README: '> See [../README.md](../README.md) for project overview', (3) Update parent's sub-studies index table when adding new sub-studies."}
{"timestamp":"2025-12-22T11:05:00","category":"success_pattern","context":"Created universal mirror optical specs extraction tool","insight":"TOOL PATTERN: Mirror optical specs (focal length, f-number, diameter) can be auto-estimated from FEA mesh geometry by fitting z = a*r² + b to node coordinates. Focal length = 1/(4*|a|). Tool at tools/extract_mirror_optical_specs.py works with any mirror study - just point it at an OP2 file or study directory. Reports fit quality to indicate if explicit focal length should be used instead. Use: python tools/extract_mirror_optical_specs.py path/to/study","confidence":0.9,"tags":["tools","mirror","optical-specs","zernike","opd","extraction"],"rule":"For mirror optimization: (1) Run extract_mirror_optical_specs.py to estimate optical prescription from mesh, (2) Validate against design specs, (3) Document in parent README, (4) Use explicit focal_length in ZernikeOPDExtractor if fit quality is poor."}
{"timestamp":"2025-12-22T11:05:00","category":"success_pattern","context":"Implemented OPD-based Zernike method for lateral support optimization","insight":"PHYSICS PATTERN: Standard Zernike WFE analysis uses Z-displacement at original (x,y) coordinates. This is INCORRECT for lateral support optimization where nodes shift in X,Y. The rigorous OPD method computes: surface_error = dz - delta_z_parabola where delta_z_parabola = -delta_r²/(4f) for concave mirrors. This accounts for the fact that laterally displaced nodes should be compared against parabola height at their NEW position. Implemented in extract_zernike_opd.py with ZernikeOPDExtractor class. Use extract_comparison() to see method difference. Threshold: >10µm lateral displacement = CRITICAL to use OPD.","confidence":1.0,"tags":["zernike","opd","lateral-support","mirror","wfe","physics"],"rule":"For mirror optimization with lateral supports or any case where X,Y displacement may be significant: (1) Use ZernikeOPDExtractor instead of ZernikeExtractor, (2) Run zernike_opd_comparison insight to check lateral displacement magnitude, (3) If max lateral >10µm, OPD method is CRITICAL."}
{"timestamp": "2025-12-24T08:13:38.640319", "category": "success_pattern", "context": "Neural surrogate turbo optimization with FEA validation", "insight": "For surrogate-based optimization, log FEA validation trials to a SEPARATE Optuna study.db for dashboard visibility. The surrogate exploration runs internally (not logged), but every FEA validation gets logged to Optuna using study.ask()/tell() pattern. This allows dashboard monitoring of FEA progress while keeping surrogate trials private.", "confidence": 0.95, "tags": ["surrogate", "turbo", "optuna", "dashboard", "fea", "neural"]}
{"timestamp": "2025-12-28T10:15:00", "category": "success_pattern", "context": "Unified trial management with TrialManager and DashboardDB", "insight": "TRIAL MANAGEMENT PATTERN: Use TrialManager for consistent trial_NNNN naming across all optimization methods (Optuna, Turbo, GNN, manual). Key principles: (1) Trial numbers NEVER reset (monotonic), (2) Folders NEVER get overwritten, (3) Database always synced with filesystem, (4) Surrogate predictions are NOT trials - only FEA results. DashboardDB provides Optuna-compatible schema for dashboard integration. Path: optimization_engine/utils/trial_manager.py", "confidence": 0.95, "tags": ["trial_manager", "dashboard_db", "optuna", "trial_naming", "turbo"]}
{"timestamp": "2025-12-28T10:15:00", "category": "success_pattern", "context": "GNN Turbo training data loading from multiple studies", "insight": "MULTI-STUDY TRAINING: When loading training data from multiple prior studies for GNN surrogate training, param names may have unit prefixes like '[mm]rib_thickness' or '[Degrees]angle'. Strip prefixes: if ']' in name: name = name.split(']', 1)[1]. Also, objective attribute names vary between studies (rel_filtered_rms_40_vs_20 vs obj_rel_filtered_rms_40_vs_20) - use fallback chain with 'or'. V5 successfully trained on 316 samples (V3: 297, V4: 19) with R²=[0.94, 0.94, 0.89, 0.95].", "confidence": 0.9, "tags": ["gnn", "turbo", "training_data", "multi_study", "param_naming"]}
{"timestamp": "2025-12-28T12:28:04.706624", "category": "success_pattern", "context": "Implemented L-BFGS gradient optimizer for surrogate polish phase", "insight": "L-BFGS on trained MLP surrogates provides 100-1000x faster convergence than derivative-free methods (TPE, CMA-ES) for local refinement. Key: use multi-start from top FEA candidates, not random initialization. Integration: GradientOptimizer class in optimization_engine/gradient_optimizer.py.", "confidence": 0.9, "tags": ["optimization", "lbfgs", "surrogate", "gradient", "polish"]}
{"timestamp": "2025-12-29T09:30:00", "category": "success_pattern", "context": "V6 pure TPE outperformed V5 surrogate+L-BFGS by 22%", "insight": "SIMPLE BEATS COMPLEX: V6 Pure TPE achieved WS=225.41 vs V5's WS=290.18 (22.3% better). Key insight: surrogates fail when gradient methods descend to OOD regions. Fix: EnsembleSurrogate with (1) N=5 MLPs for disagreement-based uncertainty, (2) OODDetector with KNN+z-score, (3) acquisition_score balancing exploitation+exploration, (4) trust-region L-BFGS that stays in training distribution. Never trust point predictions - always require uncertainty bounds. Protocol: SYS_16_SELF_AWARE_TURBO.md. Code: optimization_engine/surrogates/ensemble_surrogate.py", "confidence": 1.0, "tags": ["ensemble", "uncertainty", "ood", "surrogate", "v6", "tpe", "self-aware"]}
{"timestamp": "2025-12-29T09:47:47.612485", "category": "success_pattern", "context": "Disk space optimization for FEA studies", "insight": "Per-trial FEA files are ~150MB but only OP2+JSON (~70MB) are essential. PRT/FEM/SIM/DAT are copies of master files and can be deleted after study completion. Archive to dalidou server for long-term storage.", "confidence": 0.95, "tags": ["disk_optimization", "archival", "study_management", "dalidou"], "related_files": ["optimization_engine/utils/study_archiver.py", "docs/protocols/operations/OP_07_DISK_OPTIMIZATION.md"]}
{"timestamp": "2026-01-02T14:30:00", "category": "success_pattern", "context": "Study Interview Mode implementation and routing update", "insight": "STUDY CREATION DEFAULT: Interview Mode is now the DEFAULT for all study creation requests. Triggers: create a study, new study, set up study, optimize this, minimize mass - any study creation intent. Benefits: (1) Material-aware validation checks stress vs yield, (2) Anti-pattern detection warns about mass-no-constraint, (3) Auto extractor mapping E1-E10, (4) State persistence for interrupted sessions, (5) Blueprint generation with full validation. Skip with: skip interview, quick setup, manual config. Implementation: optimization_engine/interview/ with StudyInterviewEngine, QuestionEngine, EngineeringValidator, StudyBlueprint. All 129 tests passing.", "confidence": 1.0, "tags": ["interview_mode", "study_creation", "default", "validation", "anti_pattern", "materials"], "related_files": [".claude/skills/modules/study-interview-mode.md", "docs/protocols/operations/OP_01_CREATE_STUDY.md", "optimization_engine/interview/study_interview.py"]}
{"timestamp": "2026-01-02T14:45:00", "category": "success_pattern", "context": "Study Interview Mode implementation complete", "insight": "INTERVIEW MODE DEFAULT: Study creation now uses Interview Mode by default for all study creation requests. This is a major usability improvement. Triggers: create a study, new study, set up, optimize this - any study creation intent. Key features: (1) Material-aware validation with 12 materials and fuzzy name matching, (2) Anti-pattern detection for 12 common mistakes, (3) Auto extractor mapping E1-E24, (4) 7-phase interview flow, (5) State persistence for interrupted sessions, (6) Blueprint validation before generation. Skip with: skip interview, quick setup, manual. Implementation in optimization_engine/interview/ with 129 tests passing. Full documentation in: .claude/skills/modules/study-interview-mode.md, docs/protocols/operations/OP_01_CREATE_STUDY.md", "confidence": 1.0, "tags": ["interview_mode", "study_creation", "default", "usability", "materials", "anti_pattern", "validation"], "related_files": [".claude/skills/modules/study-interview-mode.md", "docs/protocols/operations/OP_01_CREATE_STUDY.md", "optimization_engine/interview/"]}
{"timestamp": "2026-01-22T13:00:00", "category": "success_pattern", "context": "DevLoop closed-loop development system implementation", "insight": "DEVLOOP PATTERN: Implemented autonomous development cycle that coordinates Gemini (planning) + Claude Code (implementation) + Dashboard (testing) + LAC (learning). 7-stage loop: PLAN -> BUILD -> TEST -> ANALYZE -> FIX -> VERIFY -> LOOP. Key components: (1) DevLoopOrchestrator in optimization_engine/devloop/, (2) DashboardTestRunner for automated testing, (3) GeminiPlanner for strategic planning with mock fallback, (4) ClaudeCodeBridge for implementation, (5) ProblemAnalyzer for failure analysis. API at /api/devloop/* with WebSocket for real-time updates. CLI tool at tools/devloop_cli.py. Frontend panel DevLoopPanel.tsx. Test with: python tools/devloop_cli.py test --study support_arm", "confidence": 0.95, "tags": ["devloop", "automation", "testing", "gemini", "claude", "dashboard", "closed-loop"], "related_files": ["optimization_engine/devloop/orchestrator.py", "tools/devloop_cli.py", "docs/guides/DEVLOOP.md"]}
{"timestamp": "2026-01-22T13:37:05.355957", "category": "success_pattern", "context": "Extracting mass from Nastran BDF files", "insight": "Use BDFMassExtractor from bdf_mass_extractor.py for reliable mass extraction. It uses elem.Mass() which handles unit conversions properly. The simpler extract_mass_from_bdf.py now wraps this.", "confidence": 0.9, "tags": ["mass", "bdf", "extraction", "pyNastran"]}
{"timestamp": "2026-01-22T13:47:38.696196", "category": "success_pattern", "context": "Stress extraction from NX Nastran OP2 files", "insight": "pyNastran returns stress in kPa for NX kg-mm-s unit system. Divide by 1000 to get MPa. Must check ALL solid element types (CTETRA, CHEXA, CPENTA, CPYRAM) to find true max. Elemental Nodal gives peak stress (143.5 MPa), Elemental Centroid gives averaged (100.3 MPa).", "confidence": 0.95, "tags": ["stress", "extraction", "units", "pyNastran", "nastran"]}
{"timestamp": "2026-01-22T15:12:01.584128", "category": "success_pattern", "context": "Dashboard study discovery", "insight": "Dashboard now supports atomizer_spec.json as primary config. Updated _load_study_info() in optimization.py to check atomizer_spec.json first, then fall back to optimization_config.json. Studies with atomizer_spec.json are now discoverable.", "confidence": 0.9, "tags": ["dashboard", "atomizer_spec", "config", "v2.0"]}
{"timestamp": "2026-01-22T15:12:01.584128", "category": "success_pattern", "context": "Extracting stress from NX Nastran results", "insight": "CONFIRMED: pyNastran returns stress in kPa for NX kg-mm-s unit system. Divide by 1000 for MPa. Must check ALL solid types (CTETRA, CHEXA, CPENTA, CPYRAM) - CHEXA often has highest stress. Elemental Nodal (143.5 MPa) vs Elemental Centroid (100.3 MPa) - use Nodal for conservative peak stress.", "confidence": 1.0, "tags": ["stress", "extraction", "units", "nastran", "verified"]}
{"timestamp": "2026-01-22T15:23:37.040324", "category": "success_pattern", "context": "Creating new study with DevLoop workflow", "insight": "DevLoop workflow: plan -> create dirs -> copy models -> atomizer_spec.json -> validate canvas -> run_optimization.py -> devloop test -> FEA validation. 8 steps completed for support_arm_lightweight.", "confidence": 0.95, "tags": ["devloop", "workflow", "study_creation", "success"]}
{"timestamp": "2026-01-22T15:23:37.040324", "category": "success_pattern", "context": "Single-objective optimization with constraints", "insight": "Single-objective with constraints: one objective in array, constraints use threshold+operator, penalty in objective function, canvas edges ext->obj for objective, ext->con for constraints.", "confidence": 0.9, "tags": ["optimization", "single_objective", "constraints", "canvas"]}
{"timestamp": "2026-01-22T16:15:11.449264", "category": "success_pattern", "context": "Atomizer UX System implementation - January 2026", "insight": "New study workflow: (1) Put files in studies/_inbox/project_name/models/, (2) Optionally add intake.yaml and context/goals.md, (3) Run atomizer intake project_name, (4) Run atomizer gate study_name to validate with test trials, (5) If passed, approve with --approve flag, (6) Run optimization, (7) Run atomizer finalize study_name to generate interactive HTML report. The CLI commands are: intake, gate, list, finalize.", "confidence": 1.0, "tags": ["workflow", "ux", "cli", "intake", "validation", "report"]}
{"timestamp": "2026-01-22T21:10:37.956764", "category": "success_pattern", "context": "Stage 3 arm study setup and execution with DevLoop", "insight": "DevLoop test command (devloop_cli.py test --study) successfully validated study setup before optimization. The 5 standard tests (directory, spec JSON, README, run_optimization.py, model dir) caught structure issues early. Full workflow: (1) Copy model files, (2) Create atomizer_spec.json with extractors/objectives/constraints, (3) Create run_optimization.py from template, (4) Create README.md, (5) Run DevLoop tests, (6) Execute optimization.", "confidence": 0.95, "tags": ["devloop", "study_creation", "workflow", "testing"]}