# War-Room Codex Analysis (Code-Level Reality)

Date: 2026-02-20
Scope: Static + runtime checks starting from `atomizer.py` and `optimization_engine/run_optimization.py`.

## 1) Entry Point Reality

### A. `atomizer.py` is runnable

- Verified: `python3 atomizer.py --help` exits `0`.
- Top-level imports are valid.
- Command handlers branch into two very different worlds:
  - **Atomizer UX path**: `intake`, `gate`, `finalize`.
  - **Legacy study-script launcher**: `neural-optimize` shells into per-study `studies/<study>/run_optimization.py`.

### B. `optimization_engine/run_optimization.py` is currently broken at import time

- Verified: `python3 optimization_engine/run_optimization.py --help` exits `1`.
- Failure chain:
  - `optimization_engine/run_optimization.py:39` imports `optimization_engine.future.llm_optimization_runner`,
  - which imports `optimization_engine.extractor_orchestrator` at `optimization_engine/future/llm_optimization_runner.py:26`,
  - but only `optimization_engine/future/extractor_orchestrator.py` exists.
- Result: this entrypoint never reaches argument parsing.

## 2) Import Chains and Dependency Graph

### 2.1 Atomizer chain (actual reachable modules)

```mermaid
graph TD
    A[atomizer.py]
    A --> TL[optimization_engine.config.template_loader]
    A --> AT[optimization_engine.processors.surrogates.auto_trainer]
    A --> SV[optimization_engine.validators.study_validator]
    A --> IN[optimization_engine.intake]
    A --> VG[optimization_engine.validation]
    A --> HR[optimization_engine.reporting.html_report]
    SV --> CV[validators.config_validator]
    SV --> MV[validators.model_validator]
    SV --> RV[validators.results_validator]
```

Key behavior:

- `neural-optimize` does **not** call a central engine runner; it executes `studies/<study>/run_optimization.py` via subprocess.
- This makes runtime behavior depend on many heterogeneous study scripts.
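The `neural-optimize` delegation described above can be sketched as follows. This is a minimal illustration of the pattern, not the repo's code: `launch_study` is a hypothetical helper, and the `studies/<study>/run_optimization.py` layout is taken from the findings above.

```python
import subprocess
import sys
from pathlib import Path

def launch_study(studies_root: Path, study_name: str) -> int:
    """Hypothetical sketch of the `neural-optimize` delegation.

    Each study ships its own run_optimization.py, so flags and runtime
    behavior can differ from study to study -- the heterogeneity noted
    in section 2.1.
    """
    script = studies_root / study_name / "run_optimization.py"
    if not script.is_file():
        raise FileNotFoundError(script)
    # The engine exercises no control over what the script does.
    return subprocess.run([sys.executable, str(script)]).returncode
```

Because the parent process only sees an exit code, any per-study divergence in arguments, logging, or error handling is invisible to the engine.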
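The section 1.B failure happens before argument parsing ever runs, so a fresh-interpreter import check in CI would catch this whole class of break. A stdlib-only sketch (`import_smoke_test` is a hypothetical helper, not existing tooling):

```python
import subprocess
import sys

def import_smoke_test(module: str) -> bool:
    """Return True if `module` imports cleanly in a fresh interpreter.

    Catches import-time failures (like the missing
    optimization_engine.extractor_orchestrator in section 1.B) before
    any CLI argument parsing is reached.
    """
    proc = subprocess.run(
        [sys.executable, "-c", f"import {module}"],
        capture_output=True,
        text=True,
    )
    return proc.returncode == 0

# A stdlib module imports cleanly; a missing module fails at import time.
print(import_smoke_test("json"))                # True
print(import_smoke_test("no_such_module_xyz"))  # False
```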
### 2.2 Unified runner chain (intended, but broken)

```mermaid
graph TD
    R[optimization_engine/run_optimization.py]
    R --> WA[future.llm_workflow_analyzer]
    R --> LR[future.llm_optimization_runner]
    R --> NXU[nx.updater]
    R --> NXS[nx.solver]
    R --> CR[core.runner]
    LR --> HM[plugins.hook_manager]
    LR -.broken import.-> EO["optimization_engine.extractor_orchestrator (missing)"]
    LR -.broken import.-> IC["optimization_engine.inline_code_generator (missing)"]
    LR -.broken import.-> HG["optimization_engine.hook_generator (missing)"]
```

Observations:

- `OptimizationRunner` is imported in `optimization_engine/run_optimization.py:40` but never used.
- Manual mode is scaffolding and exits early (`optimization_engine/run_optimization.py:286-320`).

## 3) Tight vs Loose Coupling

### Tight coupling (high-risk refactor areas)

- `atomizer.py` to repository layout (`studies/<study>/run_optimization.py`) and command shelling.
- `core/runner.py` to the config schema and the exact extractor return shape (`result[metric_name]`).
- Validation gate and intake to specific extractor functions and NX solver assumptions.
- Template loader to incorrect engine-relative paths and missing module names.

### Loose coupling (good seams)

- Hook function interface (`dict -> optional dict`) is flexible.
- Extractor call abstraction (`name -> callable`) in runners can be standardized.
- Lazy imports in `atomizer.py` for intake/gate/finalize reduce startup coupling.

## 4) Circular Dependencies

Detected import cycles are limited and not the main blocker:

- `optimization_engine.extractors -> optimization_engine.extractors.extract_zernike_figure -> optimization_engine.extractors`
- `optimization_engine.model_discovery -> optimization_engine.model_discovery` (package self-cycle artifact)

Main instability comes from **missing/incorrect module paths**, not classic circular imports.
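One low-risk way to make the missing top-level paths in section 2.2 resolve again, without moving files, is a `sys.modules` alias (or, equivalently, a one-line shim module that re-exports from `future/`). This is a sketch of the pattern only, demonstrated with stdlib names since the repo modules are not available here; it is not a recommendation drawn from the codebase itself.

```python
import importlib
import sys

def alias_module(missing_name: str, actual_name: str) -> None:
    """Register an existing module under a legacy import path.

    After this call, `import missing_name` resolves to the module
    living at actual_name, without moving any files on disk.
    """
    sys.modules[missing_name] = importlib.import_module(actual_name)

# Demonstrated with stdlib names; in the repo the pair would be, e.g.,
# ("optimization_engine.extractor_orchestrator",
#  "optimization_engine.future.extractor_orchestrator").
alias_module("legacy_json", "json")
import legacy_json
print(legacy_json.dumps({"ok": True}))  # {"ok": true}
```

For dotted names the parent package must still be importable, which holds here since `optimization_engine` itself imports fine.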
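Section 3 singles out the hook interface (`dict -> optional dict`) as a good seam. A minimal sketch of that contract as described (function names here are hypothetical, not the repo's):

```python
from typing import Callable, Iterable, Optional

Hook = Callable[[dict], Optional[dict]]

def apply_hooks(context: dict, hooks: Iterable[Hook]) -> dict:
    """Run hooks under the dict -> optional-dict contract:
    a hook may return an updated context, or None to leave it unchanged."""
    for hook in hooks:
        result = hook(context)
        if result is not None:
            context = result
    return context

double = lambda ctx: {**ctx, "value": ctx["value"] * 2}  # rewrites context
observe = lambda ctx: None                               # only observes
print(apply_hooks({"value": 3}, [double, observe]))      # {'value': 6}
```

The None-means-no-change convention is what makes the seam loose: observers and transformers share one signature.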
## 5) Dead Code / Orphan Findings

### 5.1 `optimization_engine/future/` (what is actually wired)

Direct non-test references:

- `llm_workflow_analyzer.py`: referenced by `optimization_engine/run_optimization.py`.
- `llm_optimization_runner.py`: referenced by `optimization_engine/run_optimization.py` and some study scripts.
- `report_generator.py`: referenced by a dashboard backend route.

Mostly test/deprecation only:

- `research_agent.py`, `step_classifier.py`, `targeted_research_planner.py`, `workflow_decomposer.py`, `pynastran_research_agent.py`.

Practically dead due to broken imports:

- `future/extractor_orchestrator.py`, `future/inline_code_generator.py`, and `future/hook_generator.py` are intended runtime pieces, but callers import them via missing top-level paths.

### 5.2 Extractors (used vs orphaned)

Clearly used in runtime paths:

- `extract_displacement.py`, `extract_von_mises_stress.py`, `bdf_mass_extractor.py` - via validation/intake/wizard/base runner.

Used mainly by studies/tools/dashboard:

- `extract_frequency.py`, `extract_mass_from_expression.py`, `extract_zernike*.py`, `op2_extractor.py`, `stiffness_calculator.py`.

Likely orphaned (no non-test references found):

- `field_data_extractor.py`
- `zernike_helpers.py`

Low-usage / isolated:

- `extract_stress_field_2d.py` (single tools reference)
- `extract_zernike_surface.py` (single study-script reference)

### 5.3 Orphaned module references (hard breaks)

Missing module imports in code:

- `optimization_engine.extractor_orchestrator` (missing)
- `optimization_engine.inline_code_generator` (missing)
- `optimization_engine.hook_generator` (missing)
- `optimization_engine.study_runner` (missing)

Evidence:

- `optimization_engine/future/llm_optimization_runner.py:26-28`
- `optimization_engine/config/setup_wizard.py:26-27`
- generated script template in `optimization_engine/config/template_loader.py:216`

## 6) Hooks/Plugins: Wired vs Scaffolding

There are **two distinct hook concepts**:

1. `optimization_engine/plugins/*`: lifecycle hook framework (`pre_solve`, `post_solve`, etc.) used by runners.
2. `optimization_engine/hooks/*`: NX Open CAD/CAE API wrappers (not plugin lifecycle hooks).

### What is actually wired

- `core/runner.py` executes hook points across the trial lifecycle.
- `future/llm_optimization_runner.py` also executes lifecycle hooks.

### Why most plugins are not actually loaded

- Plugin directory resolution is wrong at every main call site:
  - `core/runner.py:76` uses `Path(__file__).parent / 'plugins'` -> `optimization_engine/core/plugins` (does not exist).
  - `future/llm_optimization_runner.py:140` resolves to `optimization_engine/future/plugins` (does not exist).
  - `config/setup_wizard.py:426` has the same issue (`optimization_engine/config/plugins`).
- The real plugin directory is `optimization_engine/plugins/`.

### Additional plugin scaffolding mismatches

- The hook point enum uses `custom_objective` (`plugins/hooks.py:24`), but the directory present is `plugins/custom_objectives/` (plural).
- `safety_factor_constraint.py` defines `register_hooks` but returns `[hook]` without calling `hook_manager.register_hook(...)` (`plugins/post_calculation/safety_factor_constraint.py:88-90`), so the loader never registers it.

Net: hook execution calls exist, but the effective loaded-hook count is often zero.

## 7) Data Flow Through Actual Code

### 7.1 `atomizer.py` main flows

1. `neural-optimize`:
   - validate study via `validators.study_validator`
   - inspect training state via `AutoTrainer`
   - subprocess into `studies/<study>/run_optimization.py`
   - optional post-run retraining
2. `intake`:
   - `IntakeProcessor` populates context, introspection, baseline
3. `gate`:
   - `ValidationGate` validates spec + optional test trials + extractor probes
4. `finalize`:
   - `HTMLReportGenerator` builds report

### 7.2 `optimization_engine/run_optimization.py` intended flow

- Parse args -> validate `prt/sim`
- `--llm`: analyze request -> set up updater/solver closures -> `LLMOptimizationRunner` -> Optuna loop
- `--config`: currently a stub that exits

Actual current behavior: import-time crash before step 1.

## 8) High-Risk Inconsistencies Blocking Migration

- Broken import namespace split between `future/*` and the expected top-level modules.
- The template system points to wrong template/study roots and generates scripts importing the missing `study_runner`.
- The hook framework looks complete, but plugin discovery paths are wrong at all main call sites.
- `atomizer` delegates execution to many inconsistent study-local scripts, preventing any predictable architecture.
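The "no non-test references" criterion behind the orphan findings in section 5 can be approximated with a rough static scanner. This is a heuristic sketch (the actual tooling used for this report is not shown here), and it misses dynamic or aliased imports:

```python
import re
from pathlib import Path

def find_importers(root: Path, module: str) -> list[Path]:
    """List non-test .py files under root that import `module`.

    Zero hits suggests dead code, as with field_data_extractor.py and
    zernike_helpers.py in section 5.2. Heuristic only: static regex
    match, no dynamic-import or string-based-import detection.
    """
    pattern = re.compile(
        rf"^\s*(?:from|import)\s+{re.escape(module)}\b", re.M
    )
    hits = []
    for path in sorted(root.rglob("*.py")):
        if "test" in path.name:
            continue  # mirror the report's "non-test references" filter
        if pattern.search(path.read_text(errors="ignore")):
            hits.append(path)
    return hits
```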
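The plugin-directory bug in section 6 has a one-line shape: every call site resolves `plugins/` relative to its own file instead of the package root. A sketch of the corrected resolution, assuming the call sites stay one level below the package root (`core/`, `future/`, `config/`), which matches the paths cited in section 6:

```python
from pathlib import Path

def shared_plugin_dir(module_file: str) -> Path:
    """Resolve optimization_engine/plugins/ from a module one level down.

    The buggy call sites use Path(__file__).parent / 'plugins', which
    lands on a nonexistent sibling (core/plugins, future/plugins,
    config/plugins). Going up one more level reaches the package root,
    where the real plugins/ directory lives.
    """
    return Path(module_file).parent.parent / "plugins"

# e.g. called as shared_plugin_dir(__file__) from core/runner.py:
print(shared_plugin_dir("/repo/optimization_engine/core/runner.py"))
# /repo/optimization_engine/plugins
```

A more robust variant would anchor on the package's own `__file__` rather than counting parent levels, at the cost of an import.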
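Section 6 also notes that `safety_factor_constraint.py` returns its hooks instead of registering them, so the loader silently drops them. A tolerant loader could accept both plugin styles; the sketch below assumes only that the manager exposes `register_hook` (the name cited in section 6) and uses stand-in classes for the demo:

```python
def load_plugin_hooks(hook_manager, plugin_module):
    """Register hooks from either plugin style noted in section 6:
    plugins that self-register via hook_manager.register_hook(...),
    and plugins that merely return a list of hooks from register_hooks().
    Registering anything returned covers both (self-registering plugins
    are assumed to return None)."""
    returned = plugin_module.register_hooks(hook_manager)
    for hook in returned or []:
        hook_manager.register_hook(hook)

# Demo with stand-in classes, not the repo's HookManager.
class DemoManager:
    def __init__(self):
        self.hooks = []
    def register_hook(self, hook):
        self.hooks.append(hook)

class DemoPlugin:
    @staticmethod
    def register_hooks(manager):
        return [lambda ctx: ctx]  # returns instead of registering

manager = DemoManager()
load_plugin_hooks(manager, DemoPlugin)
print(len(manager.hooks))  # 1
```

This keeps existing self-registering plugins working while rescuing return-style plugins without editing them.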