War-Room Codex Analysis (Code-Level Reality)
Date: 2026-02-20
Scope: Static + runtime checks starting from `atomizer.py` and `optimization_engine/run_optimization.py`.
1) Entry Point Reality
A. atomizer.py is runnable
- Verified: `python3 atomizer.py --help` exits 0.
- Top-level imports are valid.
- Command handlers branch into two very different worlds:
  - Atomizer UX path: `intake`, `gate`, `finalize`.
  - Legacy study-script launcher: `neural-optimize` shells into per-study `studies/<name>/run_optimization.py`.
B. optimization_engine/run_optimization.py is currently broken at import time
- Verified: `python3 optimization_engine/run_optimization.py --help` exits 1.
- Failure chain:
  - `optimization_engine/run_optimization.py:39` imports `optimization_engine.future.llm_optimization_runner`,
  - which imports `optimization_engine.extractor_orchestrator` at `optimization_engine/future/llm_optimization_runner.py:26`,
  - but only `optimization_engine/future/extractor_orchestrator.py` exists.
- Result: this entrypoint never reaches argument parsing.
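The namespace split above can be confirmed without running the entrypoint. A minimal triage sketch (standard library only; not part of the repository) that checks whether a dotted module path resolves in the current environment:

```python
import importlib.util

def resolves(module_path: str) -> bool:
    """Return True if the dotted module path is importable here."""
    try:
        return importlib.util.find_spec(module_path) is not None
    except ModuleNotFoundError:
        # Raised when a parent package in the dotted path is itself missing.
        return False

# Run from the repo root, this would confirm the split described above:
# resolves("optimization_engine.future.extractor_orchestrator")  # expected True
# resolves("optimization_engine.extractor_orchestrator")         # expected False
```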
2) Import Chains and Dependency Graph
2.1 Atomizer chain (actual reachable modules)
graph TD
A[atomizer.py]
A --> TL[optimization_engine.config.template_loader]
A --> AT[optimization_engine.processors.surrogates.auto_trainer]
A --> SV[optimization_engine.validators.study_validator]
A --> IN[optimization_engine.intake]
A --> VG[optimization_engine.validation]
A --> HR[optimization_engine.reporting.html_report]
SV --> CV[validators.config_validator]
SV --> MV[validators.model_validator]
SV --> RV[validators.results_validator]
Key behavior:
- `neural-optimize` does not call a central engine runner; it executes `studies/<study>/run_optimization.py` via subprocess.
- This makes runtime behavior depend on many heterogeneous study scripts.
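The delegation pattern described above can be sketched in a few lines. This is an illustration of the mechanism, not the repository's actual launcher code; `run_study` and its signature are hypothetical:

```python
import subprocess
import sys
from pathlib import Path

def run_study(study: str, studies_root: Path = Path("studies")) -> int:
    """Sketch of the launcher pattern: no engine runner is imported;
    the launcher simply shells into the study's own script."""
    script = studies_root / study / "run_optimization.py"
    if not script.exists():
        raise FileNotFoundError(f"no study script at {script}")
    # Each study script is a free-standing program, so behavior is
    # whatever that script happens to do.
    return subprocess.run([sys.executable, str(script)]).returncode
```

The design consequence is the one the report names: correctness depends on every `studies/<name>/run_optimization.py` independently, so there is no single code path to validate.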
2.2 Unified runner chain (intended, but broken)
graph TD
R[optimization_engine/run_optimization.py]
R --> WA[future.llm_workflow_analyzer]
R --> LR[future.llm_optimization_runner]
R --> NXU[nx.updater]
R --> NXS[nx.solver]
R --> CR[core.runner]
LR --> HM[plugins.hook_manager]
LR -.broken import.-> EO[optimization_engine.extractor_orchestrator (missing)]
LR -.broken import.-> IC[optimization_engine.inline_code_generator (missing)]
LR -.broken import.-> HG[optimization_engine.hook_generator (missing)]
Observations:
- `OptimizationRunner` is imported at `optimization_engine/run_optimization.py:40` but never used.
- Manual mode is scaffolding and exits early (`optimization_engine/run_optimization.py:286-320`).
3) Tight vs Loose Coupling
Tight coupling (high-risk refactor areas)
- `atomizer.py` to the repository layout (`studies/<study>/run_optimization.py`) and command shelling.
- `core/runner.py` to the config schema and the exact extractor return shape (`result[metric_name]`).
- Validation gate and intake to specific extractor functions and NX solver assumptions.
- Template loader to incorrect engine-relative paths and missing module names.
Loose coupling (good seams)
- The hook function interface (`dict -> optional dict`) is flexible.
- The extractor call abstraction (`name -> callable`) in runners can be standardized.
- Lazy imports in `atomizer.py` for intake/gate/finalize reduce startup coupling.
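The `dict -> optional dict` hook seam mentioned above is worth preserving through any refactor. A minimal sketch of that contract (the names `Hook` and `run_hooks` are illustrative, not the repository's API):

```python
from typing import Callable, Optional

# A hook receives the trial context and may return an updated context;
# returning None means "no change".
Hook = Callable[[dict], Optional[dict]]

def run_hooks(context: dict, hooks: list) -> dict:
    """Apply hooks in order; a None return leaves the context unchanged."""
    for hook in hooks:
        result = hook(context)
        if result is not None:
            context = result
    return context
```

Because each hook is just a function over a dict, the same seam works for both the lifecycle plugin framework and ad-hoc study-local logic.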
4) Circular Dependencies
Detected import cycles are limited and not the main blocker:
- `optimization_engine.extractors` -> `optimization_engine.extractors.extract_zernike_figure` -> `optimization_engine.extractors`
- `optimization_engine.model_discovery` -> `optimization_engine.model_discovery` (package self-cycle artifact)
Main instability is from missing/incorrect module paths, not classic circular imports.
5) Dead Code / Orphan Findings
5.1 optimization_engine/future/ (what is actually wired)
Direct non-test references:
- `llm_workflow_analyzer.py`: referenced by `optimization_engine/run_optimization.py`.
- `llm_optimization_runner.py`: referenced by `optimization_engine/run_optimization.py` and some study scripts.
- `report_generator.py`: referenced by a dashboard backend route.
Mostly test/deprecation only:
- `research_agent.py`, `step_classifier.py`, `targeted_research_planner.py`, `workflow_decomposer.py`, `pynastran_research_agent.py`.
Practically dead due to broken imports:
- `future/extractor_orchestrator.py`, `future/inline_code_generator.py`, and `future/hook_generator.py` are intended runtime pieces, but their callers import them via missing top-level paths.
5.2 Extractors (used vs orphaned)
Clearly used in runtime paths:
- `extract_displacement.py`, `extract_von_mises_stress.py`, `bdf_mass_extractor.py` (via validation/intake/wizard/base runner).
Used mainly by studies/tools/dashboard:
- `extract_frequency.py`, `extract_mass_from_expression.py`, `extract_zernike*.py`, `op2_extractor.py`, `stiffness_calculator.py`.
Likely orphaned (no non-test references found):
- `field_data_extractor.py`
- `zernike_helpers.py`
Low-usage / isolated:
- `extract_stress_field_2d.py` (single tools reference)
- `extract_zernike_surface.py` (single study script reference)
5.3 Orphaned module references (hard breaks)
- Missing module imports in code:
  - `optimization_engine.extractor_orchestrator` (missing)
  - `optimization_engine.inline_code_generator` (missing)
  - `optimization_engine.hook_generator` (missing)
  - `optimization_engine.study_runner` (missing)
Evidence:
- `optimization_engine/future/llm_optimization_runner.py:26-28`
- `optimization_engine/config/setup_wizard.py:26-27`
- generated script template in `optimization_engine/config/template_loader.py:216`
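Hard breaks like these are mechanically findable. A rough triage sketch (standard library only, not a full dependency analyzer; `missing_imports` is a hypothetical helper) that walks the repository's `.py` files and reports absolute imports that do not resolve:

```python
import ast
import importlib.util
from pathlib import Path

def missing_imports(root: Path) -> set:
    """Scan .py files under root and collect top-level absolute import
    targets that do not resolve in the current environment."""
    missing = set()
    for path in root.rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            names = []
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                # Skip relative imports; they need package context to resolve.
                names = [node.module]
            for name in names:
                try:
                    if importlib.util.find_spec(name) is None:
                        missing.add(name)
                except (ModuleNotFoundError, ValueError):
                    missing.add(name)
    return missing
```

Run against this repository, such a scan should surface exactly the four missing `optimization_engine.*` paths listed above.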
6) Hooks/Plugins: Wired vs Scaffolding
There are two distinct hook concepts:
- `optimization_engine/plugins/*`: a lifecycle hook framework (`pre_solve`, `post_solve`, etc.) used by runners.
- `optimization_engine/hooks/*`: NX Open CAD/CAE API wrappers (not plugin lifecycle hooks).
What is actually wired
- `core/runner.py` executes hook points across the trial lifecycle.
- `future/llm_optimization_runner.py` also executes lifecycle hooks.
Why most plugins are not actually loaded
- Wrong plugin directory resolution:
  - `core/runner.py:76` uses `Path(__file__).parent / 'plugins'` -> `optimization_engine/core/plugins` (does not exist).
  - `future/llm_optimization_runner.py:140` resolves to `optimization_engine/future/plugins` (does not exist).
  - `config/setup_wizard.py:426` has the same issue (`optimization_engine/config/plugins`).
- The real plugin directory is `optimization_engine/plugins/`.
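The bug pattern is the same at all three call sites: resolving relative to the calling module instead of the package root. A minimal illustration (the helper names are hypothetical; the paths are the report's):

```python
from pathlib import Path

def plugins_dir_buggy(module_file: str) -> Path:
    # Pattern at the current call sites: called from
    # optimization_engine/core/runner.py this yields
    # optimization_engine/core/plugins, which does not exist.
    return Path(module_file).parent / "plugins"

def plugins_dir_fixed(module_file: str, levels_up: int = 1) -> Path:
    # One possible fix: climb to the package root (optimization_engine/)
    # before appending "plugins", so every caller resolves the same
    # real directory regardless of which subpackage it lives in.
    return Path(module_file).parents[levels_up] / "plugins"
```

A single shared resolver (or a config value) would remove the per-caller guesswork entirely.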
Additional plugin scaffolding mismatches
- The hook point enum uses `custom_objective` (`plugins/hooks.py:24`), but the directory present is `plugins/custom_objectives/` (plural).
- `safety_factor_constraint.py` defines `register_hooks` but returns `[hook]` without calling `hook_manager.register_hook(...)` (`plugins/post_calculation/safety_factor_constraint.py:88-90`), so the loader never registers it.
Net: hook execution calls exist, but effective loaded-hook count is often zero.
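A hedged sketch of the registration contract the report implies: the loader counts only hooks registered through `hook_manager.register_hook(...)`, so a `register_hooks` that merely returns a list registers nothing. All names and the constraint logic below are illustrative, not the repository's actual API:

```python
from typing import Optional

class HookManager:
    """Toy stand-in for the plugin framework's hook manager."""
    def __init__(self):
        self.hooks = {}

    def register_hook(self, point: str, fn) -> None:
        self.hooks.setdefault(point, []).append(fn)

def register_hooks(hook_manager: HookManager) -> None:
    def safety_factor_constraint(context: dict) -> Optional[dict]:
        # Illustrative post-calculation check with a made-up threshold.
        sf = context.get("safety_factor")
        if sf is not None and sf < 1.5:
            return {**context, "constraint_violated": True}
        return None

    # The fix the report implies: actually register with the manager,
    # instead of returning an unregistered [hook] list.
    hook_manager.register_hook("post_calculation", safety_factor_constraint)
```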
7) Data Flow Through Actual Code
7.1 atomizer.py main flows
`neural-optimize`:
- validate the study via `validators.study_validator`
- inspect training state via `AutoTrainer`
- subprocess into `studies/<study>/run_optimization.py`
- optional post-run retraining
`intake`:
- `IntakeProcessor` populates context, introspection, and baseline
`gate`:
- `ValidationGate` validates the spec, plus optional test trials and extractor probes
`finalize`:
- `HTMLReportGenerator` builds the report
7.2 optimization_engine/run_optimization.py intended flow
- Parse args -> validate `prt`/`sim` inputs
- `--llm`: analyze request -> set up updater/solver closures -> `LLMOptimizationRunner` -> Optuna loop
- `--config`: currently a stub that exits
Actual current behavior: import-time crash before step 1.
8) High-Risk Inconsistencies Blocking Migration
- Broken import namespace split between `future/*` and the expected top-level modules.
- The template system points at wrong template/study roots and generates scripts that import the missing `study_runner`.
- The hook framework looks complete, but plugin discovery paths are wrong at all main call sites.
- `atomizer` delegates execution to many inconsistent study-local scripts, preventing a predictable architecture.