Atomizer/docs/DEV/ARSENAL-DEVELOPMENT-PLAN.md

Arsenal Development Plan — Atomizer Multi-Solver Integration

Status: DRAFT — pending CEO approval
Date: 2026-06-25
Authors: Technical Lead (architecture + feasibility), Auditor (risk + gates), Manager (orchestration)
Source docs: Arsenal V3 Final, Tool-Agnostic Architecture, Multi-Solver Roadmap, Arsenal Reference


1. Executive Summary

Goal: Integrate the 80/20 highest-leverage open-source tools from the Atomizer Arsenal into V2, enabling fully scripted optimization without NX GUI dependency.

What this unlocks:

  • Linux-headless optimization loops (no NX license bottleneck)
  • 100s-1000s of trials per campaign (vs. current NX-constrained budgets)
  • Multi-solver validation (CalculiX vs Nastran cross-checking)
  • Path to CFD and multi-physics (expansion sprints)

The 80/20 stack (6 sprints):

| Priority | Tool | Category | Why |
|---|---|---|---|
| 🔴 P1 | CalculiX | FEA solver | Free structural solver, covers 80% of use cases |
| 🔴 P1 | Gmsh | Meshing | Universal mesher, Python API, STEP import |
| 🔴 P1 | Build123d | CAD | Code-native geometry, LLM-perfect, headless |
| 🔴 P1 | meshio + pyNastran | Converters | Format glue between all tools |
| 🔴 P1 | PyVista + ParaView | Post-processing | Automated visualization pipeline |
| 🟡 P2 | pymoo | Optimization | Multi-objective (NSGA-II/III) |
| 🟡 P2 | OpenFOAM | CFD | Thermal/flow screening |
| 🟢 P3 | preCICE | Coupling | Thermo-structural coupling |
| 🟢 P3 | OpenMDAO | MDO | System-level optimization |

Antoine's time commitment: ~2-3 hours per sprint for validation gates.
HQ autonomous work: days of implementation per sprint.


2. Architecture Overview

The Arsenal integrates through Atomizer's 3-layer tool-agnostic architecture:

┌─────────────────────────────────────────────────────────┐
│  Layer 3: OPTIMIZATION INTELLIGENCE                      │
│  AtomizerSpec v3.0 │ Optuna/pymoo │ LAC │ LLM Agents    │
│  ── never touches tool-specific formats ──               │
├─────────────────────────────────────────────────────────┤
│  Layer 2: TOOL PROCESSORS (deterministic Python)         │
│  Geometry:  nx_geometry │ build123d_geometry │ step_import│
│  Meshing:   gmsh_mesher │ nx_mesher                      │
│  Solvers:   nastran_solver │ calculix_solver │ openfoam   │
│  Convert:   meshio_converter │ pynastran_bridge          │
│  Post:      pyvista_post │ paraview_post                 │
├─────────────────────────────────────────────────────────┤
│  Layer 1: UNIVERSAL DATA CONTRACTS                       │
│  AtomizerGeometry │ AtomizerMesh │ AtomizerBCs           │
│  AtomizerMaterial │ AtomizerResults                      │
│  ── solver-agnostic dataclasses (the Rosetta Stone) ──  │
└─────────────────────────────────────────────────────────┘

Key rule: Adding a new tool = writing ONE processor file. Contracts never change per tool.
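The contract-plus-processor split can be sketched in a few lines. The field names below are illustrative placeholders, not the Sprint 1 contract specs:

```python
# Minimal sketch of the Layer 1 / Layer 2 split. Field names are
# illustrative only; the real contracts are specified in Sprint 1.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class AtomizerMesh:
    """Solver-agnostic mesh contract (placeholder fields)."""
    nodes: list                      # [(x, y, z), ...]
    elements: list                   # [(elem_type, node_ids), ...]
    quality: dict = field(default_factory=dict)

@dataclass
class AtomizerResults:
    """Solver-agnostic results contract (placeholder fields)."""
    max_stress: float
    max_displacement: float
    source_solver: str

class SolverProcessor(ABC):
    """Layer 2 base class: every solver lane implements the same interface."""
    @abstractmethod
    def solve(self, mesh: AtomizerMesh) -> AtomizerResults: ...

class CalculiXSolver(SolverProcessor):
    def solve(self, mesh: AtomizerMesh) -> AtomizerResults:
        # The real implementation writes .inp, runs ccx, parses .frd.
        return AtomizerResults(0.0, 0.0, source_solver="calculix")
```

Adding a Nastran lane then means one more `SolverProcessor` subclass; Layer 3 code only ever sees the contracts.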

2.1 V2 Repository Target Layout

atomizer/
├── contracts/
│   ├── geometry.py              # AtomizerGeometry
│   ├── mesh.py                  # AtomizerMesh
│   ├── boundary_conditions.py   # AtomizerBCs
│   ├── material.py              # AtomizerMaterial
│   ├── results.py               # AtomizerResults
│   └── validators.py            # Cross-contract sanity checks
├── processors/
│   ├── base.py                  # Abstract base classes
│   ├── geometry/
│   │   ├── build123d_geometry.py
│   │   ├── nx_geometry.py
│   │   └── step_import.py
│   ├── meshing/
│   │   ├── gmsh_mesher.py
│   │   └── nx_mesher.py
│   ├── solvers/
│   │   ├── nastran_solver.py
│   │   ├── calculix_solver.py
│   │   └── openfoam_solver.py   # Sprint 4
│   ├── converters/
│   │   ├── meshio_converter.py
│   │   └── pynastran_bridge.py
│   └── postprocessing/
│       ├── pyvista_post.py
│       └── paraview_post.py
├── orchestrator/
│   ├── pipeline.py              # Solver-agnostic run sequencing
│   ├── auto_select.py           # Toolchain routing logic
│   └── coupling.py              # Sprint 5: preCICE
├── optimization/
│   ├── engine.py                # Core optimization loop
│   ├── optuna_backend.py
│   └── pymoo_backend.py         # Sprint 6
└── mcp_servers/                 # Sprint 2-3
    ├── calculix/
    ├── gmsh/
    ├── build123d/
    └── pynastran/

3. Current Environment Assessment

| Item | Status | Action Required |
|---|---|---|
| V2 repo (atomizer/ package) | Not yet created — still V1 flat layout | Sprint 0: scaffold package structure |
| pyNastran | Installed | None |
| CalculiX (ccx) | Not installed | `apt install calculix-ccx` or Docker |
| Gmsh | Not installed | `pip install gmsh pygmsh` |
| Build123d | Not installed | `pip install build123d` |
| meshio | Not installed | `pip install meshio` |
| PyVista | Not installed | `pip install pyvista` |
| ParaView/pvpython | Not installed | `apt install paraview` |
| pymoo | Not installed | `pip install pymoo` (Sprint 6) |
| OpenFOAM | Not installed | Docker recommended (Sprint 4) |

First blocker: Package installs on T420. Either direct install or Docker-based dev environment.
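A quick readiness probe can turn the table above into a pass/fail check. The package and binary names below are taken from the table; adjust to the actual install list:

```python
# Readiness probe for the Sprint 0 environment. Package and binary names
# come from the environment table; extend as the stack grows.
import importlib.util
import shutil

def check_environment() -> dict:
    """Return {name: available?} for required Python packages and CLI tools."""
    packages = ["gmsh", "build123d", "meshio", "pyvista", "pyNastran"]
    binaries = ["ccx", "pvpython"]
    status = {p: importlib.util.find_spec(p) is not None for p in packages}
    status.update({b: shutil.which(b) is not None for b in binaries})
    return status

if __name__ == "__main__":
    for name, ok in check_environment().items():
        print(f"{'OK     ' if ok else 'MISSING'} {name}")
```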


4. Sprint Plan

Sprint 0: Environment + Scaffold (1-2 days)

Goal: Working dev environment and V2 package skeleton.

| Task | Owner | Autonomy | Deliverable |
|---|---|---|---|
| Install core Python packages (gmsh, build123d, meshio, pyvista) | Developer/IT | 🟢 Full | Working imports |
| Install CalculiX (`apt install calculix-ccx`) | Developer/IT | 🟢 Full | `ccx` on PATH |
| Scaffold `atomizer/` package with `__init__.py` stubs | Developer | 🟢 Full | Package importable |
| Create `tests/benchmarks/` with 3 analytical reference cases | Tech Lead | 🟢 Full | JSON expected-value files |

Analytical benchmarks (known-answer problems):

  1. Cantilever beam — tip deflection δ = PL³/(3EI) (linear static)
  2. Plate with hole — SCF ≈ 3.0 at hole edge (stress concentration)
  3. Simply supported beam — first natural frequency ω₁ = (π²/L²)√(EI/(ρA)) (modal)
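The JSON expected-value files can be generated directly from the closed-form theory. The material and load numbers below are placeholders to be replaced by the agreed benchmark definitions:

```python
# Expected values for the three analytical benchmarks, from closed-form
# theory. Material, section, and load values are placeholders only.
import json
import math

E = 210e9        # Young's modulus, Pa (steel, placeholder)
rho = 7850.0     # density, kg/m^3
L = 1.0          # beam length, m
b = h = 0.05     # square cross-section, m
I = b * h**3 / 12
A = b * h
P = 1000.0       # tip load, N

expected = {
    # Cantilever with end load: delta = P L^3 / (3 E I)
    "cantilever_tip_deflection_m": P * L**3 / (3 * E * I),
    # Infinite plate with circular hole, uniaxial tension: SCF = 3.0
    "plate_with_hole_scf": 3.0,
    # Simply supported beam, first natural frequency (rad/s):
    # omega_1 = (pi^2 / L^2) * sqrt(E I / (rho A))
    "ss_beam_omega1_rad_s": (math.pi**2 / L**2) * math.sqrt(E * I / (rho * A)),
}

print(json.dumps(expected, indent=2))
```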

Antoine gate: None — this is infrastructure.


Sprint 1: Contracts + Structural Bridge Baseline (1 week)

Goal: Contract-driven NX/Nastran baseline with format conversion verification.

| Task | Owner | Autonomy | Deliverable |
|---|---|---|---|
| Implement 5 contract dataclasses | Developer | 🟢 95% | `contracts/*.py` with full type hints |
| Implement validators.py | Developer | 🟢 95% | Unit-consistent checks, BC physics checks |
| Implement processors/base.py abstract classes | Tech Lead → Developer | 🟢 95% | GeometryProcessor, MeshProcessor, SolverProcessor ABCs |
| Wrap existing NX/Nastran into nastran_solver.py | NX Expert → Developer | 🟡 70% | Existing workflow through new interface |
| Implement meshio_converter.py | Developer | 🟢 95% | BDF ↔ INP ↔ MSH ↔ VTK round-trip |
| Implement pynastran_bridge.py | Developer | 🟢 95% | BDF read/modify, OP2 parse → AtomizerResults |
| Implement pipeline.py skeleton | Tech Lead → Developer | 🟢 90% | Sequential pipeline runner |
| Unit tests for all contracts + converters | Developer | 🟢 95% | pytest suite passing |

Success criterion: Existing NX/Nastran workflow passes through new architecture with zero regression.

Antoine gate: Provide 3 trusted benchmark studies with known expected outputs for parity check.

Risk: Hidden assumptions in legacy NX/Nastran parsing may break when abstracted.
Mitigation: Run full diff on old vs new output before declaring parity.
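For meshio_converter.py, the round-trip largely reduces to routing file extensions to meshio format keys. A sketch of that routing (format keys follow meshio's identifiers; reading CalculiX `.inp` via meshio's Abaqus reader is an assumption to verify in Sprint 1):

```python
# Format routing a meshio_converter might use. meshio infers most formats
# from the extension, but an explicit map keeps the BDF ↔ INP ↔ MSH ↔ VTK
# round-trip auditable. The .inp → "abaqus" mapping is an assumption.
import os

FORMAT_BY_EXTENSION = {
    ".bdf": "nastran",   # Nastran bulk data
    ".inp": "abaqus",    # CalculiX reads Abaqus-style input decks
    ".msh": "gmsh",      # Gmsh native
    ".vtk": "vtk",       # legacy VTK, for PyVista/ParaView
}

def meshio_format(path: str) -> str:
    """Map a mesh file path to its meshio format key."""
    ext = os.path.splitext(path)[1].lower()
    try:
        return FORMAT_BY_EXTENSION[ext]
    except KeyError:
        raise ValueError(f"no converter registered for '{ext}'") from None

# The converter itself would then be a thin wrapper such as:
#   mesh = meshio.read(src, file_format=meshio_format(src))
#   meshio.write(dst, mesh, file_format=meshio_format(dst))
```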


Sprint 2: Open Structural Lane — CalculiX + Gmsh + Build123d (1-2 weeks)

Goal: Full Linux-headless structural optimization loop with zero NX dependency.

| Task | Owner | Autonomy | Deliverable |
|---|---|---|---|
| Implement build123d_geometry.py | Developer | 🟢 90% | DV dict → Build123d → STEP |
| Implement step_import.py | Developer | 🟢 95% | STEP → AtomizerGeometry |
| Implement gmsh_mesher.py | Developer | 🟢 90% | STEP → Gmsh → AtomizerMesh (with quality metrics) |
| Implement calculix_solver.py | Tech Lead → Developer | 🟡 80% | .inp generation + ccx exec + .frd parsing → AtomizerResults |
| Implement pyvista_post.py | Developer | 🟢 95% | AtomizerResults → contour plots (PNG/VTK) |
| Implement optuna_backend.py (adapted) | Optimizer → Developer | 🟢 90% | Contract-aware Optuna loop |
| Run 3 analytical benchmarks through full pipeline | Tech Lead | 🟢 90% | Pass/fail report vs known answers |
| CalculiX vs Nastran parity comparison | Tech Lead | 🟡 70% | Delta report on shared benchmark |

Demo scenario:

Parametric bracket (thickness, fillet_radius, rib_width)
→ Build123d generates STEP
→ Gmsh meshes
→ CalculiX solves (SOL 101 equivalent)
→ PyVista renders stress contour
→ Optuna optimizes for min mass @ stress constraint
→ Full loop on Linux, no NX

Success criteria:

  • 3 analytical benchmarks pass within 2% of theory
  • CalculiX vs Nastran delta < 5% on stress/displacement for shared benchmark
  • Full optimization loop completes 50+ trials unattended

Antoine gate:

  1. Approve CalculiX vs Nastran parity tolerance (proposed: 5% on stress, 3% on displacement)
  2. Validate one benchmark result manually
  3. Review Build123d geometry quality for representative model

Risks:

  • Gmsh mesh quality on sharp fillets → Mitigation: define mesh size presets, add Jacobian quality check
  • CalculiX .frd parsing edge cases → Mitigation: test against multiple element types (CQUAD4 equivalent, CHEXA equivalent)
  • Build123d geometry failures on complex shapes → Mitigation: start with simple parametric bracket, escalate complexity gradually
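The unattended loop reduces to an objective function that Optuna evaluates once per trial. A minimal sketch, with a stub in place of the real Build123d → Gmsh → CalculiX chain (all names, surrogate formulas, and limits below are illustrative only):

```python
# Shape of the per-trial objective for the bracket demo. The pipeline call
# is a stub; the real version generates STEP, meshes, solves, post-processes.
STRESS_LIMIT = 250e6  # allowable stress, Pa (placeholder)

def run_structural_pipeline(thickness, fillet_radius, rib_width):
    """Stub standing in for Build123d -> Gmsh -> CalculiX -> PyVista."""
    mass = 2.0 * thickness + 0.5 * rib_width   # fake mass surrogate
    stress = 500e6 * (2e-3 / thickness)        # fake: thinner = more stress
    return {"mass_kg": mass, "max_stress_pa": stress}

def objective(trial):
    """Minimize mass subject to a hard stress constraint."""
    t = trial.suggest_float("thickness", 2e-3, 10e-3)
    r = trial.suggest_float("fillet_radius", 1e-3, 5e-3)
    w = trial.suggest_float("rib_width", 5e-3, 20e-3)
    res = run_structural_pipeline(t, r, w)
    if res["max_stress_pa"] > STRESS_LIMIT:
        return float("inf")  # infeasible trial: reject via penalty
    return res["mass_kg"]

# Handed to Optuna as roughly:
#   study = optuna.create_study(direction="minimize")
#   study.optimize(objective, n_trials=50)
```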

Sprint 3: MVP Hardening + Client-Grade Post-Processing (1 week)

Goal: Production-quality outputs from both solver lanes, side-by-side comparison capability.

| Task | Owner | Autonomy | Deliverable |
|---|---|---|---|
| Implement paraview_post.py | Developer | 🟢 90% | Publication-quality figures (headless pvpython) |
| Implement auto_select.py | Tech Lead → Developer | 🟡 75% | Toolchain routing rules (NX lane vs open lane) |
| Implement spec/validator.py | Developer | 🟢 95% | AtomizerSpec v3.0 schema validation |
| Dual-lane comparison report | Post-Processor → Developer | 🟢 85% | Side-by-side Nastran vs CalculiX with pass/fail |
| Regression test suite | Auditor → Developer | 🟢 90% | Automated benchmark + parity checks |
| MCP-CalculiX server | MCP Engineer → Developer | 🟡 70% | Tool-call interface for CalculiX |
| MCP-Gmsh server | MCP Engineer → Developer | 🟡 70% | Tool-call interface for Gmsh |
| Documentation: Integration Guide | Secretary | 🟢 95% | docs/DEV/INTEGRATION-GUIDE.md |

Success criteria:

  • Dual-lane comparison passes on all 3 benchmarks
  • Automated regression suite runs in CI
  • MCP servers pass basic smoke tests

Antoine gate:

  1. Approve report template and required figure list
  2. Define auto-select routing rules (when NX vs when open-source)
  3. Sign off MVP as "usable for real studies"
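A starting point for the routing logic in auto_select.py; the criteria below (analysis type, license availability, fidelity) are assumptions that the rules defined at this gate would replace:

```python
# Sketch of toolchain routing for auto_select.py. The criteria and the
# open-lane analysis types listed here are placeholders, pending the
# routing rules Antoine defines at the Sprint 3 gate.
def select_lane(analysis_type: str,
                nx_license_available: bool,
                high_fidelity_required: bool) -> str:
    """Return 'nx' or 'open' for the toolchain to use."""
    open_lane_types = {"linear_static", "modal", "stress_screening"}
    if analysis_type in open_lane_types and not high_fidelity_required:
        return "open"      # CalculiX/Gmsh lane, Linux-headless
    if nx_license_available:
        return "nx"        # NX/Nastran reference lane
    return "open"          # degrade gracefully when no license is free
```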

🏁 MVP EXIT — after Sprint 3, Atomizer can run full structural optimization on Linux without NX.


Sprint 4: CFD Lane — OpenFOAM (1-2 weeks)

Goal: Automated CFD/thermal screening within the same orchestration framework.

| Task | Owner | Autonomy | Deliverable |
|---|---|---|---|
| Install OpenFOAM (Docker recommended) | IT | 🟢 Full | Working simpleFoam / buoyantSimpleFoam |
| Implement openfoam_solver.py | Tech Lead → Developer | 🟡 65% | Case generation + execution + result parsing |
| Extend gmsh_mesher.py with CFD presets | Developer | 🟡 75% | Boundary layer meshing, inflation layers |
| Extend contracts for CFD quantities | Tech Lead → Developer | 🟡 70% | Pressure, velocity, temperature fields in AtomizerResults |
| 2 CFD validation cases | Tech Lead | 🟡 60% | Lid-driven cavity + pipe flow (known analytical/reference) |

Demo scenario:

Electronics enclosure airflow
→ STEP → Gmsh CFD mesh → OpenFOAM (buoyantSimpleFoam)
→ PyVista thermal map + pressure drop
→ Variant ranking by max temperature

Antoine gate:

  1. Lock one validated starter case template before generalization
  2. Define screening-fidelity defaults (mesh size, turbulence model)
  3. Validate thermal results against engineering judgment

Risk: OpenFOAM setup complexity is high. Dictionary validation is error-prone.
Mitigation: Start with one locked template, generalize only after it works.
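The locked-template mitigation can be enforced in code: copy the golden case directory and allow only whitelisted dictionary edits, never free-form generation. A sketch with illustrative paths and keys:

```python
# Sketch of the "one locked template" approach for openfoam_solver.py:
# copy a validated case and patch only whitelisted controlDict entries.
# Directory layout and entry names are illustrative.
import re
import shutil
from pathlib import Path

PATCHABLE = {"endTime", "writeInterval"}  # extend cautiously, per gate

def instantiate_case(template_dir: str, run_dir: str, overrides: dict) -> Path:
    """Copy the golden template and patch controlDict entries in place."""
    dst = Path(run_dir)
    shutil.copytree(template_dir, dst)
    control = dst / "system" / "controlDict"
    text = control.read_text()
    for key, value in overrides.items():
        if key not in PATCHABLE:
            raise ValueError(f"'{key}' is not a patchable entry")
        # OpenFOAM entries look like:  endTime  500;
        text, n = re.subn(rf"(?m)^(\s*{key}\s+)\S+;", rf"\g<1>{value};", text)
        if n == 0:
            raise ValueError(f"'{key}' not found in controlDict")
    control.write_text(text)
    return dst
```

Anything outside the whitelist stays byte-identical to the validated template, which keeps the error-prone dictionary surface small.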


Sprint 5: Coupled Thermo-Structural — preCICE (1-2 weeks)

Goal: Two-way coupled OpenFOAM ↔ CalculiX simulations.

| Task | Owner | Autonomy | Deliverable |
|---|---|---|---|
| Install preCICE + adapters | IT | 🟢 Full | preCICE + CalculiX adapter + OpenFOAM adapter |
| Implement coupling.py | Tech Lead → Developer | 🟡 60% | Coupled orchestration with convergence monitoring |
| 1 coupled benchmark | Tech Lead | 🔴 50% | Heated plate with thermal expansion |

Antoine gate: Approve coupling parameters and convergence criteria.

Risk: Coupling divergence, debugging complexity.
Mitigation: Conservative time-step, explicit convergence checks, one golden benchmark.


Sprint 6: Multi-Objective + MDO — pymoo + OpenMDAO (1-2 weeks)

Goal: Pareto optimization across multiple objectives and disciplines.

| Task | Owner | Autonomy | Deliverable |
|---|---|---|---|
| Implement pymoo_backend.py | Optimizer → Developer | 🟢 85% | NSGA-II/III alongside Optuna |
| Pareto front visualization | Post-Processor → Developer | 🟢 90% | Interactive Pareto plots |
| OpenMDAO integration (if scope allows) | Tech Lead → Developer | 🟡 60% | MDO wiring for multi-discipline |
| 1 multi-objective demo | Optimizer + Tech Lead | 🟡 70% | Mass vs stress vs temperature Pareto |
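For context on what NSGA-II ranks: the Pareto front is the non-dominated subset of candidates under minimization. A minimal pure-Python version of that relation (the candidate tuples below are made-up illustration data):

```python
# Pareto non-domination under minimization: the relation NSGA-II sorts by.
def dominates(a, b):
    """a dominates b if no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated points (all objectives minimized)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Illustrative (mass_kg, max_stress_mpa) candidates from an optimization run:
candidates = [(1.2, 180.0), (1.0, 220.0), (1.5, 170.0), (1.3, 200.0)]
print(pareto_front(candidates))
```

Here (1.3, 200.0) drops out because (1.2, 180.0) is better in both objectives; the other three trade mass against stress and survive.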

Antoine gate: Define objective scaling and hard constraints.


5. Semi-Autonomous Workflow

┌─────────────────────────────────────────────────────────┐
│                  PER-SPRINT LOOP                         │
│                                                          │
│  1. Antoine picks sprint / approves scope        [YOU]   │
│  2. HQ researches + designs architecture         [AUTO]  │
│  3. Auditor reviews design for risks             [AUTO]  │
│  4. Antoine approves architecture (15 min)       [YOU]   │
│  5. Developer implements + writes tests          [AUTO]  │
│  6. Auditor reviews code against protocols       [AUTO]  │
│  7. Antoine validates one benchmark (2-3 hrs)    [YOU]   │
│  8. Merge → next sprint                          [AUTO]  │
│                                                          │
│  Your time: ~2-3 hours/sprint                            │
│  HQ time: days of implementation/sprint                  │
└─────────────────────────────────────────────────────────┘

What runs fully autonomously (steps 2-3, 5-6):

  • Research tool capabilities, formats, Python bindings
  • Design processor architecture and contract extensions
  • Implement Python adapters, parsers, converters
  • Write and run unit tests against analytical benchmarks
  • Audit code against Atomizer protocols
  • Generate regression reports

What needs your engineering judgment (steps 1, 4, 7):

  • Tool priority sequencing (strategic call)
  • Architecture approval (15-min review)
  • Cross-solver parity acceptance (is 5% delta OK?)
  • Benchmark validation (does this match physics?)
  • Geometry quality assessment (does Build123d output look right?)

6. Quality Gates (Auditor-Defined)

Per-Processor Gate

  • Processor implements abstract base class correctly
  • Round-trip test: contract → native format → solve → parse → contract
  • At least 1 analytical benchmark passes within tolerance
  • Unit test coverage ≥ 80% for parser logic
  • Error handling for: missing files, solver crash, malformed output
  • Documentation: input/output formats, known limitations
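The error-handling requirement above can be centralized in one wrapper that distinguishes the three failure modes. The invocation details are illustrative, not the final calculix_solver.py API:

```python
# Sketch of the error-handling contract from the processor gate: a solver
# run must distinguish missing input, solver crash, and malformed output.
import subprocess
from pathlib import Path

class SolverError(Exception):
    """Raised with a categorized message so the pipeline can react."""

def run_solver(cmd: list, input_file: str, output_file: str) -> str:
    if not Path(input_file).exists():
        raise SolverError(f"missing input: {input_file}")
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        raise SolverError(f"solver crash (exit {proc.returncode}): "
                          f"{proc.stderr[:200]}")
    out = Path(output_file)
    if not out.exists() or out.stat().st_size == 0:
        raise SolverError(f"malformed/empty output: {output_file}")
    return out.read_text()
```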

Per-Sprint Gate

  • All processor gates pass
  • Integration test: full pipeline from AtomizerSpec → results
  • Regression: no existing tests broken
  • Parity check (if applicable): new solver vs reference solver
  • Tech Lead sign-off on physics validity
  • Auditor sign-off on code quality + protocol compliance

MVP Gate (Sprint 3 Exit)

  • 3 analytical benchmarks pass (cantilever, plate-hole, modal)
  • CalculiX vs Nastran parity < 5% stress, < 3% displacement
  • 50-trial optimization completes unattended
  • Dual-lane comparison report generates automatically
  • Antoine validates one real-world study through the new pipeline
  • All regression tests pass in CI

7. Risk Register

| # | Risk | Impact | Likelihood | Mitigation | Owner |
|---|---|---|---|---|---|
| R1 | CalculiX vs Nastran parity exceeds tolerance | High | Medium | Start with simple geometries, document element type mapping, investigate mesh sensitivity | Tech Lead |
| R2 | Gmsh mesh quality on complex geometry | Medium | Medium | Define quality thresholds, add Jacobian/aspect-ratio checks, fallback presets | Tech Lead |
| R3 | Build123d limitations on complex parametric models | Medium | Low | Start simple, escalate complexity, keep NX lane as fallback | Developer |
| R4 | Contract design locks in wrong abstractions | High | Low | Freeze contracts only after Sprint 1 validation, allow minor evolution in Sprint 2 | Tech Lead + Auditor |
| R5 | Package dependency conflicts on T420 | Low | Medium | Use venv or Docker dev environment | IT |
| R6 | OpenFOAM setup complexity delays Sprint 4 | Medium | High | Docker-first approach, one locked template | IT + Tech Lead |
| R7 | Scope creep into expansion tools before MVP stable | High | Medium | Hard gate: no Sprint 4+ work until Sprint 3 exit criteria met | Manager + Auditor |
| R8 | MCP server maintenance burden | Low | Medium | MVP cap at 4 MCP servers, thin wrappers only | MCP Engineer |

8. Tool Installation Reference

Sprint 0-2 (Core)

# Python packages
pip install gmsh pygmsh build123d meshio pyvista

# CalculiX solver
sudo apt install calculix-ccx

# ParaView (Sprint 3)
sudo apt install paraview

Sprint 4+ (Expansion)

# OpenFOAM via Docker
docker pull openfoam/openfoam-dev

# preCICE
sudo apt install libprecice-dev python3-precice

# pymoo
pip install pymoo

# OpenMDAO
pip install openmdao

9. Success Metrics

| Metric | Sprint 3 Target | Sprint 6 Target |
|---|---|---|
| Solver lanes operational | 2 (Nastran + CalculiX) | 4 (+ OpenFOAM + coupled) |
| Optimization trials/hr (bracket-class) | 50+ | 100+ |
| Analytical benchmarks passing | 3 | 8+ |
| Cross-solver parity verified | Nastran ↔ CalculiX | + OpenFOAM thermal |
| Automation level (no human in loop) | 70% | 85% |
| MCP servers | 2 (CalculiX, Gmsh) | 4 |

10. Immediate Next Steps

  1. Antoine: Approve this plan and Sprint 0-1 scope
  2. IT/Developer: Run Sprint 0 package installs (est. 1-2 hours)
  3. Tech Lead: Finalize contract dataclass specs with field-level detail
  4. Developer: Scaffold atomizer/ package structure
  5. Auditor: Prepare benchmark expected-value files for 3 analytical cases
  6. Manager: Create Mission Dashboard tasks for Sprint 0 + Sprint 1

This plan is a living document. Update after each sprint retrospective.