Arsenal Development Plan — Technical Architecture + Sprint Breakdown
Version: 1.0
Date: 2026-02-24
Author: Technical Lead
Status: Development Blueprint
Mission: Transform Atomizer from an NX/Nastran optimization tool into a multi-solver, multi-physics, multi-objective engineering optimization platform powered by the 80/20 highest-value Arsenal tools.
1. Executive Summary
This plan implements the Thin Contract + Smart Processor pattern identified in the Arsenal research to seamlessly integrate open-source simulation tools with Atomizer V2's existing optimization engine. The approach minimizes risk while maximizing value through incremental capability expansion.
Key Deliverables by Sprint
- Sprint 1-2: Universal format conversion + first open-source solver (CalculiX)
- Sprint 3-4: Multi-objective optimization + LLM-driven CAD generation
- Sprint 5-6: CFD/thermal capability + multi-physics coupling
Investment
- Software Cost: $0 (100% open-source tools)
- Development Time: 6 sprints (6 months)
- Risk Level: LOW (existing NX pipeline preserved as fallback)
2. Technical Architecture — Thin Contract + Smart Processor Pattern
2.1 The AtomizerData Contract (Universal Interchange)
The contract defines WHAT flows between tools at the semantic level, not HOW any particular tool stores it on disk. Each tool gets a thin processor that converts to/from its native format.
# atomizer/contracts/data_models.py
from dataclasses import dataclass
from typing import Dict, List, Optional
from pathlib import Path
from enum import Enum
class SolverType(Enum):
NASTRAN = "nastran"
CALCULIX = "calculix"
OPENFOAM = "openfoam"
FENICS = "fenics"
ELMER = "elmer"
@dataclass
class AtomizerGeometry:
"""3D geometry with named faces for boundary conditions"""
step_path: Path # STEP file (universal CAD exchange)
named_faces: Dict[str, str] # {"fixed_face": "bottom plane",
# "loaded_face": "top surface"}
design_variables: Dict[str, float] # {"thickness": 5.0, "rib_height": 20.0}
bounds: Dict[str, tuple] # {"thickness": (2.0, 15.0)}
    material_zones: Dict[str, str] # {"main_body": "steel", "insert": "aluminum"}
    mesh_path: Optional[Path] = None # volume mesh generated from step_path; read by solver processors
@dataclass
class AtomizerMesh:
"""Volume mesh with element and surface groups"""
mesh_path: Path # Native mesh file (.msh, .bdf, .inp)
format: str # "gmsh", "nastran", "calculix"
element_count: int
node_count: int
element_groups: Dict[str, str] # {"bracket": "solid elements"}
surface_groups: Dict[str, str] # {"fixed": "nodes on bottom"}
@dataclass
class AtomizerBCs:
"""Physics-agnostic boundary condition description"""
structural: List[Dict] # [{"type": "fixed", "surface": "bottom"}]
thermal: Optional[List[Dict]] # [{"type": "heat_flux", "value": 500}]
fluid: Optional[List[Dict]] # [{"type": "inlet", "velocity": [5,0,0]}]
@dataclass
class AtomizerMaterial:
"""Material properties for multi-physics"""
name: str # "Steel AISI 304"
structural: Dict[str, float] # {"E": 210000, "nu": 0.3, "rho": 7850}
thermal: Optional[Dict[str, float]] # {"k": 16.2, "cp": 500, "alpha": 1.7e-5}
fluid: Optional[Dict[str, float]] # {"mu": 1e-3, "rho": 1000}
@dataclass
class AtomizerResults:
    """Solver-agnostic analysis results"""
    vtk_path: Path # VTK file for visualization
    solver: SolverType
    physics_type: str # "structural", "thermal", "fluid"
    mass: float # [kg]
    convergence: Dict[str, bool] # {"solved": True, "converged": True}
    solve_time: float # [seconds]
    # Optional fields default to None so single-physics results can omit them
    max_stress: Optional[float] = None # [MPa]
    max_displacement: Optional[float] = None # [mm]
    max_temperature: Optional[float] = None # [°C]
    natural_frequencies: Optional[List[float]] = None # [Hz]
@dataclass
class AtomizerStudy:
"""Complete optimization study definition"""
name: str
geometry: AtomizerGeometry
mesh_settings: Dict # {"max_size": 3.0, "refinement": ["holes"]}
materials: List[AtomizerMaterial]
boundary_conditions: AtomizerBCs
objectives: List[Dict] # [{"minimize": "mass"}, {"minimize": "max_stress"}]
constraints: List[Dict] # [{"max_stress": {"<": 200}}]
solver_preferences: List[SolverType] # Order of preference
optimization_settings: Dict # Algorithm, population, generations
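Before any solver runs, the contract can be checked for internal consistency: every design variable needs a bound and must start inside it. A minimal sketch of the kind of check the planned validators module could perform (the function name `validate_design_space` is an assumption):

```python
# Sketch of a contract-level validator; the checks are illustrative, not final
def validate_design_space(design_variables: dict, bounds: dict) -> list:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    for name, value in design_variables.items():
        if name not in bounds:
            problems.append(f"{name}: no bounds defined")
            continue
        lo, hi = bounds[name]
        if lo >= hi:
            problems.append(f"{name}: empty bounds ({lo}, {hi})")
        elif not (lo <= value <= hi):
            problems.append(f"{name}: initial value {value} outside ({lo}, {hi})")
    return problems

# rib_height starts outside its bounds, so one problem is reported
issues = validate_design_space(
    {"thickness": 5.0, "rib_height": 25.0},
    {"thickness": (2.0, 15.0), "rib_height": (5.0, 20.0)},
)
```

Running such a check when an AtomizerStudy is constructed catches bad studies before any expensive meshing or solving starts.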
2.2 Tool-Specific Processors
Each tool gets a thin Python processor that handles format conversion deterministically (no LLM involved in conversion). The LLM orchestrates at the engineering level.
atomizer/processors/
├── __init__.py
├── base_processor.py # AbstractProcessor interface
├── gmsh_processor.py # Geometry → Mesh conversion
├── calculix_processor.py # AtomizerStudy ↔ CalculiX .inp/.frd
├── nastran_processor.py # AtomizerStudy ↔ Nastran .bdf/.op2
├── openfoam_processor.py # AtomizerStudy ↔ OpenFOAM case dir
├── fenics_processor.py # AtomizerStudy ↔ FEniCS Python script
├── build123d_processor.py # Parameters → Build123d → STEP
├── pyvista_processor.py # AtomizerResults → visualization
└── paraview_processor.py # AtomizerResults → report figures
2.3 Processor Interface
# atomizer/processors/base_processor.py
from abc import ABC, abstractmethod
from atomizer.contracts.data_models import AtomizerStudy, AtomizerResults
class AbstractProcessor(ABC):
"""Base class for all tool processors"""
@abstractmethod
def generate_input(self, study: AtomizerStudy) -> str:
"""Convert AtomizerStudy to tool's native input format"""
pass
@abstractmethod
def parse_results(self, output_path: str) -> AtomizerResults:
"""Parse tool's output to AtomizerResults"""
pass
@abstractmethod
def validate_setup(self, study: AtomizerStudy) -> bool:
"""Check if study is compatible with this processor"""
pass
2.4 Integration with Existing Atomizer Architecture
The Arsenal processors integrate alongside existing extractors and hooks:
EXISTING ATOMIZER PIPELINE:
AtomizerSpec → NX Journals → Nastran → OP2 Extractors → Optuna
NEW ARSENAL PIPELINE:
AtomizerSpec → AtomizerStudy → Processor → Open-Source Solver → AtomizerResults → pymoo
UNIFIED PIPELINE:
AtomizerSpec → Converter → {NX Pipeline | Arsenal Pipeline} → Unified Results → Optimization
Key Integration Points:
- AtomizerSpec v3.0 extended with multi-solver, multi-physics support
- New MultiSolverEngine orchestrates solver selection
- Existing hook system works with Arsenal processors
- LAC (Learning and Context) system enhanced for multi-solver optimization patterns
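The `solver_preferences` ordering implies a routing decision; a minimal sketch of the fallback logic a MultiSolverEngine could use (the `AVAILABLE` set and function name are assumptions, e.g. populated via `shutil.which` probes at startup):

```python
# Hypothetical solver-routing helper: take the first available solver
# from the study's preference list, falling back down the chain.
AVAILABLE = {"calculix", "nastran"}  # assumed: probed at startup

def select_solver(preferences: list, available: set = AVAILABLE) -> str:
    """Return the first preferred solver that is actually installed."""
    for solver in preferences:
        if solver in available:
            return solver
    raise RuntimeError(f"No solver available from preferences: {preferences}")

# OpenFOAM is not installed in this example, so CalculiX is chosen
chosen = select_solver(["openfoam", "calculix", "nastran"])
```

This keeps the existing NX/Nastran path usable as the last fallback while preferring open-source solvers when present.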
3. 80/20 Tool Selection & Validation
Based on the Arsenal research, these tools deliver 80% of the value with validated priority:
Tier 1: Universal Glue (Sprint 1)
- meshio ⭐⭐⭐⭐⭐ — Universal mesh format converter
- pyNastran ⭐⭐⭐⭐⭐ — Bridge to existing NX/Nastran world
- PyVista ⭐⭐⭐⭐⭐ — Instant visualization from Python
- Gmsh ⭐⭐⭐⭐⭐ — Universal meshing engine
Value: Connects everything. Any geometry → any mesh → any solver.
Tier 2: First Open-Source Solver (Sprint 2)
- CalculiX ⭐⭐⭐⭐⭐ — Free Abaqus-compatible FEA solver
- Build123d ⭐⭐⭐⭐ — LLM-friendly CAD generation
Value: NX-free optimization. Agents generate CAD from text.
Tier 3: Multi-Objective Optimization (Sprint 3)
- pymoo ⭐⭐⭐⭐⭐ — Proper Pareto front optimization
Value: Real engineering trade-offs. Consulting-grade deliverables.
Tier 4: Advanced Physics (Sprint 4-5)
- OpenFOAM ⭐⭐⭐⭐ — CFD/thermal capability
- preCICE ⭐⭐⭐⭐ — Multi-physics coupling
Value: Thermal + structural optimization. Heatsinks, electronics.
Tier 5: Advanced Capabilities (Sprint 6)
- FEniCS ⭐⭐⭐⭐ — Topology optimization via adjoint gradients
- pyvcad ⭐⭐⭐ — Lattice/AM structures
Value: Generative design. 30% weight reduction through topology optimization.
4. Sprint Breakdown
Sprint 1: Universal Glue Layer (Weeks 1-2)
Impact: ⭐⭐⭐⭐⭐ | Effort: LOW | Unlocks: Everything else
Deliverables:
- Universal format conversion pipeline
- Bridge between NX/Nastran and open-source tools
- Proof-of-concept round-trip validation
Technical Tasks:
pip install meshio pynastran gmsh pygmsh pyvista build123d
Python Modules to Create:
- atomizer/processors/meshio_processor.py
- atomizer/processors/pynastran_bridge.py
- atomizer/processors/gmsh_processor.py
- atomizer/processors/pyvista_processor.py
- atomizer/contracts/data_models.py
- atomizer/contracts/validators.py
Test Criteria:
- Convert existing Nastran BDF → CalculiX INP via meshio
- Build123d geometry → Gmsh mesh → Nastran BDF round-trip
- PyVista renders stress contours from sample OP2 file
- Accuracy: <1% difference in mass, volume between formats
Integration Points:
- Extend AtomizerSpec v2.0 → v2.1 with solver_preferences: List[str]
- Add MultiFormatExtractor to existing extractor library
- Hook into existing optimization engine via NXSolverEngine.get_alternative()
Antoine's Validation Gate:
- Review round-trip accuracy on existing LAC benchmark problems
- Approve AtomizerSpec v2.1 extensions
- Validate that existing NX workflows continue working unchanged
Dependencies: None (builds on existing Atomizer infrastructure)
Sprint 2: First Open-Source Solver (Weeks 3-4)
Impact: ⭐⭐⭐⭐⭐ | Effort: MEDIUM | Unlocks: NX-free optimization
Deliverables:
- CalculiX processor with full lifecycle integration
- First complete open-source optimization study
- Validation against NX/Nastran on LAC benchmarks
Technical Tasks:
sudo apt install calculix-ccx
Python Modules to Create:
- atomizer/processors/calculix_processor.py
- atomizer/solvers/calculix_engine.py
- atomizer/validation/solver_validator.py
- tests/test_calculix_integration.py
CalculiX Processor Implementation:
# atomizer/processors/calculix_processor.py
import meshio
import numpy as np
from pathlib import Path
from atomizer.contracts.data_models import AtomizerResults, AtomizerStudy, SolverType
from atomizer.processors.base_processor import AbstractProcessor

class CalculiXProcessor(AbstractProcessor):
    def generate_input(self, study: AtomizerStudy) -> str:
        """Convert AtomizerStudy → CalculiX .inp file"""
        self._study = study  # kept so parse_results can compute mass from material density
        # Read the volume mesh via meshio
        mesh = meshio.read(study.geometry.mesh_path)
        inp_content = []
        # Write nodes
        inp_content.append("*NODE")
        for i, point in enumerate(mesh.points, 1):
            inp_content.append(f"{i}, {point[0]:.6f}, {point[1]:.6f}, {point[2]:.6f}")
        # Write elements (assumes the first cell block holds 10-node tets from Gmsh)
        inp_content.append("*ELEMENT, TYPE=C3D10, ELSET=ALL")
        for i, cell in enumerate(mesh.cells[0].data, 1):
            nodes = ", ".join(str(n + 1) for n in cell)
            inp_content.append(f"{i}, {nodes}")
        # Materials
        for material in study.materials:
            inp_content.extend([
                f"*MATERIAL, NAME={material.name}",
                "*ELASTIC",
                f"{material.structural['E']}, {material.structural['nu']}",
                "*DENSITY",
                f"{material.structural['rho']}"
            ])
        # CalculiX requires a section assignment to tie material to elements
        inp_content.append(f"*SOLID SECTION, ELSET=ALL, MATERIAL={study.materials[0].name}")
        # Boundary conditions from contract
        step_content = ["*STEP", "*STATIC"]
        for bc in study.boundary_conditions.structural:
            if bc['type'] == 'fixed':
                # Solid elements carry translational DOFs 1-3 only
                step_content.append(f"*BOUNDARY\n{bc['surface']}, 1, 3, 0.0")
            elif bc['type'] == 'force':
                step_content.append(f"*CLOAD\n{bc['surface']}, 3, {bc['value'][2]}")
        step_content.extend(["*NODE FILE\nU", "*EL FILE\nS", "*END STEP"])
        inp_content.extend(step_content)
        return "\n".join(inp_content)

    def parse_results(self, frd_path: str) -> AtomizerResults:
        """Parse CalculiX .frd results → AtomizerResults"""
        # meshio cannot read .frd directly: convert to VTK first
        # (e.g. via ccx2paraview), then read the converted file
        vtk_path = frd_path.replace('.frd', '.vtk')
        self._convert_frd_to_vtk(frd_path, vtk_path)
        mesh_result = meshio.read(vtk_path)
        # Extract stress and displacement point data
        stress_data = mesh_result.point_data.get('S', np.zeros((1, 6)))
        displacement_data = mesh_result.point_data.get('U', np.zeros((1, 3)))
        max_stress = float(np.max(np.linalg.norm(stress_data, axis=1)))
        max_displacement = float(np.max(np.linalg.norm(displacement_data, axis=1)))
        return AtomizerResults(
            vtk_path=Path(vtk_path),
            solver=SolverType.CALCULIX,
            physics_type="structural",
            max_stress=max_stress,
            max_displacement=max_displacement,
            mass=self._calculate_mass(mesh_result, self._study.materials),
            convergence={"solved": True, "converged": True},
            solve_time=self._get_solve_time(frd_path)
        )
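`parse_results` leans on an assumed `_calculate_mass` helper; one way to implement it for tetrahedral meshes is to sum scalar-triple-product tet volumes and multiply by density (a sketch assuming a single material and corner-node connectivity):

```python
import numpy as np

def calculate_mass(points: np.ndarray, tets: np.ndarray, rho: float) -> float:
    """Mass = density x sum of tetrahedron volumes (corner nodes only)."""
    a = points[tets[:, 0]]
    b = points[tets[:, 1]]
    c = points[tets[:, 2]]
    d = points[tets[:, 3]]
    # V = |(b-a) x (c-a) . (d-a)| / 6 for each tet
    vols = np.abs(np.einsum('ij,ij->i', np.cross(b - a, c - a), d - a)) / 6.0
    return float(rho * vols.sum())

# Unit tetrahedron: volume 1/6; steel density 7850 kg/m^3
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tet = np.array([[0, 1, 2, 3]])
mass = calculate_mass(pts, tet, 7850.0)
```

For C3D10 elements the first four nodes of each cell are the corners, so the same formula applies after slicing `cells[:, :4]`.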
Test Criteria (Benchmark Problems):
- Cantilever beam: Analytical vs CalculiX vs NX/Nastran comparison
- Plate with hole: Stress concentration validation
- Modal analysis: Natural frequencies comparison
- Thermal stress: Coupled thermal-structural loading
- Nonlinear contact: Large deformation with contact
Success Criteria: <5% error vs analytical solutions, <3% error vs NX/Nastran
Integration Points:
- Add CalculiXEngine to optimization_engine/solvers/
- Extend MultiSolverEngine to support solver fallback logic
- Hook integration: all existing hooks work with CalculiX processors
- LAC integration: CalculiX results feed into learning patterns
Antoine's Validation Gate:
- Run full optimization study on LAC benchmark using only CalculiX
- Compare convergence rate, final optima vs existing NX optimization
- Approve solver selection logic and fallback mechanisms
Dependencies: Sprint 1 completed (meshio, format conversion working)
Sprint 3: Multi-Objective Optimization (Weeks 5-6)
Impact: ⭐⭐⭐⭐⭐ | Effort: LOW-MEDIUM | Unlocks: Real engineering trade-offs
Deliverables:
- Pareto front optimization with NSGA-II
- Multi-objective visualization dashboard
- Client-grade trade-off analysis reports
Python Modules to Create:
- atomizer/optimizers/pymoo_engine.py
- atomizer/visualization/pareto_plots.py
- atomizer/reporting/tradeoff_analysis.py
- optimization_engine/objectives/multi_objective.py
pymoo Integration:
# atomizer/optimizers/pymoo_engine.py
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import Problem
from pymoo.optimize import minimize
from atomizer.contracts.data_models import AtomizerStudy

class AtomizerMultiObjectiveProblem(Problem):
    def __init__(self, atomizer_study: AtomizerStudy, processor_engine):
        self.study = atomizer_study
        self.processor = processor_engine
        # Extract design variables and objectives from the study
        n_var = len(atomizer_study.geometry.design_variables)
        n_obj = len(atomizer_study.objectives)
        n_constr = len(atomizer_study.constraints)
        # Get bounds from the study (dict order matches design_variables)
        xl = [bounds[0] for bounds in atomizer_study.geometry.bounds.values()]
        xu = [bounds[1] for bounds in atomizer_study.geometry.bounds.values()]
        super().__init__(n_var=n_var, n_obj=n_obj, n_constr=n_constr, xl=xl, xu=xu)
def _evaluate(self, X, out, *args, **kwargs):
"""Evaluate population X"""
objectives = []
constraints = []
for individual in X:
# Update geometry with new design variables
updated_study = self._update_study_variables(individual)
# Run simulation
results = self.processor.run_study(updated_study)
# Extract objectives
obj_values = []
for objective in self.study.objectives:
if objective['minimize'] == 'mass':
obj_values.append(results.mass)
elif objective['minimize'] == 'max_stress':
obj_values.append(results.max_stress)
elif objective['minimize'] == 'compliance':
obj_values.append(results.compliance)
objectives.append(obj_values)
# Extract constraints
constr_values = []
for constraint in self.study.constraints:
for field, condition in constraint.items():
result_value = getattr(results, field)
if '<' in condition:
constr_values.append(result_value - condition['<'])
constraints.append(constr_values)
out["F"] = np.array(objectives)
if constraints:
out["G"] = np.array(constraints)
class PymooEngine:
    def run_optimization(self, study: AtomizerStudy, algorithm="NSGA2", **kwargs):
        problem = AtomizerMultiObjectiveProblem(study, self.processor_engine)
        if algorithm == "NSGA2":
            optimizer = NSGA2(pop_size=kwargs.get('pop_size', 40))
        elif algorithm == "NSGA3":
            # NSGA-III requires reference directions, not just a population size
            from pymoo.algorithms.moo.nsga3 import NSGA3
            from pymoo.util.ref_dirs import get_reference_directions
            ref_dirs = get_reference_directions("das-dennis", problem.n_obj, n_partitions=12)
            optimizer = NSGA3(ref_dirs=ref_dirs)
        else:
            raise ValueError(f"Unsupported algorithm: {algorithm}")
        result = minimize(
            problem,
            optimizer,
            termination=('n_gen', kwargs.get('n_gen', 100))
        )
        return self._format_pareto_results(result)
Pareto Visualization:
# atomizer/visualization/pareto_plots.py
import plotly.graph_objects as go
import plotly.express as px
class ParetoVisualizer:
def create_pareto_plot(self, pareto_front, objectives, design_vars):
"""Create interactive Pareto front plot"""
if len(objectives) == 2:
return self._plot_2d_pareto(pareto_front, objectives, design_vars)
elif len(objectives) == 3:
return self._plot_3d_pareto(pareto_front, objectives, design_vars)
else:
return self._plot_parallel_coordinates(pareto_front, objectives, design_vars)
def _plot_2d_pareto(self, front, objectives, design_vars):
fig = go.Figure()
# Pareto front
fig.add_trace(go.Scatter(
x=front[:, 0],
y=front[:, 1],
mode='markers',
marker=dict(size=10, color='red'),
hovertemplate='<b>Design Point</b><br>' +
f'{objectives[0]}: %{{x:.3f}}<br>' +
f'{objectives[1]}: %{{y:.3f}}<br>' +
'<extra></extra>',
name='Pareto Front'
))
fig.update_layout(
title='Pareto Front Analysis',
xaxis_title=objectives[0],
yaxis_title=objectives[1],
hovermode='closest'
)
return fig
Test Criteria:
- 2-objective optimization: minimize mass + minimize max_stress
- 3-objective optimization: mass + stress + displacement
- Pareto front coverage: >90% of theoretical front covered
- Performance: 40-individual population × 100 generations in <4 hours
AtomizerSpec v2.2 Extensions:
# Support for multiple objectives
"objectives": [
{"minimize": "mass", "weight": 1.0},
{"minimize": "max_stress", "weight": 1.0},
{"minimize": "max_displacement", "weight": 0.5}
],
"optimization": {
"algorithm": "NSGA2", # or "NSGA3", "TPE"
"population_size": 40,
"generations": 100,
"pareto_analysis": True
}
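The per-objective `weight` fields suggest a simple post-Pareto selection rule: normalise each objective across the front and pick the design minimising the weighted sum (a sketch; TOPSIS, mentioned in the validation gate, refines the same idea):

```python
import numpy as np

def pick_weighted_design(front: np.ndarray, weights: np.ndarray) -> int:
    """Return the index of the Pareto point minimising the weighted normalised sum."""
    lo = front.min(axis=0)
    span = front.max(axis=0) - lo
    span[span == 0] = 1.0               # guard degenerate objectives
    normalised = (front - lo) / span    # each objective mapped to [0, 1]
    scores = normalised @ weights
    return int(np.argmin(scores))

# Mass [kg] vs max stress [MPa] for three candidate designs;
# equal weights pick the balanced middle design
front = np.array([[2.0, 180.0], [3.0, 120.0], [4.5, 100.0]])
idx = pick_weighted_design(front, np.array([1.0, 1.0]))
```

Normalising first matters: without it, the objective with the largest numeric range would silently dominate the weighted sum.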
Antoine's Validation Gate:
- Review Pareto front plots for LAC benchmark problems
- Validate that trade-offs make engineering sense
- Approve decision-making tools (TOPSIS, weighted selection)
Dependencies: Sprint 2 completed (CalculiX working)
Sprint 4: LLM-Driven CAD Generation (Weeks 7-8)
Impact: ⭐⭐⭐⭐ | Effort: MEDIUM | Unlocks: Agent-generated geometry
Deliverables:
- Build123d processor for parametric CAD generation
- LLM agent generates geometry from text descriptions
- Integration with Zoo Text-to-CAD API for concept generation
Python Modules to Create:
- atomizer/processors/build123d_processor.py
- atomizer/agents/cad_generator.py
- atomizer/templates/build123d_library.py
- atomizer/geometry/validation_agent.py
Build123d Processor:
# atomizer/processors/build123d_processor.py
from pathlib import Path
from typing import Dict
from build123d import *

class Build123dProcessor:
    def generate_from_parameters(self, design_vars: Dict[str, float], template: str) -> Path:
        """Generate STEP geometry from design variables using a Build123d template"""
        if template == "bracket":
            return self._generate_bracket(design_vars)
        elif template == "plate_with_holes":
            return self._generate_plate(design_vars)
        elif template == "housing":
            return self._generate_housing(design_vars)
        else:
            raise ValueError(f"Unknown template: {template}")
def _generate_bracket(self, vars: Dict[str, float]) -> Path:
"""Generate parametric L-bracket"""
with BuildPart() as bracket:
# Base plate
Box(vars['length'], vars['width'], vars['thickness'])
# Vertical wall
with Locations((0, 0, vars['thickness'])):
Box(vars['length'], vars['wall_thickness'], vars['height'])
            # Ribs (if enabled)
            if vars.get('ribs', True):
                rib_count = int(vars.get('rib_count', 3))
                for i in range(rib_count):
                    x_pos = (i + 1) * vars['length'] / (rib_count + 1)
                    with Locations((x_pos, vars['width'] / 2, vars['thickness'])):
                        # Triangular rib profile, extruded to rib thickness
                        with BuildSketch() as rib_profile:
                            Polygon((0, 0),
                                    (vars['rib_width'] / 2, 0),
                                    (0, vars['rib_height']))
                        extrude(amount=vars['rib_thickness'])
# Bolt holes
for hole_x in vars.get('hole_positions_x', []):
for hole_y in vars.get('hole_positions_y', []):
with Locations((hole_x, hole_y, 0)):
Hole(radius=vars['hole_diameter']/2,
depth=vars['thickness'] + vars['height'])
# Export STEP
output_path = Path("generated_geometry.step")
bracket.part.export_step(str(output_path))
return output_path
def parametric_from_description(self, description: str) -> str:
"""Generate Build123d code from natural language description"""
# This would use an LLM agent to convert description to Build123d code
prompt = f"""
Generate Build123d Python code for: {description}
Requirements:
- Use context managers (with BuildPart() as part:)
- Make key dimensions parametric variables
- Include proper error handling
- Export as STEP file
- Return the code as a string
Template:
```python
from build123d import *
def generate_geometry(params):
with BuildPart() as part:
# Geometry generation here
pass
return part.part
```
"""
# Call LLM service here
return self._call_llm_service(prompt)
CAD Generation Agent:
# atomizer/agents/cad_generator.py
class CADGeneratorAgent:
def __init__(self, llm_service):
self.llm = llm_service
self.build123d_processor = Build123dProcessor()
def generate_concept_geometry(self, description: str, constraints: Dict) -> Path:
"""Generate concept geometry from natural language"""
# Step 1: Convert description to Build123d code
code = self.build123d_processor.parametric_from_description(description)
# Step 2: Validate code syntax
validated_code = self._validate_build123d_code(code)
# Step 3: Execute code to generate geometry
geometry_path = self._execute_build123d_code(validated_code, constraints)
# Step 4: Validate resulting geometry
validation_result = self._validate_geometry(geometry_path)
if not validation_result['valid']:
# Iterate with LLM to fix issues
fixed_code = self._fix_geometry_issues(code, validation_result['issues'])
geometry_path = self._execute_build123d_code(fixed_code, constraints)
return geometry_path
    def _validate_geometry(self, step_path: Path) -> Dict:
        """Validate geometry for manufacturing and physics"""
        # meshio cannot read STEP directly; tessellate first (e.g. Gmsh → STL)
        # and inspect the triangulation (assumed helper returns a trimesh.Trimesh)
        mesh = self._tessellate_step(step_path)
        issues = []
        # Check if watertight
        if not mesh.is_watertight:
            issues.append("Geometry is not watertight")
        # Check minimum feature size
        min_feature = self._get_minimum_feature_size(mesh)
        if min_feature < 1.0:  # 1 mm minimum
            issues.append(f"Features below 1mm detected: {min_feature:.2f}mm")
        # Check aspect ratios
        max_aspect = self._get_max_aspect_ratio(mesh)
        if max_aspect > 20:
            issues.append(f"High aspect ratio detected: {max_aspect:.1f}")
        return {
            'valid': len(issues) == 0,
            'issues': issues,
            'metrics': {
                'volume': mesh.volume,
                'surface_area': mesh.area,  # trimesh exposes surface area as .area
                'min_feature_size': min_feature
            }
        }
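The assumed `_validate_build123d_code` step can begin as a pure syntax and shape check before any execution: parse the generated source with `ast`, confirm the expected entry point exists, and reject obviously dangerous calls (a sketch; the denylist is illustrative, and a real sandbox would still be needed before executing LLM output):

```python
import ast

BANNED_CALLS = {"exec", "eval", "__import__", "open"}  # illustrative denylist

def validate_generated_code(source: str, entry_point: str = "generate_geometry") -> list:
    """Return a list of problems found in LLM-generated Build123d code."""
    problems = []
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        return [f"syntax error: {err}"]
    defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
    if entry_point not in defined:
        problems.append(f"missing entry point {entry_point}()")
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in BANNED_CALLS):
            problems.append(f"disallowed call: {node.func.id}()")
    return problems

ok = validate_generated_code("def generate_geometry(params):\n    return None\n")
bad = validate_generated_code("eval('1+1')")
```

Rejecting code at this stage keeps the fix-and-retry loop with the LLM cheap, since no geometry kernel is invoked for obviously broken candidates.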
Test Criteria:
- Generate 10 different bracket geometries from text descriptions
- Validate all geometries are watertight and manufacturable
- Successfully mesh and solve in CalculiX
- Integration with existing optimization pipeline
Integration Points:
- Add Build123d processor to multi-solver engine
- Extend AtomizerStudy with parametric CAD templates
- LLM agents generate CAD code instead of requiring pre-made .prt files
Antoine's Validation Gate:
- Review generated geometries for engineering feasibility
- Test parametric variation for optimization compatibility
- Validate that generated CAD properly interfaces with NX when needed
Dependencies: Sprint 2 completed, Build123d installed and tested
Sprint 5: CFD + Thermal Capability (Weeks 9-10)
Impact: ⭐⭐⭐⭐ | Effort: HIGH | Unlocks: Thermal + flow optimization
Deliverables:
- OpenFOAM processor with case generation
- Thermal-structural coupling via preCICE
- Heatsink optimization demonstration
Technical Setup:
# OpenFOAM installation
docker pull openfoam/openfoam9-paraview56
# OR
sudo apt install openfoam9
Python Modules to Create:
- atomizer/processors/openfoam_processor.py
- atomizer/coupling/precice_manager.py
- atomizer/processors/thermal_structural.py
- examples/heatsink_optimization.py
OpenFOAM Processor:
# atomizer/processors/openfoam_processor.py
class OpenFOAMProcessor(AbstractProcessor):
def __init__(self):
self.case_template_dir = Path("templates/openfoam_cases")
def generate_input(self, study: AtomizerStudy) -> str:
"""Generate OpenFOAM case directory structure"""
case_dir = Path("openfoam_case")
case_dir.mkdir(exist_ok=True)
# Create directory structure
(case_dir / "0").mkdir(exist_ok=True)
(case_dir / "constant").mkdir(exist_ok=True)
(case_dir / "system").mkdir(exist_ok=True)
# Generate mesh
self._convert_mesh_to_openfoam(study.geometry.mesh_path, case_dir)
# Generate boundary conditions (0/ directory)
self._generate_boundary_conditions(study.boundary_conditions, case_dir / "0")
# Generate physical properties (constant/ directory)
self._generate_transport_properties(study.materials, case_dir / "constant")
# Generate solver settings (system/ directory)
self._generate_control_dict(study, case_dir / "system")
self._generate_fv_schemes(case_dir / "system")
self._generate_fv_solution(case_dir / "system")
return str(case_dir)
def _generate_boundary_conditions(self, bcs: AtomizerBCs, zero_dir: Path):
"""Generate 0/U, 0/p, 0/T files"""
# Velocity field (0/U)
u_content = self._openfoam_header("volVectorField", "U")
u_content += """
dimensions [0 1 -1 0 0 0 0];
internalField uniform (0 0 0);
boundaryField
{
"""
for bc in bcs.fluid or []:
if bc['type'] == 'inlet':
u_content += f"""
{bc['surface']}
{{
type fixedValue;
value uniform ({bc['velocity'][0]} {bc['velocity'][1]} {bc['velocity'][2]});
}}
"""
elif bc['type'] == 'outlet':
u_content += f"""
{bc['surface']}
{{
type zeroGradient;
}}
"""
elif bc['type'] == 'wall':
u_content += f"""
{bc['surface']}
{{
type noSlip;
}}
"""
u_content += "}\n"
(zero_dir / "U").write_text(u_content)
# Similar generation for pressure (p) and temperature (T)
self._generate_pressure_bc(bcs, zero_dir)
self._generate_temperature_bc(bcs, zero_dir)
def parse_results(self, case_dir: str) -> AtomizerResults:
"""Parse OpenFOAM results"""
# Read final time directory
case_path = Path(case_dir)
time_dirs = [d for d in case_path.iterdir() if d.is_dir() and d.name.replace('.', '').isdigit()]
latest_time = max(time_dirs, key=lambda x: float(x.name))
# Convert OpenFOAM results to VTK for post-processing
self._run_openfoam_command(f"foamToVTK -case {case_dir} -latestTime")
vtk_dir = case_path / "VTK"
# Extract key metrics
max_temperature = self._extract_max_temperature(latest_time / "T")
pressure_drop = self._extract_pressure_drop(latest_time / "p")
        return AtomizerResults(
            vtk_path=vtk_dir,
            solver=SolverType.OPENFOAM,
            physics_type="thermal_fluid",
            mass=0.0,  # solid mass is not meaningful for a pure fluid/thermal case
            max_temperature=max_temperature,
            convergence=self._check_convergence(case_path / "log.simpleFoam"),
            solve_time=self._get_solve_time(case_path / "log.simpleFoam")
        )
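The assumed `_check_convergence` helper can scan the solver log for the last reported initial residual per field; a sketch against a synthetic `log.simpleFoam` excerpt (the residual threshold is an assumption):

```python
import re

def check_convergence(log_text: str, tol: float = 1e-4) -> dict:
    """Inspect the last reported initial residual per field in an OpenFOAM log."""
    pattern = re.compile(r"Solving for (\w+), Initial residual = ([0-9.eE+-]+)")
    last = {}
    for field, residual in pattern.findall(log_text):
        last[field] = float(residual)  # later lines overwrite earlier ones
    solved = bool(last)
    converged = solved and all(r < tol for r in last.values())
    return {"solved": solved, "converged": converged, "residuals": last}

# Synthetic excerpt in the format simpleFoam writes
log = """
smoothSolver:  Solving for Ux, Initial residual = 0.01, Final residual = 1e-06, No Iterations 3
smoothSolver:  Solving for Ux, Initial residual = 5e-05, Final residual = 1e-08, No Iterations 2
GAMG:  Solving for p, Initial residual = 2e-05, Final residual = 1e-07, No Iterations 4
"""
status = check_convergence(log)
```

Keeping only the latest residual per field mirrors how steady-state SIMPLE runs are judged: what matters is where the residuals end up, not where they start.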
Thermal-Structural Coupling:
# atomizer/coupling/precice_manager.py
class PreCICECouplingManager:
def setup_thermal_structural_coupling(self, study: AtomizerStudy) -> Dict[str, str]:
"""Set up coupled thermal-structural analysis"""
# Create separate cases for thermal (OpenFOAM) and structural (CalculiX)
thermal_case = self._create_thermal_case(study)
structural_case = self._create_structural_case(study)
# Generate preCICE configuration
precice_config = self._generate_precice_config(study)
return {
'thermal_case': thermal_case,
'structural_case': structural_case,
'precice_config': precice_config
}
def run_coupled_simulation(self, coupling_setup: Dict[str, str]) -> AtomizerResults:
"""Execute coupled thermal-structural simulation"""
        # Start both preCICE participants as subprocesses
        thermal_proc = self._start_openfoam_precice(coupling_setup['thermal_case'])
        structural_proc = self._start_calculix_precice(coupling_setup['structural_case'])
        # Wait for the coupled iteration to converge, then let both participants exit
        self._wait_for_coupling_convergence()
        thermal_proc.wait()
        structural_proc.wait()
# Combine results from both solvers
thermal_results = self._parse_openfoam_results(coupling_setup['thermal_case'])
structural_results = self._parse_calculix_results(coupling_setup['structural_case'])
return self._merge_coupled_results(thermal_results, structural_results)
Test Criteria:
- Heat transfer validation: Compare analytical solutions for simple geometries
- Flow validation: Pipe flow, flat plate boundary layer
- Coupled validation: Heated pipe with thermal expansion
- Heatsink optimization: Minimize max temperature + minimize pressure drop
Integration Points:
- Extend AtomizerStudy with thermal/fluid boundary conditions
- Add thermal objectives to pymoo multi-objective optimization
- Integrate with existing visualization (PyVista thermal contours)
Antoine's Validation Gate:
- Review CFD validation against analytical solutions
- Test coupled simulation convergence and stability
- Approve thermal optimization objectives and constraints
Dependencies: Sprint 3 completed (multi-objective framework), preCICE adapters installed
Sprint 6: Topology Optimization Pipeline (Weeks 11-12)
Impact: ⭐⭐⭐⭐⭐ | Effort: HIGH | Unlocks: 30% weight reduction capability
Deliverables:
- FEniCS topology optimization processor
- Reconstruction pipeline (density field → CAD)
- Complete topology optimization study
Technical Setup:
docker pull dolfinx/dolfinx
pip install fenics fenitop
Python Modules to Create:
- atomizer/processors/fenics_processor.py
- atomizer/topology/simp_optimizer.py
- atomizer/reconstruction/density_to_cad.py
- atomizer/validation/topo_validator.py
FEniCS Topology Optimization:
# atomizer/topology/simp_optimizer.py
import numpy as np
from dolfin import *
from dolfin_adjoint import *
class SIMPTopologyOptimizer:
def __init__(self, study: AtomizerStudy):
self.study = study
self.mesh = self._load_mesh(study.geometry.mesh_path)
self.V = VectorFunctionSpace(self.mesh, "CG", 1) # Displacement
self.V0 = FunctionSpace(self.mesh, "DG", 0) # Density
def optimize_topology(self, volume_fraction=0.4, iterations=80) -> np.ndarray:
"""Run SIMP topology optimization"""
# Initialize density field
rho = Function(self.V0, name="Density")
rho.interpolate(Constant(volume_fraction)) # Start uniform
# Material properties with SIMP
E_base = self.study.materials[0].structural['E']
nu = self.study.materials[0].structural['nu']
p = 3 # SIMP penalty parameter
def E_simp(rho):
return rho**p * E_base
# Set up elasticity problem
u = Function(self.V, name="Displacement")
v = TestFunction(self.V)
# Apply boundary conditions from study
bcs = self._convert_bcs_to_fenics(self.study.boundary_conditions)
loads = self._convert_loads_to_fenics(self.study.boundary_conditions)
# Weak form with SIMP material
F = self._build_weak_form(u, v, rho, E_simp, nu, loads)
# Optimization loop
for iteration in range(iterations):
# Solve state problem
solve(F == 0, u, bcs)
# Compute compliance (objective)
compliance = assemble(self._compliance_form(u, rho, E_simp, nu))
# Compute sensitivity via adjoint
sensitivity = compute_gradient(compliance, Control(rho))
# Update density with MMA or OC method
rho_new = self._update_density_mma(rho, sensitivity, volume_fraction)
# Apply filter to avoid checkerboard
rho = self._apply_helmholtz_filter(rho_new)
# Check convergence
if iteration > 10 and self._check_convergence(compliance):
break
print(f"Iteration {iteration}: Compliance = {compliance:.6f}")
return rho.vector().get_local().reshape((-1,))
def _apply_helmholtz_filter(self, rho, radius=0.05):
"""Apply Helmholtz PDE filter for minimum feature size"""
rho_filtered = Function(self.V0)
phi = TestFunction(self.V0)
# Solve: -r²∇²ρ_f + ρ_f = ρ
F_filter = (radius**2 * dot(grad(rho_filtered), grad(phi)) +
rho_filtered * phi - rho * phi) * dx
solve(F_filter == 0, rho_filtered)
return rho_filtered
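The assumed `_update_density_mma` step can start with the classic optimality-criteria update before moving to MMA: scale densities by the sensitivity ratio and bisect the volume multiplier until the constraint is met (numpy sketch; the move limit and damping exponent are conventional choices from the SIMP literature):

```python
import numpy as np

def oc_update(rho, sens, vol_frac, move=0.2, eta=0.5):
    """Optimality-criteria density update with bisection on the volume multiplier."""
    lo, hi = 1e-9, 1e9
    rho_new = rho
    while hi - lo > 1e-6 * (lo + hi):
        lam = 0.5 * (lo + hi)
        # B_e = (-dc/drho) / lam, damped by eta, clipped by move limits and [rho_min, 1]
        scale = np.power(np.maximum(-sens, 0.0) / lam, eta)
        rho_new = np.clip(rho * scale,
                          np.maximum(rho - move, 1e-3),
                          np.minimum(rho + move, 1.0))
        if rho_new.mean() > vol_frac:
            lo = lam  # too much material: raise the multiplier
        else:
            hi = lam
    return rho_new

# Uniform start at 40% volume; compliance sensitivities are always negative
rho = np.full(100, 0.4)
sens = -np.linspace(1.0, 2.0, 100)
rho_next = oc_update(rho, sens, vol_frac=0.4)
```

The bisection enforces the volume fraction at every iteration, so the outer SIMP loop only has to worry about compliance decreasing.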
Reconstruction Pipeline:
# atomizer/reconstruction/density_to_cad.py
import numpy as np
from pathlib import Path
from skimage import measure
import trimesh

class DensityReconstructor:
    def __init__(self):
        self.threshold = 0.5
def reconstruct_geometry(self, density_field: np.ndarray, mesh_coords: np.ndarray) -> Path:
"""Convert density field to manufacturable CAD"""
# Step 1: Threshold density field
binary_field = (density_field > self.threshold).astype(float)
# Step 2: Marching cubes isosurface extraction
vertices, faces, normals, values = measure.marching_cubes(
binary_field.reshape(mesh_coords.shape[0:3]),
level=0.5
)
# Step 3: Create mesh and smooth
mesh = trimesh.Trimesh(vertices=vertices, faces=faces)
mesh = mesh.smoothed() # Laplacian smoothing
# Step 4: Skeleton extraction for rib identification
skeleton = self._extract_skeleton(mesh)
# Step 5: Generate parametric ribs using Build123d
if skeleton.shape[0] > 10: # If skeleton is substantial
cad_path = self._generate_ribs_from_skeleton(skeleton, mesh.bounds)
else:
# Fall back to direct STL → STEP conversion
cad_path = self._convert_mesh_to_step(mesh)
return cad_path
    def _generate_ribs_from_skeleton(self, skeleton: np.ndarray, bounds: np.ndarray) -> Path:
        """Generate Build123d ribs following the skeleton pattern"""
        from build123d import *
        with BuildPart() as topology_part:
            # Create base volume
            Box(bounds[1, 0] - bounds[0, 0],
                bounds[1, 1] - bounds[0, 1],
                bounds[1, 2] - bounds[0, 2])
            # Add ribs following the skeleton
            for i in range(len(skeleton) - 1):
                start_point = tuple(skeleton[i])
                end_point = tuple(skeleton[i + 1])
                # Rib cross-section (2 mm x 2 mm), swept along the skeleton edge
                with BuildSketch() as rib_profile:
                    Rectangle(2.0, 2.0)
                path = Line(start_point, end_point)
                sweep(sections=[rib_profile.sketch], path=path)
        # Export result
        output_path = Path("reconstructed_topology.step")
        topology_part.part.export_step(str(output_path))
        return output_path
Validation Pipeline:
# atomizer/validation/topo_validator.py
import copy
from pathlib import Path
from typing import Dict

import numpy as np

# GmshProcessor, CalculiXProcessor and AtomizerStudy come from the
# Sprint 1/2 processor and contract modules.

class TopologyValidator:
    def validate_reconstruction(self,
                                original_density: np.ndarray,
                                reconstructed_step: Path,
                                original_study: "AtomizerStudy") -> Dict:
        """Validate that the reconstructed geometry performs as predicted"""
        # Step 1: Re-mesh the reconstructed geometry
        gmsh_proc = GmshProcessor()
        new_mesh = gmsh_proc.mesh_geometry(reconstructed_step)

        # Step 2: Run FEA on the reconstructed geometry
        calculix_proc = CalculiXProcessor()
        validation_study = copy.deepcopy(original_study)
        validation_study.geometry.step_path = reconstructed_step
        validation_study.geometry.mesh_path = new_mesh
        reconstructed_results = calculix_proc.run_study(validation_study)

        # Step 3: Compare metrics
        # Performance predicted by the topology optimization
        predicted_compliance = self._estimate_compliance_from_density(original_density)
        predicted_mass = self._estimate_mass_from_density(original_density)

        # Actual performance from FEA
        actual_compliance = reconstructed_results.compliance
        actual_mass = reconstructed_results.mass

        # Relative errors
        compliance_error = abs(actual_compliance - predicted_compliance) / predicted_compliance
        mass_error = abs(actual_mass - predicted_mass) / predicted_mass

        return {
            'validation_passed': compliance_error < 0.1 and mass_error < 0.05,
            'compliance_error': compliance_error,
            'mass_error': mass_error,
            'predicted': {'compliance': predicted_compliance, 'mass': predicted_mass},
            'actual': {'compliance': actual_compliance, 'mass': actual_mass},
            'reconstructed_results': reconstructed_results
        }
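`_estimate_mass_from_density` is referenced but not defined in this plan. Under SIMP-style relative densities it reduces to summing element masses weighted by density; a sketch (element volumes and material density are assumed inputs, names hypothetical):

```python
import numpy as np

def estimate_mass_from_density(density_field: np.ndarray,
                               element_volumes: np.ndarray,
                               material_density: float) -> float:
    """Predicted mass of a topology-optimized design: each element
    contributes its volume scaled by its relative density rho in [0, 1]."""
    return float(np.sum(density_field * element_volumes) * material_density)

# Example: 4 elements of 1e-6 m^3 each, steel at 7850 kg/m^3
rho = np.array([1.0, 0.5, 0.0, 0.25])
vols = np.full(4, 1e-6)
mass = estimate_mass_from_density(rho, vols, 7850.0)
# 1.75e-6 m^3 of solid-equivalent material -> ~0.0137 kg
```

The compliance estimate has no such closed form and would come from the final topology-optimization solve itself.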
Test Criteria:
- MBB beam: Classic 2D topology optimization benchmark
- L-bracket: 3D cantilever with volume constraint
- Multi-load: Bracket under combined loading
- Manufacturing constraints: Minimum feature size, symmetry
- Validation: <10% error between topology prediction and reconstructed FEA
Integration Points:
- New topology optimization path in AtomizerSpec v3.0
- Integration with existing multi-objective framework
- Reconstruction connects back to CalculiX for validation
Antoine's Validation Gate:
- Review topology optimization convergence and results quality
- Validate reconstruction accuracy against FEA
- Approve topology workflow for client deliverables
Dependencies: Sprint 2 (CalculiX), Sprint 4 (Build123d reconstruction)
5. Risk & Dependency Analysis
5.1 Critical Path Dependencies
Sprint 1 (Universal Glue)
    ↓ BLOCKS
Sprint 2 (CalculiX) ──→ Sprint 3 (Multi-Objective)
    ↓ BLOCKS                ↓ BLOCKS
Sprint 5 (CFD)          Sprint 4 (CAD Generation)
    ↓ BLOCKS                ↓ BLOCKS
    └──→ Sprint 6 (Topology Optimization) ←──┘
5.2 Parallelization Opportunities
Can run in parallel:
- Sprint 3 (Multi-objective) + Sprint 4 (CAD generation) after Sprint 2
- Sprint 5 (CFD) infrastructure setup during Sprint 3-4
- Documentation and testing throughout all sprints
Cannot parallelize:
- Sprint 1 must complete first (everything depends on format conversion)
- Sprint 2 must complete before Sprint 6 (topology needs validation solver)
5.3 Risk Mitigation
| Risk | Probability | Impact | Mitigation |
|---|---|---|---|
| meshio conversion accuracy | LOW | HIGH | Extensive benchmark validation in Sprint 1 |
| CalculiX solver stability | MEDIUM | HIGH | Fallback to NX/Nastran, validation suite |
| FEniCS topology complexity | HIGH | MEDIUM | Start with 2D problems, iterate to 3D |
| OpenFOAM learning curve | HIGH | MEDIUM | Use Docker containers, existing MCP servers |
| Reconstruction quality | HIGH | MEDIUM | Multiple reconstruction approaches, validation loop |
| Performance degradation | LOW | MEDIUM | Benchmark testing, profile optimization |
5.4 Go/No-Go Decision Points
After Sprint 1:
- ✅ Format conversion <1% accuracy loss
- ✅ Round-trip validation passes
- ❌ Major accuracy issues → Pause for investigation
After Sprint 2:
- ✅ CalculiX within 5% of NX/Nastran results
- ✅ Complete optimization study runs end-to-end
- ❌ Solver instability → Revert to NX-only until resolved
After Sprint 3:
- ✅ Pareto fronts show sensible trade-offs
- ✅ Multi-objective visualization working
- ❌ Optimization doesn't converge → Debug algorithm parameters
6. Existing Code Reuse Strategy
6.1 Leverage Existing Atomizer Infrastructure
Reuse Directly (No Changes):
- `optimization_engine/hooks/` → All hooks work with new processors
- `optimization_engine/extractors/op2_extractor.py` → For NX/Nastran validation
- `optimization_engine/insights/` → Zernike, modal analysis, etc.
- `optimization_engine/validation/` → Existing validation framework
- LAC learning and context system
- Dashboard and reporting infrastructure
Extend (Minor Changes):
- `AtomizerSpec` → v3.0 with multi-solver support
- `optimization_engine/run_optimization.py` → Add processor routing
- `optimization_engine/nx/` → Enhanced with format conversion
- Hook system → Register new hook points for processor lifecycle
Replace/Augment:
- `NXSolverEngine` → Enhanced with `MultiSolverEngine`
- Single-objective optimization → Multi-objective with pymoo
- NX-only geometry → Multi-source geometry (Build123d, FreeCAD, etc.)
6.2 Architecture Integration Pattern
# optimization_engine/solvers/multi_solver_engine.py
class MultiSolverEngine:
    def __init__(self):
        self.nx_engine = NXSolverEngine()        # Existing
        self.calculix_engine = CalculiXEngine()  # New
        self.openfoam_engine = OpenFOAMEngine()  # New
        self.fenics_engine = FEniCSEngine()      # New
        self.hook_manager = HookManager()        # Existing Atomizer hook system (class name assumed)

    def select_solver(self, study: AtomizerStudy) -> AbstractSolverEngine:
        """Select the best solver based on study requirements"""
        preferences = study.solver_preferences
        physics = self._analyze_physics_requirements(study)

        if physics.requires_topology_optimization:
            return self.fenics_engine
        elif physics.requires_cfd:
            return self.openfoam_engine
        elif "nastran" in preferences and self.nx_engine.available():
            return self.nx_engine
        else:
            return self.calculix_engine  # Default fallback

    def run_study(self, study: AtomizerStudy) -> AtomizerResults:
        """Run a study with the optimal solver"""
        engine = self.select_solver(study)

        # Convert AtomizerStudy to solver format via the thin processor
        processor = engine.get_processor()
        solver_input = processor.generate_input(study)

        # Run solver
        solver_output = engine.solve(solver_input)

        # Convert results back to AtomizerResults
        results = processor.parse_results(solver_output)

        # Run existing hooks and validation
        self.hook_manager.execute_hooks('post_solve', results)
        return results
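The `AbstractSolverEngine` interface referenced above is not defined in this plan. A minimal sketch of the contract each engine would satisfy (method names assumed, not final):

```python
from abc import ABC, abstractmethod

class AbstractSolverEngine(ABC):
    """Minimal interface every solver engine (NX, CalculiX, OpenFOAM,
    FEniCS) satisfies so MultiSolverEngine can treat them uniformly."""

    @abstractmethod
    def available(self) -> bool:
        """True if the solver binary/license is reachable on this host."""

    @abstractmethod
    def get_processor(self):
        """Return the thin processor that translates AtomizerStudy
        to/from this solver's native input and result formats."""

    @abstractmethod
    def solve(self, solver_input):
        """Run the solver on already-converted native input."""

class StubEngine(AbstractSolverEngine):
    """Trivial implementation used only to illustrate the contract."""
    def available(self) -> bool:
        return True
    def get_processor(self):
        return None
    def solve(self, solver_input):
        return {"status": "ok", "input": solver_input}

engine = StubEngine()
result = engine.solve("deck.inp")
```

Keeping the interface this small is the point of the thin-contract pattern: each new solver only has to implement availability, a processor, and a solve call.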
6.3 Migration Strategy
Phase A: Parallel Development
- New Arsenal tools run alongside existing NX pipeline
- Validation by comparing results between old and new
- Zero risk to existing workflows
Phase B: Selective Adoption
- Use Arsenal tools for new studies
- Maintain NX for existing projects and client deliverables
- Client chooses solver based on requirements
Phase C: Unified Platform
- Single AtomizerSpec works with any solver
- LLM agents select optimal solver automatically
- NX becomes one option in a multi-solver platform
7. Success Metrics & Validation
7.1 Technical Performance Targets
| Metric | Target | Measurement |
|---|---|---|
| Format Conversion Accuracy | <1% error | Mass, volume, stress comparison |
| Solver Validation | <5% vs analytical | Cantilever, plate with hole, modal |
| Multi-Objective Convergence | >90% Pareto coverage | Hypervolume indicator |
| CFD Validation | <10% vs analytical | Pipe flow, heat transfer |
| Topology Optimization | 20-40% weight reduction | Compliance-constrained designs |
| Reconstruction Accuracy | <15% performance loss | Topo-opt prediction vs FEA validation |
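The hypervolume indicator in the table measures how much objective space a Pareto front dominates. For two minimized objectives it has a simple closed-form sum (a pure-NumPy sketch; production code would likely use pymoo's built-in indicator instead):

```python
import numpy as np

def hypervolume_2d(front: np.ndarray, ref_point: np.ndarray) -> float:
    """Hypervolume dominated by a 2D Pareto front (minimization),
    relative to a reference point all front points dominate."""
    # Sort by first objective; a Pareto front then descends in the second,
    # so the dominated region is a staircase of disjoint rectangles.
    pts = front[np.argsort(front[:, 0])]
    hv, prev_f2 = 0.0, ref_point[1]
    for f1, f2 in pts:
        hv += (ref_point[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

front = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 1.0]])
hv = hypervolume_2d(front, np.array([4.0, 5.0]))
# (4-1)*(5-4) + (4-2)*(4-2) + (4-3)*(2-1) = 8.0
```

Tracking this value across generations gives the ">90% Pareto coverage" convergence check: it should rise and plateau as the front fills out.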
7.2 Integration Success Criteria
| Component | Success Criteria |
|---|---|
| Universal Glue | All existing LAC benchmarks convert and solve |
| CalculiX | Full optimization study runs without NX |
| Multi-Objective | Pareto plots show sensible engineering trade-offs |
| CAD Generation | LLM generates valid, manufacturable geometry |
| CFD Integration | Thermal optimization of realistic heatsink |
| Topology Optimization | Complete workflow from design space to STEP |
7.3 Client Impact Validation
Before Arsenal (Current Atomizer):
- Single-objective optimization
- NX/Nastran only
- Structural analysis only
- Manual geometry creation
- Windows-dependent
After Arsenal (Target Atomizer):
- Multi-objective Pareto optimization
- Any solver (NX, CalculiX, OpenFOAM, FEniCS)
- Multi-physics (structural + thermal + fluid)
- AI-generated geometry from text
- Cross-platform (Linux preferred)
Deliverable Quality:
- Pareto front plots (consulting-grade)
- Interactive 3D visualization (Trame web viewer)
- Multi-physics validation reports
- 30% weight reduction through topology optimization
8. Implementation Timeline & Resources
8.1 6-Sprint Timeline
| Sprint | Weeks | Focus | Team Lead | Antoine Involvement |
|---|---|---|---|---|
| 1 | 1-2 | Universal Glue | Technical Lead | Low - Review specs |
| 2 | 3-4 | CalculiX Integration | Technical Lead | Medium - Validate benchmarks |
| 3 | 5-6 | Multi-Objective | Technical Lead | Medium - Review Pareto plots |
| 4 | 7-8 | CAD Generation | Technical Lead | High - Validate AI-generated CAD |
| 5 | 9-10 | CFD + Thermal | Technical Lead | High - Review coupling results |
| 6 | 11-12 | Topology Optimization | Technical Lead | High - Validate complete workflow |
8.2 Resource Allocation
Technical Lead (Primary Developer):
- Architecture design and implementation
- Processor development
- Integration with existing Atomizer
- Testing and validation
Antoine (Domain Expert):
- Engineering validation of results
- Benchmark problem definition
- Client workflow design
- Final approval gates
Manager (Project Coordination):
- Sprint planning and tracking
- Risk management
- Stakeholder communication
- Resource coordination
8.3 Deliverable Schedule
| Week | Major Deliverables |
|---|---|
| 2 | Universal format conversion working |
| 4 | First CalculiX optimization complete |
| 6 | Multi-objective Pareto plots generated |
| 8 | AI-generated CAD in optimization loop |
| 10 | CFD thermal optimization demonstrated |
| 12 | Complete topology optimization pipeline |
9. Post-Development: Production Readiness
9.1 Validation & Testing
Unit Tests:
- Processor input/output validation
- Format conversion accuracy
- Solver integration
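The format-conversion accuracy test reduces to comparing geometric invariants before and after a round trip. A sketch of the tetrahedral-volume invariant in pure NumPy (the meshio round trip itself is elided; function name hypothetical):

```python
import numpy as np

def tet_mesh_volume(points: np.ndarray, tets: np.ndarray) -> float:
    """Total volume of a tetrahedral mesh: |det([b-a, c-a, d-a])| / 6
    summed over elements. Compared pre/post conversion to assert <1% loss."""
    a, b, c, d = (points[tets[:, i]] for i in range(4))
    # Row-wise scalar triple product of the three edge vectors
    vols = np.abs(np.einsum('ij,ij->i',
                            np.cross(b - a, c - a), d - a)) / 6.0
    return float(vols.sum())

# Unit cube split into 5 tets: total volume must be exactly 1.0
pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
tets = np.array([[0, 1, 3, 4], [1, 2, 3, 6], [1, 4, 5, 6],
                 [3, 4, 6, 7], [1, 3, 4, 6]])
volume = tet_mesh_volume(pts, tets)
# volume == 1.0
```

The same check applies to surface area and mass; all three are cheap to compute on both sides of a meshio `read`/`write` cycle.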
Integration Tests:
- End-to-end optimization studies
- Multi-physics coupling validation
- Performance benchmarks
Acceptance Tests:
- Client workflow simulation
- LAC benchmark reproduction
- Stress testing with large models
9.2 Documentation Requirements
Developer Documentation:
- Processor API reference
- Architecture diagrams
- Integration examples
User Documentation:
- AtomizerSpec v3.0 specification
- Multi-solver workflow guide
- Troubleshooting and FAQ
Client Documentation:
- Capability overview
- Case studies and examples
- Performance comparisons
9.3 Deployment Strategy
Development Environment:
- All Arsenal tools installed and tested
- Docker containers for reproducibility
- CI/CD pipeline for validation
Production Environment:
- Scalable solver execution
- Result caching and storage
- Performance monitoring
Client Delivery:
- Portable Docker containers
- Cloud deployment options
- On-premises installation support
10. Conclusion
This Arsenal Development Plan provides a comprehensive, risk-mitigated approach to transforming Atomizer into a multi-solver, multi-physics optimization platform. The Thin Contract + Smart Processor pattern ensures clean architecture while the incremental sprint approach minimizes development risk.
Key Advantages:
- Zero software cost - 100% open-source tools
- Preserve existing workflows - NX pipeline continues working
- Incremental value delivery - Each sprint provides usable capabilities
- Future-ready architecture - Easy to add new tools and capabilities
Expected Outcome: By completion, Atomizer will be the only optimization platform that combines:
- AI-driven workflow automation
- Multi-solver orchestration
- Multi-physics coupling
- Multi-objective optimization
- Topology optimization
- Cross-platform operation
This positions Atomizer as a unique, best-in-class solution that no commercial competitor can match.