Secondary Research Validation — Atomizer Project Standard
Date (UTC): 2026-02-18
Spec reviewed: /home/papa/obsidian-vault/2-Projects/P-Atomizer-Project-Standard/00-SPECIFICATION.md (v1.0 Draft)
Executive Summary
I validated the spec’s Appendix A/B claims against publicly available standards/docs. Bottom line:
- NASA-STD-7009B alignment: partially true, but the spec currently maps only a subset of explicit required records.
- ASME V&V 10-2019 alignment: directionally true, but detailed clause-level validation is limited by paywall; key V&V structure is still identifiable from official/committee-adjacent sources.
- Aerospace FEA documentation (ECSS/NASA): Atomizer structure is strong, but lacks explicit aerospace-style model checklists and formal verification report artifacts.
- OpenMDAO / Dakota / modeFRONTIER comparisons: spec claims are mostly fair; Atomizer is actually stronger in campaign chronology and rationale traceability.
- Design rationale (IBIS/QOC/DRL): the `DECISIONS.md` format is well aligned with practical modern ADR practice.
- LLM-native documentation: no mature formal engineering standard exists; Atomizer is already close to current best-practice patterns.
1) NASA-STD-7009B validation (model docs, V&V, configuration)
What NASA-STD-7009B explicitly requires (evidence)
From the 2024 NASA-STD-7009B requirements list ([M&S n] clauses):
- Assumptions/abstractions record: [M&S 11] requires recording assumptions/abstractions + rationale + consequences.
- Verification/validation: [M&S 15] model shall be verified; [M&S 16] verification domain recorded; [M&S 17] model shall be validated; [M&S 18] validation domain recorded.
- Uncertainty: [M&S 19] uncertainty characterization process; [M&S 21] uncertainties incorporated into M&S; reporting uncertainty [M&S 33]; process description [M&S 34].
- Use/appropriateness: [M&S 22] proposed use; [M&S 23] appropriateness for proposed use.
- Programmatic/configuration-like records: intended use [M&S 40]; lifecycle plan [M&S 41]; acceptance criteria [M&S 43]; data/supporting software maintained [M&S 45]; defects/problems tracked [M&S 51].
- Decision reporting package: warnings [M&S 32]; results assessment [M&S 31]; records included in decision reporting [M&S 38]; risk rationale [M&S 39].
How Atomizer spec compares
Spec Appendix A maps NASA coverage to:
- assumptions → `DECISIONS.md` + KB rationale
- V&V → `02-kb/analysis/validation/`
- uncertainty → `02-kb/introspection/parameter-sensitivity.md`
- configuration management → `01-models/README.md` + `CHANGELOG.md`
Assessment: a good foundation, but not fully sufficient against NASA's explicitly required records.
Gaps vs NASA-STD-7009B
- No explicit artifact for verification domain and validation domain ([M&S 16], [M&S 18]).
- No explicit acceptance criteria register for V/V/UQ/sensitivity ([M&S 43]).
- No explicit M&S use appropriateness assessment ([M&S 23]).
- Uncertainty handling is sensitivity-oriented; NASA requires uncertainty record + reporting protocol ([M&S 19], [M&S 21], [M&S 33], [M&S 34]).
- No explicit defect/problem log for model/analysis infrastructure ([M&S 51]).
- No explicit decision-maker reporting checklist (warnings, risk acceptance rationale) ([M&S 32], [M&S 39]).
Recommended additions
- `02-kb/analysis/validation/verification-domain.md`
- `02-kb/analysis/validation/validation-domain.md`
- `02-kb/analysis/validation/acceptance-criteria.md`
- `02-kb/analysis/validation/use-appropriateness.md`
- `02-kb/analysis/validation/uncertainty-characterization.md`
- `02-kb/analysis/validation/model-defects-log.md`
- `04-reports/templates/ms-decision-report-checklist.md`
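If these records become part of the standard, their presence can be checked automatically. A minimal sketch follows; the script itself and its placement (e.g. under `05-tools/scripts/`) are hypothetical, and only the paths from the list above are assumed:

```python
"""Check that NASA-STD-7009B-style record artifacts exist and are non-trivial.

Sketch only: paths follow the recommended additions above; the size threshold is a placeholder.
"""
from pathlib import Path

REQUIRED_RECORDS = [
    "02-kb/analysis/validation/verification-domain.md",
    "02-kb/analysis/validation/validation-domain.md",
    "02-kb/analysis/validation/acceptance-criteria.md",
    "02-kb/analysis/validation/use-appropriateness.md",
    "02-kb/analysis/validation/uncertainty-characterization.md",
    "02-kb/analysis/validation/model-defects-log.md",
    "04-reports/templates/ms-decision-report-checklist.md",
]

def check_records(project_root: str, min_bytes: int = 200) -> list[str]:
    """Return a list of missing or stub-sized record files under the project root."""
    root = Path(project_root)
    problems = []
    for rel in REQUIRED_RECORDS:
        path = root / rel
        if not path.is_file():
            problems.append(f"MISSING: {rel}")
        elif path.stat().st_size < min_bytes:
            problems.append(f"STUB (<{min_bytes} bytes): {rel}")
    return problems

if __name__ == "__main__":
    import sys
    issues = check_records(sys.argv[1] if len(sys.argv) > 1 else ".")
    print("\n".join(issues) if issues else "All required M&S records present.")
```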
2) ASME V&V 10-2019 validation (computational solid mechanics)
What is verifiable from public sources
Because the ASME V&V 10 text is paywalled, I used the official listing plus ASME committee-adjacent material:
- Purpose: common language + conceptual framework + guidance for CSM V&V.
- Core structure (as used in V&V 10 examples):
- conceptual → mathematical → computational model chain
- code verification and calculation/solution verification
- validation against empirical data
- uncertainty quantification integrated in credibility process
How Atomizer spec compares
- Conceptual/computational model docs: mapped to `02-kb/analysis/models/`, `atomizer_spec.json`, `solver/`.
- Mesh/solution checks: mapped to `02-kb/analysis/mesh/`.
- Validation: mapped to `02-kb/analysis/validation/`.
- Uncertainty: mapped to reports + sensitivity.
Assessment: plausibly aligned at the framework level, but clause-level compliance cannot be claimed without the licensed standard text.
Practical gaps (high confidence despite paywall)
- Need explicit split of code verification vs solution verification in templates.
- Need explicit validation data pedigree + acceptance metric fields.
- Need explicit uncertainty quantification protocol beyond sensitivity.
Recommended additions
- `02-kb/analysis/validation/code-verification.md`
- `02-kb/analysis/validation/solution-verification.md`
- `02-kb/analysis/validation/validation-metrics.md`
- `02-kb/analysis/validation/uq-plan.md`
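As an illustration of what `validation-metrics.md` and the acceptance-criteria fields could standardize, here is a minimal sketch of one comparison metric plus a threshold check. The metric choice and the 10% threshold are placeholders, not values taken from V&V 10:

```python
import numpy as np

def relative_error_metric(prediction: np.ndarray, measurement: np.ndarray) -> float:
    """L2-norm relative error between model prediction and test data (placeholder metric)."""
    return float(np.linalg.norm(prediction - measurement) / np.linalg.norm(measurement))

def passes_acceptance(prediction, measurement, threshold: float = 0.10) -> bool:
    """Compare the metric against a per-quantity threshold recorded in acceptance-criteria.md."""
    return relative_error_metric(np.asarray(prediction, float),
                                 np.asarray(measurement, float)) <= threshold

# Example: comparing the first three predicted modal frequencies to test (illustrative numbers).
print(passes_acceptance([101.2, 215.8, 330.1], [100.0, 220.0, 335.0]))
```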
3) Aerospace FEA documentation (ECSS / NASA / Airbus / Boeing / JPL)
ECSS (strongest open evidence)
From ECSS-E-ST-32-03C (Structural FEM standard), explicit “shall” checks include:
- Modeling guidelines shall be established/agreed.
- Reduced model delivered with instructions.
- Verification checks on OTMs/reduced model consistency.
- Mandatory model quality checks (free edges, shell warping/interior angles/normal orientation); see the free-edge sketch after this list.
- Unit load/resultant consistency checks.
- Stiffness issues (zero-stiffness DOFs) identified/justified.
- Modal checks and reduced-vs-nonreduced comparison.
- Iterate model if correlation criteria not met.
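As a concrete example, the mandatory free-edge check reduces to finding edges used by exactly one shell element. A minimal sketch on a generic tri/quad connectivity list follows; the data layout is an assumption and is not tied to any particular solver:

```python
from collections import Counter

def free_edges(elements: list[tuple[int, ...]]) -> list[tuple[int, int]]:
    """Return edges referenced by exactly one shell element (candidate free edges).

    `elements` is a list of node-id tuples (tris or quads). In a watertight shell
    region, interior edges are shared by two elements; an unexpected free edge
    usually indicates a mesh gap or an unintended boundary.
    """
    edge_counts = Counter()
    for nodes in elements:
        n = len(nodes)
        for i in range(n):
            edge = tuple(sorted((nodes[i], nodes[(i + 1) % n])))
            edge_counts[edge] += 1
    return [edge for edge, count in edge_counts.items() if count == 1]

# Two quads sharing one edge: the shared edge is interior, the other six are free.
print(free_edges([(1, 2, 5, 4), (2, 3, 6, 5)]))
```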
NASA open evidence
- NASA-STD-5002 scope explicitly defines load-analysis methodologies/practices/requirements for spacecraft/payloads.
- FEMCI public material (NASA GSFC) emphasizes repeatable model checking/verification workflows (e.g., Craig-Bampton/LTM checks).
Airbus/Boeing/JPL
- Detailed internal documentation structures are largely proprietary; I did not find authoritative public templates with ECSS-level clause detail.
How Atomizer spec compares
Strong: hierarchical project organization, model/mesh/connections/BC/loads/solver folders, study traceability, decision trail.
Missing vs ECSS-style rigor: explicit model-quality-check checklist artifacts and correlation criteria fields.
Recommended additions
- `02-kb/analysis/validation/model-quality-checks.md` (ECSS-like checklist)
- `02-kb/analysis/validation/unit-load-checks.md`
- `02-kb/analysis/validation/reduced-model-correlation.md`
- `04-reports/templates/fea-verification-report.md`
4) OpenMDAO project structure patterns
Key findings
OpenMDAO uses recorders/readers around a SQLite case database:
- `SqliteRecorder` writes the case DB (`cases.sql`) under the outputs dir (or a custom path).
- Case recording can include constraints/design vars/objectives/inputs/outputs/residuals.
- `CaseReader` enumerates sources (driver/system/solver/problem), case names, and recorded vars.
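A minimal sketch of that record-then-read loop on a toy model follows. Note that recent OpenMDAO versions place `cases.sql` under the problem outputs directory, so the read path may need adjusting:

```python
import openmdao.api as om

prob = om.Problem()
prob.model.add_subsystem('parab',
                         om.ExecComp('f = (x - 3.0)**2 + x*y + (y + 4.0)**2 - 3.0'),
                         promotes=['*'])
prob.model.add_design_var('x', lower=-50, upper=50)
prob.model.add_design_var('y', lower=-50, upper=50)
prob.model.add_objective('f')
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'

# Attach the recorder and choose what to record for every driver iteration.
prob.driver.add_recorder(om.SqliteRecorder('cases.sql'))
prob.driver.recording_options['record_desvars'] = True
prob.driver.recording_options['record_objectives'] = True

prob.setup()
prob.run_driver()
prob.cleanup()  # flush the case database

# Read the cases back; adjust the path if your version writes into an outputs directory.
cr = om.CaseReader('cases.sql')
print(cr.list_sources())
for case in cr.get_cases('driver'):
    print(case.name, case.get_design_vars(), case.get_objectives())
```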
How Atomizer spec compares
- Atomizer `study.db` + `iteration_history.csv` maps well to OpenMDAO case recording intent.
- `03-studies/{NN}_...` gives explicit campaign chronology (not native in OpenMDAO itself).
Recommended adoption patterns
- Add explicit recording policy template (what to always record per run); a sketch follows this list.
- Add source naming convention for run/case prefixes in study scripts.
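The recording policy could be as simple as a project-level dict applied to every driver before a run. A sketch follows, assuming OpenMDAO driver `recording_options`; the option subset shown is illustrative, not a prescribed minimum:

```python
# Hypothetical project-wide recording policy; keys follow OpenMDAO driver
# recording_options, and the subset recorded per run is a project choice.
RECORDING_POLICY = {
    'record_desvars': True,
    'record_objectives': True,
    'record_constraints': True,
    'record_inputs': True,
}

def apply_recording_policy(driver, policy=RECORDING_POLICY):
    """Apply the shared recording policy to an OpenMDAO driver before run_driver().

    A run/case prefix convention could be enforced alongside this (e.g. via the
    case_prefix argument to run_driver, if available in your OpenMDAO version).
    """
    for option, value in policy.items():
        driver.recording_options[option] = value
```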
5) Dakota (Sandia) multi-study organization
Key findings
- Dakota input built from six blocks (`environment`, `method`, `model`, `variables`, `interface`, `responses`).
- Tabular history export (`tabular_data`) writes variables/responses as columnar rows per evaluation.
- Restart mechanism (`dakota.rst`) supports resume/append/partial replay and restart utility processing.
Multi-study campaigns with many evaluations
Dakota supports large evaluation campaigns technically via restart + tabular history, but does not prescribe a rich project documentation structure for study evolution.
How Atomizer spec compares
- `03-studies/` plus per-study READMEs/REPORTs gives a stronger campaign story than typical Dakota practice.
- `study.db` (queryable) + CSV is a practical analog to Dakota restart/tabular outputs.
Recommended additions
- Add an explicit resume/restart SOP in `playbooks/` (inspired by Dakota restart discipline).
- Add a cross-study aggregation script template under `05-tools/scripts/` (sketched below).
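A minimal sketch of such an aggregation script, assuming each study folder under `03-studies/` carries an `iteration_history.csv` as the spec describes; the output path and column handling are placeholders:

```python
"""Aggregate iteration histories across 03-studies/ into one table.

Sketch only: assumes each study directory contains an iteration_history.csv;
the output location is a placeholder.
"""
from pathlib import Path
import pandas as pd

def aggregate_studies(studies_dir: str = "03-studies") -> pd.DataFrame:
    frames = []
    for csv_path in sorted(Path(studies_dir).glob("*/iteration_history.csv")):
        df = pd.read_csv(csv_path)
        df.insert(0, "study", csv_path.parent.name)  # keep campaign chronology visible
        frames.append(df)
    if not frames:
        raise FileNotFoundError(f"No iteration_history.csv files found under {studies_dir}")
    return pd.concat(frames, ignore_index=True)

if __name__ == "__main__":
    combined = aggregate_studies()
    combined.to_csv("04-reports/cross-study-history.csv", index=False)
    print(combined.groupby("study").size())
```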
6) Design rationale capture (IBIS/QOC/DRL) vs DECISIONS.md
Key findings
- IBIS: issue/positions/arguments structure.
- Modern practical equivalent in engineering/software orgs: ADRs (context/decision/consequences/status).
- Atomizer `DECISIONS.md` already includes context, options, decision, consequences, status.
Alignment assessment
DECISIONS.md is well aligned with practical best practice (ADR/IBIS-inspired), and better than ad-hoc notes.
Minor enhancement (optional)
Add explicit criteria field (for QOC-like traceability):
- performance
- risk
- schedule
- cost
7) LLM-native documentation patterns (state of the art)
What is currently established
No formal consensus engineering standard yet (as of 2026) for “LLM-native engineering docs.”
High-signal practical patterns from platform docs
- Chunk long content safely (OpenAI cookbook: token limits, chunking, weighted/segment embeddings).
- Use explicit structure tags for complex prompts/docs (Anthropic XML-tag guidance for clarity/parseability).
How Atomizer spec compares
Strong alignment already:
- clear hierarchy and entry points (`PROJECT.md`, `AGENT.md`)
- small focused docs by topic/component
- explicit decision records with status
Gaps / recommendations
- Add formal chunk-size guidance for long docs (target section lengths); see the sketch after this list.
- Add optional metadata schema (`owner`, `version`, `updated`, `confidence`, `source-links`) for KB files.
- Add `known-limitations` section in study reports.
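For the chunk-size guidance, a minimal sketch that flags KB sections exceeding a token budget; the 1,000-token target and the `tiktoken`/`cl100k_base` encoding choice are assumptions:

```python
"""Flag markdown sections that exceed a target token budget for chunking/retrieval.

Sketch only: the 1,000-token target and cl100k_base encoding are placeholders.
"""
import re
from pathlib import Path
import tiktoken

ENC = tiktoken.get_encoding("cl100k_base")
TARGET_TOKENS = 1000

def oversized_sections(md_path: str) -> list[tuple[str, int]]:
    """Split a markdown file on ATX headings; return (heading, token count) for large sections."""
    text = Path(md_path).read_text(encoding="utf-8")
    parts = re.split(r"(?m)^(#{1,6} .*)$", text)  # [preamble, h1, body1, h2, body2, ...]
    sections = []
    for i in range(1, len(parts), 2):
        heading, body = parts[i].strip(), parts[i + 1]
        n_tokens = len(ENC.encode(heading + body))
        if n_tokens > TARGET_TOKENS:
            sections.append((heading, n_tokens))
    return sections

if __name__ == "__main__":
    for kb_file in Path("02-kb").rglob("*.md"):
        for heading, n in oversized_sections(str(kb_file)):
            print(f"{kb_file}: '{heading}' is {n} tokens (> {TARGET_TOKENS})")
```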
8) modeFRONTIER workflow patterns
Key findings
Public ESTECO material shows:
- separation of Workflow Editor (automation graph/nodes, I/O links) and Planner (design vars, bounds, objectives, algorithms).
- support for multiple optimization/exploration plans in a project.
- strong emphasis on DOE + optimization + workflow reuse.
How Atomizer spec compares
Equivalent conceptual split exists implicitly:
- workflow/config in `atomizer_spec.json` + hooks/scripts
- campaign execution in `03-studies/`
Recommended adoption
- Make the split explicit in docs:
- “Automation Workflow” section (toolchain graph)
- “Optimization Plan” section (vars/bounds/objectives/algorithms)
Consolidated Gap List (from all topics)
High-priority gaps
- NASA-style explicit V&V/UQ records (domains, acceptance criteria, appropriateness, defects log).
- ECSS-style model-quality and unit-load verification checklists.
- ASME-style explicit separation of code verification vs solution verification.
Medium-priority gaps
- Formal restart/resume SOP for long campaigns.
- Structured validation metrics + acceptance thresholds.
Low-priority enhancements
- LLM chunking/metadata guidance.
- Explicit workflow-vs-plan split language (modeFRONTIER-inspired).
Source URLs (for claims)
Atomizer spec
/home/papa/obsidian-vault/2-Projects/P-Atomizer-Project-Standard/00-SPECIFICATION.md
NASA-STD-7009B
- NASA standard landing page: https://standards.nasa.gov/standard/NASA/NASA-STD-7009
- PDF used for requirement extraction: https://standards.nasa.gov/sites/default/files/standards/NASA/B/1/NASA-STD-7009B-Final-3-5-2024.pdf
ASME V&V 10 context
- ASME listing: https://www.asme.org/codes-standards/find-codes-standards/standard-for-verification-and-validation-in-computational-solid-mechanics
- ASME V&V 10.1 illustration listing: https://www.asme.org/codes-standards/find-codes-standards/an-illustration-of-the-concepts-of-verification-and-validation-in-computational-solid-mechanics
- Public summary (committee context and VVUQ framing): https://www.machinedesign.com/automation-iiot/article/21270513/standardizing-computational-models-verification-validation-and-uncertainty-quantification
- Standard metadata mirror (purpose statement): https://www.bsbedge.com/standard/standard-for-verification-and-validation-in-computational-solid-mechanics/V&V10
ECSS / aerospace FEA
- ECSS FEM standard page: https://ecss.nl/standard/ecss-e-st-32-03c-structural-finite-element-models/
- ECSS-E-ST-32-03C PDF: https://ecss.nl/wp-content/uploads/standards/ecss-e/ECSS-E-ST-32-03C31July2008.pdf
- NASA-STD-5002 page: https://standards.nasa.gov/standard/nasa/nasa-std-5002
- NASA FEMCI book PDF: https://etd.gsfc.nasa.gov/wp-content/uploads/2025/04/FEMCI-The-Book.pdf
OpenMDAO
- Case recording options: https://openmdao.org/newdocs/versions/latest/features/recording/case_recording_options.html
- Case reader: https://openmdao.org/newdocs/versions/latest/features/recording/case_reader.html
Dakota (Sandia)
- Input file structure: https://snl-dakota.github.io/docs/6.18.0/users/usingdakota/inputfile.html
- Tabular data output: https://snl-dakota.github.io/docs/6.18.0/users/usingdakota/reference/environment-tabular_data.html
- Restarting Dakota: https://snl-dakota.github.io/docs/6.18.0/users/usingdakota/running/restart.html
Design rationale
- IBIS overview: https://en.wikipedia.org/wiki/Issue-based_information_system
- Design rationale overview: https://en.wikipedia.org/wiki/Design_rationale
- ADR practice repo/templates: https://github.com/joelparkerhenderson/architecture-decision-record
LLM-native documentation patterns
- OpenAI cookbook (long input chunking for embeddings): https://cookbook.openai.com/examples/embedding_long_inputs
- Anthropic prompt structuring with XML tags: https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/use-xml-tags
modeFRONTIER
- Product overview/capabilities: https://engineering.esteco.com/modefrontier/
- UM20 training (Workflow Editor vs Planner): https://um20.esteco.com/speeches/training-1-introduction-to-modefrontier-2020-from-worflow-editor-to-optimization-planner/
Confidence Notes
- High confidence: NASA-STD-7009B requirements mapping, OpenMDAO, Dakota, ECSS model-check patterns, modeFRONTIER workflow/planner split.
- Medium confidence: ASME V&V 10 detailed requirement mapping (full 2019 text not publicly open).
- Low confidence / constrained by public sources: Airbus/Boeing/JPL internal documentation template details.