Atomizer Development Roadmap

Vision: Transform Atomizer into an LLM-native engineering assistant for optimization

Last Updated: 2025-01-15


Vision Statement

Atomizer will become an LLM-driven optimization framework in which the AI acts as a scientist, programmer, and coworker that can:

  • Understand natural language optimization requests
  • Configure studies autonomously
  • Write custom Python functions on-the-fly during optimization
  • Navigate and extend its own codebase
  • Make engineering decisions based on data analysis
  • Generate comprehensive optimization reports
  • Continuously expand its own capabilities through learning

Architecture Philosophy

LLM-First Design Principles

  1. Discoverability: Every feature must be discoverable and usable by the LLM via the feature registry
  2. Extensibility: Easy to add new capabilities without modifying core engine
  3. Safety: Validate all generated code, sandbox execution, rollback on errors
  4. Transparency: Log all LLM decisions and generated code for auditability
  5. Human-in-the-loop: Confirm critical decisions (e.g., deleting studies, pushing results)
  6. Documentation as Code: Auto-generate docs from code with semantic metadata

Development Phases

Phase 1: Foundation - Plugin & Extension System

Timeline: 2 weeks
Status: 🔵 Not Started
Goal: Make Atomizer extensible and LLM-navigable

Deliverables

  1. Plugin Architecture

    • Hook system for optimization lifecycle
      • pre_mesh: Execute before meshing
      • post_mesh: Execute after meshing, before solve
      • pre_solve: Execute before solver launch
      • post_solve: Execute after solve, before extraction
      • post_extraction: Execute after result extraction
    • Python script execution at any optimization stage
    • Journal script injection points
    • Custom objective/constraint function registration
  2. Feature Registry

    • Create optimization_engine/feature_registry.json
    • Centralized catalog of all capabilities
    • Metadata for each feature:
      • Function signature with type hints
      • Natural language description
      • Usage examples (code snippets)
      • When to use (semantic tags)
      • Parameters with validation rules
    • Auto-update mechanism when new features are added
  3. Documentation System

    • Create docs/llm/ directory for LLM-readable docs
    • Function catalog with semantic search
    • Usage patterns library
    • Auto-generate from docstrings and registry

Files to Create:

optimization_engine/
├── plugins/
│   ├── __init__.py
│   ├── hooks.py              # Hook system core
│   ├── hook_manager.py       # Hook registration and execution
│   ├── validators.py         # Code validation utilities
│   └── examples/
│       ├── pre_mesh_example.py
│       └── custom_objective_example.py
├── feature_registry.json     # Capability catalog
└── registry_manager.py       # Registry CRUD operations

docs/llm/
├── capabilities.md           # Human-readable capability overview
├── examples.md               # Usage examples
└── api_reference.md          # Auto-generated API docs
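The hook API in hooks.py could follow a decorator-based registry pattern. A minimal sketch, with all names hypothetical rather than a final API:

```python
# Hypothetical sketch of the hook system planned for plugins/hooks.py.
from collections import defaultdict

# Lifecycle stages named in the roadmap.
STAGES = ("pre_mesh", "post_mesh", "pre_solve", "post_solve", "post_extraction")

_hooks = defaultdict(list)

def register_hook(stage):
    """Decorator that registers a callback for an optimization lifecycle stage."""
    if stage not in STAGES:
        raise ValueError(f"Unknown stage: {stage!r}")
    def decorator(func):
        _hooks[stage].append(func)
        return func
    return decorator

def run_hooks(stage, context):
    """Invoke every callback registered for `stage` with the shared trial context."""
    for func in _hooks[stage]:
        func(context)

# Example plugin: log mesh statistics after meshing.
@register_hook("post_mesh")
def log_mesh_stats(context):
    context["log"].append(f"mesh has {context['element_count']} elements")
```

Plugins would register callbacks for any lifecycle stage, and the optimization loop would call run_hooks(stage, context) at each transition; stages with no registered hooks are simply no-ops.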

Phase 2: LLM Integration Layer

Timeline: 2 weeks
Status: 🔵 Not Started
Goal: Enable natural language control of Atomizer

Deliverables

  1. Claude Skill for Atomizer

    • Create .claude/skills/atomizer.md
    • Define skill with full context of capabilities
    • Access to feature registry
    • Can read/write optimization configs
    • Execute Python scripts and journal files
  2. Natural Language Parser

    • Intent recognition system
      • Create study
      • Configure optimization
      • Analyze results
      • Generate report
      • Execute custom code
    • Entity extraction (parameters, metrics, constraints)
    • Ambiguity resolution via clarifying questions
  3. Conversational Workflow Manager

    • Multi-turn conversation state management
    • Context preservation across requests
    • Validation and confirmation before execution
    • Undo/rollback mechanism

Example Interactions:

User: "Optimize for minimal displacement, vary thickness from 2-5mm"
→ LLM: Creates study, asks for file drop, configures objective + design var

User: "Add RSS function combining stress and displacement"
→ LLM: Writes Python function, registers as custom objective, validates

User: "Use surrogate to predict these 10 parameter sets"
→ LLM: Checks surrogate quality (R², CV score), runs predictions or warns

Files to Create:

.claude/
└── skills/
    └── atomizer.md           # Claude skill definition

optimization_engine/
├── llm_interface/
│   ├── __init__.py
│   ├── intent_classifier.py  # NLP intent recognition
│   ├── entity_extractor.py   # Parameter/metric extraction
│   ├── workflow_manager.py   # Conversation state
│   └── validators.py         # Input validation
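The intent categories above can be sketched with a trivial keyword matcher. A production intent_classifier.py would likely use an LLM or a trained classifier; this sketch (names and keywords hypothetical) only illustrates the intent set from the roadmap:

```python
# Hypothetical sketch of the intent recognition planned for intent_classifier.py.
# A real implementation would use an LLM or trained classifier; this keyword
# matcher only illustrates the intent categories listed in the roadmap.
INTENT_KEYWORDS = {
    "create_study": ["new optimization", "create study", "new study"],
    "configure": ["objective", "design variable", "constraint", "vary"],
    "analyze": ["summarize", "analyze", "sensitivity"],
    "report": ["report", "pdf", "html"],
    "execute_code": ["function", "script", "custom code"],
}

def classify_intent(request: str) -> str:
    """Return the first intent whose keywords appear in the request, else 'unknown'."""
    text = request.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "unknown"
```

Ambiguity resolution would kick in when classify_intent returns "unknown" or when entity extraction finds missing parameters, prompting a clarifying question instead of guessing.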

Phase 3: Dynamic Code Generation

Timeline: 3 weeks
Status: 🔵 Not Started
Goal: LLM writes and integrates custom code during optimization

Deliverables

  1. Custom Function Generator

    • Template system for common patterns:
      • RSS (Root Sum Square) of multiple metrics
      • Weighted objectives
      • Custom constraints (e.g., stress/yield_strength < 1)
      • Conditional objectives (if-then logic)
    • Code validation pipeline (syntax check, safety scan)
    • Unit test auto-generation
    • Auto-registration in feature registry
    • Persistent storage in optimization_engine/custom_functions/
  2. Journal Script Generator

    • Generate NX journal scripts from natural language
    • Library of common operations:
      • Modify geometry (fillets, chamfers, thickness)
      • Apply loads and boundary conditions
      • Extract custom data (centroid, inertia, custom expressions)
    • Validation against NXOpen API
    • Dry-run mode for testing
  3. Safe Execution Environment

    • Sandboxed Python execution (RestrictedPython or similar)
    • Whitelist of allowed imports
    • Error handling with detailed logs
    • Rollback mechanism on failure
    • Logging of all generated code to audit trail

Files to Create:

optimization_engine/
├── custom_functions/
│   ├── __init__.py
│   ├── templates/
│   │   ├── rss_template.py
│   │   ├── weighted_sum_template.py
│   │   └── constraint_template.py
│   ├── generator.py          # Code generation engine
│   ├── validator.py          # Safety validation
│   └── sandbox.py            # Sandboxed execution
├── code_generation/
│   ├── __init__.py
│   ├── journal_generator.py  # NX journal script generation
│   └── function_templates.py # Jinja2 templates
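The import-whitelist part of the validation pipeline can be done statically with the stdlib ast module. A minimal sketch of what validator.py might check (whitelist contents and function names are illustrative):

```python
# Hypothetical sketch of the import-whitelist check planned for validator.py,
# using stdlib ast for static analysis of LLM-generated code.
import ast

ALLOWED_IMPORTS = {"math", "numpy", "statistics"}  # example whitelist

def validate_code(source: str) -> list:
    """Return a list of safety violations found in the generated source."""
    violations = []
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc}"]
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            else:
                names = [node.module or ""]
            for name in names:
                if name.split(".")[0] not in ALLOWED_IMPORTS:
                    violations.append(f"disallowed import: {name}")
    return violations
```

An empty list would mean the code passes this gate and can proceed to sandboxed execution; any violation would block execution and be written to the audit trail.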

Phase 4: Intelligent Analysis & Decision Support

Timeline: 3 weeks
Status: 🔵 Not Started
Goal: LLM analyzes results and guides engineering decisions

Deliverables

  1. Result Analyzer

    • Statistical analysis module
      • Convergence detection (plateau in objective)
      • Pareto front identification (multi-objective)
      • Sensitivity analysis (which params matter most)
      • Outlier detection
    • Trend analysis (monotonic relationships, inflection points)
    • Recommendations engine (refine mesh, adjust bounds, add constraints)
  2. Surrogate Model Manager

    • Quality metrics calculation
      • R² (coefficient of determination)
      • CV score (cross-validation)
      • Prediction error distribution
      • Confidence intervals
    • Surrogate fitness assessment
      • "Ready to use" threshold (e.g., R² > 0.9)
      • Warning if predictions unreliable
    • Active learning suggestions (which points to sample next)
  3. Decision Assistant

    • Trade-off interpreter (explain Pareto fronts)
    • "What-if" analysis (predict outcome of parameter change)
    • Constraint violation diagnosis
    • Next-step recommendations

Example:

User: "Summarize optimization results"
→ LLM:
   Analyzes 50 trials, identifies best design at trial #34:
   - wall_thickness = 3.2mm (converged from initial 5mm)
   - max_stress = 187 MPa (target: 200 MPa ✓)
   - mass = 0.45 kg (15% lighter than baseline)

   Issues detected:
   - Stress constraint violated in 20% of trials (trials 5,12,18...)
   - Displacement shows high sensitivity to thickness (Sobol index: 0.78)

   Recommendations:
   1. Relax stress limit to 210 MPa OR
   2. Add fillet radius as design variable (currently fixed at 2mm)
   3. Consider thickness > 3mm for robustness

Files to Create:

optimization_engine/
├── analysis/
│   ├── __init__.py
│   ├── statistical_analyzer.py  # Convergence, sensitivity
│   ├── surrogate_quality.py     # R², CV, confidence intervals
│   ├── decision_engine.py       # Recommendations
│   └── visualizers.py           # Plot generators
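The "ready to use" gate in surrogate_quality.py reduces to a standard R² computation plus a threshold. A dependency-free sketch (function names hypothetical):

```python
# Hypothetical sketch of the R² calculation planned for surrogate_quality.py.
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

def surrogate_is_ready(y_true, y_pred, threshold=0.9):
    """Apply the 'ready to use' gate from the roadmap (e.g. R² > 0.9)."""
    return r_squared(y_true, y_pred) > threshold
```

Cross-validation would apply the same metric to held-out folds; a high training R² with a low CV score is exactly the case where the assistant should warn that predictions are unreliable.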

Phase 5: Automated Reporting

Timeline: 2 weeks
Status: 🔵 Not Started
Goal: Generate comprehensive HTML/PDF optimization reports

Deliverables

  1. Report Generator

    • Template system (Jinja2)
      • Executive summary (1-page overview)
      • Detailed analysis (convergence plots, sensitivity charts)
      • Appendices (all trial data, config files)
    • Auto-generated plots (Chart.js for web, Matplotlib for PDF)
    • Embedded data tables (sortable, filterable)
    • LLM-written narrative explanations
  2. Multi-Format Export

    • HTML (interactive, shareable via link)
    • PDF (static, for archival/print)
    • Markdown (for version control, GitHub)
    • JSON (machine-readable, for post-processing)
  3. Smart Narrative Generation

    • LLM analyzes data and writes insights in natural language
    • Explains why certain designs performed better
    • Highlights unexpected findings (e.g., "Counter-intuitively, reducing thickness improved stress")
    • Includes engineering recommendations

Files to Create:

optimization_engine/
├── reporting/
│   ├── __init__.py
│   ├── templates/
│   │   ├── executive_summary.html.j2
│   │   ├── detailed_analysis.html.j2
│   │   └── markdown_report.md.j2
│   ├── report_generator.py      # Main report engine
│   ├── narrative_writer.py      # LLM-driven text generation
│   └── exporters/
│       ├── html_exporter.py
│       ├── pdf_exporter.py      # Using WeasyPrint or similar
│       └── markdown_exporter.py
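The report pipeline is template substitution at its core. Jinja2 is the planned engine; this sketch uses stdlib string.Template only to stay dependency-free, and all template fields are illustrative:

```python
# Sketch of the report-rendering flow. The roadmap specifies Jinja2 templates;
# stdlib string.Template stands in here to keep the example dependency-free.
from string import Template

SUMMARY_TEMPLATE = Template(
    "Optimization Report: $study\n"
    "Best trial: #$best_trial\n"
    "Objective value: $objective\n"
)

def render_summary(study: str, best_trial: int, objective: float) -> str:
    """Fill the executive-summary template from analysis results."""
    return SUMMARY_TEMPLATE.substitute(
        study=study, best_trial=best_trial, objective=f"{objective:.3f}"
    )
```

The narrative_writer would supply LLM-written prose for the insight sections, while numeric fields like these come straight from the analysis module, keeping generated text and data cleanly separated.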

Phase 6: NX MCP Enhancement

Timeline: 4 weeks
Status: 🔵 Not Started
Goal: Deep NX integration via Model Context Protocol

Deliverables

  1. NX Documentation MCP Server

    • Index full Siemens NX API documentation
    • Semantic search across NX docs (embeddings + vector DB)
    • Code examples from official documentation
    • Auto-suggest relevant API calls based on task
  2. Advanced NX Operations

    • Geometry manipulation library
      • Parametric CAD automation (change sketches, features)
      • Assembly management (add/remove components)
      • Advanced meshing controls (refinement zones, element types)
    • Multi-physics setup
      • Thermal-structural coupling
      • Modal analysis
      • Fatigue analysis setup
  3. Feature Bank Expansion

    • Library of 50+ pre-built NX operations
    • Topology optimization integration
    • Generative design workflows
    • Each feature documented in registry with examples

Files to Create:

mcp/
├── nx_documentation/
│   ├── __init__.py
│   ├── server.py                # MCP server implementation
│   ├── indexer.py               # NX docs indexing
│   ├── embeddings.py            # Vector embeddings for search
│   └── vector_db.py             # Chroma/Pinecone integration
├── nx_features/
│   ├── geometry/
│   │   ├── fillets.py
│   │   ├── chamfers.py
│   │   └── thickness_modifier.py
│   ├── analysis/
│   │   ├── thermal_structural.py
│   │   ├── modal_analysis.py
│   │   └── fatigue_setup.py
│   └── feature_registry.json    # NX feature catalog
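At its core, the semantic search in embeddings.py ranks documentation chunks by cosine similarity between a query embedding and stored document embeddings. A sketch with tiny hand-made vectors standing in for real model embeddings (names hypothetical):

```python
# Hypothetical sketch of the semantic search planned for embeddings.py.
# Real embeddings would come from an embedding model and live in a vector DB;
# tiny hand-made vectors stand in here to show the ranking step.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vec, doc_vecs):
    """Return document ids ranked by similarity to the query embedding."""
    scored = [(doc_id, cosine_similarity(query_vec, vec))
              for doc_id, vec in doc_vecs.items()]
    return [doc_id for doc_id, _ in sorted(scored, key=lambda s: -s[1])]
```

The MCP server would embed the user's question, run this ranking over the indexed NX docs, and return the top chunks (with code examples) as context for the API suggestion.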

Phase 7: Self-Improving System

Timeline: 4 weeks
Status: 🔵 Not Started
Goal: Atomizer learns from usage and expands itself

Deliverables

  1. Feature Learning System

    • When LLM creates custom function, prompt user to save to library
    • User provides name + description
    • Auto-update feature registry with new capability
    • Version control for user-contributed features
  2. Best Practices Database

    • Store successful optimization strategies
    • Pattern recognition (e.g., "Adding fillets always reduces stress by 10-20%")
    • Similarity search (find similar past optimizations)
    • Recommend strategies for new problems
  3. Continuous Documentation

    • Auto-generate docs when new features are added
    • Keep examples updated with latest API
    • Version control for all generated code
    • Changelog auto-generation

Files to Create:

optimization_engine/
├── learning/
│   ├── __init__.py
│   ├── feature_learner.py       # Capture and save new features
│   ├── pattern_recognizer.py    # Identify successful patterns
│   ├── similarity_search.py     # Find similar optimizations
│   └── best_practices_db.json   # Pattern library
├── auto_documentation/
│   ├── __init__.py
│   ├── doc_generator.py         # Auto-generate markdown docs
│   ├── changelog_builder.py     # Track feature additions
│   └── example_extractor.py     # Extract examples from code

Final Architecture

Atomizer/
├── optimization_engine/
│   ├── core/                    # Existing optimization loop
│   ├── plugins/                 # NEW: Hook system (Phase 1)
│   │   ├── hooks.py
│   │   ├── pre_mesh/
│   │   ├── post_solve/
│   │   └── custom_objectives/
│   ├── custom_functions/        # NEW: User/LLM generated code (Phase 3)
│   ├── llm_interface/           # NEW: Natural language control (Phase 2)
│   ├── analysis/                # NEW: Result analysis (Phase 4)
│   ├── reporting/               # NEW: Report generation (Phase 5)
│   ├── learning/                # NEW: Self-improvement (Phase 7)
│   └── feature_registry.json    # NEW: Capability catalog (Phase 1)
├── .claude/
│   └── skills/
│       └── atomizer.md          # NEW: Claude skill (Phase 2)
├── mcp/
│   ├── nx_documentation/        # NEW: NX docs MCP server (Phase 6)
│   └── nx_features/             # NEW: NX feature bank (Phase 6)
├── docs/
│   └── llm/                     # NEW: LLM-readable docs (Phase 1)
│       ├── capabilities.md
│       ├── examples.md
│       └── api_reference.md
├── dashboard/                   # Existing web UI
└── examples/                    # Example projects

Implementation Priority

Immediate (Next 2 weeks)

  • Phase 1.1: Plugin/hook system in optimization loop
  • Phase 1.2: Feature registry JSON
  • Phase 1.3: Basic documentation structure

Short-term (1 month)

  • Phase 2: Claude skill + natural language interface
  • Phase 3.1: Custom function generator (RSS, weighted objectives)
  • Phase 4.1: Result analyzer with basic statistics

Medium-term (2-3 months)

  • Phase 4.2: Surrogate quality checker
  • Phase 5: HTML report generator
  • Phase 6.1: NX documentation MCP

Long-term (3-6 months)

  • Phase 4.3: Advanced decision support
  • Phase 6.2: Full NX feature bank
  • Phase 7: Self-improving system

Example Use Cases

Use Case 1: Natural Language Optimization Setup

User: "Let's make a new optimization"

LLM: "I'll help you set up a new optimization study. What would you like to name it?"

User: "bracket_weight_reduction"

LLM: "Study created! Please drop your .sim and .prt files into:
     C:\Atomizer\optimization_results\bracket_weight_reduction\sim\"

User: "Done. I want to minimize displacement, play with wall_thickness (3-8mm)
      and fillet_radius (1-5mm)"

LLM: "Got it! Let me configure:
     - Objective: Minimize max displacement
     - Design variables:
       * wall_thickness: 3.0 - 8.0 mm
       * fillet_radius: 1.0 - 5.0 mm

     How many iterations would you like? (Recommend 50 for 2 variables)"

User: "50 sounds good"

LLM: "Perfect! Configuration saved. Ready to start optimization?
     This will run 50 FEA simulations (estimated time: 2-3 hours)"

User: "Start it!"

LLM: "Optimization launched! 🚀
     - Live dashboard: http://localhost:8080/dashboard
     - Study: bracket_weight_reduction
     - Status: Trial 1/50 running..."

Use Case 2: Custom Metric via LLM

User: "Let's implement a function to calculate the RSS of stress and displacement"

LLM: "I'll create a Root Sum Square (RSS) function for you. Let me confirm:

     RSS = sqrt(stress² + displacement²)

     Should I normalize the metrics first? (Recommended for different units)"

User: "Yes, normalize to their targets"

LLM: "Creating custom objective function... Done!

     File: optimization_engine/custom_functions/rss_stress_displacement.py

     from math import sqrt

     def rss_objective(stress, displacement, stress_target=200, disp_target=1.0):
         norm_stress = stress / stress_target
         norm_disp = displacement / disp_target
         return sqrt(norm_stress**2 + norm_disp**2)

     Added to feature registry. You can now use 'rss_stress_displacement'
     as an objective in your optimization config."

Use Case 3: Surrogate Model Usage

User: "Can you use the surrogate model to predict stress for these parameters:
      wall_thickness = [3.5, 4.0, 4.5, 5.0, 5.5]"

LLM: "Checking surrogate model quality...

     Surrogate Model Status:
     - Training data: 50 trials
     - R² score: 0.94 (excellent)
     - CV score: 0.91 (good generalization)
     - Confidence: HIGH ✓

     Predictions:
     | Thickness | Predicted Stress | Confidence Interval |
     |-----------|------------------|---------------------|
     | 3.5 mm    | 245.2 MPa        | ±12.3 MPa           |
     | 4.0 mm    | 198.7 MPa        | ±8.1 MPa            |
     | 4.5 mm    | 165.4 MPa        | ±7.2 MPa            |
     | 5.0 mm    | 142.1 MPa        | ±9.5 MPa            |
     | 5.5 mm    | 126.8 MPa        | ±11.8 MPa           |

     Note: 4.0mm appears optimal (stress closest to 200 MPa target)"

Use Case 4: Optimization Report

User: "Make an optimization report for bracket_weight_reduction"

LLM: "Generating comprehensive optimization report...

     Analyzing 50 completed trials...
     Writing narrative insights...
     Generating plots...

     Report complete!

     📄 HTML: optimization_results/bracket_weight_reduction/report.html
     📄 PDF: optimization_results/bracket_weight_reduction/report.pdf

     Key Findings:
     - Best design: Trial #34
       * wall_thickness = 4.2 mm
       * fillet_radius = 3.1 mm
       * Displacement = 0.78 mm (22% below target)
       * Mass = 0.51 kg (18% lighter than baseline)

     - Sensitivity: Thickness has 3x more impact than fillet radius
     - Recommendation: Lock thickness at 4.2mm, explore other variables

     Open report? [Y/n]"

Success Metrics

Phase 1 Success

  • 10+ plugins created and tested
  • Feature registry contains 50+ capabilities
  • LLM can discover and use all features

Phase 2 Success

  • LLM can create optimization from natural language in <5 turns
  • 90% of user requests understood correctly
  • Zero manual JSON editing required

Phase 3 Success

  • LLM generates 10+ custom functions with zero errors
  • All generated code passes safety validation
  • Users save 50% time vs. manual coding

Phase 4 Success

  • Surrogate quality detection 95% accurate
  • Recommendations lead to 30% faster convergence
  • Users report higher confidence in results

Phase 5 Success

  • Reports generated in <30 seconds
  • Narrative quality rated 4/5 by engineers
  • 80% of reports used without manual editing

Phase 6 Success

  • NX MCP answers 95% of API questions correctly
  • Feature bank covers 80% of common workflows
  • Users write 50% less manual journal code

Phase 7 Success

  • 20+ user-contributed features in library
  • Pattern recognition identifies 10+ best practices
  • Documentation auto-updates with zero manual effort

Risk Mitigation

Risk: LLM generates unsafe code

Mitigation:

  • Sandbox all execution
  • Whitelist allowed imports
  • Code review by static analysis tools
  • Rollback on any error

Risk: Feature registry becomes stale

Mitigation:

  • Auto-update on code changes (pre-commit hook)
  • CI/CD checks for registry sync
  • Weekly audit of documented vs. actual features

Risk: NX API changes break features

Mitigation:

  • Version pinning for NX (currently 2412)
  • Automated tests against NX API
  • Migration guides for version upgrades

Risk: User overwhelmed by LLM autonomy

Mitigation:

  • Confirm before executing destructive actions
  • "Explain mode" that shows what LLM plans to do
  • Undo/rollback for all operations

Next Steps

  1. Immediate: Start Phase 1 - Plugin System

    • Create optimization_engine/plugins/ structure
    • Design hook API
    • Implement first 3 hooks (pre_mesh, post_solve, custom_objective)
  2. Week 2: Feature Registry

    • Extract current capabilities into registry JSON
    • Write registry manager (CRUD operations)
    • Auto-generate initial docs
  3. Week 3: Claude Skill

    • Draft .claude/skills/atomizer.md
    • Test with sample optimization workflows
    • Iterate based on LLM performance
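Extracting a current capability into the registry (Week 2 above) could look like the following sketch; the entry fields mirror the Phase 1 metadata list, but the schema and all names here are illustrative, not final:

```python
# Hypothetical sketch of a feature_registry.json entry and its registration,
# mirroring the metadata fields listed in Phase 1 (schema illustrative).
import json

entry = {
    "name": "rss_objective",
    "signature": "rss_objective(stress: float, displacement: float) -> float",
    "description": "Root sum square of normalized stress and displacement.",
    "example": "rss_objective(187.0, 0.78)",
    "tags": ["objective", "multi-metric"],
    "parameters": {"stress": {"min": 0}, "displacement": {"min": 0}},
}

def register_feature(registry: dict, new_entry: dict) -> dict:
    """Add or replace a feature entry, keyed by its name."""
    registry[new_entry["name"]] = new_entry
    return registry

registry = register_feature({}, entry)
registry_json = json.dumps(registry, indent=2)  # what gets written to disk
```

The registry manager would wrap this with file I/O, validation against the schema, and the auto-update hook that keeps the JSON in sync with the codebase.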

Last Updated: 2025-01-15
Maintainer: Antoine Polvé (antoine@atomaste.com)
Status: 🔵 Planning Phase