LLM-Orchestrated Atomizer Workflow

Core Philosophy

Atomizer is LLM-first. The user talks to Claude Code, describes what they want in natural language, and the LLM orchestrates everything:

  • Interprets engineering intent
  • Creates optimized configurations
  • Sets up study structure
  • Runs optimizations
  • Generates reports
  • Implements custom features

The dashboard is for monitoring, not setup.


Architecture: Skills + Protocols + Validators

┌─────────────────────────────────────────────────────────────────────────┐
│                         USER (Natural Language)                          │
│  "I want to optimize this drone arm for weight while keeping it stiff"  │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────┐
│                      CLAUDE CODE (LLM Orchestrator)                      │
│                                                                          │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐ │
│  │    SKILLS    │  │  PROTOCOLS   │  │  VALIDATORS  │  │   KNOWLEDGE  │ │
│  │ (.claude/    │  │ (docs/06_)   │  │ (Python)     │  │   (docs/)    │ │
│  skills/)    │  │              │  │              │  │              │ │
│  └──────────────┘  └──────────────┘  └──────────────┘  └──────────────┘ │
│         │                 │                 │                 │         │
│         └─────────────────┴─────────────────┴─────────────────┘         │
│                                    │                                     │
│                          ORCHESTRATION LOGIC                             │
│                    (Intent → Plan → Execute → Validate)                  │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────┐
│                         ATOMIZER ENGINE                                  │
│                                                                          │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐    │
│  │   Config    │  │   Runner    │  │  Extractors │  │   Reports   │    │
│  │  Generator  │  │  (FEA/NN)   │  │  (OP2/CAD)  │  │  Generator  │    │
│  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘    │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────┐
│                      OUTPUTS (User-Visible)                              │
│                                                                          │
│  • study/1_setup/optimization_config.json  (config)                     │
│  • study/2_results/study.db                (optimization data)          │
│  • reports/                                (visualizations)              │
│  • Dashboard at localhost:3000             (live monitoring)            │
└─────────────────────────────────────────────────────────────────────────┘

The Three Pillars

1. SKILLS (What LLM Can Do)

Location: .claude/skills/*.md

Skills are instruction sets that tell Claude Code how to perform specific tasks with high rigor. They're like recipes that ensure consistency.

.claude/skills/
├── create-study.md           # Create new optimization study
├── analyze-model.md          # Analyze NX model for optimization
├── configure-surrogate.md    # Setup NN surrogate settings
├── generate-report.md        # Create performance reports
├── troubleshoot.md           # Debug common issues
└── extend-feature.md         # Add custom functionality

2. PROTOCOLS (How To Do It Right)

Location: docs/06_PROTOCOLS_DETAILED/

Protocols are step-by-step procedures that define the correct sequence for complex operations. They ensure rigor and reproducibility.

docs/06_PROTOCOLS_DETAILED/
├── PROTOCOL_01_STUDY_SETUP.md
├── PROTOCOL_02_MODEL_VALIDATION.md
├── PROTOCOL_03_OPTIMIZATION_RUN.md
├── PROTOCOL_11_MULTI_OBJECTIVE.md
├── PROTOCOL_12_HYBRID_SURROGATE.md
└── LLM_ORCHESTRATED_WORKFLOW.md  (this file)

3. VALIDATORS (Verify It's Correct)

Location: optimization_engine/validators/

Validators are Python modules that check configurations, outputs, and state. They catch errors before they cause problems.

# Example: optimization_engine/validators/config_validator.py
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    errors: list = field(default_factory=list)
    warnings: list = field(default_factory=list)

    @property
    def is_valid(self) -> bool:
        return not self.errors

def validate_optimization_config(config: dict) -> ValidationResult:
    """Ensure the config is valid before running."""
    errors = []
    warnings = []

    # Check required fields
    if 'design_variables' not in config:
        errors.append("Missing design_variables")

    # Check that bounds make sense (min must be strictly below max)
    for var in config.get('design_variables', []):
        bounds = var.get('bounds')
        name = var.get('parameter', '<unnamed>')
        if not bounds or len(bounds) != 2:
            errors.append(f"{name}: bounds must be [min, max]")
        elif bounds[0] >= bounds[1]:
            errors.append(f"{name}: min >= max")

    return ValidationResult(errors, warnings)

Master Skill: /create-study

This is the primary entry point. When the user says "I want to optimize X", this skill orchestrates everything.

Skill File: .claude/skills/create-study.md

# Create Study Skill

## Trigger
User wants to create a new optimization study.

## Required Information (Gather via conversation)

### 1. Model Information
- [ ] NX model file location (.prt)
- [ ] Simulation file (.sim)
- [ ] FEM file (.fem)
- [ ] Analysis types (static, modal, buckling, etc.)

### 2. Engineering Goals
- [ ] What to optimize (minimize mass, maximize stiffness, etc.)
- [ ] Target values (if any)
- [ ] Constraints (max stress, min frequency, etc.)
- [ ] Engineering context (what is this part for?)

### 3. Design Variables
- [ ] Which parameters can change
- [ ] Bounds for each (min/max)
- [ ] Integer vs continuous

### 4. Optimization Settings
- [ ] Number of trials
- [ ] Single vs multi-objective
- [ ] Enable NN surrogate? (recommend for >50 trials)

## Execution Steps

### Step 1: Analyze Model
Read the NX model to:
- Extract existing expressions (potential design variables)
- Identify geometry features
- Check simulation setup

### Step 2: Generate Configuration
Create optimization_config.json with:
- All gathered information
- Sensible defaults for missing info
- Appropriate protocol selection

### Step 3: Validate Configuration
Run config validator to check:
- All required fields present
- Bounds are reasonable
- Extraction methods are correct

### Step 4: Create Study Structure

studies/{study_name}/
├── 1_setup/
│   ├── model/                      # Copy of NX files
│   ├── optimization_config.json
│   └── workflow_config.json
├── 2_results/
└── README.md


### Step 5: Confirm with User
Show summary:
- Design variables and bounds
- Objectives and constraints
- Recommended settings
- Estimated time

### Step 6: Start Optimization (if confirmed)
Run the optimization with appropriate flags.

## Error Handling
- If model not found: Ask for correct path
- If expressions not found: List available ones
- If config invalid: Show errors, suggest fixes

## Output
- Valid study folder structure
- Validated optimization_config.json
- README with study description
- Optional: Start optimization run
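The study skeleton from Step 4 can be created with a short helper; a minimal sketch (the function name `create_study_structure` is illustrative, not an existing Atomizer API):

```python
from pathlib import Path

def create_study_structure(root: Path, study_name: str) -> Path:
    """Create the study skeleton shown in Step 4 of the skill."""
    study = root / study_name
    (study / "1_setup" / "model").mkdir(parents=True, exist_ok=True)
    (study / "2_results").mkdir(parents=True, exist_ok=True)
    # Placeholder files that later steps populate
    (study / "1_setup" / "optimization_config.json").touch()
    (study / "1_setup" / "workflow_config.json").touch()
    (study / "README.md").write_text(f"# {study_name}\n")
    return study
```

`mkdir(parents=True, exist_ok=True)` makes the helper idempotent, so re-running the skill against an existing study folder is safe.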

Conversation Flow Example

USER: I want to optimize this bracket for weight. The model is at
      examples/Models/Bracket/bracket.prt

CLAUDE: I'll help you set up an optimization study for the bracket. Let me
        first analyze your model to understand its structure.

        [Reads model, extracts expressions]

        I found these parametric expressions in your model:
        - wall_thickness (current: 2mm)
        - fillet_radius (current: 3mm)
        - hole_diameter (current: 10mm)
        - rib_count (current: 4)

        Which of these should be design variables for optimization?

USER: All of them. Wall thickness 1-4mm, fillet 2-8mm, hole 5-20mm,
      ribs 2-6.

CLAUDE: Got it. Now for the engineering requirements:

        1. What's the loading condition? (I see a static analysis in your sim)
        2. Any stress limits? (Material yield strength?)
        3. Any displacement limits?
        4. Is there a target weight you're trying to achieve?

USER: It's holding a 50kg load. Aluminum 6061, so keep stress under 200 MPa.
      Max deflection 0.5mm. Want it as light as possible.

CLAUDE: Perfect. Here's your optimization setup:

        **Study: bracket_weight_optimization**

        **Design Variables:**
        | Parameter      | Min | Max | Type    |
        |---------------|-----|-----|---------|
        | wall_thickness | 1   | 4   | float   |
        | fillet_radius  | 2   | 8   | float   |
        | hole_diameter  | 5   | 20  | float   |
        | rib_count      | 2   | 6   | integer |

        **Objective:** Minimize mass

        **Constraints:**
        - Max von Mises stress < 200 MPa
        - Max displacement < 0.5 mm

        **Settings:**
        - Protocol: Single-objective constrained
        - Trials: 50 (recommended for 4 variables)
        - Sampler: TPE (good for constrained problems)

        Should I create this study? I can also enable neural network
        acceleration if you want more than 50 trials.

USER: Yes, create it. And enable the NN thing, let's do 200 trials.

CLAUDE: Creating study with NN surrogate enabled...

        [Creates folder structure]
        [Generates optimization_config.json]
        [Validates configuration]
        [Copies model files]

        Study created at: studies/bracket_weight_optimization/

        Configuration validated successfully:
        ✓ All design variables have valid bounds
        ✓ Extraction methods configured
        ✓ Surrogate settings enabled

        Ready to start optimization. The workflow will:
        1. Run 30 initial FEA trials
        2. Generate 100 space-filling training points
        3. Train NN surrogate
        4. Run 170 NN-accelerated trials with FEA validation

        Start now? (You can monitor at localhost:3000)

USER: Yes, start it.

CLAUDE: Starting optimization...

        [Runs optimization in background]

        Optimization running (ID: bracket_weight_optimization)
        - Monitor: http://localhost:3000
        - Optuna Dashboard: http://localhost:8081

        I'll notify you when Stage 1 (initial FEA) completes.
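A config generated from this conversation might look like the following sketch. The values come from the dialogue above; the field names are illustrative — the actual schema is whatever the config validator enforces:

```json
{
  "study_name": "bracket_weight_optimization",
  "design_variables": [
    {"parameter": "wall_thickness", "bounds": [1, 4], "type": "float"},
    {"parameter": "fillet_radius", "bounds": [2, 8], "type": "float"},
    {"parameter": "hole_diameter", "bounds": [5, 20], "type": "float"},
    {"parameter": "rib_count", "bounds": [2, 6], "type": "integer"}
  ],
  "objectives": [{"name": "mass", "direction": "minimize"}],
  "constraints": [
    {"response": "von_mises_stress", "max": 200.0, "units": "MPa"},
    {"response": "displacement", "max": 0.5, "units": "mm"}
  ],
  "optimization": {
    "n_trials": 200,
    "sampler": "TPE",
    "surrogate": {"enabled": true}
  }
}
```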

Skill Library

Core Skills (Must Have)

| Skill             | Trigger                                | Purpose                          |
|-------------------|----------------------------------------|----------------------------------|
| /create-study     | "optimize", "new study"                | Create optimization from scratch |
| /analyze-model    | "look at model", "what can I optimize" | Extract model info               |
| /run-optimization | "start", "run"                         | Execute optimization             |
| /check-status     | "how's it going", "progress"           | Report on running studies        |
| /generate-report  | "report", "results"                    | Create visualizations            |

Advanced Skills (For Power Users)

| Skill                | Trigger                       | Purpose                |
|----------------------|-------------------------------|------------------------|
| /configure-surrogate | "neural network", "surrogate" | Setup NN acceleration  |
| /add-constraint      | "add constraint"              | Modify existing study  |
| /compare-studies     | "compare"                     | Cross-study analysis   |
| /export-results      | "export", "pareto"            | Export optimal designs |
| /troubleshoot        | "error", "failed"             | Debug issues           |

Custom Skills (Project-Specific)

Users can create their own skills for recurring tasks:

.claude/skills/
├── my-bracket-setup.md      # Pre-configured bracket optimization
├── thermal-analysis.md      # Custom thermal workflow
└── batch-runner.md          # Run multiple studies

Implementation Approach

Phase 1: Foundation (Current)

  • Basic skill system (create-study.md exists)
  • Config validation
  • Manual protocol following
  • Formalize skill structure
  • Create skill template

Phase 2: Skill Library

  • Implement all core skills
  • Add protocol references in skills
  • Create skill chaining (one skill calls another)
  • Add user confirmation checkpoints

Phase 3: Validators

  • Config validator (comprehensive)
  • Model validator (check NX setup)
  • Results validator (check outputs)
  • State validator (check study health)
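The state validator in the list above could be sketched as a check of the study folder against the expected layout; `validate_study_state` and `REQUIRED_PATHS` are illustrative names, not the shipped API:

```python
from pathlib import Path

# Expected layout, mirroring the study structure created at setup
# (paths illustrative)
REQUIRED_PATHS = (
    "1_setup/optimization_config.json",
    "2_results",
    "README.md",
)

def validate_study_state(study_dir: Path) -> list:
    """Return a list of problems; an empty list means the study is healthy."""
    if not study_dir.is_dir():
        return [f"Study folder not found: {study_dir}"]
    errors = []
    for rel in REQUIRED_PATHS:
        if not (study_dir / rel).exists():
            errors.append(f"Missing {rel}")
    return errors
```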

Phase 4: Knowledge Integration

  • Physics knowledge base queries
  • Similar study lookup
  • Transfer learning suggestions
  • Best practices recommendations

Skill Template

Every skill should follow this structure:

# Skill Name

## Purpose
What this skill accomplishes.

## Triggers
Keywords/phrases that activate this skill.

## Prerequisites
What must be true before running.

## Information Gathering
Questions to ask user (with defaults).

## Protocol Reference
Link to detailed protocol in docs/06_PROTOCOLS_DETAILED/

## Execution Steps
1. Step one (with validation)
2. Step two (with validation)
3. ...

## Validation Checkpoints
- After step X, verify Y
- Before step Z, check W

## Error Handling
- Error type 1: Recovery action
- Error type 2: Recovery action

## User Confirmations
Points where user approval is needed.

## Outputs
What gets created/modified.

## Next Steps
What to suggest after completion.

Key Principles

1. Conversation > Configuration

Don't ask the user to edit JSON. Have a conversation, then generate the config.

2. Validation at Every Step

Never proceed with invalid state. Check before, during, and after.

3. Sensible Defaults

Provide good defaults so the user only specifies what they care about.

4. Explain Decisions

When making choices (sampler, n_trials, etc.), explain why.

5. Graceful Degradation

If something fails, recover gracefully with clear explanation.

6. Progressive Disclosure

Start simple, offer complexity only when needed.


Integration with Dashboard

The dashboard complements LLM interaction:

| LLM Handles     | Dashboard Handles        |
|-----------------|--------------------------|
| Study setup     | Live monitoring          |
| Configuration   | Progress visualization   |
| Troubleshooting | Results exploration      |
| Reports         | Pareto front interaction |
| Custom features | Historical comparison    |

The LLM creates, the dashboard observes.


Next Steps

  1. Formalize Skill Structure: Create template that all skills follow
  2. Implement Core Skills: Start with create-study, analyze-model
  3. Add Validators: Python modules for each validation type
  4. Test Conversation Flows: Verify natural interaction patterns
  5. Build Skill Chaining: Allow skills to call other skills

Document Version: 1.0
Created: 2025-11-25
Philosophy: Talk to the LLM, not the dashboard