
# Atomizer Architecture Overview

**Version:** 1.0 | **Last Updated:** 2025-12-11
**Purpose:** Comprehensive guide to understanding how Atomizer works, from session management to learning systems.


## Table of Contents

  1. What is Atomizer?
  2. The Big Picture
  3. Session Lifecycle
  4. Protocol Operating System (POS)
  5. Learning Atomizer Core (LAC)
  6. Task Classification & Routing
  7. Execution Framework (AVERVS)
  8. Optimization Flow
  9. Knowledge Accumulation
  10. File Structure Reference

## 1. What is Atomizer?

Atomizer is an LLM-first FEA (finite element analysis) optimization framework. Instead of clicking through complex GUI menus, engineers describe their optimization goals in natural language, and an AI assistant (Claude) configures, runs, and analyzes the optimization.

```mermaid
graph LR
    subgraph Traditional["Traditional Workflow"]
        A1[Engineer] -->|clicks| B1[NX GUI]
        B1 -->|manual setup| C1[Optuna Config]
        C1 -->|run| D1[Results]
    end

    subgraph Atomizer["Atomizer Workflow"]
        A2[Engineer] -->|"'Minimize mass while keeping stress < 250 MPa'"| B2[Atomizer Claude]
        B2 -->|auto-configures| C2[NX + Optuna]
        C2 -->|run| D2[Results + Insights]
        D2 -->|learns| B2
    end

    style Atomizer fill:#e1f5fe
```

**Core Philosophy:** "Talk, don't click."


## 2. The Big Picture

```mermaid
graph TB
    subgraph User["👤 Engineer"]
        U1[Natural Language Request]
    end

    subgraph Claude["🤖 Atomizer Claude"]
        C1[Session Manager]
        C2[Protocol Router]
        C3[Task Executor]
        C4[Learning System]
    end

    subgraph POS["📚 Protocol Operating System"]
        P1[Bootstrap Layer]
        P2[Operations Layer]
        P3[System Layer]
        P4[Extensions Layer]
    end

    subgraph LAC["🧠 Learning Atomizer Core"]
        L1[Optimization Memory]
        L2[Session Insights]
        L3[Skill Evolution]
    end

    subgraph Engine["⚙️ Optimization Engine"]
        E1[NX Open API]
        E2[Nastran Solver]
        E3[Optuna Optimizer]
        E4[Extractors]
    end

    U1 --> C1
    C1 --> C2
    C2 --> P1
    P1 --> P2
    P2 --> P3
    C2 --> C3
    C3 --> Engine
    C3 --> C4
    C4 --> LAC
    LAC -.->|prior knowledge| C2

    style Claude fill:#fff3e0
    style LAC fill:#e8f5e9
    style POS fill:#e3f2fd
```

## 3. Session Lifecycle

Every Claude session follows a structured lifecycle:

```mermaid
stateDiagram-v2
    [*] --> Startup: New Session

    state Startup {
        [*] --> EnvCheck: Check conda environment
        EnvCheck --> LoadContext: Load CLAUDE.md + Bootstrap
        LoadContext --> QueryLAC: Query prior knowledge
        QueryLAC --> DetectStudy: Check for active study
        DetectStudy --> [*]
    }

    Startup --> Active: Ready

    state Active {
        [*] --> Classify: Receive request
        Classify --> Route: Determine task type
        Route --> Execute: Load protocols & execute
        Execute --> Record: Record learnings
        Record --> [*]: Ready for next
    }

    Active --> Closing: Session ending

    state Closing {
        [*] --> SaveWork: Verify work saved
        SaveWork --> RecordLAC: Record insights to LAC
        RecordLAC --> RecordOutcome: Record optimization outcomes
        RecordOutcome --> Summarize: Summarize for user
        Summarize --> [*]
    }

    Closing --> [*]: Session complete
```

### Startup Checklist

| Step | Action | Purpose |
|------|--------|---------|
| 1 | Environment check | Ensure the `atomizer` conda env is active |
| 2 | Load context | Read CLAUDE.md and Bootstrap |
| 3 | Query LAC | Get relevant prior learnings |
| 4 | Detect study | Check for an active study context |

### Closing Checklist

| Step | Action | Purpose |
|------|--------|---------|
| 1 | Save work | Commit files, validate configs |
| 2 | Record learnings | Store failures, successes, workarounds |
| 3 | Record outcomes | Store optimization results |
| 4 | Summarize | Provide next steps to the user |

## 4. Protocol Operating System (POS)

The POS is Atomizer's documentation architecture: a layered system that provides the right context at the right time.

```mermaid
graph TB
    subgraph Layer1["Layer 1: Bootstrap (Always Loaded)"]
        B1[00_BOOTSTRAP.md<br/>Task classification & routing]
        B2[01_CHEATSHEET.md<br/>Quick reference]
        B3[02_CONTEXT_LOADER.md<br/>What to load when]
    end

    subgraph Layer2["Layer 2: Operations (Per Task)"]
        O1[OP_01 Create Study]
        O2[OP_02 Run Optimization]
        O3[OP_03 Monitor Progress]
        O4[OP_04 Analyze Results]
        O5[OP_05 Export Data]
        O6[OP_06 Troubleshoot]
    end

    subgraph Layer3["Layer 3: System (Technical Specs)"]
        S1[SYS_10 IMSO<br/>Adaptive sampling]
        S2[SYS_11 Multi-Objective<br/>Pareto optimization]
        S3[SYS_12 Extractors<br/>Physics extraction]
        S4[SYS_13 Dashboard<br/>Real-time monitoring]
        S5[SYS_14 Neural<br/>Surrogate acceleration]
        S6[SYS_15 Method Selector<br/>Algorithm selection]
    end

    subgraph Layer4["Layer 4: Extensions (Power Users)"]
        E1[EXT_01 Create Extractor]
        E2[EXT_02 Create Hook]
        E3[EXT_03 Create Protocol]
        E4[EXT_04 Create Skill]
    end

    Layer1 --> Layer2
    Layer2 --> Layer3
    Layer3 --> Layer4

    style Layer1 fill:#e3f2fd
    style Layer2 fill:#e8f5e9
    style Layer3 fill:#fff3e0
    style Layer4 fill:#fce4ec
```

### Loading Rules

```mermaid
flowchart TD
    A[User Request] --> B{Classify Task}

    B -->|Create| C1[Load: study-creation-core.md]
    B -->|Run| C2[Load: OP_02_RUN_OPTIMIZATION.md]
    B -->|Monitor| C3[Load: OP_03_MONITOR_PROGRESS.md]
    B -->|Analyze| C4[Load: OP_04_ANALYZE_RESULTS.md]
    B -->|Debug| C5[Load: OP_06_TROUBLESHOOT.md]
    B -->|Extend| C6{Check Privilege}

    C1 --> D1{Signals?}
    D1 -->|Mirror/Zernike| E1[+ zernike-optimization.md]
    D1 -->|Neural/50+ trials| E2[+ SYS_14_NEURAL.md]
    D1 -->|Multi-objective| E3[+ SYS_11_MULTI.md]

    C6 -->|power_user| F1[Load: EXT_01 or EXT_02]
    C6 -->|admin| F2[Load: Any EXT_*]
    C6 -->|user| F3[Deny - explain]
```

## 5. Learning Atomizer Core (LAC)

LAC is Atomizer's persistent memory: it learns from every session.

```mermaid
graph TB
    subgraph LAC["🧠 Learning Atomizer Core"]
        subgraph OM["Optimization Memory"]
            OM1[bracket.jsonl]
            OM2[beam.jsonl]
            OM3[mirror.jsonl]
        end

        subgraph SI["Session Insights"]
            SI1[failure.jsonl<br/>What went wrong & why]
            SI2[success_pattern.jsonl<br/>What worked well]
            SI3[workaround.jsonl<br/>Known fixes]
            SI4[user_preference.jsonl<br/>User preferences]
        end

        subgraph SE["Skill Evolution"]
            SE1[suggested_updates.jsonl<br/>Protocol improvements]
        end
    end

    subgraph Session["Current Session"]
        S1[Query prior knowledge]
        S2[Execute tasks]
        S3[Record learnings]
    end

    S1 -->|read| LAC
    S3 -->|write| LAC
    LAC -.->|informs| S2

    style LAC fill:#e8f5e9
```

### LAC Data Flow

```mermaid
sequenceDiagram
    participant U as User
    participant C as Claude
    participant LAC as LAC
    participant Opt as Optimizer

    Note over C,LAC: Session Start
    C->>LAC: query_similar_optimizations("bracket", ["mass"])
    LAC-->>C: Similar studies: TPE worked 85% of time
    C->>LAC: get_relevant_insights("bracket optimization")
    LAC-->>C: Insight: "20 startup trials improves convergence"

    Note over U,Opt: During Session
    U->>C: "Optimize my bracket for mass"
    C->>C: Apply prior knowledge
    C->>Opt: Configure with TPE, 20 startup trials
    Opt-->>C: Optimization complete

    Note over C,LAC: Discovery
    C->>C: Found: CMA-ES faster for this case
    C->>LAC: record_insight("success_pattern", "CMA-ES faster for simple brackets")

    Note over C,LAC: Session End
    C->>LAC: record_optimization_outcome(study="bracket_v4", converged=true, ...)
```

### What LAC Stores

| Category | Examples | Used For |
|----------|----------|----------|
| Optimization Memory | Method used, convergence, trials | Recommending methods for similar problems |
| Failures | "CMA-ES failed on discrete targets" | Avoiding repeat mistakes |
| Success Patterns | "TPE with 20 startup trials converges faster" | Applying proven techniques |
| Workarounds | "Load `_i.prt` before `UpdateFemodel()`" | Fixing known issues |
| Protocol Updates | "SYS_15 should mention CMA-ES limitation" | Improving documentation |
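The calls in the data-flow diagram can be pictured as a thin JSONL-backed store. This is a minimal sketch, not the real `knowledge_base/lac.py`: the method names mirror the diagram, but the record fields and file layout details are assumptions.

```python
import json
from pathlib import Path

class LACSketch:
    """Minimal sketch of a JSONL-backed LAC store. Method names mirror the
    data-flow diagram; the real knowledge_base/lac.py may differ."""

    def __init__(self, root):
        self.root = Path(root)
        for sub in ("optimization_memory", "session_insights"):
            (self.root / sub).mkdir(parents=True, exist_ok=True)

    def record_insight(self, category, text):
        # One insight per line, e.g. session_insights/success_pattern.jsonl
        path = self.root / "session_insights" / f"{category}.jsonl"
        with path.open("a") as f:
            f.write(json.dumps({"text": text}) + "\n")

    def record_optimization_outcome(self, study, geometry, **outcome):
        # One file per geometry family, e.g. optimization_memory/bracket.jsonl
        path = self.root / "optimization_memory" / f"{geometry}.jsonl"
        with path.open("a") as f:
            f.write(json.dumps({"study": study, **outcome}) + "\n")

    def query_similar_optimizations(self, geometry):
        # Return every recorded outcome for the geometry family (oldest first)
        path = self.root / "optimization_memory" / f"{geometry}.jsonl"
        if not path.exists():
            return []
        return [json.loads(line) for line in path.read_text().splitlines()]
```

Append-only JSONL keeps each record independent, so a crashed session can never corrupt more than its last line.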

## 6. Task Classification & Routing

```mermaid
flowchart TD
    A[User Request] --> B{Contains keywords?}

    B -->|"new, create, set up, optimize"| C1[CREATE]
    B -->|"run, start, execute, begin"| C2[RUN]
    B -->|"status, progress, check, trials"| C3[MONITOR]
    B -->|"results, best, compare, pareto"| C4[ANALYZE]
    B -->|"error, failed, not working, help"| C5[DEBUG]
    B -->|"what is, how does, explain"| C6[EXPLAIN]
    B -->|"create extractor, add hook"| C7[EXTEND]

    C1 --> D1[OP_01 + study-creation-core]
    C2 --> D2[OP_02]
    C3 --> D3[OP_03]
    C4 --> D4[OP_04]
    C5 --> D5[OP_06]
    C6 --> D6[Relevant SYS_*]
    C7 --> D7{Privilege?}

    D7 -->|user| E1[Explain limitation]
    D7 -->|power_user+| E2[EXT_01 or EXT_02]

    style C1 fill:#c8e6c9
    style C2 fill:#bbdefb
    style C3 fill:#fff9c4
    style C4 fill:#d1c4e9
    style C5 fill:#ffccbc
    style C6 fill:#b2ebf2
    style C7 fill:#f8bbd9
```
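The keyword routing above can be sketched in a few lines. This first-match, substring-based sketch is deliberately naive; the actual classifier in `00_BOOTSTRAP.md` is richer.

```python
# Keyword table taken from the routing flowchart above; first match wins.
# EXTEND is checked first because its trigger phrases contain CREATE
# keywords ("create extractor" would otherwise match CREATE).
ROUTES = [
    ("EXTEND",  ["create extractor", "add hook"]),
    ("CREATE",  ["new", "create", "set up", "optimize"]),
    ("RUN",     ["run", "start", "execute", "begin"]),
    ("MONITOR", ["status", "progress", "check", "trials"]),
    ("ANALYZE", ["results", "best", "compare", "pareto"]),
    ("DEBUG",   ["error", "failed", "not working", "help"]),
    ("EXPLAIN", ["what is", "how does", "explain"]),
]

def classify(request: str) -> str:
    """Return the task type for a user request; EXPLAIN is the fallback."""
    text = request.lower()
    for task, keywords in ROUTES:
        if any(kw in text for kw in keywords):
            return task
    return "EXPLAIN"
```

Note that substring matching has sharp edges ("optimization" contains "optimize"), which is one reason order matters in the table.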

## 7. Execution Framework (AVERVS)

Every task follows the AVERVS pattern:

```mermaid
graph LR
    A[Announce] --> V1[Validate]
    V1 --> E[Execute]
    E --> R[Report]
    R --> V2[Verify]
    V2 --> S[Suggest]

    style A fill:#e3f2fd
    style V1 fill:#fff3e0
    style E fill:#e8f5e9
    style R fill:#fce4ec
    style V2 fill:#f3e5f5
    style S fill:#e0f2f1
```

### AVERVS in Action

```mermaid
sequenceDiagram
    participant U as User
    participant C as Claude
    participant NX as NX/Solver

    U->>C: "Create a study for my bracket"

    Note over C: A - Announce
    C->>U: "I'm going to analyze your model to discover expressions and setup"

    Note over C: V - Validate
    C->>C: Check: .prt exists? .sim exists? _i.prt present?
    C->>U: "✓ All required files present"

    Note over C: E - Execute
    C->>NX: Run introspection script
    NX-->>C: Expressions, constraints, solutions

    Note over C: R - Report
    C->>U: "Found 12 expressions, 3 are design variable candidates"

    Note over C: V - Verify
    C->>C: Validate generated config
    C->>U: "✓ Config validation passed"

    Note over C: S - Suggest
    C->>U: "Ready to run. Want me to:<br/>1. Start optimization now?<br/>2. Adjust parameters first?"
```

## 8. Optimization Flow

```mermaid
flowchart TB
    subgraph Setup["1. Setup Phase"]
        A1[User describes goal] --> A2[Claude analyzes model]
        A2 --> A3[Query LAC for similar studies]
        A3 --> A4[Generate optimization_config.json]
        A4 --> A5[Create run_optimization.py]
    end

    subgraph Run["2. Optimization Loop"]
        B1[Optuna suggests parameters] --> B2[Update NX expressions]
        B2 --> B3[Update FEM mesh]
        B3 --> B4[Solve with Nastran]
        B4 --> B5[Extract results via Extractors]
        B5 --> B6[Report to Optuna]
        B6 --> B7{More trials?}
        B7 -->|Yes| B1
        B7 -->|No| C1
    end

    subgraph Analyze["3. Analysis Phase"]
        C1[Load study.db] --> C2[Find best trials]
        C2 --> C3[Generate visualizations]
        C3 --> C4[Create STUDY_REPORT.md]
    end

    subgraph Learn["4. Learning Phase"]
        D1[Record outcome to LAC]
        D2[Record insights discovered]
        D3[Suggest protocol updates]
    end

    Setup --> Run
    Run --> Analyze
    Analyze --> Learn

    style Setup fill:#e3f2fd
    style Run fill:#e8f5e9
    style Analyze fill:#fff3e0
    style Learn fill:#f3e5f5
```

### Extractors

Extractors bridge FEA results to optimization objectives:

```mermaid
graph LR
    subgraph FEA["FEA Output"]
        F1[OP2 File]
        F2[BDF File]
        F3[NX Part]
    end

    subgraph Extractors["Extractor Library"]
        E1[E1: Displacement]
        E2[E2: Frequency]
        E3[E3: Stress]
        E4[E4: Mass BDF]
        E5[E5: Mass CAD]
        E8[E8: Zernike WFE]
    end

    subgraph Output["Optimization Values"]
        O1[Objective Value]
        O2[Constraint Value]
    end

    F1 --> E1
    F1 --> E2
    F1 --> E3
    F2 --> E4
    F3 --> E5
    F1 --> E8

    E1 --> O1
    E2 --> O2
    E3 --> O2
    E4 --> O1
    E5 --> O1
    E8 --> O1
```
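Every extractor shares the same small contract: read one solver artifact, return one scalar. The base class and `MaxStressExtractor` below are a hypothetical sketch of that contract, not the actual classes in `optimization_engine/extractors/`.

```python
from abc import ABC, abstractmethod

class Extractor(ABC):
    """Hypothetical sketch of the shared extractor contract: consume one
    solver artifact and return one scalar for the optimizer."""

    #: which artifact the extractor reads ("op2", "bdf", or "prt")
    source = "op2"

    @abstractmethod
    def extract(self, artifact) -> float:
        """Return the scalar objective or constraint value."""

class MaxStressExtractor(Extractor):
    """E3-style extractor: reduce a stress field to its maximum value."""

    source = "op2"

    def extract(self, artifact) -> float:
        # `artifact` stands in for parsed OP2 stress results
        return max(artifact["von_mises"])
```

Because every extractor reduces to a single float, the optimizer never needs to know whether a value came from a displacement field, a mass property, or a Zernike fit.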

## 9. Knowledge Accumulation

Atomizer gets smarter over time:

```mermaid
graph TB
    subgraph Sessions["Claude Sessions Over Time"]
        S1[Session 1<br/>Bracket optimization]
        S2[Session 2<br/>Beam optimization]
        S3[Session 3<br/>Mirror optimization]
        S4[Session N<br/>New optimization]
    end

    subgraph LAC["LAC Knowledge Base"]
        K1[Optimization<br/>Patterns]
        K2[Failure<br/>Solutions]
        K3[Method<br/>Recommendations]
    end

    S1 -->|record| LAC
    S2 -->|record| LAC
    S3 -->|record| LAC
    LAC -->|inform| S4

    subgraph Improvement["Continuous Improvement"]
        I1[Better method selection]
        I2[Faster convergence]
        I3[Fewer failures]
    end

    LAC --> Improvement

    style LAC fill:#e8f5e9
    style Improvement fill:#fff3e0
```

### Example: Method Selection Improvement

```mermaid
graph LR
    subgraph Before["Without LAC"]
        B1[New bracket optimization]
        B2[Default: TPE]
        B3[Maybe suboptimal]
    end

    subgraph After["With LAC"]
        A1[New bracket optimization]
        A2[Query LAC:<br/>'bracket mass optimization']
        A3[LAC returns:<br/>'CMA-ES 30% faster for<br/>simple brackets']
        A4[Use CMA-ES]
        A5[Faster convergence]
    end

    B1 --> B2 --> B3
    A1 --> A2 --> A3 --> A4 --> A5

    style After fill:#e8f5e9
```

## 10. File Structure Reference

```text
Atomizer/
├── CLAUDE.md                        # 🎯 Main instructions (read first)
│
├── .claude/
│   ├── skills/
│   │   ├── 00_BOOTSTRAP.md          # Task classification
│   │   ├── 01_CHEATSHEET.md         # Quick reference
│   │   ├── 02_CONTEXT_LOADER.md     # What to load when
│   │   ├── core/
│   │   │   └── study-creation-core.md
│   │   └── modules/
│   │       ├── learning-atomizer-core.md  # LAC documentation
│   │       ├── zernike-optimization.md
│   │       └── neural-acceleration.md
│   └── commands/                     # Slash commands
│
├── knowledge_base/
│   ├── lac.py                        # LAC implementation
│   └── lac/                          # LAC data storage
│       ├── optimization_memory/      # What worked for what
│       ├── session_insights/         # Learnings
│       └── skill_evolution/          # Protocol updates
│
├── docs/protocols/
│   ├── operations/                   # OP_01 - OP_06
│   ├── system/                       # SYS_10 - SYS_15
│   └── extensions/                   # EXT_01 - EXT_04
│
├── optimization_engine/
│   ├── extractors/                   # Physics extraction
│   ├── hooks/                        # NX automation
│   └── gnn/                          # Neural surrogates
│
└── studies/                          # User studies
    └── {study_name}/
        ├── 1_setup/
        │   ├── model/                # NX files
        │   └── optimization_config.json
        ├── 2_results/
        │   └── study.db              # Optuna database
        └── run_optimization.py
```

### Quick Reference: The Complete Flow

```mermaid
graph TB
    subgraph Start["🚀 Session Start"]
        A1[Load CLAUDE.md]
        A2[Load Bootstrap]
        A3[Query LAC]
    end

    subgraph Work["⚙️ During Session"]
        B1[Classify request]
        B2[Load protocols]
        B3[Execute AVERVS]
        B4[Record insights]
    end

    subgraph End["🏁 Session End"]
        C1[Save work]
        C2[Record to LAC]
        C3[Summarize]
    end

    Start --> Work --> End

    subgraph Legend["Legend"]
        L1[📚 POS: What to do]
        L2[🧠 LAC: What we learned]
        L3[⚡ AVERVS: How to do it]
    end
```

## Summary

| Component | Purpose | Key Files |
|-----------|---------|-----------|
| CLAUDE.md | Main instructions | `CLAUDE.md` |
| Bootstrap | Task routing | `00_BOOTSTRAP.md` |
| POS | Protocol system | `docs/protocols/` |
| LAC | Learning system | `knowledge_base/lac.py` |
| AVERVS | Execution pattern | Embedded in protocols |
| Extractors | Physics extraction | `optimization_engine/extractors/` |

**The key insight:** Atomizer is not just an optimization tool; it is a *learning* optimization tool that gets better with every session.


*Atomizer: Where engineers talk, AI optimizes, and every session makes the next one better.*