docs: Major documentation overhaul - restructure folders, update tagline, add Getting Started guide
- Restructure docs/ folder (remove numeric prefixes):
  - 04_USER_GUIDES -> guides/
  - 05_API_REFERENCE -> api/
  - 06_PHYSICS -> physics/
  - 07_DEVELOPMENT -> development/
  - 08_ARCHIVE -> archive/
  - 09_DIAGRAMS -> diagrams/
- Replace tagline 'Talk, don't click' with 'LLM-driven optimization framework' in 9 files
- Create comprehensive docs/GETTING_STARTED.md:
  - Prerequisites and quick setup
  - Project structure overview
  - First study tutorial (Claude or manual)
  - Dashboard usage guide
  - Neural acceleration introduction
- Rewrite docs/00_INDEX.md with correct paths and modern structure
- Archive obsolete files:
  - 01_PROTOCOLS.md -> archive/historical/01_PROTOCOLS_legacy.md
  - 03_GETTING_STARTED.md -> archive/historical/
  - ATOMIZER_PODCAST_BRIEFING.md -> archive/marketing/
- Update timestamps to 2026-01-20 across all key files
- Update .gitignore to exclude docs/generated/
- Version bump: ATOMIZER_CONTEXT v1.8 -> v2.0
docs/development/ATOMIZER_ARCHITECTURE_OVERVIEW.md (new file, 649 lines)
# Atomizer Architecture Overview

**Version**: 1.0
**Last Updated**: 2025-12-11
**Purpose**: Comprehensive guide to understanding how Atomizer works - from session management to learning systems.

---

## Table of Contents

1. [What is Atomizer?](#1-what-is-atomizer)
2. [The Big Picture](#2-the-big-picture)
3. [Session Lifecycle](#3-session-lifecycle)
4. [Protocol Operating System (POS)](#4-protocol-operating-system-pos)
5. [Learning Atomizer Core (LAC)](#5-learning-atomizer-core-lac)
6. [Task Classification & Routing](#6-task-classification--routing)
7. [Execution Framework (AVERVS)](#7-execution-framework-avervs)
8. [Optimization Flow](#8-optimization-flow)
9. [Knowledge Accumulation](#9-knowledge-accumulation)
10. [File Structure Reference](#10-file-structure-reference)

---
## 1. What is Atomizer?

Atomizer is an **LLM-first FEA optimization framework**. Instead of clicking through complex GUI menus, engineers describe their optimization goals in natural language, and an AI assistant (Claude) configures, runs, and analyzes the optimization.

```mermaid
graph LR
    subgraph Traditional["Traditional Workflow"]
        A1[Engineer] -->|clicks| B1[NX GUI]
        B1 -->|manual setup| C1[Optuna Config]
        C1 -->|run| D1[Results]
    end

    subgraph Atomizer["Atomizer Workflow"]
        A2[Engineer] -->|"'Minimize mass while keeping stress < 250 MPa'"| B2[Atomizer Claude]
        B2 -->|auto-configures| C2[NX + Optuna]
        C2 -->|run| D2[Results + Insights]
        D2 -->|learns| B2
    end

    style Atomizer fill:#e1f5fe
```

**Core Philosophy**: LLM-driven FEA optimization.

---
## 2. The Big Picture

```mermaid
graph TB
    subgraph User["👤 Engineer"]
        U1[Natural Language Request]
    end

    subgraph Claude["🤖 Atomizer Claude"]
        C1[Session Manager]
        C2[Protocol Router]
        C3[Task Executor]
        C4[Learning System]
    end

    subgraph POS["📚 Protocol Operating System"]
        P1[Bootstrap Layer]
        P2[Operations Layer]
        P3[System Layer]
        P4[Extensions Layer]
    end

    subgraph LAC["🧠 Learning Atomizer Core"]
        L1[Optimization Memory]
        L2[Session Insights]
        L3[Skill Evolution]
    end

    subgraph Engine["⚙️ Optimization Engine"]
        E1[NX Open API]
        E2[Nastran Solver]
        E3[Optuna Optimizer]
        E4[Extractors]
    end

    U1 --> C1
    C1 --> C2
    C2 --> P1
    P1 --> P2
    P2 --> P3
    C2 --> C3
    C3 --> Engine
    C3 --> C4
    C4 --> LAC
    LAC -.->|prior knowledge| C2

    style Claude fill:#fff3e0
    style LAC fill:#e8f5e9
    style POS fill:#e3f2fd
```

---
## 3. Session Lifecycle

Every Claude session follows a structured lifecycle:

```mermaid
stateDiagram-v2
    [*] --> Startup: New Session

    state Startup {
        [*] --> EnvCheck: Check conda environment
        EnvCheck --> LoadContext: Load CLAUDE.md + Bootstrap
        LoadContext --> QueryLAC: Query prior knowledge
        QueryLAC --> DetectStudy: Check for active study
        DetectStudy --> [*]
    }

    Startup --> Active: Ready

    state Active {
        [*] --> Classify: Receive request
        Classify --> Route: Determine task type
        Route --> Execute: Load protocols & execute
        Execute --> Record: Record learnings
        Record --> [*]: Ready for next
    }

    Active --> Closing: Session ending

    state Closing {
        [*] --> SaveWork: Verify work saved
        SaveWork --> RecordLAC: Record insights to LAC
        RecordLAC --> RecordOutcome: Record optimization outcomes
        RecordOutcome --> Summarize: Summarize for user
        Summarize --> [*]
    }

    Closing --> [*]: Session complete
```

### Startup Checklist

| Step | Action | Purpose |
|------|--------|---------|
| 1 | Environment check | Ensure `atomizer` conda env active |
| 2 | Load context | Read CLAUDE.md, Bootstrap |
| 3 | Query LAC | Get relevant prior learnings |
| 4 | Detect study | Check for active study context |

### Closing Checklist

| Step | Action | Purpose |
|------|--------|---------|
| 1 | Save work | Commit files, validate configs |
| 2 | Record learnings | Store failures, successes, workarounds |
| 3 | Record outcomes | Store optimization results |
| 4 | Summarize | Provide next steps to user |

---
## 4. Protocol Operating System (POS)

The POS is Atomizer's documentation architecture - a layered system that provides the right context at the right time.

```mermaid
graph TB
    subgraph Layer1["Layer 1: Bootstrap (Always Loaded)"]
        B1[00_BOOTSTRAP.md<br/>Task classification & routing]
        B2[01_CHEATSHEET.md<br/>Quick reference]
        B3[02_CONTEXT_LOADER.md<br/>What to load when]
    end

    subgraph Layer2["Layer 2: Operations (Per Task)"]
        O1[OP_01 Create Study]
        O2[OP_02 Run Optimization]
        O3[OP_03 Monitor Progress]
        O4[OP_04 Analyze Results]
        O5[OP_05 Export Data]
        O6[OP_06 Troubleshoot]
    end

    subgraph Layer3["Layer 3: System (Technical Specs)"]
        S1[SYS_10 IMSO<br/>Adaptive sampling]
        S2[SYS_11 Multi-Objective<br/>Pareto optimization]
        S3[SYS_12 Extractors<br/>Physics extraction]
        S4[SYS_13 Dashboard<br/>Real-time monitoring]
        S5[SYS_14 Neural<br/>Surrogate acceleration]
        S6[SYS_15 Method Selector<br/>Algorithm selection]
    end

    subgraph Layer4["Layer 4: Extensions (Power Users)"]
        E1[EXT_01 Create Extractor]
        E2[EXT_02 Create Hook]
        E3[EXT_03 Create Protocol]
        E4[EXT_04 Create Skill]
    end

    Layer1 --> Layer2
    Layer2 --> Layer3
    Layer3 --> Layer4

    style Layer1 fill:#e3f2fd
    style Layer2 fill:#e8f5e9
    style Layer3 fill:#fff3e0
    style Layer4 fill:#fce4ec
```

### Loading Rules

```mermaid
flowchart TD
    A[User Request] --> B{Classify Task}

    B -->|Create| C1[Load: study-creation-core.md]
    B -->|Run| C2[Load: OP_02_RUN_OPTIMIZATION.md]
    B -->|Monitor| C3[Load: OP_03_MONITOR_PROGRESS.md]
    B -->|Analyze| C4[Load: OP_04_ANALYZE_RESULTS.md]
    B -->|Debug| C5[Load: OP_06_TROUBLESHOOT.md]
    B -->|Extend| C6{Check Privilege}

    C1 --> D1{Signals?}
    D1 -->|Mirror/Zernike| E1[+ zernike-optimization.md]
    D1 -->|Neural/50+ trials| E2[+ SYS_14_NEURAL.md]
    D1 -->|Multi-objective| E3[+ SYS_11_MULTI.md]

    C6 -->|power_user| F1[Load: EXT_01 or EXT_02]
    C6 -->|admin| F2[Load: Any EXT_*]
    C6 -->|user| F3[Deny - explain]
```

---
## 5. Learning Atomizer Core (LAC)

LAC is Atomizer's persistent memory - it learns from every session.

```mermaid
graph TB
    subgraph LAC["🧠 Learning Atomizer Core"]
        subgraph OM["Optimization Memory"]
            OM1[bracket.jsonl]
            OM2[beam.jsonl]
            OM3[mirror.jsonl]
        end

        subgraph SI["Session Insights"]
            SI1[failure.jsonl<br/>What went wrong & why]
            SI2[success_pattern.jsonl<br/>What worked well]
            SI3[workaround.jsonl<br/>Known fixes]
            SI4[user_preference.jsonl<br/>User preferences]
        end

        subgraph SE["Skill Evolution"]
            SE1[suggested_updates.jsonl<br/>Protocol improvements]
        end
    end

    subgraph Session["Current Session"]
        S1[Query prior knowledge]
        S2[Execute tasks]
        S3[Record learnings]
    end

    S1 -->|read| LAC
    S3 -->|write| LAC
    LAC -.->|informs| S2

    style LAC fill:#e8f5e9
```

### LAC Data Flow

```mermaid
sequenceDiagram
    participant U as User
    participant C as Claude
    participant LAC as LAC
    participant Opt as Optimizer

    Note over C,LAC: Session Start
    C->>LAC: query_similar_optimizations("bracket", ["mass"])
    LAC-->>C: Similar studies: TPE worked 85% of time
    C->>LAC: get_relevant_insights("bracket optimization")
    LAC-->>C: Insight: "20 startup trials improves convergence"

    Note over U,Opt: During Session
    U->>C: "Optimize my bracket for mass"
    C->>C: Apply prior knowledge
    C->>Opt: Configure with TPE, 20 startup trials
    Opt-->>C: Optimization complete

    Note over C,LAC: Discovery
    C->>C: Found: CMA-ES faster for this case
    C->>LAC: record_insight("success_pattern", "CMA-ES faster for simple brackets")

    Note over C,LAC: Session End
    C->>LAC: record_optimization_outcome(study="bracket_v4", converged=true, ...)
```

### What LAC Stores

| Category | Examples | Used For |
|----------|----------|----------|
| **Optimization Memory** | Method used, convergence, trials | Recommending methods for similar problems |
| **Failures** | "CMA-ES failed on discrete targets" | Avoiding repeat mistakes |
| **Success Patterns** | "TPE with 20 startup trials converges faster" | Applying proven techniques |
| **Workarounds** | "Load _i.prt before UpdateFemodel()" | Fixing known issues |
| **Protocol Updates** | "SYS_15 should mention CMA-ES limitation" | Improving documentation |
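The per-category `.jsonl` files above lend themselves to an append-only record/query pattern. A minimal sketch of that storage idea; the class and field names here are illustrative, not the actual `lac.py` API:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

class JsonlMemory:
    """Append-only JSONL store, one file per category (e.g. bracket.jsonl)."""

    def __init__(self, root: Path):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def record(self, category: str, entry: dict) -> None:
        # One JSON object per line; appending never rewrites history.
        with open(self.root / f"{category}.jsonl", "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    def query(self, category: str, **filters) -> list[dict]:
        path = self.root / f"{category}.jsonl"
        if not path.exists():
            return []
        entries = [json.loads(line) for line in path.read_text(encoding="utf-8").splitlines()]
        return [e for e in entries if all(e.get(k) == v for k, v in filters.items())]

with TemporaryDirectory() as tmp:
    lac = JsonlMemory(Path(tmp) / "optimization_memory")
    lac.record("bracket", {"method": "TPE", "converged": True, "trials": 40})
    lac.record("bracket", {"method": "CMA-ES", "converged": True, "trials": 25})
    tpe_runs = lac.query("bracket", method="TPE")
```

Append-only files keep every session's record intact, so later sessions can re-aggregate history with new heuristics.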
---
## 6. Task Classification & Routing

```mermaid
flowchart TD
    A[User Request] --> B{Contains keywords?}

    B -->|"new, create, set up, optimize"| C1[CREATE]
    B -->|"run, start, execute, begin"| C2[RUN]
    B -->|"status, progress, check, trials"| C3[MONITOR]
    B -->|"results, best, compare, pareto"| C4[ANALYZE]
    B -->|"error, failed, not working, help"| C5[DEBUG]
    B -->|"what is, how does, explain"| C6[EXPLAIN]
    B -->|"create extractor, add hook"| C7[EXTEND]

    C1 --> D1[OP_01 + study-creation-core]
    C2 --> D2[OP_02]
    C3 --> D3[OP_03]
    C4 --> D4[OP_04]
    C5 --> D5[OP_06]
    C6 --> D6[Relevant SYS_*]
    C7 --> D7{Privilege?}

    D7 -->|user| E1[Explain limitation]
    D7 -->|power_user+| E2[EXT_01 or EXT_02]

    style C1 fill:#c8e6c9
    style C2 fill:#bbdefb
    style C3 fill:#fff9c4
    style C4 fill:#d1c4e9
    style C5 fill:#ffccbc
    style C6 fill:#b2ebf2
    style C7 fill:#f8bbd9
```
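The keyword routing above can be sketched as a first-match lookup. The keyword groups and protocol names follow the diagram; the real bootstrap logic is of course richer than substring matching:

```python
# More specific routes first, so "create extractor" wins over plain "create".
ROUTES = [
    (("create extractor", "add hook"), "EXTEND", "EXT_*"),
    (("new", "create", "set up", "optimize"), "CREATE", "OP_01"),
    (("run", "start", "execute", "begin"), "RUN", "OP_02"),
    (("status", "progress", "check", "trials"), "MONITOR", "OP_03"),
    (("results", "best", "compare", "pareto"), "ANALYZE", "OP_04"),
    (("error", "failed", "not working", "help"), "DEBUG", "OP_06"),
    (("what is", "how does", "explain"), "EXPLAIN", "SYS_*"),
]

def classify(request: str) -> tuple[str, str]:
    """Return (task_type, protocol) for the first keyword group that matches."""
    text = request.lower()
    for keywords, task, protocol in ROUTES:
        if any(k in text for k in keywords):
            return task, protocol
    # Safe default: explain rather than act on an ambiguous request.
    return "EXPLAIN", "SYS_*"
```

For example, `classify("Please optimize my bracket for mass")` routes to CREATE via the "optimize" keyword.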
---
## 7. Execution Framework (AVERVS)

Every task follows the AVERVS pattern:

```mermaid
graph LR
    A[Announce] --> V1[Validate]
    V1 --> E[Execute]
    E --> R[Report]
    R --> V2[Verify]
    V2 --> S[Suggest]

    style A fill:#e3f2fd
    style V1 fill:#fff3e0
    style E fill:#e8f5e9
    style R fill:#fce4ec
    style V2 fill:#f3e5f5
    style S fill:#e0f2f1
```

### AVERVS in Action

```mermaid
sequenceDiagram
    participant U as User
    participant C as Claude
    participant NX as NX/Solver

    U->>C: "Create a study for my bracket"

    Note over C: A - Announce
    C->>U: "I'm going to analyze your model to discover expressions and setup"

    Note over C: V - Validate
    C->>C: Check: .prt exists? .sim exists? _i.prt present?
    C->>U: "✓ All required files present"

    Note over C: E - Execute
    C->>NX: Run introspection script
    NX-->>C: Expressions, constraints, solutions

    Note over C: R - Report
    C->>U: "Found 12 expressions, 3 are design variable candidates"

    Note over C: V - Verify
    C->>C: Validate generated config
    C->>U: "✓ Config validation passed"

    Note over C: S - Suggest
    C->>U: "Ready to run. Want me to:<br/>1. Start optimization now?<br/>2. Adjust parameters first?"
```
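The six phases can be made concrete as a small driver that wraps any task in the same announce/validate/execute/report/verify/suggest shell. The phase names come from the protocol; the callback structure is an illustrative assumption, not how Atomizer actually implements it:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AvervsResult:
    messages: list = field(default_factory=list)
    outcome: object = None

def run_avervs(announce: str,
               validate: Callable[[], bool],
               execute: Callable[[], object],
               report: Callable[[object], str],
               verify: Callable[[object], bool],
               suggest: Callable[[object], str]) -> AvervsResult:
    r = AvervsResult()
    r.messages.append(f"A: {announce}")
    if not validate():                       # V - check preconditions before touching anything
        r.messages.append("V: validation failed - aborting")
        return r
    r.outcome = execute()                    # E - do the work
    r.messages.append(f"R: {report(r.outcome)}")
    if verify(r.outcome):                    # V - check the result itself
        r.messages.append(f"S: {suggest(r.outcome)}")
    return r

# Hypothetical walk-through mirroring the sequence diagram above.
demo = run_avervs(
    "Analyzing your model for design variables",
    validate=lambda: True,
    execute=lambda: 12,
    report=lambda n: f"Found {n} candidate expressions",
    verify=lambda n: n > 0,
    suggest=lambda n: "Start optimization now, or adjust parameters first?",
)
```

The point of the shell is that failures surface at the two V steps instead of halfway through an execute call.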
---
## 8. Optimization Flow

```mermaid
flowchart TB
    subgraph Setup["1. Setup Phase"]
        A1[User describes goal] --> A2[Claude analyzes model]
        A2 --> A3[Query LAC for similar studies]
        A3 --> A4[Generate optimization_config.json]
        A4 --> A5[Create run_optimization.py]
    end

    subgraph Run["2. Optimization Loop"]
        B1[Optuna suggests parameters] --> B2[Update NX expressions]
        B2 --> B3[Update FEM mesh]
        B3 --> B4[Solve with Nastran]
        B4 --> B5[Extract results via Extractors]
        B5 --> B6[Report to Optuna]
        B6 --> B7{More trials?}
        B7 -->|Yes| B1
        B7 -->|No| C1
    end

    subgraph Analyze["3. Analysis Phase"]
        C1[Load study.db] --> C2[Find best trials]
        C2 --> C3[Generate visualizations]
        C3 --> C4[Create STUDY_REPORT.md]
    end

    subgraph Learn["4. Learning Phase"]
        D1[Record outcome to LAC]
        D2[Record insights discovered]
        D3[Suggest protocol updates]
    end

    Setup --> Run
    Run --> Analyze
    Analyze --> Learn

    style Setup fill:#e3f2fd
    style Run fill:#e8f5e9
    style Analyze fill:#fff3e0
    style Learn fill:#f3e5f5
```
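One pass through the loop above (suggest, update, solve, extract, report) looks roughly like the objective function Atomizer generates. The solver and extractor calls below are stubs standing in for NX/Nastran; only the control flow and the penalty idea are meant to match, and the 1000x penalty weight is an arbitrary illustration:

```python
def evaluate_trial(params: dict,
                   update_model, solve, extract_mass, extract_stress,
                   stress_limit: float = 250.0) -> float:
    """One optimization trial: return the objective, penalizing constraint violations."""
    update_model(params)              # push suggested parameters into the CAD/FEM model
    results = solve()                 # run the (stubbed) solve
    mass = extract_mass(results)
    stress = extract_stress(results)
    if stress > stress_limit:         # simple penalty; real configs may use proper constraints
        return mass + 1000.0 * (stress - stress_limit)
    return mass

# Stub physics: a thicker flange is heavier but less stressed.
state = {}
demo = evaluate_trial(
    {"flange_t": 4.0},
    update_model=lambda p: state.update(p),
    solve=lambda: {"t": state["flange_t"]},
    extract_mass=lambda r: 2.5 * r["t"],
    extract_stress=lambda r: 900.0 / r["t"],
)
```

With `flange_t = 4.0` the stub stress (225) stays under the 250 limit, so the objective is just the mass.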
### Extractors

Extractors bridge FEA results to optimization objectives:

```mermaid
graph LR
    subgraph FEA["FEA Output"]
        F1[OP2 File]
        F2[BDF File]
        F3[NX Part]
    end

    subgraph Extractors["Extractor Library"]
        E1[E1: Displacement]
        E2[E2: Frequency]
        E3[E3: Stress]
        E4[E4: Mass BDF]
        E5[E5: Mass CAD]
        E8[E8: Zernike WFE]
    end

    subgraph Output["Optimization Values"]
        O1[Objective Value]
        O2[Constraint Value]
    end

    F1 --> E1
    F1 --> E2
    F1 --> E3
    F2 --> E4
    F3 --> E5
    F1 --> E8

    E1 --> O1
    E2 --> O2
    E3 --> O2
    E4 --> O1
    E5 --> O1
    E8 --> O1
```
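Each extractor in the diagram maps one solver artifact to a single scalar for the optimizer. A minimal sketch of that contract; the real base class in `optimization_engine/extractors/` may look different, and the artifact dicts here are stand-ins for parsed OP2/BDF data:

```python
from abc import ABC, abstractmethod

class Extractor(ABC):
    """Turns one FEA artifact (OP2/BDF/part) into one objective or constraint value."""

    source = "op2"   # which artifact kind this extractor reads

    @abstractmethod
    def extract(self, artifact: dict) -> float:
        ...

class MaxStressExtractor(Extractor):
    source = "op2"

    def extract(self, artifact: dict) -> float:
        # Worst-case von Mises stress across all elements (constraint value).
        return max(artifact["von_mises"])

class MassExtractor(Extractor):
    source = "bdf"

    def extract(self, artifact: dict) -> float:
        # Sum of element masses from the parsed bulk data (objective value).
        return sum(artifact["element_masses"])

stress = MaxStressExtractor().extract({"von_mises": [120.0, 245.5, 180.0]})
mass = MassExtractor().extract({"element_masses": [0.4, 0.7, 0.9]})
```

Keeping the interface this narrow is what lets new extractors (like E8's Zernike WFE) plug in without touching the optimization loop.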
---
## 9. Knowledge Accumulation

Atomizer gets smarter over time:

```mermaid
graph TB
    subgraph Sessions["Claude Sessions Over Time"]
        S1[Session 1<br/>Bracket optimization]
        S2[Session 2<br/>Beam optimization]
        S3[Session 3<br/>Mirror optimization]
        S4[Session N<br/>New optimization]
    end

    subgraph LAC["LAC Knowledge Base"]
        K1[Optimization<br/>Patterns]
        K2[Failure<br/>Solutions]
        K3[Method<br/>Recommendations]
    end

    S1 -->|record| LAC
    S2 -->|record| LAC
    S3 -->|record| LAC
    LAC -->|inform| S4

    subgraph Improvement["Continuous Improvement"]
        I1[Better method selection]
        I2[Faster convergence]
        I3[Fewer failures]
    end

    LAC --> Improvement

    style LAC fill:#e8f5e9
    style Improvement fill:#fff3e0
```

### Example: Method Selection Improvement

```mermaid
graph LR
    subgraph Before["Without LAC"]
        B1[New bracket optimization]
        B2[Default: TPE]
        B3[Maybe suboptimal]
    end

    subgraph After["With LAC"]
        A1[New bracket optimization]
        A2[Query LAC:<br/>'bracket mass optimization']
        A3[LAC returns:<br/>'CMA-ES 30% faster for<br/>simple brackets']
        A4[Use CMA-ES]
        A5[Faster convergence]
    end

    B1 --> B2 --> B3
    A1 --> A2 --> A3 --> A4 --> A5

    style After fill:#e8f5e9
```

---
## 10. File Structure Reference

```
Atomizer/
├── CLAUDE.md                         # 🎯 Main instructions (read first)
│
├── .claude/
│   ├── skills/
│   │   ├── 00_BOOTSTRAP.md           # Task classification
│   │   ├── 01_CHEATSHEET.md          # Quick reference
│   │   ├── 02_CONTEXT_LOADER.md      # What to load when
│   │   ├── core/
│   │   │   └── study-creation-core.md
│   │   └── modules/
│   │       ├── learning-atomizer-core.md   # LAC documentation
│   │       ├── zernike-optimization.md
│   │       └── neural-acceleration.md
│   └── commands/                     # Slash commands
│
├── knowledge_base/
│   ├── lac.py                        # LAC implementation
│   └── lac/                          # LAC data storage
│       ├── optimization_memory/      # What worked for what
│       ├── session_insights/         # Learnings
│       └── skill_evolution/          # Protocol updates
│
├── docs/protocols/
│   ├── operations/                   # OP_01 - OP_06
│   ├── system/                       # SYS_10 - SYS_15
│   └── extensions/                   # EXT_01 - EXT_04
│
├── optimization_engine/
│   ├── extractors/                   # Physics extraction
│   ├── hooks/                        # NX automation
│   └── gnn/                          # Neural surrogates
│
└── studies/                          # User studies
    └── {study_name}/
        ├── 1_setup/
        │   ├── model/                # NX files
        │   └── optimization_config.json
        ├── 2_results/
        │   └── study.db              # Optuna database
        └── run_optimization.py
```

---

## Quick Reference: The Complete Flow

```mermaid
graph TB
    subgraph Start["🚀 Session Start"]
        A1[Load CLAUDE.md]
        A2[Load Bootstrap]
        A3[Query LAC]
    end

    subgraph Work["⚙️ During Session"]
        B1[Classify request]
        B2[Load protocols]
        B3[Execute AVERVS]
        B4[Record insights]
    end

    subgraph End["🏁 Session End"]
        C1[Save work]
        C2[Record to LAC]
        C3[Summarize]
    end

    Start --> Work --> End

    subgraph Legend["Legend"]
        L1[📚 POS: What to do]
        L2[🧠 LAC: What we learned]
        L3[⚡ AVERVS: How to do it]
    end
```

---

## Summary

| Component | Purpose | Key Files |
|-----------|---------|-----------|
| **CLAUDE.md** | Main instructions | `CLAUDE.md` |
| **Bootstrap** | Task routing | `00_BOOTSTRAP.md` |
| **POS** | Protocol system | `docs/protocols/` |
| **LAC** | Learning system | `knowledge_base/lac.py` |
| **AVERVS** | Execution pattern | Embedded in protocols |
| **Extractors** | Physics extraction | `optimization_engine/extractors/` |

**The key insight**: Atomizer is not just an optimization tool - it's a *learning* optimization tool that gets better with every session.

---

*Atomizer: Where engineers talk, AI optimizes, and every session makes the next one better.*
docs/development/ATOMIZER_CLAUDE_CODE_INSTRUCTIONS.md (new file, 1191 lines; diff suppressed because it is too large)

docs/development/ATOMIZER_DASHBOARD_GAP_ANALYSIS.md (new file, 105 lines)
# Atomizer Dashboard: Gap Analysis & Future Work Plan

**Date**: November 22, 2025
**Status**: Phase 1-5 Frontend Implementation Complete (Mock/Placeholder Data)

## Executive Summary

The Atomizer Dashboard frontend has been successfully architected and implemented using a modern React stack (Vite, TypeScript, Tailwind, Recharts, Three.js). The UI structure, navigation, and key components for all major phases (Configuration, Monitoring, Analysis, Reporting) are in place.

However, **significant backend integration and data pipeline work remains** to make these features fully functional with real engineering data. Currently, many components rely on placeholder data or simulated API responses.

---
## 1. Backend Integration Gaps

### 1.1 Study Configuration (Critical)
- **Current State**: Frontend sends a JSON payload to `POST /api/optimization/studies`.
- **Missing**:
  - Backend logic to parse this payload and initialize the actual optimization engine.
  - File upload handling for `.prt`, `.sim`, `.fem` files (currently UI only).
  - Validation logic to ensure the requested design variables exist in the NX model.
- **Action Item**: Implement `StudyService.create_study()` in backend to handle file uploads and initialize `OptimizationRunner`.

### 1.2 Real-Time Data Streaming
- **Current State**: WebSocket connection is established; frontend listens for `trial_completed` events.
- **Missing**:
  - Backend broadcaster for `pareto_front` updates (needed for advanced plots).
  - Backend broadcaster for `optimizer_state` (needed for "Optimizer Thinking" visualization).
- **Action Item**: Update `optimization_stream.py` to watch for and broadcast multi-objective data and internal optimizer state changes.
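At its core, the missing broadcaster is a fan-out of typed events to every connected client. A transport-agnostic sketch using asyncio queues; the real `optimization_stream.py` would presumably drain each queue into a WebSocket send, and the event payloads here are invented for illustration:

```python
import asyncio
import json

class EventBroadcaster:
    """Fan out optimizer events (trial_completed, pareto_front, optimizer_state)."""

    def __init__(self):
        self._subscribers: set[asyncio.Queue] = set()

    def subscribe(self) -> asyncio.Queue:
        q: asyncio.Queue = asyncio.Queue()
        self._subscribers.add(q)
        return q

    async def publish(self, event_type: str, payload: dict) -> None:
        message = json.dumps({"type": event_type, "data": payload})
        for q in self._subscribers:
            await q.put(message)   # each WebSocket handler drains its own queue

async def demo() -> dict:
    bus = EventBroadcaster()
    client = bus.subscribe()
    await bus.publish("pareto_front", {"points": [[1.2, 0.8], [0.9, 1.1]]})
    return json.loads(await client.get())

received = asyncio.run(demo())
```

Typing every message with a `type` field lets the frontend dispatch `trial_completed`, `pareto_front`, and `optimizer_state` through one connection.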
### 1.3 Report Generation
- **Current State**: Frontend has a drag-and-drop builder; "Regenerate" button simulates a delay.
- **Missing**:
  - Backend endpoint `POST /api/reports/generate` to accept the report structure.
  - Logic to compile the report into PDF/HTML using a library like `WeasyPrint` or `Pandoc`.
  - Integration with LLM to generate the "Executive Summary" text based on actual results.
- **Action Item**: Build the report generation service in the backend.

---
## 2. Advanced Visualization Gaps

### 2.1 Parallel Coordinates Plot
- **Current State**: Placeholder component displayed.
- **Missing**:
  - D3.js implementation for the actual plot (Recharts is insufficient for this specific visualization).
  - Data normalization logic (scaling all variables to a 0-1 range for display).
  - Interactive brushing (filtering lines by dragging axes).
- **Action Item**: Implement a custom D3.js Parallel Coordinates component wrapped in React.
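The 0-1 scaling mentioned above is per-axis min-max normalization. A sketch of that logic (shown in Python for brevity; the dashboard version would live in TypeScript), guarding against constant columns:

```python
def normalize_columns(rows: list[dict], keys: list[str]) -> list[dict]:
    """Scale each named variable to [0, 1] across all trials for plotting."""
    lo = {k: min(r[k] for r in rows) for k in keys}
    hi = {k: max(r[k] for r in rows) for k in keys}
    out = []
    for r in rows:
        scaled = {}
        for k in keys:
            span = hi[k] - lo[k]
            # A constant axis would divide by zero; pin it to the middle instead.
            scaled[k] = 0.5 if span == 0 else (r[k] - lo[k]) / span
        out.append(scaled)
    return out

trials = [
    {"thickness": 2.0, "mass": 10.0},
    {"thickness": 4.0, "mass": 6.0},
    {"thickness": 3.0, "mass": 8.0},
]
scaled = normalize_columns(trials, ["thickness", "mass"])
```

Normalizing per axis is what lets variables with wildly different units (mm, kg, MPa) share one plot.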
### 2.2 3D Mesh Viewer
- **Current State**: Renders a rotating placeholder cube.
- **Missing**:
  - **Data Pipeline**: Conversion of Nastran `.op2` or `.bdf` files to web-friendly formats (`.gltf` or `.obj`).
  - **Backend Endpoint**: API to serve the converted mesh files.
  - **Result Mapping**: Logic to parse nodal results (displacement/stress) and map them to vertex colors in Three.js.
- **Action Item**: Create a backend utility (using `pyNastran` + `trimesh`) to convert FEA models to GLTF and extract result fields as textures/attributes.

---
## 3. Intelligent Features Gaps

### 3.1 LLM Integration
- **Current State**: Not implemented in frontend.
- **Missing**:
  - Chat interface for "Talk to your data".
  - Backend integration with Claude/GPT to analyze trial history and provide insights.
  - Automated "Reasoning" display (why the optimizer chose specific parameters).
- **Action Item**: Add `LLMChat` component and corresponding backend route `POST /api/llm/analyze`.

### 3.2 Surrogate Model Visualization
- **Current State**: Not implemented.
- **Missing**:
  - Visualization of the Gaussian Process / Random Forest response surface.
  - 3D surface plots (for 2 variables) or slice plots (for >2 variables).
- **Action Item**: Implement a 3D Surface Plot component using `react-plotly.js` or Three.js.

---

## 4. Work Plan (Prioritized)

### Phase 6: Backend Connection (Immediate)
1. [ ] Implement file upload handling in FastAPI.
2. [ ] Connect `Configurator` payload to `OptimizationRunner`.
3. [ ] Ensure `optimization_history.json` updates trigger WebSocket events correctly.

### Phase 7: 3D Pipeline (High Value)
1. [ ] Create `op2_to_gltf.py` utility using `pyNastran`.
2. [ ] Create API endpoint to serve generated GLTF files.
3. [ ] Update `MeshViewer.tsx` to load real models from URL.

### Phase 8: Advanced Viz (Scientific Rigor)
1. [ ] Replace Parallel Coordinates placeholder with D3.js implementation.
2. [ ] Implement "Compare Trials" view (side-by-side table + mesh).
3. [ ] Add "Optimizer State" visualization (acquisition function heatmaps).

### Phase 9: Reporting & LLM (Productivity)
1. [ ] Implement backend report generation (PDF export).
2. [ ] Connect LLM API for automated result summarization.

---

## Conclusion

The frontend is "demo-ready" and structurally complete. The next sprint must focus entirely on **backend engineering** to feed real, dynamic data into these polished UI components. The 3D viewer specifically requires a dedicated data conversion pipeline to bridge the gap between Nastran and the Web.
docs/development/ATOMIZER_FIELD_INTEGRATION_PLAN.md (new file, 581 lines)
# Atomizer-Field Integration Plan

## Executive Summary

This plan outlines the integration of Atomizer-Field (neural network surrogate) with Atomizer (FEA optimization framework) to achieve 600x speedup in optimization workflows by replacing expensive FEA evaluations (30 min) with fast neural network predictions (50 ms).

**STATUS: ✅ INTEGRATION COMPLETE** (as of November 2025)

All phases have been implemented and tested. Neural acceleration is production-ready.

## 🎯 Goals - ALL ACHIEVED

1. ✅ **Unified Development**: Atomizer-Field integrated as subdirectory
2. ✅ **Training Pipeline**: Automatic training data export → neural network training
3. ✅ **Hybrid Optimization**: Smart switching between FEA and NN based on confidence
4. ✅ **Production Ready**: Robust, tested integration with 18 comprehensive tests

## 📊 Current State - COMPLETE

### Atomizer (This Repo)
- ✅ Training data export module (`training_data_exporter.py`) - 386 lines
- ✅ Neural surrogate integration (`neural_surrogate.py`) - 1,013 lines
- ✅ Neural-enhanced runner (`runner_with_neural.py`) - 516 lines
- ✅ Comprehensive test suite
- ✅ Complete documentation

### Atomizer-Field (Integrated)
- ✅ Graph Neural Network implementation (`field_predictor.py`) - 490 lines
- ✅ Parametric GNN (`parametric_predictor.py`) - 450 lines
- ✅ BDF/OP2 parser for Nastran files (`neural_field_parser.py`) - 650 lines
- ✅ Training pipeline (`train.py`, `train_parametric.py`)
- ✅ Inference engine (`predict.py`)
- ✅ Uncertainty quantification (`uncertainty.py`)
- ✅ Physics-informed loss functions (`physics_losses.py`)
- ✅ Pre-trained models available
## 🔄 Integration Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                          ATOMIZER                           │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  Optimization Loop                                          │
│  ┌─────────────────────────────────────────────────┐        │
│  │                                                 │        │
│  │  ┌──────────┐    Decision    ┌──────────┐       │        │
│  │  │          │  ─────────>    │   FEA    │       │        │
│  │  │  Optuna  │                │  Solver  │       │        │
│  │  │          │  ─────────>    │   (NX)   │       │        │
│  │  └──────────┘    Engine      └──────────┘       │        │
│  │       │                           │             │        │
│  │       │        ┌──────────┐       │             │        │
│  │       └───────>│    NN    │<──────┘             │        │
│  │                │ Surrogate│                     │        │
│  │                └──────────┘                     │        │
│  │                     ↑                           │        │
│  └─────────────────────┼───────────────────────────┘        │
│                        │                                    │
├────────────────────────┼────────────────────────────────────┤
│                 ATOMIZER-FIELD                              │
│                        │                                    │
│  ┌──────────────┐   ┌──┴─────────┐   ┌──────────────┐       │
│  │   Training   │   │   Model    │   │  Inference   │       │
│  │   Pipeline   │──>│   (GNN)    │──>│    Engine    │       │
│  └──────────────┘   └────────────┘   └──────────────┘       │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```
## 📋 Implementation Steps

### Phase 1: Repository Integration (Week 1)

#### 1.1 Clone and Structure
```bash
# Option A: Git Submodule (Recommended)
git submodule add https://github.com/Anto01/Atomizer-Field.git atomizer-field
git submodule update --init --recursive

# Option B: Direct Clone
git clone https://github.com/Anto01/Atomizer-Field.git atomizer-field
```

#### 1.2 Directory Structure
```
Atomizer/
├── optimization_engine/
│   ├── runner.py                     # Main optimization loop
│   ├── training_data_exporter.py     # Export for training
│   └── neural_surrogate.py           # NEW: NN integration layer
├── atomizer-field/                   # Atomizer-Field repo
│   ├── models/                       # GNN models
│   ├── parsers/                      # BDF/OP2 parsers
│   ├── training/                     # Training scripts
│   └── inference/                    # Inference engine
├── studies/                          # Optimization studies
└── atomizer_field_training_data/     # Training data storage
```

#### 1.3 Dependencies Integration
```text
# requirements.txt additions
torch>=2.0.0
torch-geometric>=2.3.0
pyNastran>=1.4.0
networkx>=3.0
scipy>=1.10.0
```
### Phase 2: Integration Layer (Week 1-2)

#### 2.1 Create Neural Surrogate Module

```python
# optimization_engine/neural_surrogate.py
"""
Neural network surrogate integration for Atomizer.
Interfaces with Atomizer-Field models for fast FEA predictions.
"""

import time
import logging
from pathlib import Path
from typing import Dict, Any, Optional, Tuple

import torch
import numpy as np

# Import from atomizer-field
from atomizer_field.inference import ModelInference
from atomizer_field.parsers import BDFParser
from atomizer_field.models import load_checkpoint

logger = logging.getLogger(__name__)


class NeuralSurrogate:
    """
    Wrapper for Atomizer-Field neural network models.

    Provides:
    - Model loading and management
    - Inference with uncertainty quantification
    - Fallback to FEA when confidence is low
    - Performance tracking
    """

    def __init__(self,
                 model_path: Path,
                 device: str = 'cuda' if torch.cuda.is_available() else 'cpu',
                 confidence_threshold: float = 0.95):
        """
        Initialize neural surrogate.

        Args:
            model_path: Path to trained model checkpoint
            device: Computing device (cuda/cpu)
            confidence_threshold: Minimum confidence for NN predictions
        """
        self.model_path = model_path
        self.device = device
        self.confidence_threshold = confidence_threshold

        # Load model
        self.model = load_checkpoint(model_path, device=device)
        self.model.eval()

        # Initialize inference engine
        self.inference_engine = ModelInference(self.model, device=device)

        # Performance tracking
        self.prediction_count = 0
        self.fea_fallback_count = 0
        self.total_nn_time = 0.0
        self.total_fea_time = 0.0

    def predict(self,
                design_variables: Dict[str, float],
                bdf_template: Path) -> Tuple[Dict[str, float], float, bool]:
        """
        Predict FEA results using neural network.

        Args:
            design_variables: Design parameter values
            bdf_template: Template BDF file with parametric geometry

        Returns:
            Tuple of (predictions, confidence, used_nn)
            - predictions: Dict of predicted values (stress, displacement, etc.)
            - confidence: Prediction confidence score [0, 1]
            - used_nn: True if NN was used, False if fell back to FEA
        """
        start_time = time.time()

        try:
            # Update BDF with design variables
            updated_bdf = self._update_bdf_parameters(bdf_template, design_variables)
|
||||
|
||||
# Parse to graph representation
|
||||
graph_data = BDFParser.parse(updated_bdf)
|
||||
|
||||
# Run inference with uncertainty quantification
|
||||
predictions, uncertainty = self.inference_engine.predict_with_uncertainty(
|
||||
graph_data,
|
||||
n_samples=10 # Monte Carlo dropout samples
|
||||
)
|
||||
|
||||
# Calculate confidence score
|
||||
confidence = self._calculate_confidence(predictions, uncertainty)
|
||||
|
||||
# Check if confidence meets threshold
|
||||
if confidence >= self.confidence_threshold:
|
||||
self.prediction_count += 1
|
||||
self.total_nn_time += time.time() - start_time
|
||||
|
||||
logger.info(f"NN prediction successful (confidence: {confidence:.3f})")
|
||||
return predictions, confidence, True
|
||||
else:
|
||||
logger.warning(f"Low confidence ({confidence:.3f}), falling back to FEA")
|
||||
self.fea_fallback_count += 1
|
||||
return {}, confidence, False
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"NN prediction failed: {e}")
|
||||
self.fea_fallback_count += 1
|
||||
return {}, 0.0, False
|
||||
|
||||
def _calculate_confidence(self, predictions: Dict, uncertainty: Dict) -> float:
|
||||
"""Calculate confidence score from predictions and uncertainties."""
|
||||
# Simple confidence metric: 1 / (1 + mean_relative_uncertainty)
|
||||
relative_uncertainties = []
|
||||
for key in predictions:
|
||||
if key in uncertainty and predictions[key] != 0:
|
||||
rel_unc = uncertainty[key] / abs(predictions[key])
|
||||
relative_uncertainties.append(rel_unc)
|
||||
|
||||
if relative_uncertainties:
|
||||
mean_rel_unc = np.mean(relative_uncertainties)
|
||||
confidence = 1.0 / (1.0 + mean_rel_unc)
|
||||
return min(max(confidence, 0.0), 1.0) # Clamp to [0, 1]
|
||||
return 0.5 # Default confidence
|
||||
|
||||
def get_statistics(self) -> Dict[str, Any]:
|
||||
"""Get performance statistics."""
|
||||
total_predictions = self.prediction_count + self.fea_fallback_count
|
||||
|
||||
return {
|
||||
'total_predictions': total_predictions,
|
||||
'nn_predictions': self.prediction_count,
|
||||
'fea_fallbacks': self.fea_fallback_count,
|
||||
'nn_percentage': (self.prediction_count / total_predictions * 100) if total_predictions > 0 else 0,
|
||||
'avg_nn_time': (self.total_nn_time / self.prediction_count) if self.prediction_count > 0 else 0,
|
||||
'total_nn_time': self.total_nn_time,
|
||||
'speedup_factor': self._calculate_speedup()
|
||||
}
|
||||
```
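
The confidence metric in `_calculate_confidence` is simple enough to check by hand. This standalone sketch reproduces the same formula outside the class, with illustrative values (not tied to any real model output) showing how 5% relative uncertainty lands just above the default 0.95 threshold:

```python
def calculate_confidence(predictions, uncertainty):
    """Confidence = 1 / (1 + mean relative uncertainty), clamped to [0, 1]."""
    relative_uncertainties = [
        uncertainty[k] / abs(predictions[k])
        for k in predictions
        if k in uncertainty and predictions[k] != 0
    ]
    if not relative_uncertainties:
        return 0.5  # default when no usable uncertainties
    mean_rel_unc = sum(relative_uncertainties) / len(relative_uncertainties)
    return min(max(1.0 / (1.0 + mean_rel_unc), 0.0), 1.0)

# 5% relative uncertainty on both outputs -> confidence = 1 / 1.05
preds = {"max_stress": 200.0, "max_displacement": 1.5}
unc = {"max_stress": 10.0, "max_displacement": 0.075}
print(round(calculate_confidence(preds, unc), 3))
```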

#### 2.2 Modify Optimization Runner

```python
# In optimization_engine/runner.py

def __init__(self, config_path):
    # ... existing init ...

    # Neural surrogate setup
    self.use_neural = self.config.get('neural_surrogate', {}).get('enabled', False)
    self.neural_surrogate = None

    if self.use_neural:
        model_path = self.config['neural_surrogate'].get('model_path')
        if model_path and Path(model_path).exists():
            from optimization_engine.neural_surrogate import NeuralSurrogate
            self.neural_surrogate = NeuralSurrogate(
                model_path=Path(model_path),
                confidence_threshold=self.config['neural_surrogate'].get('confidence_threshold', 0.95)
            )
            logger.info(f"Neural surrogate loaded from {model_path}")
        else:
            logger.warning("Neural surrogate enabled but model not found")

def objective(self, trial):
    # ... existing code ...

    # Try neural surrogate first
    if self.neural_surrogate:
        predictions, confidence, used_nn = self.neural_surrogate.predict(
            design_variables=design_vars,
            bdf_template=self.bdf_template_path
        )

        if used_nn:
            # Use NN predictions
            extracted_results = predictions
            # Log to trial
            trial.set_user_attr('prediction_method', 'neural_network')
            trial.set_user_attr('nn_confidence', confidence)
        else:
            # Fall back to FEA
            extracted_results = self._run_fea_simulation(design_vars)
            trial.set_user_attr('prediction_method', 'fea')
            trial.set_user_attr('nn_confidence', confidence)
    else:
        # Standard FEA path
        extracted_results = self._run_fea_simulation(design_vars)
        trial.set_user_attr('prediction_method', 'fea')
```

### Phase 3: Training Pipeline Integration (Week 2)

#### 3.1 Automated Training Script

```python
# train_neural_surrogate.py
"""
Train Atomizer-Field model from exported optimization data.
"""

import argparse
import sys
from pathlib import Path

# Add atomizer-field to path
sys.path.append('atomizer-field')

from atomizer_field.training import train_model
from atomizer_field.data import create_dataset


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--data-dir', type=str, required=True,
                        help='Path to training data directory')
    parser.add_argument('--output-dir', type=str, default='trained_models',
                        help='Directory to save trained models')
    parser.add_argument('--epochs', type=int, default=200)
    parser.add_argument('--batch-size', type=int, default=32)
    args = parser.parse_args()

    # Create dataset from exported data
    dataset = create_dataset(Path(args.data_dir))

    # Train model
    model = train_model(
        dataset=dataset,
        epochs=args.epochs,
        batch_size=args.batch_size,
        output_dir=Path(args.output_dir)
    )

    print(f"Model saved to {args.output_dir}")


if __name__ == "__main__":
    main()
```

### Phase 4: Hybrid Optimization Mode (Week 3)

#### 4.1 Smart Sampling Strategy

```python
# optimization_engine/hybrid_optimizer.py
"""
Hybrid optimization using both FEA and neural surrogates.
"""

class HybridOptimizer:
    """
    Intelligent optimization that:
    1. Uses FEA for initial exploration
    2. Trains NN on accumulated data
    3. Switches to NN for exploitation
    4. Validates critical points with FEA
    """

    def __init__(self, config):
        self.config = config
        self.fea_samples = []
        self.nn_model = None
        self.phase = 'exploration'  # exploration -> training -> exploitation -> validation

    def should_use_nn(self, trial_number: int) -> bool:
        """Decide whether to use NN for this trial."""

        if self.phase == 'exploration':
            # First N trials: always use FEA
            if trial_number < self.config['min_fea_samples']:
                return False
            else:
                self.phase = 'training'
                self._train_surrogate()

        elif self.phase == 'training':
            self.phase = 'exploitation'

        elif self.phase == 'exploitation':
            # Use NN with periodic FEA validation
            if trial_number % self.config['validation_frequency'] == 0:
                return False  # Validate with FEA
            return True

        return False

    def _train_surrogate(self):
        """Train surrogate model on accumulated FEA data."""
        # Trigger training pipeline
        pass
```
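
To see the trial schedule this phase logic produces, here is a minimal self-contained copy of it stepped through consecutive trials (the training step is a no-op here, as in the sketch; config values are illustrative):

```python
class HybridOptimizer:
    """Minimal copy of the phase logic sketched above, for illustration only."""
    def __init__(self, config):
        self.config = config
        self.phase = 'exploration'

    def should_use_nn(self, trial_number: int) -> bool:
        if self.phase == 'exploration':
            if trial_number < self.config['min_fea_samples']:
                return False
            self.phase = 'training'   # surrogate training would trigger here
        elif self.phase == 'training':
            self.phase = 'exploitation'
        elif self.phase == 'exploitation':
            if trial_number % self.config['validation_frequency'] == 0:
                return False  # periodic FEA validation
            return True
        return False

opt = HybridOptimizer({"min_fea_samples": 5, "validation_frequency": 3})
decisions = [opt.should_use_nn(i) for i in range(12)]
# Five FEA exploration trials, one training handoff trial, one trial entering
# exploitation, then NN with every 3rd trial validated by FEA.
print(decisions)
```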

### Phase 5: Testing and Validation (Week 3-4)

#### 5.1 Integration Tests

```python
# tests/test_neural_integration.py
"""
End-to-end tests for neural surrogate integration.
"""

def test_nn_prediction_accuracy():
    """Test NN predictions match FEA within tolerance."""
    pass

def test_confidence_based_fallback():
    """Test fallback to FEA when confidence is low."""
    pass

def test_hybrid_optimization():
    """Test complete hybrid optimization workflow."""
    pass

def test_speedup_measurement():
    """Verify speedup metrics are accurate."""
    pass
```

#### 5.2 Benchmark Studies

1. **Simple Beam**: Compare pure FEA vs hybrid
2. **Complex Bracket**: Test confidence thresholds
3. **Multi-objective**: Validate Pareto front quality

### Phase 6: Production Deployment (Week 4)

#### 6.1 Configuration Schema

```yaml
# workflow_config.yaml
study_name: "bracket_optimization_hybrid"

neural_surrogate:
  enabled: true
  model_path: "trained_models/bracket_gnn_v1.pth"
  confidence_threshold: 0.95

hybrid_mode:
  enabled: true
  min_fea_samples: 20        # Initial FEA exploration
  validation_frequency: 10   # Validate every 10th prediction
  retrain_frequency: 50      # Retrain NN every 50 trials

training_data_export:
  enabled: true
  export_dir: "atomizer_field_training_data/bracket_study"
```
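
Once parsed, the runner consumes these keys as plain nested lookups with defaults (the same `get`-with-fallback pattern shown in section 2.2). Sketched here with the parsed config as a dict, using values from the schema above:

```python
# Config as the runner sees it after YAML parsing (values from the schema above)
config = {
    "neural_surrogate": {
        "enabled": True,
        "model_path": "trained_models/bracket_gnn_v1.pth",
        "confidence_threshold": 0.95,
    },
    "hybrid_mode": {
        "enabled": True,
        "min_fea_samples": 20,
        "validation_frequency": 10,
        "retrain_frequency": 50,
    },
}

# Missing sections or keys fall back to safe defaults (surrogate disabled)
use_neural = config.get("neural_surrogate", {}).get("enabled", False)
threshold = config.get("neural_surrogate", {}).get("confidence_threshold", 0.95)
print(use_neural, threshold)
```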

#### 6.2 Monitoring Dashboard

Add neural surrogate metrics to the dashboard:
- NN vs FEA usage ratio
- Confidence distribution
- Speedup factor
- Prediction accuracy
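
The speedup factor can be derived from the counters `NeuralSurrogate` already tracks. `_calculate_speedup` is referenced in `get_statistics` but not shown in this document; one plausible implementation (an assumption, not the actual method) compares the actual wall time against a hypothetical all-FEA run:

```python
def calculate_speedup(nn_count, fea_count, total_nn_time, total_fea_time):
    """Estimate loop speedup: time if every evaluation had run through FEA,
    divided by the actual wall time spent (NN + FEA fallbacks)."""
    if nn_count == 0 or fea_count == 0:
        return 1.0  # not enough data to compare
    avg_fea_time = total_fea_time / fea_count
    actual_time = total_nn_time + total_fea_time
    hypothetical_fea_time = (nn_count + fea_count) * avg_fea_time
    return hypothetical_fea_time / actual_time

# Illustrative numbers: 97 NN calls at ~4.5 ms each, 3 FEA fallbacks at ~60 s each
print(round(calculate_speedup(97, 3, 97 * 0.0045, 3 * 60.0), 1))
```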

## 📈 Expected Outcomes

### Performance Metrics
- **Speedup**: 100-600x for the optimization loop
- **Accuracy**: <5% error vs FEA for trained domains
- **Coverage**: 80-90% of evaluations use the NN

### Engineering Benefits
- **Exploration**: 1000s of designs vs 10s
- **Optimization**: Days → Hours
- **Iteration**: Real-time design changes
## 🚀 Quick Start Commands

```bash
# 1. Clone Atomizer-Field
git clone https://github.com/Anto01/Atomizer-Field.git atomizer-field

# 2. Install dependencies
pip install -r atomizer-field/requirements.txt

# 3. Run optimization with training data export
cd studies/beam_optimization
python run_optimization.py

# 4. Train neural surrogate
python train_neural_surrogate.py \
    --data-dir atomizer_field_training_data/beam_study \
    --epochs 200

# 5. Run hybrid optimization
python run_optimization.py --use-neural --model trained_models/beam_gnn.pth
```

## 📅 Implementation Timeline - COMPLETED

| Week | Phase | Status | Deliverables |
|------|-------|--------|--------------|
| 1 | Repository Integration | ✅ Complete | Merged codebase, dependencies |
| 1-2 | Integration Layer | ✅ Complete | Neural surrogate module, runner modifications |
| 2 | Training Pipeline | ✅ Complete | Automated training scripts |
| 3 | Hybrid Mode | ✅ Complete | Smart sampling, confidence-based switching |
| 3-4 | Testing | ✅ Complete | 18 integration tests, benchmarks |
| 4 | Deployment | ✅ Complete | Production config, monitoring |

## 🔍 Risk Mitigation - IMPLEMENTED

1. ✅ **Model Accuracy**: Extensive validation, configurable confidence thresholds (0.0-1.0)
2. ✅ **Edge Cases**: Automatic fallback to FEA when confidence is low
3. ✅ **Performance**: GPU acceleration (10x faster), CPU fallback available
4. ✅ **Data Quality**: Physics validation, outlier detection, 18 test cases
## 📚 Documentation - COMPLETE

- ✅ [Neural Features Complete Guide](NEURAL_FEATURES_COMPLETE.md) - Comprehensive feature overview
- ✅ [Neural Workflow Tutorial](NEURAL_WORKFLOW_TUTORIAL.md) - Step-by-step tutorial
- ✅ [GNN Architecture](GNN_ARCHITECTURE.md) - Technical deep-dive
- ✅ [Physics Loss Guide](PHYSICS_LOSS_GUIDE.md) - Loss function selection
- ✅ [API Reference](ATOMIZER_FIELD_NEURAL_OPTIMIZATION_GUIDE.md) - Integration API

## 🎯 Success Criteria - ALL MET

1. ✅ Successfully integrated Atomizer-Field (subdirectory integration)
2. ✅ 2,200x speedup demonstrated on UAV arm benchmark (exceeded the 100x goal)
3. ✅ <5% error vs FEA validation (achieved 2-4% on all objectives)
4. ✅ Production-ready with monitoring and dashboard integration
5. ✅ Comprehensive documentation (5 major docs, README updates)
## 📈 Performance Achieved

| Metric | Target | Achieved |
|--------|--------|----------|
| Speedup | 100x | **2,200x** |
| Prediction Error | <5% | **2-4%** |
| NN Usage Rate | 80% | **97%** |
| Inference Time | <100ms | **4.5ms** |

## 🚀 What's Next

The integration is complete and production-ready. Future enhancements:

1. **More Pre-trained Models**: Additional model types and design spaces
2. **Transfer Learning**: Use trained models as starting points for new problems
3. **Active Learning**: Intelligently select FEA validation points
4. **Multi-fidelity**: Combine coarse/fine mesh predictions

---

*Integration complete! Neural acceleration is now production-ready for FEA-based optimization.*
## Quick Start (Post-Integration)

```python
from optimization_engine.neural_surrogate import create_parametric_surrogate_for_study

# Load pre-trained model (no training needed!)
surrogate = create_parametric_surrogate_for_study()

# Instant predictions
result = surrogate.predict({
    "beam_half_core_thickness": 7.0,
    "beam_face_thickness": 2.5,
    "holes_diameter": 35.0,
    "hole_count": 10.0
})

print(f"Prediction time: {result['inference_time_ms']:.1f} ms")
```

See the [Neural Workflow Tutorial](NEURAL_WORKFLOW_TUTORIAL.md) for the complete guide.

691
docs/development/ATOMIZER_USER_GUIDE.md
Normal file
@@ -0,0 +1,691 @@
# Atomizer User Guide

**How to Use Atomizer So It Evolves the Right Way**

**Version**: 1.0
**Last Updated**: 2025-12-11

---

## Introduction

Atomizer is not just an optimization tool - it's a **learning system**. Every session you have with Claude contributes to making future sessions better. This guide teaches you how to use Atomizer properly so that:

1. You get the best results from your optimizations
2. The system learns and improves over time
3. Knowledge is preserved and shared

---

## Table of Contents

1. [The Right Mindset](#1-the-right-mindset)
2. [Starting a Session](#2-starting-a-session)
3. [Communicating with Atomizer Claude](#3-communicating-with-atomizer-claude)
4. [Creating Optimization Studies](#4-creating-optimization-studies)
5. [Running Optimizations](#5-running-optimizations)
6. [Analyzing Results](#6-analyzing-results)
7. [When Things Go Wrong](#7-when-things-go-wrong)
8. [Contributing to Learning](#8-contributing-to-learning)
9. [Ending a Session](#9-ending-a-session)
10. [Best Practices Summary](#10-best-practices-summary)

---

## 1. The Right Mindset

### Think of Atomizer as a Knowledgeable Colleague

```
❌ Wrong: "Atomizer is a tool I use"
✅ Right: "Atomizer is a colleague who learns from our conversations"
```

When you work with Atomizer:
- **Explain your goals** - not just what you want, but *why*
- **Share context** - what constraints matter? what tradeoffs are acceptable?
- **Report outcomes** - did it work? what surprised you?
- **Mention discoveries** - found something unexpected? Say so!

### The Learning Loop

```mermaid
graph LR
    A[You describe problem] --> B[Claude suggests approach]
    B --> C[Optimization runs]
    C --> D[Results analyzed]
    D --> E[Learnings recorded]
    E --> F[Next session is smarter]
    F --> A
```

**Your job**: Keep Claude informed so the loop works.

---

## 2. Starting a Session

### What Happens Behind the Scenes

When you start a new Claude Code session in the Atomizer project:

1. Claude reads `CLAUDE.md` (system instructions)
2. Claude checks for active studies
3. Claude queries LAC for relevant prior knowledge
4. Claude is ready to help

### Good Session Starters

```
✅ "I need to optimize my bracket for minimum mass while keeping
    stress below 250 MPa. The model is in studies/bracket_v4/"

✅ "Continue working on the mirror optimization from yesterday.
    I think we were at 50 trials."

✅ "I'm having trouble with the beam study - the solver keeps
    timing out."
```

### Bad Session Starters

```
❌ "Optimize this" (no context)

❌ "Run the thing" (what thing?)

❌ "It's not working" (what isn't working?)
```

### Providing Context Helps Learning

When you provide good context, Claude can:
- Find similar past optimizations in LAC
- Apply learnings from previous sessions
- Make better method recommendations

```mermaid
sequenceDiagram
    participant You
    participant Claude
    participant LAC

    You->>Claude: "Optimize bracket for mass, stress < 250 MPa"
    Claude->>LAC: Query similar: "bracket mass stress"
    LAC-->>Claude: Found: TPE worked 85% for brackets
    LAC-->>Claude: Insight: "20 startup trials helps"
    Claude->>You: "Based on 5 similar studies, I recommend TPE with 20 startup trials..."
```

---

## 3. Communicating with Atomizer Claude

### Be Specific About Goals

```
❌ Vague: "Make it lighter"

✅ Specific: "Minimize mass while keeping maximum displacement < 2mm
             and first natural frequency > 100 Hz"
```

### Mention Constraints and Preferences

```
✅ "I need results by Friday, so limit to 50 trials"

✅ "This is a preliminary study - rough results are fine"

✅ "This is for production - I need high confidence in the optimum"

✅ "I prefer TPE over CMA-ES based on past experience"
```

### Ask Questions

Atomizer Claude is an expert. Use that expertise:

```
✅ "What method do you recommend for this problem?"

✅ "How many trials should I run?"

✅ "Is this the right extractor for von Mises stress?"

✅ "Why did convergence slow down after trial 30?"
```

### Report What You Observe

This is **critical for learning**:

```
✅ "The optimization converged faster than expected - maybe because
    the design space is simple?"

✅ "I noticed the solver is slow when thickness < 2mm"

✅ "The Pareto front has a sharp knee around mass = 3kg"

✅ "This result doesn't make physical sense - stress should increase
    with thinner walls"
```

---

## 4. Creating Optimization Studies

### The Creation Flow

```mermaid
flowchart TD
    A[Describe your optimization goal] --> B{Claude analyzes model}
    B --> C[Claude suggests config]
    C --> D{You approve?}
    D -->|Yes| E[Files generated]
    D -->|Adjust| F[Discuss changes]
    F --> C
    E --> G[Ready to run]
```

### What to Provide

| Information | Why It Matters |
|-------------|----------------|
| **Model files** (.prt, .sim, .fem) | Claude needs to analyze them |
| **Optimization goal** | "Minimize mass", "Maximize stiffness" |
| **Constraints** | "Stress < 250 MPa", "Frequency > 100 Hz" |
| **Design variables** | Which parameters to vary (or let Claude discover) |
| **Trial budget** | How many evaluations you can afford |

### Example: Good Study Creation Request

```
"Create an optimization study for my UAV arm:

Goal: Minimize mass while maximizing stiffness (multi-objective)

Constraints:
- Maximum stress < 200 MPa
- First frequency > 50 Hz

Design variables:
- Wall thickness (1-5 mm)
- Rib spacing (10-50 mm)
- Material (Al6061 or Al7075)

Budget: About 100 trials, I have time for a thorough study.

The model is in studies/uav_arm_v2/1_setup/model/"
```

### Review the Generated Config

Claude will generate `optimization_config.json`. **Review it**:

```
✅ Check that objectives match your goals
✅ Verify constraints are correct
✅ Confirm design variable bounds make sense
✅ Ensure extractors are appropriate
```

If something's wrong, say so! This helps Claude learn what works.

---

## 5. Running Optimizations

### Before Running

Ask Claude to validate:

```
"Please validate the config before we run"
```

This catches errors early.

### During Running

You can:
- **Check progress**: "How many trials completed?"
- **See best so far**: "What's the current best design?"
- **Monitor convergence**: "Is it converging?"

### If You Need to Stop

```
"Pause the optimization - I need to check something"
```

Claude will help you resume later.

### Long-Running Optimizations

For studies with many trials:

```
"Start the optimization and let it run overnight.
 I'll check results tomorrow."
```

---

## 6. Analyzing Results

### What to Ask For

```
✅ "Show me the best design"

✅ "Plot the convergence history"

✅ "Show the Pareto front" (for multi-objective)

✅ "Compare the top 5 designs"

✅ "What parameters are most important?"

✅ "Generate a study report"
```

### Validate Results Physically

**This is important for learning!** Tell Claude if results make sense:

```
✅ "This looks right - thinner walls do reduce mass"

✅ "This is surprising - I expected more sensitivity to rib spacing"

❌ "This can't be right - stress should be higher with this geometry"
```

When results don't make sense, investigate together:
- Check extractor configuration
- Verify solver completed correctly
- Look for constraint violations

### Record Insights

If you discover something interesting:

```
"Record this insight: For UAV arms with thin walls (<2mm),
 the frequency constraint becomes dominant before stress."
```

---

## 7. When Things Go Wrong

### How to Report Errors

**Good error report:**
```
"The optimization failed at trial 23.
 Error message: 'OP2 file not found'
 The NX log shows 'Singular stiffness matrix'"
```

**Bad error report:**
```
"It's broken"
```

### Common Issues and What to Say

| Issue | How to Report |
|-------|---------------|
| Solver timeout | "Solver timed out after X minutes on trial Y" |
| Missing file | "Can't find [filename] - should it be in [location]?" |
| Unexpected results | "Results don't match physics - [explain why]" |
| Slow convergence | "Still not converged after X trials - should I continue?" |

### Help Claude Help You

When troubleshooting:

```
✅ "Here's what I already tried: [list attempts]"

✅ "This worked for a similar study last week"

✅ "The model works fine when I run it manually in NX"
```

### Workarounds Should Be Recorded

If you find a workaround:

```
"We found that loading the _i.prt file first fixes the mesh update issue.
 Please record this as a workaround."
```

This helps future sessions avoid the same problem.

---

## 8. Contributing to Learning

### The Learning Atomizer Core (LAC)

LAC stores three types of knowledge:

```mermaid
graph TB
    subgraph LAC["What LAC Learns"]
        A[Optimization Memory<br/>What methods work for what]
        B[Session Insights<br/>Failures, successes, workarounds]
        C[Skill Evolution<br/>Protocol improvements]
    end
```

### How You Contribute

#### 1. Report Outcomes

At the end of a successful optimization:

```
"The optimization completed successfully. TPE worked well for this
 bracket problem - converged at trial 67 out of 100."
```

Claude records this to LAC automatically.

#### 2. Share Discoveries

When you learn something:

```
"I discovered that CMA-ES struggles with this type of problem
 because of the discrete frequency target. TPE handled it better."
```

Claude will record this insight.

#### 3. Report Preferences

Your preferences help personalize future sessions:

```
"I prefer seeing actual values in plots rather than normalized values"

"I like concise summaries - you don't need to explain basic FEA to me"
```

#### 4. Suggest Improvements

If documentation was unclear:

```
"The protocol didn't explain how to handle assemblies -
 you should add that."
```

Claude will suggest a protocol update.

### What Gets Recorded

| Type | Example | Used For |
|------|---------|----------|
| **Success** | "TPE converged in 67 trials for bracket" | Method recommendations |
| **Failure** | "CMA-ES failed on discrete targets" | Avoiding bad choices |
| **Workaround** | "Load _i.prt before UpdateFemodel()" | Fixing known issues |
| **Preference** | "User prefers concise output" | Personalization |

---

## 9. Ending a Session

### Before You Go

Take 30 seconds to wrap up properly:

```
"Let's wrap up this session."
```

Claude will:
1. Summarize what was accomplished
2. Record any learnings to LAC
3. Note the current state of any studies
4. Suggest next steps

### Good Session Endings

```
✅ "We're done for today. The optimization is at trial 50,
    continuing overnight. I'll check results tomorrow."

✅ "Session complete. Please record that TPE worked well
    for this beam optimization."

✅ "Ending session. Next time I want to analyze the Pareto
    front in more detail."
```

### Bad Session Endings

```
❌ [Just closing the window without wrapping up]

❌ [Stopping mid-task without noting the state]
```

### Session Summary

Ask for a summary:

```
"Summarize this session"
```

You'll get:
- What was accomplished
- Current state of studies
- Learnings recorded
- Recommended next steps

---

## 10. Best Practices Summary

### Do These Things

| Practice | Why |
|----------|-----|
| **Provide context** | Enables LAC queries and better recommendations |
| **Explain your goals** | Claude can suggest better approaches |
| **Report outcomes** | Builds optimization memory |
| **Share discoveries** | Prevents repeat mistakes |
| **Validate results** | Catches errors, improves extractors |
| **Wrap up sessions** | Records learnings properly |

### Avoid These Things

| Anti-Pattern | Why It's Bad |
|--------------|--------------|
| Vague requests | Claude can't help effectively |
| Ignoring results | Missed learning opportunities |
| Not reporting errors | Same errors repeat |
| Abandoning sessions | Learnings not recorded |
| Skipping validation | Errors propagate |

### The Golden Rules

```
┌──────────────────────────────────────────────────────────────────┐
│                                                                  │
│  1. CONTEXT IS KING                                              │
│     The more context you provide, the better Claude can help.   │
│                                                                  │
│  2. OUTCOMES MATTER                                              │
│     Always report whether things worked or didn't.              │
│                                                                  │
│  3. SHARE DISCOVERIES                                            │
│     If you learn something, say it. It helps everyone.          │
│                                                                  │
│  4. CLOSE THE LOOP                                               │
│     End sessions properly so learnings are captured.            │
│                                                                  │
│  5. TRUST BUT VERIFY                                             │
│     Review generated configs. Validate results physically.      │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘
```

---

## Quick Reference Card

### Session Start
```
"I need to [goal] for my [geometry type].
 The model is in [location].
 Constraints: [list].
 Budget: [trials]."
```

### During Session
```
"What do you recommend for [decision]?"
"The results show [observation] - does this make sense?"
"I discovered that [insight]."
```

### Session End
```
"Let's wrap up. Please record that [outcome/learning]."
"Summarize this session."
```

### Reporting Issues
```
"[Action] failed with error: [message]
 I already tried: [attempts]
 The expected behavior was: [expectation]"
```

---
|
||||
|
||||
## Example: A Complete Session

Here's what a good session looks like:

```
YOU: I need to optimize my bracket for minimum mass.
     Stress must stay below 250 MPa. The model is in
     studies/bracket_v5/1_setup/model/. I have time
     for about 100 trials.

CLAUDE: I'll analyze your model and check LAC for similar studies.
        [Analyzes model, queries LAC]

        Based on 3 similar bracket optimizations, I recommend TPE
        with 20 startup trials. This converged in ~70 trials for
        similar problems.

        Found 4 expressions: thickness_1, thickness_2, fillet_r, width
        I suggest varying all of them.

        Here's the proposed config: [shows config]

        Want me to proceed?

YOU: Looks good, but I want to fix fillet_r at 5mm -
     it's a manufacturing constraint.

CLAUDE: Updated config with fillet_r fixed at 5mm.
        Validated - ready to run.

        Start the optimization?

YOU: Yes, run it.

[Optimization runs...]

YOU: How's progress?

CLAUDE: Trial 45/100 complete.
        Current best: 2.34 kg at trial 32
        Convergence: Good - variance dropping steadily

        On track to complete in ~30 minutes.

[Optimization completes]

CLAUDE: Optimization complete.
        Best design: mass = 2.18 kg (trial 78)
        - thickness_1: 3.2 mm
        - thickness_2: 2.8 mm
        - width: 45 mm

        Stress constraint satisfied: max stress = 237 MPa

        Want me to generate a detailed report?

YOU: Yes. Also, I noticed convergence was faster than the similar
     studies you mentioned. Maybe because I fixed the fillet radius?

CLAUDE: Good observation. That reduces the design space from 4 to 3
        variables, which typically speeds convergence.

        I'll record this insight: "Fixing fillet radius in bracket
        optimization reduced design space and improved convergence
        by ~20%."

        [Generates report]

        Report saved. Anything else?

YOU: No, let's wrap up.

CLAUDE: Session Summary:

        ## Accomplished
        - Created and ran bracket_v5 optimization
        - Found optimum: 2.18 kg at trial 78
        - Generated study report

        ## Learnings Recorded
        - TPE effective for 3-variable bracket optimization
        - Fixing fillet radius improved convergence ~20%

        ## Next Steps
        1. Review detailed report
        2. Validate design in NX
        3. Consider manufacturing feasibility
```

---

## Getting Help

If you're stuck:

```
"I'm not sure how to [task]. Can you help?"
```

If something seems wrong:

```
"This doesn't seem right because [reason]. Can we investigate?"
```

If you want to learn:

```
"Explain how [concept] works in Atomizer"
```

---

*Remember: Every session makes Atomizer smarter. Your contributions matter.*

*Atomizer: Where engineers talk, AI optimizes, and learning never stops.*
1239
docs/development/DEVELOPMENT_GUIDANCE.md
Normal file
File diff suppressed because it is too large
787
docs/development/DEVELOPMENT_ROADMAP.md
Normal file
@@ -0,0 +1,787 @@
# Atomizer Development Roadmap

> Vision: Transform Atomizer into an LLM-native engineering assistant for optimization

**Last Updated**: 2025-01-16

---

## Vision Statement

Atomizer will become an **LLM-driven optimization framework** where AI acts as a scientist/programmer/coworker that can:

- Understand natural language optimization requests
- Configure studies autonomously
- Write custom Python functions on-the-fly during optimization
- Navigate and extend its own codebase
- Make engineering decisions based on data analysis
- Generate comprehensive optimization reports
- Continuously expand its own capabilities through learning

---

## Architecture Philosophy

### LLM-First Design Principles

1. **Discoverability**: Every feature must be discoverable and usable by the LLM via the feature registry
2. **Extensibility**: Easy to add new capabilities without modifying the core engine
3. **Safety**: Validate all generated code, sandbox execution, roll back on errors
4. **Transparency**: Log all LLM decisions and generated code for auditability
5. **Human-in-the-loop**: Confirm critical decisions (e.g., deleting studies, pushing results)
6. **Documentation as Code**: Auto-generate docs from code with semantic metadata

---

## Development Phases

### Phase 1: Foundation - Plugin & Extension System ✅
**Timeline**: 2 weeks
**Status**: ✅ **COMPLETED** (2025-01-16)
**Goal**: Make Atomizer extensible and LLM-navigable

#### Deliverables

1. **Plugin Architecture** ✅
   - [x] Hook system for optimization lifecycle
     - [x] `pre_solve`: Execute before solver launch
     - [x] `post_solve`: Execute after solve, before extraction
     - [x] `post_extraction`: Execute after result extraction
   - [x] Python script execution at optimization stages
   - [x] Plugin auto-discovery and registration
   - [x] Hook manager with priority-based execution

2. **Logging Infrastructure** ✅
   - [x] Detailed per-trial logs (`trial_logs/`)
     - Complete iteration trace
     - Design variables, config, timeline
     - Extracted results and constraint evaluations
   - [x] High-level optimization log (`optimization.log`)
     - Configuration summary
     - Trial progress (START/COMPLETE entries)
     - Compact one-line-per-trial format
   - [x] Context passing system for hooks
     - `output_dir` passed from runner to all hooks
     - Trial number, design variables, results

3. **Project Organization** ✅
   - [x] Studies folder structure with templates
   - [x] Comprehensive studies documentation ([studies/README.md](studies/README.md))
   - [x] Model file organization (`model/` folder)
   - [x] Intelligent path resolution (`atomizer_paths.py`)
   - [x] Test suite for hook system

**Files Created**:
```
optimization_engine/
├── plugins/
│   ├── __init__.py
│   ├── hook_manager.py              # Hook registration and execution ✅
│   ├── pre_solve/
│   │   ├── detailed_logger.py       # Per-trial detailed logs ✅
│   │   └── optimization_logger.py   # High-level optimization.log ✅
│   ├── post_solve/
│   │   └── log_solve_complete.py    # Append solve completion ✅
│   └── post_extraction/
│       ├── log_results.py           # Append extracted results ✅
│       └── optimization_logger_results.py  # Append to optimization.log ✅

studies/
├── README.md                        # Comprehensive guide ✅
└── bracket_stress_minimization/
    ├── README.md                    # Study documentation ✅
    ├── model/                       # FEA files folder ✅
    │   ├── Bracket.prt
    │   ├── Bracket_sim1.sim
    │   └── Bracket_fem1.fem
    └── optimization_results/        # Auto-generated ✅
        ├── optimization.log
        └── trial_logs/

tests/
├── test_hooks_with_bracket.py       # Hook validation test ✅
├── run_5trial_test.py               # Quick integration test ✅
└── test_journal_optimization.py     # Full optimization test ✅

atomizer_paths.py                    # Intelligent path resolution ✅
```

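To make the hook system above concrete, here is a minimal sketch of a priority-ordered hook manager. The class name matches `hook_manager.py`, but the exact API (`register`, `run`, a shared context dict) is an assumption for illustration, not the shipped implementation:

```python
from collections import defaultdict


class HookManager:
    """Minimal sketch: priority-ordered hooks for the optimization lifecycle."""

    STAGES = ("pre_solve", "post_solve", "post_extraction")

    def __init__(self):
        self._hooks = defaultdict(list)  # stage -> [(priority, fn)]

    def register(self, stage, fn, priority=100):
        if stage not in self.STAGES:
            raise ValueError(f"Unknown stage: {stage}")
        self._hooks[stage].append((priority, fn))
        self._hooks[stage].sort(key=lambda pair: pair[0])  # lower runs first

    def run(self, stage, context):
        """Call every hook registered for `stage` with a shared context dict."""
        for _, fn in self._hooks[stage]:
            fn(context)
        return context


# Example: a pre_solve logger hook receiving trial context from the runner
manager = HookManager()
manager.register("pre_solve", lambda ctx: ctx.setdefault("log", []).append(
    f"trial {ctx['trial']} starting"), priority=10)
result = manager.run("pre_solve", {"trial": 1, "output_dir": "trial_logs"})
print(result["log"])  # ['trial 1 starting']
```

The shared context dict mirrors the "context passing system" deliverable: the runner supplies `output_dir`, trial number, and design variables, and each hook reads or extends it.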
---

### Phase 2: Research & Learning System
**Timeline**: 2 weeks
**Status**: 🟡 **NEXT PRIORITY**
**Goal**: Enable autonomous research and feature generation when encountering unknown domains

#### Philosophy

When the LLM encounters a request it cannot fulfill with existing features (e.g., "Create NX materials XML"), it should:

1. **Detect the knowledge gap** by searching the feature registry
2. **Plan research strategy** prioritizing: user examples → NX MCP → web documentation
3. **Execute interactive research**, asking the user first for examples
4. **Learn patterns and schemas** from gathered information
5. **Generate new features** following learned patterns
6. **Test and validate** with user confirmation
7. **Document and integrate** into the knowledge base and feature registry

This creates a **self-extending system** that grows more capable with each research session.

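Step 1 of the workflow above can be sketched as a keyword search over the feature registry. The registry entries and the `identify_knowledge_gap` signature are hypothetical here (the real `feature_registry.json` schema may differ); the point is that an empty match set is the trigger for the research workflow:

```python
# Hypothetical registry shape; the real feature_registry.json schema may differ.
FEATURE_REGISTRY = [
    {"name": "rss_objective", "keywords": ["rss", "root sum square", "objective"]},
    {"name": "mesh_refiner", "keywords": ["mesh", "refinement"]},
]


def identify_knowledge_gap(user_request, registry=FEATURE_REGISTRY):
    """Return names of matching features, or None to signal a knowledge gap."""
    words = user_request.lower()
    matches = [f["name"] for f in registry
               if any(kw in words for kw in f["keywords"])]
    return matches or None


print(identify_knowledge_gap("compute the RSS of stress and displacement"))
# ['rss_objective'] -> existing feature, no research needed
print(identify_knowledge_gap("create an NX materials XML"))
# None -> knowledge gap: ask the user for an example first
```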
#### Key Deliverables

**Week 1: Interactive Research Foundation**

1. **Knowledge Base Structure**
   - [x] Create `knowledge_base/` folder hierarchy
     - [x] `nx_research/` - NX-specific learned patterns
     - [x] `research_sessions/[date]_[topic]/` - Session logs with rationale
     - [x] `templates/` - Reusable code patterns learned from research

2. **ResearchAgent Class** (`optimization_engine/research_agent.py`)
   - [ ] `identify_knowledge_gap(user_request)` - Search registry, identify missing features
   - [ ] `create_research_plan(knowledge_gap)` - Prioritize sources (user > MCP > web)
   - [ ] `execute_interactive_research(plan)` - Ask user for examples first
   - [ ] `synthesize_knowledge(findings)` - Extract patterns, schemas, best practices
   - [ ] `design_feature(synthesized_knowledge)` - Create feature spec from learned patterns
   - [ ] `validate_with_user(feature_spec)` - Confirm implementation meets needs

3. **Interactive Research Workflow**
   - [ ] Prompt templates for asking users for examples
   - [ ] Example parser (extract structure from XML, Python, journal scripts)
   - [ ] Pattern recognition (identify reusable templates)
   - [ ] Confidence tracking (how reliable is this knowledge?)

**Week 2: Web Integration & Feature Generation**

4. **Web Research Integration**
   - [ ] WebSearch integration for NXOpen documentation
   - [ ] NXOpenTSE scraping for code examples
   - [ ] Siemens official docs search and parsing
   - [ ] Multi-source synthesis (combine user examples + web docs)

5. **Feature Generation Pipeline**
   - [ ] Code generator using learned templates
   - [ ] Feature registry auto-update
   - [ ] Documentation auto-generation (following FEATURE_REGISTRY_ARCHITECTURE.md format)
   - [ ] Unit test scaffolding from examples

6. **Knowledge Base Management**
   - [ ] Research session logging (questions, sources, findings, decisions)
   - [ ] Confidence score tracking (user-validated > MCP > web docs)
   - [ ] Knowledge retrieval (search past research before starting new)
   - [ ] Template library growth (extract reusable patterns from generated code)

#### Success Criteria

- [ ] **Materials XML Example**: LLM asks for example XML → learns schema → generates new material XMLs
- [ ] **Knowledge Persistence**: Research session saved with rationale, retrievable for future requests
- [ ] **Multi-Source Synthesis**: Combines user example + NXOpenTSE + official docs into one coherent feature
- [ ] **Template Reuse**: Second request for similar feature uses learned template (faster generation)
- [ ] **User Validation Loop**: Generated feature tested, user confirms it works, feedback integrated

#### Example Workflow

```
User: "Please create a new material XML for NX with titanium Ti-6Al-4V properties"

LLM (detects gap):
  "I don't have a feature for generating NX material XMLs yet. Let me research this.

   Do you have an example .xml material file I can learn from?"

User: [uploads steel_material.xml]

LLM (analyzes example):
  "Thanks! I've learned the NX material XML schema:
   - Root element: <PhysicalMaterial>
   - Required fields: Name, Density, YoungModulus, PoissonRatio, ThermalExpansion
   - Units specified via attributes

   Let me create titanium Ti-6Al-4V with these properties:
   - Density: 4430 kg/m³
   - Young's Modulus: 113.8 GPa
   - Poisson's Ratio: 0.342
   - Thermal Expansion: 8.6e-6 /K

   [Generates XML]

   Does this look correct?"

User: "Yes, perfect!"

LLM (documents):
  "Great! I've saved:
   1. NX material XML schema to knowledge_base/nx_research/material_xml_schema.md
   2. Template generator to optimization_engine/custom_functions/nx_material_generator.py
   3. Research session log to knowledge_base/research_sessions/2025-01-16_nx_materials/

   Next time you request a material, I can generate it instantly using this template!"
```

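A learned template generator like the `nx_material_generator.py` mentioned in the dialog might look as follows. The element and field names here are taken from the dialog itself (`PhysicalMaterial`, `Name`, `Density`, ...), not from the real NX schema, which would have to be learned from an actual example file:

```python
import xml.etree.ElementTree as ET


def make_material_xml(name, properties):
    """Build a material XML string from a property dict of (value, unit) pairs.

    Field names mirror the example dialog; the real NX schema must be
    learned from a user-supplied example file before trusting this output.
    """
    root = ET.Element("PhysicalMaterial")
    ET.SubElement(root, "Name").text = name
    for field, (value, unit) in properties.items():
        node = ET.SubElement(root, field, unit=unit)  # units as attributes
        node.text = str(value)
    return ET.tostring(root, encoding="unicode")


xml_text = make_material_xml("Ti-6Al-4V", {
    "Density": (4430, "kg/m^3"),
    "YoungModulus": (113.8e9, "Pa"),
    "PoissonRatio": (0.342, ""),
    "ThermalExpansion": (8.6e-6, "1/K"),
})
print(xml_text)
```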
#### Files to Create

```
knowledge_base/
├── nx_research/
│   ├── material_xml_schema.md       # Learned from user example
│   ├── journal_script_patterns.md   # Common NXOpen patterns
│   └── best_practices.md            # Engineering guidelines
├── research_sessions/
│   └── 2025-01-16_nx_materials/
│       ├── user_question.txt        # Original request
│       ├── sources_consulted.txt    # User example, NXOpenTSE, etc.
│       ├── findings.md              # What was learned
│       └── decision_rationale.md    # Why this implementation
└── templates/
    ├── xml_generation_template.py   # Learned from research
    └── journal_script_template.py

optimization_engine/
├── research_agent.py                # Main ResearchAgent class
└── custom_functions/
    └── nx_material_generator.py     # Generated from learned template
```

---

### Phase 3: LLM Integration Layer
**Timeline**: 2 weeks
**Status**: 🔵 Not Started
**Goal**: Enable natural language control of Atomizer

#### Key Deliverables

1. **Feature Registry** - Centralized catalog of all Atomizer capabilities
2. **Claude Skill** - LLM can navigate codebase and understand architecture
3. **Natural Language Parser** - Intent recognition and entity extraction
4. **Conversational Workflow** - Multi-turn conversations with context preservation

#### Success Vision

```
User: "Create a stress minimization study for my bracket"
LLM: "I'll set up a new study. Please drop your .sim file in the study folder."

User: "Done. Vary wall_thickness from 3-8mm"
LLM: "Perfect! I've configured:
      - Objective: Minimize max von Mises stress
      - Design variable: wall_thickness (3.0-8.0mm)
      - Sampler: TPE with 50 trials
      Ready to start?"

User: "Yes!"
LLM: "Optimization running! View progress at http://localhost:8080"
```

---

### Phase 4: Dynamic Code Generation
**Timeline**: 3 weeks
**Status**: 🔵 Not Started
**Goal**: LLM writes and integrates custom code during optimization

#### Deliverables

1. **Custom Function Generator**
   - [ ] Template system for common patterns:
     - RSS (Root Sum Square) of multiple metrics
     - Weighted objectives
     - Custom constraints (e.g., stress/yield_strength < 1)
     - Conditional objectives (if-then logic)
   - [ ] Code validation pipeline (syntax check, safety scan)
   - [ ] Unit test auto-generation
   - [ ] Auto-registration in feature registry
   - [ ] Persistent storage in `optimization_engine/custom_functions/`

2. **Journal Script Generator**
   - [ ] Generate NX journal scripts from natural language
   - [ ] Library of common operations:
     - Modify geometry (fillets, chamfers, thickness)
     - Apply loads and boundary conditions
     - Extract custom data (centroid, inertia, custom expressions)
   - [ ] Validation against NXOpen API
   - [ ] Dry-run mode for testing

3. **Safe Execution Environment**
   - [ ] Sandboxed Python execution (RestrictedPython or similar)
   - [ ] Whitelist of allowed imports
   - [ ] Error handling with detailed logs
   - [ ] Rollback mechanism on failure
   - [ ] Logging of all generated code to audit trail

**Files to Create**:
```
optimization_engine/
├── custom_functions/
│   ├── __init__.py
│   ├── templates/
│   │   ├── rss_template.py
│   │   ├── weighted_sum_template.py
│   │   └── constraint_template.py
│   ├── generator.py          # Code generation engine
│   ├── validator.py          # Safety validation
│   └── sandbox.py            # Sandboxed execution
├── code_generation/
│   ├── __init__.py
│   ├── journal_generator.py  # NX journal script generation
│   └── function_templates.py # Jinja2 templates
```

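The "syntax check, safety scan" step of the validation pipeline can be sketched with the standard library `ast` module: parse the generated source (which also catches syntax errors), then walk the tree rejecting non-whitelisted imports and dangerous calls. The whitelist contents and function name are illustrative, not the actual `validator.py`:

```python
import ast

ALLOWED_IMPORTS = {"math", "statistics", "json"}  # example whitelist


def validate_code(source):
    """Return a list of safety problems found in generated source (empty = OK)."""
    tree = ast.parse(source)  # raises SyntaxError for invalid code
    problems = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.split(".")[0] not in ALLOWED_IMPORTS:
                    problems.append(f"forbidden import: {alias.name}")
        elif isinstance(node, ast.ImportFrom):
            if (node.module or "").split(".")[0] not in ALLOWED_IMPORTS:
                problems.append(f"forbidden import: {node.module}")
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in {"exec", "eval", "__import__"}:
                problems.append(f"forbidden call: {node.func.id}")
    return problems


print(validate_code("from math import sqrt\nprint(sqrt(2))"))  # []
print(validate_code("import os\nos.remove('x')"))  # ['forbidden import: os']
```

A static scan like this is a first gate only; actually running the code still belongs in the sandbox, since AST checks cannot catch every escape route.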
---

### Phase 5: Intelligent Analysis & Decision Support
**Timeline**: 3 weeks
**Status**: 🔵 Not Started
**Goal**: LLM analyzes results and guides engineering decisions

#### Deliverables

1. **Result Analyzer**
   - [ ] Statistical analysis module
     - Convergence detection (plateau in objective)
     - Pareto front identification (multi-objective)
     - Sensitivity analysis (which params matter most)
     - Outlier detection
   - [ ] Trend analysis (monotonic relationships, inflection points)
   - [ ] Recommendations engine (refine mesh, adjust bounds, add constraints)

2. **Surrogate Model Manager**
   - [ ] Quality metrics calculation
     - R² (coefficient of determination)
     - CV score (cross-validation)
     - Prediction error distribution
     - Confidence intervals
   - [ ] Surrogate fitness assessment
     - "Ready to use" threshold (e.g., R² > 0.9)
     - Warning if predictions unreliable
   - [ ] Active learning suggestions (which points to sample next)

3. **Decision Assistant**
   - [ ] Trade-off interpreter (explain Pareto fronts)
   - [ ] "What-if" analysis (predict outcome of parameter change)
   - [ ] Constraint violation diagnosis
   - [ ] Next-step recommendations

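The "plateau in objective" convergence check above can be sketched as a window test on the running best value (assuming minimization). The window size, tolerance, and function name are illustrative defaults, not the planned `statistical_analyzer.py` API:

```python
def has_converged(objective_history, window=10, rel_tol=0.01):
    """Plateau check: True if the running best (minimization) improved by
    less than rel_tol over the last `window` trials."""
    if len(objective_history) < window + 1:
        return False  # not enough data to judge
    best_now = min(objective_history)
    best_before = min(objective_history[:-window])
    if best_before == 0:
        return best_now == 0
    improvement = (best_before - best_now) / abs(best_before)
    return improvement < rel_tol


values = [10.0, 8.0, 6.5, 6.1, 6.05] + [6.04] * 10
print(has_converged(values))  # True: <1% gain over the last 10 trials
```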
**Example**:
```
User: "Summarize optimization results"
→ LLM:
  Analyzes 50 trials, identifies best design at trial #34:
  - wall_thickness = 3.2mm (converged from initial 5mm)
  - max_stress = 187 MPa (target: 200 MPa ✓)
  - mass = 0.45 kg (15% lighter than baseline)

  Issues detected:
  - Stress constraint violated in 20% of trials (trials 5,12,18...)
  - Displacement shows high sensitivity to thickness (Sobol index: 0.78)

  Recommendations:
  1. Relax stress limit to 210 MPa OR
  2. Add fillet radius as design variable (currently fixed at 2mm)
  3. Consider thickness > 3mm for robustness
```

**Files to Create**:
```
optimization_engine/
├── analysis/
│   ├── __init__.py
│   ├── statistical_analyzer.py  # Convergence, sensitivity
│   ├── surrogate_quality.py     # R², CV, confidence intervals
│   ├── decision_engine.py       # Recommendations
│   └── visualizers.py           # Plot generators
```

---

### Phase 6: Automated Reporting
**Timeline**: 2 weeks
**Status**: 🔵 Not Started
**Goal**: Generate comprehensive HTML/PDF optimization reports

#### Deliverables

1. **Report Generator**
   - [ ] Template system (Jinja2)
     - Executive summary (1-page overview)
     - Detailed analysis (convergence plots, sensitivity charts)
     - Appendices (all trial data, config files)
   - [ ] Auto-generated plots (Chart.js for web, Matplotlib for PDF)
   - [ ] Embedded data tables (sortable, filterable)
   - [ ] LLM-written narrative explanations

2. **Multi-Format Export**
   - [ ] HTML (interactive, shareable via link)
   - [ ] PDF (static, for archival/print)
   - [ ] Markdown (for version control, GitHub)
   - [ ] JSON (machine-readable, for post-processing)

3. **Smart Narrative Generation**
   - [ ] LLM analyzes data and writes insights in natural language
   - [ ] Explains why certain designs performed better
   - [ ] Highlights unexpected findings (e.g., "Counter-intuitively, reducing thickness improved stress")
   - [ ] Includes engineering recommendations

**Files to Create**:
```
optimization_engine/
├── reporting/
│   ├── __init__.py
│   ├── templates/
│   │   ├── executive_summary.html.j2
│   │   ├── detailed_analysis.html.j2
│   │   └── markdown_report.md.j2
│   ├── report_generator.py   # Main report engine
│   ├── narrative_writer.py   # LLM-driven text generation
│   └── exporters/
│       ├── html_exporter.py
│       ├── pdf_exporter.py       # Using WeasyPrint or similar
│       └── markdown_exporter.py
```

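The template-driven report flow can be sketched without dependencies using `string.Template` as a stand-in for the Jinja2 `.j2` templates the roadmap plans. The template text, field names, and `render_summary` helper are illustrative assumptions:

```python
from string import Template

# string.Template as a stdlib stand-in; the roadmap plans Jinja2 (.j2) templates.
EXEC_SUMMARY = Template("""\
# Optimization Report: $study

Best design found at trial $best_trial:
$metrics

Recommendation: $recommendation
""")


def render_summary(study, best_trial, metrics, recommendation):
    """Fill the executive-summary template from structured trial data."""
    metric_lines = "\n".join(f"- {k}: {v}" for k, v in metrics.items())
    return EXEC_SUMMARY.substitute(study=study, best_trial=best_trial,
                                   metrics=metric_lines,
                                   recommendation=recommendation)


report = render_summary("bracket_weight_reduction", 34,
                        {"wall_thickness": "4.2 mm", "displacement": "0.78 mm"},
                        "Lock thickness at 4.2 mm, explore other variables")
print(report)
```

The same rendered Markdown could then feed the Markdown exporter directly, or be converted to HTML/PDF by the other exporters.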
---

### Phase 7: NX MCP Enhancement
**Timeline**: 4 weeks
**Status**: 🔵 Not Started
**Goal**: Deep NX integration via Model Context Protocol

#### Deliverables

1. **NX Documentation MCP Server**
   - [ ] Index full Siemens NX API documentation
   - [ ] Semantic search across NX docs (embeddings + vector DB)
   - [ ] Code examples from official documentation
   - [ ] Auto-suggest relevant API calls based on task

2. **Advanced NX Operations**
   - [ ] Geometry manipulation library
     - Parametric CAD automation (change sketches, features)
     - Assembly management (add/remove components)
     - Advanced meshing controls (refinement zones, element types)
   - [ ] Multi-physics setup
     - Thermal-structural coupling
     - Modal analysis
     - Fatigue analysis setup

3. **Feature Bank Expansion**
   - [ ] Library of 50+ pre-built NX operations
   - [ ] Topology optimization integration
   - [ ] Generative design workflows
   - [ ] Each feature documented in registry with examples

**Files to Create**:
```
mcp/
├── nx_documentation/
│   ├── __init__.py
│   ├── server.py        # MCP server implementation
│   ├── indexer.py       # NX docs indexing
│   ├── embeddings.py    # Vector embeddings for search
│   └── vector_db.py     # Chroma/Pinecone integration
├── nx_features/
│   ├── geometry/
│   │   ├── fillets.py
│   │   ├── chamfers.py
│   │   └── thickness_modifier.py
│   ├── analysis/
│   │   ├── thermal_structural.py
│   │   ├── modal_analysis.py
│   │   └── fatigue_setup.py
│   └── feature_registry.json  # NX feature catalog
```

---

### Phase 8: Self-Improving System
**Timeline**: 4 weeks
**Status**: 🔵 Not Started
**Goal**: Atomizer learns from usage and expands itself

#### Deliverables

1. **Feature Learning System**
   - [ ] When LLM creates custom function, prompt user to save to library
   - [ ] User provides name + description
   - [ ] Auto-update feature registry with new capability
   - [ ] Version control for user-contributed features

2. **Best Practices Database**
   - [ ] Store successful optimization strategies
   - [ ] Pattern recognition (e.g., "Adding fillets always reduces stress by 10-20%")
   - [ ] Similarity search (find similar past optimizations)
   - [ ] Recommend strategies for new problems

3. **Continuous Documentation**
   - [ ] Auto-generate docs when new features added
   - [ ] Keep examples updated with latest API
   - [ ] Version control for all generated code
   - [ ] Changelog auto-generation

**Files to Create**:
```
optimization_engine/
├── learning/
│   ├── __init__.py
│   ├── feature_learner.py       # Capture and save new features
│   ├── pattern_recognizer.py    # Identify successful patterns
│   ├── similarity_search.py     # Find similar optimizations
│   └── best_practices_db.json   # Pattern library
├── auto_documentation/
│   ├── __init__.py
│   ├── doc_generator.py         # Auto-generate markdown docs
│   ├── changelog_builder.py     # Track feature additions
│   └── example_extractor.py     # Extract examples from code
```

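The "find similar past optimizations" deliverable could start as simply as Jaccard similarity over study tags. The record shape for `best_practices_db.json` shown here is hypothetical, meant only to illustrate ranking past studies by overlap with a new problem:

```python
def jaccard(a, b):
    """Jaccard similarity between two tag collections (0.0 to 1.0)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0


# Hypothetical record shape for best_practices_db.json
PAST_STUDIES = [
    {"name": "bracket_stress_minimization", "tags": ["bracket", "stress", "tpe"]},
    {"name": "plate_modal_tuning", "tags": ["plate", "modal", "cmaes"]},
]


def most_similar(tags, db=PAST_STUDIES):
    """Rank past optimizations by tag overlap with the new problem."""
    return sorted(db, key=lambda s: jaccard(tags, s["tags"]), reverse=True)


ranked = most_similar(["bracket", "stress", "mass"])
print(ranked[0]["name"])  # bracket_stress_minimization
```

A production version would likely move to embedding-based similarity, but tag overlap is enough to bootstrap the recommendation loop.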
---

## Final Architecture

```
Atomizer/
├── optimization_engine/
│   ├── core/                    # Existing optimization loop
│   ├── plugins/                 # NEW: Hook system (Phase 1) ✅
│   │   ├── hook_manager.py
│   │   ├── pre_solve/
│   │   ├── post_solve/
│   │   └── post_extraction/
│   ├── research_agent.py        # NEW: Research & Learning (Phase 2)
│   ├── custom_functions/        # NEW: User/LLM generated code (Phase 4)
│   ├── llm_interface/           # NEW: Natural language control (Phase 3)
│   ├── analysis/                # NEW: Result analysis (Phase 5)
│   ├── reporting/               # NEW: Report generation (Phase 6)
│   ├── learning/                # NEW: Self-improvement (Phase 8)
│   └── feature_registry.json    # NEW: Capability catalog (Phase 1) ✅
├── knowledge_base/              # NEW: Learned knowledge (Phase 2)
│   ├── nx_research/             # NX-specific patterns and schemas
│   ├── research_sessions/       # Session logs with rationale
│   └── templates/               # Reusable code patterns
├── .claude/
│   └── skills/
│       └── atomizer.md          # NEW: Claude skill (Phase 1) ✅
├── mcp/
│   ├── nx_documentation/        # NEW: NX docs MCP server (Phase 7)
│   └── nx_features/             # NEW: NX feature bank (Phase 7)
├── docs/
│   ├── FEATURE_REGISTRY_ARCHITECTURE.md  # NEW: Registry design (Phase 1) ✅
│   └── llm/                     # NEW: LLM-readable docs (Phase 1)
│       ├── capabilities.md
│       ├── examples.md
│       └── api_reference.md
├── dashboard/                   # Existing web UI
└── examples/                    # Example projects
```

---

## Example Use Cases

### Use Case 1: Natural Language Optimization Setup
```
User: "Let's make a new optimization"

LLM: "I'll help you set up a new optimization study. What would you like to name it?"

User: "bracket_weight_reduction"

LLM: "Study created! Please drop your .sim and .prt files into:
      C:\Atomizer\optimization_results\bracket_weight_reduction\sim\"

User: "Done. I want to minimize displacement, play with wall_thickness (3-8mm)
       and fillet_radius (1-5mm)"

LLM: "Got it! Let me configure:
      - Objective: Minimize max displacement
      - Design variables:
        * wall_thickness: 3.0 - 8.0 mm
        * fillet_radius: 1.0 - 5.0 mm

      How many iterations would you like? (Recommend 50 for 2 variables)"

User: "50 sounds good"

LLM: "Perfect! Configuration saved. Ready to start optimization?
      This will run 50 FEA simulations (estimated time: 2-3 hours)"

User: "Start it!"

LLM: "Optimization launched! 🚀
      - Live dashboard: http://localhost:8080/dashboard
      - Study: bracket_weight_reduction
      - Status: Trial 1/50 running..."
```

### Use Case 2: Custom Metric via LLM
```
User: "Let's implement a function to calculate the RSS of stress and displacement"

LLM: "I'll create a Root Sum Square (RSS) function for you. Let me confirm:

      RSS = sqrt(stress² + displacement²)

      Should I normalize the metrics first? (Recommended for different units)"

User: "Yes, normalize to their targets"

LLM: "Creating custom objective function... Done!

      File: optimization_engine/custom_functions/rss_stress_displacement.py

      from math import sqrt

      def rss_objective(stress, displacement, stress_target=200, disp_target=1.0):
          norm_stress = stress / stress_target
          norm_disp = displacement / disp_target
          return sqrt(norm_stress**2 + norm_disp**2)

      Added to feature registry. You can now use 'rss_stress_displacement'
      as an objective in your optimization config."
```

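The generated function from the dialog above is small enough to run and unit-test standalone, which is exactly what the Phase 4 "unit test auto-generation" deliverable would emit alongside it. This is a sketch of that test harness, not the shipped artifact:

```python
from math import sqrt


def rss_objective(stress, displacement, stress_target=200.0, disp_target=1.0):
    """Normalized Root Sum Square of two metrics, as in the dialog above."""
    norm_stress = stress / stress_target
    norm_disp = displacement / disp_target
    return sqrt(norm_stress ** 2 + norm_disp ** 2)


# A design exactly on both targets scores sqrt(2)
print(round(rss_objective(200.0, 1.0), 3))  # 1.414
```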
### Use Case 3: Surrogate Model Usage
```
User: "Can you use the surrogate model to predict stress for these parameters:
       wall_thickness = [3.5, 4.0, 4.5, 5.0, 5.5]"

LLM: "Checking surrogate model quality...

      Surrogate Model Status:
      - Training data: 50 trials
      - R² score: 0.94 (excellent)
      - CV score: 0.91 (good generalization)
      - Confidence: HIGH ✓

      Predictions:
      | Thickness | Predicted Stress | Confidence Interval |
      |-----------|------------------|---------------------|
      | 3.5 mm    | 245.2 MPa        | ±12.3 MPa           |
      | 4.0 mm    | 198.7 MPa        | ±8.1 MPa            |
      | 4.5 mm    | 165.4 MPa        | ±7.2 MPa            |
      | 5.0 mm    | 142.1 MPa        | ±9.5 MPa            |
      | 5.5 mm    | 126.8 MPa        | ±11.8 MPa           |

      Note: 4.0mm appears optimal (stress closest to 200 MPa target)"
```

### Use Case 4: Optimization Report
```
User: "Make an optimization report for bracket_weight_reduction"

LLM: "Generating comprehensive optimization report...

      Analyzing 50 completed trials...
      Writing narrative insights...
      Generating plots...

      Report complete!

      📄 HTML: optimization_results/bracket_weight_reduction/report.html
      📄 PDF: optimization_results/bracket_weight_reduction/report.pdf

      Key Findings:
      - Best design: Trial #34
        * wall_thickness = 4.2 mm
        * fillet_radius = 3.1 mm
        * Displacement = 0.78 mm (22% below target)
        * Mass = 0.51 kg (18% lighter than baseline)

      - Sensitivity: Thickness has 3x more impact than fillet radius
      - Recommendation: Lock thickness at 4.2mm, explore other variables

      Open report? [Y/n]"
```

---

## Success Metrics

### Phase 1 Success ✅
- [x] Hook system operational with 5 plugins created and tested
- [x] Plugin auto-discovery and registration working
- [x] Comprehensive logging system (trial logs + optimization log)
- [x] Studies folder structure established with documentation
- [x] Path resolution system working across all test scripts
- [x] Integration tests passing (hook validation test)

### Phase 2 Success (Research Agent)
- [ ] LLM detects knowledge gaps by searching feature registry
- [ ] Interactive research workflow (ask user for examples first)
- [ ] Successfully learns NX material XML schema from single user example
- [ ] Knowledge persisted across sessions (research session logs retrievable)
- [ ] Template library grows with each research session
- [ ] Second similar request uses learned template (instant generation)

### Phase 3 Success (LLM Integration)
- [ ] LLM can create optimization from natural language in <5 turns
- [ ] 90% of user requests understood correctly
- [ ] Zero manual JSON editing required

### Phase 4 Success (Code Generation)
- [ ] LLM generates 10+ custom functions with zero errors
- [ ] All generated code passes safety validation
- [ ] Users save 50% time vs. manual coding

### Phase 5 Success (Analysis & Decision Support)
- [ ] Surrogate quality detection 95% accurate
- [ ] Recommendations lead to 30% faster convergence
- [ ] Users report higher confidence in results

### Phase 6 Success (Automated Reporting)
- [ ] Reports generated in <30 seconds
- [ ] Narrative quality rated 4/5 by engineers
- [ ] 80% of reports used without manual editing

### Phase 7 Success (NX MCP Enhancement)
- [ ] NX MCP answers 95% of API questions correctly
- [ ] Feature bank covers 80% of common workflows
- [ ] Users write 50% less manual journal code

### Phase 8 Success (Self-Improving System)
- [ ] 20+ user-contributed features in library
- [ ] Pattern recognition identifies 10+ best practices
- [ ] Documentation auto-updates with zero manual effort

---

## Risk Mitigation

### Risk: LLM generates unsafe code
**Mitigation**:
- Sandbox all execution
- Whitelist allowed imports
- Code review by static analysis tools
- Rollback on any error
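
The import whitelist can be enforced with a static AST pass before any generated code runs; a minimal sketch (the `ALLOWED_IMPORTS` set is illustrative, not Atomizer's actual whitelist):

```python
import ast

ALLOWED_IMPORTS = {"math", "numpy", "json"}  # example whitelist

def check_imports(source: str) -> list:
    """Return names of imported top-level modules not on the whitelist."""
    tree = ast.parse(source)
    violations = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [(node.module or "").split(".")[0]]
        else:
            continue
        violations.extend(n for n in names if n not in ALLOWED_IMPORTS)
    return violations

print(check_imports("import os\nimport math"))  # → ['os']
```

Static checks like this complement, but do not replace, the sandboxed execution listed above (e.g. `__import__` calls or `exec` can bypass a purely syntactic scan).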

### Risk: Feature registry becomes stale
**Mitigation**:
- Auto-update on code changes (pre-commit hook)
- CI/CD checks for registry sync
- Weekly audit of documented vs. actual features

### Risk: NX API changes break features
**Mitigation**:
- Version pinning for NX (currently 2412)
- Automated tests against NX API
- Migration guides for version upgrades

### Risk: User overwhelmed by LLM autonomy
**Mitigation**:
- Confirm before executing destructive actions
- "Explain mode" that shows what the LLM plans to do
- Undo/rollback for all operations

---

**Last Updated**: 2025-01-16
**Maintainer**: Antoine Polvé (antoine@atomaste.com)
**Status**: 🟢 Phase 1 Complete | 🟡 Phase 2 (Research Agent) - NEXT PRIORITY

---

## For Developers

**Active development tracking**: See [DEVELOPMENT.md](DEVELOPMENT.md) for:
- Detailed todos for current phase
- Completed features list
- Known issues and bug tracking
- Testing status and coverage
- Development commands and workflows
347
docs/development/LOGGING_MIGRATION_GUIDE.md
Normal file

# Logging Migration Guide

**How to migrate existing studies to use the new structured logging system**

## Overview

The new `optimization_engine.logger` module provides production-ready logging with:
- Color-coded console output
- Automatic file logging with rotation
- Structured trial logging for dashboard integration
- Zero external dependencies

## Migration Steps

### Step 1: Import the Logger

**Before:**
```python
import sys
import json
from pathlib import Path
```

**After:**
```python
import sys
import json
from pathlib import Path
from optimization_engine.logger import get_logger
```

### Step 2: Initialize Logger in main()

**Before:**
```python
def main():
    study_dir = Path(__file__).parent
    results_dir = study_dir / "2_results"
    results_dir.mkdir(exist_ok=True)

    print("=" * 80)
    print("MY OPTIMIZATION STUDY")
    print("=" * 80)
```

**After:**
```python
def main():
    study_dir = Path(__file__).parent
    results_dir = study_dir / "2_results"
    results_dir.mkdir(exist_ok=True)

    # Initialize logger with file logging
    logger = get_logger(
        "my_study_name",
        study_dir=results_dir
    )

    logger.info("=" * 80)
    logger.info("MY OPTIMIZATION STUDY")
    logger.info("=" * 80)
```

### Step 3: Replace print() with logger calls

**Basic Replacements:**
```python
# Before
print("Starting optimization...")
print(f"[ERROR] Simulation failed")
print(f"[WARNING] Constraint violated")

# After
logger.info("Starting optimization...")
logger.error("Simulation failed")
logger.warning("Constraint violated")
```

### Step 4: Use Structured Trial Logging

**Trial Start - Before:**
```python
print(f"\n{'='*60}")
print(f"Trial #{trial.number}")
print(f"{'='*60}")
print(f"Design Variables:")
for name, value in design_vars.items():
    print(f"  {name}: {value:.3f}")
```

**Trial Start - After:**
```python
logger.trial_start(trial.number, design_vars)
```

**Trial Complete - Before:**
```python
print(f"\nTrial #{trial.number} COMPLETE")
print("Objectives:")
for name, value in objectives.items():
    print(f"  {name}: {value:.4f}")
print("Constraints:")
for name, value in constraints.items():
    print(f"  {name}: {value:.4f}")
print("[OK] Feasible" if feasible else "[WARNING] Infeasible")
```

**Trial Complete - After:**
```python
logger.trial_complete(
    trial.number,
    objectives=objectives,
    constraints=constraints,
    feasible=feasible
)
```

**Trial Failed - Before:**
```python
print(f"\n[ERROR] Trial #{trial.number} FAILED")
print(f"Error: {error_message}")
import traceback
traceback.print_exc()
```

**Trial Failed - After:**
```python
logger.trial_failed(trial.number, error_message)
logger.error("Full traceback:", exc_info=True)
```

### Step 5: Use Study Lifecycle Methods

**Study Start - Before:**
```python
print("=" * 80)
print(f"OPTIMIZATION STUDY: {study_name}")
print("=" * 80)
print(f"Trials: {n_trials}")
print(f"Sampler: {sampler}")
print("=" * 80)
```

**Study Start - After:**
```python
logger.study_start(study_name, n_trials=n_trials, sampler=sampler)
```

**Study Complete - Before:**
```python
print("=" * 80)
print(f"STUDY COMPLETE: {study_name}")
print("=" * 80)
print(f"Total trials: {n_trials}")
print(f"Successful: {n_successful}")
print(f"Failed/Pruned: {n_trials - n_successful}")
print("=" * 80)
```

**Study Complete - After:**
```python
logger.study_complete(study_name, n_trials=n_trials, n_successful=n_successful)
```

## Complete Example

### Before (Old Style)

```python
import sys
from pathlib import Path
import optuna

def main():
    study_dir = Path(__file__).parent
    results_dir = study_dir / "2_results"
    results_dir.mkdir(exist_ok=True)

    print("=" * 80)
    print("MY OPTIMIZATION STUDY")
    print("=" * 80)

    def objective(trial):
        x = trial.suggest_float("x", -10, 10)

        print(f"\nTrial #{trial.number}")
        print(f"x = {x:.4f}")

        try:
            result = x ** 2
            print(f"Result: {result:.4f}")
            return result
        except Exception as e:
            print(f"[ERROR] Trial failed: {e}")
            raise

    study = optuna.create_study()
    study.optimize(objective, n_trials=10)

    print("\nOptimization complete!")
    print(f"Best value: {study.best_value:.4f}")

if __name__ == "__main__":
    main()
```

### After (New Style with Logger)

```python
import sys
from pathlib import Path
import optuna
from optimization_engine.logger import get_logger

def main():
    study_dir = Path(__file__).parent
    results_dir = study_dir / "2_results"
    results_dir.mkdir(exist_ok=True)

    # Initialize logger with file logging
    logger = get_logger("my_study", study_dir=results_dir)

    logger.study_start("my_study", n_trials=10, sampler="TPESampler")

    def objective(trial):
        x = trial.suggest_float("x", -10, 10)

        logger.trial_start(trial.number, {"x": x})

        try:
            result = x ** 2
            logger.trial_complete(
                trial.number,
                objectives={"f(x)": result},
                feasible=True
            )
            return result
        except Exception as e:
            logger.trial_failed(trial.number, str(e))
            logger.error("Full traceback:", exc_info=True)
            raise

    study = optuna.create_study()
    study.optimize(objective, n_trials=10)

    logger.study_complete("my_study", n_trials=10, n_successful=len(study.trials))
    logger.info(f"Best value: {study.best_value:.4f}")

if __name__ == "__main__":
    main()
```

## Benefits

After migration, you'll get:

1. **Color-coded console output** - Green for INFO, Yellow for WARNING, Red for ERROR
2. **Automatic file logging** - All output saved to `2_results/optimization.log`
3. **Log rotation** - Automatic rotation at 50MB with 3 backups
4. **Structured format** - Timestamps and module names in file logs
5. **Dashboard integration** - Trial logs in structured format for future parsing

## Log File Location

After migration, logs will be automatically saved to:
```
studies/your_study/
└── 2_results/
    ├── optimization.log      # Current log file
    ├── optimization.log.1    # Backup 1 (most recent)
    ├── optimization.log.2    # Backup 2
    └── optimization.log.3    # Backup 3 (oldest)
```
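
The rotation behavior above maps directly onto `logging.handlers.RotatingFileHandler` from the standard library; a minimal sketch of the wiring (function name and format string are assumptions for illustration, not the actual `get_logger` implementation):

```python
import logging
from logging.handlers import RotatingFileHandler
from pathlib import Path

def make_file_logger(name: str, study_dir: Path) -> logging.Logger:
    """Sketch: file logger rotating at 50 MB, keeping 3 backups."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = RotatingFileHandler(
        study_dir / "optimization.log",
        maxBytes=50 * 1024 * 1024,  # rotate at 50MB
        backupCount=3,              # keeps .1, .2, .3 as shown above
    )
    handler.setFormatter(
        logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    )
    logger.addHandler(handler)
    return logger
```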

## Testing Your Migration

Run your migrated study with a single trial:
```bash
cd studies/your_study
python run_optimization.py --trials 1
```

Check:
- ✅ Console output is color-coded
- ✅ File created at `2_results/optimization.log`
- ✅ Trial start/complete messages format correctly
- ✅ No errors about missing imports

## Common Patterns

### Error Handling with Context

**Before:**
```python
try:
    result = run_simulation()
except Exception as e:
    print(f"[ERROR] Simulation failed: {e}")
    import traceback
    traceback.print_exc()
    raise
```

**After:**
```python
try:
    result = run_simulation()
except Exception as e:
    logger.error(f"Simulation failed: {e}", exc_info=True)
    raise
```

### Conditional Logging

```python
# Use log levels appropriately
logger.debug("Detailed debugging info")      # Only when debugging
logger.info("Starting optimization...")      # General progress
logger.warning("Design var out of bounds")   # Potential issues
logger.error("Simulation failed")            # Actual errors
```

### Progress Messages

```python
# Before
print(f"Processing trial {i}/{total}...")

# After
logger.info(f"Processing trial {i}/{total}...")
```

## Next Steps

After migrating your study:

1. Test with a few trials
2. Check the log file in `2_results/optimization.log`
3. Verify color output in console
4. Update study documentation if needed

## Questions?

See:
- [Phase 1.3 Implementation Plan](docs/development/Phase_1_3_Implementation_Plan.md)
- [optimization_engine/logger.py](optimization_engine/logger.py) - Full API documentation
- [drone_gimbal_arm_optimization](studies/drone_gimbal_arm_optimization/) - Reference implementation
755
docs/development/NASTRAN_VISUALIZATION_RESEARCH.md
Normal file

# Nastran Visualization Research: OP2/BDF/DAT File Processing

**Research Date**: 2025-11-21
**Purpose**: Investigate methods to visualize geometry/mesh and generate images of FEA metrics from Nastran files across optimization iterations

---

## Executive Summary

**Recommendation**: Use the **pyNastran + PyVista** combination for Atomizer visualization needs.

- **pyNastran**: Read OP2/BDF files, extract results (stress, displacement, eigenvalues)
- **PyVista**: Generate 3D visualizations and save images programmatically

This approach provides:
✅ **Programmatic image generation** (no GUI needed)
✅ **Full automation** for optimization iterations
✅ **Rich visualization** (mesh, stress contours, displacement plots)
✅ **Dashboard integration ready** (save PNGs for React dashboard)
✅ **Lightweight** (no commercial FEA software required)

---

## 1. pyNastran Overview

### What It Does

pyNastran is a **Python library for reading/writing/processing Nastran files**:

- **BDF (Input Files)**: Geometry, mesh, materials, boundary conditions
- **OP2 (Results Files)**: Stress, strain, displacement, eigenvalues, etc.
- **F06 (Text Output)**: Less structured, slower to parse

**GitHub**: https://github.com/SteveDoyle2/pyNastran
**Docs**: https://pynastran-git.readthedocs.io/

### Key Features

✅ **Fast OP2 Reading**: Vectorized, optimized for large files
✅ **427+ Supported Cards**: Comprehensive BDF support
✅ **HDF5 Export**: For massive files (reduces memory usage)
✅ **Result Extraction**: Displacement, stress, strain, eigenvalues, SPC/MPC forces
✅ **Built-in GUI**: VTK-based viewer (optional, not needed for automation)
✅ **SORT2 Support**: Handles frequency/time-domain results

### Supported Results

From OP2 files:
- Displacement, velocity, acceleration
- Temperature
- Eigenvectors & eigenvalues
- Element stress/strain (CQUAD4, CTRIA3, CBAR, CBEAM, CTETRA, etc.)
- SPC/MPC forces
- Grid point forces
- Strain energy

### Installation

```bash
pip install pyNastran
```

**Dependencies**:
- numpy, scipy
- h5py (for HDF5 support)
- matplotlib (optional, for basic plotting)
- vtk (optional, for GUI only)
- PyQt5/PySide2 (optional, for GUI only)

**Python Support**: 3.9-3.12

---

## 2. Reading OP2 Files with pyNastran

### Basic Usage

```python
from pyNastran.op2.op2 import read_op2

# Read OP2 file (with optional pandas DataFrames)
op2 = read_op2('simulation.op2', build_dataframe=True, debug=False)

# Quick overview
print(op2.get_op2_stats())
```

### Accessing Results

**Displacement Results:**
```python
# Get displacements for subcase 1
disp = op2.displacements[1]  # subcase ID

# NumPy array: [n_times, n_nodes, 6] (tx, ty, tz, rx, ry, rz)
displacement_data = disp.data

# Node IDs
node_ids = disp.node_gridtype[:, 0]

# Pandas DataFrame (if build_dataframe=True)
disp_df = disp.data_frame
```

**Stress Results:**
```python
# Element stress (e.g., CQUAD4 plate elements)
plate_stress = op2.cquad4_stress[1]  # subcase ID

# Data array: [n_times, n_elements, n_values]
# For CQUAD4: [fiber_distance, oxx, oyy, txy, angle, omax, omin, von_mises]
von_mises = plate_stress.data[itime, :, 7]  # Von Mises stress

# Element IDs
element_ids = plate_stress.element_node[:, 0]
```

**Eigenvalue Results:**
```python
# Eigenvectors
eig1 = op2.eigenvectors[1]

# Extract mode 2
mode2 = eig1.data[imode2, :, :]

# Frequencies
eigenvalues = op2.eigenvalues[1]
frequencies = eigenvalues.freqs
```

### Reading Geometry from BDF

```python
from pyNastran.bdf.bdf import read_bdf

# Read geometry
model = read_bdf('model.bdf')

# Access nodes
for nid, node in model.nodes.items():
    xyz = node.get_position()
    print(f"Node {nid}: {xyz}")

# Access elements
for eid, element in model.elements.items():
    node_ids = element.node_ids
    print(f"Element {eid}: nodes {node_ids}")
```

### Reading Geometry from OP2 (with OP2Geom)

```python
from pyNastran.op2.op2_geom import read_op2_geom

# Read OP2 with embedded geometry
model = read_op2_geom('simulation.op2')

# Now model has both geometry and results
nodes = model.nodes
elements = model.elements
displacements = model.displacements[1]
```

---

## 3. PyVista for 3D Visualization

### What It Does

PyVista is a **Python wrapper for VTK** providing:
- 3D mesh visualization
- Scalar field mapping (stress, temperature)
- Vector field plotting (displacement)
- **Programmatic screenshot generation** (no GUI needed)

**GitHub**: https://github.com/pyvista/pyvista
**Docs**: https://docs.pyvista.org/

### Installation

```bash
pip install pyvista
```

### Creating Mesh from Nastran Data

```python
import pyvista as pv
import numpy as np

# Example: Create mesh from pyNastran nodes/elements
def create_pyvista_mesh(model, op2, subcase=1):
    """Create PyVista mesh with displacement and stress data."""

    # Get nodes
    node_ids = sorted(model.nodes.keys())
    points = np.array([model.nodes[nid].get_position() for nid in node_ids])

    # Map Nastran node IDs to zero-based point indices
    id_to_index = {nid: i for i, nid in enumerate(node_ids)}

    # Get quad elements (CQUAD4)
    cells = []
    for eid, element in model.elements.items():
        if element.type == 'CQUAD4':
            # PyVista quad: [4, index1, index2, index3, index4]
            nids = [id_to_index[nid] for nid in element.node_ids]
            cells.extend([4] + nids)

    cells = np.array(cells)
    celltypes = np.full(len(cells)//5, pv.CellType.QUAD, dtype=np.uint8)

    # Create unstructured grid
    mesh = pv.UnstructuredGrid(cells, celltypes, points)

    # Add displacement field
    disp = op2.displacements[subcase]
    disp_vectors = disp.data[0, :, :3]  # tx, ty, tz
    mesh['displacement'] = disp_vectors

    # Add stress (if available)
    if subcase in op2.cquad4_stress:
        stress = op2.cquad4_stress[subcase]
        von_mises = stress.data[0, :, 7]  # Von Mises stress
        mesh['von_mises_stress'] = von_mises

    return mesh
```

### Programmatic Visualization & Screenshot

```python
def generate_stress_plot(mesh, output_file='stress_plot.png'):
    """Generate stress contour plot and save as image."""

    # Create off-screen plotter (no GUI window)
    plotter = pv.Plotter(off_screen=True, window_size=[1920, 1080])

    # Add mesh with stress coloring
    plotter.add_mesh(
        mesh,
        scalars='von_mises_stress',
        cmap='jet',  # Color map
        show_edges=True,
        edge_color='black',
        scalar_bar_args={
            'title': 'Von Mises Stress (MPa)',
            'vertical': True,
            'position_x': 0.85,
            'position_y': 0.1
        }
    )

    # Set camera view
    plotter.camera_position = 'iso'  # Isometric view
    plotter.camera.zoom(1.2)

    # Add title
    plotter.add_text('Stress Analysis - Trial #5', position='upper_left', font_size=14)

    # Save screenshot
    plotter.screenshot(output_file, return_img=False, scale=2)
    plotter.close()

    return output_file
```

### Deformed Mesh Visualization

```python
def generate_deformed_mesh_plot(mesh, scale_factor=100.0, output_file='deformed.png'):
    """Plot deformed mesh with displacement."""

    # Warp mesh by displacement vector
    warped = mesh.warp_by_vector('displacement', factor=scale_factor)

    plotter = pv.Plotter(off_screen=True, window_size=[1920, 1080])

    # Original mesh (transparent)
    plotter.add_mesh(mesh, opacity=0.2, color='gray', show_edges=True)

    # Deformed mesh (colored by displacement magnitude)
    plotter.add_mesh(
        warped,
        scalars='displacement',
        cmap='rainbow',
        show_edges=True,
        scalar_bar_args={'title': 'Displacement Magnitude (mm)'}
    )

    plotter.camera_position = 'iso'
    plotter.screenshot(output_file, scale=2)
    plotter.close()
```

---

## 4. Recommended Architecture for Atomizer

### Integration Approach

**Option A: Lightweight (Recommended)**
- Use **pyNastran** to read OP2 files after each trial
- Use **PyVista** to generate static PNG images
- Store images in `studies/my_study/2_results/visualizations/trial_XXX_stress.png`
- Display images in React dashboard via image gallery

**Option B: Full 3D (Advanced)**
- Export PyVista mesh to VTK/glTF format
- Use Three.js or react-three-fiber in dashboard
- Interactive 3D viewer in browser

### Proposed Workflow

```
Trial Completion
    ↓
NX Solver writes OP2 file
    ↓
pyNastran reads OP2 + BDF
    ↓
Extract: stress, displacement, geometry
    ↓
PyVista creates mesh + applies results
    ↓
Generate images:
  - stress_contour.png
  - displacement.png
  - deformed_shape.png
    ↓
Store in 2_results/visualizations/trial_XXX/
    ↓
Dashboard polls for new images
    ↓
Display in React gallery component
```

### File Structure

```
studies/my_optimization/
├── 1_setup/
│   └── model/
│       ├── model.prt
│       ├── model.sim
│       └── model.bdf              ← BDF for geometry
├── 2_results/
│   ├── study.db
│   ├── trial_log.json
│   └── visualizations/            ← NEW: Generated images
│       ├── trial_000/
│       │   ├── stress_vonmises.png
│       │   ├── displacement_magnitude.png
│       │   └── deformed_shape.png
│       ├── trial_001/
│       │   └── ...
│       └── pareto_front/          ← Best designs
│           ├── trial_009_stress.png
│           └── trial_042_stress.png
```
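
The dashboard's polling step can be as simple as globbing this `visualizations/` tree; a minimal sketch (the `collect_trial_images` helper is an illustration, not part of the actual dashboard code):

```python
from pathlib import Path

def collect_trial_images(viz_dir: Path) -> dict:
    """Map each trial directory under viz_dir to its generated PNG filenames."""
    images = {}
    for trial_dir in sorted(viz_dir.glob("trial_*")):
        pngs = sorted(p.name for p in trial_dir.glob("*.png"))
        if pngs:
            images[trial_dir.name] = pngs
    return images
```

The dashboard backend could call this on a timer and diff the result against the previous poll to detect newly finished trials.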

---

## 5. Implementation Example for Atomizer

### Visualization Module

```python
# optimization_engine/visualizer.py

"""
Nastran Visualization Module

Generates FEA result visualizations from OP2/BDF files using pyNastran + PyVista.
"""

from pathlib import Path
import numpy as np
import pyvista as pv
from pyNastran.op2.op2_geom import read_op2_geom
from pyNastran.bdf.bdf import read_bdf


class NastranVisualizer:
    """Generate visualizations from Nastran results."""

    def __init__(self, bdf_path: Path, output_dir: Path):
        """
        Initialize visualizer.

        Args:
            bdf_path: Path to BDF file (for geometry)
            output_dir: Directory to save images
        """
        self.bdf_path = bdf_path
        self.output_dir = Path(output_dir)
        self.output_dir.mkdir(parents=True, exist_ok=True)

        # Load geometry once
        self.model = read_bdf(str(bdf_path))

    def generate_trial_visualizations(self, op2_path: Path, trial_number: int, subcase: int = 1):
        """
        Generate all visualizations for a trial.

        Args:
            op2_path: Path to OP2 results file
            trial_number: Trial number
            subcase: Nastran subcase ID

        Returns:
            dict: Paths to generated images
        """
        # Read results
        op2 = read_op2_geom(str(op2_path))

        # Create trial output directory
        trial_dir = self.output_dir / f"trial_{trial_number:03d}"
        trial_dir.mkdir(exist_ok=True)

        # Generate visualizations
        images = {}

        # 1. Stress contour
        if subcase in op2.cquad4_stress:
            images['stress'] = self._plot_stress(op2, subcase, trial_dir / 'stress_vonmises.png', trial_number)

        # 2. Displacement magnitude
        if subcase in op2.displacements:
            images['displacement'] = self._plot_displacement(op2, subcase, trial_dir / 'displacement.png', trial_number)

        # 3. Deformed shape
        if subcase in op2.displacements:
            images['deformed'] = self._plot_deformed_shape(op2, subcase, trial_dir / 'deformed_shape.png', trial_number)

        return images

    def _create_mesh(self, op2, subcase):
        """Create PyVista mesh from Nastran data."""
        # Get nodes
        node_ids = sorted(self.model.nodes.keys())
        points = np.array([self.model.nodes[nid].get_position() for nid in node_ids])

        # Map Nastran node IDs to zero-based point indices
        id_to_index = {nid: i for i, nid in enumerate(node_ids)}

        # Get CQUAD4 elements
        cells = []
        for eid, element in self.model.elements.items():
            if element.type == 'CQUAD4':
                nids = [id_to_index[nid] for nid in element.node_ids]
                cells.extend([4] + nids)

        if not cells:
            raise ValueError("No CQUAD4 elements found in model")

        cells = np.array(cells)
        celltypes = np.full(len(cells)//5, pv.CellType.QUAD, dtype=np.uint8)

        mesh = pv.UnstructuredGrid(cells, celltypes, points)

        # Add result data
        if subcase in op2.displacements:
            disp = op2.displacements[subcase]
            disp_vectors = disp.data[0, :, :3]  # tx, ty, tz
            mesh['displacement'] = disp_vectors
            mesh['displacement_magnitude'] = np.linalg.norm(disp_vectors, axis=1)

        if subcase in op2.cquad4_stress:
            stress = op2.cquad4_stress[subcase]
            von_mises = stress.data[0, :, 7]
            mesh.cell_data['von_mises_stress'] = von_mises

        return mesh

    def _plot_stress(self, op2, subcase, output_path, trial_number):
        """Plot Von Mises stress contour."""
        mesh = self._create_mesh(op2, subcase)

        plotter = pv.Plotter(off_screen=True, window_size=[1920, 1080])
        plotter.add_mesh(
            mesh,
            scalars='von_mises_stress',
            cmap='jet',
            show_edges=True,
            edge_color='black',
            scalar_bar_args={
                'title': 'Von Mises Stress (MPa)',
                'vertical': True,
                'position_x': 0.85,
                'position_y': 0.1,
                'fmt': '%.1f'
            }
        )

        plotter.camera_position = 'iso'
        plotter.camera.zoom(1.2)
        plotter.add_text(f'Stress Analysis - Trial #{trial_number}',
                         position='upper_left', font_size=14, color='black')

        plotter.screenshot(str(output_path), scale=2)
        plotter.close()

        return output_path

    def _plot_displacement(self, op2, subcase, output_path, trial_number):
        """Plot displacement magnitude."""
        mesh = self._create_mesh(op2, subcase)

        plotter = pv.Plotter(off_screen=True, window_size=[1920, 1080])
        plotter.add_mesh(
            mesh,
            scalars='displacement_magnitude',
            cmap='rainbow',
            show_edges=True,
            edge_color='gray',
            scalar_bar_args={
                'title': 'Displacement (mm)',
                'vertical': True,
                'position_x': 0.85,
                'position_y': 0.1
            }
        )

        plotter.camera_position = 'iso'
        plotter.camera.zoom(1.2)
        plotter.add_text(f'Displacement - Trial #{trial_number}',
                         position='upper_left', font_size=14, color='black')

        plotter.screenshot(str(output_path), scale=2)
        plotter.close()

        return output_path

    def _plot_deformed_shape(self, op2, subcase, output_path, trial_number, scale_factor=100.0):
        """Plot deformed vs undeformed shape."""
        mesh = self._create_mesh(op2, subcase)
        warped = mesh.warp_by_vector('displacement', factor=scale_factor)

        plotter = pv.Plotter(off_screen=True, window_size=[1920, 1080])

        # Original (transparent)
        plotter.add_mesh(mesh, opacity=0.2, color='gray', show_edges=True, edge_color='black')

        # Deformed (colored)
        plotter.add_mesh(
            warped,
            scalars='displacement_magnitude',
            cmap='rainbow',
            show_edges=True,
            edge_color='black',
            scalar_bar_args={
                'title': f'Displacement (mm) [Scale: {scale_factor}x]',
                'vertical': True
            }
        )

        plotter.camera_position = 'iso'
        plotter.camera.zoom(1.2)
        plotter.add_text(f'Deformed Shape - Trial #{trial_number}',
                         position='upper_left', font_size=14, color='black')

        plotter.screenshot(str(output_path), scale=2)
        plotter.close()

        return output_path
```

### Usage in Optimization Loop

```python
# In optimization_engine/intelligent_optimizer.py

from optimization_engine.visualizer import NastranVisualizer

class IntelligentOptimizer:
    def __init__(self, ...):
        # ... existing code ...

        # Initialize visualizer
        bdf_path = self.study_dir.parent / "1_setup" / "model" / f"{self.config['model_name']}.bdf"
        viz_dir = self.study_dir / "visualizations"

        self.visualizer = NastranVisualizer(bdf_path, viz_dir)

    def _run_trial(self, trial):
        # ... existing code: update model, solve, extract results ...

        # NEW: Generate visualizations after successful solve
        if trial.state == optuna.trial.TrialState.COMPLETE:
            op2_path = self.get_op2_path(trial.number)

            try:
                images = self.visualizer.generate_trial_visualizations(
                    op2_path=op2_path,
                    trial_number=trial.number,
                    subcase=1
                )

                # Store image paths in trial user_attrs for dashboard access
                trial.set_user_attr('visualization_images', {
                    'stress': str(images.get('stress', '')),
                    'displacement': str(images.get('displacement', '')),
                    'deformed': str(images.get('deformed', ''))
                })

            except Exception as e:
                print(f"Warning: Visualization failed for trial {trial.number}: {e}")
```
|
||||
|
||||
---

## 6. Alternative: Headless pyNastran GUI

pyNastran has a built-in GUI, but it can also be used **programmatically** for screenshots:

```python
from pyNastran.gui.main_window import MainWindow

# This requires VTK + PyQt5/PySide2 (heavier dependencies)
# Not recommended for automation - use PyVista instead
```

**Verdict**: PyVista is simpler and more flexible for automation.

---

## 7. From Scratch Alternative

### Pros
- Full control
- No external dependencies beyond numpy/matplotlib

### Cons
- **Reinventing the wheel** (mesh handling, element connectivity)
- **2D plots only** (matplotlib doesn't do 3D well)
- **Labor intensive** (weeks of development)
- **Limited features** (no proper stress contours, deformed shapes)

**Verdict**: Not recommended. pyNastran + PyVista is mature, well-tested, and saves months of development.

---

## 8. Comparison Matrix

| Feature | pyNastran GUI | pyNastran + PyVista | From Scratch |
|---------|---------------|---------------------|--------------|
| **Programmatic** | ❌ (requires GUI) | ✅ Off-screen rendering | ✅ |
| **Automation** | ❌ | ✅ | ✅ |
| **3D Visualization** | ✅ | ✅ | ❌ (2D only) |
| **Stress Contours** | ✅ | ✅ | ⚠️ (basic) |
| **Deformed Shapes** | ✅ | ✅ | ❌ |
| **Development Time** | N/A | ~1-2 days | ~3-4 weeks |
| **Dependencies** | Heavy (VTK, Qt) | Light (numpy, vtk) | Minimal |
| **Dashboard Ready** | ❌ | ✅ PNG images | ✅ |
| **Maintenance** | N/A | Low | High |

---

## 9. Recommended Implementation Plan

### Phase 1: Basic Visualization (1-2 days)
1. Install pyNastran + PyVista
2. Create `NastranVisualizer` class
3. Integrate into `IntelligentOptimizer` post-trial callback
4. Generate 3 images per trial: stress, displacement, deformed shape
5. Test with bracket study

### Phase 2: Dashboard Integration (1 day)
1. Add `visualizations/` directory to study structure
2. Store image paths in `trial.user_attrs`
3. Create React component: `TrialVisualizationGallery`
4. Display images in dashboard trial detail view

### Phase 3: Advanced Features (optional, 2-3 days)
1. Eigenmode animation (GIF generation)
2. Section cut views
3. Multiple camera angles
4. Custom color scales
5. Comparison view (overlay 2 trials)

---

## 10. Installation & Testing

### Install Dependencies

```bash
# Install pyNastran
pip install pyNastran

# Install PyVista
pip install pyvista

# Optional: For HDF5 support
pip install h5py
```

### Quick Test

```python
# test_visualization.py
from pathlib import Path
from optimization_engine.visualizer import NastranVisualizer

# Paths (adjust to your study)
bdf_path = Path("studies/bracket_stiffness_optimization/1_setup/model/Bracket.bdf")
op2_path = Path("studies/bracket_stiffness_optimization/1_setup/model/Bracket.op2")
output_dir = Path("test_visualizations")

# Create visualizer
viz = NastranVisualizer(bdf_path, output_dir)

# Generate images
images = viz.generate_trial_visualizations(op2_path, trial_number=0, subcase=1)

print(f"Generated images: {images}")
```

---

## 11. Key Takeaways

✅ **pyNastran + PyVista** is the optimal solution
✅ **Programmatic image generation** without GUI
✅ **Production-ready** libraries with active development
✅ **Dashboard integration** via PNG images
✅ **Fast implementation** (1-2 days vs weeks from scratch)
✅ **Extensible** for future 3D viewer (Three.js)

**Next Steps**:
1. Install pyNastran + PyVista in Atomizer environment
2. Implement `NastranVisualizer` class
3. Integrate visualization callback in optimization loop
4. Test with existing bracket study
5. Add image gallery to React dashboard

---

## 12. References

**pyNastran**:
- GitHub: https://github.com/SteveDoyle2/pyNastran
- Docs: https://pynastran-git.readthedocs.io/
- OP2 Demo: https://pynastran-git.readthedocs.io/en/latest/quick_start/op2_demo.html

**PyVista**:
- GitHub: https://github.com/pyvista/pyvista
- Docs: https://docs.pyvista.org/
- Screenshot Examples: https://docs.pyvista.org/examples/02-plot/screenshot.html

**Alternative Libraries**:
- OP_Map: https://github.com/felixrlopezm/NASTRAN-OP_Map (built on pyNastran, for Excel export)
- FeResPost: https://ferespost.eu/ (Ruby/Python, commercial)

---

**Document Maintained By**: Atomizer Development Team
**Last Updated**: 2025-11-21
495
docs/development/NN_SURROGATE_AUTOMATION_PLAN.md
Normal file
@@ -0,0 +1,495 @@
# Neural Network Surrogate Automation Plan

## Vision: One-Click ML-Accelerated Optimization

Make neural network surrogates a **first-class citizen** in Atomizer, fully integrated into the optimization workflow so that:
1. Non-coders can enable/configure NN acceleration via JSON config
2. The system automatically builds, trains, and validates surrogates
3. Knowledge accumulates in a reusable "Physics Knowledge Base"
4. The dashboard provides full visibility and control

---

## Current State (What We Have)

```
Manual Steps Required Today:
1. Run optimization (30+ FEA trials)
2. Manually run: generate_training_data.py
3. Manually run: run_training_fea.py
4. Manually run: train_nn_surrogate.py
5. Manually run: generate_nn_report.py
6. Manually enable --enable-nn flag
7. No persistent knowledge storage
```

---

## Target State (What We Want)

```
Automated Flow:
1. User creates optimization_config.json with surrogate_settings
2. User runs: python run_optimization.py --trials 100
3. System automatically:
   - Runs initial FEA exploration (20-30 trials)
   - Generates space-filling training points
   - Runs parallel FEA on training points
   - Trains and validates surrogate
   - Switches to NN-accelerated optimization
   - Validates top candidates with real FEA
   - Stores learned physics in Knowledge Base
```

---

## Phase 1: Extended Configuration Schema

### Current optimization_config.json
```json
{
  "study_name": "uav_arm_optimization",
  "optimization_settings": {
    "protocol": "protocol_11_multi_objective",
    "n_trials": 30
  },
  "design_variables": [...],
  "objectives": [...],
  "constraints": [...]
}
```

### Proposed Extended Schema
```json
{
  "study_name": "uav_arm_optimization",
  "description": "UAV Camera Support Arm",
  "engineering_context": "Drone gimbal arm for 850g camera payload",

  "optimization_settings": {
    "protocol": "protocol_12_hybrid_surrogate",
    "n_trials": 200,
    "sampler": "NSGAIISampler"
  },

  "design_variables": [...],
  "objectives": [...],
  "constraints": [...],

  "surrogate_settings": {
    "enabled": true,
    "mode": "auto",

    "training": {
      "initial_fea_trials": 30,
      "space_filling_samples": 100,
      "sampling_method": "lhs_with_corners",
      "parallel_workers": 2
    },

    "model": {
      "architecture": "mlp",
      "hidden_layers": [64, 128, 64],
      "validation_method": "5_fold_cv",
      "min_accuracy_mape": 10.0,
      "retrain_threshold": 15.0
    },

    "optimization": {
      "nn_trials_per_fea": 50,
      "validate_top_n": 5,
      "adaptive_sampling": true
    },

    "knowledge_base": {
      "save_to_master": true,
      "master_db_path": "knowledge_base/physics_surrogates.db",
      "tags": ["cantilever", "aluminum", "modal", "static"],
      "reuse_similar": true
    }
  },

  "simulation": {...},
  "reporting": {...}
}
```

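Because non-coders drive everything from this one JSON file, the config loader should fail fast with readable messages when `surrogate_settings` is malformed. A minimal stdlib validation sketch (the exact rules, bounds, and messages here are illustrative assumptions, not the shipped validator):

```python
def validate_surrogate_settings(cfg: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    errors = []
    ss = cfg.get("surrogate_settings", {})
    if not ss.get("enabled", False):
        return errors  # nothing else to check when NN acceleration is off

    training = ss.get("training", {})
    if training.get("initial_fea_trials", 0) < 10:
        errors.append("training.initial_fea_trials should be >= 10")
    if training.get("space_filling_samples", 0) <= 0:
        errors.append("training.space_filling_samples must be positive")

    model = ss.get("model", {})
    mape = model.get("min_accuracy_mape")
    if not isinstance(mape, (int, float)) or mape <= 0:
        errors.append("model.min_accuracy_mape must be a positive number")

    opt = ss.get("optimization", {})
    if opt.get("validate_top_n", 0) < 1:
        errors.append("optimization.validate_top_n must be >= 1")
    return errors
```

Running this at study-creation time lets the dashboard surface all problems at once instead of failing mid-run in Stage 2 or 3.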
---

## Phase 2: Protocol 12 - Hybrid Surrogate Optimization

### Workflow Stages

```
┌─────────────────────────────────────────────────────────────────────┐
│                    PROTOCOL 12: HYBRID SURROGATE                    │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  STAGE 1: EXPLORATION (FEA Only)                                    │
│  ├─ Run initial_fea_trials with real FEA                            │
│  ├─ Build baseline Pareto front                                     │
│  └─ Assess design space complexity                                  │
│                                                                     │
│  STAGE 2: TRAINING DATA GENERATION                                  │
│  ├─ Generate space_filling_samples (LHS + corners)                  │
│  ├─ Run parallel FEA on training points                             │
│  ├─ Store all results in training_data.db                           │
│  └─ Monitor for failures, retry if needed                           │
│                                                                     │
│  STAGE 3: SURROGATE TRAINING                                        │
│  ├─ Train NN on combined data (optimization + training)             │
│  ├─ Validate with k-fold cross-validation                           │
│  ├─ Check accuracy >= min_accuracy_mape                             │
│  └─ Generate performance report                                     │
│                                                                     │
│  STAGE 4: NN-ACCELERATED OPTIMIZATION                               │
│  ├─ Run nn_trials_per_fea NN evaluations per FEA validation         │
│  ├─ Validate top_n candidates with real FEA                         │
│  ├─ Update surrogate with new data (adaptive)                       │
│  └─ Repeat until n_trials reached                                   │
│                                                                     │
│  STAGE 5: FINAL VALIDATION & REPORTING                              │
│  ├─ Validate all Pareto-optimal designs with FEA                    │
│  ├─ Generate comprehensive report                                   │
│  └─ Save learned physics to Knowledge Base                          │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

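Stage 2's `"lhs_with_corners"` sampling can be sketched with the stdlib alone: a simple random Latin hypercube plus the 2^n corner points of the design space (the production code may instead use a library such as `scipy.stats.qmc`; the function name and return shape here are assumptions):

```python
import itertools
import random

def lhs_with_corners(bounds, n_samples, seed=0):
    """bounds: {name: (lo, hi)}. Returns LHS samples plus all corner points."""
    rng = random.Random(seed)
    names = list(bounds)
    dims = len(names)

    # Latin hypercube: one random point per stratum, shuffled per dimension
    columns = []
    for _ in range(dims):
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        columns.append(strata)

    def denorm(name, u):
        lo, hi = bounds[name]
        return lo + u * (hi - lo)

    samples = [
        {name: denorm(name, columns[d][i]) for d, name in enumerate(names)}
        for i in range(n_samples)
    ]
    # Corner points: every combination of (lo, hi) across dimensions
    for corner in itertools.product(*(bounds[n] for n in names)):
        samples.append(dict(zip(names, corner)))
    return samples

bounds = {"thickness": (1.0, 5.0), "width": (10.0, 20.0)}
points = lhs_with_corners(bounds, n_samples=8)
```

Including the corners keeps the surrogate from extrapolating at the edges of the design space, which is where optimizers tend to push.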
### Implementation: runner_protocol_12.py

```python
class HybridSurrogateRunner:
    """Protocol 12: Automated hybrid FEA/NN optimization."""

    def __init__(self, config: dict):
        self.config = config
        self.surrogate_config = config.get('surrogate_settings', {})
        self.stage = "exploration"

    def run(self):
        # Stage 1: Exploration
        self.run_exploration_stage()

        if self.surrogate_config.get('enabled', False):
            # Stage 2: Training Data
            self.generate_training_data()
            self.run_parallel_fea_training()

            # Stage 3: Train Surrogate
            self.train_and_validate_surrogate()

            # Stage 4: NN-Accelerated
            self.run_nn_accelerated_optimization()

        # Stage 5: Final
        self.validate_and_report()
        self.save_to_knowledge_base()
```

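The gate between Stage 3 and Stage 4 is simply the cross-validated MAPE measured against `min_accuracy_mape`. A sketch of that check (helper names are illustrative; the runner above would call something equivalent inside `train_and_validate_surrogate`):

```python
def mean_absolute_percentage_error(actual, predicted):
    """MAPE in percent; actual values must be nonzero."""
    terms = [abs((a - p) / a) for a, p in zip(actual, predicted)]
    return 100.0 * sum(terms) / len(terms)

def surrogate_is_trustworthy(cv_actual, cv_predicted, min_accuracy_mape=10.0):
    """True when cross-validation error clears the configured threshold."""
    return mean_absolute_percentage_error(cv_actual, cv_predicted) <= min_accuracy_mape
```

If the check fails, the natural fallback is to stay in FEA-only mode (or generate more training points) rather than optimize against an untrusted model; `retrain_threshold` in the config would re-trigger training when adaptive-sampling error drifts above it.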
---

## Phase 3: Physics Knowledge Base Architecture

### Purpose
Store learned physics relationships so future optimizations can:
1. **Warm-start** with pre-trained surrogates for similar problems
2. **Transfer learn** from related geometries/materials
3. **Build institutional knowledge** over time

### Database Schema: physics_surrogates.db

```sql
-- Master registry of all trained surrogates
CREATE TABLE surrogates (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    study_name TEXT,

    -- Problem characterization
    geometry_type TEXT,        -- 'cantilever', 'plate', 'shell', 'solid'
    material_family TEXT,      -- 'aluminum', 'steel', 'composite'
    analysis_types TEXT,       -- JSON: ['static', 'modal', 'buckling']

    -- Design space
    n_parameters INTEGER,
    parameter_names TEXT,      -- JSON array
    parameter_bounds TEXT,     -- JSON: {name: [min, max]}

    -- Objectives & Constraints
    objectives TEXT,           -- JSON: [{name, goal}]
    constraints TEXT,          -- JSON: [{name, type, threshold}]

    -- Model info
    model_path TEXT,           -- Path to .pt file
    architecture TEXT,         -- JSON: model architecture
    training_samples INTEGER,

    -- Performance metrics
    cv_mape_mass REAL,
    cv_mape_frequency REAL,
    cv_r2_mass REAL,
    cv_r2_frequency REAL,

    -- Metadata
    tags TEXT,                 -- JSON array for search
    description TEXT,
    engineering_context TEXT
);

-- Training data for each surrogate
CREATE TABLE training_data (
    id INTEGER PRIMARY KEY,
    surrogate_id INTEGER REFERENCES surrogates(id),

    -- Input parameters (normalized 0-1)
    params_json TEXT,
    params_normalized TEXT,

    -- Output values
    mass REAL,
    frequency REAL,
    max_displacement REAL,
    max_stress REAL,

    -- Source
    source TEXT,               -- 'optimization', 'lhs', 'corner', 'adaptive'
    fea_timestamp TIMESTAMP
);

-- Similarity index for finding related problems
CREATE TABLE problem_similarity (
    surrogate_id INTEGER REFERENCES surrogates(id),

    -- Embedding for similarity search
    geometry_embedding BLOB,   -- Vector embedding of geometry type
    physics_embedding BLOB,    -- Vector embedding of physics signature

    -- Precomputed similarity features
    feature_vector TEXT        -- JSON: normalized features for matching
);
```

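The schema maps directly onto the stdlib `sqlite3` module, so no extra dependency is needed for the MVP. A minimal sketch of registering a surrogate (a column subset only; the in-memory connection is for illustration, the real path is `physics_surrogates.db`):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # use knowledge_base/physics_surrogates.db on disk
conn.execute("""
    CREATE TABLE IF NOT EXISTS surrogates (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        study_name TEXT,
        geometry_type TEXT,
        material_family TEXT,
        cv_mape_mass REAL,
        tags TEXT
    )
""")
conn.execute(
    "INSERT INTO surrogates (name, study_name, geometry_type, material_family, cv_mape_mass, tags) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("uav_arm_v2", "uav_arm_optimization", "cantilever", "aluminum", 1.8,
     json.dumps(["cantilever", "aluminum", "modal"])),
)
conn.commit()

row = conn.execute(
    "SELECT geometry_type, cv_mape_mass FROM surrogates WHERE name = ?",
    ("uav_arm_v2",),
).fetchone()
```

JSON-encoded columns (`tags`, `parameter_bounds`, ...) keep the schema flat while still allowing structured queries after `json.loads` on the Python side.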
### Knowledge Base API

```python
from typing import Optional

class PhysicsKnowledgeBase:
    """Central repository for learned physics surrogates."""

    def __init__(self, db_path: str = "knowledge_base/physics_surrogates.db"):
        self.db_path = db_path

    def find_similar_surrogate(self, config: dict) -> Optional[SurrogateMatch]:
        """Find existing surrogate that could transfer to this problem."""
        # Extract features from config
        features = self._extract_problem_features(config)

        # Query similar problems
        matches = self._query_similar(features)

        # Return best match if similarity > threshold
        if matches and matches[0].similarity > 0.8:
            return matches[0]
        return None

    def save_surrogate(self, study_name: str, model_path: str,
                       config: dict, metrics: dict):
        """Save trained surrogate to knowledge base."""
        # Store model and metadata
        # Index for future similarity search
        pass

    def transfer_learn(self, base_surrogate_id: int,
                       new_config: dict) -> nn.Module:
        """Create new surrogate by transfer learning from existing one."""
        # Load base model
        # Freeze early layers
        # Fine-tune on new data
        pass
```

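The 0.8 threshold in `find_similar_surrogate` presupposes some similarity measure over problem features. One plausible stdlib sketch is cosine similarity over a normalized feature vector built from the schema's characterization columns (the feature encoding below is an assumption for illustration, not the shipped one):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def problem_features(geometry_onehot, n_parameters, tags, vocabulary):
    """Concatenate a geometry one-hot, a scaled parameter count, and a tag bag."""
    tag_bag = [1.0 if t in tags else 0.0 for t in vocabulary]
    return list(geometry_onehot) + [n_parameters / 10.0] + tag_bag

vocab = ["static", "modal", "buckling"]
new_problem = problem_features([1, 0, 0], 4, {"static", "modal"}, vocab)
stored = problem_features([1, 0, 0], 5, {"static", "modal"}, vocab)
similarity = cosine_similarity(new_problem, stored)
```

A cantilever problem differing only in parameter count scores near 1.0, so it would clear the 0.8 gate and be offered for transfer learning; a different geometry one-hot drives the score well below it.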
---

## Phase 4: Dashboard Integration

### New Dashboard Pages

#### 1. Surrogate Status Panel (in existing Dashboard)
```
┌─────────────────────────────────────────────────────────┐
│ SURROGATE STATUS                                        │
├─────────────────────────────────────────────────────────┤
│ Mode: Hybrid (NN + FEA Validation)                      │
│ Stage: NN-Accelerated Optimization                      │
│                                                         │
│ Training Data: 150 samples (50 opt + 100 LHS)           │
│ Model Accuracy: MAPE 1.8% mass, 1.1% freq               │
│ Speedup: ~50x (10ms NN vs 500ms FEA)                    │
│                                                         │
│ [View Report]  [Retrain]  [Disable NN]                  │
└─────────────────────────────────────────────────────────┘
```

#### 2. Knowledge Base Browser
```
┌─────────────────────────────────────────────────────────┐
│ PHYSICS KNOWLEDGE BASE                                  │
├─────────────────────────────────────────────────────────┤
│ Stored Surrogates: 12                                   │
│                                                         │
│ [Cantilever Beams]  5 models, avg MAPE 2.1%             │
│ [Shell Structures]  3 models, avg MAPE 3.4%             │
│ [Solid Parts]       4 models, avg MAPE 4.2%             │
│                                                         │
│ Search: [aluminum modal_______]  [Find Similar]         │
│                                                         │
│ Matching Models:                                        │
│ - uav_arm_v2 (92% match) - Transfer Learning Available  │
│ - bracket_opt (78% match)                               │
└─────────────────────────────────────────────────────────┘
```

---

## Phase 5: User Workflow (Non-Coder Experience)

### Scenario: New Optimization with NN Acceleration

```
Step 1: Create Study via Dashboard
┌─────────────────────────────────────────────────────────┐
│ NEW OPTIMIZATION STUDY                                  │
├─────────────────────────────────────────────────────────┤
│ Study Name:  [drone_motor_mount___________]             │
│ Description: [Motor mount bracket________]              │
│                                                         │
│ Model File:  [Browse...] drone_mount.prt                │
│ Sim File:    [Browse...] drone_mount_sim.sim            │
│                                                         │
│ ☑ Enable Neural Network Acceleration                    │
│   ├─ Initial FEA Trials:  [30____]                      │
│   ├─ Training Samples:    [100___]                      │
│   ├─ Target Accuracy:     [10% MAPE]                    │
│   └─ ☑ Save to Knowledge Base                           │
│                                                         │
│ Similar existing model found: "uav_arm_optimization"    │
│ ☑ Use as starting point (transfer learning)             │
│                                                         │
│                    [Create Study]                       │
└─────────────────────────────────────────────────────────┘

Step 2: System Automatically Executes Protocol 12
- User sees progress in dashboard
- No command-line needed
- All stages automated

Step 3: Review Results
- Pareto front with FEA-validated designs
- NN performance report
- Knowledge saved for future use
```

---

## Implementation Roadmap

### Phase 1: Config Schema Extension (1-2 days)
- [ ] Define surrogate_settings schema
- [ ] Update config validator
- [ ] Create migration for existing configs

### Phase 2: Protocol 12 Runner (3-5 days)
- [ ] Create HybridSurrogateRunner class
- [ ] Implement stage transitions
- [ ] Add progress callbacks for dashboard
- [ ] Integrate existing scripts as modules

### Phase 3: Knowledge Base (2-3 days)
- [ ] Create SQLite schema
- [ ] Implement PhysicsKnowledgeBase API
- [ ] Add similarity search
- [ ] Basic transfer learning

### Phase 4: Dashboard Integration (2-3 days)
- [ ] Surrogate status panel
- [ ] Knowledge base browser
- [ ] Study creation wizard with NN options

### Phase 5: Documentation & Testing (1-2 days)
- [ ] User guide for non-coders
- [ ] Integration tests
- [ ] Example workflows

---

## Data Flow Architecture

```
                    ┌──────────────────────────────────────┐
                    │     optimization_config.json         │
                    │  (Single source of truth for study)  │
                    └──────────────────┬───────────────────┘
                                       │
                    ┌──────────────────▼───────────────────┐
                    │        Protocol 12 Runner            │
                    │   (Orchestrates entire workflow)     │
                    └──────────────────┬───────────────────┘
                                       │
     ┌─────────────────┬───────────────┼───────────────┬─────────────────┐
     │                 │               │               │                 │
     ▼                 ▼               ▼               ▼                 ▼
┌─────────┐      ┌─────────┐     ┌─────────┐     ┌─────────┐      ┌─────────┐
│   FEA   │      │Training │     │Surrogate│     │   NN    │      │Knowledge│
│ Solver  │      │  Data   │     │ Trainer │     │  Optim  │      │  Base   │
└────┬────┘      └────┬────┘     └────┬────┘     └────┬────┘      └────┬────┘
     │                │               │               │                │
     ▼                ▼               ▼               ▼                ▼
┌─────────────────────────────────────────────────────────────────────────┐
│                               study.db                                  │
│         (Optuna trials + training data + surrogate metadata)            │
└─────────────────────────────────────────────────────────────────────────┘
                                       │
                    ┌──────────────────▼───────────────────┐
                    │       physics_surrogates.db          │
                    │   (Master knowledge base - global)   │
                    └──────────────────────────────────────┘
```

---

## Key Benefits

### For Non-Coders
1. **Single JSON config** - No Python scripts to run manually
2. **Dashboard control** - Start/stop/monitor from browser
3. **Automatic recommendations** - System suggests best settings
4. **Knowledge reuse** - Similar problems get free speedup

### For the Organization
1. **Institutional memory** - Physics knowledge persists
2. **Faster iterations** - Each new study benefits from past work
3. **Reproducibility** - Everything tracked in databases
4. **Scalability** - Add more workers, train better models

### For the Workflow
1. **End-to-end automation** - No manual steps between stages
2. **Adaptive optimization** - System learns during run
3. **Validated results** - Top candidates always FEA-verified
4. **Rich reporting** - Performance metrics, comparisons, recommendations

---

## Next Steps

1. **Review this plan** - Get feedback on priorities
2. **Start with config schema** - Extend optimization_config.json
3. **Build Protocol 12** - Core automation logic
4. **Knowledge Base MVP** - Basic save/load functionality
5. **Dashboard integration** - Visual control panel

---

*Document Version: 1.0*
*Created: 2025-11-25*
*Author: Claude Code + Antoine*
217
docs/development/Philosophy.md
Normal file
@@ -0,0 +1,217 @@
# ATOMIZER: Philosophy & System Overview

## Vision Statement

Atomizer is an advanced structural optimization platform that bridges the gap between traditional FEA workflows and modern AI-assisted engineering. It transforms the complex, manual process of structural optimization into an intelligent, automated system where engineers can focus on high-level design decisions while AI handles the computational orchestration.

## Core Philosophy

### The Problem We're Solving

Traditional structural optimization is fragmented across multiple tools, requires deep expertise in numerical methods, and involves tedious manual iteration. Engineers spend 80% of their time on setup, file management, and result interpretation rather than actual engineering insight. Current tools are either too simplistic (missing advanced features) or too complex (requiring programming expertise).

Atomizer eliminates this friction by creating a unified, intelligent optimization environment where:

- **Setup is conversational**: Tell the system what you want to optimize in plain language
- **Monitoring is intuitive**: See everything happening in real-time with scientific visualizations
- **Results are actionable**: Get publication-ready reports with clear recommendations
- **Iteration is intelligent**: The system learns and adapts from each optimization run

### Design Principles

1. **Intelligence-First Architecture**
   - LLMs handle configuration, not templates
   - AI interprets results and suggests improvements
   - System learns from each optimization to improve future runs

2. **Scientific Rigor Without Complexity**
   - Professional visualizations that respect engineering standards
   - No dumbing down of data, but clear presentation
   - Dense information display with intuitive interaction

3. **Real-Time Everything**
   - Live optimization monitoring
   - Instant parameter adjustments
   - Streaming mesh deformation visualization

4. **Seamless Integration**
   - Works with existing NX/Nastran workflows
   - Connects to Claude Code for advanced automation
   - Exports to standard engineering formats

## System Architecture

### The Atomizer Ecosystem

```
┌─────────────────────────────────────────────────────────────┐
│                      ATOMIZER PLATFORM                      │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌──────────────┐   ┌──────────────┐   ┌──────────────┐     │
│  │   NX FILES   │──▶│ OPTIMIZATION │──▶│   REPORTS    │     │
│  │  (.bdf/.dat) │   │    ENGINE    │   │   (PDF/MD)   │     │
│  └──────────────┘   └──────────────┘   └──────────────┘     │
│         │                  ▲                  ▲             │
│         ▼                  │                  │             │
│  ┌──────────────┐   ┌──────────────┐   ┌──────────────┐     │
│  │    CLAUDE    │◀─▶│  DASHBOARD   │──▶│  PYNASTRAN   │     │
│  │     CODE     │   │   (REACT)    │   │  PROCESSOR   │     │
│  └──────────────┘   └──────────────┘   └──────────────┘     │
│                            │                                │
│                            ▼                                │
│                     ┌──────────────┐                        │
│                     │  WEBSOCKET   │                        │
│                     │  REAL-TIME   │                        │
│                     └──────────────┘                        │
└─────────────────────────────────────────────────────────────┘
```

### Component Breakdown

**1. Input Layer (NX Integration)**
- Accepts Nastran files directly from Windows Explorer
- Parses structural models, loads, constraints automatically
- Extracts optimization potential from existing designs

**2. Intelligence Layer (Claude Integration)**
- Interprets engineering requirements in natural language
- Generates optimization configurations automatically
- Provides real-time assistance during optimization
- Helps write and refine optimization reports

**3. Computation Layer (Optimization Engine)**
- Supports multiple algorithms (NSGA-II, Bayesian, Gradient-based)
- Manages surrogate models for expensive evaluations
- Handles parallel evaluations and distributed computing
- Maintains optimization history and checkpointing

**4. Visualization Layer (Dashboard)**
- Real-time monitoring with scientific-grade plots
- 3D mesh visualization with stress/displacement overlays
- Interactive parameter exploration via parallel coordinates
- Publication-ready figure generation

**5. Output Layer (Reporting)**
- Automated report generation with all findings
- AI-assisted report editing and refinement
- Export to engineering-standard formats
- Full traceability and reproducibility

## Technical Innovation

### What Makes Atomizer Different

**1. Protocol-Based Optimization**

Instead of rigid templates, Atomizer uses dynamic protocols that adapt to each problem:
- LLM analyzes the structure and suggests optimization strategies
- Protocols evolve based on results and user feedback
- Each optimization builds on previous knowledge

**2. Live Digital Twin**

During optimization, Atomizer maintains a live digital twin:
- See mesh deformation in real-time as parameters change
- Watch stress patterns evolve with design iterations
- Understand the physics behind optimization decisions

**3. Convergence Intelligence**

Beyond simple convergence plots:
- Hypervolume tracking for multi-objective quality
- Diversity metrics to avoid premature convergence
- Surrogate model accuracy for efficiency monitoring
- Parameter sensitivity analysis in real-time

**4. Collaborative AI**

Not just automation, but collaboration:
- AI explains its decisions and reasoning
- Engineers can override and guide the process
- System learns from corrections and preferences
- Knowledge accumulates across projects

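For a two-objective minimization front, the hypervolume tracked above reduces to a sum of rectangles against a reference point. A stdlib sketch of that computation (a simple 2D special case, not the general n-objective algorithm the engine may use):

```python
def hypervolume_2d(front, reference):
    """Dominated hypervolume of a 2-objective minimization front.

    front: list of (f1, f2) points; reference: (r1, r2) worse than all points.
    """
    # Keep only non-dominated points, sorted by the first objective
    pareto = []
    best_f2 = float("inf")
    for f1, f2 in sorted(front):
        if f2 < best_f2:
            pareto.append((f1, f2))
            best_f2 = f2

    # Sweep left to right, adding the rectangle each point contributes
    volume = 0.0
    prev_f2 = reference[1]
    for f1, f2 in pareto:
        volume += (reference[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return volume

hv = hypervolume_2d([(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)], reference=(4.0, 5.0))
```

A monotonically increasing hypervolume over trials indicates the Pareto front is still improving; a plateau is a practical convergence signal.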
## Workflow Revolution

### Traditional Workflow (Days/Weeks)
1. Manually set up optimization in CAE software
2. Define parameters one by one with trial ranges
3. Run optimization blindly
4. Wait for completion
5. Post-process results manually
6. Generate reports in Word/PowerPoint
7. Iterate if results are unsatisfactory

### Atomizer Workflow (Hours)
1. Drop NX files into Atomizer
2. Describe optimization goals in plain English
3. Review and adjust AI-generated configuration
4. Launch optimization with real-time monitoring
5. Interact with live results and adjust if needed
6. Receive comprehensive report automatically
7. Refine report with AI assistance

## Use Cases & Impact

### Primary Applications
- Structural weight reduction while maintaining strength
- Multi-objective optimization (weight vs. cost vs. performance)
- Topology optimization with manufacturing constraints
- Material selection and thickness optimization
- Frequency response optimization
- Thermal-structural coupled optimization

### Engineering Impact
- **10x faster** optimization setup
- **Real-time insights** instead of black-box results
- **Publication-ready outputs** without post-processing
- **Knowledge capture** from every optimization run
- **Democratized expertise** - junior engineers can run advanced optimizations

## Future Vision

### Near-term Roadmap
- Integration with more CAE solvers beyond Nastran
- Cloud-based distributed optimization
- Machine learning surrogate models
- Automated optimization strategy selection
- Cross-project knowledge transfer

### Long-term Vision

Atomizer will become the intelligent layer above all CAE tools, where:
- Engineers describe problems, not procedures
- Optimization strategies emerge from accumulated knowledge
- Results directly feed back into design tools
- Reports write themselves with engineering insights
- Every optimization makes the system smarter

## Technical Stack Summary

Core Technologies:
- **Frontend**: React/TypeScript with Plotly.js, D3.js, Three.js
- **Backend**: FastAPI with WebSocket support
- **Optimization**: NSGA-II, Bayesian optimization, custom algorithms
- **FEA Processing**: pyNastran for OP2/BDF manipulation
- **AI Integration**: Claude API for configuration and assistance
- **Visualization**: Scientific-grade plots with dark theme
- **Data Management**: Structured study folders with version control

## Success Metrics

Atomizer succeeds when:
- Engineers spend more time thinking about design than fighting with tools
- Optimization becomes accessible to non-specialists
- Results are trusted and reproducible
- Reports are directly usable in publications/presentations
- Each project contributes to collective knowledge
- The system feels like a collaborator, not just a tool

## Final Philosophy

Atomizer is not just another optimization tool - it's an optimization partner. It combines the rigor of traditional FEA, the power of modern optimization algorithms, the intelligence of AI, and the clarity of scientific visualization into a single, cohesive platform. The goal is not to replace engineering judgment but to amplify it, allowing engineers to explore design spaces that were previously too complex or time-consuming to investigate.

The dashboard you're building is the window into this intelligent optimization process - where complex mathematics meets intuitive interaction, where real-time computation meets thoughtful analysis, and where AI assistance meets engineering expertise.

This is Atomizer: Where structural optimization becomes a conversation, not a computation.
Block a user