feat: add Atomizer HQ multi-agent cluster infrastructure
- 8-agent OpenClaw cluster (Manager, Tech-Lead, Secretary, Auditor, Optimizer, Study-Builder, NX-Expert, Webster)
- Orchestration engine: orchestrate.py (sync delegation + handoffs)
- Workflow engine: YAML-defined multi-step pipelines
- Agent workspaces: SOUL.md, AGENTS.md, MEMORY.md per agent
- Shared skills: delegate, orchestrate, atomizer-protocols
- Capability registry (AGENTS_REGISTRY.json)
- Cluster management: cluster.sh, systemd template
- All secrets replaced with env var references
hq/skills/atomizer-protocols/QUICK_REF.md (new file, 74 lines)
# Atomizer QUICK_REF

> 2-page maximum intent: fastest lookup for humans + Claude Code.
> If it grows, split into WORKFLOWS/* and PROTOCOLS/*.

_Last updated: 2026-01-29 (Mario)_

---

## 0) Non-negotiables (Safety / Correctness)

### NX process safety

- **NEVER** kill `ugraf.exe` / user NX sessions directly.
- Only close NX using **NXSessionManager.close_nx_if_allowed()** (sessions we started).

### Study derivation

- When creating a new study version: **COPY the working `run_optimization.py` first**. Never rewrite it from scratch.

### Relative WFE

- **NEVER** compute relative WFE as `abs(RMS_a - RMS_b)`.
- Always use `extract_relative()` (node-by-node difference → Zernike fit → RMS).

### CMA-ES baseline

- `CmaEsSampler(x0=...)` does **not** evaluate the baseline first.
- Always `study.enqueue_trial(x0)` when the baseline must be trial 0.

---

## 1) Canonical workflow order (UI + docs)

**Create → Validate → Run → Analyze → Report → Deliver**

Canvas is a **visual validation layer**. The spec is the source of truth.

---

## 2) Single source of truth: AtomizerSpec v2.0

- Published spec: `studies/<topic>/<study>/atomizer_spec.json`
- Canvas edges are for visual validation; truth is in:
  - `objective.source.*`
  - `constraint.source.*`

---

## 3) Save strategy (S2)

- **Draft**: autosaved locally (browser storage)
- **Publish**: explicit action that writes to `atomizer_spec.json`

---

## 4) Key folders

- `optimization_engine/` core logic
- `atomizer-dashboard/` UI + backend
- `knowledge_base/lac/` learnings (failures/workarounds/patterns)
- `studies/` studies

---

## 5) Session start (Claude Code)

1. Read `PROJECT_STATUS.md`
2. Read `knowledge_base/lac/session_insights/failure.jsonl`
3. Read this file (`docs/QUICK_REF.md`)

---

## 6) References

- Deep protocols: `docs/protocols/`
- System instructions: `CLAUDE.md`
- Project coordination: `PROJECT_STATUS.md`
hq/skills/atomizer-protocols/SKILL.md (new file, 69 lines)
---
name: atomizer-protocols
description: Atomizer Engineering Co. protocols and procedures. Consult when performing operational or technical tasks (studies, optimization, reports, troubleshooting).
version: 1.1
---

# Atomizer Protocols Skill

Your company's operating system. Load `QUICK_REF.md` when you need the cheatsheet.

## When to Load

- **When performing a protocol-related task** (creating studies, running optimizations, generating reports, etc.)
- **NOT every session** — these are reference docs, not session context.

## Key Files

- `QUICK_REF.md` — 2-page cheatsheet. Start here.
- `protocols/OP_*` — Operational protocols (how to do things)
- `protocols/SYS_*` — System protocols (technical specifications)

## Protocol Lookup

| Need | Read |
|------|------|
| Create a study | OP_01 |
| Run optimization | OP_02 |
| Monitor progress | OP_03 |
| Analyze results | OP_04 |
| Export training data | OP_05 |
| Troubleshoot | OP_06 |
| Disk optimization | OP_07 |
| Generate report | OP_08 |
| Hand off to another agent | OP_09 |
| Start a new project | OP_10 |
| Post-phase learning cycle | OP_11 |
| Choose algorithm | SYS_15 |
| Submit job to Windows | SYS_19 |
| Read/write shared knowledge | SYS_20 |

## Protocol Index

### Operational (OP_01–OP_11)

| ID | Name | Summary |
|----|------|---------|
| OP_01 | Create Study | Study lifecycle from creation through setup |
| OP_02 | Run Optimization | How to launch and manage optimization runs |
| OP_03 | Monitor Progress | Tracking convergence, detecting issues |
| OP_04 | Analyze Results | Post-optimization analysis and interpretation |
| OP_05 | Export Training Data | Preparing data for ML/surrogate models |
| OP_06 | Troubleshoot | Diagnosing and fixing common failures |
| OP_07 | Disk Optimization | Managing disk space during long runs |
| OP_08 | Generate Report | Creating professional deliverables |
| OP_09 | Agent Handoff | How agents pass work to each other |
| OP_10 | Project Intake | How new projects get initialized |
| OP_11 | Digestion | Post-phase learning cycle (store, discard, sort, repair, evolve, self-document) |

### System (SYS_10–SYS_20)

| ID | Name | Summary |
|----|------|---------|
| SYS_10 | IMSO | Integrated Multi-Scale Optimization |
| SYS_11 | Multi-Objective | Multi-objective optimization setup |
| SYS_12 | Extractor Library | Available extractors and how to use them |
| SYS_13 | Dashboard Tracking | Dashboard integration and monitoring |
| SYS_14 | Neural Acceleration | GNN surrogate models |
| SYS_15 | Method Selector | Algorithm selection guide |
| SYS_16 | Self-Aware Turbo | Adaptive optimization strategies |
| SYS_17 | Study Insights | Learning from study results |
| SYS_18 | Context Engineering | How to maintain context across sessions |
| SYS_19 | Job Queue | Windows execution bridge protocol |
| SYS_20 | Agent Memory | How agents read/write shared knowledge |
hq/skills/atomizer-protocols/protocols/OP_01_CREATE_STUDY.md (new file, 667 lines)
# OP_01: Create Optimization Study

<!--
PROTOCOL: Create Optimization Study
LAYER: Operations
VERSION: 1.2
STATUS: Active
LAST_UPDATED: 2026-01-13
PRIVILEGE: user
LOAD_WITH: [core/study-creation-core.md]
-->

## Overview

This protocol guides you through creating a complete Atomizer optimization study from scratch. It covers gathering requirements, generating configuration files, and validating the setup.

**Skill to Load**: `.claude/skills/core/study-creation-core.md`

---

## When to Use

| Trigger | Action |
|---------|--------|
| "new study", "create study" | Follow this protocol |
| "set up optimization" | Follow this protocol |
| "optimize my design" | Follow this protocol |
| User provides NX model | Assess and follow this protocol |

---

## Quick Reference

### MANDATORY: Use TodoWrite for Study Creation

**BEFORE creating any files**, add ALL required outputs to TodoWrite:

```
TodoWrite([
  {"content": "Create optimization_config.json", "status": "pending", "activeForm": "Creating config"},
  {"content": "Create run_optimization.py", "status": "pending", "activeForm": "Creating run script"},
  {"content": "Create README.md", "status": "pending", "activeForm": "Creating README"},
  {"content": "Create STUDY_REPORT.md", "status": "pending", "activeForm": "Creating report template"}
])
```

**Mark each item complete ONLY after the file is created.** The study is NOT complete until all 4 items are checked off.

> **WHY**: This requirement exists because README.md was forgotten TWICE (2025-12-17, 2026-01-13) despite being listed as mandatory. TodoWrite provides visible enforcement.

---

**Required Outputs** (ALL MANDATORY - the study is INCOMPLETE without these):

| File | Purpose | Location | Priority |
|------|---------|----------|----------|
| `optimization_config.json` | Design vars, objectives, constraints | `1_setup/` | 1 |
| `run_optimization.py` | Execution script | Study root | 2 |
| **`README.md`** | Engineering documentation | Study root | **3 - NEVER SKIP** |
| `STUDY_REPORT.md` | Results template | Study root | 4 |

**CRITICAL**: README.md is MANDATORY for every study. A study without README.md is INCOMPLETE.

**Study Structure**:

```
studies/{geometry_type}/{study_name}/
├── 1_setup/
│   ├── model/                    # NX files (.prt, .sim, .fem)
│   └── optimization_config.json
├── 2_iterations/                 # FEA trial folders (iter1, iter2, ...)
├── 3_results/                    # Optimization outputs (study.db, logs)
├── README.md                     # MANDATORY
├── STUDY_REPORT.md               # MANDATORY
└── run_optimization.py
```
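The layout above can be scaffolded in one shot. A minimal sketch; the study name is a placeholder, and the `touch`ed files are empty stubs to be filled in per this protocol:

```shell
# Scaffold the standard study layout (study name is hypothetical)
STUDY="studies/Simple_Bracket/bracket_demo_V1"
mkdir -p "$STUDY/1_setup/model" "$STUDY/2_iterations" "$STUDY/3_results"
# Empty placeholders for the four mandatory outputs
touch "$STUDY/1_setup/optimization_config.json" \
      "$STUDY/run_optimization.py" \
      "$STUDY/README.md" "$STUDY/STUDY_REPORT.md"
```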
**IMPORTANT: Studies are organized by geometry type**:

| Geometry Type | Folder | Examples |
|---------------|--------|----------|
| M1 Mirror | `studies/M1_Mirror/` | m1_mirror_adaptive_V14, m1_mirror_cost_reduction_V3 |
| Simple Bracket | `studies/Simple_Bracket/` | bracket_stiffness_optimization |
| UAV Arm | `studies/UAV_Arm/` | uav_arm_optimization |
| Drone Gimbal | `studies/Drone_Gimbal/` | drone_gimbal_arm_optimization |
| Simple Beam | `studies/Simple_Beam/` | simple_beam_optimization |
| Other/Test | `studies/_Other/` | training_data_export_test |

When creating a new study:

1. Identify the geometry type (mirror, bracket, beam, etc.)
2. Place the study under the appropriate `studies/{geometry_type}/` folder
3. For new geometry types, create a new folder with a descriptive name

---

## README Hierarchy (Parent-Child Documentation)

**Two-level documentation system**:

```
studies/{geometry_type}/
├── README.md                     # PARENT: Project-level context
│   ├── Project overview          # What is this geometry/component?
│   ├── Physical system specs     # Material, dimensions, constraints
│   ├── Optical/mechanical specs  # Domain-specific requirements
│   ├── Design variables catalog  # ALL possible variables with descriptions
│   ├── Objectives catalog        # ALL possible objectives
│   ├── Campaign history          # Summary of all sub-studies
│   └── Sub-studies index         # Links to each sub-study
│
├── sub_study_V1/
│   └── README.md                 # CHILD: Study-specific details
│       ├── Link to parent        # "See ../README.md for context"
│       ├── Study focus           # What THIS study optimizes
│       ├── Active variables      # Which params enabled
│       ├── Algorithm config      # Sampler, trials, settings
│       ├── Baseline/seeding      # Starting point
│       └── Results summary       # Best trial, learnings
│
└── sub_study_V2/
    └── README.md                 # CHILD: References parent, adds specifics
```

### Parent README Content (Geometry-Level)

| Section | Content |
|---------|---------|
| Project Overview | What the component is, purpose, context |
| Physical System | Material, mass targets, loading conditions |
| Domain Specs | Optical prescription (mirrors), structural limits (brackets) |
| Design Variables | Complete catalog with ranges and descriptions |
| Objectives | All possible metrics with formulas |
| Campaign History | Evolution across sub-studies |
| Sub-Studies Index | Table with links, status, best results |
| Technical Notes | Domain-specific implementation details |

### Child README Content (Study-Level)

| Section | Content |
|---------|---------|
| Parent Reference | `> See [../README.md](../README.md) for project context` |
| Study Focus | What differentiates THIS study |
| Active Variables | Which parameters are enabled (subset of parent catalog) |
| Algorithm Config | Sampler, n_trials, sigma, seed |
| Baseline | Starting point (seeded from prior study or default) |
| Results | Best trial, improvement metrics |
| Key Learnings | What was discovered |

### When to Create Parent README

- **First study** for a geometry type → create the parent README immediately
- **Subsequent studies** → add to the parent's sub-studies index
- **New geometry type** → create both parent and child READMEs

### Example Reference

See `studies/M1_Mirror/README.md` for a complete parent README example.

---

## Interview Mode (DEFAULT)

**Study creation now uses Interview Mode by default.** This provides guided study creation with intelligent validation.

### Triggers (Any of These Start Interview Mode)

- "create a study", "new study", "set up study"
- "create a study for my bracket"
- "optimize this model"
- "I want to minimize mass"
- Any study creation request without "skip interview" or "manual"

### When to Skip Interview Mode (Manual)

Use manual mode only when:

- The user is a power user who knows the exact configuration
- Recreating a known study configuration
- The user explicitly says "skip interview", "quick setup", or "manual config"

### Starting Interview Mode

```python
from optimization_engine.interview import StudyInterviewEngine

engine = StudyInterviewEngine(study_path)

# Run introspection first (if a model is available)
introspection = {
    "expressions": [...],  # From part introspection
    "model_path": "...",
    "sim_path": "..."
}

session = engine.start_interview(study_name, introspection=introspection)
action = engine.get_first_question()

# Present action.message to the user
# Process answers with: action = engine.process_answer(user_response)
```

### Interview Benefits

- **Material-aware validation**: checks stress limits against yield
- **Anti-pattern detection**: warns about mass minimization without constraints
- **Auto extractor mapping**: maps goals to the correct extractors (E1-E10)
- **State persistence**: resume interrupted interviews
- **Blueprint generation**: creates a validated configuration

See `.claude/skills/modules/study-interview-mode.md` for full documentation.

---

## Detailed Steps (Manual Mode - Power Users Only)

### Step 1: Gather Requirements

**Ask the user**:

1. What are you trying to optimize? (objective)
2. What can you change? (design variables)
3. What limits must be respected? (constraints)
4. Where are your NX files?

**Example Dialog**:

```
User: "I want to optimize my bracket"
You:  "What should I optimize for - minimum mass, maximum stiffness,
       target frequency, or something else?"
User: "Minimize mass while keeping stress below 250 MPa"
```

### Step 2: Analyze Model (Introspection)

**MANDATORY**: When the user provides NX files, run comprehensive introspection:

```python
from optimization_engine.hooks.nx_cad.model_introspection import (
    introspect_part,
    introspect_simulation,
    introspect_op2,
    introspect_study
)

# Introspect the part file to get expressions, mass, features
part_info = introspect_part("C:/path/to/model.prt")

# Introspect the simulation to get solutions, BCs, loads
sim_info = introspect_simulation("C:/path/to/model.sim")

# If an OP2 exists, check what results are available
op2_info = introspect_op2("C:/path/to/results.op2")

# Or introspect an entire study directory at once
study_info = introspect_study("studies/my_study/")
```

**Introspection Report Contents**:

| Source | Information Extracted |
|--------|----------------------|
| `.prt` | Expressions (count, values, types), bodies, mass, material, features |
| `.sim` | Solutions, boundary conditions, loads, materials, mesh info, output requests |
| `.op2` | Available results (displacement, stress, strain, SPC forces, etc.), subcases |

**Generate Introspection Report** at study creation:

1. Save the report to `studies/{study_name}/MODEL_INTROSPECTION.md`
2. Include a summary of what's available for optimization
3. List potential design variables (expressions)
4. List extractable results (from the OP2)

**Key Questions Answered by Introspection**:

- What expressions exist? (potential design variables)
- What solution types? (static, modal, etc.)
- What results are available in the OP2? (displacement, stress, SPC forces)
- Multi-solution required? (static + modal = set `solution_name=None`)

### Step 3: Select Protocol

Based on objectives:

| Scenario | Protocol | Sampler |
|----------|----------|---------|
| Single objective | Protocol 10 (IMSO) | TPE, CMA-ES, or GP |
| 2-3 objectives | Protocol 11 | NSGA-II |
| >50 trials, need speed | Protocol 14 | + Neural acceleration |

See [SYS_10_IMSO](../system/SYS_10_IMSO.md), [SYS_11_MULTI_OBJECTIVE](../system/SYS_11_MULTI_OBJECTIVE.md).

### Step 4: Select Extractors

Match the physics to extractors from [SYS_12_EXTRACTOR_LIBRARY](../system/SYS_12_EXTRACTOR_LIBRARY.md):

| Need | Extractor ID | Function |
|------|--------------|----------|
| Max displacement | E1 | `extract_displacement()` |
| Natural frequency | E2 | `extract_frequency()` |
| Von Mises stress | E3 | `extract_solid_stress()` |
| Mass from BDF | E4 | `extract_mass_from_bdf()` |
| Mass from NX | E5 | `extract_mass_from_expression()` |
| Wavefront error | E8-E10 | Zernike extractors |

### Step 5: Generate Configuration

Create `optimization_config.json`:

```json
{
  "study_name": "bracket_optimization",
  "description": "Minimize bracket mass while meeting stress constraint",

  "design_variables": [
    {
      "name": "thickness",
      "type": "continuous",
      "min": 2.0,
      "max": 10.0,
      "unit": "mm",
      "description": "Wall thickness"
    }
  ],

  "objectives": [
    {
      "name": "mass",
      "type": "minimize",
      "unit": "kg",
      "description": "Total bracket mass"
    }
  ],

  "constraints": [
    {
      "name": "max_stress",
      "type": "less_than",
      "value": 250.0,
      "unit": "MPa",
      "description": "Maximum allowable von Mises stress"
    }
  ],

  "simulation": {
    "model_file": "1_setup/model/bracket.prt",
    "sim_file": "1_setup/model/bracket.sim",
    "solver": "nastran",
    "solution_name": null
  },

  "optimization_settings": {
    "protocol": "protocol_10_single_objective",
    "sampler": "TPESampler",
    "n_trials": 50
  }
}
```
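A generated config can be sanity-checked with a few stdlib-only assertions before moving on. This is a sketch: the required-key list mirrors the example above, not a formal schema, and the range check assumes continuous variables with `min`/`max`.

```python
import json
from pathlib import Path

REQUIRED_KEYS = ["study_name", "design_variables", "objectives",
                 "simulation", "optimization_settings"]

def check_config(path):
    """Fail fast on an obviously malformed optimization_config.json."""
    cfg = json.loads(Path(path).read_text())
    missing = [k for k in REQUIRED_KEYS if k not in cfg]
    if missing:
        raise ValueError(f"config missing keys: {missing}")
    for var in cfg["design_variables"]:
        # Assumes continuous variables; min must be strictly below max
        if not var.get("min", 0.0) < var.get("max", 0.0):
            raise ValueError(f"bad range for design variable {var.get('name')}")
    return cfg
```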
### Step 6: Generate run_optimization.py

**CRITICAL**: Always use the `FEARunner` class pattern with proper `NXSolver` initialization.

```python
#!/usr/bin/env python3
"""
{study_name} - Optimization Runner
Generated by Atomizer LLM
"""
import sys
import re
import json
from pathlib import Path
from typing import Dict, Optional, Any

# Add the optimization engine to the path
sys.path.insert(0, str(Path(__file__).parent.parent.parent))

import optuna
from optimization_engine.nx_solver import NXSolver
from optimization_engine.utils import ensure_nx_running
from optimization_engine.extractors import extract_solid_stress

# Paths
STUDY_DIR = Path(__file__).parent
SETUP_DIR = STUDY_DIR / "1_setup"
ITERATIONS_DIR = STUDY_DIR / "2_iterations"
RESULTS_DIR = STUDY_DIR / "3_results"
CONFIG_PATH = SETUP_DIR / "optimization_config.json"

# Ensure directories exist
ITERATIONS_DIR.mkdir(exist_ok=True)
RESULTS_DIR.mkdir(exist_ok=True)


class FEARunner:
    """Runs actual FEA simulations. Always use this pattern!"""

    def __init__(self, config: Dict[str, Any]):
        self.config = config
        self.nx_solver = None
        self.nx_manager = None
        self.master_model_dir = SETUP_DIR / "model"

    def setup(self):
        """Set up NX and the solver. Called lazily on first use."""
        study_name = self.config.get('study_name', 'my_study')

        # Ensure NX is running
        self.nx_manager, nx_was_started = ensure_nx_running(
            session_id=study_name,
            auto_start=True,
            start_timeout=120
        )

        # CRITICAL: Initialize NXSolver with named parameters, NOT a config dict
        nx_settings = self.config.get('nx_settings', {})
        nx_install_dir = nx_settings.get('nx_install_path', 'C:\\Program Files\\Siemens\\NX2506')

        # Extract the version from the install path
        version_match = re.search(r'NX(\d+)', nx_install_dir)
        nastran_version = version_match.group(1) if version_match else "2506"

        self.nx_solver = NXSolver(
            master_model_dir=str(self.master_model_dir),
            nx_install_dir=nx_install_dir,
            nastran_version=nastran_version,
            timeout=nx_settings.get('simulation_timeout_s', 600),
            use_iteration_folders=True,
            study_name=study_name
        )

    def run_fea(self, params: Dict[str, float], iter_num: int) -> Optional[Dict]:
        """Run an FEA simulation and extract results."""
        if self.nx_solver is None:
            self.setup()

        # Create expression updates
        expressions = {var['expression_name']: params[var['name']]
                       for var in self.config['design_variables']}

        # Create an iteration folder with model copies
        iter_folder = self.nx_solver.create_iteration_folder(
            iterations_base_dir=ITERATIONS_DIR,
            iteration_number=iter_num,
            expression_updates=expressions
        )

        # Run the simulation
        nx_settings = self.config.get('nx_settings', {})
        sim_file = iter_folder / nx_settings.get('sim_file', 'model.sim')

        result = self.nx_solver.run_simulation(
            sim_file=sim_file,
            working_dir=iter_folder,
            expression_updates=expressions,
            solution_name=nx_settings.get('solution_name', 'Solution 1'),
            cleanup=False
        )

        if not result['success']:
            return None

        # Extract results
        op2_file = result['op2_file']
        stress_result = extract_solid_stress(op2_file)

        return {
            'params': params,
            'max_stress': stress_result['max_von_mises'],
            'op2_file': op2_file
        }


# The optimizer class would use FEARunner...
# See m1_mirror_adaptive_V14/run_optimization.py for a full example
```

**WRONG** - causes `TypeError: expected str, bytes or os.PathLike object, not dict`:

```python
self.nx_solver = NXSolver(self.config)  # ❌ NEVER DO THIS
```

**Reference implementations**:

- `studies/m1_mirror_adaptive_V14/run_optimization.py` (TPE single-objective)
- `studies/m1_mirror_adaptive_V15/run_optimization.py` (NSGA-II multi-objective)

### Step 7: Generate Documentation

**README.md** (11 sections required):

1. Engineering Problem
2. Mathematical Formulation
3. Optimization Algorithm
4. Simulation Pipeline
5. Result Extraction Methods
6. Neural Acceleration (if applicable)
7. Study File Structure
8. Results Location
9. Quick Start
10. Configuration Reference
11. References

**STUDY_REPORT.md** (template):

```markdown
# Study Report: {study_name}

## Executive Summary
- Trials completed: _pending_
- Best objective: _pending_
- Constraint satisfaction: _pending_

## Optimization Progress
_To be filled after run_

## Best Designs Found
_To be filled after run_

## Recommendations
_To be filled after analysis_
```

### Step 7b: Capture Baseline Geometry Images (Recommended)

For better documentation, capture images of the starting geometry using the NX journal:

```bash
# Capture baseline images for study documentation
"C:\Program Files\Siemens\DesigncenterNX2512\NXBIN\run_journal.exe" ^
  "C:\Users\antoi\Atomizer\nx_journals\capture_study_images.py" ^
  -args "path/to/model.prt" "1_setup/" "model_name"
```

This generates:

- `1_setup/{model_name}_Top.png` - Top view
- `1_setup/{model_name}_iso.png` - Isometric view

**Include in README.md**:

```markdown
## Baseline Geometry

![Top View](1_setup/model_Top.png)
*Top view description*

![Isometric View](1_setup/model_iso.png)
*Isometric view description*
```

**Journal location**: `nx_journals/capture_study_images.py`

### Step 8: Validate NX Model File Chain

**CRITICAL**: NX simulation files have parent-child dependencies. ALL linked files must be copied to the study folder.

**Required File Chain Check**:

```
.sim (Simulation)
└── .fem (FEM)
    └── _i.prt (Idealized Part)  ← OFTEN MISSING!
        └── .prt (Geometry Part)
```

**Validation Steps**:

1. Open the `.sim` file in NX
2. Go to **Assemblies → Assembly Navigator** or check the **Part Navigator**
3. Identify ALL child components (especially `*_i.prt` idealized parts)
4. Copy ALL linked files to `1_setup/model/`

**Common Issue**: The `_i.prt` (idealized part) is often forgotten. Without it:

- `UpdateFemodel()` runs but the mesh doesn't change
- Geometry changes don't propagate to the FEM
- All optimization trials produce identical results

**File Checklist**:

| File Pattern | Description | Required |
|--------------|-------------|----------|
| `*.prt` | Geometry part | ✅ Always |
| `*_i.prt` | Idealized part | ✅ If FEM uses idealization |
| `*.fem` | FEM file | ✅ Always |
| `*.sim` | Simulation file | ✅ Always |

**Introspection should report**:

- A list of all parts referenced by the `.sim`
- A warning if any referenced parts are missing from the study folder
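The missing-file half of this check is easy to automate. A stdlib-only sketch; the function name and warning wording are mine, not part of the introspection API:

```python
from pathlib import Path

def check_model_chain(model_dir):
    """Flag missing files in the .sim -> .fem -> _i.prt -> .prt chain."""
    names = [p.name for p in Path(model_dir).iterdir()]
    problems = []
    for suffix in (".sim", ".fem", ".prt"):
        if not any(n.endswith(suffix) for n in names):
            problems.append(f"missing *{suffix}")
    # The classic pitfall: a FEM present but no idealized part copied over
    if any(n.endswith(".fem") for n in names) and not any(n.endswith("_i.prt") for n in names):
        problems.append("no *_i.prt found; verify the FEM does not use an idealized part")
    return problems
```

This cannot prove the chain is complete (only NX knows the true references), but it catches the common case of a forgotten `*_i.prt` before any trials are wasted.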
### Step 9: Final Validation Checklist

**CRITICAL**: The study is NOT complete until ALL items are checked:

- [ ] NX files exist in `1_setup/model/`
- [ ] **ALL child parts copied** (especially `*_i.prt`)
- [ ] Expression names match the model
- [ ] Config validates (JSON schema)
- [ ] `run_optimization.py` has no syntax errors
- [ ] **README.md exists** (MANDATORY - the study is incomplete without it!)
- [ ] README.md contains: Overview, Objectives, Constraints, Design Variables, Settings, Usage, Structure
- [ ] STUDY_REPORT.md template exists

**README.md Minimum Content**:

1. Overview/Purpose
2. Objectives with weights
3. Constraints (if any)
4. Design variables with ranges
5. Optimization settings
6. Usage commands
7. Directory structure
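The file-existence items in the checklist can be enforced mechanically alongside TodoWrite. A minimal sketch; the helper name is mine:

```python
from pathlib import Path

# The four mandatory outputs, relative to the study root
MANDATORY = [
    "1_setup/optimization_config.json",
    "run_optimization.py",
    "README.md",        # NEVER SKIP
    "STUDY_REPORT.md",
]

def missing_outputs(study_dir):
    """Return the mandatory study files that do not exist yet."""
    root = Path(study_dir)
    return [rel for rel in MANDATORY if not (root / rel).exists()]
```

An empty return value means the file-level part of the checklist passes; content checks (the seven README sections) still need a human or LLM review.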
---

## Examples

### Example 1: Simple Bracket

```
User: "Optimize my bracket.prt for minimum mass, stress < 250 MPa"

Generated config:
- 1 design variable (thickness)
- 1 objective (minimize mass)
- 1 constraint (stress < 250)
- Protocol 10, TPE sampler
- 50 trials
```

### Example 2: Multi-Objective Beam

```
User: "Minimize mass AND maximize stiffness for my beam"

Generated config:
- 2 design variables (width, height)
- 2 objectives (minimize mass, maximize stiffness)
- Protocol 11, NSGA-II sampler
- 50 trials (Pareto front)
```

### Example 3: Telescope Mirror

```
User: "Minimize wavefront error at 40deg vs 20deg reference"

Generated config:
- Multiple design variables (mount positions)
- 1 objective (minimize relative WFE)
- Zernike extractor E9
- Protocol 10
```

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| "Expression not found" | Name mismatch | Verify expression names in NX |
| "No feasible designs" | Constraints too tight | Relax constraint values |
| Config validation fails | Missing required field | Check the JSON schema |
| Import error | Wrong path | Check the sys.path setup |

---

## Cross-References

- **Depends On**: [SYS_12_EXTRACTOR_LIBRARY](../system/SYS_12_EXTRACTOR_LIBRARY.md)
- **Next Step**: [OP_02_RUN_OPTIMIZATION](./OP_02_RUN_OPTIMIZATION.md)
- **Skill**: `.claude/skills/core/study-creation-core.md`

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.2 | 2026-01-13 | Added MANDATORY TodoWrite requirement for study creation (README forgotten twice) |
| 1.1 | 2025-12-12 | Added FEARunner class pattern, NXSolver initialization warning |
| 1.0 | 2025-12-05 | Initial release |
hq/skills/atomizer-protocols/protocols/OP_02_RUN_OPTIMIZATION.md (new file, 321 lines)
# OP_02: Run Optimization

<!--
PROTOCOL: Run Optimization
LAYER: Operations
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-12
PRIVILEGE: user
LOAD_WITH: []
-->

## Overview

This protocol covers executing optimization runs, including pre-flight validation, execution modes, monitoring, and handling common issues.

---

## When to Use

| Trigger | Action |
|---------|--------|
| "start", "run", "execute" | Follow this protocol |
| "begin optimization" | Follow this protocol |
| Study setup complete | Execute this protocol |

---

## Quick Reference

**Start Command**:

```bash
conda activate atomizer
cd studies/{study_name}
python run_optimization.py
```

**Common Options**:

| Flag | Purpose |
|------|---------|
| `--n-trials 100` | Override trial count |
| `--resume` | Continue interrupted run |
| `--test` | Run single trial for validation |
| `--export-training` | Export data for neural training |

---

## Pre-Flight Checklist

Before running, verify:

- [ ] **Environment**: `conda activate atomizer`
- [ ] **Config exists**: `1_setup/optimization_config.json`
- [ ] **Script exists**: `run_optimization.py`
- [ ] **Model files**: NX files in `1_setup/model/`
- [ ] **No conflicts**: no other optimization running on the same study
- [ ] **Disk space**: sufficient for results
**Quick Validation**:
|
||||
```bash
|
||||
python run_optimization.py --test
|
||||
```
|
||||
This runs a single trial to verify setup.
|
||||
|
||||
---
|
||||
|
||||
## Execution Modes
|
||||
|
||||
### 1. Standard Run
|
||||
|
||||
```bash
|
||||
python run_optimization.py
|
||||
```
|
||||
Uses settings from `optimization_config.json`.
|
||||
|
||||
### 2. Override Trials
|
||||
|
||||
```bash
|
||||
python run_optimization.py --n-trials 100
|
||||
```
|
||||
Override trial count from config.
|
||||
|
||||
### 3. Resume Interrupted
|
||||
|
||||
```bash
|
||||
python run_optimization.py --resume
|
||||
```
|
||||
Continues from last completed trial.
|
||||
|
||||
### 4. Neural Acceleration
|
||||
|
||||
```bash
|
||||
python run_optimization.py --neural
|
||||
```
|
||||
Requires trained surrogate model.
|
||||
|
||||
### 5. Export Training Data
|
||||
|
||||
```bash
|
||||
python run_optimization.py --export-training
|
||||
```
|
||||
Saves BDF/OP2 for neural network training.
|
||||
|
||||
---
|
||||
|
||||
## Monitoring Progress
|
||||
|
||||
### Option 1: Console Output
|
||||
The script prints progress:
|
||||
```
|
||||
Trial 15/50 complete. Best: 0.234 kg
|
||||
Trial 16/50 complete. Best: 0.234 kg
|
||||
```
|
||||
|
||||
### Option 2: Dashboard
|
||||
See [SYS_13_DASHBOARD_TRACKING](../system/SYS_13_DASHBOARD_TRACKING.md).
|
||||
|
||||
```bash
|
||||
# Start dashboard (separate terminal)
|
||||
cd atomizer-dashboard/backend && python -m uvicorn api.main:app --port 8000
|
||||
cd atomizer-dashboard/frontend && npm run dev
|
||||
|
||||
# Open browser
|
||||
http://localhost:3000
|
||||
```
|
||||
|
||||
### Option 3: Query Database
|
||||
|
||||
```bash
|
||||
python -c "
|
||||
import optuna
|
||||
study = optuna.load_study('study_name', 'sqlite:///2_results/study.db')
|
||||
print(f'Trials: {len(study.trials)}')
|
||||
print(f'Best value: {study.best_value}')
|
||||
"
|
||||
```
|
||||
|
||||
### Option 4: Optuna Dashboard
|
||||
|
||||
```bash
|
||||
optuna-dashboard sqlite:///2_results/study.db
|
||||
# Open http://localhost:8080
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## During Execution
|
||||
|
||||
### What Happens Per Trial
|
||||
|
||||
1. **Sample parameters**: Optuna suggests design variable values
|
||||
2. **Update model**: NX expressions updated via journal
|
||||
3. **Solve**: NX Nastran runs FEA simulation
|
||||
4. **Extract results**: Extractors read OP2 file
|
||||
5. **Evaluate**: Check constraints, compute objectives
|
||||
6. **Record**: Trial stored in Optuna database

### Normal Output

```
[2025-12-05 10:15:30] Trial 1 started
[2025-12-05 10:17:45] NX solve complete (135.2s)
[2025-12-05 10:17:46] Extraction complete
[2025-12-05 10:17:46] Trial 1 complete: mass=0.342 kg, stress=198.5 MPa

[2025-12-05 10:17:47] Trial 2 started
...
```

### Expected Timing

| Operation | Typical Time |
|-----------|--------------|
| NX solve | 30 s - 30 min |
| Extraction | <1 s |
| Per trial total | 1-30 min |
| 50 trials | 1-24 hours |

---

## Handling Issues

### Trial Failed / Pruned

```
[WARNING] Trial 12 pruned: Stress constraint violated (312.5 MPa > 250 MPa)
```
**Normal behavior** - the optimizer learns from failures.

### NX Session Timeout

```
[ERROR] NX session timeout after 600s
```
**Solution**: Increase the timeout in the config or simplify the model.

### Expression Not Found

```
[ERROR] Expression 'thicknes' not found in model
```
**Solution**: Check spelling, verify the expression exists in NX.

### OP2 File Missing

```
[ERROR] OP2 file not found: model.op2
```
**Solution**: Check that the NX solve completed. Review the NX log file.

### Database Locked

```
[ERROR] Database is locked
```
**Solution**: Another process is using the database. Wait or kill the stale process.

---

## Stopping and Resuming

### Graceful Stop
Press `Ctrl+C` once. The current trial completes, then the run exits.

### Force Stop
Press `Ctrl+C` twice. Immediate exit (may lose the current trial).

### Resume
```bash
python run_optimization.py --resume
```
Continues from the last completed trial. The same study database is used.

---

## Post-Run Actions

After optimization completes:

1. **Archive best design** (REQUIRED):
   ```bash
   python tools/archive_best_design.py {study_name}
   ```
   This copies the best iteration folder to `3_results/best_design_archive/<timestamp>/`
   with metadata. **Always do this** to preserve the winning design.

2. **Analyze results**:
   ```bash
   python tools/analyze_study.py {study_name}
   ```
   Generates a comprehensive report with statistics and parameter-bounds analysis.

3. **Find best iteration folder**:
   ```bash
   python tools/find_best_iteration.py {study_name}
   ```
   Shows which `iter{N}` folder contains the best design.

4. **View in dashboard**: `http://localhost:3000`

5. **Generate detailed report**: See [OP_04_ANALYZE_RESULTS](./OP_04_ANALYZE_RESULTS.md)

### Automated Archiving

The `run_optimization.py` script should call `archive_best_design()` automatically
at the end of each run. If implementing a new study, add this at the end:

```python
# At end of optimization
from tools.archive_best_design import archive_best_design
archive_best_design(study_name)
```

---

## Protocol Integration

### With Protocol 10 (IMSO)
If enabled, optimization runs in two phases:
1. Characterization (10-30 trials)
2. Optimization (remaining trials)

Dashboard shows phase transitions.

### With Protocol 11 (Multi-Objective)
If 2+ objectives, uses NSGA-II. Returns a Pareto front, not a single best design.

### With Protocol 13 (Dashboard)
Writes `optimizer_state.json` every trial for real-time updates.

### With Protocol 14 (Neural)
If the `--neural` flag is set, uses the trained surrogate for fast evaluation.

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| "ModuleNotFoundError" | Wrong environment | `conda activate atomizer` |
| All trials pruned | Constraints too tight | Relax constraints |
| Very slow | Model too complex | Simplify mesh, increase timeout |
| No improvement | Wrong sampler | Try a different algorithm |
| "NX license error" | License unavailable | Check NX license server |

---

## Cross-References

- **Preceded By**: [OP_01_CREATE_STUDY](./OP_01_CREATE_STUDY.md)
- **Followed By**: [OP_03_MONITOR_PROGRESS](./OP_03_MONITOR_PROGRESS.md), [OP_04_ANALYZE_RESULTS](./OP_04_ANALYZE_RESULTS.md)
- **Integrates With**: [SYS_10_IMSO](../system/SYS_10_IMSO.md), [SYS_13_DASHBOARD_TRACKING](../system/SYS_13_DASHBOARD_TRACKING.md)

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.1 | 2025-12-12 | Added mandatory archive_best_design step, analyze_study and find_best_iteration tools |
| 1.0 | 2025-12-05 | Initial release |
246
hq/skills/atomizer-protocols/protocols/OP_03_MONITOR_PROGRESS.md
Normal file
@@ -0,0 +1,246 @@
# OP_03: Monitor Progress

<!--
PROTOCOL: Monitor Optimization Progress
LAYER: Operations
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: [SYS_13_DASHBOARD_TRACKING]
-->

## Overview

This protocol covers monitoring optimization progress through console output, the dashboard, database queries, and Optuna's built-in tools.

---

## When to Use

| Trigger | Action |
|---------|--------|
| "status", "progress" | Follow this protocol |
| "how many trials" | Query database |
| "what's happening" | Check console or dashboard |
| "is it running" | Check process status |

---

## Quick Reference

| Method | Command/URL | Best For |
|--------|-------------|----------|
| Console | Watch terminal output | Quick check |
| Dashboard | `http://localhost:3000` | Visual monitoring |
| Database query | Python one-liner | Scripted checks |
| Optuna Dashboard | `http://localhost:8080` | Detailed analysis |

---

## Monitoring Methods

### 1. Console Output

If running in the foreground, watch the terminal:
```
[10:15:30] Trial 15/50 started
[10:17:45] Trial 15/50 complete: mass=0.234 kg (best: 0.212 kg)
[10:17:46] Trial 16/50 started
```

### 2. Atomizer Dashboard

**Start Dashboard** (if not running):
```bash
# Terminal 1: Backend
cd atomizer-dashboard/backend
python -m uvicorn api.main:app --reload --port 8000

# Terminal 2: Frontend
cd atomizer-dashboard/frontend
npm run dev
```

**View at**: `http://localhost:3000`

**Features**:
- Real-time trial progress bar
- Current optimizer phase (if Protocol 10)
- Pareto front visualization (if multi-objective)
- Parallel coordinates plot
- Convergence chart

### 3. Database Query

**Quick status**:
```bash
python -c "
import optuna
study = optuna.load_study(
    study_name='my_study',
    storage='sqlite:///studies/my_study/2_results/study.db'
)
print(f'Trials completed: {len(study.trials)}')
print(f'Best value: {study.best_value}')
print(f'Best params: {study.best_params}')
"
```

**Detailed status**:
```python
from collections import Counter

import optuna

study = optuna.load_study(
    study_name='my_study',
    storage='sqlite:///studies/my_study/2_results/study.db'
)

# Trial counts by state
states = Counter(t.state.name for t in study.trials)
print(f"Complete: {states.get('COMPLETE', 0)}")
print(f"Pruned: {states.get('PRUNED', 0)}")
print(f"Failed: {states.get('FAIL', 0)}")
print(f"Running: {states.get('RUNNING', 0)}")

# Best trials
if len(study.directions) > 1:
    print(f"Pareto front size: {len(study.best_trials)}")
else:
    print(f"Best value: {study.best_value}")
```

### 4. Optuna Dashboard

```bash
optuna-dashboard sqlite:///studies/my_study/2_results/study.db
# Open http://localhost:8080
```

**Features**:
- Trial history table
- Parameter importance
- Optimization history plot
- Slice plot (parameter vs objective)

### 5. Check Running Processes

```bash
# Linux/Mac
ps aux | grep run_optimization

# Windows
tasklist | findstr python
```

---

## Key Metrics to Monitor

### Trial Progress
- Completed trials vs target
- Completion rate (trials/hour)
- Estimated time remaining

### Objective Improvement
- Current best value
- Improvement trend
- Plateau detection

### Constraint Satisfaction
- Feasibility rate (% passing constraints)
- Most violated constraint

### For Protocol 10 (IMSO)
- Current phase (Characterization vs Optimization)
- Current strategy (TPE, GP, CMA-ES)
- Characterization confidence

### For Protocol 11 (Multi-Objective)
- Pareto front size
- Hypervolume indicator
- Spread of solutions

---

## Interpreting Results

### Healthy Optimization
```
Trial 45/50: mass=0.198 kg (best: 0.195 kg)
Feasibility rate: 78%
```
- Progress toward the target
- Reasonable feasibility rate (60-90%)
- Gradual improvement

### Potential Issues

**All Trials Pruned**:
```
Trial 20 pruned: constraint violated
Trial 21 pruned: constraint violated
...
```
→ Constraints too tight. Consider relaxing them.

**No Improvement**:
```
Trial 30: best=0.234 (unchanged since trial 8)
Trial 31: best=0.234 (unchanged since trial 8)
```
→ May have converged, or be stuck in a local minimum.

**High Failure Rate**:
```
Failed: 15/50 (30%)
```
→ Model issues. Check NX logs.

---

## Real-Time State File

If using Protocol 10, check:
```bash
cat studies/my_study/2_results/intelligent_optimizer/optimizer_state.json
```

```json
{
  "timestamp": "2025-12-05T10:15:30",
  "trial_number": 29,
  "total_trials": 50,
  "current_phase": "adaptive_optimization",
  "current_strategy": "GP_UCB",
  "is_multi_objective": false
}
```

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| Dashboard shows old data | Backend not running | Start backend |
| "No study found" | Wrong path | Check study name and path |
| Trial count not increasing | Process stopped | Check if still running |
| Dashboard not updating | Polling issue | Refresh browser |

---

## Cross-References

- **Preceded By**: [OP_02_RUN_OPTIMIZATION](./OP_02_RUN_OPTIMIZATION.md)
- **Followed By**: [OP_04_ANALYZE_RESULTS](./OP_04_ANALYZE_RESULTS.md)
- **Integrates With**: [SYS_13_DASHBOARD_TRACKING](../system/SYS_13_DASHBOARD_TRACKING.md)

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial release |
302
hq/skills/atomizer-protocols/protocols/OP_04_ANALYZE_RESULTS.md
Normal file
@@ -0,0 +1,302 @@
# OP_04: Analyze Results

<!--
PROTOCOL: Analyze Optimization Results
LAYER: Operations
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: []
-->

## Overview

This protocol covers analyzing optimization results, including extracting best solutions, generating reports, comparing designs, and interpreting Pareto fronts.

---

## When to Use

| Trigger | Action |
|---------|--------|
| "results", "what did we find" | Follow this protocol |
| "best design" | Extract best trial |
| "compare", "trade-off" | Pareto analysis |
| "report" | Generate summary |
| Optimization complete | Analyze and document |

---

## Quick Reference

**Key Outputs**:
| Output | Location | Purpose |
|--------|----------|---------|
| Best parameters | `study.best_params` | Optimal design |
| Pareto front | `study.best_trials` | Trade-off solutions |
| Trial history | `study.trials` | Full exploration |
| Intelligence report | `intelligent_optimizer/` | Algorithm insights |

---

## Analysis Methods

### 1. Single-Objective Results

```python
import optuna

study = optuna.load_study(
    study_name='my_study',
    storage='sqlite:///2_results/study.db'
)

# Best result
print(f"Best value: {study.best_value}")
print(f"Best parameters: {study.best_params}")
print(f"Best trial: #{study.best_trial.number}")

# Get full best trial details
best = study.best_trial
print(f"User attributes: {best.user_attrs}")
```

### 2. Multi-Objective Results (Pareto Front)

```python
import optuna

study = optuna.load_study(
    study_name='my_study',
    storage='sqlite:///2_results/study.db'
)

# All Pareto-optimal solutions
pareto_trials = study.best_trials
print(f"Pareto front size: {len(pareto_trials)}")

# Print all Pareto solutions
for trial in pareto_trials:
    print(f"Trial {trial.number}: {trial.values} - {trial.params}")

# Find extremes
# Assuming objectives: [stiffness (max), mass (min)]
best_stiffness = max(pareto_trials, key=lambda t: t.values[0])
lightest = min(pareto_trials, key=lambda t: t.values[1])

print(f"Best stiffness: Trial {best_stiffness.number}")
print(f"Lightest: Trial {lightest.number}")
```

### 3. Parameter Importance

```python
import optuna

study = optuna.load_study(...)

# Parameter importance (which parameters matter most)
importance = optuna.importance.get_param_importances(study)
for param, score in importance.items():
    print(f"{param}: {score:.3f}")
```

### 4. Constraint Analysis

```python
import optuna

# Find feasibility rate (assumes a loaded study)
completed = [t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE]
pruned = [t for t in study.trials if t.state == optuna.trial.TrialState.PRUNED]

feasibility_rate = len(completed) / (len(completed) + len(pruned))
print(f"Feasibility rate: {feasibility_rate:.1%}")

# Analyze why trials were pruned
for trial in pruned[:5]:  # First 5 pruned
    reason = trial.user_attrs.get('pruning_reason', 'Unknown')
    print(f"Trial {trial.number}: {reason}")
```

---

## Visualization

### Using Optuna Dashboard

```bash
optuna-dashboard sqlite:///2_results/study.db
# Open http://localhost:8080
```

**Available Plots**:
- Optimization history
- Parameter importance
- Slice plot (parameter vs objective)
- Parallel coordinates
- Contour plot (2D parameter interaction)

### Using Atomizer Dashboard

Navigate to `http://localhost:3000` and select the study.

**Features**:
- Pareto front plot with normalization
- Parallel coordinates with selection
- Real-time convergence chart

### Custom Visualization

```python
import optuna

study = optuna.load_study(...)

# Plot optimization history
fig = optuna.visualization.plot_optimization_history(study)
fig.show()

# Plot parameter importance
fig = optuna.visualization.plot_param_importances(study)
fig.show()

# Plot Pareto front (multi-objective)
if len(study.directions) > 1:
    fig = optuna.visualization.plot_pareto_front(study)
    fig.show()
```

---

## Generate Reports

### Update STUDY_REPORT.md

After analysis, fill in the template:

```markdown
# Study Report: bracket_optimization

## Executive Summary
- **Trials completed**: 50
- **Best mass**: 0.195 kg
- **Best parameters**: thickness=4.2mm, width=25.8mm
- **Constraint satisfaction**: All constraints met

## Optimization Progress
- Initial best: 0.342 kg (trial 1)
- Final best: 0.195 kg (trial 38)
- Improvement: 43%

## Best Designs Found

### Design 1 (Overall Best)
| Parameter | Value |
|-----------|-------|
| thickness | 4.2 mm |
| width | 25.8 mm |

| Metric | Value | Constraint |
|--------|-------|------------|
| Mass | 0.195 kg | - |
| Max stress | 238.5 MPa | < 250 MPa ✓ |

## Engineering Recommendations
1. Recommended design: Trial 38 parameters
2. Safety margin: 4.6% on stress constraint
3. Consider manufacturing tolerance analysis
```
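
The safety margin quoted in the recommendations is just the relative headroom to the constraint limit:

```python
limit_mpa = 250.0
actual_mpa = 238.5

margin = (limit_mpa - actual_mpa) / limit_mpa
print(f"Safety margin: {margin:.1%}")  # → Safety margin: 4.6%
```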

### Export to CSV

```python
import optuna
import pandas as pd

# All trials to DataFrame (assumes a loaded study)
trials_data = []
for trial in study.trials:
    if trial.state == optuna.trial.TrialState.COMPLETE:
        row = {'trial': trial.number, 'value': trial.value}
        row.update(trial.params)
        trials_data.append(row)

df = pd.DataFrame(trials_data)
df.to_csv('optimization_results.csv', index=False)
```

### Export Best Design for FEA Validation

```python
import json

# Get best parameters
best_params = study.best_params

# Format for NX expression update
for name, value in best_params.items():
    print(f"{name} = {value}")

# Or save as JSON
with open('best_design.json', 'w') as f:
    json.dump(best_params, f, indent=2)
```

---

## Intelligence Report (Protocol 10)

If using Protocol 10, check the intelligence files:

```bash
# Landscape analysis
cat 2_results/intelligent_optimizer/intelligence_report.json

# Characterization progress
cat 2_results/intelligent_optimizer/characterization_progress.json
```

**Key Insights**:
- Landscape classification (smooth/rugged, unimodal/multimodal)
- Algorithm recommendation rationale
- Parameter correlations
- Confidence metrics

---

## Validation Checklist

Before finalizing results:

- [ ] Best solution satisfies all constraints
- [ ] Results are physically reasonable
- [ ] Parameter values within manufacturing limits
- [ ] Consider re-running FEA on the best design to confirm
- [ ] Document any anomalies or surprises
- [ ] Update STUDY_REPORT.md

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| Best value seems wrong | Constraint not enforced | Check objective function |
| No Pareto solutions | All trials failed | Check constraints |
| Unexpected best params | Local minimum | Try different starting points |
| Can't load study | Wrong path | Verify database location |

---

## Cross-References

- **Preceded By**: [OP_02_RUN_OPTIMIZATION](./OP_02_RUN_OPTIMIZATION.md), [OP_03_MONITOR_PROGRESS](./OP_03_MONITOR_PROGRESS.md)
- **Related**: [SYS_11_MULTI_OBJECTIVE](../system/SYS_11_MULTI_OBJECTIVE.md) for Pareto analysis
- **Skill**: `.claude/skills/generate-report.md`

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial release |
@@ -0,0 +1,294 @@
|
||||
# OP_05: Export Training Data
|
||||
|
||||
<!--
|
||||
PROTOCOL: Export Neural Network Training Data
|
||||
LAYER: Operations
|
||||
VERSION: 1.0
|
||||
STATUS: Active
|
||||
LAST_UPDATED: 2025-12-05
|
||||
PRIVILEGE: user
|
||||
LOAD_WITH: [SYS_14_NEURAL_ACCELERATION]
|
||||
-->
|
||||
|
||||
## Overview
|
||||
|
||||
This protocol covers exporting FEA simulation data for training neural network surrogates. Proper data export enables Protocol 14 (Neural Acceleration).
|
||||
|
||||
---
|
||||
|
||||
## When to Use
|
||||
|
||||
| Trigger | Action |
|
||||
|---------|--------|
|
||||
| "export training data" | Follow this protocol |
|
||||
| "neural network data" | Follow this protocol |
|
||||
| Planning >50 trials | Consider export for acceleration |
|
||||
| Want to train surrogate | Follow this protocol |
|
||||
|
||||
---
|
||||
|
||||
## Quick Reference
|
||||
|
||||
**Export Command**:
|
||||
```bash
|
||||
python run_optimization.py --export-training
|
||||
```
|
||||
|
||||
**Output Structure**:
|
||||
```
|
||||
atomizer_field_training_data/{study_name}/
|
||||
├── trial_0001/
|
||||
│ ├── input/model.bdf
|
||||
│ ├── output/model.op2
|
||||
│ └── metadata.json
|
||||
├── trial_0002/
|
||||
│ └── ...
|
||||
└── study_summary.json
|
||||
```
|
||||
|
||||
**Recommended Data Volume**:
|
||||
| Complexity | Training Samples | Validation Samples |
|
||||
|------------|-----------------|-------------------|
|
||||
| Simple (2-3 params) | 50-100 | 20-30 |
|
||||
| Medium (4-6 params) | 100-200 | 30-50 |
|
||||
| Complex (7+ params) | 200-500 | 50-100 |
|
||||
|
||||
---
|
||||
|
||||
## Configuration
|
||||
|
||||
### Enable Export in Config
|
||||
|
||||
Add to `optimization_config.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"training_data_export": {
|
||||
"enabled": true,
|
||||
"export_dir": "atomizer_field_training_data/my_study",
|
||||
"export_bdf": true,
|
||||
"export_op2": true,
|
||||
"export_fields": ["displacement", "stress"],
|
||||
"include_failed": false
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Configuration Options
|
||||
|
||||
| Option | Type | Default | Description |
|
||||
|--------|------|---------|-------------|
|
||||
| `enabled` | bool | false | Enable export |
|
||||
| `export_dir` | string | - | Output directory |
|
||||
| `export_bdf` | bool | true | Save Nastran input |
|
||||
| `export_op2` | bool | true | Save binary results |
|
||||
| `export_fields` | list | all | Which result fields |
|
||||
| `include_failed` | bool | false | Include failed trials |
|
||||
|
||||
---
|
||||
|
||||
## Export Workflow
|
||||
|
||||
### Step 1: Run with Export Enabled
|
||||
|
||||
```bash
|
||||
conda activate atomizer
|
||||
cd studies/my_study
|
||||
python run_optimization.py --export-training
|
||||
```
|
||||
|
||||
Or run standard optimization with config export enabled.
|
||||
|
||||
### Step 2: Verify Export
|
||||
|
||||
```bash
|
||||
ls atomizer_field_training_data/my_study/
|
||||
# Should see trial_0001/, trial_0002/, etc.
|
||||
|
||||
# Check a trial
|
||||
ls atomizer_field_training_data/my_study/trial_0001/
|
||||
# input/model.bdf
|
||||
# output/model.op2
|
||||
# metadata.json
|
||||
```
|
||||
|
||||
### Step 3: Check Metadata
|
||||
|
||||
```bash
|
||||
cat atomizer_field_training_data/my_study/trial_0001/metadata.json
|
||||
```
|
||||
|
||||
```json
|
||||
{
|
||||
"trial_number": 1,
|
||||
"design_parameters": {
|
||||
"thickness": 5.2,
|
||||
"width": 30.0
|
||||
},
|
||||
"objectives": {
|
||||
"mass": 0.234,
|
||||
"max_stress": 198.5
|
||||
},
|
||||
"constraints_satisfied": true,
|
||||
"simulation_time": 145.2
|
||||
}
|
||||
```
|
||||
|
||||
### Step 4: Check Study Summary
|
||||
|
||||
```bash
|
||||
cat atomizer_field_training_data/my_study/study_summary.json
|
||||
```
|
||||
|
||||
```json
|
||||
{
|
||||
"study_name": "my_study",
|
||||
"total_trials": 50,
|
||||
"successful_exports": 47,
|
||||
"failed_exports": 3,
|
||||
"design_parameters": ["thickness", "width"],
|
||||
"objectives": ["mass", "max_stress"],
|
||||
"export_timestamp": "2025-12-05T15:30:00"
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Data Quality Checks
|
||||
|
||||
### Verify Sample Count
|
||||
|
||||
```python
|
||||
from pathlib import Path
|
||||
import json
|
||||
|
||||
export_dir = Path("atomizer_field_training_data/my_study")
|
||||
trials = list(export_dir.glob("trial_*"))
|
||||
print(f"Exported trials: {len(trials)}")
|
||||
|
||||
# Check for missing files
|
||||
for trial_dir in trials:
|
||||
bdf = trial_dir / "input" / "model.bdf"
|
||||
op2 = trial_dir / "output" / "model.op2"
|
||||
meta = trial_dir / "metadata.json"
|
||||
|
||||
if not all([bdf.exists(), op2.exists(), meta.exists()]):
|
||||
print(f"Missing files in {trial_dir}")
|
||||
```
|
||||
|
||||
### Check Parameter Coverage
|
||||
|
||||
```python
|
||||
import json
|
||||
import numpy as np
|
||||
|
||||
# Load all metadata
|
||||
params = []
|
||||
for trial_dir in export_dir.glob("trial_*"):
|
||||
with open(trial_dir / "metadata.json") as f:
|
||||
meta = json.load(f)
|
||||
params.append(meta["design_parameters"])
|
||||
|
||||
# Check coverage
|
||||
import pandas as pd
|
||||
df = pd.DataFrame(params)
|
||||
print(df.describe())
|
||||
|
||||
# Look for gaps
|
||||
for col in df.columns:
|
||||
print(f"{col}: min={df[col].min():.2f}, max={df[col].max():.2f}")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Space-Filling Sampling
|
||||
|
||||
For best neural network training, use space-filling designs:
|
||||
|
||||
### Latin Hypercube Sampling
|
||||
|
||||
```python
|
||||
from scipy.stats import qmc
|
||||
|
||||
# Generate space-filling samples
|
||||
n_samples = 100
|
||||
n_params = 4
|
||||
|
||||
sampler = qmc.LatinHypercube(d=n_params)
|
||||
samples = sampler.random(n=n_samples)
|
||||
|
||||
# Scale to parameter bounds
|
||||
lower = [2.0, 20.0, 5.0, 1.0]
|
||||
upper = [10.0, 50.0, 15.0, 5.0]
|
||||
scaled = qmc.scale(samples, lower, upper)
|
||||
```
|
||||
|
||||
### Sobol Sequence
|
||||
|
||||
```python
|
||||
sampler = qmc.Sobol(d=n_params)
|
||||
samples = sampler.random(n=n_samples)
|
||||
scaled = qmc.scale(samples, lower, upper)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Next Steps After Export
|
||||
|
||||
### 1. Parse to Neural Format
|
||||
|
||||
```bash
|
||||
cd atomizer-field
|
||||
python batch_parser.py ../atomizer_field_training_data/my_study
|
||||
```
|
||||
|
||||
### 2. Split Train/Validation
|
||||
|
||||
```python
|
||||
from sklearn.model_selection import train_test_split
|
||||
|
||||
# 80/20 split
|
||||
train_trials, val_trials = train_test_split(
|
||||
all_trials,
|
||||
test_size=0.2,
|
||||
random_state=42
|
||||
)
|
||||
```
|
||||
|
||||
### 3. Train Model
|
||||
|
||||
```bash
|
||||
python train_parametric.py \
|
||||
--train_dir ../training_data/parsed \
|
||||
--val_dir ../validation_data/parsed \
|
||||
--epochs 200
|
||||
```
|
||||
|
||||
See [SYS_14_NEURAL_ACCELERATION](../system/SYS_14_NEURAL_ACCELERATION.md) for full training workflow.

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| No export directory | Export not enabled | Add `training_data_export` to config |
| Missing OP2 files | Solve failed | Check `include_failed: false` |
| Incomplete metadata | Extraction error | Check extractor logs |
| Low sample count | Too many failures | Relax constraints |
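The first fix in the table, enabling `training_data_export`, might look like this in the optimization config. A sketch only: beyond the two keys named in this document (`training_data_export`, `include_failed`), the field layout is an assumption:

```json
{
  "training_data_export": {
    "enabled": true,
    "include_failed": false
  }
}
```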

---

## Cross-References

- **Related**: [SYS_14_NEURAL_ACCELERATION](../system/SYS_14_NEURAL_ACCELERATION.md)
- **Preceded By**: [OP_02_RUN_OPTIMIZATION](./OP_02_RUN_OPTIMIZATION.md)
- **Skill**: `.claude/skills/modules/neural-acceleration.md`

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial release |
437
hq/skills/atomizer-protocols/protocols/OP_06_TROUBLESHOOT.md
Normal file
@@ -0,0 +1,437 @@
# OP_06: Troubleshoot

<!--
PROTOCOL: Troubleshoot Optimization Issues
LAYER: Operations
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: []
-->

## Overview

This protocol provides systematic troubleshooting for common optimization issues, covering NX errors, extraction failures, database problems, and performance issues.

---

## When to Use

| Trigger | Action |
|---------|--------|
| "error", "failed" | Follow this protocol |
| "not working", "crashed" | Follow this protocol |
| "help", "stuck" | Follow this protocol |
| Unexpected behavior | Follow this protocol |

---

## Quick Diagnostic

```bash
# 1. Check environment
conda activate atomizer
python --version  # Should be 3.9+

# 2. Check study structure
ls studies/my_study/
# Should have: 1_setup/, run_optimization.py

# 3. Check model files
ls studies/my_study/1_setup/model/
# Should have: .prt, .sim files

# 4. Test single trial
python run_optimization.py --test
```

---

## Error Categories

### 1. Environment Errors

#### "ModuleNotFoundError: No module named 'optuna'"

**Cause**: Wrong Python environment

**Solution**:
```bash
conda activate atomizer
# Verify
conda list | grep optuna
```

#### "Python version mismatch"

**Cause**: Wrong Python version

**Solution**:
```bash
python --version  # Need 3.9+
conda activate atomizer
```

---

### 2. NX Model Setup Errors

#### "All optimization trials produce identical results"

**Cause**: Missing idealized part (`*_i.prt`) or broken file chain

**Symptoms**:
- Journal shows "FE model updated" but results don't change
- DAT files have the same node coordinates despite different expressions
- OP2 file timestamps update but values are identical

**Root Cause**: NX simulation files have a parent-child hierarchy:
```
.sim → .fem → _i.prt → .prt (geometry)
```

If the `_i.prt` (idealized part) is missing or not properly linked, `UpdateFemodel()` runs but the mesh doesn't regenerate because:
- The FEM mesh is tied to the idealized geometry, not the master geometry
- Without the idealized part updating, the FEM has nothing new to mesh against

**Solution**:
1. **Check file chain in NX**:
   - Open the `.sim` file
   - Go to **Part Navigator** or **Assembly Navigator**
   - List ALL referenced parts

2. **Copy ALL linked files** to the study folder:
   ```bash
   # Typical file set needed:
   Model.prt           # Geometry
   Model_fem1_i.prt    # Idealized part ← OFTEN MISSING!
   Model_fem1.fem      # FEM file
   Model_sim1.sim      # Simulation file
   ```

3. **Verify links are intact**:
   - Open the model in NX after copying
   - Check that updates propagate: Geometry → Idealized → FEM → Sim

4. **CRITICAL CODE FIX** (already implemented in `solve_simulation.py`):
   The idealized part MUST be explicitly loaded before `UpdateFemodel()`:
   ```python
   # Load idealized part BEFORE updating FEM
   for filename in os.listdir(working_dir):
       if '_i.prt' in filename.lower():
           path = os.path.join(working_dir, filename)
           idealized_part, status = theSession.Parts.Open(path)
           break

   # Now UpdateFemodel() will work correctly
   feModel.UpdateFemodel()
   ```
   Without loading the `_i.prt`, NX cannot propagate geometry changes to the mesh.

**Prevention**: Always use introspection to list all parts referenced by a simulation.

---

### 3. NX/Solver Errors

#### "NX session timeout after 600s"

**Cause**: Model too complex or NX stuck

**Solution**:
1. Increase the timeout in config:
   ```json
   "simulation": {
       "timeout": 1200
   }
   ```
2. Simplify the mesh if possible
3. Check NX license availability

#### "Expression 'xxx' not found in model"

**Cause**: Expression name mismatch

**Solution**:
1. Open the model in NX
2. Go to Tools → Expressions
3. Verify the exact expression name (case-sensitive)
4. Update the config to match

#### "NX license error"

**Cause**: License server unavailable

**Solution**:
1. Check license server status
2. Wait and retry
3. Contact IT if persistent

#### "NX solve failed - check log"

**Cause**: Nastran solver error

**Solution**:
1. Find the log file: `1_setup/model/*.log` or `*.f06`
2. Search for "FATAL" or "ERROR"
3. Common causes:
   - Singular stiffness matrix (constraints issue)
   - Bad mesh (distorted elements)
   - Missing material properties

---

### 4. Extraction Errors

#### "OP2 file not found"

**Cause**: Solve didn't produce output

**Solution**:
1. Check whether the solve completed
2. Look for a `.op2` file in the model directory
3. Check the NX log for solve errors

#### "No displacement data for subcase X"

**Cause**: Wrong subcase number

**Solution**:
1. Check the available subcases in the OP2:
   ```python
   from pyNastran.op2.op2 import OP2
   op2 = OP2()
   op2.read_op2('model.op2')
   print(op2.displacements.keys())
   ```
2. Update the subcase in the extractor call

#### "Element type 'xxx' not supported"

**Cause**: Extractor doesn't support the element type

**Solution**:
1. Check the available types in the extractor
2. Common types: `cquad4`, `ctria3`, `ctetra`, `chexa`
3. May need a different extractor

---

### 5. Database Errors

#### "Database is locked"

**Cause**: Another process is using the database

**Solution**:
1. Check for running processes:
   ```bash
   ps aux | grep run_optimization
   ```
2. Kill the stale process if needed
3. Wait for the other optimization to finish

#### "Study 'xxx' not found"

**Cause**: Wrong study name or path

**Solution**:
1. Check the exact study name in the database:
   ```python
   import optuna
   storage = optuna.storages.RDBStorage('sqlite:///study.db')
   print(storage.get_all_study_summaries())
   ```
2. Use the correct name when loading

#### "IntegrityError: UNIQUE constraint failed"

**Cause**: Duplicate trial number

**Solution**:
1. Don't run multiple optimizations on the same study simultaneously
2. Use the `--resume` flag for continuation

---

### 6. Constraint/Feasibility Errors

#### "All trials pruned"

**Cause**: No feasible region

**Solution**:
1. Check constraint values:
   ```python
   # In the objective function, print constraint values
   print(f"Stress: {stress}, limit: 250")
   ```
2. Relax constraints
3. Widen design variable bounds

#### "No improvement after N trials"

**Cause**: Stuck in a local minimum, or genuinely converged

**Solution**:
1. Check if truly converged (a good result)
2. Try a different starting region
3. Use a different sampler
4. Increase exploration (raise `n_startup_trials` so more random trials run before the sampler starts exploiting)

---

### 7. Performance Issues

#### "Trials running very slowly"

**Cause**: Complex model or inefficient extraction

**Solution**:
1. Profile time per component:
   ```python
   import time
   start = time.time()
   # ... operation ...
   print(f"Took: {time.time() - start:.1f}s")
   ```
2. Simplify the mesh if NX is slow
3. Check that extraction isn't re-parsing the OP2 multiple times

#### "Memory error"

**Cause**: Large OP2 file or many trials

**Solution**:
1. Clear Python memory between trials
2. Don't store all results in memory
3. Use the database for persistence
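
Point 1 can be sketched as follows; the trial loop and the large list standing in for parsed OP2 data are hypothetical:

```python
import gc

def run_trial(i):
    big = list(range(1_000_000))   # stand-in for parsed OP2 data
    result = sum(big) % 97         # keep only the small derived value
    del big                        # drop the reference...
    gc.collect()                   # ...and reclaim memory before the next trial
    return result

results = [run_trial(i) for i in range(3)]
print(results)
```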

---

## Diagnostic Commands

### Quick Health Check

```bash
# Environment
conda activate atomizer
python -c "import optuna; print('Optuna OK')"
python -c "import pyNastran; print('pyNastran OK')"

# Study structure
ls -la studies/my_study/

# Config validity
python -c "
import json
with open('studies/my_study/1_setup/optimization_config.json') as f:
    config = json.load(f)
print('Config OK')
print(f'Objectives: {len(config.get(\"objectives\", []))}')
"

# Database status
python -c "
import optuna
study = optuna.load_study(study_name='my_study',
                          storage='sqlite:///studies/my_study/2_results/study.db')
print(f'Trials: {len(study.trials)}')
"
```

### NX Log Analysis

```bash
# Find latest log
ls -lt studies/my_study/1_setup/model/*.log | head -1

# Search for errors
grep -i "error\|fatal\|fail" studies/my_study/1_setup/model/*.log
```

### Trial Failure Analysis

```python
import optuna

study = optuna.load_study(...)

# Failed trials
failed = [t for t in study.trials
          if t.state == optuna.trial.TrialState.FAIL]
print(f"Failed: {len(failed)}")

for t in failed[:5]:
    print(f"Trial {t.number}: {t.user_attrs}")

# Pruned trials
pruned = [t for t in study.trials
          if t.state == optuna.trial.TrialState.PRUNED]
print(f"Pruned: {len(pruned)}")
```

---

## Recovery Actions

### Reset Study (Start Fresh)

```bash
# Backup first
cp -r studies/my_study/2_results studies/my_study/2_results_backup

# Delete results
rm -rf studies/my_study/2_results/*

# Run fresh
python run_optimization.py
```

### Resume Interrupted Study

```bash
python run_optimization.py --resume
```

### Restore from Backup

```bash
cp -r studies/my_study/2_results_backup/* studies/my_study/2_results/
```

---

## Getting Help

### Information to Provide

When asking for help, include:
1. Error message (full traceback)
2. Config file contents
3. Study structure (`ls -la`)
4. What you tried
5. NX log excerpt (if it's an NX error)

### Log Locations

| Log | Location |
|-----|----------|
| Optimization | Console output, or redirect to a file |
| NX Solve | `1_setup/model/*.log`, `*.f06` |
| Database | `2_results/study.db` (query with optuna) |
| Intelligence | `2_results/intelligent_optimizer/*.json` |

---

## Cross-References

- **Related**: All operation protocols
- **System**: [SYS_10_IMSO](../system/SYS_10_IMSO.md), [SYS_12_EXTRACTOR_LIBRARY](../system/SYS_12_EXTRACTOR_LIBRARY.md)

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial release |
@@ -0,0 +1,239 @@
# OP_07: Disk Space Optimization

**Version:** 1.0
**Last Updated:** 2025-12-29

## Overview

This protocol manages disk space for Atomizer studies through:
1. **Local cleanup** - Remove regenerable files from completed studies
2. **Remote archival** - Archive to the dalidou server (14TB available)
3. **On-demand restore** - Pull archived studies back when needed
## Disk Usage Analysis

### Typical Study Breakdown

| File Type | Size/Trial | Purpose | Keep? |
|-----------|------------|---------|-------|
| `.op2` | 68 MB | Nastran results | **YES** - Needed for analysis |
| `.prt` | 30 MB | NX parts | NO - Copy of master |
| `.dat` | 16 MB | Solver input | NO - Regenerable |
| `.fem` | 14 MB | FEM mesh | NO - Copy of master |
| `.sim` | 7 MB | Simulation | NO - Copy of master |
| `.afm` | 4 MB | Assembly FEM | NO - Regenerable |
| `.json` | <1 MB | Params/results | **YES** - Metadata |
| Logs | <1 MB | F04/F06/log | NO - Diagnostic only |

**Per-trial overhead:** ~150 MB total, of which only ~70 MB is essential

### M1_Mirror Example

```
Current:        194 GB (28 studies, 2000+ trials)
After cleanup:   95 GB (51% reduction)
After archive:    5 GB (keep best_design_archive only)
```
## Commands

### 1. Analyze Disk Usage

```bash
# Single study
archive_study.bat analyze studies\M1_Mirror\m1_mirror_V12

# All studies in a project
archive_study.bat analyze studies\M1_Mirror
```

Output shows:
- Total size
- Essential vs deletable breakdown
- Trial count per study
- Per-extension analysis
### 2. Cleanup Completed Study

```bash
# Dry run (default) - see what would be deleted
archive_study.bat cleanup studies\M1_Mirror\m1_mirror_V12

# Actually delete
archive_study.bat cleanup studies\M1_Mirror\m1_mirror_V12 --execute
```

**What gets deleted:**
- `.prt`, `.fem`, `.sim`, `.afm` in trial folders
- `.dat`, `.f04`, `.f06`, `.log`, `.diag` solver files
- Temp files (`.txt`, `.exp`, `.bak`)

**What is preserved:**
- `1_setup/` folder (master model)
- `3_results/` folder (database, reports)
- All `.op2` files (Nastran results)
- All `.json` files (params, metadata)
- All `.npz` files (Zernike coefficients)
- `best_design_archive/` folder
### 3. Archive to Remote Server

```bash
# Dry run
archive_study.bat archive studies\M1_Mirror\m1_mirror_V12

# Actually archive
archive_study.bat archive studies\M1_Mirror\m1_mirror_V12 --execute

# Use Tailscale (when not on the local network)
archive_study.bat archive studies\M1_Mirror\m1_mirror_V12 --execute --tailscale
```

**Process:**
1. Creates a compressed `.tar.gz` archive
2. Uploads it to `papa@192.168.86.50:/srv/storage/atomizer-archive/`
3. Deletes the local archive after a successful upload
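
The three steps can be sketched with standard tools. This is only an approximation of what `archive_study.bat` does: the `mkdir -p` creates a stand-in study directory so the demo is self-contained, and the upload command is echoed rather than run:

```shell
STUDY=studies/M1_Mirror/m1_mirror_V12
NAME=$(basename "$STUDY")
mkdir -p "$STUDY"   # stand-in study dir for this sketch

# 1. Compress
tar -czf "/tmp/${NAME}.tar.gz" -C "$(dirname "$STUDY")" "$NAME"

# 2. Upload (command shown, not executed here)
echo rsync -avh --progress "/tmp/${NAME}.tar.gz" \
    papa@192.168.86.50:/srv/storage/atomizer-archive/

# 3. Delete the local archive after a successful upload
rm "/tmp/${NAME}.tar.gz"
```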

### 4. List Remote Archives

```bash
archive_study.bat list

# Via Tailscale
archive_study.bat list --tailscale
```

### 5. Restore from Remote

```bash
# Restore to the studies/ folder
archive_study.bat restore m1_mirror_V12

# Via Tailscale
archive_study.bat restore m1_mirror_V12 --tailscale
```

## Remote Server Setup

**Server:** dalidou (Lenovo W520)
- Local IP: `192.168.86.50`
- Tailscale IP: `100.80.199.40`
- SSH user: `papa`
- Archive path: `/srv/storage/atomizer-archive/`

### First-Time Setup

SSH into dalidou and create the archive directory:

```bash
ssh papa@192.168.86.50
mkdir -p /srv/storage/atomizer-archive
```

Ensure SSH key authentication is set up for passwordless transfers:

```bash
# On Windows (PowerShell): ssh-copy-id is not shipped with Windows'
# built-in OpenSSH, so append the public key manually
# (adjust the key filename to yours):
type $env:USERPROFILE\.ssh\id_ed25519.pub | ssh papa@192.168.86.50 "cat >> ~/.ssh/authorized_keys"
```
## Recommended Workflow

### During Active Optimization

Keep all files - you may need to re-run specific trials.

### After Study Completion

1. **Generate final report** (`STUDY_REPORT.md`)
2. **Archive best design** to `3_results/best_design_archive/`
3. **Cleanup:**
   ```bash
   archive_study.bat cleanup studies\M1_Mirror\m1_mirror_V12 --execute
   ```

### For Long-Term Storage

1. **After cleanup**, archive to the server:
   ```bash
   archive_study.bat archive studies\M1_Mirror\m1_mirror_V12 --execute
   ```

2. **Optionally delete local copies** (keep only `3_results/best_design_archive/`)

### When Revisiting an Old Study

1. **Restore:**
   ```bash
   archive_study.bat restore m1_mirror_V12
   ```

2. If you need to re-run trials, the `1_setup/` master files allow regenerating everything
## Safety Features

- **Dry run by default** - Must add `--execute` to actually delete/transfer
- **Master files preserved** - `1_setup/` is never touched
- **Results preserved** - `3_results/` is never touched
- **Essential files preserved** - OP2, JSON, NPZ are always kept

## Disk Space Targets

| Stage | M1_Mirror Target |
|-------|------------------|
| Active development | 200 GB (full) |
| Completed studies | 95 GB (after cleanup) |
| Archived (minimal local) | 5 GB (best only) |
| Server archive | 50 GB compressed |
## Troubleshooting

### SSH Connection Failed

```bash
# Test connectivity
ping 192.168.86.50

# Test SSH
ssh papa@192.168.86.50 "echo connected"

# If on a different network, use Tailscale
ssh papa@100.80.199.40 "echo connected"
```

### Archive Upload Slow

Large studies (50+ GB) take time. The tool uses `rsync` with progress display.
For very large archives, consider running overnight or using a direct LAN connection.

### Out of Disk Space During Archive

The archive is created locally first. Ensure you have ~1.5x the study size free:
- a 20 GB study needs ~30 GB of temporary space
## Python API

```python
from pathlib import Path

from optimization_engine.utils.study_archiver import (
    analyze_study,
    cleanup_study,
    archive_to_remote,
    restore_from_remote,
    list_remote_archives,
)

# Analyze
analysis = analyze_study(Path("studies/M1_Mirror/m1_mirror_V12"))
print(f"Deletable: {analysis['deletable_size']/1e9:.2f} GB")

# Cleanup (dry_run=False to actually delete)
cleanup_study(Path("studies/M1_Mirror/m1_mirror_V12"), dry_run=False)

# Archive
archive_to_remote(Path("studies/M1_Mirror/m1_mirror_V12"), dry_run=False)

# List remote
archives = list_remote_archives()
for a in archives:
    print(f"{a['name']}: {a['size']}")
```
276
hq/skills/atomizer-protocols/protocols/OP_08_GENERATE_REPORT.md
Normal file
@@ -0,0 +1,276 @@
# OP_08: Generate Study Report

<!--
PROTOCOL: Automated Study Report Generation
LAYER: Operations
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2026-01-06
PRIVILEGE: user
LOAD_WITH: []
-->

## Overview

This protocol covers automated generation of comprehensive study reports via the Dashboard API or CLI. Reports include executive summaries, optimization metrics, best solutions, and engineering recommendations.

---

## When to Use

| Trigger | Action |
|---------|--------|
| "generate report" | Follow this protocol |
| Dashboard "Report" button | API endpoint called |
| Optimization complete | Auto-generate option |
| CLI `atomizer report <study>` | Direct generation |

---

## Quick Reference

**API Endpoint**: `POST /api/optimization/studies/{study_id}/report/generate`

**Output**: `STUDY_REPORT.md` in the study root directory

**Formats Supported**: Markdown (default), JSON (data export)

---
## Generation Methods

### 1. Via Dashboard

Click the "Generate Report" button in the study control panel. The report is generated and displayed in the Reports tab.

### 2. Via API

```bash
# Generate report
curl -X POST http://localhost:8003/api/optimization/studies/my_study/report/generate

# Response
{
    "success": true,
    "content": "# Study Report: ...",
    "path": "/path/to/STUDY_REPORT.md",
    "generated_at": "2026-01-06T12:00:00"
}
```
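
The same call can be made from Python with the standard library alone. A sketch: the endpoint and port are the ones documented above, while the `generate_report` helper name is hypothetical:

```python
import json
from urllib.request import Request, urlopen

def generate_report(study_id: str, host: str = "http://localhost:8003") -> dict:
    """POST to the report-generation endpoint and return its JSON payload."""
    url = f"{host}/api/optimization/studies/{study_id}/report/generate"
    with urlopen(Request(url, method="POST")) as resp:
        return json.load(resp)
```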

### 3. Via CLI

```bash
# Using Claude Code
"Generate a report for the bracket_optimization study"

# Direct Python
python -m optimization_engine.reporting.markdown_report studies/bracket_optimization
```

---

## Report Sections

### Executive Summary

Generated automatically from trial data:
- Total trials completed
- Best objective value achieved
- Improvement percentage from the initial design
- Key findings

### Results Table

| Metric | Initial | Final | Change |
|--------|---------|-------|--------|
| Objective 1 | X | Y | Z% |
| Objective 2 | X | Y | Z% |

### Best Solution

- Trial number
- All design variable values
- All objective values
- Constraint satisfaction status
- User attributes (source, validation status)

### Design Variables Summary

| Variable | Min | Max | Best Value | Sensitivity |
|----------|-----|-----|------------|-------------|
| var_1 | 0.0 | 10.0 | 5.23 | High |
| var_2 | 0.0 | 20.0 | 12.87 | Medium |

### Convergence Analysis

- Trials to 50% improvement
- Trials to 90% improvement
- Convergence rate assessment
- Phase breakdown (exploration, exploitation, refinement)

### Recommendations

Auto-generated based on results:
- Further optimization suggestions
- Sensitivity observations
- Next steps for validation
---

## Backend Implementation

**File**: `atomizer-dashboard/backend/api/routes/optimization.py`

```python
@router.post("/studies/{study_id}/report/generate")
async def generate_report(study_id: str, format: str = "markdown"):
    """
    Generate comprehensive study report.

    Args:
        study_id: Study identifier
        format: Output format (markdown, json)

    Returns:
        Generated report content and file path
    """
    # (Excerpt: study_dir and db are resolved earlier in the route)
    # Load configuration
    config = load_config(study_dir)

    # Query database for all trials
    trials = get_all_completed_trials(db)
    best_trial = get_best_trial(db)

    # Calculate metrics
    stats = calculate_statistics(trials)

    # Generate markdown
    report = generate_markdown_report(study_id, config, trials, best_trial, stats)

    # Save to file
    report_path = study_dir / "STUDY_REPORT.md"
    report_path.write_text(report)

    return {
        "success": True,
        "content": report,
        "path": str(report_path),
        "generated_at": datetime.now().isoformat()
    }
```

---

## Report Template

The generated report follows this structure:

```markdown
# {Study Name} - Optimization Report

**Generated:** {timestamp}
**Status:** {Completed/In Progress}

---

## Executive Summary

This optimization study completed **{n_trials} trials** and achieved a
**{improvement}%** improvement in the primary objective.

| Metric | Value |
|--------|-------|
| Total Trials | {n} |
| Best Value | {best} |
| Initial Value | {initial} |
| Improvement | {pct}% |

---

## Objectives

| Name | Direction | Weight | Best Value |
|------|-----------|--------|------------|
| {obj_name} | minimize | 1.0 | {value} |

---

## Design Variables

| Name | Min | Max | Best Value |
|------|-----|-----|------------|
| {var_name} | {min} | {max} | {best} |

---

## Best Solution

**Trial #{n}** achieved the optimal result.

### Parameters
- var_1: {value}
- var_2: {value}

### Objectives
- objective_1: {value}

### Constraints
- All constraints satisfied: Yes/No

---

## Convergence Analysis

- Initial best: {value} (trial 1)
- Final best: {value} (trial {n})
- 90% improvement reached at trial {n}

---

## Recommendations

1. Validate the best solution with high-fidelity FEA
2. Consider a sensitivity analysis around the optimal design point
3. Check manufacturing feasibility of the optimal parameters

---

*Generated by Atomizer Dashboard*
```

---

## Prerequisites

Before generating a report:
- [ ] The study must have at least 1 completed trial
- [ ] study.db must exist in the results directory
- [ ] optimization_config.json must be present
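
The checklist above can be sketched as a pre-flight helper. Assumptions: the `2_results`/`1_setup` paths follow the study layout used elsewhere in these protocols, and the `trials` table name assumes Optuna's SQLite schema:

```python
import sqlite3
from pathlib import Path

def report_ready(study_dir: Path) -> bool:
    """True when all three report prerequisites are satisfied."""
    db = study_dir / "2_results" / "study.db"
    cfg = study_dir / "1_setup" / "optimization_config.json"
    if not (db.exists() and cfg.exists()):
        return False
    with sqlite3.connect(db) as conn:
        (n,) = conn.execute("SELECT COUNT(*) FROM trials").fetchone()
    return n >= 1
```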

---

## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| "No trials found" | Empty database | Run the optimization first |
| "Config not found" | Missing config file | Verify study setup |
| "Database locked" | Optimization running | Wait, or pause first |
| "Invalid study" | Study path not found | Check the study ID |

---

## Cross-References

- **Preceded By**: [OP_04_ANALYZE_RESULTS](./OP_04_ANALYZE_RESULTS.md)
- **Related**: [SYS_13_DASHBOARD](../system/SYS_13_DASHBOARD.md)
- **Triggered By**: Dashboard Report button

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2026-01-06 | Initial release - Dashboard integration |
@@ -0,0 +1,60 @@
# OP_09 — Agent Handoff Protocol

## Purpose
Defines how agents pass work to each other in a structured, traceable way.

## When to Use
- Manager assigns work to a specialist
- One agent's output becomes another's input
- An agent needs help from another agent's expertise

## Handoff Format

When handing off work, include ALL of the following:

```
## Handoff: [Source Agent] → [Target Agent]

**Task:** [What needs to be done — clear, specific, actionable]
**Context:** [Why this is needed — project, deadline, priority]
**Inputs:** [What the target agent needs — files, data, previous analysis]
**Expected Output:** [What should come back — format, level of detail]
**Protocol:** [Which protocol applies — OP_01, SYS_15, etc.]
**Deadline:** [When this is needed — explicit or "ASAP"]
**Thread:** [Link to relevant Slack thread for context]
```

## Rules

1. **Manager initiates most handoffs.** Other agents don't directly assign work to peers unless specifically authorized.
2. **Always include context.** The target agent shouldn't need to search for background.
3. **One handoff per message.** Don't bundle multiple tasks.
4. **Acknowledge receipt.** The target agent confirms they've received and understood the handoff.
5. **Report completion.** The target agent posts results in the same thread and notifies the source.

## Escalation
If the target agent can't complete the handoff:
1. Reply in the same thread explaining why
2. Propose alternatives
3. Manager decides next steps

## Examples

### Good Handoff
```
## Handoff: Manager → Technical Lead

**Task:** Break down the StarSpec M1 WFE optimization requirements
**Context:** New client project. Contract attached. Priority: HIGH.
**Inputs:** Contract PDF (attached), model files in knowledge_base/projects/starspec-m1/
**Expected Output:** Parameter list, objectives, constraints, solver recommendation
**Protocol:** OP_01 (Study Lifecycle) + OP_10 (Project Intake)
**Deadline:** EOD today
**Thread:** #starspec-m1-wfe (this thread)
```

### Bad Handoff
```
@technical do the breakdown thing for the new project
```
*(Missing: context, inputs, expected output, deadline, protocol)*
119
hq/skills/atomizer-protocols/protocols/OP_10_PROJECT_INTAKE.md
Normal file
@@ -0,0 +1,119 @@
|
||||
# OP_10 — Project Intake Protocol

## Purpose
Defines how new projects enter the Atomizer Engineering system.

## Trigger
Antoine (CEO) posts a new project request, typically in `#hq` or directly to `#secretary`.

## Steps

### Step 1: Manager Acknowledges (< 5 min)
- Manager acknowledges receipt in the originating channel
- Creates a project channel: `#<client>-<short-description>`
- Posts project kickoff message in new channel

### Step 2: Technical Breakdown (< 4 hours)
Manager hands off to Technical Lead (per OP_09):
- **Input:** Contract/requirements from Antoine
- **Output:** Structured breakdown containing:
  - Geometry description
  - Design variables (parameters to optimize)
  - Objectives (what to minimize/maximize)
  - Constraints (limits that must be satisfied)
  - Solver requirements (SOL type, load cases)
  - Gap analysis (what's missing or unclear)

### Step 3: Algorithm Recommendation (after Step 2)
Manager hands off to Optimizer:
- **Input:** Technical Lead's breakdown
- **Output:** Algorithm recommendation with:
  - Recommended algorithm and why
  - Population/trial budget
  - Expected convergence behavior
  - Alternatives considered

### Step 4: Project Plan Compilation (Manager)
Manager compiles:
- Technical breakdown
- Algorithm recommendation
- Timeline estimate
- Risk assessment

### Step 5: CEO Approval
Secretary presents the compiled plan to Antoine in `#secretary`:

```
📋 **New Project Plan — [Project Name]**

**Summary:** [1-2 sentences]
**Timeline:** [Estimated duration]
**Cost:** [Estimated API cost for this project]
**Risk:** [High/Medium/Low + key risk]

⚠️ **Needs CEO approval to proceed.**

[Full plan in thread ↓]
```

### Step 6: Kickoff (after approval)
Manager posts in project channel:
- Approved plan
- Agent assignments
- First task handoffs
- Timeline milestones

## Templates

### Project Kickoff Message

```
🎯 **Project Kickoff: [Project Name]**

**Client:** [Client name]
**Objective:** [What we're optimizing]
**Timeline:** [Start → End]
**Team:** [List of agents involved]

**Status:** 🟢 Active

**Milestones:**
1. [ ] Technical breakdown
2. [ ] Algorithm selection
3. [ ] Study build
4. [ ] Execution
5. [ ] Analysis
6. [ ] Audit
7. [ ] Report
8. [ ] Delivery
```

### CONTEXT.md Template
Create in `knowledge_base/projects/<project>/CONTEXT.md`:

```markdown
# CONTEXT.md — [Project Name]

## Client
[Client name and context]

## Objective
[What we're optimizing and why]

## Key Parameters
| Parameter | Range | Units | Notes |
|-----------|-------|-------|-------|

## Constraints
- [List all constraints]

## Model
- NX assembly: [filename]
- FEM: [filename]
- Simulation: [filename]
- Solver: [SOL type]

## Decisions
- [Date]: [Decision made]

## Status
Phase: [Current phase]
Channel: [Slack channel]
```
183
hq/skills/atomizer-protocols/protocols/OP_11_DIGESTION.md
Normal file
@@ -0,0 +1,183 @@
# OP_11 — Digestion Protocol

## Purpose
Enforce a structured learning cycle after each project phase — modeled on human sleep consolidation. We store what matters, discard noise, sort knowledge, repair gaps, and evolve our processes.

> "I really want you to enforce digestion, and learning, like what we do (human) while dreaming, we store, discard unnecessary, sort things, repair etc. I want you to do the same. In the end, I want you to evolve and document yourself as well."
> — Antoine Letarte, CEO (2026-02-11)

## Triggers

| Trigger | Scope | Who Initiates |
|---------|-------|---------------|
| **Phase completion** | Full digestion | Manager, after study phase closes |
| **Milestone hit** | Focused digestion | Manager or lead agent |
| **Weekly heartbeat** | Incremental housekeeping | Automated (cron/heartbeat) |
| **Project close** | Deep digestion + retrospective | Manager |

## The Six Operations

### 1. 📥 STORE — Extract & Persist
**Goal:** Capture what we learned that's reusable beyond this session.

**Actions:**
- Extract key findings from daily logs into `MEMORY.md` (per agent)
- Promote project-specific insights to `knowledge_base/projects/<project>/`
- Record new solver quirks, expression names, NX behaviors → domain KB
- Log performance data: what algorithm/settings worked, convergence rates
- Capture Antoine's corrections as **ground truth** (highest priority)

**Output:** Updated MEMORY.md, project CONTEXT.md, domain KB entries

### 2. 🗑️ DISCARD — Prune & Clean
**Goal:** Remove outdated, wrong, or redundant information.

**Actions:**
- Identify contradictions in memory files (e.g., mass=11.33 vs 1133)
- Remove stale daily logs older than 30 days (archive summary to MEMORY.md first)
- Flag and remove dead references (deleted files, renamed paths, obsolete configs)
- Clear TODO items that are done — mark complete, don't just leave them
- Remove verbose/redundant entries (compress repeated patterns into single lessons)

**Anti-pattern to catch:** Information that was corrected but the wrong version still lives somewhere.
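The 30-day pruning rule can be sketched as a small helper. Assumptions: daily logs are named `YYYY-MM-DD.md` inside the agent's `memory/` directory, and the caller archives a summary to MEMORY.md before deleting; the `stale_daily_logs` function name is illustrative:

```python
from datetime import date, timedelta
from pathlib import Path

def stale_daily_logs(memory_dir: Path, today: date, keep_days: int = 30) -> list[Path]:
    """List memory/YYYY-MM-DD.md daily logs older than keep_days.
    Illustrative sketch; archive a summary before deleting (per DISCARD)."""
    cutoff = today - timedelta(days=keep_days)
    stale = []
    for log in memory_dir.glob("*-*-*.md"):
        try:
            logged = date.fromisoformat(log.stem)
        except ValueError:
            continue  # filename is not a daily log
        if logged < cutoff:
            stale.append(log)
    return sorted(stale)
```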
### 3. 📂 SORT — Organize Hierarchically
**Goal:** Put knowledge at the right level of abstraction.

**Levels:**

| Level | Location | Example |
|-------|----------|---------|
| **Session** | `memory/YYYY-MM-DD.md` | "Fixed FEM lookup to exclude _i parts" |
| **Project** | `knowledge_base/projects/<project>/` | "Hydrotech beam uses CQUAD4 thin shells, SOL 101" |
| **Domain** | `knowledge_base/domain/` or skills | "NX integer expressions need unit=Constant" |
| **Company** | `atomizer-protocols`, `MEMORY.md` | "Always resolve paths with .resolve(), not .absolute()" |

**Actions:**
- Review session notes → promote recurring patterns up one level
- Check if project-specific knowledge is actually domain-general
- Ensure company-level lessons are in protocols or QUICK_REF, not buried in daily logs

### 4. 🔧 REPAIR — Fix Gaps & Drift
**Goal:** Reconcile what we documented vs what's actually true.

**Actions:**
- Cross-reference CONTEXT.md with actual code/config (do they match?)
- Verify file paths in docs still exist
- Check if protocol descriptions match actual practice (drift detection)
- Run through open gaps (G1, G2, etc.) — are any now resolved but not marked?
- Validate agent SOUL.md and AGENTS.md reflect current capabilities and team composition

**Key question:** "If a brand-new agent read our docs cold, would they be able to do the work?"
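The path-verification step can be partially automated. A minimal sketch, assuming referenced files appear in backticks with common extensions; both the regex and the `dead_references` helper are assumptions, not an existing script:

```python
import re
from pathlib import Path

# Assumed convention: docs reference files in backticks, e.g. `MEMORY.md`.
PATH_RE = re.compile(r"`([\w./-]+\.(?:md|py|json|yaml))`")

def dead_references(doc_root: Path) -> list[tuple[Path, str]]:
    """Return (doc, referenced-path) pairs where the path no longer exists,
    resolving references relative to doc_root. Illustrative drift check."""
    missing = []
    for doc in doc_root.rglob("*.md"):
        for ref in PATH_RE.findall(doc.read_text(encoding="utf-8")):
            if not (doc_root / ref).exists():
                missing.append((doc, ref))
    return missing
```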
### 5. 🧬 EVOLVE — Improve Processes
**Goal:** Get smarter, not just busier.

**Actions:**
- **What slowed us down?** → Fix the process, not just the symptom
- **What did we repeat?** → Automate it or create a template
- **What did we get wrong?** → Add a check, update a protocol
- **What did Antoine correct?** → That's the highest-signal feedback. Build it in.
- **Agent performance:** Did any agent struggle? Need better context? A different model?
- Propose protocol updates (new OP/SYS or amendments to existing)
- Update QUICK_REF.md if new shortcuts or patterns emerged

**Output:** Protocol amendment proposals, agent config updates, new templates

### 6. 📝 SELF-DOCUMENT — Update the Mirror
**Goal:** Our docs should reflect who we are *now*, not who we were at launch.

**Actions:**
- Update AGENTS.md with current team composition and active channels
- Update SOUL.md if role understanding has evolved
- Update IDENTITY.md if capabilities changed
- Refresh TOOLS.md with newly discovered tools or changed workflows
- Update project README files with actual status
- Ensure QUICK_REF.md reflects current best practices

**Test:** Read your own docs. Do they describe *you* today?

---

## Execution Format

### Phase Completion Digestion (Full)
Run all 6 operations. Manager coordinates; each agent digests their own workspace.

```
🧠 **Digestion Cycle — [Project] Phase [N] Complete**

**Trigger:** [Phase completion / Milestone / Weekly]
**Scope:** [Full / Focused / Incremental]

### STORE
- [What was captured and where]

### DISCARD
- [What was pruned/removed]

### SORT
- [What was promoted/reorganized]

### REPAIR
- [What was fixed/reconciled]

### EVOLVE
- [Process improvements proposed]

### SELF-DOCUMENT
- [Docs updated]

**Commits:** [list of commits]
**Next:** [What happens after digestion]
```

### Weekly Heartbeat Digestion (Incremental)
Lighter pass — focus on DISCARD and REPAIR. Run by Manager during the weekly heartbeat.

**Checklist:**
- [ ] Any contradictions in memory files?
- [ ] Any stale TODOs that are actually done?
- [ ] Any file paths that no longer exist?
- [ ] Any corrections from Antoine not yet propagated?
- [ ] Any process improvements worth capturing?

### Project Close Digestion (Deep)
Full pass + retrospective. Captures the complete project learning.

**Additional steps:**
- Write project retrospective: `knowledge_base/projects/<project>/RETROSPECTIVE.md`
- Extract reusable components → propose for shared skills
- Update LAC (Lessons and Corrections) if applicable
- Archive project memory (compress daily logs into a single summary)

---

## Responsibilities

| Agent | Digests |
|-------|---------|
| **Manager** | Orchestrates cycle, digests own workspace, coordinates cross-agent |
| **Technical Lead** | Domain knowledge, model insights, solver quirks |
| **Optimizer** | Algorithm performance, strategy effectiveness |
| **Study Builder** | Code patterns, implementation lessons, reusable components |
| **Auditor** | Quality patterns, common failure modes, review effectiveness |
| **Secretary** | Communication patterns, Antoine preferences, admin workflows |

## Quality Gate

After digestion, Manager reviews:
1. Were all 6 operations addressed?
2. Were Antoine's corrections captured as ground truth?
3. Are docs consistent with reality?
4. Any proposed changes needing CEO approval?

If changes affect protocols or company-level knowledge:

> ⚠️ **Needs CEO approval:** [summary of proposed changes]

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0.0 | 2026-02-11 | Initial protocol, per CEO directive |
341
hq/skills/atomizer-protocols/protocols/SYS_10_IMSO.md
Normal file
@@ -0,0 +1,341 @@
# SYS_10: Intelligent Multi-Strategy Optimization (IMSO)

<!--
PROTOCOL: Intelligent Multi-Strategy Optimization
LAYER: System
VERSION: 2.1
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: []
-->

## Overview

Protocol 10 implements adaptive optimization that automatically characterizes the problem landscape and selects the best optimization algorithm. This two-phase approach combines automated landscape analysis with algorithm-specific optimization.

**Key Innovation**: An adaptive characterization phase that intelligently determines when enough exploration has been done, then transitions to the optimal algorithm.

---

## When to Use

| Trigger | Action |
|---------|--------|
| Single-objective optimization | Use this protocol |
| "adaptive", "intelligent", "IMSO" mentioned | Load this protocol |
| User unsure which algorithm to use | Recommend this protocol |
| Complex landscape suspected | Use this protocol |

**Do NOT use when**: Multi-objective optimization is needed (use SYS_11 instead)

---

## Quick Reference

| Parameter | Default | Range | Description |
|-----------|---------|-------|-------------|
| `min_trials` | 10 | 5-50 | Minimum characterization trials |
| `max_trials` | 30 | 10-100 | Maximum characterization trials |
| `confidence_threshold` | 0.85 | 0.0-1.0 | Stopping confidence level |
| `check_interval` | 5 | 1-10 | Trials between checks |

**Landscape → Algorithm Mapping**:

| Landscape Type | Primary Strategy | Fallback |
|----------------|------------------|----------|
| smooth_unimodal | GP-BO | CMA-ES |
| smooth_multimodal | GP-BO | TPE |
| rugged_unimodal | TPE | CMA-ES |
| rugged_multimodal | TPE | - |
| noisy | TPE | - |
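The mapping table above is essentially a lookup. A sketch using Optuna's sampler class names; the `STRATEGY_MAP` structure and `select_strategy` helper are illustrative, not the actual `strategy_selector.py` implementation:

```python
# Illustrative lookup mirroring the Landscape -> Algorithm table above.
# Values are Optuna sampler class names; None means no fallback.
STRATEGY_MAP = {
    "smooth_unimodal":   ("GPSampler", "CmaEsSampler"),
    "smooth_multimodal": ("GPSampler", "TPESampler"),
    "rugged_unimodal":   ("TPESampler", "CmaEsSampler"),
    "rugged_multimodal": ("TPESampler", None),
    "noisy":             ("TPESampler", None),
}

def select_strategy(landscape_type: str, allow_primary: bool = True) -> str:
    """Return the sampler name for a landscape class, using the fallback
    when the primary is disallowed (and one exists)."""
    primary, fallback = STRATEGY_MAP[landscape_type]
    if allow_primary or fallback is None:
        return primary
    return fallback
```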
---

## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│ PHASE 1: ADAPTIVE CHARACTERIZATION STUDY                    │
│ ─────────────────────────────────────────────────────────── │
│ Sampler: Random/Sobol (unbiased exploration)                │
│ Trials: 10-30 (adapts to problem complexity)                │
│                                                             │
│ Every 5 trials:                                             │
│   → Analyze landscape metrics                               │
│   → Check metric convergence                                │
│   → Calculate characterization confidence                   │
│   → Decide if ready to stop                                 │
│                                                             │
│ Stop when:                                                  │
│   ✓ Confidence ≥ 85%                                        │
│   ✓ OR max trials reached (30)                              │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│ TRANSITION: LANDSCAPE ANALYSIS & STRATEGY SELECTION         │
│ ─────────────────────────────────────────────────────────── │
│ Analyze:                                                    │
│   - Smoothness (0-1)                                        │
│   - Multimodality (number of modes)                         │
│   - Parameter correlation                                   │
│   - Noise level                                             │
│                                                             │
│ Classify & Recommend:                                       │
│   smooth_unimodal → GP-BO (best) or CMA-ES                  │
│   smooth_multimodal → GP-BO                                 │
│   rugged_multimodal → TPE                                   │
│   rugged_unimodal → TPE or CMA-ES                           │
│   noisy → TPE (most robust)                                 │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│ PHASE 2: OPTIMIZATION STUDY                                 │
│ ─────────────────────────────────────────────────────────── │
│ Sampler: Recommended from Phase 1                           │
│ Warm Start: Initialize from best characterization point     │
│ Trials: User-specified (default 50)                         │
└─────────────────────────────────────────────────────────────┘
```

---

## Core Components

### 1. Adaptive Characterization (`adaptive_characterization.py`)

**Confidence Calculation**:
```python
confidence = (
    0.40 * metric_stability_score +   # Are metrics converging?
    0.30 * parameter_coverage_score + # Explored enough space?
    0.20 * sample_adequacy_score +    # Enough samples for complexity?
    0.10 * landscape_clarity_score    # Clear classification?
)
```
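As a runnable form of the weighted sum (the weights are the ones from the formula; the helper name is illustrative):

```python
def characterization_confidence(stability: float, coverage: float,
                                adequacy: float, clarity: float) -> float:
    """Weighted confidence from the formula above; each score is in [0, 1].
    Helper name is illustrative; weights mirror the document."""
    weights = (0.40, 0.30, 0.20, 0.10)
    scores = (stability, coverage, adequacy, clarity)
    return sum(w * s for w, s in zip(weights, scores))
```

With the default `confidence_threshold` of 0.85, strong scores such as (0.95, 0.9, 0.8, 0.7) are enough to stop characterization.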
**Stopping Criteria**:
- **Minimum trials**: 10 (baseline data requirement)
- **Maximum trials**: 30 (prevent over-characterization)
- **Confidence threshold**: 85% (high confidence required)
- **Check interval**: Every 5 trials

**Adaptive Behavior**:
```python
# Simple problem (smooth, unimodal, low noise):
if smoothness > 0.6 and unimodal and noise < 0.3:
    required_samples = 10 + dimensionality
    # Stops at ~10-15 trials

# Complex problem (multimodal with N modes):
if multimodal and n_modes > 2:
    required_samples = 10 + 5 * n_modes + 2 * dimensionality
    # Continues to ~20-30 trials
```

### 2. Landscape Analyzer (`landscape_analyzer.py`)

**Metrics Computed**:

| Metric | Method | Interpretation |
|--------|--------|----------------|
| Smoothness (0-1) | Spearman correlation | >0.6: good for CMA-ES, GP-BO |
| Multimodality | DBSCAN clustering | Detects distinct good regions |
| Correlation | Parameter-objective correlation | Identifies influential params |
| Noise (0-1) | Local consistency check | True simulation instability |
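The Spearman-based smoothness metric can be sketched with NumPy alone (Spearman's rho is the Pearson correlation of ranks). The mean-of-absolute-rho reduction and the `smoothness_score` name are assumptions, not the actual `landscape_analyzer.py` code:

```python
import numpy as np

def _rank(x: np.ndarray) -> np.ndarray:
    # Simple ranking; ties are unlikely for continuous parameters.
    order = np.argsort(x)
    ranks = np.empty(len(x), dtype=float)
    ranks[order] = np.arange(len(x))
    return ranks

def spearman(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman rank correlation = Pearson correlation of the ranks."""
    return float(np.corrcoef(_rank(x), _rank(y))[0, 1])

def smoothness_score(params: np.ndarray, objectives: np.ndarray) -> float:
    """Mean |Spearman rho| across parameter columns, in [0, 1].
    Illustrative: a monotone objective-parameter relation scores ~1."""
    params = np.asarray(params, dtype=float)
    objectives = np.asarray(objectives, dtype=float)
    scores = [abs(spearman(params[:, j], objectives))
              for j in range(params.shape[1])]
    return float(np.mean(scores)) if scores else 0.0
```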
**Landscape Classifications**:
- `smooth_unimodal`: Single smooth bowl
- `smooth_multimodal`: Multiple smooth regions
- `rugged_unimodal`: Single rugged region
- `rugged_multimodal`: Multiple rugged regions
- `noisy`: High noise level

### 3. Strategy Selector (`strategy_selector.py`)

**Algorithm Characteristics**:

**GP-BO (Gaussian Process Bayesian Optimization)**:
- Best for: Smooth, expensive functions (like FEA)
- Explicit surrogate model with uncertainty quantification
- Acquisition function balances exploration/exploitation

**CMA-ES (Covariance Matrix Adaptation Evolution Strategy)**:
- Best for: Smooth unimodal problems
- Fast convergence to a local optimum
- Adapts search distribution to landscape

**TPE (Tree-structured Parzen Estimator)**:
- Best for: Multimodal, rugged, or noisy problems
- Robust to noise and discontinuities
- Good global exploration

### 4. Intelligent Optimizer (`intelligent_optimizer.py`)

**Workflow**:
1. Create characterization study (Random/Sobol sampler)
2. Run adaptive characterization with stopping criterion
3. Analyze final landscape
4. Select optimal strategy
5. Create optimization study with recommended sampler
6. Warm-start from best characterization point
7. Run optimization
8. Generate intelligence report

---
## Configuration

Add to `optimization_config.json`:

```json
{
  "intelligent_optimization": {
    "enabled": true,
    "characterization": {
      "min_trials": 10,
      "max_trials": 30,
      "confidence_threshold": 0.85,
      "check_interval": 5
    },
    "landscape_analysis": {
      "min_trials_for_analysis": 10
    },
    "strategy_selection": {
      "allow_cmaes": true,
      "allow_gpbo": true,
      "allow_tpe": true
    }
  },
  "trials": {
    "n_trials": 50
  }
}
```

---

## Usage Example

```python
from pathlib import Path
from optimization_engine.intelligent_optimizer import IntelligentOptimizer

# Create optimizer
optimizer = IntelligentOptimizer(
    study_name="my_optimization",
    study_dir=Path("studies/my_study/2_results"),
    config=optimization_config,
    verbose=True
)

# Define design variables
design_vars = {
    'parameter1': (lower_bound, upper_bound),
    'parameter2': (lower_bound, upper_bound)
}

# Run Protocol 10
results = optimizer.optimize(
    objective_function=my_objective,
    design_variables=design_vars,
    n_trials=50,
    target_value=target,
    tolerance=0.1
)
```

---

## Performance Benefits

**Efficiency**:
- **Simple problems**: Early stop at ~10-15 trials (33% reduction)
- **Complex problems**: Extended characterization at ~20-30 trials
- **Right algorithm**: Uses the optimal strategy for the landscape type

**Example Performance** (Circular Plate Frequency Tuning):
- TPE alone: ~95 trials to target
- Random search: ~150+ trials
- **Protocol 10**: ~56 trials (**41% reduction**)

---

## Intelligence Reports

Protocol 10 generates three tracking files:

| File | Purpose |
|------|---------|
| `characterization_progress.json` | Metric evolution, confidence progression, stopping decision |
| `intelligence_report.json` | Final landscape classification, parameter correlations, strategy recommendation |
| `strategy_transitions.json` | Phase transitions, algorithm switches, performance metrics |

**Location**: `studies/{study_name}/2_results/intelligent_optimizer/`

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| Characterization takes too long | Complex landscape | Increase `max_trials` or accept longer characterization |
| Wrong algorithm selected | Insufficient exploration | Lower `confidence_threshold` or increase `min_trials` |
| Poor convergence | Mismatch between landscape and algorithm | Review `intelligence_report.json`, consider manual override |
| "No characterization data" | Study not using Protocol 10 | Enable `intelligent_optimization.enabled: true` |

---

## Cross-References

- **Depends On**: None
- **Used By**: [OP_01_CREATE_STUDY](../operations/OP_01_CREATE_STUDY.md), [OP_02_RUN_OPTIMIZATION](../operations/OP_02_RUN_OPTIMIZATION.md)
- **Integrates With**: [SYS_13_DASHBOARD_TRACKING](./SYS_13_DASHBOARD_TRACKING.md)
- **See Also**: [SYS_11_MULTI_OBJECTIVE](./SYS_11_MULTI_OBJECTIVE.md) for multi-objective optimization

---

## Implementation Files

- `optimization_engine/intelligent_optimizer.py` - Main orchestrator
- `optimization_engine/adaptive_characterization.py` - Stopping criterion
- `optimization_engine/landscape_analyzer.py` - Landscape metrics
- `optimization_engine/strategy_selector.py` - Algorithm recommendation

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 2.1 | 2025-11-20 | Fixed strategy selector timing, multimodality detection; added simulation validation |
| 2.0 | 2025-11-20 | Added adaptive characterization, two-study architecture |
| 1.0 | 2025-11-19 | Initial implementation |

### Version 2.1 Bug Fixes Detail

**Fix #1: Strategy Selector - Use Characterization Trial Count**

*Problem*: The strategy selector used the total trial count (including pruned trials) instead of the characterization trial count, causing wrong algorithm selection after characterization.

*Solution* (`strategy_selector.py`): Use `char_trials = landscape.get('total_trials', trials_completed)` for decisions.

**Fix #2: Improved Multimodality Detection**

*Problem*: False multimodality was detected on smooth continuous surfaces (2 modes detected when the problem was unimodal).

*Solution* (`landscape_analyzer.py`): Added a heuristic: if only 2 modes are found with smoothness > 0.6 and noise < 0.2, reclassify as unimodal (smooth continuous manifold).

**Fix #3: Simulation Validation**

*Problem*: 20% pruning rate due to extreme parameters causing mesh/solver failures.

*Solution*: Created `simulation_validator.py` with:
- Hard limits (reject invalid parameters)
- Soft limits (warn about risky parameters)
- Aspect ratio checks
- Model-specific validation rules

*Impact*: Reduced pruning rate from 20% to ~5%.
338
hq/skills/atomizer-protocols/protocols/SYS_11_MULTI_OBJECTIVE.md
Normal file
@@ -0,0 +1,338 @@
# SYS_11: Multi-Objective Support

<!--
PROTOCOL: Multi-Objective Optimization Support
LAYER: System
VERSION: 1.0
STATUS: Active (MANDATORY)
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: []
-->

## Overview

**ALL** optimization engines in Atomizer **MUST** support both single-objective and multi-objective optimization without requiring code changes. This protocol ensures system robustness and prevents runtime failures when handling Pareto optimization.

**Key Requirement**: Code must work with both `study.best_trial` (single) and `study.best_trials` (multi) APIs.

---

## When to Use

| Trigger | Action |
|---------|--------|
| 2+ objectives defined in config | Use NSGA-II sampler |
| "pareto", "multi-objective" mentioned | Load this protocol |
| "tradeoff", "competing goals" | Suggest multi-objective approach |
| "minimize X AND maximize Y" | Configure as multi-objective |

---

## Quick Reference

**Single vs. Multi-Objective API**:

| Operation | Single-Objective | Multi-Objective |
|-----------|-----------------|-----------------|
| Best trial | `study.best_trial` | `study.best_trials[0]` |
| Best params | `study.best_params` | `trial.params` |
| Best value | `study.best_value` | `trial.values` (tuple) |
| Direction | `direction='minimize'` | `directions=['minimize', 'maximize']` |
| Sampler | TPE, CMA-ES, GP | NSGA-II (mandatory) |

---

## The Problem This Solves

Previously, optimization components only supported single-objective studies. When used with multi-objective studies:

1. Trials run successfully
2. Trials are saved to the database
3. **CRASH** when compiling results:
   - `study.best_trial` raises RuntimeError
   - No tracking files generated
   - Silent failures

**Root Cause**: Optuna has different APIs:

```python
# Single-Objective (works)
study.best_trial   # Returns Trial object
study.best_params  # Returns dict
study.best_value   # Returns float

# Multi-Objective (RAISES RuntimeError)
study.best_trial   # ❌ RuntimeError
study.best_params  # ❌ RuntimeError
study.best_value   # ❌ RuntimeError
study.best_trials  # ✓ Returns LIST of Pareto-optimal trials
```

---

## Solution Pattern

### 1. Always Check Study Type

```python
is_multi_objective = len(study.directions) > 1
```

### 2. Use Conditional Access

```python
if is_multi_objective:
    best_trials = study.best_trials
    if best_trials:
        # Select representative trial (e.g., first Pareto solution)
        representative_trial = best_trials[0]
        best_params = representative_trial.params
        best_value = representative_trial.values  # Tuple
        best_trial_num = representative_trial.number
    else:
        best_params = {}
        best_value = None
        best_trial_num = None
else:
    # Single-objective: safe to use standard API
    best_params = study.best_params
    best_value = study.best_value
    best_trial_num = study.best_trial.number
```

### 3. Return Rich Metadata

Always include in results:

```python
{
    'best_params': best_params,
    'best_value': best_value,  # float or tuple
    'best_trial': best_trial_num,
    'is_multi_objective': is_multi_objective,
    'pareto_front_size': len(study.best_trials) if is_multi_objective else 1,
}
```
---
|
||||
|
||||
## Implementation Checklist
|
||||
|
||||
When creating or modifying any optimization component:
|
||||
|
||||
- [ ] **Study Creation**: Support `directions` parameter
|
||||
```python
|
||||
if len(objectives) > 1:
|
||||
directions = [obj['type'] for obj in objectives] # ['minimize', 'maximize']
|
||||
study = optuna.create_study(directions=directions, ...)
|
||||
else:
|
||||
study = optuna.create_study(direction='minimize', ...)
|
||||
```
|
||||
|
||||
- [ ] **Result Compilation**: Check `len(study.directions) > 1`
|
||||
- [ ] **Best Trial Access**: Use conditional logic
|
||||
- [ ] **Logging**: Print Pareto front size for multi-objective
|
||||
- [ ] **Reports**: Handle tuple objectives in visualization
|
||||
- [ ] **Testing**: Test with BOTH single and multi-objective cases
|
||||
|
||||
---
|
||||
|
||||
## Configuration
|
||||
|
||||
**Multi-Objective Config Example**:
|
||||
|
||||
```json
|
||||
{
|
||||
"objectives": [
|
||||
{
|
||||
"name": "stiffness",
|
||||
"type": "maximize",
|
||||
"description": "Structural stiffness (N/mm)",
|
||||
"unit": "N/mm"
|
||||
},
|
||||
{
|
||||
"name": "mass",
|
||||
"type": "minimize",
|
||||
"description": "Total mass (kg)",
|
||||
"unit": "kg"
|
||||
}
|
||||
],
|
||||
"optimization_settings": {
|
||||
"sampler": "NSGAIISampler",
|
||||
"n_trials": 50
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Objective Function Return Format**:
|
||||
|
||||
```python
|
||||
# Single-objective: return float
|
||||
def objective_single(trial):
|
||||
# ... compute ...
|
||||
return objective_value # float
|
||||
|
||||
# Multi-objective: return tuple
|
||||
def objective_multi(trial):
|
||||
# ... compute ...
|
||||
return (stiffness, mass) # tuple of floats
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Semantic Directions
|
||||
|
||||
Use semantic direction values - no negative tricks:
|
||||
|
||||
```python
|
||||
# ✅ CORRECT: Semantic directions
|
||||
objectives = [
|
||||
{"name": "stiffness", "type": "maximize"},
|
||||
{"name": "mass", "type": "minimize"}
|
||||
]
|
||||
# Return: (stiffness, mass) - both positive values
|
||||
|
||||
# ❌ WRONG: Negative trick
|
||||
def objective(trial):
|
||||
return (-stiffness, mass) # Don't negate to fake maximize
|
||||
```
|
||||
|
||||
Optuna handles directions correctly when you specify `directions=['maximize', 'minimize']`.
|
||||
|
||||
---
|
||||
|
||||
## Testing Protocol
|
||||
|
||||
Before marking any optimization component complete:
|
||||
|
||||
### Test 1: Single-Objective
|
||||
```python
|
||||
# Config with 1 objective
|
||||
directions = None # or ['minimize']
|
||||
# Run optimization
|
||||
# Verify: completes without errors
|
||||
```
|
||||
|
||||
### Test 2: Multi-Objective
|
||||
```python
|
||||
# Config with 2+ objectives
|
||||
directions = ['minimize', 'minimize']
|
||||
# Run optimization
|
||||
# Verify: completes without errors
|
||||
# Verify: ALL tracking files generated
|
||||
```
|
||||
|
||||
### Test 3: Verify Outputs
|
||||
- `2_results/study.db` exists
|
||||
- `2_results/intelligent_optimizer/` has tracking files
|
||||
- `2_results/optimization_summary.json` exists
|
||||
- No RuntimeError in logs
|
||||
|
||||
---
|
||||
|
||||
## NSGA-II Configuration

For multi-objective optimization, use NSGA-II:

```python
import optuna
from optuna.samplers import NSGAIISampler

sampler = NSGAIISampler(
    population_size=50,   # Pareto front population
    mutation_prob=None,   # Auto-computed
    crossover_prob=0.9,   # Recombination rate
    swapping_prob=0.5,    # Gene swapping probability
    seed=42               # Reproducibility
)

study = optuna.create_study(
    directions=['maximize', 'minimize'],
    sampler=sampler,
    study_name="multi_objective_study",
    storage="sqlite:///study.db"
)
```

---

## Pareto Front Handling

### Accessing Pareto Solutions

```python
if is_multi_objective:
    pareto_trials = study.best_trials
    print(f"Found {len(pareto_trials)} Pareto-optimal solutions")

    for trial in pareto_trials:
        print(f"Trial {trial.number}: {trial.values}")
        print(f"  Params: {trial.params}")
```

### Selecting Representative Solution

```python
# Option 1: First Pareto solution
representative = study.best_trials[0]

# Option 2: Weighted selection (assumes all objectives are minimized)
def weighted_selection(trials, weights):
    best_score = float('inf')
    best_trial = None
    for trial in trials:
        score = sum(w * v for w, v in zip(weights, trial.values))
        if score < best_score:
            best_score = score
            best_trial = trial
    return best_trial

# Option 3: Knee point (maximum distance from ideal line)
# Requires more complex computation
```
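Option 3 can be sketched in a few lines. A minimal knee-point heuristic for a 2-objective front, assuming both objectives are minimized: pick the point with the greatest perpendicular distance from the line joining the two extreme Pareto points. The function and data below are illustrative, not part of the library:

```python
import math

def knee_point(trials_values):
    """Pick the knee of a 2-objective Pareto front: the point with the
    maximum perpendicular distance from the line joining the two extreme
    Pareto points. `trials_values` is a list of (f1, f2) tuples."""
    pts = sorted(trials_values)           # sort by first objective
    (x1, y1), (x2, y2) = pts[0], pts[-1]  # extreme points define the line
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    def dist(p):
        # perpendicular distance from point p to the extreme-point line
        return abs(dy * (p[0] - x1) - dx * (p[1] - y1)) / norm
    return max(pts, key=dist)

# Example: front with an obvious knee at (2, 2)
front = [(1.0, 10.0), (2.0, 2.0), (10.0, 1.0)]
print(knee_point(front))  # -> (2.0, 2.0)
```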

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| RuntimeError on `best_trial` | Multi-objective study using the single-objective API | Use the conditional check pattern |
| Empty Pareto front | No feasible solutions | Check constraints, relax if needed |
| Only 1 Pareto solution | Objectives not conflicting | Verify objectives are truly competing |
| NSGA-II with single objective | Wrong config | Use TPE/CMA-ES for single-objective |
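The conditional check pattern from the first row can be sketched as below. `FakeStudy` is a stand-in used only for illustration; real code would pass an `optuna.Study`:

```python
def get_best_trials(study):
    """Return a list of best trials whether the study is single- or
    multi-objective. Multi-objective studies raise RuntimeError on
    `study.best_trial`, so check `len(study.directions)` first."""
    if len(study.directions) > 1:
        return list(study.best_trials)  # Pareto front
    return [study.best_trial]           # single optimum

# Minimal stand-in for an Optuna Study (illustration only)
class FakeStudy:
    directions = ["minimize", "maximize"]
    best_trials = ["trial_a", "trial_b"]

print(get_best_trials(FakeStudy()))  # -> ['trial_a', 'trial_b']
```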

---

## Cross-References

- **Depends On**: None (mandatory for all)
- **Used By**: All optimization components
- **Integrates With**:
  - [SYS_10_IMSO](./SYS_10_IMSO.md) (selects NSGA-II for multi-objective)
  - [SYS_13_DASHBOARD_TRACKING](./SYS_13_DASHBOARD_TRACKING.md) (Pareto visualization)
- **See Also**: [OP_04_ANALYZE_RESULTS](../operations/OP_04_ANALYZE_RESULTS.md) for Pareto analysis

---

## Implementation Files

Files that implement this protocol:
- `optimization_engine/intelligent_optimizer.py` - `_compile_results()` method
- `optimization_engine/study_continuation.py` - Result handling
- `optimization_engine/hybrid_study_creator.py` - Study creation

Files requiring this protocol:
- [ ] `optimization_engine/study_continuation.py`
- [ ] `optimization_engine/hybrid_study_creator.py`
- [ ] `optimization_engine/intelligent_setup.py`
- [ ] `optimization_engine/llm_optimization_runner.py`

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-11-20 | Initial release, mandatory for all engines |

@@ -0,0 +1,909 @@

# SYS_12: Extractor Library

<!--
PROTOCOL: Centralized Extractor Library
LAYER: System
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: []
-->

## Overview

The Extractor Library provides centralized, reusable functions for extracting physics results from FEA output files. **Always use these extractors instead of writing custom extraction code** in studies.

**Key Principle**: If you're writing >20 lines of extraction code in `run_optimization.py`, stop and check this library first.

---

## When to Use

| Trigger | Action |
|---------|--------|
| Need to extract displacement | Use E1 `extract_displacement` |
| Need to extract frequency | Use E2 `extract_frequency` |
| Need to extract stress | Use E3 `extract_solid_stress` |
| Need to extract mass | Use E4 or E5 |
| Need Zernike/wavefront | Use E8, E9, or E10 |
| Need custom physics | Check library first, then EXT_01 |

---

## Quick Reference

| ID | Physics | Function | Input | Output |
|----|---------|----------|-------|--------|
| E1 | Displacement | `extract_displacement()` | .op2 | mm |
| E2 | Frequency | `extract_frequency()` | .op2 | Hz |
| E3 | Von Mises Stress | `extract_solid_stress()` | .op2 | MPa |
| E4 | BDF Mass | `extract_mass_from_bdf()` | .bdf/.dat | kg |
| E5 | CAD Expression Mass | `extract_mass_from_expression()` | .prt | kg |
| E6 | Field Data | `FieldDataExtractor()` | .fld/.csv | varies |
| E7 | Stiffness | `StiffnessCalculator()` | .fld + .op2 | N/mm |
| E8 | Zernike WFE | `extract_zernike_from_op2()` | .op2 + .bdf | nm |
| E9 | Zernike Relative | `extract_zernike_relative_rms()` | .op2 + .bdf | nm |
| E10 | Zernike Builder | `ZernikeObjectiveBuilder()` | .op2 | nm |
| E11 | Part Mass & Material | `extract_part_mass_material()` | .prt | kg + dict |
| **Phase 2 (2025-12-06)** | | | | |
| E12 | Principal Stress | `extract_principal_stress()` | .op2 | MPa |
| E13 | Strain Energy | `extract_strain_energy()` | .op2 | J |
| E14 | SPC Forces | `extract_spc_forces()` | .op2 | N |
| **Phase 3 (2025-12-06)** | | | | |
| E15 | Temperature | `extract_temperature()` | .op2 | K/°C |
| E16 | Thermal Gradient | `extract_temperature_gradient()` | .op2 | K/mm |
| E17 | Heat Flux | `extract_heat_flux()` | .op2 | W/mm² |
| E18 | Modal Mass | `extract_modal_mass()` | .f06 | kg |
| **Phase 4 (2025-12-19)** | | | | |
| E19 | Part Introspection | `introspect_part()` | .prt | dict |
| **Phase 5 (2025-12-22)** | | | | |
| E20 | Zernike Analytic (Parabola) | `extract_zernike_analytic()` | .op2 + .bdf | nm |
| E21 | Zernike Method Comparison | `compare_zernike_methods()` | .op2 + .bdf | dict |
| E22 | **Zernike OPD (RECOMMENDED)** | `extract_zernike_opd()` | .op2 + .bdf | nm |

---

## Extractor Details

### E1: Displacement Extraction

**Module**: `optimization_engine.extractors.extract_displacement`

```python
from optimization_engine.extractors.extract_displacement import extract_displacement

result = extract_displacement(op2_file, subcase=1)
# Returns: {
#     'max_displacement': float,  # mm
#     'max_disp_node': int,
#     'max_disp_x': float,
#     'max_disp_y': float,
#     'max_disp_z': float
# }

max_displacement = result['max_displacement']  # mm
```

### E2: Frequency Extraction

**Module**: `optimization_engine.extractors.extract_frequency`

```python
from optimization_engine.extractors.extract_frequency import extract_frequency

result = extract_frequency(op2_file, subcase=1, mode_number=1)
# Returns: {
#     'frequency': float,  # Hz
#     'mode_number': int,
#     'eigenvalue': float,
#     'all_frequencies': list  # All modes
# }

frequency = result['frequency']  # Hz
```

### E3: Von Mises Stress Extraction

**Module**: `optimization_engine.extractors.extract_von_mises_stress`

```python
from optimization_engine.extractors.extract_von_mises_stress import extract_solid_stress

# RECOMMENDED: Check ALL solid element types (returns max across all)
result = extract_solid_stress(op2_file, subcase=1)

# Or specify a single element type
result = extract_solid_stress(op2_file, subcase=1, element_type='chexa')

# Returns: {
#     'max_von_mises': float,  # MPa (auto-converted from kPa)
#     'max_stress_element': int,
#     'element_type': str,     # e.g., 'CHEXA', 'CTETRA'
#     'units': 'MPa'
# }

max_stress = result['max_von_mises']  # MPa
```

**IMPORTANT (Updated 2026-01-22):**
- By default, checks ALL solid types: CTETRA, CHEXA, CPENTA, CPYRAM
- CHEXA elements often have the highest stress (not CTETRA!)
- Auto-converts from kPa to MPa (NX kg-mm-s unit system outputs kPa)
- Returns Elemental Nodal stress (peak), not Elemental Centroid (averaged)

### E4: BDF Mass Extraction

**Module**: `optimization_engine.extractors.extract_mass_from_bdf`

```python
from optimization_engine.extractors import extract_mass_from_bdf

result = extract_mass_from_bdf(bdf_file)
# Returns: {
#     'total_mass': float,  # kg (primary key)
#     'mass_kg': float,     # kg
#     'mass_g': float,      # grams
#     'cg': [x, y, z],      # center of gravity
#     'num_elements': int
# }

mass_kg = result['mass_kg']  # kg
```

**Note**: Uses `BDFMassExtractor` internally. Reads mass from element geometry and material density in the BDF/DAT file. NX kg-mm-s unit system - mass is directly in kg.

### E5: CAD Expression Mass

**Module**: `optimization_engine.extractors.extract_mass_from_expression`

```python
from optimization_engine.extractors.extract_mass_from_expression import extract_mass_from_expression

mass_kg = extract_mass_from_expression(model_file, expression_name="p173")  # kg
```

**Note**: Requires `_temp_mass.txt` to be written by the solve journal. Uses the NX expression system.

### E11: Part Mass & Material Extraction

**Module**: `optimization_engine.extractors.extract_part_mass_material`

Extracts mass, volume, surface area, center of gravity, and material properties directly from NX .prt files using NXOpen.MeasureManager.

**Prerequisites**: Run the NX journal first to create the temp file:
```bash
run_journal.exe nx_journals/extract_part_mass_material.py model.prt
```

```python
from optimization_engine.extractors import extract_part_mass_material, extract_part_mass

# Full extraction with all properties
result = extract_part_mass_material(prt_file)
# Returns: {
#     'mass_kg': float,                   # Mass in kg
#     'mass_g': float,                    # Mass in grams
#     'volume_mm3': float,                # Volume in mm^3
#     'surface_area_mm2': float,          # Surface area in mm^2
#     'center_of_gravity_mm': [x, y, z],  # CoG in mm
#     'moments_of_inertia': {'Ixx', 'Iyy', 'Izz', 'unit'},  # or None
#     'material': {
#         'name': str or None,      # Material name if assigned
#         'density': float or None, # Density in kg/mm^3
#         'density_unit': str
#     },
#     'num_bodies': int
# }

mass = result['mass_kg']  # kg
material_name = result['material']['name']  # e.g., "Aluminum_6061"

# Simple mass-only extraction
mass_kg = extract_part_mass(prt_file)  # kg
```

**Class-based version** for caching:
```python
from optimization_engine.extractors import PartMassExtractor

extractor = PartMassExtractor(prt_file)
mass = extractor.mass_kg  # Extracts and caches
material = extractor.material_name
```

**NX Open APIs Used** (by journal):
- `NXOpen.MeasureManager.NewMassProperties()`
- `NXOpen.MeasureBodies`
- `NXOpen.Body.GetBodies()`
- `NXOpen.PhysicalMaterial`

**IMPORTANT - Mass Accuracy Note**:
> **Always prefer E11 (geometry-based) over E4 (BDF-based) for mass extraction.**
>
> Testing on hex-dominant meshes with tet/pyramid fill elements revealed that:
> - **E11 from .prt**: 97.66 kg (accurate - matches NX GUI)
> - **E4 pyNastran get_mass_breakdown()**: 90.73 kg (~7% under-reported)
> - **E4 pyNastran sum(elem.Volume())*rho**: 100.16 kg (~2.5% over-reported)
>
> The `get_mass_breakdown()` function in pyNastran has known issues with mixed-element
> meshes (CHEXA + CPENTA + CPYRAM + CTETRA). Use E11 with the NX journal for reliable
> mass values. Only use E4 if material properties are overridden at FEM level.

### E6: Field Data Extraction

**Module**: `optimization_engine.extractors.field_data_extractor`

```python
from optimization_engine.extractors.field_data_extractor import FieldDataExtractor

extractor = FieldDataExtractor(
    field_file="results.fld",
    result_column="Temperature",
    aggregation="max"  # or "min", "mean", "std"
)
result = extractor.extract()
# Returns: {
#     'value': float,
#     'stats': dict
# }
```

### E7: Stiffness Calculation

**Module**: `optimization_engine.extractors.stiffness_calculator`

```python
from optimization_engine.extractors.stiffness_calculator import StiffnessCalculator

calculator = StiffnessCalculator(
    field_file=field_file,
    op2_file=op2_file,
    force_component="FZ",
    displacement_component="UZ"
)
result = calculator.calculate()
# Returns: {
#     'stiffness': float,  # N/mm
#     'displacement': float,
#     'force': float
# }
```

**Simple Alternative** (when force is known):
```python
applied_force = 1000.0  # N - MUST MATCH MODEL'S APPLIED LOAD
stiffness = applied_force / max(abs(max_displacement), 1e-6)  # N/mm
```

### E8: Zernike Wavefront Error (Single Subcase)

**Module**: `optimization_engine.extractors.extract_zernike`

```python
from optimization_engine.extractors.extract_zernike import extract_zernike_from_op2

result = extract_zernike_from_op2(
    op2_file,
    bdf_file=None,         # Auto-detect from op2 location
    subcase="20",          # Subcase label (e.g., "20" = 20 deg elevation)
    displacement_unit="mm"
)
# Returns: {
#     'global_rms_nm': float,    # Total surface RMS in nm
#     'filtered_rms_nm': float,  # RMS with low orders removed
#     'coefficients': list,      # 50 Zernike coefficients
#     'r_squared': float,
#     'subcase': str
# }

filtered_rms = result['filtered_rms_nm']  # nm
```

### E9: Zernike Relative RMS (Between Subcases)

**Module**: `optimization_engine.extractors.extract_zernike`

```python
from optimization_engine.extractors.extract_zernike import extract_zernike_relative_rms

result = extract_zernike_relative_rms(
    op2_file,
    bdf_file=None,
    target_subcase="40",     # Target orientation
    reference_subcase="20",  # Reference (usually polishing orientation)
    displacement_unit="mm"
)
# Returns: {
#     'relative_filtered_rms_nm': float,  # Differential WFE in nm
#     'delta_coefficients': list,         # Coefficient differences
#     'target_subcase': str,
#     'reference_subcase': str
# }

relative_rms = result['relative_filtered_rms_nm']  # nm
```

### E10: Zernike Objective Builder (Multi-Subcase)

**Module**: `optimization_engine.extractors.zernike_helpers`

```python
from optimization_engine.extractors.zernike_helpers import ZernikeObjectiveBuilder

builder = ZernikeObjectiveBuilder(
    op2_finder=lambda: model_dir / "ASSY_M1-solution_1.op2"
)

# Add relative objectives (target vs reference)
builder.add_relative_objective("40", "20", metric="relative_filtered_rms_nm", weight=5.0)
builder.add_relative_objective("60", "20", metric="relative_filtered_rms_nm", weight=5.0)

# Add absolute objective for polishing orientation
builder.add_subcase_objective("90", metric="rms_filter_j1to3", weight=1.0)

# Evaluate all at once (efficient - parses OP2 only once)
results = builder.evaluate_all()
# Returns: {'rel_40_vs_20': 4.2, 'rel_60_vs_20': 8.7, 'rms_90': 15.3}
```

### E20: Zernike Analytic (Parabola-Based with Lateral Correction)

**Module**: `optimization_engine.extractors.extract_zernike_opd`

Uses an analytical parabola formula to account for lateral (X, Y) displacements. Requires knowing the focal length.

**Use when**: You know the optical prescription and want to compare against the theoretical parabola.

```python
from optimization_engine.extractors import extract_zernike_analytic, ZernikeAnalyticExtractor

# Full extraction with lateral displacement diagnostics
result = extract_zernike_analytic(
    op2_file,
    subcase="20",
    focal_length=5000.0,  # Required for analytic method
)

# Class-based usage
extractor = ZernikeAnalyticExtractor(op2_file, focal_length=5000.0)
result = extractor.extract_subcase('20')
```

### E21: Zernike Method Comparison

**Module**: `optimization_engine.extractors.extract_zernike_opd`

Compare standard (Z-only) vs analytic (parabola) methods.

```python
from optimization_engine.extractors import compare_zernike_methods

comparison = compare_zernike_methods(op2_file, subcase="20", focal_length=5000.0)
print(comparison['recommendation'])
```

### E22: Zernike OPD (RECOMMENDED - Most Rigorous)

**Module**: `optimization_engine.extractors.extract_zernike_figure`

**MOST RIGOROUS METHOD** for computing WFE. Uses the actual BDF geometry (filtered to OP2 nodes) as the reference surface instead of assuming a parabolic shape.

**Advantages over E20 (Analytic)**:
- No need to know focal length or optical prescription
- Works with **any surface shape**: parabola, hyperbola, asphere, freeform
- Uses the actual mesh geometry as the "ideal" surface reference
- Interpolates `z_figure` at the deformed `(x+dx, y+dy)` position for true OPD

**How it works**:
1. Load BDF geometry for nodes present in OP2 (figure surface nodes)
2. Build a 2D interpolator `z_figure(x, y)` from undeformed coordinates
3. For each deformed node at `(x0+dx, y0+dy, z0+dz)`:
   - Interpolate `z_figure` at the deformed (x,y) position
   - Surface error = `(z0 + dz) - z_interpolated`
4. Fit Zernike polynomials to the surface error map
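The steps above (minus the Zernike fit) can be sketched numerically. This assumes an analytic paraboloid stands in for the interpolator built from the BDF; all names and numbers are illustrative:

```python
import math

# Stand-in for the z_figure(x, y) interpolator built from BDF geometry;
# here an analytic paraboloid z = (x^2 + y^2) / (4 f) with f = 5000 mm.
def z_figure(x, y, f=5000.0):
    return (x * x + y * y) / (4.0 * f)

def surface_errors(nodes, disps):
    """nodes: undeformed (x0, y0, z0); disps: (dx, dy, dz) per node.
    Surface error = deformed z minus the ideal figure evaluated at the
    deformed lateral position (steps 3a-3b above)."""
    errors = []
    for (x0, y0, z0), (dx, dy, dz) in zip(nodes, disps):
        z_ideal = z_figure(x0 + dx, y0 + dy)  # step 3a
        errors.append((z0 + dz) - z_ideal)    # step 3b
    return errors

# Two sample nodes lying exactly on the paraboloid, plus small displacements
nodes = [(100.0, 0.0, z_figure(100.0, 0.0)), (0.0, 200.0, z_figure(0.0, 200.0))]
disps = [(0.01, 0.0, 1e-4), (0.0, -0.02, -2e-4)]
errs = surface_errors(nodes, disps)
rms_mm = math.sqrt(sum(e * e for e in errs) / len(errs))
```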

```python
from optimization_engine.extractors import (
    ZernikeOPDExtractor,
    extract_zernike_opd,
    extract_zernike_opd_filtered_rms,
)

# Full extraction with diagnostics
result = extract_zernike_opd(op2_file, subcase="20")
# Returns: {
#     'global_rms_nm': float,
#     'filtered_rms_nm': float,
#     'max_lateral_displacement_um': float,
#     'rms_lateral_displacement_um': float,
#     'coefficients': list,  # 50 Zernike coefficients
#     'method': 'opd',
#     'figure_file': 'BDF (filtered to OP2)',
#     ...
# }

# Simple usage for optimization objective
rms = extract_zernike_opd_filtered_rms(op2_file, subcase="20")

# Class-based for multi-subcase analysis
extractor = ZernikeOPDExtractor(op2_file)
results = extractor.extract_all_subcases()
```

#### Relative WFE (CRITICAL for Optimization)

**Use `extract_relative()` for computing relative WFE between subcases!**

> **BUG WARNING (V10 Fix - 2025-12-22)**: The WRONG way to compute relative WFE is:
> ```python
> # ❌ WRONG: Difference of RMS values
> result_40 = extractor.extract_subcase("3")
> result_ref = extractor.extract_subcase("2")
> rel_40 = abs(result_40['filtered_rms_nm'] - result_ref['filtered_rms_nm'])  # WRONG!
> ```
>
> This computes `|RMS(WFE_40) - RMS(WFE_20)|`, which is NOT the same as `RMS(WFE_40 - WFE_20)`.
> The difference can be **3-4x lower** than the correct value, leading to false "too good to be true" results.

**The CORRECT approach uses `extract_relative()`:**

```python
# ✅ CORRECT: Computes node-by-node WFE difference, then fits Zernike, then RMS
extractor = ZernikeOPDExtractor(op2_file)

rel_40 = extractor.extract_relative("3", "2")  # 40 deg vs 20 deg
rel_60 = extractor.extract_relative("4", "2")  # 60 deg vs 20 deg
rel_90 = extractor.extract_relative("1", "2")  # 90 deg vs 20 deg

# Returns: {
#     'target_subcase': '3',
#     'reference_subcase': '2',
#     'method': 'figure_opd_relative',
#     'relative_global_rms_nm': float,     # RMS of the difference field
#     'relative_filtered_rms_nm': float,   # Use this for optimization!
#     'relative_rms_filter_j1to3': float,  # For manufacturing/optician workload
#     'max_lateral_displacement_um': float,
#     'rms_lateral_displacement_um': float,
#     'delta_coefficients': list,  # Zernike coeffs of difference
# }

# Use in optimization objectives:
objectives = {
    'rel_filtered_rms_40_vs_20': rel_40['relative_filtered_rms_nm'],
    'rel_filtered_rms_60_vs_20': rel_60['relative_filtered_rms_nm'],
    'mfg_90_optician_workload': rel_90['relative_rms_filter_j1to3'],
}
```

**Mathematical Difference**:
```
WRONG:   |RMS(WFE_40) - RMS(WFE_20)| = |6.14 - 8.13| = 1.99 nm  ← FALSE!
CORRECT: RMS(WFE_40 - WFE_20) = RMS(diff_field)      = 6.59 nm  ← TRUE!
```

The standard `ZernikeExtractor` also has `extract_relative()` if you don't need the OPD method:
```python
from optimization_engine.extractors import ZernikeExtractor

extractor = ZernikeExtractor(op2_file, n_modes=50, filter_orders=4)
rel_40 = extractor.extract_relative("3", "2")  # Z-only method
```

**Backwards compatibility**: The old names (`ZernikeFigureExtractor`, `extract_zernike_figure`, `extract_zernike_figure_rms`) still work but are deprecated.

**When to use which Zernike method**:

| Method | Class | When to Use | Assumptions |
|--------|-------|-------------|-------------|
| Standard (E8) | `ZernikeExtractor` | Quick analysis, negligible lateral displacement | Z-only at original (x,y) |
| Analytic (E20) | `ZernikeAnalyticExtractor` | Known focal length, parabolic surface | Parabola shape |
| **OPD (E22)** | `ZernikeOPDExtractor` | **Any surface, most rigorous** | None - uses actual geometry |

**IMPORTANT**: Do NOT provide a figure.dat file unless you're certain it matches your BDF geometry exactly. The default behavior (using BDF geometry filtered to OP2 nodes) is the safest option.

---

## Code Reuse Protocol

### The 20-Line Rule

If you're writing a function longer than ~20 lines in `run_optimization.py`:

1. **STOP** - This is a code smell
2. **SEARCH** - Check this library
3. **IMPORT** - Use existing extractor
4. **Only if truly new** - Create via EXT_01

### Correct Pattern

```python
# ✅ CORRECT: Import and use
from optimization_engine.extractors import extract_displacement, extract_frequency

def objective(trial):
    # ... run simulation ...
    disp_result = extract_displacement(op2_file)
    freq_result = extract_frequency(op2_file)
    return disp_result['max_displacement']
```

```python
# ❌ WRONG: Duplicate code in study
def objective(trial):
    # ... run simulation ...

    # Don't write 50 lines of OP2 parsing here
    from pyNastran.op2.op2 import OP2
    op2 = OP2()
    op2.read_op2(str(op2_file))
    # ... 40 more lines ...
```

---

## Adding New Extractors

If the needed physics isn't in the library:

1. Check [EXT_01_CREATE_EXTRACTOR](../extensions/EXT_01_CREATE_EXTRACTOR.md)
2. Create in `optimization_engine/extractors/new_extractor.py`
3. Add to `optimization_engine/extractors/__init__.py`
4. Update this document

**Do NOT** add extraction code directly to `run_optimization.py`.

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| "No displacement data found" | Wrong subcase number | Check subcase in OP2 |
| "OP2 file not found" | Solve failed | Check NX logs |
| "Unknown element type: auto" | Element type not specified | Specify `element_type='cquad4'` or `'ctetra'` |
| "No stress results in OP2" | Wrong element type specified | Use correct type for your mesh |
| Import error | Module not exported | Check `__init__.py` exports |

### Element Type Selection Guide

**Critical**: You must specify the correct element type for stress extraction based on your mesh:

| Mesh Type | Elements | `element_type=` |
|-----------|----------|-----------------|
| **Shell** (thin structures) | CQUAD4, CTRIA3 | `'cquad4'` or `'ctria3'` |
| **Solid** (3D volumes) | CTETRA, CHEXA | `'ctetra'` or `'chexa'` |

**How to check your mesh type:**
1. Open the .dat/.bdf file
2. Search for element cards (CQUAD4, CTETRA, etc.)
3. Use the dominant element type

**Common models:**
- **Bracket (solid)**: Uses CTETRA → `element_type='ctetra'`
- **Beam (shell)**: Uses CQUAD4 → `element_type='cquad4'`
- **Mirror (shell)**: Uses CQUAD4 → `element_type='cquad4'`

**Von Mises column mapping** (handled automatically):
- Shell elements (8 columns): von Mises at column 7
- Solid elements (10 columns): von Mises at column 9
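The mapping above can be expressed as a small lookup. The helper below is illustrative, not part of the library; the column indices come from the two bullets above:

```python
# Column index of von Mises stress in the stress data arrays, keyed by
# element family (illustrative helper; indices from the mapping above).
VON_MISES_COLUMN = {
    "cquad4": 7,  # shell elements: 8 columns, von Mises last
    "ctria3": 7,
    "ctetra": 9,  # solid elements: 10 columns, von Mises last
    "chexa":  9,
}

def von_mises_column(element_type: str) -> int:
    try:
        return VON_MISES_COLUMN[element_type.lower()]
    except KeyError:
        raise ValueError(f"Unknown element type: {element_type}")

print(von_mises_column("CHEXA"))  # -> 9
```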

---

## Cross-References

- **Depends On**: pyNastran for OP2 parsing
- **Used By**: All optimization studies
- **Extended By**: [EXT_01_CREATE_EXTRACTOR](../extensions/EXT_01_CREATE_EXTRACTOR.md)
- **See Also**: [modules/extractors-catalog.md](../../.claude/skills/modules/extractors-catalog.md)

---

## Phase 2 Extractors (2025-12-06)

### E12: Principal Stress Extraction

**Module**: `optimization_engine.extractors.extract_principal_stress`

```python
from optimization_engine.extractors import extract_principal_stress

result = extract_principal_stress(op2_file, subcase=1, element_type='ctetra')
# Returns: {
#     'success': bool,
#     'sigma1_max': float,  # Maximum principal stress (MPa)
#     'sigma2_max': float,  # Intermediate principal stress
#     'sigma3_min': float,  # Minimum principal stress
#     'element_count': int
# }
```

### E13: Strain Energy Extraction

**Module**: `optimization_engine.extractors.extract_strain_energy`

```python
from optimization_engine.extractors import extract_strain_energy, extract_total_strain_energy

result = extract_strain_energy(op2_file, subcase=1)
# Returns: {
#     'success': bool,
#     'total_strain_energy': float,  # J
#     'max_element_energy': float,
#     'max_element_id': int
# }

# Convenience function
total_energy = extract_total_strain_energy(op2_file)  # J
```

### E14: SPC Forces (Reaction Forces)

**Module**: `optimization_engine.extractors.extract_spc_forces`

```python
from optimization_engine.extractors import extract_spc_forces, extract_total_reaction_force

result = extract_spc_forces(op2_file, subcase=1)
# Returns: {
#     'success': bool,
#     'total_force_magnitude': float,  # N
#     'total_force_x': float,
#     'total_force_y': float,
#     'total_force_z': float,
#     'node_count': int
# }

# Convenience function
total_reaction = extract_total_reaction_force(op2_file)  # N
```
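A common sanity check on E14 output is static equilibrium: the total reaction magnitude should balance the applied load. A minimal sketch with made-up numbers (function and tolerance are illustrative):

```python
def reaction_balances_load(total_reaction_n, applied_load_n, rel_tol=0.01):
    """Static equilibrium check: |sum of SPC forces| should match the
    applied load magnitude to within solver round-off."""
    return abs(total_reaction_n - applied_load_n) <= rel_tol * abs(applied_load_n)

# Hypothetical values: 1000 N applied, 999.7 N total reaction extracted
print(reaction_balances_load(999.7, 1000.0))  # -> True
```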

---

## Phase 3 Extractors (2025-12-06)

### E15: Temperature Extraction

**Module**: `optimization_engine.extractors.extract_temperature`

For SOL 153 (Steady-State) and SOL 159 (Transient) thermal analyses.

```python
from optimization_engine.extractors import extract_temperature, get_max_temperature

result = extract_temperature(op2_file, subcase=1, return_field=False)
# Returns: {
#     'success': bool,
#     'max_temperature': float,  # K or °C
#     'min_temperature': float,
#     'avg_temperature': float,
#     'max_node_id': int,
#     'node_count': int,
#     'unit': str
# }

# Convenience function for constraints
max_temp = get_max_temperature(op2_file)  # Returns inf on failure
```
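Because `get_max_temperature()` returns `inf` on failure, an upper-bound constraint written against it rejects failed solves automatically; a minimal sketch (the limit value is illustrative):

```python
import math

def temperature_constraint_ok(max_temp_k, limit_k=350.0):
    """A failed extraction returns inf, which can never satisfy an
    upper-bound constraint - failed solves are rejected automatically."""
    return max_temp_k <= limit_k

print(temperature_constraint_ok(320.0))     # -> True
print(temperature_constraint_ok(math.inf))  # -> False
```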

### E16: Thermal Gradient Extraction

**Module**: `optimization_engine.extractors.extract_temperature`

```python
from optimization_engine.extractors import extract_temperature_gradient

result = extract_temperature_gradient(op2_file, subcase=1)
# Returns: {
#     'success': bool,
#     'max_gradient': float,       # K/mm (approximation)
#     'temperature_range': float,  # Max - Min temperature
#     'gradient_location': tuple   # (max_node, min_node)
# }
```

### E17: Heat Flux Extraction

**Module**: `optimization_engine.extractors.extract_temperature`

```python
from optimization_engine.extractors import extract_heat_flux

result = extract_heat_flux(op2_file, subcase=1)
# Returns: {
#     'success': bool,
#     'max_heat_flux': float,  # W/mm²
#     'avg_heat_flux': float,
#     'element_count': int
# }
```

### E18: Modal Mass Extraction

**Module**: `optimization_engine.extractors.extract_modal_mass`

For SOL 103 (Normal Modes) F06 files with MEFFMASS output.

```python
from optimization_engine.extractors import (
    extract_modal_mass,
    extract_frequencies,
    get_first_frequency,
    get_modal_mass_ratio
)

# Get all modes
result = extract_modal_mass(f06_file, mode=None)
# Returns: {
#     'success': bool,
#     'mode_count': int,
#     'frequencies': list,  # Hz
#     'modes': list of mode dicts
# }

# Get specific mode
result = extract_modal_mass(f06_file, mode=1)
# Returns: {
#     'success': bool,
#     'frequency': float,     # Hz
#     'modal_mass_x': float,  # kg
#     'modal_mass_y': float,
#     'modal_mass_z': float,
#     'participation_x': float  # 0-1
# }

# Convenience functions
freq = get_first_frequency(f06_file)  # Hz
ratio = get_modal_mass_ratio(f06_file, direction='z', n_modes=10)  # 0-1
```
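The ratio reported by `get_modal_mass_ratio()` is the fraction of total mass captured by the first n modes in one direction. A pure-Python sketch of that cumulative sum (the modal masses below are made up, not from a real F06):

```python
def modal_mass_ratio(modal_masses_kg, total_mass_kg, n_modes=10):
    """Fraction of total mass captured by the first n modes (0-1).
    Illustrative sketch of the quantity get_modal_mass_ratio() reports."""
    captured = sum(modal_masses_kg[:n_modes])
    return captured / total_mass_kg

masses = [40.0, 25.0, 10.0, 5.0]  # effective modal masses, kg (made up)
print(modal_mass_ratio(masses, total_mass_kg=100.0, n_modes=2))  # -> 0.65
```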
|
||||
|
||||
---
|
||||
|
||||
## Phase 4 Extractors (2025-12-19)

### E19: Part Introspection (Comprehensive)

**Module**: `optimization_engine.extractors.introspect_part`

Comprehensive introspection of NX .prt files. Extracts everything available from a part in a single call.

**Prerequisites**: Uses PowerShell with proper license server setup (see LAC workaround).

```python
from optimization_engine.extractors import (
    introspect_part,
    get_expressions_dict,
    get_expression_value,
    print_introspection_summary
)

# Full introspection
result = introspect_part("path/to/model.prt")
# Returns: {
#   'success': bool,
#   'part_file': str,
#   'expressions': {
#       'user': [{'name', 'value', 'rhs', 'units', 'type'}, ...],
#       'internal': [...],
#       'user_count': int,
#       'total_count': int
#   },
#   'mass_properties': {
#       'mass_kg': float,
#       'mass_g': float,
#       'volume_mm3': float,
#       'surface_area_mm2': float,
#       'center_of_gravity_mm': [x, y, z]
#   },
#   'materials': {
#       'assigned': [{'name', 'body', 'properties': {...}}],
#       'available': [...]
#   },
#   'bodies': {
#       'solid_bodies': [{'name', 'is_solid', 'attributes': [...]}],
#       'sheet_bodies': [...],
#       'counts': {'solid', 'sheet', 'total'}
#   },
#   'attributes': [{'title', 'type', 'value'}, ...],
#   'groups': [{'name', 'member_count', 'members': [...]}],
#   'features': {
#       'total_count': int,
#       'by_type': {'Extrude': 5, 'Revolve': 2, ...}
#   },
#   'datums': {
#       'planes': [...],
#       'csys': [...],
#       'axes': [...]
#   },
#   'units': {
#       'base_units': {'Length': 'MilliMeter', ...},
#       'system': 'Metric (mm)'
#   },
#   'linked_parts': {
#       'loaded_parts': [...],
#       'fem_parts': [...],
#       'sim_parts': [...],
#       'idealized_parts': [...]
#   }
# }

# Convenience functions
expr_dict = get_expressions_dict(result)  # {'name': value, ...}
pocket_radius = get_expression_value(result, 'Pocket_Radius')  # float

# Print formatted summary
print_introspection_summary(result)
```

**What It Extracts**:
- **Expressions**: All user and internal expressions with values, RHS formulas, units
- **Mass Properties**: Mass, volume, surface area, center of gravity
- **Materials**: Material names and properties (density, Young's modulus, etc.)
- **Bodies**: Solid and sheet bodies with their attributes
- **Part Attributes**: All NX_* system attributes plus user attributes
- **Groups**: Named groups and their members
- **Features**: Feature tree summary by type
- **Datums**: Datum planes, coordinate systems, axes
- **Units**: Base units and unit system
- **Linked Parts**: FEM, SIM, idealized parts loaded in session

**Use Cases**:
- Study setup: Extract actual expression values for baseline
- Debugging: Verify model state before optimization
- Documentation: Generate part specifications
- Validation: Compare expected vs actual parameter values

**NX Journal Execution** (LAC Workaround):
```powershell
# CRITICAL: Use PowerShell with [Environment]::SetEnvironmentVariable()
# NOT cmd /c SET or $env: syntax (these fail)
powershell -Command "[Environment]::SetEnvironmentVariable('SPLM_LICENSE_SERVER', '28000@server', 'Process'); & 'run_journal.exe' 'introspect_part.py' -args 'model.prt' 'output_dir'"
```

---

## Implementation Files

```
optimization_engine/extractors/
├── __init__.py                       # Exports all extractors
├── extract_displacement.py           # E1
├── extract_frequency.py              # E2
├── extract_von_mises_stress.py       # E3
├── bdf_mass_extractor.py             # E4
├── extract_mass_from_expression.py   # E5
├── field_data_extractor.py           # E6
├── stiffness_calculator.py           # E7
├── extract_zernike.py                # E8, E9 (Standard Z-only)
├── extract_zernike_opd.py            # E20, E21 (Parabola OPD)
├── extract_zernike_figure.py         # E22 (Figure OPD - most rigorous)
├── zernike_helpers.py                # E10
├── extract_part_mass_material.py     # E11 (Part mass & material)
├── extract_zernike_surface.py        # Surface utilities
├── op2_extractor.py                  # Low-level OP2 access
├── extract_principal_stress.py       # E12 (Phase 2)
├── extract_strain_energy.py          # E13 (Phase 2)
├── extract_spc_forces.py             # E14 (Phase 2)
├── extract_temperature.py            # E15, E16, E17 (Phase 3)
├── extract_modal_mass.py             # E18 (Phase 3)
├── introspect_part.py                # E19 (Phase 4)
├── test_phase2_extractors.py         # Phase 2 tests
└── test_phase3_extractors.py         # Phase 3 tests

nx_journals/
├── extract_part_mass_material.py     # E11 NX journal (prereq)
└── introspect_part.py                # E19 NX journal (comprehensive introspection)
```

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial consolidation from scattered docs |
| 1.1 | 2025-12-06 | Added Phase 2: E12 (principal stress), E13 (strain energy), E14 (SPC forces) |
| 1.2 | 2025-12-06 | Added Phase 3: E15-E17 (thermal), E18 (modal mass) |
| 1.3 | 2025-12-07 | Added Element Type Selection Guide; documented shell vs solid stress columns |
| 1.4 | 2025-12-19 | Added Phase 4: E19 (comprehensive part introspection) |
| 1.5 | 2025-12-22 | Added Phase 5: E20 (Parabola OPD), E21 (comparison), E22 (Figure OPD - most rigorous) |
@@ -0,0 +1,435 @@
# SYS_13: Real-Time Dashboard Tracking

<!--
PROTOCOL: Real-Time Dashboard Tracking
LAYER: System
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: [SYS_10_IMSO, SYS_11_MULTI_OBJECTIVE]
-->

## Overview

Protocol 13 implements a comprehensive real-time web dashboard for monitoring optimization studies. It provides live visualization of optimizer state, Pareto fronts, parallel coordinates, and trial history, with automatic updates every trial.

**Key Feature**: Every trial completion writes state to JSON, enabling live browser updates.

---

## When to Use

| Trigger | Action |
|---------|--------|
| "dashboard", "visualization" mentioned | Load this protocol |
| "real-time", "monitoring" requested | Enable dashboard tracking |
| Multi-objective study | Dashboard shows Pareto front |
| Want to see progress visually | Point to `localhost:3000` |

---

## Quick Reference

**Dashboard URLs**:

| Service | URL | Purpose |
|---------|-----|---------|
| Frontend | `http://localhost:3000` | Main dashboard |
| Backend API | `http://localhost:8000` | REST API |
| Optuna Dashboard | `http://localhost:8080` | Alternative viewer |

**Start Commands**:
```bash
# Backend
cd atomizer-dashboard/backend
python -m uvicorn api.main:app --reload --port 8000

# Frontend
cd atomizer-dashboard/frontend
npm run dev
```

---

## Architecture

```
Trial Completion (Optuna)
          │
          ▼
Realtime Callback (optimization_engine/realtime_tracking.py)
          │
          ▼
Write optimizer_state.json
          │
          ▼
Backend API /optimizer-state endpoint
          │
          ▼
Frontend Components (2s polling)
          │
          ▼
User sees live updates in browser
```

---

## Backend Components

### 1. Real-Time Tracking System (`realtime_tracking.py`)

**Purpose**: Write JSON state files after every trial completion.

**Integration** (in `intelligent_optimizer.py`):
```python
from optimization_engine.realtime_tracking import create_realtime_callback

# Create callback
callback = create_realtime_callback(
    tracking_dir=results_dir / "intelligent_optimizer",
    optimizer_ref=self,
    verbose=True
)

# Register with Optuna
study.optimize(objective, n_trials=n_trials, callbacks=[callback])
```

**Data Structure** (`optimizer_state.json`):
```json
{
  "timestamp": "2025-11-21T15:27:28.828930",
  "trial_number": 29,
  "total_trials": 50,
  "current_phase": "adaptive_optimization",
  "current_strategy": "GP_UCB",
  "is_multi_objective": true,
  "study_directions": ["maximize", "minimize"]
}
```
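
The write side of this flow fits in a few lines of Optuna-style callback code. The sketch below is an illustration only, under assumptions: `make_state_callback`, `get_phase`, and `get_strategy` are names invented here, while the shipped entry point is `create_realtime_callback`.

```python
import json
from datetime import datetime
from pathlib import Path

def make_state_callback(tracking_dir, total_trials,
                        get_phase=lambda: "unknown",
                        get_strategy=lambda: "unknown"):
    """Build an Optuna-style callback(study, trial) that dumps state to JSON."""
    tracking_dir = Path(tracking_dir)
    tracking_dir.mkdir(parents=True, exist_ok=True)

    def callback(study, trial):
        state = {
            "timestamp": datetime.now().isoformat(),
            "trial_number": trial.number,
            "total_trials": total_trials,
            "current_phase": get_phase(),
            "current_strategy": get_strategy(),
            "is_multi_objective": len(study.directions) > 1,
            "study_directions": [d.name.lower() for d in study.directions],
        }
        # Write to a temp file, then rename, so the polling backend
        # never reads a half-written JSON document.
        tmp = tracking_dir / "optimizer_state.json.tmp"
        tmp.write_text(json.dumps(state, indent=2))
        tmp.replace(tracking_dir / "optimizer_state.json")

    return callback
```

The atomic rename matters because the backend polls this file on its own schedule; a plain overwrite can be observed mid-write.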

### 2. REST API Endpoints

**Base**: `/api/optimization/studies/{study_id}/`

| Endpoint | Method | Returns |
|----------|--------|---------|
| `/metadata` | GET | Objectives, design vars, constraints with units |
| `/optimizer-state` | GET | Current phase, strategy, progress |
| `/pareto-front` | GET | Pareto-optimal solutions (multi-objective) |
| `/history` | GET | All trial history |
| `/` | GET | List all studies |

**Unit Inference**:
```python
def _infer_objective_unit(objective: Dict) -> str:
    name = objective.get("name", "").lower()
    desc = objective.get("description", "").lower()

    if "frequency" in name or "hz" in desc:
        return "Hz"
    elif "stiffness" in name or "n/mm" in desc:
        return "N/mm"
    elif "mass" in name or "kg" in desc:
        return "kg"
    # ... more patterns
```

---

## Frontend Components

### 1. OptimizerPanel (`components/OptimizerPanel.tsx`)

**Displays**:
- Current phase (Characterization, Exploration, Exploitation, Adaptive)
- Current strategy (TPE, GP, NSGA-II, etc.)
- Progress bar with trial count
- Multi-objective indicator

```
┌─────────────────────────────────┐
│ Intelligent Optimizer Status    │
├─────────────────────────────────┤
│ Phase: [Adaptive Optimization]  │
│ Strategy: [GP_UCB]              │
│ Progress: [████████░░] 29/50    │
│ Multi-Objective: ✓              │
└─────────────────────────────────┘
```

### 2. ParetoPlot (`components/ParetoPlot.tsx`)

**Features**:
- Scatter plot of Pareto-optimal solutions
- Pareto front line connecting optimal points
- **3 Normalization Modes**:
  - **Raw**: Original engineering values
  - **Min-Max**: Scales to [0, 1]
  - **Z-Score**: Standardizes to mean=0, std=1
- Tooltip shows raw values regardless of normalization
- Color-coded: green=feasible, red=infeasible
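
The two scaling modes above are standard transforms; a minimal Python sketch (the plot itself does this in TypeScript, and `normalize` is a name invented here for illustration):

```python
def normalize(values, mode="raw"):
    """Rescale a list of objective values.

    mode: 'raw' (unchanged), 'minmax' (scale to [0, 1]),
          'zscore' (shift/scale to mean 0, std 1).
    """
    if mode == "raw":
        return list(values)
    if mode == "minmax":
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0  # avoid division by zero on constant data
        return [(v - lo) / span for v in values]
    if mode == "zscore":
        n = len(values)
        mean = sum(values) / n
        std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5 or 1.0
        return [(v - mean) / std for v in values]
    raise ValueError(f"unknown mode: {mode}")
```

Showing raw values in the tooltip while plotting normalized ones keeps multi-unit objectives comparable on one chart without hiding the engineering numbers.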

### 3. ParallelCoordinatesPlot (`components/ParallelCoordinatesPlot.tsx`)

**Features**:
- High-dimensional visualization (objectives + design variables)
- Interactive trial selection
- Normalized [0, 1] axes
- Color coding: green (feasible), red (infeasible), yellow (selected)

```
Stiffness       Mass      support_angle   tip_thickness
    │            │              │               │
    │     ╱─────╲              ╱                │
    │    ╱       ╲─────────╱                    │
    │   ╱         ╲                             │
```

### 4. Dashboard Layout

```
┌──────────────────────────────────────────────────┐
│ Study Selection                                  │
├──────────────────────────────────────────────────┤
│ Metrics Grid (Best, Avg, Trials, Pruned)         │
├──────────────────────────────────────────────────┤
│ [OptimizerPanel]        [ParetoPlot]             │
├──────────────────────────────────────────────────┤
│ [ParallelCoordinatesPlot - Full Width]           │
├──────────────────────────────────────────────────┤
│ [Convergence]           [Parameter Space]        │
├──────────────────────────────────────────────────┤
│ [Recent Trials Table]                            │
└──────────────────────────────────────────────────┘
```

---

## Configuration

**In `optimization_config.json`**:
```json
{
  "dashboard_settings": {
    "enabled": true,
    "port": 8000,
    "realtime_updates": true
  }
}
```

**Study Requirements**:
- Must use Protocol 10 (IntelligentOptimizer) for optimizer state
- Must have `optimization_config.json` with objectives and design_variables
- Real-time tracking enabled automatically with Protocol 10

---

## Usage Workflow

### 1. Start Dashboard

```bash
# Terminal 1: Backend
cd atomizer-dashboard/backend
python -m uvicorn api.main:app --reload --port 8000

# Terminal 2: Frontend
cd atomizer-dashboard/frontend
npm run dev
```

### 2. Start Optimization

```bash
cd studies/my_study
conda activate atomizer
python run_optimization.py --n-trials 50
```

### 3. View Dashboard

- Open browser to `http://localhost:3000`
- Select study from dropdown
- Watch real-time updates every trial

### 4. Interact with Plots

- Toggle normalization on Pareto plot
- Click lines in parallel coordinates to select trials
- Hover for detailed trial information

---

## Performance

| Metric | Value |
|--------|-------|
| Backend endpoint latency | ~10 ms |
| Frontend polling interval | 2 s |
| Real-time write overhead | <5 ms per trial |
| Dashboard initial load | <500 ms |

---

## Integration with Other Protocols

### Protocol 10 Integration
- Real-time callback integrated into `IntelligentOptimizer.optimize()`
- Tracks phase transitions (characterization → adaptive optimization)
- Reports strategy changes

### Protocol 11 Integration
- Pareto front endpoint checks `len(study.directions) > 1`
- Dashboard conditionally renders Pareto plots
- Uses Optuna's `study.best_trials` for the Pareto front

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| "No Pareto front data yet" | Single-objective study or no trials | Wait for trials, check objectives |
| OptimizerPanel shows "Not available" | Not using Protocol 10 | Enable IntelligentOptimizer |
| Units not showing | Missing unit in config | Add `unit` field or use a pattern in the description |
| Dashboard not updating | Backend not running | Start backend with uvicorn |
| CORS errors | Backend/frontend port mismatch | Check ports, restart both |

---

## Cross-References

- **Depends On**: [SYS_10_IMSO](./SYS_10_IMSO.md), [SYS_11_MULTI_OBJECTIVE](./SYS_11_MULTI_OBJECTIVE.md)
- **Used By**: [OP_03_MONITOR_PROGRESS](../operations/OP_03_MONITOR_PROGRESS.md)
- **See Also**: Optuna Dashboard for alternative visualization

---

## Implementation Files

**Backend**:
- `atomizer-dashboard/backend/api/main.py` - FastAPI app
- `atomizer-dashboard/backend/api/routes/optimization.py` - Endpoints
- `optimization_engine/realtime_tracking.py` - Callback system

**Frontend**:
- `atomizer-dashboard/frontend/src/pages/Dashboard.tsx` - Main page
- `atomizer-dashboard/frontend/src/components/OptimizerPanel.tsx`
- `atomizer-dashboard/frontend/src/components/ParetoPlot.tsx`
- `atomizer-dashboard/frontend/src/components/ParallelCoordinatesPlot.tsx`

---

## Implementation Details

### Backend API Example (FastAPI)

```python
@router.get("/studies/{study_id}/pareto-front")
async def get_pareto_front(study_id: str):
    """Get Pareto-optimal solutions for multi-objective studies."""
    study = optuna.load_study(study_name=study_id, storage=storage)

    if len(study.directions) == 1:
        return {"is_multi_objective": False}

    return {
        "is_multi_objective": True,
        "pareto_front": [
            {
                "trial_number": t.number,
                "values": t.values,
                "params": t.params,
                "user_attrs": dict(t.user_attrs)
            }
            for t in study.best_trials
        ]
    }
```

### Frontend OptimizerPanel (React/TypeScript)

```typescript
export function OptimizerPanel({ studyId }: { studyId: string }) {
  const [state, setState] = useState<OptimizerState | null>(null);

  useEffect(() => {
    const fetchState = async () => {
      const res = await fetch(`/api/optimization/studies/${studyId}/optimizer-state`);
      setState(await res.json());
    };
    fetchState();
    const interval = setInterval(fetchState, 2000); // 2 s polling interval
    return () => clearInterval(interval);
  }, [studyId]);

  return (
    <Card title="Optimizer Status">
      <div>Phase: {state?.current_phase}</div>
      <div>Strategy: {state?.current_strategy}</div>
      <ProgressBar value={state?.trial_number} max={state?.total_trials} />
    </Card>
  );
}
```

### Callback Integration

**CRITICAL**: Every `study.optimize()` call must include the realtime callback:

```python
# In IntelligentOptimizer
self.realtime_callback = create_realtime_callback(
    tracking_dir=self.tracking_dir,
    optimizer_ref=self,
    verbose=self.verbose
)

# Register with ALL optimize calls
self.study.optimize(
    objective_function,
    n_trials=check_interval,
    callbacks=[self.realtime_callback]  # Required for real-time updates
)
```

---

## Chart Library Options

The dashboard supports two chart libraries:

| Feature | Recharts | Plotly |
|---------|----------|--------|
| Load Speed | Fast | Slower (lazy loaded) |
| Interactivity | Basic | Advanced |
| Export | Screenshot | PNG/SVG native |
| 3D Support | No | Yes |
| Real-time Updates | Better | Good |

**Recommendation**: Use Recharts during active optimization, Plotly for post-analysis.

### Quick Start

```bash
# Both backend and frontend
python start_dashboard.py

# Or manually:
cd atomizer-dashboard/backend && python -m uvicorn api.main:app --port 8000
cd atomizer-dashboard/frontend && npm run dev
```

Access at: `http://localhost:3003`

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.2 | 2025-12-05 | Added chart library options |
| 1.1 | 2025-12-05 | Added implementation code snippets |
| 1.0 | 2025-11-21 | Initial release with real-time tracking |
1094 hq/skills/atomizer-protocols/protocols/SYS_14_NEURAL_ACCELERATION.md Normal file
File diff suppressed because it is too large
442 hq/skills/atomizer-protocols/protocols/SYS_15_METHOD_SELECTOR.md Normal file
@@ -0,0 +1,442 @@

# SYS_15: Adaptive Method Selector

<!--
PROTOCOL: Adaptive Method Selector
LAYER: System
VERSION: 2.0
STATUS: Active
LAST_UPDATED: 2025-12-07
PRIVILEGE: user
LOAD_WITH: [SYS_10_IMSO, SYS_11_MULTI_OBJECTIVE, SYS_14_NEURAL_ACCELERATION]
-->

## Overview

The **Adaptive Method Selector (AMS)** analyzes optimization problems and recommends the best method (turbo, hybrid_loop, pure_fea, etc.) based on:

1. **Static Analysis**: Problem characteristics from config (dimensionality, objectives, constraints)
2. **Dynamic Analysis**: Early FEA trial metrics (smoothness, correlations, feasibility)
3. **NN Quality Assessment**: Relative accuracy thresholds comparing NN error to problem variability
4. **Runtime Monitoring**: Continuous optimization performance assessment

**Key Value**: Eliminates guesswork in choosing optimization strategies by providing data-driven recommendations with relative accuracy thresholds.

---

## When to Use

| Trigger | Action |
|---------|--------|
| Starting a new optimization | Run method selector first |
| "which method", "recommend" mentioned | Suggest method selector |
| Unsure between turbo/hybrid/fea | Use method selector |
| >20 FEA trials completed | Re-run for updated recommendation |

---

## Quick Reference

### CLI Usage

```bash
python -m optimization_engine.method_selector <config_path> [db_path]
```

**Examples**:
```bash
# Config-only analysis (before any FEA trials)
python -m optimization_engine.method_selector 1_setup/optimization_config.json

# Full analysis with FEA data
python -m optimization_engine.method_selector 1_setup/optimization_config.json 2_results/study.db
```

### Python API

```python
from optimization_engine.method_selector import AdaptiveMethodSelector

selector = AdaptiveMethodSelector()
recommendation = selector.recommend("1_setup/optimization_config.json", "2_results/study.db")

print(recommendation.method)        # 'turbo', 'hybrid_loop', 'pure_fea', 'gnn_field'
print(recommendation.confidence)    # 0.0 - 1.0
print(recommendation.parameters)    # {'nn_trials': 5000, 'batch_size': 100, ...}
print(recommendation.reasoning)     # Explanation string
print(recommendation.alternatives)  # Other methods with scores
```

---

## Available Methods

| Method | Description | Best For |
|--------|-------------|----------|
| **TURBO** | Aggressive NN exploration with single-best FEA validation | Low-dimensional, smooth responses |
| **HYBRID_LOOP** | Iterative train→predict→validate→retrain cycle | Moderate complexity, uncertain landscape |
| **PURE_FEA** | Traditional FEA-only optimization | High-dimensional, complex physics |
| **GNN_FIELD** | Graph neural network for field prediction | Need full field visualization |

---

## Selection Criteria

### Static Factors (from config)

| Factor | Favors TURBO | Favors HYBRID_LOOP | Favors PURE_FEA |
|--------|--------------|--------------------|-----------------|
| **n_variables** | ≤5 | 5-10 | >10 |
| **n_objectives** | 1-3 | 2-4 | Any |
| **n_constraints** | ≤3 | 3-5 | >5 |
| **FEA budget** | >50 trials | 30-50 trials | <30 trials |

### Dynamic Factors (from FEA trials)

| Factor | Measurement | Impact |
|--------|-------------|--------|
| **Response smoothness** | Lipschitz constant estimate | Smooth → NN works well |
| **Variable sensitivity** | Correlation with objectives | High correlation → easier to learn |
| **Feasibility rate** | % of valid designs | Low feasibility → need more exploration |
| **Objective correlations** | Pairwise correlations | Strong correlations → simpler landscape |
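
Two of these dynamic factors are simple summary statistics. A hedged sketch (`early_metrics` is a name invented here; the real `EarlyMetricsCollector` reads trials from `study.db` rather than taking dicts):

```python
def early_metrics(trials):
    """Compute coefficient of variation and feasibility rate from early trials.

    `trials` is a list of dicts like {'objective': float, 'feasible': bool}.
    """
    values = [t["objective"] for t in trials]
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5  # population std
    cv = std / abs(mean) if mean else float("inf")           # relative spread
    feasibility_rate = sum(t["feasible"] for t in trials) / n
    return {"mean": mean, "std": std, "cv": cv,
            "feasibility_rate": feasibility_rate}
```

The coefficient of variation computed here is the same CV that the NN quality assessment below compares against prediction error.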

---

## NN Quality Assessment

The method selector uses **relative accuracy thresholds** to assess NN suitability. Instead of absolute error limits, it compares NN error to the problem's natural variability (coefficient of variation).

### Core Concept

```
NN Suitability = f(nn_error / coefficient_of_variation)

If nn_error >> CV  → NN is unreliable (not learning, just noise)
If nn_error ≈ CV   → NN captures the trend (hybrid recommended)
If nn_error << CV  → NN is excellent (turbo viable)
```

### Physics-Based Classification

Objectives are classified by their expected predictability:

| Objective Type | Examples | Max Expected Error | CV Ratio Limit |
|----------------|----------|--------------------|----------------|
| **Linear** | mass, volume | 2% | 0.5 |
| **Smooth** | frequency, avg stress | 5% | 1.0 |
| **Nonlinear** | max stress, stiffness | 10% | 2.0 |
| **Chaotic** | contact, buckling | 20% | 3.0 |

### CV Ratio Interpretation

The **CV Ratio** = NN Error / (Coefficient of Variation × 100):

| CV Ratio | Quality | Interpretation |
|----------|---------|----------------|
| < 0.5 | ✓ Great | NN captures physics much better than noise |
| 0.5 - 1.0 | ✓ Good | NN adds significant value for exploration |
| 1.0 - 2.0 | ~ OK | NN is marginal, use with validation |
| > 2.0 | ✗ Poor | NN not learning effectively, use FEA |
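
The ratio and its banding can be sketched directly from this table (`cv_ratio_quality` is a name invented for illustration; `nn_error_pct` is a percentage, `cv` a fraction):

```python
def cv_ratio_quality(nn_error_pct, cv):
    """Classify NN quality from CV Ratio = NN error % / (CV * 100).

    Bands follow the interpretation table above.
    """
    ratio = nn_error_pct / (cv * 100)
    if ratio < 0.5:
        return ratio, "great"
    if ratio < 1.0:
        return ratio, "good"
    if ratio <= 2.0:
        return ratio, "ok"
    return ratio, "poor"
```

With the example-output numbers further down, mass at 3.7% error against a 16.0% CV gives a ratio of about 0.23, i.e. "great".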

### Method Recommendations Based on Quality

| Turbo Suitability | Hybrid Suitability | Recommendation |
|-------------------|--------------------|----------------|
| > 80% | any | **TURBO** - trust NN fully |
| 50-80% | > 50% | **TURBO** with monitoring |
| < 50% | > 50% | **HYBRID_LOOP** - verify periodically |
| < 30% | < 50% | **PURE_FEA** or retrain first |

### Data Sources

NN quality metrics are collected from:
1. `validation_report.json` - FEA validation results
2. `turbo_report.json` - Turbo mode validation history
3. `study.db` - Trial `nn_error_percent` user attributes
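
Pulling the third source works on anything exposing per-trial `user_attrs` dicts, such as Optuna's `study.trials`; the helper name `collect_nn_errors` is invented here:

```python
def collect_nn_errors(trials, attr="nn_error_percent"):
    """Gather per-trial NN validation errors stored as trial user attributes.

    Trials without the attribute (e.g. never FEA-validated) are skipped.
    Returns the raw error list plus a small summary dict.
    """
    errors = [t.user_attrs[attr] for t in trials if attr in t.user_attrs]
    summary = {
        "n_validations": len(errors),
        "mean_error": sum(errors) / len(errors) if errors else None,
        "max_error": max(errors) if errors else None,
    }
    return errors, summary
```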

---

## Architecture

```
┌─────────────────────────────────────────────────────────────────────────┐
│                         AdaptiveMethodSelector                          │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  ┌─────────────────┐  ┌─────────────────────┐  ┌───────────────────┐    │
│  │ ProblemProfiler │  │EarlyMetricsCollector│  │ NNQualityAssessor │    │
│  │(static analysis)│  │ (dynamic analysis)  │  │   (NN accuracy)   │    │
│  └───────┬─────────┘  └─────────┬───────────┘  └─────────┬─────────┘    │
│          │                      │                        │              │
│          ▼                      ▼                        ▼              │
│  ┌─────────────────────────────────────────────────────────────────┐    │
│  │                        _score_methods()                         │    │
│  │      (rule-based scoring with static + dynamic + NN factors)    │    │
│  └───────────────────────────────┬─────────────────────────────────┘    │
│                                  │                                      │
│                                  ▼                                      │
│  ┌─────────────────────────────────────────────────────────────────┐    │
│  │                      MethodRecommendation                       │    │
│  │       method, confidence, parameters, reasoning, warnings       │    │
│  └─────────────────────────────────────────────────────────────────┘    │
│                                                                         │
│  ┌──────────────────┐                                                   │
│  │  RuntimeAdvisor  │  ← Monitors during optimization                   │
│  │  (pivot advisor) │                                                   │
│  └──────────────────┘                                                   │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```

---

## Components

### 1. ProblemProfiler

Extracts static problem characteristics from `optimization_config.json`:

```python
@dataclass
class ProblemProfile:
    n_variables: int
    variable_names: List[str]
    variable_bounds: Dict[str, Tuple[float, float]]
    n_objectives: int
    objective_names: List[str]
    n_constraints: int
    fea_time_estimate: float
    max_fea_trials: int
    is_multi_objective: bool
    has_constraints: bool
```

### 2. EarlyMetricsCollector

Computes metrics from the first N FEA trials in `study.db`:

```python
@dataclass
class EarlyMetrics:
    n_trials_analyzed: int
    objective_means: Dict[str, float]
    objective_stds: Dict[str, float]
    coefficient_of_variation: Dict[str, float]
    objective_correlations: Dict[str, float]
    variable_objective_correlations: Dict[str, Dict[str, float]]
    feasibility_rate: float
    response_smoothness: float  # 0-1, higher = better for NN
    variable_sensitivity: Dict[str, float]
```

### 3. NNQualityAssessor

Assesses NN surrogate quality relative to problem complexity:

```python
@dataclass
class NNQualityMetrics:
    has_nn_data: bool = False
    n_validations: int = 0
    nn_errors: Dict[str, float] = field(default_factory=dict)        # Absolute % error per objective
    cv_ratios: Dict[str, float] = field(default_factory=dict)        # nn_error / (CV * 100) per objective
    expected_errors: Dict[str, float] = field(default_factory=dict)  # Physics-based threshold
    overall_quality: float = 0.0      # 0-1, based on absolute thresholds
    turbo_suitability: float = 0.0    # 0-1, based on CV ratios
    hybrid_suitability: float = 0.0   # 0-1, more lenient threshold
    objective_types: Dict[str, str] = field(default_factory=dict)    # 'linear', 'smooth', 'nonlinear', 'chaotic'
```

### 4. AdaptiveMethodSelector

Main entry point that combines static + dynamic + NN quality analysis:

```python
selector = AdaptiveMethodSelector(min_trials=20)
recommendation = selector.recommend(config_path, db_path, results_dir=results_dir)

# Access last NN quality for display
print(f"Turbo suitability: {selector.last_nn_quality.turbo_suitability:.0%}")
```

### 5. RuntimeAdvisor

Monitors optimization progress and suggests pivots:

```python
advisor = RuntimeAdvisor()
pivot_advice = advisor.assess(db_path, config_path, current_method="turbo")

if pivot_advice.should_pivot:
    print(f"Consider switching to {pivot_advice.recommended_method}")
    print(f"Reason: {pivot_advice.reason}")
```

---

## Example Output
|
||||
|
||||
```
|
||||
======================================================================
|
||||
OPTIMIZATION METHOD ADVISOR
|
||||
======================================================================
|
||||
|
||||
Problem Profile:
|
||||
Variables: 2 (support_angle, tip_thickness)
|
||||
Objectives: 3 (mass, stress, stiffness)
|
||||
Constraints: 1
|
||||
Max FEA budget: ~72 trials
|
||||
|
||||
NN Quality Assessment:
|
||||
Validations analyzed: 10
|
||||
|
||||
| Objective | NN Error | CV | Ratio | Type | Quality |
|
||||
|---------------|----------|--------|-------|------------|---------|
|
||||
| mass | 3.7% | 16.0% | 0.23 | linear | ✓ Great |
|
||||
| stress | 2.0% | 7.7% | 0.26 | nonlinear | ✓ Great |
|
||||
| stiffness | 7.8% | 38.9% | 0.20 | nonlinear | ✓ Great |
|
||||
|
||||
Overall Quality: 22%
|
||||
Turbo Suitability: 77%
|
||||
Hybrid Suitability: 88%
|
||||
|
||||
----------------------------------------------------------------------
|
||||
|
||||
RECOMMENDED: TURBO
|
||||
Confidence: 100%
|
||||
Reason: low-dimensional design space; sufficient FEA budget; smooth landscape (79%); good NN quality (77%)
|
||||
|
||||
Suggested parameters:
|
||||
--nn-trials: 5000
|
||||
--batch-size: 100
|
||||
--retrain-every: 10
|
||||
--epochs: 150
|
||||
|
||||
Alternatives:
|
||||
- hybrid_loop (90%): uncertain landscape - hybrid adapts; NN adds value with periodic retraining
|
||||
- pure_fea (50%): default recommendation
|
||||
|
||||
Warnings:
|
||||
! mass: NN error (3.7%) above expected (2%) - consider retraining or using hybrid mode
|
||||
|
||||
======================================================================
|
||||
```

---

## Parameter Recommendations

The selector suggests optimal parameters based on problem characteristics:

| Parameter | Low-D (≤3 vars) | Medium-D (4-6 vars) | High-D (>6 vars) |
|-----------|-----------------|---------------------|------------------|
| `--nn-trials` | 5000 | 10000 | 20000 |
| `--batch-size` | 100 | 100 | 200 |
| `--retrain-every` | 10 | 15 | 20 |
| `--epochs` | 150 | 200 | 300 |
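
The table above collapses to a small lookup; a minimal sketch (the function name and dict keys are illustrative, not the `method_selector` API):

```python
def suggest_parameters(n_vars: int) -> dict:
    """Map problem dimensionality to suggested CLI parameters (table above)."""
    if n_vars <= 3:    # Low-D
        return {"nn_trials": 5000, "batch_size": 100, "retrain_every": 10, "epochs": 150}
    elif n_vars <= 6:  # Medium-D
        return {"nn_trials": 10000, "batch_size": 100, "retrain_every": 15, "epochs": 200}
    else:              # High-D
        return {"nn_trials": 20000, "batch_size": 200, "retrain_every": 20, "epochs": 300}
```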

---

## Scoring Algorithm

Each method receives a score based on weighted factors:

```python
# TURBO scoring
turbo_score = 50  # base score
turbo_score += 30 if n_variables <= 5 else -20  # dimensionality
turbo_score += 25 if smoothness > 0.7 else -10  # response smoothness
turbo_score += 20 if fea_budget > 50 else -15   # budget
turbo_score += 15 if feasibility > 0.8 else -5  # feasibility
turbo_score = max(0, min(100, turbo_score))     # clamp 0-100

# Similar for HYBRID_LOOP, PURE_FEA, GNN_FIELD
```

---

## Integration with run_optimization.py

The method selector can be integrated into the optimization workflow:

```python
# At start of optimization
import os

from optimization_engine.method_selector import recommend_method

recommendation = recommend_method(config_path, db_path)
print(f"Recommended method: {recommendation.method}")
print(f"Parameters: {recommendation.parameters}")

# Ask user confirmation
if user_confirms:
    if recommendation.method == 'turbo':
        os.system(f"python run_nn_optimization.py --turbo "
                  f"--nn-trials {recommendation.parameters['nn_trials']} "
                  f"--batch-size {recommendation.parameters['batch_size']}")
```

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| "Insufficient trials" | < 20 FEA trials | Run more FEA trials first |
| Low confidence score | Conflicting signals | Try hybrid_loop as safe default |
| PURE_FEA recommended | High dimensionality | Consider dimension reduction |
| GNN_FIELD recommended | Need field visualization | Set up atomizer-field |

### Config Format Compatibility

The method selector supports multiple config JSON formats:

| Old Format | New Format | Both Supported |
|------------|------------|----------------|
| `parameter` | `name` | Variable name |
| `bounds: [min, max]` | `min`, `max` | Variable bounds |
| `goal` | `direction` | Objective direction |

**Example equivalent configs:**
```json
// Old format (UAV study style)
{"design_variables": [{"parameter": "angle", "bounds": [30, 60]}]}

// New format (beam study style)
{"design_variables": [{"name": "angle", "min": 30, "max": 60}]}
```
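
Both formats can be normalized to one shape before profiling; a sketch (hypothetical helper, not necessarily how `method_selector.py` does it):

```python
def normalize_design_variable(var: dict) -> dict:
    """Accept either the old (parameter/bounds) or new (name/min/max) format."""
    name = var.get("name") or var.get("parameter")
    if "bounds" in var:
        lo, hi = var["bounds"]
    else:
        lo, hi = var["min"], var["max"]
    return {"name": name, "min": lo, "max": hi}
```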

---

## Cross-References

- **Depends On**:
  - [SYS_10_IMSO](./SYS_10_IMSO.md) for optimization framework
  - [SYS_14_NEURAL_ACCELERATION](./SYS_14_NEURAL_ACCELERATION.md) for neural methods
- **Used By**: [OP_02_RUN_OPTIMIZATION](../operations/OP_02_RUN_OPTIMIZATION.md)
- **See Also**: [modules/method-selection.md](../../.claude/skills/modules/method-selection.md)

---

## Implementation Files

```
optimization_engine/
└── method_selector.py            # Complete AMS implementation
    ├── ProblemProfiler           # Static config analysis
    ├── EarlyMetricsCollector     # Dynamic FEA metrics
    ├── NNQualityMetrics          # NN accuracy dataclass
    ├── NNQualityAssessor         # Relative accuracy assessment
    ├── AdaptiveMethodSelector    # Main recommendation engine
    ├── RuntimeAdvisor            # Mid-run pivot advisor
    ├── print_recommendation()    # CLI output with NN quality table
    └── recommend_method()        # Convenience function
```

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 2.1 | 2025-12-07 | Added config format flexibility (parameter/name, bounds/min-max, goal/direction) |
| 2.0 | 2025-12-07 | Added NNQualityAssessor with relative accuracy thresholds |
| 1.0 | 2025-12-06 | Initial implementation with 4 methods |

@@ -0,0 +1,360 @@
# SYS_16: Self-Aware Turbo (SAT) Optimization

## Version: 3.0
## Status: VALIDATED
## Created: 2025-12-28
## Updated: 2025-12-31

---

## Quick Summary

**SAT v3 achieved WS=205.58, beating all previous methods (V7 TPE: 218.26, V6 TPE: 225.41).**

SAT is a surrogate-accelerated optimization method that:
1. Trains an **ensemble of 5 MLPs** on historical FEA data
2. Uses **adaptive exploration** that decreases over time (15%→8%→3%)
3. Filters candidates to prevent **duplicate evaluations**
4. Applies **soft mass constraints** in the acquisition function

---

## Version History

| Version | Study | Training Data | Key Fix | Best WS |
|---------|-------|---------------|---------|---------|
| v1 | V7 | 129 (V6 only) | - | 218.26 |
| v2 | V8 | 196 (V6 only) | Duplicate prevention | 271.38 |
| **v3** | **V9** | **556 (V5-V8)** | **Adaptive exploration + mass targeting** | **205.58** |

---

## Problem Statement

V5 surrogate + L-BFGS failed catastrophically because:
1. MLP predicted WS=280 but actual was WS=376 (30%+ error)
2. L-BFGS descended to regions **outside training distribution**
3. Surrogate had no way to signal uncertainty
4. All L-BFGS solutions converged to the same "fake optimum"

**Root cause:** The surrogate is overconfident in regions where it has no data.

---

## Solution: Uncertainty-Aware Surrogate with Active Learning

### Core Principles

1. **Never trust a point prediction** - Always require uncertainty bounds
2. **High uncertainty = run FEA** - Don't optimize where you don't know
3. **Actively fill gaps** - Prioritize FEA in high-uncertainty regions
4. **Validate gradient solutions** - Check L-BFGS results against FEA before trusting

---

## Architecture

### 1. Ensemble Surrogate (Epistemic Uncertainty)

Instead of one MLP, train **N independent models** with different initializations:

```python
class EnsembleSurrogate:
    def __init__(self, n_models=5):
        self.models = [MLP() for _ in range(n_models)]

    def predict(self, x):
        preds = [m.predict(x) for m in self.models]
        mean = np.mean(preds, axis=0)
        std = np.std(preds, axis=0)  # Epistemic uncertainty
        return mean, std

    def is_confident(self, x, threshold=0.1):
        mean, std = self.predict(x)
        # Confident if std < 10% of mean
        return (std / (mean + 1e-6)) < threshold
```

**Why this works:** Models trained on different random seeds will agree in well-sampled regions but disagree wildly in extrapolation regions.
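
The disagreement effect can be demonstrated with a tiny self-contained experiment; here bootstrapped `np.polyfit` models stand in for the MLPs (all data and model choices are synthetic, purely to illustrate the mechanism):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)                      # well-sampled region: [-1, 1]
y = np.sin(3 * X) + 0.01 * rng.standard_normal(200)

# Five models that differ only in their random training subset
models = []
for seed in range(5):
    idx = np.random.default_rng(seed).choice(200, size=100, replace=False)
    models.append(np.polyfit(X[idx], y[idx], deg=7))

def predict(x):
    preds = np.array([np.polyval(c, x) for c in models])
    return preds.mean(), preds.std()             # std = ensemble disagreement

_, std_in = predict(0.2)    # inside training range: models agree
_, std_out = predict(3.0)   # extrapolation: models disagree
```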

### 2. Distance-Based OOD Detection

Track training data distribution and flag points that are "too far":

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class OODDetector:
    def __init__(self, X_train):
        self.X_train = X_train
        self.mean = X_train.mean(axis=0)
        self.std = X_train.std(axis=0)
        # Fit KNN for local density
        self.knn = NearestNeighbors(n_neighbors=5)
        self.knn.fit(X_train)

    def distance_to_training(self, x):
        """Return distance to nearest training points."""
        distances, _ = self.knn.kneighbors(x.reshape(1, -1))
        return distances.mean()

    def is_in_distribution(self, x, threshold=2.0):
        """Check if point is within 2 std of training data."""
        z_scores = np.abs((x - self.mean) / (self.std + 1e-6))
        return z_scores.max() < threshold
```

### 3. Trust-Region L-BFGS

Constrain L-BFGS to stay within training distribution:

```python
from scipy.optimize import minimize

def trust_region_lbfgs(surrogate, ood_detector, x0, max_iter=100):
    """L-BFGS that respects training data boundaries."""

    def constrained_objective(x):
        # If OOD, return large penalty
        if not ood_detector.is_in_distribution(x):
            return 1e9

        mean, std = surrogate.predict(x)
        # If uncertain, return upper confidence bound (pessimistic)
        if std > 0.1 * mean:
            return mean + 2 * std  # Be conservative

        return mean

    result = minimize(constrained_objective, x0, method='L-BFGS-B',
                      options={'maxiter': max_iter})
    return result.x
```

### 4. Acquisition Function with Uncertainty

Use **Expected Improvement with Uncertainty** (like Bayesian Optimization):

```python
import numpy as np

def acquisition_score(x, surrogate, best_so_far):
    """Score = potential improvement weighted by confidence."""
    mean, std = surrogate.predict(x)

    # Expected improvement (lower is better for minimization)
    improvement = best_so_far - mean

    # Exploration bonus for uncertain regions
    exploration = 0.5 * std

    # High score = worth evaluating with FEA
    return improvement + exploration

def select_next_fea_candidates(surrogate, candidates, best_so_far, n=5):
    """Select candidates balancing exploitation and exploration."""
    scores = [acquisition_score(c, surrogate, best_so_far) for c in candidates]

    # Pick top candidates by acquisition score
    top_indices = np.argsort(scores)[-n:]
    return [candidates[i] for i in top_indices]
```

---

## Algorithm: Self-Aware Turbo (SAT)

```
INITIALIZE:
  - Load existing FEA data (X_train, Y_train)
  - Train ensemble surrogate on data
  - Fit OOD detector on X_train
  - Set best_ws = min(Y_train)

PHASE 1: UNCERTAINTY MAPPING (10% of budget)
  FOR i in 1..N_mapping:
    - Sample random point x
    - Get uncertainty: mean, std = surrogate.predict(x)
    - If std > threshold: run FEA, add to training data
    - Retrain ensemble periodically

  This fills in the "holes" in the surrogate's knowledge.

PHASE 2: EXPLOITATION WITH VALIDATION (80% of budget)
  FOR i in 1..N_exploit:
    - Generate 1000 TPE samples
    - Filter to keep only confident predictions (std < 10% of mean)
    - Filter to keep only in-distribution (OOD check)
    - Rank by predicted WS

    - Take top 5 candidates
    - Run FEA on all 5

    - For each FEA result:
      - Compare predicted vs actual
      - If error > 20%: mark region as "unreliable", force exploration there
      - If error < 10%: update best, retrain surrogate

    - Every 10 iterations: retrain ensemble with new data

PHASE 3: L-BFGS REFINEMENT (10% of budget)
  - Only run L-BFGS if ensemble R² > 0.95 on validation set
  - Use trust-region L-BFGS (stay within training distribution)

  FOR each L-BFGS solution:
    - Check ensemble disagreement
    - If models agree (std < 5%): run FEA to validate
    - If models disagree: skip, too uncertain

    - Compare L-BFGS prediction vs FEA
    - If error > 15%: ABORT L-BFGS phase, return to Phase 2
    - If error < 10%: accept as candidate

FINAL:
  - Return best FEA-validated design
  - Report uncertainty bounds for all objectives
```
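
The Phase 2 filtering step can be sketched with plain arrays standing in for the surrogate and OOD outputs (all values synthetic, for illustration only):

```python
import numpy as np

mean = np.array([210.0, 250.0, 300.0, 220.0])   # predicted WS per candidate
std = np.array([5.0, 40.0, 10.0, 15.0])         # ensemble disagreement
in_dist = np.array([True, True, False, True])   # OOD check result

confident = std < 0.10 * mean                   # keep only confident predictions
keep = confident & in_dist                      # ...that are also in-distribution
survivors = np.flatnonzero(keep)                # candidate indices that survive
ranked = survivors[np.argsort(mean[keep])]      # rank survivors by predicted WS
```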

---

## Key Differences from V5

| Aspect | V5 (Failed) | SAT (Proposed) |
|--------|-------------|----------------|
| **Model** | Single MLP | Ensemble of 5 MLPs |
| **Uncertainty** | None | Ensemble disagreement + OOD detection |
| **L-BFGS** | Trust blindly | Trust-region, validate every step |
| **Extrapolation** | Accept | Reject or penalize |
| **Active learning** | No | Yes - prioritize uncertain regions |
| **Validation** | After L-BFGS | Throughout |

---

## Implementation Checklist

1. [ ] `EnsembleSurrogate` class with N=5 MLPs
2. [ ] `OODDetector` with KNN + z-score checks
3. [ ] `acquisition_score()` balancing exploitation/exploration
4. [ ] Trust-region L-BFGS with OOD penalties
5. [ ] Automatic retraining when new FEA data arrives
6. [ ] Logging of prediction errors to track surrogate quality
7. [ ] Early abort if L-BFGS predictions consistently wrong

---

## Expected Behavior

**In well-sampled regions:**
- Ensemble agrees → Low uncertainty → Trust predictions
- L-BFGS finds valid optima → FEA confirms → Success

**In poorly-sampled regions:**
- Ensemble disagrees → High uncertainty → Run FEA instead
- L-BFGS penalized → Stays in trusted zone → No fake optima

**At distribution boundaries:**
- OOD detector flags → Reject predictions
- Acquisition prioritizes → Active learning fills gaps

---

## Metrics to Track

1. **Surrogate R² on validation set** - Target > 0.95 before L-BFGS
2. **Prediction error histogram** - Should be centered at 0
3. **OOD rejection rate** - How often we refuse to predict
4. **Ensemble disagreement** - Average std across predictions
5. **L-BFGS success rate** - % of L-BFGS solutions that validate

---

## When to Use SAT vs Pure TPE

| Scenario | Recommendation |
|----------|----------------|
| < 100 existing samples | Pure TPE (not enough for good surrogate) |
| 100-500 samples | SAT Phase 1-2 only (no L-BFGS) |
| > 500 samples | Full SAT with L-BFGS refinement |
| High-dimensional (>20 params) | Pure TPE (curse of dimensionality) |
| Noisy FEA | Pure TPE (surrogates struggle with noise) |
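
As a sketch, the table collapses to a simple guard chain (function name and return strings are illustrative, not a real API):

```python
def choose_method(n_samples: int, n_params: int, noisy_fea: bool) -> str:
    """Decision table above as code; thresholds taken from the table."""
    if noisy_fea or n_params > 20 or n_samples < 100:
        return "pure_tpe"
    if n_samples <= 500:
        return "sat_no_lbfgs"  # SAT Phases 1-2 only
    return "sat_full"          # full SAT with L-BFGS refinement
```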

---

## SAT v3 Implementation Details

### Adaptive Exploration Schedule

```python
def get_exploration_weight(trial_num):
    if trial_num <= 30:
        return 0.15  # Phase 1: 15% exploration
    elif trial_num <= 80:
        return 0.08  # Phase 2: 8% exploration
    else:
        return 0.03  # Phase 3: 3% exploitation
```

### Acquisition Function (v3)

```python
# Normalize components
norm_ws = (pred_ws - pred_ws.min()) / (pred_ws.max() - pred_ws.min())
norm_dist = distances / distances.max()
mass_penalty = max(0, pred_mass - 118.0) * 5.0  # Soft threshold at 118 kg

# Adaptive acquisition (lower = better)
acquisition = norm_ws - exploration_weight * norm_dist + mass_penalty
```
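
Putting numbers through the v3 acquisition makes the behavior concrete (values are synthetic; `np.maximum` replaces the scalar `max` so the formula runs over an array of candidates):

```python
import numpy as np

pred_ws = np.array([210.0, 230.0, 260.0])       # predicted WS per candidate
distances = np.array([0.02, 0.10, 0.30])        # distance to nearest past design
pred_mass = np.array([117.0, 119.0, 121.0])     # predicted mass [kg]

norm_ws = (pred_ws - pred_ws.min()) / (pred_ws.max() - pred_ws.min())
norm_dist = distances / distances.max()
mass_penalty = np.maximum(0.0, pred_mass - 118.0) * 5.0  # soft threshold at 118 kg

exploration_weight = 0.08                        # Phase 2 value
acquisition = norm_ws - exploration_weight * norm_dist + mass_penalty
best = int(np.argmin(acquisition))               # lower = better
```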

### Candidate Generation (v3)

```python
for _ in range(1000):
    if random() < 0.7 and best_x is not None:
        # 70% exploitation: sample near best
        scale = uniform(0.05, 0.15)
        candidate = sample_near_point(best_x, scale)
    else:
        # 30% exploration: random sampling
        candidate = sample_random()
```

### Key Configuration (v3)

```json
{
  "n_ensemble_models": 5,
  "training_epochs": 800,
  "candidates_per_round": 1000,
  "min_distance_threshold": 0.03,
  "mass_soft_threshold": 118.0,
  "exploit_near_best_ratio": 0.7,
  "lbfgs_polish_trials": 10
}
```

---

## V9 Results

| Phase | Trials | Best WS | Mean WS |
|-------|--------|---------|---------|
| Phase 1 (explore) | 30 | 232.00 | 394.48 |
| Phase 2 (balanced) | 50 | 222.01 | 360.51 |
| Phase 3 (exploit) | 57+ | **205.58** | 262.57 |

**Key metrics:**
- 100% feasibility rate
- 100% unique designs (no duplicates)
- Surrogate R² = 0.99

---

## References

- Gaussian Process literature on uncertainty quantification
- Deep Ensembles: Lakshminarayanan et al. (2017)
- Bayesian Optimization with Expected Improvement
- Trust-region methods for constrained optimization

---

## Implementation

- **V9 Study:** `studies/M1_Mirror/m1_mirror_cost_reduction_flat_back_V9/`
- **Script:** `run_sat_optimization.py`
- **Ensemble:** `optimization_engine/surrogates/ensemble_surrogate.py`

---

*The key insight: A surrogate that knows when it doesn't know is infinitely more valuable than one that's confidently wrong.*
553
hq/skills/atomizer-protocols/protocols/SYS_17_STUDY_INSIGHTS.md
Normal file
@@ -0,0 +1,553 @@
# SYS_17: Study Insights

**Version**: 1.0.0
**Status**: Active
**Purpose**: Physics-focused visualizations for FEA optimization results

---

## Overview

Study Insights provide **physics understanding** of optimization results through interactive 3D visualizations. Unlike the Analysis page (which shows optimizer metrics like convergence and Pareto fronts), Insights answer the question: **"What does this design actually look like?"**

### Analysis vs Insights

| Aspect | **Analysis** | **Insights** |
|--------|--------------|--------------|
| Focus | Optimization performance | Physics understanding |
| Questions | "Is the optimizer converging?" | "What does the best design look like?" |
| Data Source | `study.db` (trials, objectives) | Simulation outputs (OP2, mesh, fields) |
| Typical Plots | Convergence, Pareto, parameters | 3D surfaces, stress contours, mode shapes |
| When Used | During/after optimization | After specific trial of interest |

---

## Available Insight Types

| Type ID | Name | Applicable To | Data Required |
|---------|------|---------------|---------------|
| `zernike_dashboard` | **Zernike Dashboard (RECOMMENDED)** | Mirror, optics | OP2 with displacement subcases |
| `zernike_wfe` | Zernike WFE Analysis | Mirror, optics | OP2 with displacement subcases |
| `zernike_opd_comparison` | Zernike OPD Method Comparison | Mirror, optics, lateral | OP2 with displacement subcases |
| `msf_zernike` | MSF Zernike Analysis | Mirror, optics | OP2 with displacement subcases |
| `stress_field` | Stress Distribution | Structural, bracket, beam | OP2 with stress results |
| `modal` | Modal Analysis | Vibration, dynamic | OP2 with eigenvalue/eigenvector |
| `thermal` | Thermal Analysis | Thermo-structural | OP2 with temperature results |
| `design_space` | Design Space Explorer | All optimization studies | study.db with 5+ trials |

### Zernike Method Comparison: Standard vs OPD

The Zernike insights now support **two WFE computation methods**:

| Method | Description | When to Use |
|--------|-------------|-------------|
| **Standard (Z-only)** | Uses only Z-displacement at original (x,y) coordinates | Quick analysis, negligible lateral displacement |
| **OPD (X,Y,Z)** ← RECOMMENDED | Accounts for lateral (X,Y) displacement via interpolation | Any surface with gravity loads, most rigorous |

**How OPD method works**:
1. Builds interpolator from undeformed BDF mesh geometry
2. For each deformed node at `(x+dx, y+dy, z+dz)`, interpolates `Z_ideal` at new XY position
3. Computes `WFE = z_deformed - Z_ideal(x_def, y_def)`
4. Fits Zernike polynomials to the surface error map

**Typical difference**: OPD method gives **8-11% higher** WFE values than Standard (more conservative/accurate).
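
The OPD steps can be sketched on a toy mesh with `scipy.interpolate.LinearNDInterpolator` (surface, mesh, and displacement values are all synthetic; the production code reads them from the BDF/OP2 instead):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(500, 2))
z_ideal = 0.01 * (xy[:, 0] ** 2 + xy[:, 1] ** 2)   # undeformed toy surface

interp = LinearNDInterpolator(xy, z_ideal)          # step 1: interpolator from mesh

dx, dy = 1e-3, -2e-3                                # uniform lateral displacement
dz = 5e-4                                           # uniform normal displacement
x_def, y_def = xy[:, 0] + dx, xy[:, 1] + dy
z_def = z_ideal + dz

z_ideal_at_def = interp(x_def, y_def)               # step 2: Z_ideal at deformed XY
wfe = z_def - z_ideal_at_def                        # step 3: surface error
wfe = wfe[~np.isnan(wfe)]                           # drop nodes shifted off the hull
# step 4 (Zernike fit of the error map) omitted for brevity
```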

---

## Architecture

### Module Structure

```
optimization_engine/insights/
├── __init__.py                 # Registry and public API
├── base.py                     # StudyInsight base class, InsightConfig, InsightResult
├── zernike_wfe.py              # Mirror wavefront error visualization (50 modes)
├── zernike_opd_comparison.py   # OPD vs Standard method comparison (lateral disp. analysis)
├── msf_zernike.py              # MSF band decomposition (100 modes, LSF/MSF/HSF)
├── stress_field.py             # Stress contour visualization
├── modal_analysis.py           # Mode shape visualization
├── thermal_field.py            # Temperature distribution
└── design_space.py             # Parameter-objective exploration
```

### Class Hierarchy

```python
StudyInsight (ABC)
├── ZernikeDashboardInsight       # RECOMMENDED: Unified dashboard with all views
├── ZernikeWFEInsight             # Standard 50-mode WFE analysis (with OPD toggle)
├── ZernikeOPDComparisonInsight   # OPD method comparison (lateral displacement)
├── MSFZernikeInsight             # 100-mode MSF band analysis
├── StressFieldInsight
├── ModalInsight
├── ThermalInsight
└── DesignSpaceInsight
```

### Key Classes

#### StudyInsight (Base Class)

```python
class StudyInsight(ABC):
    insight_type: str         # Unique identifier (e.g., 'zernike_wfe')
    name: str                 # Human-readable name
    description: str          # What this insight shows
    applicable_to: List[str]  # Study types this applies to

    def can_generate(self) -> bool:
        """Check if required data exists."""

    def generate(self, config: InsightConfig) -> InsightResult:
        """Generate visualization."""

    def generate_html(self, trial_id=None, **kwargs) -> Path:
        """Generate standalone HTML file."""

    def get_plotly_data(self, trial_id=None, **kwargs) -> dict:
        """Get Plotly figure for dashboard embedding."""
```

#### InsightConfig

```python
@dataclass
class InsightConfig:
    trial_id: Optional[int] = None     # Which trial to visualize
    colorscale: str = 'Turbo'          # Plotly colorscale
    amplification: float = 1.0         # Deformation scale factor
    lighting: bool = True              # 3D lighting effects
    output_dir: Optional[Path] = None  # Where to save HTML
    extra: Dict[str, Any] = field(default_factory=dict)  # Type-specific config
```

#### InsightResult

```python
@dataclass
class InsightResult:
    success: bool
    html_path: Optional[Path] = None      # Generated HTML file
    plotly_figure: Optional[dict] = None  # Figure for dashboard
    summary: Optional[dict] = None        # Key metrics
    error: Optional[str] = None           # Error message if failed
```

---

## Usage

### Python API

```python
from optimization_engine.insights import get_insight, list_available_insights
from pathlib import Path

study_path = Path("studies/my_mirror_study")

# List what's available
available = list_available_insights(study_path)
for info in available:
    print(f"{info['type']}: {info['name']}")

# Generate specific insight
insight = get_insight('zernike_wfe', study_path)
if insight and insight.can_generate():
    result = insight.generate()
    print(f"Generated: {result.html_path}")
    print(f"40-20 Filtered RMS: {result.summary['40_vs_20_filtered_rms']:.2f} nm")
```

### CLI

```bash
# List all insight types
python -m optimization_engine.insights list

# Generate all available insights for a study
python -m optimization_engine.insights generate studies/my_study

# Generate specific insight
python -m optimization_engine.insights generate studies/my_study --type zernike_wfe
```

### With Configuration

```python
from optimization_engine.insights import get_insight, InsightConfig

insight = get_insight('stress_field', study_path)
config = InsightConfig(
    colorscale='Hot',
    extra={
        'yield_stress': 250,  # MPa
        'stress_unit': 'MPa'
    }
)
result = insight.generate(config)
```

---

## Insight Type Details

### 0. Zernike Dashboard (`zernike_dashboard`) - RECOMMENDED

**Purpose**: Unified dashboard with all orientations (40°, 60°, 90°) and MSF band analysis on one page. Light theme, executive summary, and method comparison.

**Generates**: 1 comprehensive HTML file with:
- Executive summary with metric cards (40-20, 60-20, MFG workload)
- MSF band analysis (LSF/MSF/HSF decomposition)
- 3D surface plots for each orientation
- Zernike coefficient bar charts color-coded by band

**Configuration**:
```python
config = InsightConfig(
    extra={
        'n_modes': 50,
        'filter_low_orders': 4,
        'theme': 'light',  # Light theme for reports
    }
)
```

**Summary Output**:
```python
{
    '40_vs_20_filtered_rms': 6.53,   # nm (OPD method)
    '60_vs_20_filtered_rms': 14.21,  # nm (OPD method)
    '90_optician_workload': 26.34,   # nm (J1-J3 filtered)
    'msf_rss_40': 2.1,               # nm (MSF band contribution)
}
```
### 1. Zernike WFE Analysis (`zernike_wfe`)

**Purpose**: Visualize wavefront error for mirror optimization with Zernike polynomial decomposition. **Now includes Standard/OPD method toggle and lateral displacement maps**.

**Generates**: 6 HTML files
- `zernike_*_40_vs_20.html` - 40° vs 20° relative WFE (with method toggle)
- `zernike_*_40_lateral.html` - Lateral displacement map for 40°
- `zernike_*_60_vs_20.html` - 60° vs 20° relative WFE (with method toggle)
- `zernike_*_60_lateral.html` - Lateral displacement map for 60°
- `zernike_*_90_mfg.html` - 90° manufacturing (with method toggle)
- `zernike_*_90_mfg_lateral.html` - Lateral displacement map for 90°

**Features**:
- Toggle buttons to switch between **Standard (Z-only)** and **OPD (X,Y,Z)** methods
- Toggle between WFE view and **ΔX, ΔY, ΔZ displacement components**
- Metrics comparison table showing both methods side-by-side
- Lateral displacement statistics (Max, RMS in µm)

**Configuration**:
```python
config = InsightConfig(
    amplification=0.5,  # Reduce deformation scaling
    colorscale='Turbo',
    extra={
        'n_modes': 50,
        'filter_low_orders': 4,  # Remove piston, tip, tilt, defocus
        'disp_unit': 'mm',
    }
)
```

**Summary Output**:
```python
{
    '40_vs_20_filtered_rms_std': 6.01,   # nm (Standard method)
    '40_vs_20_filtered_rms_opd': 6.53,   # nm (OPD method)
    '60_vs_20_filtered_rms_std': 12.81,  # nm
    '60_vs_20_filtered_rms_opd': 14.21,  # nm
    '90_mfg_filtered_rms_std': 24.5,     # nm
    '90_mfg_filtered_rms_opd': 26.34,    # nm
    '90_optician_workload': 26.34,       # nm (J1-J3 filtered)
    'lateral_40_max_um': 0.234,          # µm max lateral displacement
    'lateral_60_max_um': 0.312,          # µm
    'lateral_90_max_um': 0.089,          # µm
}
```

### 2. MSF Zernike Analysis (`msf_zernike`)

**Purpose**: Detailed mid-spatial frequency analysis for telescope mirrors with gravity-induced support print-through.

**Generates**: 1 comprehensive HTML file with:
- Band decomposition table (LSF/MSF/HSF RSS metrics)
- MSF-only 3D surface visualization
- Coefficient bar chart color-coded by band
- Dominant MSF mode identification
- Mesh resolution analysis

**Band Definitions** (for 1.2m class mirror):

| Band | Zernike Order | Feature Size | Physical Meaning |
|------|---------------|--------------|------------------|
| LSF | n ≤ 10 | > 120 mm | M2 hexapod correctable |
| MSF | n = 11-50 | 24-109 mm | Support print-through |
| HSF | n > 50 | < 24 mm | Near mesh resolution limit |
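
The band decomposition reduces to grouping Zernike coefficients by radial order and taking the root-sum-square per group; a sketch (function and argument names are illustrative, not the `msf_zernike.py` API):

```python
import numpy as np

def band_rss(coeffs_nm, orders_n, lsf_max=10, msf_max=50):
    """RSS of Zernike coefficients grouped into LSF/MSF/HSF by radial order n."""
    c = np.asarray(coeffs_nm, dtype=float)
    n = np.asarray(orders_n)
    rss = lambda mask: float(np.sqrt(np.sum(c[mask] ** 2)))
    return {
        "lsf_rss": rss(n <= lsf_max),                   # n ≤ 10
        "msf_rss": rss((n > lsf_max) & (n <= msf_max)), # n = 11-50
        "hsf_rss": rss(n > msf_max),                    # n > 50
    }
```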

**Configuration**:
```python
config = InsightConfig(
    extra={
        'n_modes': 100,  # Higher than zernike_wfe (100 vs 50)
        'lsf_max': 10,   # n ≤ 10 is LSF
        'msf_max': 50,   # n = 11-50 is MSF
        'disp_unit': 'mm',
    }
)
```

**Analyses Performed**:
- Absolute WFE at each orientation (40°, 60°, 90°)
- Relative to 20° (operational reference)
- Relative to 90° (manufacturing/polishing reference)

**Summary Output**:
```python
{
    'n_modes': 100,
    'lsf_max_order': 10,
    'msf_max_order': 50,
    'mesh_nodes': 78290,
    'mesh_spacing_mm': 4.1,
    'max_resolvable_order': 157,
    '40deg_vs_20deg_lsf_rss': 12.3,    # nm
    '40deg_vs_20deg_msf_rss': 8.7,     # nm - KEY METRIC
    '40deg_vs_20deg_total_rss': 15.2,  # nm
    '40deg_vs_20deg_msf_pct': 33.0,    # % of total in MSF band
    # ... similar for 60deg, 90deg
}
```

**When to Use**:
- Analyzing support structure print-through
- Quantifying gravity-induced MSF content
- Comparing MSF at different orientations
- Validating mesh resolution is adequate for MSF capture

---

### 3. Stress Distribution (`stress_field`)

**Purpose**: Visualize Von Mises stress distribution with hot spot identification.

**Configuration**:
```python
config = InsightConfig(
    colorscale='Hot',
    extra={
        'yield_stress': 250,  # MPa - shows safety factor
        'stress_unit': 'MPa',
    }
)
```

**Summary Output**:
```python
{
    'max_stress': 187.5,    # MPa
    'mean_stress': 45.2,    # MPa
    'p95_stress': 120.3,    # 95th percentile
    'p99_stress': 165.8,    # 99th percentile
    'safety_factor': 1.33,  # If yield_stress provided
}
```
### 4. Modal Analysis (`modal`)

**Purpose**: Visualize natural frequencies and mode shapes.

**Configuration**:
```python
config = InsightConfig(
    amplification=50.0,  # Mode shape scale
    extra={
        'n_modes': 20,   # Number of modes to show
        'show_mode': 1,  # Which mode shape to display
    }
)
```

**Summary Output**:
```python
{
    'n_modes': 20,
    'first_frequency_hz': 125.4,
    'frequencies_hz': [125.4, 287.8, 312.5, ...],
}
```
### 5. Thermal Analysis (`thermal`)

**Purpose**: Visualize temperature distribution and gradients.

**Configuration**:
```python
config = InsightConfig(
    colorscale='Thermal',
    extra={
        'temp_unit': 'K',  # or 'C', 'F'
    }
)
```

**Summary Output**:
```python
{
    'max_temp': 423.5,    # K
    'min_temp': 293.0,    # K
    'mean_temp': 345.2,   # K
    'temp_range': 130.5,  # K
}
```
### 6. Design Space Explorer (`design_space`)
|
||||
|
||||
**Purpose**: Visualize parameter-objective relationships from optimization trials.
|
||||
|
||||
**Configuration**:
|
||||
```python
|
||||
config = InsightConfig(
|
||||
extra={
|
||||
'primary_objective': 'filtered_rms', # Color by this objective
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
**Summary Output**:
|
||||
```python
|
||||
{
|
||||
'n_trials': 100,
|
||||
'n_params': 4,
|
||||
'n_objectives': 2,
|
||||
'best_trial_id': 47,
|
||||
'best_params': {'p1': 0.5, 'p2': 1.2, ...},
|
||||
'best_values': {'filtered_rms': 45.2, 'mass': 2.34},
|
||||
}
|
||||
```

---

## Output Directory

Insights are saved to `{study}/3_insights/`:

```
studies/my_study/
├── 1_setup/
├── 2_results/
└── 3_insights/                 # Created by insights module
    ├── zernike_20241220_143022_40_vs_20.html
    ├── zernike_20241220_143022_60_vs_20.html
    ├── zernike_20241220_143022_90_mfg.html
    ├── stress_20241220_143025.html
    └── design_space_20241220_143030.html
```

---

## Creating New Insight Types

To add a new insight type (power_user+):

### 1. Create the insight class

```python
# optimization_engine/insights/my_insight.py

from .base import StudyInsight, InsightConfig, InsightResult, register_insight

@register_insight
class MyInsight(StudyInsight):
    insight_type = "my_insight"
    name = "My Custom Insight"
    description = "Description of what it shows"
    applicable_to = ["structural", "all"]

    def can_generate(self) -> bool:
        # Check if required data exists
        return self.results_path.exists()

    def _generate(self, config: InsightConfig) -> InsightResult:
        # Generate visualization
        # ... build Plotly figure ...

        html_path = config.output_dir / f"my_insight_{timestamp}.html"
        html_path.write_text(fig.to_html(...))

        return InsightResult(
            success=True,
            html_path=html_path,
            summary={'key_metric': value}
        )
```

### 2. Register in `__init__.py`

```python
from .my_insight import MyInsight
```

### 3. Test

```bash
python -m optimization_engine.insights list
# Should show "my_insight" in the list
```

---

## Dashboard Integration

The Insights tab in the Atomizer Dashboard provides a 3-step workflow:

### Step 1: Select Iteration
- Lists all available iterations (iter1, iter2, etc.) and best_design_archive
- Shows OP2 file name and modification timestamp
- Auto-selects "Best Design (Recommended)" if available

### Step 2: Choose Insight Type
- Groups insights by category (Optical, Structural, Thermal, etc.)
- Shows insight name and description
- Click to select, then "Generate Insight"

### Step 3: View Result
- Displays summary metrics (RMS values, etc.)
- Embedded Plotly visualization (if available)
- "Open Full View" button for multi-file insights (like Zernike WFE)
- Fullscreen mode for detailed analysis

### API Endpoints

```
GET  /api/insights/studies/{id}/iterations      # List available iterations
GET  /api/insights/studies/{id}/available       # List available insight types
GET  /api/insights/studies/{id}/generated       # List previously generated files
POST /api/insights/studies/{id}/generate/{type} # Generate insight for iteration
GET  /api/insights/studies/{id}/view/{type}     # View generated HTML
```

### Generate Request Body

```json
{
  "iteration": "best_design_archive",  // or "iter5", etc.
  "trial_id": null,                    // Optional specific trial
  "config": {}                         // Insight-specific config
}
```
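
From a script, the generate call can be assembled straight from the endpoint table and request body above; a sketch (the helper name and the `localhost` base URL are assumptions — substitute your dashboard's address, and use any HTTP client to POST):

```python
import json

def build_generate_request(study_id, insight_type,
                           iteration="best_design_archive",
                           trial_id=None, config=None):
    """Assemble the POST path and JSON body for the generate endpoint."""
    path = f"/api/insights/studies/{study_id}/generate/{insight_type}"
    body = {"iteration": iteration, "trial_id": trial_id, "config": config or {}}
    return path, body

path, body = build_generate_request("my_study", "zernike")
print(path)
print(json.dumps(body))
# Then POST it, e.g.:
# requests.post(f"http://localhost:8000{path}", json=body)  # base URL assumed
```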

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.3.0 | 2025-12-22 | Added ZernikeDashboardInsight (unified view), OPD method toggle, lateral displacement maps |
| 1.2.0 | 2024-12-22 | Dashboard overhaul: 3-step workflow, iteration selection, faster loading |
| 1.1.0 | 2024-12-21 | Added MSF Zernike Analysis insight (6 insight types) |
| 1.0.0 | 2024-12-20 | Initial release with 5 insight types |

---
protocol_id: SYS_17
version: 1.0
last_updated: 2025-12-29
status: active
owner: system
code_dependencies:
  - optimization_engine.context.*
requires_protocols: []
---

# SYS_17: Context Engineering System

## Overview

The Context Engineering System implements the **Agentic Context Engineering (ACE)** framework, enabling Atomizer to learn from every optimization run and accumulate institutional knowledge over time.

## When to Load This Protocol

Load SYS_17 when:
- User asks about "learning", "playbook", or "context engineering"
- Debugging why certain knowledge isn't being applied
- Configuring context behavior
- Analyzing what the system has learned

## Core Concepts

### The ACE Framework

```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Generator  │────▶│  Reflector  │────▶│   Curator   │
│ (Opt Runs)  │     │ (Analysis)  │     │ (Playbook)  │
└─────────────┘     └─────────────┘     └─────────────┘
       │                                       │
       └───────────── Feedback ────────────────┘
```

1. **Generator**: OptimizationRunner produces trial outcomes
2. **Reflector**: Analyzes outcomes, extracts patterns
3. **Curator**: Playbook stores and manages insights
4. **Feedback**: Success/failure updates insight scores

### Playbook Item Structure

```
[str-00001] helpful=8 harmful=0 :: "Use shell elements for thin walls"
     │          │         │          │
     │          │         │          └── Insight content
     │          │         └── Times advice led to failure
     │          └── Times advice led to success
     └── Unique ID (category-number)
```
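
The scoring arithmetic implied by this layout is simple; an illustrative dataclass (the real item class lives in `optimization_engine/context/playbook.py` and may carry more fields):

```python
from dataclasses import dataclass

@dataclass
class PlaybookItem:
    """Illustrative shape of one playbook entry (not the actual class)."""
    id: str           # category code + number, e.g. "str-00001"
    content: str      # the insight text
    helpful: int = 0  # times advice led to success
    harmful: int = 0  # times advice led to failure

    @property
    def net_score(self) -> int:
        # Items with net_score < 0 are pruning candidates (see Best Practices)
        return self.helpful - self.harmful

item = PlaybookItem("str-00001", "Use shell elements for thin walls", helpful=8)
print(item.net_score)  # → 8
```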

### Categories

| Code | Name | Description | Example |
|------|------|-------------|---------|
| `str` | STRATEGY | Optimization approaches | "Start with TPE, switch to CMA-ES" |
| `mis` | MISTAKE | Things to avoid | "Don't use coarse mesh for stress" |
| `tool` | TOOL | Tool usage tips | "Use GP sampler for few-shot" |
| `cal` | CALCULATION | Formulas | "Safety factor = yield/max_stress" |
| `dom` | DOMAIN | Domain knowledge | "Zernike coefficients for mirrors" |
| `wf` | WORKFLOW | Workflow patterns | "Load _i.prt before UpdateFemodel()" |

## Key Components

### 1. AtomizerPlaybook

Location: `optimization_engine/context/playbook.py`

The central knowledge store. Handles:
- Adding insights (with auto-deduplication)
- Recording helpful/harmful outcomes
- Generating filtered context for LLM
- Pruning consistently harmful items
- Persistence (JSON)

**Quick Usage:**
```python
from optimization_engine.context import get_playbook, save_playbook, InsightCategory

playbook = get_playbook()
playbook.add_insight(InsightCategory.STRATEGY, "Use shell elements for thin walls")
playbook.record_outcome("str-00001", helpful=True)
save_playbook()
```

### 2. AtomizerReflector

Location: `optimization_engine/context/reflector.py`

Analyzes optimization outcomes to extract insights:
- Classifies errors (convergence, mesh, singularity, etc.)
- Extracts success patterns
- Generates study-level insights

**Quick Usage:**
```python
from optimization_engine.context import AtomizerReflector, OptimizationOutcome

reflector = AtomizerReflector(playbook)
outcome = OptimizationOutcome(trial_number=42, success=True, ...)
insights = reflector.analyze_trial(outcome)
reflector.commit_insights()
```

### 3. FeedbackLoop

Location: `optimization_engine/context/feedback_loop.py`

Automated learning loop that:
- Processes trial results
- Updates playbook scores based on outcomes
- Tracks which items were active per trial
- Finalizes learning at study end

**Quick Usage:**
```python
from optimization_engine.context import FeedbackLoop

feedback = FeedbackLoop(playbook_path)
feedback.process_trial_result(trial_number=42, success=True, ...)
feedback.finalize_study({"name": "study", "total_trials": 100, ...})
```

### 4. SessionState

Location: `optimization_engine/context/session_state.py`

Manages context isolation:
- **Exposed**: Always in LLM context (task type, recent actions, errors)
- **Isolated**: On-demand access (full history, NX paths, F06 content)

**Quick Usage:**
```python
from optimization_engine.context import get_session, TaskType

session = get_session()
session.exposed.task_type = TaskType.RUN_OPTIMIZATION
session.add_action("Started trial 42")
context = session.get_llm_context()
```

### 5. CompactionManager

Location: `optimization_engine/context/compaction.py`

Handles long sessions:
- Triggers compaction at threshold (default 50 events)
- Summarizes old events into statistics
- Preserves errors and milestones

### 6. CacheOptimizer

Location: `optimization_engine/context/cache_monitor.py`

Optimizes for KV-cache:
- Three-tier context structure (stable/semi-stable/dynamic)
- Tracks cache hit rate
- Estimates cost savings

## Integration with OptimizationRunner

### Option 1: Mixin

```python
from optimization_engine.context.runner_integration import ContextEngineeringMixin

class MyRunner(ContextEngineeringMixin, OptimizationRunner):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.init_context_engineering()
```

### Option 2: Wrapper

```python
from optimization_engine.context.runner_integration import ContextAwareRunner

runner = OptimizationRunner(config_path=...)
context_runner = ContextAwareRunner(runner)
context_runner.run(n_trials=100)
```

## Dashboard API

Base URL: `/api/context`

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/playbook` | GET | Playbook summary |
| `/playbook/items` | GET | List items (with filters) |
| `/playbook/items/{id}` | GET | Get specific item |
| `/playbook/feedback` | POST | Record helpful/harmful |
| `/playbook/insights` | POST | Add new insight |
| `/playbook/prune` | POST | Prune harmful items |
| `/playbook/context` | GET | Get LLM context string |
| `/session` | GET | Session state |
| `/learning/report` | GET | Learning report |

## Best Practices

### 1. Record Immediately

Don't wait until session end:
```python
# RIGHT: Record immediately
playbook.add_insight(InsightCategory.MISTAKE, "Convergence failed with X")
playbook.save(path)

# WRONG: Wait until end
# (User might close session, learning lost)
```

### 2. Be Specific

```python
# GOOD: Specific and actionable
"For bracket optimization with >5 variables, TPE outperforms random search"

# BAD: Vague
"TPE is good"
```

### 3. Include Context

```python
playbook.add_insight(
    InsightCategory.STRATEGY,
    "Shell elements reduce solve time by 40% for thickness < 2mm",
    tags=["mesh", "shell", "performance"]
)
```

### 4. Review Harmful Items

Periodically check items with negative scores:
```python
harmful = [i for i in playbook.items.values() if i.net_score < 0]
for item in harmful:
    print(f"{item.id}: {item.content[:50]}... (score={item.net_score})")
```

## Troubleshooting

### Playbook Not Updating

1. Check playbook path:
```python
print(playbook_path)  # Should be knowledge_base/playbook.json
```

2. Verify save is called:
```python
playbook.save(path)  # Must be explicit
```

### Insights Not Appearing in Context

1. Check confidence threshold:
```python
# Default is 0.5 - new items start at 0.5
context = playbook.get_context_for_task("opt", min_confidence=0.3)
```

2. Check if items exist:
```python
print(f"Total items: {len(playbook.items)}")
```

### Learning Not Working

1. Verify FeedbackLoop is finalized:
```python
feedback.finalize_study(...)  # MUST be called
```

2. Check context_items_used parameter:
```python
# Items must be explicitly tracked
feedback.process_trial_result(
    ...,
    context_items_used=list(playbook.items.keys())[:10]
)
```

## Files Reference

| File | Purpose |
|------|---------|
| `optimization_engine/context/__init__.py` | Module exports |
| `optimization_engine/context/playbook.py` | Knowledge store |
| `optimization_engine/context/reflector.py` | Outcome analysis |
| `optimization_engine/context/session_state.py` | Context isolation |
| `optimization_engine/context/feedback_loop.py` | Learning loop |
| `optimization_engine/context/compaction.py` | Long session management |
| `optimization_engine/context/cache_monitor.py` | KV-cache optimization |
| `optimization_engine/context/runner_integration.py` | Runner integration |
| `knowledge_base/playbook.json` | Persistent storage |

## See Also

- `docs/CONTEXT_ENGINEERING_REPORT.md` - Full implementation report
- `.claude/skills/00_BOOTSTRAP_V2.md` - Enhanced bootstrap
- `tests/test_context_engineering.py` - Unit tests
- `tests/test_context_integration.py` - Integration tests
93
hq/skills/atomizer-protocols/protocols/SYS_19_JOB_QUEUE.md
Normal file
# SYS_19 — Job Queue Protocol

## Purpose
Defines how agents submit and monitor optimization jobs that execute on Windows (NX/Simcenter).

## Architecture

```
Linux (Agents)                  Windows (NX/Simcenter)
/job-queue/                     C:\Atomizer\job-queue\
├── inbox/    ← results         ├── inbox/
├── outbox/   → jobs            ├── outbox/
└── archive/  (processed)       └── archive/
```

Syncthing keeps these directories in sync (5-30 second delay).

## Submitting a Job

### Study Builder creates job directory:
```
outbox/job-YYYYMMDD-HHMMSS-<name>/
├── job.json             # Job manifest (REQUIRED)
├── run_optimization.py  # The script to execute
├── atomizer_spec.json   # Study configuration (if applicable)
├── README.md            # Human-readable description
└── 1_setup/             # Model files
    ├── *.prt            # NX parts
    ├── *_i.prt          # Idealized parts
    ├── *.fem            # FEM files
    └── *.sim            # Simulation files
```

### job.json Format
```json
{
  "job_id": "job-20260210-143022-wfe",
  "created_at": "2026-02-10T14:30:22Z",
  "created_by": "study-builder",
  "project": "starspec-m1-wfe",
  "channel": "#starspec-m1-wfe",
  "type": "optimization",
  "script": "run_optimization.py",
  "args": ["--start"],
  "status": "submitted",
  "notify": {
    "on_complete": true,
    "on_fail": true
  }
}
```
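
The submission side can be scripted directly from this layout; a sketch of a minimal submitter (the helper and the subset of manifest fields shown are illustrative — a real job also ships the script, `README.md`, and `1_setup/` files):

```python
import json
import tempfile
import time
from pathlib import Path

def submit_job(outbox: Path, name: str, script: str = "run_optimization.py") -> Path:
    """Create outbox/job-YYYYMMDD-HHMMSS-<name>/ with its job.json manifest."""
    job_id = f"job-{time.strftime('%Y%m%d-%H%M%S')}-{name}"
    job_dir = outbox / job_id
    job_dir.mkdir(parents=True)
    manifest = {
        "job_id": job_id,
        "created_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "created_by": "study-builder",
        "type": "optimization",
        "script": script,
        "args": ["--start"],
        "status": "submitted",  # the Windows side flips this to running/completed/failed
        "notify": {"on_complete": True, "on_fail": True},
    }
    (job_dir / "job.json").write_text(json.dumps(manifest, indent=2))
    return job_dir

# Demo in a throwaway outbox
demo = submit_job(Path(tempfile.mkdtemp()), "wfe")
manifest = json.loads((demo / "job.json").read_text())
print(demo.name, manifest["status"])
```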

## Monitoring a Job

Agents check job status by reading job.json files:
- `outbox/` → Submitted, waiting for sync
- After Antoine runs the script, results appear in `inbox/`

### Status Values
| Status | Meaning |
|--------|---------|
| `submitted` | Agent placed job in outbox |
| `running` | Antoine started execution |
| `completed` | Finished successfully |
| `failed` | Execution failed |

## Receiving Results

Results arrive in `inbox/` with updated job.json and result files:
```
inbox/job-YYYYMMDD-HHMMSS-<name>/
├── job.json        # Updated status
├── 3_results/      # Output data
│   ├── study.db    # Optuna study database
│   ├── *.csv       # Result tables
│   └── *.png       # Generated plots
└── stdout.log      # Execution log
```
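
Result polling reduces to reading manifests out of `inbox/`; a sketch (the helper name is illustrative; status values per the table above):

```python
import json
import tempfile
from pathlib import Path

def finished_jobs(inbox: Path):
    """Return (job_name, status) for inbox jobs that reached a terminal state."""
    done = []
    for manifest_path in sorted(inbox.glob("job-*/job.json")):
        status = json.loads(manifest_path.read_text()).get("status")
        if status in ("completed", "failed"):
            done.append((manifest_path.parent.name, status))
    return done

# Tiny demo against a fake inbox: one finished job, one still running
inbox = Path(tempfile.mkdtemp())
for name, status in [("job-20260210-143022-wfe", "completed"),
                     ("job-20260210-150000-modal", "running")]:
    (inbox / name).mkdir()
    (inbox / name / "job.json").write_text(json.dumps({"status": status}))
result = finished_jobs(inbox)
print(result)  # only the completed job is reported
```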

## Post-Processing

1. Manager's heartbeat detects new results in `inbox/`
2. Manager notifies Post-Processor
3. Post-Processor analyzes results
4. Move processed job to `archive/` with timestamp

## Rules

1. **Never modify files in inbox/ directly** — copy first, then process
2. **Always include job.json** — it's the job's identity
3. **Use descriptive names** — `job-20260210-143022-starspec-wfe` not `job-1`
4. **Include README.md** — so Antoine knows what the job does at a glance
5. **Relative paths only** — no absolute Windows/Linux paths in scripts
# SYS_20 — Agent Memory Protocol

## Purpose
Defines how agents read and write shared knowledge across the company.

## Memory Layers

### Layer 1: Company Memory (Shared, Read-Only)
**Location:** `atomizer-protocols` and `atomizer-company` skills
**Access:** All agents read. Manager proposes updates → Antoine approves.
**Contains:** Protocols, company identity, LAC critical lessons.

### Layer 2: Agent Memory (Per-Agent, Read-Write)
**Location:** Each agent's `MEMORY.md` and `memory/` directory
**Access:** Each agent owns their memory. Auditor can read others (for audits).
**Contains:**
- `MEMORY.md` — Long-term role knowledge, lessons, patterns
- `memory/<project>.md` — Per-project working notes
- `memory/YYYY-MM-DD.md` — Daily activity log

### Layer 3: Project Knowledge (Shared, via Repo)
**Location:** `/repos/Atomizer/knowledge_base/projects/<project>/`
**Access:** All agents read. Manager coordinates writes.
**Contains:**
- `CONTEXT.md` — Project briefing (parameters, objectives, constraints)
- `decisions.md` — Key decisions made during the project
- `model-knowledge.md` — CAD/FEM details from KB Agent

## Rules

### Writing Memory
1. **Write immediately** — don't wait until end of session
2. **Write in your own workspace** — never modify another agent's files
3. **Daily logs are raw** — `memory/YYYY-MM-DD.md` captures what happened
4. **MEMORY.md is curated** — distill lessons from daily logs periodically
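
Rules 1 and 3 in code form — a sketch of appending a raw entry to an agent's own daily log (the helper name and bullet format are illustrative, not a mandated API):

```python
import datetime
import tempfile
from pathlib import Path

def log_daily(workspace: Path, entry: str) -> Path:
    """Append one raw bullet to memory/YYYY-MM-DD.md in the agent's own workspace."""
    today = datetime.date.today().isoformat()  # YYYY-MM-DD
    log = workspace / "memory" / f"{today}.md"
    log.parent.mkdir(parents=True, exist_ok=True)
    with log.open("a") as f:
        f.write(f"- {entry}\n")  # write immediately, don't batch until session end
    return log

# Demo in a throwaway workspace
log = log_daily(Path(tempfile.mkdtemp()), "Started trial 42")
print(log.read_text())
```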

### Reading Memory
1. **Start every session** by reading MEMORY.md + recent daily logs
2. **Before starting a project**, read the project's CONTEXT.md
3. **Before making technical decisions**, check LAC_CRITICAL.md

### Sharing Knowledge
When an agent discovers something the company should know:
1. Write it to your own MEMORY.md first
2. Flag it to Manager: "New insight worth sharing: [summary]"
3. Manager reviews and decides whether to promote to company knowledge
4. If promoted: Manager directs update to shared skills or knowledge_base/

### What to Remember
- Technical decisions and their reasoning
- Things that went wrong and why
- Things that worked well
- Client preferences and patterns
- Solver quirks and workarounds
- Algorithm performance on different problem types

### What NOT to Store
- API keys, passwords, tokens
- Client confidential data (store only what's needed for the work)
- Raw FEA output files (too large — store summaries and key metrics)