# Create Optimization Study Skill

**Last Updated**: November 25, 2025
**Version**: 1.1 - Complete Study Scaffolding with Validator Integration
You are helping the user create a complete Atomizer optimization study from a natural language description.

## Your Role

Guide the user through an interactive conversation to:

1. Understand their optimization problem
2. Classify objectives, constraints, and design variables
3. Create the complete study infrastructure
4. Generate all required files with proper configuration
5. Provide clear next steps for running the optimization
## Study Structure

A complete Atomizer study has this structure:

**CRITICAL**: All study files, including README.md and results, MUST be located within the study directory. NEVER create study documentation at the project root.

```
studies/{study_name}/
├── 1_setup/
│   ├── model/
│   │   ├── {Model}.prt               # NX part file (user provides)
│   │   ├── {Model}_sim1.sim          # NX simulation file (user provides)
│   │   └── {Model}_fem1.fem          # FEM mesh file (auto-generated by NX)
│   ├── optimization_config.json      # YOU GENERATE THIS
│   └── workflow_config.json          # YOU GENERATE THIS
├── 2_results/                        # Created automatically during optimization
│   ├── study.db                      # Optuna SQLite database
│   ├── optimization_history_incremental.json
│   └── [various analysis files]
├── run_optimization.py               # YOU GENERATE THIS
├── reset_study.py                    # YOU GENERATE THIS
├── README.md                         # YOU GENERATE THIS (INSIDE the study directory!)
└── NX_FILE_MODIFICATIONS_REQUIRED.md # YOU GENERATE THIS (if needed)
```
## Interactive Discovery Process

### Step 1: Problem Understanding

Ask clarifying questions to understand:

**Engineering Context**:
- "What component are you optimizing?"
- "What is the engineering application or scenario?"
- "What are the real-world requirements or constraints?"

**Objectives**:
- "What do you want to optimize?" (minimize/maximize)
- "Is this single-objective or multi-objective?"
- "What are the target values or acceptable ranges?"

**Constraints**:
- "What limits must be satisfied?"
- "What are the threshold values?"
- "Are these hard constraints (must satisfy) or soft constraints (prefer to satisfy)?"

**Design Variables**:
- "What parameters can be changed?"
- "What are the min/max bounds for each parameter?"
- "Are these NX expressions, geometry features, or material properties?"

**Simulation Setup**:
- "What NX model files do you have?"
- "What analysis types are needed?" (static, modal, thermal, etc.)
- "What results need to be extracted?" (stress, displacement, frequency, mass, etc.)
### Step 2: Classification & Analysis

Use the `analyze-workflow` skill to classify the problem:

```bash
# Invoke the analyze-workflow skill with the user's description.
# It returns JSON with classified engineering features, extractors, etc.
```

Review the classification with the user and confirm:
- Are the objectives correctly identified?
- Are the constraints properly classified?
- Are the extractors mapped to the right result types?
- Is the protocol selection appropriate?
### Step 3: Protocol Selection

Based on the analysis, recommend a protocol:

**Protocol 11 (Multi-Objective NSGA-II)**:
- Use when: 2-3 conflicting objectives
- Algorithm: NSGAIISampler
- Output: Pareto front of optimal trade-offs
- Example: Minimize mass + maximize frequency

**Protocol 10 (Single-Objective with Intelligent Strategies)**:
- Use when: 1 objective with constraints
- Algorithm: TPE, CMA-ES, or adaptive
- Output: Single optimal solution
- Example: Minimize stress subject to displacement < 1.5 mm

**Legacy (Basic TPE)**:
- Use when: Simple single-objective problem
- Algorithm: TPE
- Output: Single optimal solution
- Example: Quick exploration or testing
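The selection rules above reduce to a small decision helper; the function and the protocol identifier strings here are illustrative sketches, not part of the Atomizer API:

```python
def recommend_protocol(n_objectives: int, has_constraints: bool,
                       quick_test: bool = False) -> str:
    """Map problem shape to a protocol name, following the rules above."""
    if 2 <= n_objectives <= 3:
        return "protocol_11_multi_objective"   # NSGA-II Pareto front
    if n_objectives == 1 and has_constraints and not quick_test:
        return "protocol_10_single_objective"  # TPE / CMA-ES with strategies
    return "legacy_tpe"                        # quick exploration or testing
```

Always state the reasoning to the user ("2 conflicting objectives, so Protocol 11") rather than applying the rule silently.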
### Step 4: Extractor Mapping

Map each result extraction to the centralized extractors:

| User Need | Extractor | Parameters |
|-----------|-----------|------------|
| Displacement | `extract_displacement` | `op2_file`, `subcase` |
| Von Mises Stress | `extract_solid_stress` | `op2_file`, `subcase`, `element_type` |
| Natural Frequency | `extract_frequency` | `op2_file`, `subcase`, `mode_number` |
| FEM Mass | `extract_mass_from_bdf` | `bdf_file` |
| CAD Mass | `extract_mass_from_expression` | `prt_file`, `expression_name` |
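When generating the config's extraction blocks, it can help to keep this table in code. The dict below simply transcribes the table; the lookup helper itself is hypothetical, not an Atomizer function:

```python
# Transcription of the extractor-mapping table:
# user need -> (extractor name, required parameters).
EXTRACTOR_MAP = {
    "displacement":      ("extract_displacement",         ["op2_file", "subcase"]),
    "von_mises_stress":  ("extract_solid_stress",         ["op2_file", "subcase", "element_type"]),
    "natural_frequency": ("extract_frequency",            ["op2_file", "subcase", "mode_number"]),
    "fem_mass":          ("extract_mass_from_bdf",        ["bdf_file"]),
    "cad_mass":          ("extract_mass_from_expression", ["prt_file", "expression_name"]),
}

def extractor_for(need: str) -> str:
    """Return the centralized extractor name for a result need."""
    return EXTRACTOR_MAP[need][0]
```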
### Step 5: Multi-Solution Detection

Check whether a multi-solution workflow is needed:

**Indicators**:
- Extracting both static results (stress, displacement) AND modal results (frequency)
- User mentions "static + modal analysis"
- Objectives/constraints require different solution types

**Action**:
- Set `solution_name=None` in `run_optimization.py` to solve all solutions
- Document the requirement in `NX_FILE_MODIFICATIONS_REQUIRED.md`
- Use the `SolveAllSolutions()` protocol (see [NX_MULTI_SOLUTION_PROTOCOL.md](../docs/NX_MULTI_SOLUTION_PROTOCOL.md))
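The first indicator can be checked mechanically from the chosen extractors. A sketch, assuming the static/modal grouping implied by the result types above:

```python
# Grouping assumed from the extractor table: stress/displacement come from
# the static solution, frequency from the modal solution.
STATIC_EXTRACTORS = {"extract_solid_stress", "extract_displacement"}
MODAL_EXTRACTORS = {"extract_frequency"}

def needs_multi_solution(extractors: list[str]) -> bool:
    """True when both static and modal results are requested."""
    used = set(extractors)
    return bool(used & STATIC_EXTRACTORS) and bool(used & MODAL_EXTRACTORS)
```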
## File Generation

### 1. optimization_config.json

```json
{
  "study_name": "{study_name}",
  "description": "{concise description}",
  "engineering_context": "{detailed real-world context}",

  "optimization_settings": {
    "protocol": "protocol_11_multi_objective",  // or protocol_10, etc.
    "n_trials": 30,
    "sampler": "NSGAIISampler",  // or "TPESampler"
    "pruner": null,
    "timeout_per_trial": 600
  },

  "design_variables": [
    {
      "parameter": "{nx_expression_name}",
      "bounds": [min, max],
      "description": "{what this controls}"
    }
  ],

  "objectives": [
    {
      "name": "{objective_name}",
      "goal": "minimize",  // or "maximize"
      "weight": 1.0,
      "description": "{what this measures}",
      "target": {target_value},
      "extraction": {
        "action": "extract_{type}",
        "domain": "result_extraction",
        "params": {
          "result_type": "{type}",
          "metric": "{specific_metric}"
        }
      }
    }
  ],

  "constraints": [
    {
      "name": "{constraint_name}",
      "type": "less_than",  // or "greater_than"
      "threshold": {value},
      "description": "{engineering justification}",
      "extraction": {
        "action": "extract_{type}",
        "domain": "result_extraction",
        "params": {
          "result_type": "{type}",
          "metric": "{specific_metric}"
        }
      }
    }
  ],

  "simulation": {
    "model_file": "{Model}.prt",
    "sim_file": "{Model}_sim1.sim",
    "fem_file": "{Model}_fem1.fem",
    "solver": "nastran",
    "analysis_types": ["static", "modal"]  // or just ["static"]
  },

  "reporting": {
    "generate_plots": true,
    "save_incremental": true,
    "llm_summary": false
  }
}
```
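Once the placeholders are filled in, the generated script reads this file with the standard `json` module. A minimal sketch with a made-up study name and design variable (both are illustrative, not from any real study):

```python
import json

# Inline stand-in for a filled-in optimization_config.json.
config_text = """
{
  "study_name": "bracket_mass_min",
  "design_variables": [
    {"parameter": "rib_thickness", "bounds": [2.0, 8.0],
     "description": "Stiffening rib thickness (mm)"}
  ]
}
"""

config = json.loads(config_text)  # in the real script: json.load(open(config_path))
for var in config["design_variables"]:
    low, high = var["bounds"]
    assert low < high, f"invalid bounds for {var['parameter']}"
```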
### 2. workflow_config.json

```json
{
  "workflow_id": "{study_name}_workflow",
  "description": "{workflow description}",
  "steps": []  // May be empty for now; used by the future intelligent workflow system
}
```
### 3. run_optimization.py

Generate a complete Python script based on the protocol:

**Key sections**:
- Import statements (centralized extractors, NXSolver, Optuna)
- Configuration loading
- Objective function with proper:
  - Design variable sampling
  - Simulation execution with multi-solution support
  - Result extraction using centralized extractors
  - Constraint checking
  - Return format (tuple for multi-objective, float for single-objective)
- Study creation with proper:
  - Directions for multi-objective (`['minimize', 'maximize']`)
  - Sampler selection (NSGAIISampler or TPESampler)
  - Storage location
- Results display and dashboard instructions
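The objective-function shape can be sketched like this. The NX solve and extraction steps are collapsed into a single `run_simulation` stub, and the parameter name and stress limit are made up, so only the structure (sampling, extraction, constraint check, tuple return) carries over to a real study:

```python
def make_objective(run_simulation):
    """Build a multi-objective, Optuna-style objective function.

    `run_simulation` stands in for the NX solve + extraction pipeline
    and must return a dict of extracted result values.
    """
    def objective(trial):
        # 1. Sample design variables within configured bounds.
        design_vars = {"rib_thickness": trial.suggest_float("rib_thickness", 2.0, 8.0)}

        # 2. Run the simulation and extract results (centralized extractors).
        results = run_simulation(design_vars)

        # 3. Check constraints; record feasibility on the trial.
        feasible = results["stress"] < 250.0  # hypothetical limit, MPa
        trial.set_user_attr("feasible", feasible)

        # 4. Multi-objective: return a tuple matching the study directions.
        return results["mass"], results["frequency"]

    return objective
```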
**IMPORTANT**: Always include the structured logging from Phase 1.3:
- Import: `from optimization_engine.logger import get_logger`
- Initialize in main(): `logger = get_logger("{study_name}", study_dir=results_dir)`
- Replace all print() calls with logger.info/warning/error
- Use the structured methods:
  - `logger.study_start(study_name, n_trials, sampler)`
  - `logger.trial_start(trial.number, design_vars)`
  - `logger.trial_complete(trial.number, objectives, constraints, feasible)`
  - `logger.trial_failed(trial.number, error)`
  - `logger.study_complete(study_name, n_trials, n_successful)`
- Error handling: `logger.error("message", exc_info=True)` for tracebacks
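For illustration only, here is roughly what that structured interface looks like when built on Python's standard `logging` module. This is NOT the real `optimization_engine.logger` implementation, just a sketch of the expected call pattern (two of the methods shown):

```python
import logging

class StudyLogger:
    """Sketch of the structured-logging interface described above,
    built on the stdlib logging module (not the real implementation)."""

    def __init__(self, study_name: str):
        self._log = logging.getLogger(study_name)

    def study_start(self, study_name, n_trials, sampler):
        # Structured key=value message instead of a bare print().
        self._log.info("study_start study=%s trials=%d sampler=%s",
                       study_name, n_trials, sampler)

    def trial_complete(self, number, objectives, constraints, feasible):
        self._log.info("trial_complete n=%d objectives=%s feasible=%s",
                       number, objectives, feasible)
```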
**Template**: Use [studies/drone_gimbal_arm_optimization/run_optimization.py](../studies/drone_gimbal_arm_optimization/run_optimization.py:1) as a reference
### 4. reset_study.py

A simple script that resets the study by deleting its Optuna database entry:

```python
"""Reset the {study_name} optimization study by deleting its database entry."""
import optuna
from pathlib import Path

study_dir = Path(__file__).parent
storage = f"sqlite:///{study_dir / '2_results' / 'study.db'}"
study_name = "{study_name}"

try:
    optuna.delete_study(study_name=study_name, storage=storage)
    print(f"[OK] Deleted study: {study_name}")
except KeyError:
    print(f"[WARNING] Study '{study_name}' not found (database may not exist)")
except Exception as e:
    print(f"[ERROR] {e}")
```
### 5. README.md

**CRITICAL: ALWAYS place README.md INSIDE the study directory at `studies/{study_name}/README.md`**

Never create study documentation at the project root. All study-specific documentation must be centralized within the study directory structure.

Comprehensive documentation including:
- Engineering scenario and context
- Problem statement with real-world constraints
- Multi-objective trade-offs (if applicable)
- Design variables and their effects
- Expected outcomes
- Study configuration details
- File structure explanation
- Running instructions
- Dashboard monitoring guide
- Results interpretation guide
- Comparison with other studies
- Technical notes

**Location**: `studies/{study_name}/README.md` (NOT at the project root)
**Template**: Use [studies/drone_gimbal_arm_optimization/README.md](../studies/drone_gimbal_arm_optimization/README.md:1) as a reference
### 6. NX_FILE_MODIFICATIONS_REQUIRED.md (if needed)

If a multi-solution workflow or specific NX setup is required:

```markdown
# NX File Modifications Required

Before running this optimization, you must modify the NX simulation files.

## Required Changes

### 1. Add Modal Analysis Solution (if needed)

Current: Only static analysis (SOL 101)
Required: Static + modal (SOL 101 + SOL 103)

Steps:
1. Open `{Model}_sim1.sim` in NX
2. Solution → Create → Modal Analysis
3. Set frequency extraction parameters
4. Save the simulation

### 2. Update Load Cases (if needed)

Current: [describe current loads]
Required: [describe required loads]

Steps: [specific instructions]

### 3. Verify Material Properties

Required: [material name and properties]

## Verification

After modifications:
1. Run the simulation manually in NX
2. Verify that OP2 files are generated
3. Check that solution_1.op2 and solution_2.op2 exist (if multi-solution)
```
## User Interaction Best Practices

### Ask Before Generating

Always confirm with the user:

1. "Here's what I understand about your optimization problem: [summary]. Is this correct?"
2. "I'll use Protocol {X} because [reasoning]. Does this sound right?"
3. "I'll create extractors for: [list]. Are these the results you need?"
4. "Should I generate the complete study structure now?"
### Provide Clear Next Steps

After generating files:

```
✓ Created study: studies/{study_name}/
✓ Generated optimization config
✓ Generated run_optimization.py with {protocol}
✓ Generated README.md with full documentation

Next Steps:
1. Place your NX files in studies/{study_name}/1_setup/model/
   - {Model}.prt
   - {Model}_sim1.sim
2. [If NX modifications are needed] Read NX_FILE_MODIFICATIONS_REQUIRED.md
3. Test with 3 trials: cd studies/{study_name} && python run_optimization.py --trials 3
4. Monitor in the dashboard: http://localhost:3003
5. Full run: python run_optimization.py --trials {n_trials}
```
### Handle Edge Cases

**User has incomplete information**:
- Suggest reasonable defaults based on similar studies
- Document assumptions clearly in the README
- Mark gaps as "REQUIRES USER INPUT" in generated files

**User wants custom extractors**:
- Explain the centralized extractor library
- If truly custom, guide them to create one in `optimization_engine/extractors/`
- Inherit from the `OP2Extractor` base class

**User is unsure about bounds**:
- Recommend conservative bounds based on engineering judgment
- Suggest an iterative approach: "Start with [bounds], then refine based on initial results"

**User doesn't have NX files yet**:
- Generate all Python/JSON files anyway
- Create a placeholder model directory
- Provide clear instructions for adding the NX files later
## Integration with Dashboard

Always mention the dashboard's capabilities:

**For Multi-Objective Studies**:
- "You'll see the Pareto front in real time on the dashboard"
- "Use the parallel coordinates plot to explore trade-offs"
- "Green lines = feasible, red lines = constraint violations"

**For Single-Objective Studies**:
- "Monitor convergence in real time"
- "See the best value improving over trials"
- "Check parameter space exploration"

**Dashboard Access**:
```bash
# Terminal 1: Backend
cd atomizer-dashboard/backend && python -m uvicorn api.main:app --reload

# Terminal 2: Frontend
cd atomizer-dashboard/frontend && npm run dev

# Browser: http://localhost:3003
```
## Common Patterns

### Pattern 1: Mass Minimization with Constraints

```
Objective: Minimize mass
Constraints: Stress < limit, Displacement < limit, Frequency > limit
Protocol: Protocol 10 (single-objective TPE)
Extractors: extract_mass_from_expression, extract_solid_stress,
            extract_displacement, extract_frequency
Multi-Solution: Yes (static + modal)
```

### Pattern 2: Mass vs Frequency Trade-off

```
Objectives: Minimize mass, Maximize frequency
Constraints: Stress < limit, Displacement < limit
Protocol: Protocol 11 (multi-objective NSGA-II)
Extractors: extract_mass_from_expression, extract_frequency,
            extract_solid_stress, extract_displacement
Multi-Solution: Yes (static + modal)
```

### Pattern 3: Stress Minimization

```
Objective: Minimize stress
Constraints: Displacement < limit
Protocol: Protocol 10 (single-objective TPE)
Extractors: extract_solid_stress, extract_displacement
Multi-Solution: No (static only)
```
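All three patterns evaluate constraints of the `less_than`/`greater_than` form defined in optimization_config.json. A generated script might check them like this (the helper names and threshold values are illustrative):

```python
def satisfies(value: float, constraint: dict) -> bool:
    """Evaluate one constraint spec from optimization_config.json."""
    if constraint["type"] == "less_than":
        return value < constraint["threshold"]
    if constraint["type"] == "greater_than":
        return value > constraint["threshold"]
    raise ValueError(f"unknown constraint type: {constraint['type']}")

def all_feasible(results: dict, constraints: list[dict]) -> bool:
    """True when every named constraint is met by the extracted results."""
    return all(satisfies(results[c["name"]], c) for c in constraints)
```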
## Validation Integration

After generating files, always validate the study setup using the validator system:

### Config Validation

```python
from optimization_engine.validators import validate_config_file

result = validate_config_file("studies/{study_name}/1_setup/optimization_config.json")
if result.is_valid:
    print("[OK] Configuration is valid!")
else:
    for error in result.errors:
        print(f"[ERROR] {error}")
```
### Model Validation

```python
from optimization_engine.validators import validate_study_model

result = validate_study_model("{study_name}")
if result.is_valid:
    print("[OK] Model files valid!")
    print(f"  Part: {result.prt_file.name}")
    print(f"  Simulation: {result.sim_file.name}")
else:
    for error in result.errors:
        print(f"[ERROR] {error}")
```
### Complete Study Validation

```python
from optimization_engine.validators import validate_study

result = validate_study("{study_name}")
print(result)  # Shows the complete health check
```
### Validation Checklist for Generated Studies

Before declaring a study complete, ensure:

1. **Config Validation Passes**:
   - All design variables have valid bounds (min < max)
   - All objectives have proper extraction methods
   - All constraints have thresholds defined
   - The protocol matches the objective count

2. **Model Files Ready** (user must provide):
   - Part file (.prt) exists in the model directory
   - Simulation file (.sim) exists
   - FEM file (.fem) will be auto-generated

3. **Run Script Works**:
   - Test with `python run_optimization.py --trials 1`
   - Verify imports resolve correctly
   - Verify the NX solver can be reached
### Automated Pre-Flight Check

Add this to run_optimization.py:

```python
import sys  # needed for sys.exit below

def preflight_check():
    """Validate the study setup before running."""
    from optimization_engine.validators import validate_study

    result = validate_study(STUDY_NAME)

    if not result.is_ready_to_run:
        print("[X] Study validation failed!")
        print(result)
        sys.exit(1)

    print("[OK] Pre-flight check passed!")
    return True

if __name__ == "__main__":
    preflight_check()
    # ... rest of optimization
```
## Critical Reminders

### Multi-Objective Return Format

```python
# ✅ CORRECT: Declare semantic directions and return raw values
study = optuna.create_study(
    directions=['minimize', 'maximize'],  # semantic directions
    sampler=NSGAIISampler()
)

def objective(trial):
    return (mass, frequency)  # return positive values as-is
```

```python
# ❌ WRONG: Negating a value the study already maximizes
return (mass, -frequency)  # creates a degenerate Pareto front
```
### Multi-Solution NX Protocol

```python
# ✅ CORRECT: Solve all solutions
result = nx_solver.run_simulation(
    sim_file=sim_file,
    working_dir=model_dir,
    expression_updates=design_vars,
    solution_name=None  # None = solve ALL solutions
)
```

```python
# ❌ WRONG: Only solves the first solution
solution_name="Solution 1"  # multi-solution workflows will fail
```
### Extractor Selection

Always use the centralized extractors from `optimization_engine/extractors/`:
- Standardized error handling
- Consistent return formats
- Well tested and documented
- No code duplication
## Output Format

After completing study creation, provide:

1. **Summary Table**:
```
Study Created: {study_name}
Protocol: {protocol}
Objectives: {list}
Constraints: {list}
Design Variables: {list}
Multi-Solution: {Yes/No}
```

2. **File Checklist**:
```
✓ studies/{study_name}/1_setup/optimization_config.json
✓ studies/{study_name}/1_setup/workflow_config.json
✓ studies/{study_name}/run_optimization.py
✓ studies/{study_name}/reset_study.py
✓ studies/{study_name}/README.md
[✓] studies/{study_name}/NX_FILE_MODIFICATIONS_REQUIRED.md (if needed)
```

3. **Next Steps** (as shown earlier)
## Remember

- Be conversational and helpful
- Ask clarifying questions early
- Confirm understanding before generating
- Provide context for technical decisions
- Make next steps crystal clear
- Anticipate common mistakes
- Reference existing studies as examples
- Always mentally test-run your generated code

The goal is for the user to have a COMPLETE, WORKING study that they can run immediately after placing their NX files.