Protocol Execution Framework (PEF)
Version: 1.0
Purpose: Meta-protocol defining how LLM sessions execute Atomizer protocols. The "protocol for using protocols."
Core Execution Pattern
For ANY task, follow this 6-step pattern:
┌─────────────────────────────────────────────────────────────┐
│ 1. ANNOUNCE │
│ State what you're about to do in plain language │
│ "I'll create an optimization study for your bracket..." │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 2. VALIDATE │
│ Check prerequisites are met │
│ - Required files exist? │
│ - Environment ready? │
│ - User has confirmed understanding? │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 3. EXECUTE │
│ Perform the action following protocol steps │
│ - Load required context per 02_CONTEXT_LOADER.md │
│ - Follow protocol step-by-step │
│ - Handle errors with OP_06 patterns │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 4. VERIFY │
│ Confirm success │
│ - Files created correctly? │
│ - No errors in output? │
│ - Results make sense? │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 5. REPORT │
│ Summarize what was done │
│ - List files created/modified │
│ - Show key results │
│ - Note any warnings │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 6. SUGGEST │
│ Offer logical next steps │
│ - What should user do next? │
│ - Related operations available? │
│ - Dashboard URL if relevant? │
└─────────────────────────────────────────────────────────────┘
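The six-step pattern above can be sketched as a driver loop. This is a minimal illustration, not part of the framework itself: the step names come from the diagram, while `run_task` and the stand-in actions are hypothetical.

```python
# Minimal sketch of the six-step execution pattern as a driver loop.
# Step names come from the diagram; the callables are hypothetical stand-ins.

STEPS = ["ANNOUNCE", "VALIDATE", "EXECUTE", "VERIFY", "REPORT", "SUGGEST"]

def run_task(actions):
    """Run the six steps in order; stop at the first failure."""
    log = []
    for name in STEPS:
        ok, message = actions[name]()
        log.append((name, ok, message))
        if not ok:
            break  # hand off to the Error Recovery Protocol
    return log

# Hypothetical actions for a trivially successful task:
actions = {name: (lambda n=name: (True, f"{n} done")) for name in STEPS}
trace = run_task(actions)
```

The point of the loop is the early exit: a failed VALIDATE step never reaches EXECUTE, which is what the checkpoint sections below rely on.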
Task Classification Rules
Before executing, classify the user's request:
Step 1: Identify Task Category
TASK_CATEGORIES = {
    "CREATE": {
        "keywords": ["new", "create", "set up", "optimize", "study", "build"],
        "protocol": "OP_01_CREATE_STUDY",
        "privilege": "user"
    },
    "RUN": {
        "keywords": ["start", "run", "execute", "begin", "launch"],
        "protocol": "OP_02_RUN_OPTIMIZATION",
        "privilege": "user"
    },
    "MONITOR": {
        "keywords": ["status", "progress", "check", "how many", "trials"],
        "protocol": "OP_03_MONITOR_PROGRESS",
        "privilege": "user"
    },
    "ANALYZE": {
        "keywords": ["results", "best", "compare", "pareto", "report"],
        "protocol": "OP_04_ANALYZE_RESULTS",
        "privilege": "user"
    },
    "EXPORT": {
        "keywords": ["export", "training data", "neural data"],
        "protocol": "OP_05_EXPORT_TRAINING_DATA",
        "privilege": "user"
    },
    "DEBUG": {
        "keywords": ["error", "failed", "not working", "crashed", "help"],
        "protocol": "OP_06_TROUBLESHOOT",
        "privilege": "user"
    },
    "EXTEND": {
        "keywords": ["add extractor", "create hook", "new protocol"],
        "protocol": "EXT_*",
        "privilege": "power_user+"
    }
}
Step 2: Check Privilege
def check_privilege(task_category, user_role):
    # "power_user+" means power_user or above, so strip the trailing "+"
    required = TASK_CATEGORIES[task_category]["privilege"].rstrip("+")
    privilege_hierarchy = ["user", "power_user", "admin"]
    if privilege_hierarchy.index(user_role) >= privilege_hierarchy.index(required):
        return True
    else:
        # Inform user they need higher privilege
        return False
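Steps 1 and 2 can be combined into a single dispatch sketch. This is illustrative only: a trimmed copy of TASK_CATEGORIES is inlined so the example is self-contained, and the "power_user+" privilege from the full table is written here as plain "power_user".

```python
# Keyword classification (Step 1) plus privilege gate (Step 2), illustrative.
# Trimmed, self-contained copy of the category table above.
TASK_CATEGORIES = {
    "CREATE": {"keywords": ["new", "create", "set up", "optimize", "study", "build"],
               "protocol": "OP_01_CREATE_STUDY", "privilege": "user"},
    "EXTEND": {"keywords": ["add extractor", "create hook", "new protocol"],
               "protocol": "EXT_*", "privilege": "power_user"},
}
PRIVILEGE_HIERARCHY = ["user", "power_user", "admin"]

def classify(request: str):
    """Return the first category whose keyword list matches the request, else None."""
    text = request.lower()
    for category, spec in TASK_CATEGORIES.items():
        if any(kw in text for kw in spec["keywords"]):
            return category
    return None

def check_privilege(category: str, user_role: str) -> bool:
    """True if the user's role meets the category's required privilege."""
    required = TASK_CATEGORIES[category]["privilege"]
    return PRIVILEGE_HIERARCHY.index(user_role) >= PRIVILEGE_HIERARCHY.index(required)
```

For example, `classify("please create a new bracket study")` yields `"CREATE"`, which any role may run, while `"EXTEND"` is rejected for a plain `"user"`.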
Step 3: Load Context
Follow rules in 02_CONTEXT_LOADER.md to load appropriate documentation.
Validation Checkpoints
Before executing any protocol step, validate:
Pre-Study Creation
- Model files exist (`.prt`, `.sim`)
- Working directory is writable
- User has described objectives clearly
- Conda environment is `atomizer`
Pre-Run
- `optimization_config.json` exists and is valid
- `run_optimization.py` exists
- Model files copied to `1_setup/model/`
- No conflicting process running
Pre-Analysis
- `study.db` exists with completed trials
- No optimization currently running
Pre-Extension (power_user+)
- User has confirmed their role
- Extension doesn't duplicate existing functionality
- Tests can be written for new code
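The checkpoints above can be expressed as small validator functions. Below is a sketch of the Pre-Run check; the file and directory names follow the study layout used elsewhere in this document, and the return convention (empty list = ready) is an assumption of this example.

```python
from pathlib import Path

# Pre-Run checkpoint as a callable validator (sketch).
def validate_pre_run(study_dir: str) -> list:
    root = Path(study_dir)
    required = {
        "optimization_config.json missing": root / "1_setup" / "optimization_config.json",
        "run_optimization.py missing": root / "run_optimization.py",
        "model files not in 1_setup/model/": root / "1_setup" / "model",
    }
    # Return the failure messages; an empty list means the run may proceed.
    return [msg for msg, path in required.items() if not path.exists()]
```

Running this before OP_02 turns the checklist into an enforced gate rather than a reminder.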
Error Recovery Protocol
When something fails during execution:
Step 1: Identify Failure Point
Which step failed?
├─ File creation? → Check permissions, disk space
├─ NX solve? → Check NX log, timeout, expressions
├─ Extraction? → Check OP2 exists, subcase correct
├─ Database? → Check SQLite file, trial count
└─ Unknown? → Capture full error, check OP_06
Step 2: Attempt Recovery
RECOVERY_ACTIONS = {
    "file_permission": "Check directory permissions, try different location",
    "nx_timeout": "Increase timeout in config, simplify model",
    "nx_expression_error": "Verify expression names match NX model",
    "op2_missing": "Check NX solve completed successfully",
    "extractor_error": "Verify correct subcase and element types",
    "database_locked": "Wait for other process to finish, or kill stale process",
}
Step 3: Escalate if Needed
If recovery fails:
- Log the error with full context
- Inform user of the issue
- Suggest manual intervention if appropriate
- Offer to retry after user fixes underlying issue
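Steps 2 and 3 can be tied together in one lookup routine. This is illustrative: the keys mirror RECOVERY_ACTIONS above (a subset is inlined here for self-containment), and the escalation string is a stand-in for the real Step 3 actions.

```python
# Recovery lookup (Step 2) with escalation for unknown failures (Step 3).
RECOVERY_ACTIONS = {
    "nx_timeout": "Increase timeout in config, simplify model",
    "op2_missing": "Check NX solve completed successfully",
    "database_locked": "Wait for other process to finish, or kill stale process",
}

def recover(failure_key: str) -> str:
    """Return a recovery action, or an escalation message for unknown failures."""
    action = RECOVERY_ACTIONS.get(failure_key)
    if action is None:
        # Unknown failure: log, inform the user, point at the troubleshooting protocol.
        return "Escalate: capture the full error and consult OP_06_TROUBLESHOOT.md"
    return action
```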
Protocol Combination Rules
Some protocols work together; others conflict:
Valid Combinations
OP_01 + SYS_10 # Create study with IMSO
OP_01 + SYS_11 # Create multi-objective study
OP_01 + SYS_14 # Create study with neural acceleration
OP_02 + SYS_13 # Run with dashboard tracking
OP_04 + SYS_11 # Analyze multi-objective results
Invalid Combinations
SYS_10 + SYS_11 # Single-obj IMSO with multi-obj NSGA (pick one)
TPESampler + SYS_11 # TPE is single-objective; use NSGAIISampler
EXT_* without privilege # Extensions require power_user or admin
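The combination rules can be enforced mechanically. A sketch: pairs are unordered, so frozensets are used, and the invalid pairs are copied from the table above (the privilege rule for EXT_* is handled separately by `check_privilege`).

```python
# Reject protocol sets that contain an invalid pair (order-independent).
INVALID_PAIRS = [
    frozenset({"SYS_10", "SYS_11"}),      # single-obj IMSO vs multi-obj NSGA
    frozenset({"TPESampler", "SYS_11"}),  # TPE is single-objective
]

def combination_valid(protocols) -> bool:
    """True unless any invalid pair is fully contained in the loaded set."""
    loaded = set(protocols)
    return not any(pair <= loaded for pair in INVALID_PAIRS)
```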
Automatic Protocol Inference
If objectives.length == 1:
→ Use Protocol 10 (single-objective)
→ Sampler: TPE, CMA-ES, or GP
If objectives.length > 1:
→ Use Protocol 11 (multi-objective)
→ Sampler: NSGA-II (mandatory)
If n_trials > 50 OR surrogate_settings present:
→ Add Protocol 14 (neural acceleration)
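The inference rules translate directly into code. This sketch assumes the config keys `objectives`, `n_trials`, and `surrogate_settings`, which are field names taken from this document's conventions rather than a confirmed schema.

```python
# Automatic protocol inference per the rules above (sketch).
def infer_protocols(config: dict) -> dict:
    """Map a study config to protocols and a sampler."""
    if len(config.get("objectives", [])) > 1:
        result = {"protocols": ["SYS_11"], "sampler": "NSGAIISampler"}  # mandatory for multi-obj
    else:
        result = {"protocols": ["SYS_10"], "sampler": "TPESampler"}  # or CMA-ES / GP
    if config.get("n_trials", 0) > 50 or "surrogate_settings" in config:
        result["protocols"].append("SYS_14")  # neural acceleration
    return result
```

For instance, a three-objective study with 100 trials resolves to SYS_11 + SYS_14 with NSGA-II.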
Execution Logging
During execution, maintain awareness of:
Session State
session_state = {
    "current_study": None,     # Active study name
    "loaded_protocols": [],    # Protocols currently loaded
    "completed_steps": [],     # Steps completed this session
    "pending_actions": [],     # Actions waiting for user
    "last_error": None,        # Most recent error if any
}
User Communication
- Always explain what you're doing
- Show progress for long operations
- Warn before destructive actions
- Confirm before expensive operations (many trials)
Confirmation Requirements
Some actions require explicit user confirmation:
Always Confirm
- Deleting files or studies
- Overwriting existing study
- Running >100 trials
- Modifying master NX files (FORBIDDEN - but confirm user understands)
- Creating extension (power_user+)
Confirm If Uncertain
- Ambiguous objective (minimize or maximize?)
- Multiple possible extractors
- Complex multi-solution setup
No Confirmation Needed
- Creating new study in empty directory
- Running validation checks
- Reading/analyzing results
- Checking status
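The confirmation rules can be sketched as a simple gate. The action labels here are hypothetical names for the bullets above; only the 100-trial threshold is taken verbatim from "Running >100 trials".

```python
# Confirmation gate (sketch): action labels are hypothetical stand-ins
# for the "Always Confirm" bullets above.
ALWAYS_CONFIRM = {"delete_study", "overwrite_study", "modify_master_nx", "create_extension"}

def needs_confirmation(action: str, n_trials: int = 0) -> bool:
    """True if the action always requires confirmation, or the trial count exceeds 100."""
    return action in ALWAYS_CONFIRM or n_trials > 100
```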
Output Format Standards
When reporting results:
Study Creation Output
Created study: {study_name}
Files generated:
- studies/{study_name}/1_setup/optimization_config.json
- studies/{study_name}/run_optimization.py
- studies/{study_name}/README.md
- studies/{study_name}/STUDY_REPORT.md
Configuration:
- Design variables: {count}
- Objectives: {list}
- Constraints: {list}
- Protocol: {protocol}
- Trials: {n_trials}
Next steps:
1. Copy your NX files to studies/{study_name}/1_setup/model/
2. Run: conda activate atomizer && python run_optimization.py
3. Monitor: http://localhost:3000
Run Status Output
Study: {study_name}
Status: {running|completed|failed}
Trials: {completed}/{total}
Best value: {value} ({objective_name})
Elapsed: {time}
Dashboard: http://localhost:3000
Error Output
Error: {error_type}
Message: {error_message}
Location: {file}:{line}
Diagnosis:
{explanation}
Recovery:
{steps to fix}
Reference: OP_06_TROUBLESHOOT.md
Quality Checklist
Before considering any task complete:
For Study Creation
- `optimization_config.json` validates successfully
- `run_optimization.py` has no syntax errors
- `README.md` has all 11 required sections
- `STUDY_REPORT.md` template created
- No code duplication (used extractors from library)
For Execution
- Optimization started without errors
- Dashboard shows real-time updates (if enabled)
- Trials are progressing
For Analysis
- Best result(s) identified
- Constraints satisfied
- Report generated if requested
For Extensions
- New code added to correct location
- `__init__.py` updated with exports
- Documentation updated
- Tests written (or noted as TODO)