Neural Acceleration (MLP Surrogate):
- Add run_nn_optimization.py with hybrid FEA/NN workflow
- MLP architecture: 4-layer (64->128->128->64) with BatchNorm/Dropout
- Three workflow modes:
  - --all: Sequential export->train->optimize->validate
  - --hybrid-loop: Iterative Train->NN->Validate->Retrain cycle
  - --turbo: Aggressive single-best validation (RECOMMENDED)
- Turbo mode: 5000 NN trials + 50 FEA validations in ~12 minutes
- Separate nn_study.db to avoid overloading dashboard

Performance Results (bracket_pareto_3obj study):
- NN prediction errors: mass 1-5%, stress 1-4%, stiffness 5-15%
- Found minimum-mass designs at boundary (angle ~30 deg, thickness ~30 mm)
- 100x speedup vs pure FEA exploration

Protocol Operating System:
- Add .claude/skills/ with Bootstrap, Cheatsheet, Context Loader
- Add docs/protocols/ with operations (OP_01-06) and system (SYS_10-14)
- Update SYS_14_NEURAL_ACCELERATION.md with MLP Turbo Mode docs

NX Automation:
- Add optimization_engine/hooks/ for NX CAD/CAE automation
- Add study_wizard.py for guided study creation
- Fix FEM mesh update: load idealized part before UpdateFemodel()

New Study:
- bracket_pareto_3obj: 3-objective Pareto (mass, stress, stiffness)
- 167 FEA trials + 5000 NN trials completed
- Demonstrates full hybrid workflow

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
# Atomizer LLM Bootstrap
**Version:** 1.0
**Purpose:** First file any LLM session reads. Provides instant orientation and task routing.
## Quick Orientation (30 Seconds)
**Atomizer** = an LLM-first FEA optimization framework using NX Nastran + Optuna + neural networks.

**Your Role:** Help users set up, run, and analyze structural optimization studies through conversation.

**Core Philosophy:** "Talk, don't click." Users describe what they want; you configure and execute.
## Task Classification Tree
When a user request arrives, classify it:
```
User Request
│
├─► CREATE something?
│   ├─ "new study", "set up", "create", "optimize this"
│   └─► Load: OP_01_CREATE_STUDY.md + core/study-creation-core.md
│
├─► RUN something?
│   ├─ "start", "run", "execute", "begin optimization"
│   └─► Load: OP_02_RUN_OPTIMIZATION.md
│
├─► CHECK status?
│   ├─ "status", "progress", "how many trials", "what's happening"
│   └─► Load: OP_03_MONITOR_PROGRESS.md
│
├─► ANALYZE results?
│   ├─ "results", "best design", "compare", "pareto"
│   └─► Load: OP_04_ANALYZE_RESULTS.md
│
├─► DEBUG/FIX error?
│   ├─ "error", "failed", "not working", "crashed"
│   └─► Load: OP_06_TROUBLESHOOT.md
│
├─► CONFIGURE settings?
│   ├─ "change", "modify", "settings", "parameters"
│   └─► Load relevant SYS_* protocol
│
├─► EXTEND functionality?
│   ├─ "add extractor", "new hook", "create protocol"
│   └─► Check privilege, then load EXT_* protocol
│
└─► EXPLAIN/LEARN?
    ├─ "what is", "how does", "explain"
    └─► Load relevant SYS_* protocol for reference
```
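The tree above can be sketched as a simple keyword router. The intent names and protocol filenames mirror the tree; the function itself is illustrative, not part of the Atomizer codebase.

```python
# Hypothetical keyword-based task classifier mirroring the tree above.
# First matching rule wins; anything unmatched falls through to EXPLAIN.

INTENT_RULES = [
    ("CREATE",  ["new study", "set up", "create", "optimize"],   "OP_01_CREATE_STUDY.md"),
    ("RUN",     ["start", "run", "execute", "begin"],            "OP_02_RUN_OPTIMIZATION.md"),
    ("CHECK",   ["status", "progress", "how many trials"],       "OP_03_MONITOR_PROGRESS.md"),
    ("ANALYZE", ["results", "best design", "compare", "pareto"], "OP_04_ANALYZE_RESULTS.md"),
    ("DEBUG",   ["error", "failed", "not working", "crashed"],   "OP_06_TROUBLESHOOT.md"),
]

def classify(request: str) -> tuple[str, str]:
    """Return (intent, protocol file) for a user request; fall back to EXPLAIN."""
    text = request.lower()
    for intent, keywords, protocol in INTENT_RULES:
        if any(kw in text for kw in keywords):
            return intent, protocol
    return "EXPLAIN", "relevant SYS_* protocol"

intent, protocol = classify("Please set up a new study for my bracket")
# intent == "CREATE", protocol == "OP_01_CREATE_STUDY.md"
```

In practice the LLM does this classification by reading, not by substring matching; the sketch only makes the routing explicit.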
## Protocol Routing Table
| User Intent | Keywords | Protocol | Skill to Load | Privilege |
|---|---|---|---|---|
| Create study | "new", "set up", "create", "optimize" | OP_01 | create-study-wizard.md | user |
| Run optimization | "start", "run", "execute", "begin" | OP_02 | - | user |
| Monitor progress | "status", "progress", "trials", "check" | OP_03 | - | user |
| Analyze results | "results", "best", "compare", "pareto" | OP_04 | - | user |
| Export training data | "export", "training data", "neural" | OP_05 | modules/neural-acceleration.md | user |
| Debug issues | "error", "failed", "not working", "help" | OP_06 | - | user |
| Understand IMSO | "protocol 10", "IMSO", "adaptive" | SYS_10 | - | user |
| Multi-objective | "pareto", "NSGA", "multi-objective" | SYS_11 | - | user |
| Extractors | "extractor", "displacement", "stress" | SYS_12 | modules/extractors-catalog.md | user |
| Dashboard | "dashboard", "visualization", "real-time" | SYS_13 | - | user |
| Neural surrogates | "neural", "surrogate", "NN", "acceleration" | SYS_14 | modules/neural-acceleration.md | user |
| Add extractor | "create extractor", "new physics" | EXT_01 | - | power_user |
| Add hook | "create hook", "lifecycle", "callback" | EXT_02 | - | power_user |
| Add protocol | "create protocol", "new protocol" | EXT_03 | - | admin |
| Add skill | "create skill", "new skill" | EXT_04 | - | admin |
## Role Detection
Determine the user's privilege level:
| Role | How to Detect | Can Do | Cannot Do |
|---|---|---|---|
| user | Default for all sessions | Run studies, monitor, analyze, configure | Create extractors, modify protocols |
| power_user | User states they're a developer, or session context indicates it | Create extractors, add hooks | Create protocols, modify skills |
| admin | Explicit declaration, admin config present | Full access | - |
**Default:** Assume `user` unless explicitly told otherwise.
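A minimal privilege gate following the routing table above could look like this. The function and constant names are assumptions for illustration, not the framework's actual API.

```python
# Illustrative privilege gate: map protocol prefixes (from the routing table)
# to the minimum role allowed to run them.

PRIVILEGE_RANK = {"user": 0, "power_user": 1, "admin": 2}

MIN_ROLE = {
    "OP_": "user", "SYS_": "user",
    "EXT_01": "power_user", "EXT_02": "power_user",
    "EXT_03": "admin", "EXT_04": "admin",
}

def can_run(role: str, protocol: str) -> bool:
    """True if `role` meets the minimum privilege for `protocol`."""
    prefix = next((p for p in MIN_ROLE if protocol.startswith(p)), None)
    required = MIN_ROLE.get(prefix, "user")  # default: plain-user scope
    return PRIVILEGE_RANK[role] >= PRIVILEGE_RANK[required]
```

For example, `can_run("user", "EXT_03_CREATE_PROTOCOL.md")` is `False`, matching the table.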
## Context Loading Rules
After classifying the task, load context in this order:
### 1. Always Loaded (via CLAUDE.md)
- This file (00_BOOTSTRAP.md)
- Python environment rules
- Code reuse protocol
### 2. Load Per Task Type
See 02_CONTEXT_LOADER.md for complete loading rules.
**Quick Reference:**

```
CREATE_STUDY     → create-study-wizard.md (PRIMARY)
                 → Use: from optimization_engine.study_wizard import StudyWizard, create_study
                 → modules/extractors-catalog.md (if user asks about extractors)
                 → modules/zernike-optimization.md (if telescope/mirror)
                 → modules/neural-acceleration.md (if >50 trials)

RUN_OPTIMIZATION → OP_02_RUN_OPTIMIZATION.md
                 → SYS_10_IMSO.md (if adaptive)
                 → SYS_13_DASHBOARD_TRACKING.md (if monitoring)

DEBUG            → OP_06_TROUBLESHOOT.md
                 → Relevant SYS_* based on error type
```
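The conditional rules above can be expressed as a small loader function. File names come from the Quick Reference; the function signature and flag names are hypothetical.

```python
# Sketch of the per-task context loading rules. `flags` stands in for
# whatever signals the session has gathered (study type, trial count, etc.).

def contexts_for(task: str, flags: set[str]) -> list[str]:
    """Return the context files to load for a classified task."""
    if task == "CREATE_STUDY":
        files = ["create-study-wizard.md"]  # PRIMARY
        if "extractors" in flags:
            files.append("modules/extractors-catalog.md")
        if "telescope" in flags or "mirror" in flags:
            files.append("modules/zernike-optimization.md")
        if "many_trials" in flags:  # >50 trials planned
            files.append("modules/neural-acceleration.md")
        return files
    if task == "RUN_OPTIMIZATION":
        files = ["OP_02_RUN_OPTIMIZATION.md"]
        if "adaptive" in flags:
            files.append("SYS_10_IMSO.md")
        if "monitoring" in flags:
            files.append("SYS_13_DASHBOARD_TRACKING.md")
        return files
    return ["OP_06_TROUBLESHOOT.md"]  # DEBUG fallback
```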
## Execution Framework
For ANY task, follow this pattern:
1. ANNOUNCE → State what you're about to do
2. VALIDATE → Check prerequisites are met
3. EXECUTE → Perform the action
4. VERIFY → Confirm success
5. REPORT → Summarize what was done
6. SUGGEST → Offer logical next steps
See PROTOCOL_EXECUTION.md for detailed execution rules.
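The six steps can be sketched as a wrapper. The step names come from the list above; this helper is illustrative and is not the API defined in PROTOCOL_EXECUTION.md.

```python
# Minimal sketch of the ANNOUNCE→VALIDATE→EXECUTE→VERIFY→REPORT→SUGGEST
# pattern as a reusable wrapper around any task.

def run_task(name, validate, action, verify):
    print(f"ANNOUNCE: {name}")                  # 1. state intent
    if not validate():                          # 2. check prerequisites
        raise RuntimeError(f"VALIDATE failed: {name}")
    result = action()                           # 3. EXECUTE the action
    if not verify(result):                      # 4. VERIFY success
        raise RuntimeError(f"VERIFY failed: {name}")
    print(f"REPORT: {name} -> {result}")        # 5. summarize
    return result                               # 6. caller SUGGESTs next steps

run_task(
    "dry-run demo",
    validate=lambda: True,
    action=lambda: 42,
    verify=lambda r: r == 42,
)
```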
## Emergency Quick Paths
### "I just want to run an optimization"
- Do you have a `.prt` and `.sim` file? → Yes: OP_01 → OP_02
- Getting errors? → OP_06
- Want to see progress? → OP_03
### "Something broke"
- Read the error message
- Load OP_06_TROUBLESHOOT.md
- Follow diagnostic flowchart
### "What did my optimization find?"
- Load OP_04_ANALYZE_RESULTS.md
- Query the study database
- Generate report
## Protocol Directory Map
```
docs/protocols/
├── operations/          # Layer 2: How-to guides
│   ├── OP_01_CREATE_STUDY.md
│   ├── OP_02_RUN_OPTIMIZATION.md
│   ├── OP_03_MONITOR_PROGRESS.md
│   ├── OP_04_ANALYZE_RESULTS.md
│   ├── OP_05_EXPORT_TRAINING_DATA.md
│   └── OP_06_TROUBLESHOOT.md
│
├── system/              # Layer 3: Core specifications
│   ├── SYS_10_IMSO.md
│   ├── SYS_11_MULTI_OBJECTIVE.md
│   ├── SYS_12_EXTRACTOR_LIBRARY.md
│   ├── SYS_13_DASHBOARD_TRACKING.md
│   └── SYS_14_NEURAL_ACCELERATION.md
│
└── extensions/          # Layer 4: Extensibility guides
    ├── EXT_01_CREATE_EXTRACTOR.md
    ├── EXT_02_CREATE_HOOK.md
    ├── EXT_03_CREATE_PROTOCOL.md
    ├── EXT_04_CREATE_SKILL.md
    └── templates/
```
## Key Constraints (Always Apply)
- **Python environment:** Always use `conda activate atomizer`
- **Never modify master files:** Copy NX files to the study working directory first
- **Code reuse:** Check `optimization_engine/extractors/` before writing new extraction code
- **Validation:** Always validate the config before running an optimization
- **Documentation:** Every study needs a README.md and STUDY_REPORT.md
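The first two constraints can be enforced with a small pre-flight helper. This function and its name are hypothetical, written for illustration under the assumption that the active conda env is exposed via `CONDA_DEFAULT_ENV`.

```python
# Hypothetical pre-flight check: confirm the conda env, then copy the
# master NX part into a study working directory instead of editing it.
import os
import shutil
from pathlib import Path

def preflight(master_prt: str, workdir: str) -> Path:
    """Verify the environment and return a working copy of the master file."""
    env = os.environ.get("CONDA_DEFAULT_ENV")
    if env != "atomizer":
        raise RuntimeError(f"run 'conda activate atomizer' first (got {env!r})")
    dest = Path(workdir)
    dest.mkdir(parents=True, exist_ok=True)
    # Never touch the master file: all downstream steps use this copy.
    return Path(shutil.copy2(master_prt, dest))
```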
## Next Steps After Bootstrap
- If you know the task type → go to the relevant OP_* or SYS_* protocol
- If unclear → ask the user a clarifying question
- If the task is complex → read `01_CHEATSHEET.md` for a quick reference
- If you need detailed loading rules → read `02_CONTEXT_LOADER.md`