Phase 1 - Session Bootstrap:
- Add .claude/ATOMIZER_CONTEXT.md as single entry point for new sessions
- Add study state detection and task routing

Phase 2 - Code Deduplication:
- Add optimization_engine/base_runner.py (ConfigDrivenRunner)
- Add optimization_engine/generic_surrogate.py (ConfigDrivenSurrogate)
- Add optimization_engine/study_state.py for study detection
- Add optimization_engine/templates/ with registry and templates
- Studies now require ~50 lines instead of ~300

Phase 3 - Skill Consolidation:
- Add YAML frontmatter metadata to all skills (versioning, dependencies)
- Consolidate create-study.md into core/study-creation-core.md
- Update 00_BOOTSTRAP.md, 01_CHEATSHEET.md, 02_CONTEXT_LOADER.md

Phase 4 - Self-Expanding Knowledge:
- Add optimization_engine/auto_doc.py for auto-generating documentation
- Generate docs/generated/EXTRACTORS.md (27 extractors documented)
- Generate docs/generated/TEMPLATES.md (6 templates)
- Generate docs/generated/EXTRACTOR_CHEATSHEET.md

Phase 5 - Subagent Implementation:
- Add .claude/commands/study-builder.md (create studies)
- Add .claude/commands/nx-expert.md (NX Open API)
- Add .claude/commands/protocol-auditor.md (config validation)
- Add .claude/commands/results-analyzer.md (results analysis)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
| skill_id | version | last_updated | type | code_dependencies | requires_skills |
|---|---|---|---|---|---|
| SKILL_000 | 2.0 | 2025-12-07 | bootstrap | | |
# Atomizer LLM Bootstrap

**Version:** 2.0 | **Updated:** 2025-12-07
**Purpose:** First file any LLM session reads. Provides instant orientation and task routing.
## Quick Orientation (30 Seconds)

**Atomizer** = LLM-first FEA optimization framework using NX Nastran + Optuna + neural networks.

**Your Role:** Help users set up, run, and analyze structural optimization studies through conversation.

**Core Philosophy:** "Talk, don't click." Users describe what they want; you configure and execute.
## Task Classification Tree

When a user request arrives, classify it:
```
User Request
│
├─► CREATE something?
│     ├─ "new study", "set up", "create", "optimize this"
│     └─► Load: OP_01_CREATE_STUDY.md + core/study-creation-core.md
│
├─► RUN something?
│     ├─ "start", "run", "execute", "begin optimization"
│     └─► Load: OP_02_RUN_OPTIMIZATION.md
│
├─► CHECK status?
│     ├─ "status", "progress", "how many trials", "what's happening"
│     └─► Load: OP_03_MONITOR_PROGRESS.md
│
├─► ANALYZE results?
│     ├─ "results", "best design", "compare", "pareto"
│     └─► Load: OP_04_ANALYZE_RESULTS.md
│
├─► DEBUG/FIX error?
│     ├─ "error", "failed", "not working", "crashed"
│     └─► Load: OP_06_TROUBLESHOOT.md
│
├─► CONFIGURE settings?
│     ├─ "change", "modify", "settings", "parameters"
│     └─► Load relevant SYS_* protocol
│
├─► EXTEND functionality?
│     ├─ "add extractor", "new hook", "create protocol"
│     └─► Check privilege, then load EXT_* protocol
│
└─► EXPLAIN/LEARN?
      ├─ "what is", "how does", "explain"
      └─► Load relevant SYS_* protocol for reference
```
## Protocol Routing Table
| User Intent | Keywords | Protocol | Skill to Load | Privilege |
|---|---|---|---|---|
| Create study | "new", "set up", "create", "optimize" | OP_01 | core/study-creation-core.md | user |
| Run optimization | "start", "run", "execute", "begin" | OP_02 | - | user |
| Monitor progress | "status", "progress", "trials", "check" | OP_03 | - | user |
| Analyze results | "results", "best", "compare", "pareto" | OP_04 | - | user |
| Export training data | "export", "training data", "neural" | OP_05 | modules/neural-acceleration.md | user |
| Debug issues | "error", "failed", "not working", "help" | OP_06 | - | user |
| Understand IMSO | "protocol 10", "IMSO", "adaptive" | SYS_10 | - | user |
| Multi-objective | "pareto", "NSGA", "multi-objective" | SYS_11 | - | user |
| Extractors | "extractor", "displacement", "stress" | SYS_12 | modules/extractors-catalog.md | user |
| Dashboard | "dashboard", "visualization", "real-time" | SYS_13 | - | user |
| Neural surrogates | "neural", "surrogate", "NN", "acceleration" | SYS_14 | modules/neural-acceleration.md | user |
| Add extractor | "create extractor", "new physics" | EXT_01 | - | power_user |
| Add hook | "create hook", "lifecycle", "callback" | EXT_02 | - | power_user |
| Add protocol | "create protocol", "new protocol" | EXT_03 | - | admin |
| Add skill | "create skill", "new skill" | EXT_04 | - | admin |
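As an illustration, the routing table above could be held as plain data with a keyword lookup. This is a minimal sketch: the `ROUTES` subset, the `classify` helper, and its matching rule are assumptions for demonstration, not part of the shipped engine.

```python
# Hypothetical sketch of keyword-based protocol routing.
# Only a subset of the routing table is shown; names are illustrative.
ROUTES = [
    # (keywords, protocol, skill_to_load, privilege)
    (("new", "set up", "create", "optimize"), "OP_01", "core/study-creation-core.md", "user"),
    (("start", "run", "execute", "begin"), "OP_02", None, "user"),
    (("status", "progress", "trials", "check"), "OP_03", None, "user"),
    (("results", "best", "compare", "pareto"), "OP_04", None, "user"),
    (("error", "failed", "not working", "help"), "OP_06", None, "user"),
    (("create extractor", "new physics"), "EXT_01", None, "power_user"),
    (("create protocol", "new protocol"), "EXT_03", None, "admin"),
]

def classify(request: str):
    """Return (protocol, skill, privilege) for the first keyword match."""
    text = request.lower()
    # Try routes with longer phrases first so "create extractor"
    # wins over the bare "create" of OP_01.
    for keywords, protocol, skill, privilege in sorted(
        ROUTES, key=lambda r: -max(len(k) for k in r[0])
    ):
        if any(k in text for k in keywords):
            return protocol, skill, privilege
    return None  # no match: fall through to a clarifying question
```

A request like "set up a new study" resolves to OP_01 with its skill file, while an unmatched request returns `None`, mirroring the "if unclear, ask" rule later in this file.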
## Role Detection

Determine the user's privilege level:
| Role | How to Detect | Can Do | Cannot Do |
|---|---|---|---|
| user | Default for all sessions | Run studies, monitor, analyze, configure | Create extractors, modify protocols |
| power_user | User states they're a developer, or session context indicates | Create extractors, add hooks | Create protocols, modify skills |
| admin | Explicit declaration, admin config present | Full access | - |
**Default:** Assume `user` unless explicitly told otherwise.
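A minimal sketch of how the role table above could be enforced: roles are ranked, detection defaults to `user`, and a gate compares the session role against the privilege a protocol requires. The helper names are hypothetical, not part of the Atomizer codebase.

```python
# Hypothetical privilege gate; rank order mirrors the role table above.
ROLE_RANK = {"user": 0, "power_user": 1, "admin": 2}

def detect_role(session_context: dict) -> str:
    """Default to 'user' unless the session explicitly declares a known role."""
    declared = session_context.get("declared_role")
    return declared if declared in ROLE_RANK else "user"

def allowed(role: str, required: str) -> bool:
    """True when the session role meets the privilege a protocol requires."""
    return ROLE_RANK[role] >= ROLE_RANK[required]
```

With this shape, an EXT_03 request (`required="admin"`) is refused for a default session, matching the routing table's privilege column.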
## Context Loading Rules

After classifying the task, load context in this order:
### 1. Always Loaded (via CLAUDE.md)

- This file (00_BOOTSTRAP.md)
- Python environment rules
- Code reuse protocol

### 2. Load Per Task Type

See 02_CONTEXT_LOADER.md for complete loading rules.
**Quick Reference:**

```
CREATE_STUDY     → core/study-creation-core.md (PRIMARY)
                 → SYS_12_EXTRACTOR_LIBRARY.md (extractor reference)
                 → modules/zernike-optimization.md (if telescope/mirror)
                 → modules/neural-acceleration.md (if >50 trials)

RUN_OPTIMIZATION → OP_02_RUN_OPTIMIZATION.md
                 → SYS_15_METHOD_SELECTOR.md (method recommendation)
                 → SYS_14_NEURAL_ACCELERATION.md (if neural/turbo)

DEBUG            → OP_06_TROUBLESHOOT.md
                 → Relevant SYS_* based on error type
```
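The quick-reference rules above amount to an ordered file list per task type. A sketch, assuming a hypothetical `context_files` helper whose `telescope` and `trials` flags stand in for real study metadata:

```python
# Hypothetical context loader mirroring the Quick Reference rules.
ALWAYS = ["00_BOOTSTRAP.md"]  # plus CLAUDE.md-level rules in practice

def context_files(task: str, *, telescope: bool = False, trials: int = 0) -> list:
    """Return the context files to load, in order, for a classified task."""
    files = list(ALWAYS)
    if task == "CREATE_STUDY":
        files += ["core/study-creation-core.md", "SYS_12_EXTRACTOR_LIBRARY.md"]
        if telescope:          # telescope/mirror studies
            files.append("modules/zernike-optimization.md")
        if trials > 50:        # large studies benefit from neural surrogates
            files.append("modules/neural-acceleration.md")
    elif task == "RUN_OPTIMIZATION":
        files += ["OP_02_RUN_OPTIMIZATION.md", "SYS_15_METHOD_SELECTOR.md"]
    elif task == "DEBUG":
        files.append("OP_06_TROUBLESHOOT.md")
    return files
```

Keeping the rules as data like this makes the conditional loads (Zernike module, neural acceleration) explicit rather than buried in prose.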
## Execution Framework

For ANY task, follow this pattern:
1. ANNOUNCE → State what you're about to do
2. VALIDATE → Check prerequisites are met
3. EXECUTE → Perform the action
4. VERIFY → Confirm success
5. REPORT → Summarize what was done
6. SUGGEST → Offer logical next steps
See PROTOCOL_EXECUTION.md for detailed execution rules.
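The six-step pattern could be driven by a thin wrapper like this sketch. All names here are placeholders for real protocol actions; nothing below comes from PROTOCOL_EXECUTION.md itself.

```python
# Hypothetical driver for the ANNOUNCE → SUGGEST pattern.
def run_task(name, validate, execute, verify, suggest_next):
    print(f"ANNOUNCE: about to {name}")           # 1. state intent
    if not validate():                            # 2. check prerequisites
        return {"ok": False, "stage": "validate"}
    result = execute()                            # 3. perform the action
    if not verify(result):                        # 4. confirm success
        return {"ok": False, "stage": "verify"}
    print(f"REPORT: {name} done -> {result}")     # 5. summarize outcome
    print(f"SUGGEST: {suggest_next}")             # 6. offer next steps
    return {"ok": True, "result": result}
```

The point of the shape is that EXECUTE never runs before VALIDATE passes, and REPORT/SUGGEST never fire on a failed VERIFY.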
## Emergency Quick Paths

### "I just want to run an optimization"
- Do you have a `.prt` and `.sim` file? → Yes: OP_01 → OP_02
- Getting errors? → OP_06
- Want to see progress? → OP_03
### "Something broke"

- Read the error message
- Load OP_06_TROUBLESHOOT.md
- Follow the diagnostic flowchart
### "What did my optimization find?"

- Load OP_04_ANALYZE_RESULTS.md
- Query the study database
- Generate a report
## Protocol Directory Map

```
docs/protocols/
├── operations/           # Layer 2: How-to guides
│   ├── OP_01_CREATE_STUDY.md
│   ├── OP_02_RUN_OPTIMIZATION.md
│   ├── OP_03_MONITOR_PROGRESS.md
│   ├── OP_04_ANALYZE_RESULTS.md
│   ├── OP_05_EXPORT_TRAINING_DATA.md
│   └── OP_06_TROUBLESHOOT.md
│
├── system/               # Layer 3: Core specifications
│   ├── SYS_10_IMSO.md
│   ├── SYS_11_MULTI_OBJECTIVE.md
│   ├── SYS_12_EXTRACTOR_LIBRARY.md
│   ├── SYS_13_DASHBOARD_TRACKING.md
│   └── SYS_14_NEURAL_ACCELERATION.md
│
└── extensions/           # Layer 4: Extensibility guides
    ├── EXT_01_CREATE_EXTRACTOR.md
    ├── EXT_02_CREATE_HOOK.md
    ├── EXT_03_CREATE_PROTOCOL.md
    ├── EXT_04_CREATE_SKILL.md
    └── templates/
```
## Key Constraints (Always Apply)
- **Python Environment:** Always use `conda activate atomizer`
- **Never modify master files:** Copy NX files to the study working directory first
- **Code reuse:** Check `optimization_engine/extractors/` before writing new extraction code
- **Validation:** Always validate the config before running an optimization
- **Documentation:** Every study needs a README.md and STUDY_REPORT.md
## Next Steps After Bootstrap
- If you know the task type → Go to the relevant OP_* or SYS_* protocol
- If unclear → Ask the user a clarifying question
- If the task is complex → Read `01_CHEATSHEET.md` for a quick reference
- If you need detailed loading rules → Read `02_CONTEXT_LOADER.md`