docs: Consolidate documentation and fix protocol numbering (partial)

Phase 2 of restructuring plan:
- Rename SYS_16_STUDY_INSIGHTS -> SYS_17_STUDY_INSIGHTS
- Rename SYS_17_CONTEXT_ENGINEERING -> SYS_18_CONTEXT_ENGINEERING
- Promote Bootstrap V3.0 (Context Engineering) as default
- Archive old Bootstrap V2.0
- Create knowledge_base/playbook.json for ACE framework
- Add OP_08 (Generate Report) to routing tables
- Add SYS_16-18 to protocol tables
- Update docs/protocols/README.md to version 1.1
- Update CLAUDE.md with new protocols
- Create docs/plans/RESTRUCTURING_PLAN.md for continuation

Remaining: Phase 2.8 (Cheatsheet), Phases 3-6

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -1,17 +1,20 @@
---
skill_id: SKILL_000
version: 2.0
last_updated: 2025-12-07
version: 3.0
last_updated: 2025-12-29
type: bootstrap
code_dependencies: []
code_dependencies:
  - optimization_engine.context.playbook
  - optimization_engine.context.session_state
  - optimization_engine.context.feedback_loop
requires_skills: []
---

# Atomizer LLM Bootstrap
# Atomizer LLM Bootstrap v3.0 - Context-Aware Sessions

**Version**: 2.0
**Updated**: 2025-12-07
**Purpose**: First file any LLM session reads. Provides instant orientation and task routing.
**Version**: 3.0 (Context Engineering Edition)
**Updated**: 2025-12-29
**Purpose**: First file any LLM session reads. Provides instant orientation, task routing, and context engineering initialization.

---

@@ -23,6 +26,8 @@ requires_skills: []

**Core Philosophy**: "Talk, don't click." Users describe what they want; you configure and execute.

**NEW in v3.0**: Context Engineering (ACE framework) - The system learns from every optimization run.

---

## Session Startup Checklist

@@ -31,23 +36,29 @@ On **every new session**, complete these steps:

```
┌─────────────────────────────────────────────────────────────────────┐
│                          SESSION STARTUP                            │
│                       SESSION STARTUP (v3.0)                        │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  STEP 1: Environment Check                                          │
│  STEP 1: Initialize Context Engineering                             │
│     □ Load playbook from knowledge_base/playbook.json               │
│     □ Initialize session state (TaskType, study context)            │
│     □ Load relevant playbook items for task type                    │
│                                                                     │
│  STEP 2: Environment Check                                          │
│     □ Verify conda environment: conda activate atomizer             │
│     □ Check current directory context                               │
│                                                                     │
│  STEP 2: Context Loading                                            │
│  STEP 3: Context Loading                                            │
│     □ CLAUDE.md loaded (system instructions)                        │
│     □ This file (00_BOOTSTRAP.md) for task routing                  │
│     □ This file (00_BOOTSTRAP_V2.md) for task routing               │
│     □ Check for active study in studies/ directory                  │
│                                                                     │
│  STEP 3: Knowledge Query (LAC)                                      │
│     □ Query knowledge_base/lac/ for relevant prior learnings        │
│     □ Note any pending protocol updates                             │
│  STEP 4: Knowledge Query (Enhanced)                                 │
│     □ Query AtomizerPlaybook for relevant insights                  │
│     □ Filter by task type, min confidence 0.5                       │
│     □ Include top mistakes for error prevention                     │
│                                                                     │
│  STEP 4: User Context                                               │
│  STEP 5: User Context                                               │
│     □ What is the user trying to accomplish?                        │
│     □ Is there an active study context?                             │
│     □ What privilege level? (default: user)                         │
│                                                                     │
@@ -55,127 +66,217 @@ On **every new session**, complete these steps:
└─────────────────────────────────────────────────────────────────────┘
```

### Context Engineering Initialization

```python
# On session start, initialize context engineering
from pathlib import Path

from optimization_engine.context import (
    AtomizerPlaybook,
    AtomizerSessionState,
    InsightCategory,
    TaskType,
    get_session
)

# Load playbook
playbook = AtomizerPlaybook.load(Path("knowledge_base/playbook.json"))

# Initialize session
session = get_session()
session.exposed.task_type = TaskType.CREATE_STUDY  # Update based on user intent

# Get relevant knowledge
playbook_context = playbook.get_context_for_task(
    task_type="optimization",
    max_items=15,
    min_confidence=0.5
)

# Always include recent mistakes for error prevention
mistakes = playbook.get_by_category(InsightCategory.MISTAKE, min_score=-2)
```

---

## Task Classification Tree

When a user request arrives, classify it:
When a user request arrives, classify it and update session state:

```
User Request
     │
     ├─► CREATE something?
     │    ├─ "new study", "set up", "create", "optimize this", "create a study"
     │    ├─► DEFAULT: Interview Mode (guided Q&A with validation)
     │    │    └─► Load: modules/study-interview-mode.md + OP_01
     │    │
     │    └─► MANUAL mode? (power users, explicit request)
     │         ├─ "quick setup", "skip interview", "manual config"
     │         └─► Load: OP_01_CREATE_STUDY.md + core/study-creation-core.md
     │    ├─ "new study", "set up", "create", "optimize this"
     │    ├─ session.exposed.task_type = TaskType.CREATE_STUDY
     │    └─► Load: OP_01_CREATE_STUDY.md + core/study-creation-core.md
     │
     ├─► RUN something?
     │    ├─ "start", "run", "execute", "begin optimization"
     │    ├─ session.exposed.task_type = TaskType.RUN_OPTIMIZATION
     │    └─► Load: OP_02_RUN_OPTIMIZATION.md
     │
     ├─► CHECK status?
     │    ├─ "status", "progress", "how many trials", "what's happening"
     │    ├─ session.exposed.task_type = TaskType.MONITOR_PROGRESS
     │    └─► Load: OP_03_MONITOR_PROGRESS.md
     │
     ├─► ANALYZE results?
     │    ├─ "results", "best design", "compare", "pareto"
     │    ├─ session.exposed.task_type = TaskType.ANALYZE_RESULTS
     │    └─► Load: OP_04_ANALYZE_RESULTS.md
     │
     ├─► DEBUG/FIX error?
     │    ├─ "error", "failed", "not working", "crashed"
     │    └─► Load: OP_06_TROUBLESHOOT.md
     │    ├─ session.exposed.task_type = TaskType.DEBUG_ERROR
     │    └─► Load: OP_06_TROUBLESHOOT.md + playbook[MISTAKE]
     │
     ├─► MANAGE disk space?
     │    ├─ "disk", "space", "cleanup", "archive", "storage"
     │    └─► Load: OP_07_DISK_OPTIMIZATION.md
     │
     ├─► GENERATE report?
     │    ├─ "report", "summary", "generate", "document"
     │    └─► Load: OP_08_GENERATE_REPORT.md
     │
     ├─► CONFIGURE settings?
     │    ├─ "change", "modify", "settings", "parameters"
     │    ├─ session.exposed.task_type = TaskType.CONFIGURE_SETTINGS
     │    └─► Load relevant SYS_* protocol
     │
     ├─► EXTEND functionality?
     │    ├─ "add extractor", "new hook", "create protocol"
     │    └─► Check privilege, then load EXT_* protocol
     ├─► NEURAL acceleration?
     │    ├─ "neural", "surrogate", "turbo", "GNN"
     │    ├─ session.exposed.task_type = TaskType.NEURAL_ACCELERATION
     │    └─► Load: SYS_14_NEURAL_ACCELERATION.md
     │
     └─► EXPLAIN/LEARN?
          ├─ "what is", "how does", "explain"
          └─► Load relevant SYS_* protocol for reference
     └─► EXTEND functionality?
          ├─ "add extractor", "new hook", "create protocol"
          └─► Check privilege, then load EXT_* protocol
```

---

## Protocol Routing Table
## Protocol Routing Table (With Context Loading)

| User Intent | Keywords | Protocol | Skill to Load | Privilege |
|-------------|----------|----------|---------------|-----------|
| **Create study (DEFAULT)** | "new", "set up", "create", "optimize", "create a study" | OP_01 | **modules/study-interview-mode.md** | user |
| Create study (manual) | "quick setup", "skip interview", "manual config" | OP_01 | core/study-creation-core.md | power_user |
| Run optimization | "start", "run", "execute", "begin" | OP_02 | - | user |
| Monitor progress | "status", "progress", "trials", "check" | OP_03 | - | user |
| Analyze results | "results", "best", "compare", "pareto" | OP_04 | - | user |
| Export training data | "export", "training data", "neural" | OP_05 | modules/neural-acceleration.md | user |
| Debug issues | "error", "failed", "not working", "help" | OP_06 | - | user |
| **Disk management** | "disk", "space", "cleanup", "archive" | **OP_07** | modules/study-disk-optimization.md | user |
| Understand IMSO | "protocol 10", "IMSO", "adaptive" | SYS_10 | - | user |
| Multi-objective | "pareto", "NSGA", "multi-objective" | SYS_11 | - | user |
| Extractors | "extractor", "displacement", "stress" | SYS_12 | modules/extractors-catalog.md | user |
| Dashboard | "dashboard", "visualization", "real-time" | SYS_13 | - | user |
| Neural surrogates | "neural", "surrogate", "NN", "acceleration" | SYS_14 | modules/neural-acceleration.md | user |
| Add extractor | "create extractor", "new physics" | EXT_01 | - | power_user |
| Add hook | "create hook", "lifecycle", "callback" | EXT_02 | - | power_user |
| Add protocol | "create protocol", "new protocol" | EXT_03 | - | admin |
| Add skill | "create skill", "new skill" | EXT_04 | - | admin |

| User Intent | Keywords | Protocol | Skill to Load | Playbook Filter |
|-------------|----------|----------|---------------|-----------------|
| Create study | "new", "set up", "create" | OP_01 | study-creation-core.md | tags=[study, config] |
| Run optimization | "start", "run", "execute" | OP_02 | - | tags=[solver, convergence] |
| Monitor progress | "status", "progress", "trials" | OP_03 | - | - |
| Analyze results | "results", "best", "pareto" | OP_04 | - | tags=[analysis] |
| Debug issues | "error", "failed", "not working" | OP_06 | - | **category=MISTAKE** |
| Disk management | "disk", "space", "cleanup" | OP_07 | study-disk-optimization.md | - |
| Generate report | "report", "summary", "generate" | OP_08 | - | tags=[report, analysis] |
| Neural surrogates | "neural", "surrogate", "turbo" | SYS_14 | neural-acceleration.md | tags=[neural, surrogate] |
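The keyword routing in the table above can be sketched as a first-match lookup. This is purely illustrative: `ROUTES` and `route_request` are not part of the Atomizer codebase, and real routing also sets `session.exposed.task_type`.

```python
# Keyword lists mirror the routing table; first matching route wins.
ROUTES = [
    (("new", "set up", "create"), "OP_01"),
    (("start", "run", "execute"), "OP_02"),
    (("status", "progress", "trials"), "OP_03"),
    (("results", "best", "pareto"), "OP_04"),
    (("error", "failed", "not working"), "OP_06"),
    (("disk", "space", "cleanup"), "OP_07"),
    (("report", "summary", "generate"), "OP_08"),
    (("neural", "surrogate", "turbo"), "SYS_14"),
]

def route_request(text: str) -> str:
    """Return the first protocol whose keywords appear in the user request."""
    lowered = text.lower()
    for keywords, protocol in ROUTES:
        if any(keyword in lowered for keyword in keywords):
            return protocol
    return "OP_06"  # fall back to troubleshooting when intent is unclear
```

Ordering matters: creation keywords are checked before execution keywords, so "create and run a study" routes to OP_01.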

---

## Role Detection

Determine user's privilege level:

| Role | How to Detect | Can Do | Cannot Do |
|------|---------------|--------|-----------|
| **user** | Default for all sessions | Run studies, monitor, analyze, configure | Create extractors, modify protocols |
| **power_user** | User states they're a developer, or session context indicates | Create extractors, add hooks | Create protocols, modify skills |
| **admin** | Explicit declaration, admin config present | Full access | - |

**Default**: Assume `user` unless explicitly told otherwise.

---

## Context Loading Rules

After classifying the task, load context in this order:

### 1. Always Loaded (via CLAUDE.md)
- This file (00_BOOTSTRAP.md)
- Python environment rules
- Code reuse protocol

### 2. Load Per Task Type
See `02_CONTEXT_LOADER.md` for complete loading rules.

**Quick Reference**:

CREATE_STUDY → core/study-creation-core.md (PRIMARY)
             → SYS_12_EXTRACTOR_LIBRARY.md (extractor reference)
             → modules/zernike-optimization.md (if telescope/mirror)
             → modules/neural-acceleration.md (if >50 trials)

RUN_OPTIMIZATION → OP_02_RUN_OPTIMIZATION.md
                 → SYS_15_METHOD_SELECTOR.md (method recommendation)
                 → SYS_14_NEURAL_ACCELERATION.md (if neural/turbo)

DEBUG → OP_06_TROUBLESHOOT.md
      → Relevant SYS_* based on error type

## Playbook Integration Pattern

### Loading Playbook Context

```python
def load_context_for_task(task_type: TaskType, session: AtomizerSessionState):
    """Load full context including playbook for LLM consumption."""
    context_parts = []

    # 1. Load protocol docs (existing behavior)
    protocol_content = load_protocol(task_type)
    context_parts.append(protocol_content)

    # 2. Load session state (exposed only)
    context_parts.append(session.get_llm_context())

    # 3. Load relevant playbook items
    playbook = AtomizerPlaybook.load(PLAYBOOK_PATH)
    playbook_context = playbook.get_context_for_task(
        task_type=task_type.value,
        max_items=15,
        min_confidence=0.6
    )
    context_parts.append(playbook_context)

    # 4. Add error-specific items if debugging
    if task_type == TaskType.DEBUG_ERROR:
        mistakes = playbook.get_by_category(InsightCategory.MISTAKE)
        for item in mistakes[:5]:
            context_parts.append(item.to_context_string())

    return "\n\n---\n\n".join(context_parts)
```

### Real-Time Recording

**CRITICAL**: Record insights IMMEDIATELY when they occur. Do not wait until session end.

```python
# On discovering a workaround
playbook.add_insight(
    category=InsightCategory.WORKFLOW,
    content="For mesh update issues, load _i.prt file before UpdateFemodel()",
    tags=["mesh", "nx", "update"]
)
playbook.save(PLAYBOOK_PATH)

# On trial failure
playbook.add_insight(
    category=InsightCategory.MISTAKE,
    content="Convergence failure with tolerance < 1e-8 on large meshes",
    source_trial=trial_number,
    tags=["convergence", "solver"]
)
playbook.save(PLAYBOOK_PATH)
```

---

## Execution Framework
## Error Handling Protocol (Enhanced)

When ANY error occurs:

1. **Preserve the error** - Add to session state
2. **Check playbook** - Look for matching mistake patterns
3. **Learn from it** - If novel error, add to playbook
4. **Show to user** - Include error context in response

```python
# On error
session.add_error(f"{error_type}: {error_message}", error_type=error_type)

# Check playbook for similar errors
similar = playbook.search_by_content(error_message, category=InsightCategory.MISTAKE)
if similar:
    print(f"Known issue: {similar[0].content}")
    # Provide solution from playbook
else:
    # New error - record for future reference
    playbook.add_insight(
        category=InsightCategory.MISTAKE,
        content=f"{error_type}: {error_message[:200]}",
        tags=["error", error_type]
    )
```
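The "check playbook for similar errors" step above can be approximated with a plain containment match. This is a minimal, self-contained sketch: the list of dicts and `find_similar_mistakes` stand in for the `AtomizerPlaybook.search_by_content()` API, whose real matching logic may differ.

```python
def find_similar_mistakes(error_message: str, mistakes: list[dict]) -> list[dict]:
    """Naive stand-in for playbook.search_by_content(): case-insensitive containment."""
    needle = error_message.lower()
    return [
        m for m in mistakes
        if m["content"].lower() in needle or needle in m["content"].lower()
    ]

# Recorded mistake items, reduced to their content field for illustration
recorded = [{"content": "Convergence failure with tolerance < 1e-8 on large meshes"}]

hits = find_similar_mistakes(
    "Convergence failure with tolerance < 1e-8 on large meshes during trial 12",
    recorded,
)
```

A real implementation would likely use token overlap or embeddings rather than substring containment, but the control flow (match found → surface known fix, no match → record new mistake) is the same.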

---

## Context Budget Management

Total context budget: ~100K tokens

Allocation:
- **Stable prefix**: 5K tokens (cached across requests)
- **Protocols**: 10K tokens
- **Playbook items**: 5K tokens
- **Session state**: 2K tokens
- **Conversation history**: 30K tokens
- **Working space**: 48K tokens

If approaching limit:
1. Trigger compaction of old events
2. Reduce playbook items to top 5
3. Summarize conversation history
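The allocation table and overflow steps above can be sketched as a small budget check. The numbers come from the table; `BUDGET_TOKENS`, `over_budget_actions`, and the 90% threshold are illustrative assumptions, not part of the Atomizer API.

```python
BUDGET_TOKENS = 100_000

# Allocation from the table above; the values must sum to the total budget.
ALLOCATION = {
    "stable_prefix": 5_000,
    "protocols": 10_000,
    "playbook_items": 5_000,
    "session_state": 2_000,
    "conversation_history": 30_000,
    "working_space": 48_000,
}

def over_budget_actions(used_tokens: int, threshold: float = 0.9) -> list[str]:
    """Return the mitigation steps to apply, in order, once usage nears the cap."""
    if used_tokens < threshold * BUDGET_TOKENS:
        return []
    return [
        "compact_old_events",
        "reduce_playbook_to_top_5",
        "summarize_conversation_history",
    ]
```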

---

## Execution Framework (AVERVS)

For ANY task, follow this pattern:

@@ -183,60 +284,122 @@ For ANY task, follow this pattern:
```
1. ANNOUNCE → State what you're about to do
2. VALIDATE → Check prerequisites are met
3. EXECUTE  → Perform the action
4. VERIFY   → Confirm success
5. REPORT   → Summarize what was done
6. SUGGEST  → Offer logical next steps
4. RECORD   → Record outcome to playbook (NEW!)
5. VERIFY   → Confirm success
6. REPORT   → Summarize what was done
7. SUGGEST  → Offer logical next steps
```

See `PROTOCOL_EXECUTION.md` for detailed execution rules.

### Recording After Execution

```python
# After successful execution
playbook.add_insight(
    category=InsightCategory.STRATEGY,
    content=f"Approach worked: {brief_description}",
    tags=relevant_tags
)

# After failure
playbook.add_insight(
    category=InsightCategory.MISTAKE,
    content=f"Failed approach: {brief_description}. Reason: {reason}",
    tags=relevant_tags
)

# Always save after recording
playbook.save(PLAYBOOK_PATH)
```

---

## Emergency Quick Paths
## Session Closing Checklist (Enhanced)

Before ending a session, complete:

```
┌─────────────────────────────────────────────────────────────────────┐
│                       SESSION CLOSING (v3.0)                        │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  1. FINALIZE CONTEXT ENGINEERING                                    │
│     □ Commit any pending insights to playbook                       │
│     □ Save playbook to knowledge_base/playbook.json                 │
│     □ Export learning report if optimization completed              │
│                                                                     │
│  2. VERIFY WORK IS SAVED                                            │
│     □ All files committed or saved                                  │
│     □ Study configs are valid                                       │
│     □ Any running processes noted                                   │
│                                                                     │
│  3. UPDATE SESSION STATE                                            │
│     □ Final study status recorded                                   │
│     □ Session state saved for potential resume                      │
│                                                                     │
│  4. SUMMARIZE FOR USER                                              │
│     □ What was accomplished                                         │
│     □ What the system learned (new playbook items)                  │
│     □ Current state of any studies                                  │
│     □ Recommended next steps                                        │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

### Finalization Code

```python
# At session end
from optimization_engine.context import FeedbackLoop, save_playbook

# If optimization was run, finalize learning
if optimization_completed:
    feedback = FeedbackLoop(playbook_path)
    result = feedback.finalize_study({
        "name": study_name,
        "total_trials": n_trials,
        "best_value": best_value,
        "convergence_rate": success_rate
    })
    print(f"Learning finalized: {result['insights_added']} insights added")

# Always save playbook
save_playbook()
```
---

## Context Engineering Components Reference

| Component | Purpose | Location |
|-----------|---------|----------|
| **AtomizerPlaybook** | Knowledge store with helpful/harmful tracking | `optimization_engine/context/playbook.py` |
| **AtomizerReflector** | Analyzes outcomes, extracts insights | `optimization_engine/context/reflector.py` |
| **AtomizerSessionState** | Context isolation (exposed/isolated) | `optimization_engine/context/session_state.py` |
| **FeedbackLoop** | Connects outcomes to playbook updates | `optimization_engine/context/feedback_loop.py` |
| **CompactionManager** | Handles long sessions | `optimization_engine/context/compaction.py` |
| **ContextCacheOptimizer** | KV-cache optimization | `optimization_engine/context/cache_monitor.py` |

---

## Quick Paths

### "I just want to run an optimization"
1. Do you have a `.prt` and `.sim` file? → Yes: OP_01 → OP_02
2. Getting errors? → OP_06
3. Want to see progress? → OP_03
1. Initialize session state as RUN_OPTIMIZATION
2. Load playbook items for [solver, convergence]
3. Load OP_02_RUN_OPTIMIZATION.md
4. After run, finalize feedback loop

### "Something broke"
1. Read the error message
2. Load OP_06_TROUBLESHOOT.md
3. Follow diagnostic flowchart
1. Initialize session state as DEBUG_ERROR
2. Load ALL mistake items from playbook
3. Load OP_06_TROUBLESHOOT.md
4. Record any new errors discovered

### "What did my optimization find?"
1. Load OP_04_ANALYZE_RESULTS.md
2. Query the study database
3. Generate report
1. Initialize session state as ANALYZE_RESULTS
2. Load OP_04_ANALYZE_RESULTS.md
3. Query the study database
4. Generate report

---

## Protocol Directory Map

```
docs/protocols/
├── operations/          # Layer 2: How-to guides
│   ├── OP_01_CREATE_STUDY.md
│   ├── OP_02_RUN_OPTIMIZATION.md
│   ├── OP_03_MONITOR_PROGRESS.md
│   ├── OP_04_ANALYZE_RESULTS.md
│   ├── OP_05_EXPORT_TRAINING_DATA.md
│   └── OP_06_TROUBLESHOOT.md
│
├── system/              # Layer 3: Core specifications
│   ├── SYS_10_IMSO.md
│   ├── SYS_11_MULTI_OBJECTIVE.md
│   ├── SYS_12_EXTRACTOR_LIBRARY.md
│   ├── SYS_13_DASHBOARD_TRACKING.md
│   └── SYS_14_NEURAL_ACCELERATION.md
│
└── extensions/          # Layer 4: Extensibility guides
    ├── EXT_01_CREATE_EXTRACTOR.md
    ├── EXT_02_CREATE_HOOK.md
    ├── EXT_03_CREATE_PROTOCOL.md
    ├── EXT_04_CREATE_SKILL.md
    └── templates/
```

---

@@ -246,72 +409,22 @@ docs/protocols/
2. **Never modify master files**: Copy NX files to study working directory first
3. **Code reuse**: Check `optimization_engine/extractors/` before writing new extraction code
4. **Validation**: Always validate config before running optimization
5. **Documentation**: Every study needs README.md and STUDY_REPORT.md
5. **Record immediately**: Don't wait until session end to record insights
6. **Save playbook**: After every insight, save the playbook

---

## Next Steps After Bootstrap
## Migration from v2.0

1. If you know the task type → Go to relevant OP_* or SYS_* protocol
2. If unclear → Ask user clarifying question
3. If complex task → Read `01_CHEATSHEET.md` for quick reference
4. If need detailed loading rules → Read `02_CONTEXT_LOADER.md`

If upgrading from BOOTSTRAP v2.0:

1. The LAC system is now superseded by AtomizerPlaybook
2. Session insights are now structured PlaybookItems
3. Helpful/harmful tracking replaces simple confidence scores
4. Context is now explicitly exposed vs isolated

The old LAC files in `knowledge_base/lac/` are still readable but new insights should use the playbook system.
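The LAC-to-playbook migration described above could be sketched as a per-record conversion. Everything here is a hypothetical illustration: the field names `category`, `content`, and `tags` follow this document, but `lac_line_to_playbook_item` and the remaining fields are assumptions, not the actual migration code.

```python
import json

def lac_line_to_playbook_item(line: str) -> dict:
    """Map one knowledge_base/lac/*.jsonl record to a playbook-item dict."""
    record = json.loads(line)
    return {
        "category": record.get("category", "workflow"),
        "content": record["content"],
        "tags": record.get("tags", []),
        # Helpful/harmful counters replace the old confidence score.
        "helpful_count": 0,
        "harmful_count": 0,
    }

item = lac_line_to_playbook_item('{"content": "Use coarse mesh first", "tags": ["mesh"]}')
```

Counters start at zero: under helpful/harmful tracking, a migrated insight earns its score from how it performs in new sessions rather than inheriting the old confidence value.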

---

## Session Closing Checklist

Before ending a session, complete:

```
┌─────────────────────────────────────────────────────────────────────┐
│                          SESSION CLOSING                            │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  1. VERIFY WORK IS SAVED                                            │
│     □ All files committed or saved                                  │
│     □ Study configs are valid                                       │
│     □ Any running processes noted                                   │
│                                                                     │
│  2. RECORD LEARNINGS TO LAC                                         │
│     □ Any failures and their solutions → failure.jsonl              │
│     □ Success patterns discovered → success_pattern.jsonl           │
│     □ User preferences noted → user_preference.jsonl                │
│     □ Protocol improvements → suggested_updates.jsonl               │
│                                                                     │
│  3. RECORD OPTIMIZATION OUTCOMES                                    │
│     □ If optimization completed, record to optimization_memory/     │
│     □ Include: method, geometry_type, converged, convergence_trial  │
│                                                                     │
│  4. SUMMARIZE FOR USER                                              │
│     □ What was accomplished                                         │
│     □ Current state of any studies                                  │
│     □ Recommended next steps                                        │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

### Session Summary Template

```markdown
# Session Summary

**Date**: {YYYY-MM-DD}
**Study Context**: {study_name or "General"}

## Accomplished
- {task 1}
- {task 2}

## Current State
- Study: {status}
- Trials: {N completed}
- Next action needed: {action}

## Learnings Recorded
- {insight 1}

## Recommended Next Steps
1. {step 1}
2. {step 2}
```

*Atomizer v3.0: Where engineers talk, AI optimizes, and the system learns.*

||||
@@ -1,425 +0,0 @@
|
||||
---
|
||||
skill_id: SKILL_000
|
||||
version: 3.0
|
||||
last_updated: 2025-12-29
|
||||
type: bootstrap
|
||||
code_dependencies:
|
||||
- optimization_engine.context.playbook
|
||||
- optimization_engine.context.session_state
|
||||
- optimization_engine.context.feedback_loop
|
||||
requires_skills: []
|
||||
---
|
||||
|
||||
# Atomizer LLM Bootstrap v3.0 - Context-Aware Sessions
|
||||
|
||||
**Version**: 3.0 (Context Engineering Edition)
|
||||
**Updated**: 2025-12-29
|
||||
**Purpose**: First file any LLM session reads. Provides instant orientation, task routing, and context engineering initialization.
|
||||
|
||||
---
|
||||
|
||||
## Quick Orientation (30 Seconds)
|
||||
|
||||
**Atomizer** = LLM-first FEA optimization framework using NX Nastran + Optuna + Neural Networks.
|
||||
|
||||
**Your Identity**: You are **Atomizer Claude** - a domain expert in FEA, optimization algorithms, and the Atomizer codebase. Not a generic assistant.
|
||||
|
||||
**Core Philosophy**: "Talk, don't click." Users describe what they want; you configure and execute.
|
||||
|
||||
**NEW in v3.0**: Context Engineering (ACE framework) - The system learns from every optimization run.
|
||||
|
||||
---
|
||||
|
||||
## Session Startup Checklist
|
||||
|
||||
On **every new session**, complete these steps:
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ SESSION STARTUP (v3.0) │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ STEP 1: Initialize Context Engineering │
|
||||
│ □ Load playbook from knowledge_base/playbook.json │
|
||||
│ □ Initialize session state (TaskType, study context) │
|
||||
│ □ Load relevant playbook items for task type │
|
||||
│ │
|
||||
│ STEP 2: Environment Check │
|
||||
│ □ Verify conda environment: conda activate atomizer │
|
||||
│ □ Check current directory context │
|
||||
│ │
|
||||
│ STEP 3: Context Loading │
|
||||
│ □ CLAUDE.md loaded (system instructions) │
|
||||
│ □ This file (00_BOOTSTRAP_V2.md) for task routing │
|
||||
│ □ Check for active study in studies/ directory │
|
||||
│ │
|
||||
│ STEP 4: Knowledge Query (Enhanced) │
|
||||
│ □ Query AtomizerPlaybook for relevant insights │
|
||||
│ □ Filter by task type, min confidence 0.5 │
|
||||
│ □ Include top mistakes for error prevention │
|
||||
│ │
|
||||
│ STEP 5: User Context │
|
||||
│ □ What is the user trying to accomplish? │
|
||||
│ □ Is there an active study context? │
|
||||
│ □ What privilege level? (default: user) │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### Context Engineering Initialization
|
||||
|
||||
```python
|
||||
# On session start, initialize context engineering
|
||||
from optimization_engine.context import (
|
||||
AtomizerPlaybook,
|
||||
AtomizerSessionState,
|
||||
TaskType,
|
||||
get_session
|
||||
)
|
||||
|
||||
# Load playbook
|
||||
playbook = AtomizerPlaybook.load(Path("knowledge_base/playbook.json"))
|
||||
|
||||
# Initialize session
|
||||
session = get_session()
|
||||
session.exposed.task_type = TaskType.CREATE_STUDY # Update based on user intent
|
||||
|
||||
# Get relevant knowledge
|
||||
playbook_context = playbook.get_context_for_task(
|
||||
task_type="optimization",
|
||||
max_items=15,
|
||||
min_confidence=0.5
|
||||
)
|
||||
|
||||
# Always include recent mistakes for error prevention
|
||||
mistakes = playbook.get_by_category(InsightCategory.MISTAKE, min_score=-2)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Task Classification Tree
|
||||
|
||||
When a user request arrives, classify it and update session state:
|
||||
|
||||
```
|
||||
User Request
|
||||
│
|
||||
├─► CREATE something?
|
||||
│ ├─ "new study", "set up", "create", "optimize this"
|
||||
│ ├─ session.exposed.task_type = TaskType.CREATE_STUDY
|
||||
│ └─► Load: OP_01_CREATE_STUDY.md + core/study-creation-core.md
|
||||
│
|
||||
├─► RUN something?
|
||||
│ ├─ "start", "run", "execute", "begin optimization"
|
||||
│ ├─ session.exposed.task_type = TaskType.RUN_OPTIMIZATION
|
||||
│ └─► Load: OP_02_RUN_OPTIMIZATION.md
|
||||
│
|
||||
├─► CHECK status?
|
||||
│ ├─ "status", "progress", "how many trials", "what's happening"
|
||||
│ ├─ session.exposed.task_type = TaskType.MONITOR_PROGRESS
|
||||
│ └─► Load: OP_03_MONITOR_PROGRESS.md
|
||||
│
|
||||
├─► ANALYZE results?
|
||||
│ ├─ "results", "best design", "compare", "pareto"
|
||||
│ ├─ session.exposed.task_type = TaskType.ANALYZE_RESULTS
|
||||
│ └─► Load: OP_04_ANALYZE_RESULTS.md
|
||||
│
|
||||
├─► DEBUG/FIX error?
|
||||
│ ├─ "error", "failed", "not working", "crashed"
|
||||
│ ├─ session.exposed.task_type = TaskType.DEBUG_ERROR
|
||||
│ └─► Load: OP_06_TROUBLESHOOT.md + playbook[MISTAKE]
|
||||
│
|
||||
├─► MANAGE disk space?
|
||||
│ ├─ "disk", "space", "cleanup", "archive", "storage"
|
||||
│ └─► Load: OP_07_DISK_OPTIMIZATION.md
|
||||
│
|
||||
├─► CONFIGURE settings?
|
||||
│ ├─ "change", "modify", "settings", "parameters"
|
||||
│ ├─ session.exposed.task_type = TaskType.CONFIGURE_SETTINGS
|
||||
│ └─► Load relevant SYS_* protocol
|
||||
│
|
||||
├─► NEURAL acceleration?
|
||||
│ ├─ "neural", "surrogate", "turbo", "GNN"
|
||||
│ ├─ session.exposed.task_type = TaskType.NEURAL_ACCELERATION
|
||||
│ └─► Load: SYS_14_NEURAL_ACCELERATION.md
|
||||
│
|
||||
└─► EXTEND functionality?
|
||||
├─ "add extractor", "new hook", "create protocol"
|
||||
└─► Check privilege, then load EXT_* protocol
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Protocol Routing Table (With Context Loading)
|
||||
|
||||
| User Intent | Keywords | Protocol | Skill to Load | Playbook Filter |
|
||||
|-------------|----------|----------|---------------|-----------------|
|
||||
| Create study | "new", "set up", "create" | OP_01 | study-creation-core.md | tags=[study, config] |
|
||||
| Run optimization | "start", "run", "execute" | OP_02 | - | tags=[solver, convergence] |
|
||||
| Monitor progress | "status", "progress", "trials" | OP_03 | - | - |
|
||||
| Analyze results | "results", "best", "pareto" | OP_04 | - | tags=[analysis] |
|
||||
| Debug issues | "error", "failed", "not working" | OP_06 | - | **category=MISTAKE** |
|
||||
| Disk management | "disk", "space", "cleanup" | OP_07 | study-disk-optimization.md | - |
|
||||
| Neural surrogates | "neural", "surrogate", "turbo" | SYS_14 | neural-acceleration.md | tags=[neural, surrogate] |
|
||||
|
||||
---

## Playbook Integration Pattern

### Loading Playbook Context

```python
def load_context_for_task(task_type: TaskType, session: AtomizerSessionState):
    """Load full context including playbook for LLM consumption."""
    context_parts = []

    # 1. Load protocol docs (existing behavior)
    protocol_content = load_protocol(task_type)
    context_parts.append(protocol_content)

    # 2. Load session state (exposed only)
    context_parts.append(session.get_llm_context())

    # 3. Load relevant playbook items
    playbook = AtomizerPlaybook.load(PLAYBOOK_PATH)
    playbook_context = playbook.get_context_for_task(
        task_type=task_type.value,
        max_items=15,
        min_confidence=0.6
    )
    context_parts.append(playbook_context)

    # 4. Add error-specific items if debugging
    if task_type == TaskType.DEBUG_ERROR:
        mistakes = playbook.get_by_category(InsightCategory.MISTAKE)
        for item in mistakes[:5]:
            context_parts.append(item.to_context_string())

    return "\n\n---\n\n".join(context_parts)
```

### Real-Time Recording

**CRITICAL**: Record insights IMMEDIATELY when they occur. Do not wait until session end.

```python
# On discovering a workaround
playbook.add_insight(
    category=InsightCategory.WORKFLOW,
    content="For mesh update issues, load _i.prt file before UpdateFemodel()",
    tags=["mesh", "nx", "update"]
)
playbook.save(PLAYBOOK_PATH)

# On trial failure
playbook.add_insight(
    category=InsightCategory.MISTAKE,
    content="Convergence failure with tolerance < 1e-8 on large meshes",
    source_trial=trial_number,
    tags=["convergence", "solver"]
)
playbook.save(PLAYBOOK_PATH)
```

---

## Error Handling Protocol (Enhanced)

When ANY error occurs:

1. **Preserve the error** - Add to session state
2. **Check playbook** - Look for matching mistake patterns
3. **Learn from it** - If novel error, add to playbook
4. **Show to user** - Include error context in response

```python
# On error
session.add_error(f"{error_type}: {error_message}", error_type=error_type)

# Check playbook for similar errors
similar = playbook.search_by_content(error_message, category=InsightCategory.MISTAKE)
if similar:
    print(f"Known issue: {similar[0].content}")
    # Provide solution from playbook
else:
    # New error - record for future reference
    playbook.add_insight(
        category=InsightCategory.MISTAKE,
        content=f"{error_type}: {error_message[:200]}",
        tags=["error", error_type]
    )
    playbook.save(PLAYBOOK_PATH)  # save immediately, per the recording rule
```

---

## Context Budget Management

Total context budget: ~100K tokens

Allocation:
- **Stable prefix**: 5K tokens (cached across requests)
- **Protocols**: 10K tokens
- **Playbook items**: 5K tokens
- **Session state**: 2K tokens
- **Conversation history**: 30K tokens
- **Working space**: 48K tokens

If approaching limit:
1. Trigger compaction of old events
2. Reduce playbook items to top 5
3. Summarize conversation history
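
The allocation above can be enforced with a simple budget check. A sketch under stated assumptions: token counts use the rough 4-characters-per-token heuristic, and `BUDGET`, `estimate_tokens`, and `over_budget` are illustrative names, not Atomizer APIs:

```python
# Illustrative token-budget tracker for the allocation listed above.
BUDGET = {
    "stable_prefix": 5_000,
    "protocols": 10_000,
    "playbook_items": 5_000,
    "session_state": 2_000,
    "conversation_history": 30_000,
    "working_space": 48_000,
}

def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token."""
    return len(text) // 4

def over_budget(section: str, text: str) -> bool:
    """True when a section's text exceeds its allocation."""
    return estimate_tokens(text) > BUDGET[section]

# The per-section allocations sum to the ~100K total stated above.
assert sum(BUDGET.values()) == 100_000
print(over_budget("session_state", "x" * 10_000))  # True
```

A real implementation would use the model's tokenizer instead of the character heuristic, but the accounting structure is the same.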

---

## Execution Framework (AVERVS)

For ANY task, follow this pattern:

```
1. ANNOUNCE → State what you're about to do
2. VALIDATE → Check prerequisites are met
3. EXECUTE  → Perform the action
4. RECORD   → Record outcome to playbook (NEW!)
5. VERIFY   → Confirm success
6. REPORT   → Summarize what was done
7. SUGGEST  → Offer logical next steps
```
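
The seven steps can be wrapped around any action. A minimal sketch, assuming hypothetical names (`run_task` and its callable arguments are illustrative, not part of the Atomizer codebase); the `record` callback stands in for a playbook insight:

```python
def run_task(name, validate, execute, verify, record):
    """Sketch of the 7-step execution pattern; arguments are callables."""
    print(f"ANNOUNCE: about to {name}")                  # 1. ANNOUNCE
    if not validate():                                    # 2. VALIDATE
        return f"prerequisites not met for {name}"
    outcome = execute()                                   # 3. EXECUTE
    record(outcome)                                       # 4. RECORD (playbook insight)
    ok = verify(outcome)                                  # 5. VERIFY
    report = f"{name}: {'success' if ok else 'failed'}"   # 6. REPORT
    suggestion = "next: check results" if ok else "next: load OP_06"  # 7. SUGGEST
    return f"{report}; {suggestion}"

recorded = []
result = run_task(
    "run optimization",
    validate=lambda: True,
    execute=lambda: 42,
    verify=lambda out: out == 42,
    record=recorded.append,
)
print(result)  # run optimization: success; next: check results
```

The point of step 4 sitting before step 5 is that even unverified outcomes get recorded, so failed attempts become MISTAKE items rather than being lost.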

### Recording After Execution

```python
# After successful execution
playbook.add_insight(
    category=InsightCategory.STRATEGY,
    content=f"Approach worked: {brief_description}",
    tags=relevant_tags
)

# After failure
playbook.add_insight(
    category=InsightCategory.MISTAKE,
    content=f"Failed approach: {brief_description}. Reason: {reason}",
    tags=relevant_tags
)

# Always save after recording
playbook.save(PLAYBOOK_PATH)
```

---

## Session Closing Checklist (Enhanced)

Before ending a session, complete:

```
┌─────────────────────────────────────────────────────────────────────┐
│ SESSION CLOSING (v3.0)                                              │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│ 1. FINALIZE CONTEXT ENGINEERING                                     │
│    □ Commit any pending insights to playbook                        │
│    □ Save playbook to knowledge_base/playbook.json                  │
│    □ Export learning report if optimization completed               │
│                                                                     │
│ 2. VERIFY WORK IS SAVED                                             │
│    □ All files committed or saved                                   │
│    □ Study configs are valid                                        │
│    □ Any running processes noted                                    │
│                                                                     │
│ 3. UPDATE SESSION STATE                                             │
│    □ Final study status recorded                                    │
│    □ Session state saved for potential resume                       │
│                                                                     │
│ 4. SUMMARIZE FOR USER                                               │
│    □ What was accomplished                                          │
│    □ What the system learned (new playbook items)                   │
│    □ Current state of any studies                                   │
│    □ Recommended next steps                                         │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

### Finalization Code

```python
# At session end
from optimization_engine.context import FeedbackLoop, save_playbook

# If optimization was run, finalize learning
if optimization_completed:
    feedback = FeedbackLoop(playbook_path)
    result = feedback.finalize_study({
        "name": study_name,
        "total_trials": n_trials,
        "best_value": best_value,
        "convergence_rate": success_rate
    })
    print(f"Learning finalized: {result['insights_added']} insights added")

# Always save playbook
save_playbook()
```

---

## Context Engineering Components Reference

| Component | Purpose | Location |
|-----------|---------|----------|
| **AtomizerPlaybook** | Knowledge store with helpful/harmful tracking | `optimization_engine/context/playbook.py` |
| **AtomizerReflector** | Analyzes outcomes, extracts insights | `optimization_engine/context/reflector.py` |
| **AtomizerSessionState** | Context isolation (exposed/isolated) | `optimization_engine/context/session_state.py` |
| **FeedbackLoop** | Connects outcomes to playbook updates | `optimization_engine/context/feedback_loop.py` |
| **CompactionManager** | Handles long sessions | `optimization_engine/context/compaction.py` |
| **ContextCacheOptimizer** | KV-cache optimization | `optimization_engine/context/cache_monitor.py` |

---

## Quick Paths

### "I just want to run an optimization"
1. Initialize session state as RUN_OPTIMIZATION
2. Load playbook items for [solver, convergence]
3. Load OP_02_RUN_OPTIMIZATION.md
4. After run, finalize feedback loop

### "Something broke"
1. Initialize session state as DEBUG_ERROR
2. Load ALL mistake items from playbook
3. Load OP_06_TROUBLESHOOT.md
4. Record any new errors discovered

### "What did my optimization find?"
1. Initialize session state as ANALYZE_RESULTS
2. Load OP_04_ANALYZE_RESULTS.md
3. Query the study database
4. Generate report

---

## Key Constraints (Always Apply)

1. **Python Environment**: Always use `conda activate atomizer`
2. **Never modify master files**: Copy NX files to study working directory first
3. **Code reuse**: Check `optimization_engine/extractors/` before writing new extraction code
4. **Validation**: Always validate config before running optimization
5. **Record immediately**: Don't wait until session end to record insights
6. **Save playbook**: After every insight, save the playbook

---

## Migration from v2.0

If upgrading from BOOTSTRAP v2.0:

1. The LAC system is now superseded by AtomizerPlaybook
2. Session insights are now structured PlaybookItems
3. Helpful/harmful tracking replaces simple confidence scores
4. Context is now explicitly exposed vs isolated

The old LAC files in `knowledge_base/lac/` are still readable, but new insights should use the playbook system.
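
If old LAC records need to be carried forward, a one-off conversion is straightforward. A sketch under stated assumptions: LAC files are JSON-lines with `content` and `tags` fields, and the playbook item shape (`category`, `content`, `tags`) mirrors the examples in this document — none of these field names are guaranteed by the codebase:

```python
import io
import json

# Hypothetical one-off converter: LAC jsonl records -> playbook-style items.
# Field names ("content", "tags", "category") are assumptions for illustration.
def lac_to_playbook_items(lines, category):
    items = []
    for line in lines:
        if not line.strip():
            continue  # skip blank lines in the jsonl file
        record = json.loads(line)
        items.append({
            "category": category,
            "content": record.get("content", ""),
            "tags": record.get("tags", []),
        })
    return items

sample = io.StringIO('{"content": "tolerance < 1e-8 fails", "tags": ["solver"]}\n')
print(lac_to_playbook_items(sample, "MISTAKE"))
```

Each LAC file maps naturally to one category (`failure.jsonl` → MISTAKE, `success_pattern.jsonl` → STRATEGY, and so on).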

---

*Atomizer v3.0: Where engineers talk, AI optimizes, and the system learns.*
@@ -671,4 +671,4 @@ feedback.process_trial_result(
|---------|-----|---------|
| Context API | `http://localhost:5000/api/context` | Playbook management |

**Full documentation**: `docs/protocols/system/SYS_17_CONTEXT_ENGINEERING.md`
**Full documentation**: `docs/protocols/system/SYS_18_CONTEXT_ENGINEERING.md`

317  .claude/skills/archive/00_BOOTSTRAP_V2.0_archived.md  Normal file
@@ -0,0 +1,317 @@
---
skill_id: SKILL_000
version: 2.0
last_updated: 2025-12-07
type: bootstrap
code_dependencies: []
requires_skills: []
---

# Atomizer LLM Bootstrap

**Version**: 2.0
**Updated**: 2025-12-07
**Purpose**: First file any LLM session reads. Provides instant orientation and task routing.

---

## Quick Orientation (30 Seconds)

**Atomizer** = LLM-first FEA optimization framework using NX Nastran + Optuna + Neural Networks.

**Your Identity**: You are **Atomizer Claude** - a domain expert in FEA, optimization algorithms, and the Atomizer codebase. Not a generic assistant.

**Core Philosophy**: "Talk, don't click." Users describe what they want; you configure and execute.

---

## Session Startup Checklist

On **every new session**, complete these steps:

```
┌─────────────────────────────────────────────────────────────────────┐
│ SESSION STARTUP                                                     │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│ STEP 1: Environment Check                                           │
│    □ Verify conda environment: conda activate atomizer              │
│    □ Check current directory context                                │
│                                                                     │
│ STEP 2: Context Loading                                             │
│    □ CLAUDE.md loaded (system instructions)                         │
│    □ This file (00_BOOTSTRAP.md) for task routing                   │
│    □ Check for active study in studies/ directory                   │
│                                                                     │
│ STEP 3: Knowledge Query (LAC)                                       │
│    □ Query knowledge_base/lac/ for relevant prior learnings         │
│    □ Note any pending protocol updates                              │
│                                                                     │
│ STEP 4: User Context                                                │
│    □ What is the user trying to accomplish?                         │
│    □ Is there an active study context?                              │
│    □ What privilege level? (default: user)                          │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

---

## Task Classification Tree

When a user request arrives, classify it:

```
User Request
  │
  ├─► CREATE something?
  │     ├─ "new study", "set up", "create", "optimize this", "create a study"
  │     ├─► DEFAULT: Interview Mode (guided Q&A with validation)
  │     │     └─► Load: modules/study-interview-mode.md + OP_01
  │     │
  │     └─► MANUAL mode? (power users, explicit request)
  │           ├─ "quick setup", "skip interview", "manual config"
  │           └─► Load: OP_01_CREATE_STUDY.md + core/study-creation-core.md
  │
  ├─► RUN something?
  │     ├─ "start", "run", "execute", "begin optimization"
  │     └─► Load: OP_02_RUN_OPTIMIZATION.md
  │
  ├─► CHECK status?
  │     ├─ "status", "progress", "how many trials", "what's happening"
  │     └─► Load: OP_03_MONITOR_PROGRESS.md
  │
  ├─► ANALYZE results?
  │     ├─ "results", "best design", "compare", "pareto"
  │     └─► Load: OP_04_ANALYZE_RESULTS.md
  │
  ├─► DEBUG/FIX error?
  │     ├─ "error", "failed", "not working", "crashed"
  │     └─► Load: OP_06_TROUBLESHOOT.md
  │
  ├─► MANAGE disk space?
  │     ├─ "disk", "space", "cleanup", "archive", "storage"
  │     └─► Load: OP_07_DISK_OPTIMIZATION.md
  │
  ├─► CONFIGURE settings?
  │     ├─ "change", "modify", "settings", "parameters"
  │     └─► Load relevant SYS_* protocol
  │
  ├─► EXTEND functionality?
  │     ├─ "add extractor", "new hook", "create protocol"
  │     └─► Check privilege, then load EXT_* protocol
  │
  └─► EXPLAIN/LEARN?
        ├─ "what is", "how does", "explain"
        └─► Load relevant SYS_* protocol for reference
```
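
The tree above amounts to prioritized keyword matching. A minimal illustrative sketch (`classify` and `RULES` are not Atomizer functions; matching is naive substring search, first match wins):

```python
# Illustrative keyword classifier for the tree above; list order encodes priority.
RULES = [
    ("CREATE", ["new study", "set up", "create", "optimize this"]),
    ("RUN", ["start", "run", "execute", "begin optimization"]),
    ("CHECK", ["status", "progress", "how many trials"]),
    ("ANALYZE", ["results", "best design", "compare", "pareto"]),
    ("DEBUG", ["error", "failed", "not working", "crashed"]),
    ("DISK", ["disk", "space", "cleanup", "archive"]),
    ("EXPLAIN", ["what is", "how does", "explain"]),
]

def classify(request: str) -> str:
    """Return the first matching branch label, or UNCLASSIFIED."""
    lowered = request.lower()
    for label, keywords in RULES:
        if any(kw in lowered for kw in keywords):
            return label
    return "UNCLASSIFIED"

print(classify("why did trial 12 fail with an error?"))  # DEBUG
```

An UNCLASSIFIED result corresponds to the "if unclear, ask a clarifying question" rule later in this file.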

---

## Protocol Routing Table

| User Intent | Keywords | Protocol | Skill to Load | Privilege |
|-------------|----------|----------|---------------|-----------|
| **Create study (DEFAULT)** | "new", "set up", "create", "optimize", "create a study" | OP_01 | **modules/study-interview-mode.md** | user |
| Create study (manual) | "quick setup", "skip interview", "manual config" | OP_01 | core/study-creation-core.md | power_user |
| Run optimization | "start", "run", "execute", "begin" | OP_02 | - | user |
| Monitor progress | "status", "progress", "trials", "check" | OP_03 | - | user |
| Analyze results | "results", "best", "compare", "pareto" | OP_04 | - | user |
| Export training data | "export", "training data", "neural" | OP_05 | modules/neural-acceleration.md | user |
| Debug issues | "error", "failed", "not working", "help" | OP_06 | - | user |
| **Disk management** | "disk", "space", "cleanup", "archive" | **OP_07** | modules/study-disk-optimization.md | user |
| Understand IMSO | "protocol 10", "IMSO", "adaptive" | SYS_10 | - | user |
| Multi-objective | "pareto", "NSGA", "multi-objective" | SYS_11 | - | user |
| Extractors | "extractor", "displacement", "stress" | SYS_12 | modules/extractors-catalog.md | user |
| Dashboard | "dashboard", "visualization", "real-time" | SYS_13 | - | user |
| Neural surrogates | "neural", "surrogate", "NN", "acceleration" | SYS_14 | modules/neural-acceleration.md | user |
| Add extractor | "create extractor", "new physics" | EXT_01 | - | power_user |
| Add hook | "create hook", "lifecycle", "callback" | EXT_02 | - | power_user |
| Add protocol | "create protocol", "new protocol" | EXT_03 | - | admin |
| Add skill | "create skill", "new skill" | EXT_04 | - | admin |

---

## Role Detection

Determine user's privilege level:

| Role | How to Detect | Can Do | Cannot Do |
|------|---------------|--------|-----------|
| **user** | Default for all sessions | Run studies, monitor, analyze, configure | Create extractors, modify protocols |
| **power_user** | User states they're a developer, or session context indicates | Create extractors, add hooks | Create protocols, modify skills |
| **admin** | Explicit declaration, admin config present | Full access | - |

**Default**: Assume `user` unless explicitly told otherwise.

---

## Context Loading Rules

After classifying the task, load context in this order:

### 1. Always Loaded (via CLAUDE.md)
- This file (00_BOOTSTRAP.md)
- Python environment rules
- Code reuse protocol

### 2. Load Per Task Type
See `02_CONTEXT_LOADER.md` for complete loading rules.

**Quick Reference**:
```
CREATE_STUDY     → core/study-creation-core.md (PRIMARY)
                 → SYS_12_EXTRACTOR_LIBRARY.md (extractor reference)
                 → modules/zernike-optimization.md (if telescope/mirror)
                 → modules/neural-acceleration.md (if >50 trials)

RUN_OPTIMIZATION → OP_02_RUN_OPTIMIZATION.md
                 → SYS_15_METHOD_SELECTOR.md (method recommendation)
                 → SYS_14_NEURAL_ACCELERATION.md (if neural/turbo)

DEBUG            → OP_06_TROUBLESHOOT.md
                 → Relevant SYS_* based on error type
```

---

## Execution Framework

For ANY task, follow this pattern:

```
1. ANNOUNCE → State what you're about to do
2. VALIDATE → Check prerequisites are met
3. EXECUTE  → Perform the action
4. VERIFY   → Confirm success
5. REPORT   → Summarize what was done
6. SUGGEST  → Offer logical next steps
```

See `PROTOCOL_EXECUTION.md` for detailed execution rules.

---

## Emergency Quick Paths

### "I just want to run an optimization"
1. Do you have a `.prt` and `.sim` file? → Yes: OP_01 → OP_02
2. Getting errors? → OP_06
3. Want to see progress? → OP_03

### "Something broke"
1. Read the error message
2. Load OP_06_TROUBLESHOOT.md
3. Follow diagnostic flowchart

### "What did my optimization find?"
1. Load OP_04_ANALYZE_RESULTS.md
2. Query the study database
3. Generate report

---

## Protocol Directory Map

```
docs/protocols/
├── operations/          # Layer 2: How-to guides
│   ├── OP_01_CREATE_STUDY.md
│   ├── OP_02_RUN_OPTIMIZATION.md
│   ├── OP_03_MONITOR_PROGRESS.md
│   ├── OP_04_ANALYZE_RESULTS.md
│   ├── OP_05_EXPORT_TRAINING_DATA.md
│   └── OP_06_TROUBLESHOOT.md
│
├── system/              # Layer 3: Core specifications
│   ├── SYS_10_IMSO.md
│   ├── SYS_11_MULTI_OBJECTIVE.md
│   ├── SYS_12_EXTRACTOR_LIBRARY.md
│   ├── SYS_13_DASHBOARD_TRACKING.md
│   └── SYS_14_NEURAL_ACCELERATION.md
│
└── extensions/          # Layer 4: Extensibility guides
    ├── EXT_01_CREATE_EXTRACTOR.md
    ├── EXT_02_CREATE_HOOK.md
    ├── EXT_03_CREATE_PROTOCOL.md
    ├── EXT_04_CREATE_SKILL.md
    └── templates/
```

---

## Key Constraints (Always Apply)

1. **Python Environment**: Always use `conda activate atomizer`
2. **Never modify master files**: Copy NX files to study working directory first
3. **Code reuse**: Check `optimization_engine/extractors/` before writing new extraction code
4. **Validation**: Always validate config before running optimization
5. **Documentation**: Every study needs README.md and STUDY_REPORT.md

---

## Next Steps After Bootstrap

1. If you know the task type → Go to relevant OP_* or SYS_* protocol
2. If unclear → Ask user clarifying question
3. If complex task → Read `01_CHEATSHEET.md` for quick reference
4. If need detailed loading rules → Read `02_CONTEXT_LOADER.md`

---

## Session Closing Checklist

Before ending a session, complete:

```
┌─────────────────────────────────────────────────────────────────────┐
│ SESSION CLOSING                                                     │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│ 1. VERIFY WORK IS SAVED                                             │
│    □ All files committed or saved                                   │
│    □ Study configs are valid                                        │
│    □ Any running processes noted                                    │
│                                                                     │
│ 2. RECORD LEARNINGS TO LAC                                          │
│    □ Any failures and their solutions → failure.jsonl               │
│    □ Success patterns discovered → success_pattern.jsonl            │
│    □ User preferences noted → user_preference.jsonl                 │
│    □ Protocol improvements → suggested_updates.jsonl                │
│                                                                     │
│ 3. RECORD OPTIMIZATION OUTCOMES                                     │
│    □ If optimization completed, record to optimization_memory/      │
│    □ Include: method, geometry_type, converged, convergence_trial   │
│                                                                     │
│ 4. SUMMARIZE FOR USER                                               │
│    □ What was accomplished                                          │
│    □ Current state of any studies                                   │
│    □ Recommended next steps                                         │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

### Session Summary Template

```markdown
# Session Summary

**Date**: {YYYY-MM-DD}
**Study Context**: {study_name or "General"}

## Accomplished
- {task 1}
- {task 2}

## Current State
- Study: {status}
- Trials: {N completed}
- Next action needed: {action}

## Learnings Recorded
- {insight 1}

## Recommended Next Steps
1. {step 1}
2. {step 2}
```
@@ -497,7 +497,7 @@ docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md (lines 71-851 - MANY reference
docs/protocols/system/SYS_13_DASHBOARD_TRACKING.md (lines 60, 85, 315)
docs/protocols/system/SYS_14_NEURAL_ACCELERATION.md (lines 231-1080)
docs/protocols/system/SYS_15_METHOD_SELECTOR.md (lines 42-422)
docs/protocols/system/SYS_16_STUDY_INSIGHTS.md (lines 62-498)
docs/protocols/system/SYS_17_STUDY_INSIGHTS.md (lines 62-498)
docs/protocols/extensions/EXT_01_CREATE_EXTRACTOR.md (lines 35-287)
docs/protocols/extensions/EXT_02_CREATE_HOOK.md (lines 45-357)
docs/protocols/extensions/EXT_03_CREATE_INSIGHT.md
@@ -283,7 +283,7 @@ python -m optimization_engine.insights recommend studies/my_study

## Related Documentation

- **Protocol Specification**: `docs/protocols/system/SYS_16_STUDY_INSIGHTS.md`
- **Protocol Specification**: `docs/protocols/system/SYS_17_STUDY_INSIGHTS.md`
- **OPD Method Physics**: `docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md`
- **Zernike Integration**: `docs/ZERNIKE_INTEGRATION.md`
- **Extractor Catalog**: `.claude/skills/modules/extractors-catalog.md`
@@ -93,6 +93,7 @@ The Protocol Operating System (POS) provides layered documentation:

| Export neural data | OP_05 | `docs/protocols/operations/OP_05_EXPORT_TRAINING_DATA.md` |
| Debug issues | OP_06 | `docs/protocols/operations/OP_06_TROUBLESHOOT.md` |
| **Free disk space** | OP_07 | `docs/protocols/operations/OP_07_DISK_OPTIMIZATION.md` |
| **Generate report** | OP_08 | `docs/protocols/operations/OP_08_GENERATE_REPORT.md` |

## System Protocols (Technical Specs)

@@ -104,6 +105,9 @@ The Protocol Operating System (POS) provides layered documentation:

| 13 | Dashboard | "dashboard", "real-time", monitoring |
| 14 | Neural Acceleration | >50 trials, "neural", "surrogate" |
| 15 | Method Selector | "which method", "recommend", "turbo vs" |
| 16 | Self-Aware Turbo | "SAT", "turbo v3", high-efficiency optimization |
| 17 | Study Insights | "insight", "visualization", physics analysis |
| 18 | Context Engineering | "ACE", "playbook", session context |

**Full specs**: `docs/protocols/system/SYS_{N}_{NAME}.md`

@@ -156,8 +160,8 @@ git remote | xargs -L1 git push --all

Atomizer/
├── .claude/skills/          # LLM skills (Bootstrap + Core + Modules)
├── docs/protocols/          # Protocol Operating System
│   ├── operations/          # OP_01 - OP_07
│   ├── system/              # SYS_10 - SYS_15
│   ├── operations/          # OP_01 - OP_08
│   ├── system/              # SYS_10 - SYS_18
│   └── extensions/          # EXT_01 - EXT_04
├── optimization_engine/     # Core Python modules (v2.0)
│   ├── core/                # Optimization runners, method_selector, gradient_optimizer

@@ -39,7 +39,7 @@ This folder contains detailed physics and domain-specific documentation for Atom

| `.claude/skills/modules/extractors-catalog.md` | Quick extractor lookup |
| `.claude/skills/modules/insights-catalog.md` | Quick insight lookup |
| `docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md` | Extractor specifications |
| `docs/protocols/system/SYS_16_STUDY_INSIGHTS.md` | Insight specifications |
| `docs/protocols/system/SYS_17_STUDY_INSIGHTS.md` | Insight specifications |

---

@@ -315,7 +315,7 @@ studies/

### Protocol Documentation

- `docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md` - Extractor specifications (E8-E10: Standard Zernike, E20-E21: OPD method)
- `docs/protocols/system/SYS_16_STUDY_INSIGHTS.md` - Insight specifications (`zernike_wfe`, `zernike_opd_comparison`)
- `docs/protocols/system/SYS_17_STUDY_INSIGHTS.md` - Insight specifications (`zernike_wfe`, `zernike_opd_comparison`)

### Skill Modules (Quick Lookup)

@@ -558,7 +558,7 @@ The `concave` parameter in the code handles this sign flip.

| `.claude/skills/modules/extractors-catalog.md` | Quick extractor lookup |
| `.claude/skills/modules/insights-catalog.md` | Quick insight lookup |
| `docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md` | Extractor specifications (E8-E10, E20-E21) |
| `docs/protocols/system/SYS_16_STUDY_INSIGHTS.md` | Insight specifications |
| `docs/protocols/system/SYS_17_STUDY_INSIGHTS.md` | Insight specifications |

---

@@ -944,5 +944,5 @@ Error response format:

## See Also

- [Context Engineering Report](../CONTEXT_ENGINEERING_REPORT.md) - Full implementation report
- [SYS_17 Protocol](../protocols/system/SYS_17_CONTEXT_ENGINEERING.md) - System protocol
- [SYS_18 Protocol](../protocols/system/SYS_18_CONTEXT_ENGINEERING.md) - System protocol
- [Cheatsheet](../../.claude/skills/01_CHEATSHEET.md) - Quick reference

549  docs/plans/RESTRUCTURING_PLAN.md  Normal file
@@ -0,0 +1,549 @@
# Atomizer Massive Restructuring Plan

**Created:** 2026-01-06
**Purpose:** Comprehensive TODO list for Ralph mode execution with skip permissions
**Status:** IN PROGRESS (Phase 2 partially complete)

---

## Progress Summary

**Completed:**
- [x] Phase 1: Safe Cleanup (FULLY DONE)
- [x] Phase 2.1-2.7: Protocol renaming, Bootstrap V3.0 promotion, routing updates

**In Progress:**
- Phase 2.8-2.10: Cheatsheet updates and commit

**Remaining:**
- Phases 3-6 and final push

---

## RALPH MODE TODO LIST

### PHASE 2 (Remaining - Documentation)

#### 2.8 Add OP_08 to 01_CHEATSHEET.md
```
File: .claude/skills/01_CHEATSHEET.md
Action: Add row to "I want to..." table after OP_07 entry (around line 33)

Add this line:
| **Generate report** | **OP_08** | `python -m optimization_engine.reporting.report_generator <study>` |

Also add a section around line 280:

## Report Generation (OP_08)

### Quick Commands
| Task | Command |
|------|---------|
| Generate markdown report | `python -m optimization_engine.reporting.markdown_report <study>` |
| Generate HTML visualization | `python tools/zernike_html_generator.py <study>` |

**Full details**: `docs/protocols/operations/OP_08_GENERATE_REPORT.md`
```

#### 2.9 SKIP (Already verified V3.0 Bootstrap has no circular refs)

#### 2.10 Commit Phase 2 Changes
```bash
cd c:\Users\antoi\Atomizer
git add -A
git commit -m "$(cat <<'EOF'
docs: Consolidate documentation and fix protocol numbering

- Rename SYS_16_STUDY_INSIGHTS -> SYS_17_STUDY_INSIGHTS
- Rename SYS_17_CONTEXT_ENGINEERING -> SYS_18_CONTEXT_ENGINEERING
- Promote Bootstrap V3.0 (Context Engineering) as default
- Create knowledge_base/playbook.json for ACE framework
- Add OP_08 (Generate Report) to all routing tables
- Add SYS_16-18 to all protocol tables
- Update docs/protocols/README.md version 1.1
- Update CLAUDE.md with new protocols

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
EOF
)"
```

---

### PHASE 3: Code Organization

#### 3.1 Move ensemble_surrogate.py
```bash
cd c:\Users\antoi\Atomizer
git mv optimization_engine/surrogates/ensemble_surrogate.py optimization_engine/processors/surrogates/ensemble_surrogate.py
```

#### 3.2 Update processors/surrogates/__init__.py
```
File: optimization_engine/processors/surrogates/__init__.py
Action: Add to __getattr__ function and __all__ list:

In __getattr__, add these elif blocks:
    elif name == 'EnsembleSurrogate':
        from .ensemble_surrogate import EnsembleSurrogate
        return EnsembleSurrogate
    elif name == 'OODDetector':
        from .ensemble_surrogate import OODDetector
        return OODDetector
    elif name == 'create_and_train_ensemble':
        from .ensemble_surrogate import create_and_train_ensemble
        return create_and_train_ensemble

In __all__, add:
    'EnsembleSurrogate',
    'OODDetector',
    'create_and_train_ensemble',
```
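
The `__getattr__` blocks above rely on PEP 562 module-level `__getattr__`, which defers the heavy submodule import until the attribute is first accessed. A self-contained demonstration of the mechanism (the module name `surrogates_demo` and the returned value are stand-ins, not the real surrogates package):

```python
import sys
import types

# Build a throwaway module that lazily resolves attributes, as in 3.2 above.
demo = types.ModuleType("surrogates_demo")

def _lazy_getattr(name):
    # Stands in for "from .ensemble_surrogate import EnsembleSurrogate".
    if name == "EnsembleSurrogate":
        return "lazily-loaded EnsembleSurrogate"
    raise AttributeError(f"module 'surrogates_demo' has no attribute {name!r}")

demo.__getattr__ = _lazy_getattr  # PEP 562: called when normal lookup fails
sys.modules["surrogates_demo"] = demo

import surrogates_demo
print(surrogates_demo.EnsembleSurrogate)  # lazily-loaded EnsembleSurrogate
```

The payoff is that `import optimization_engine.processors.surrogates` stays cheap; the ensemble code only loads when someone actually touches `EnsembleSurrogate`.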

#### 3.3 Add deprecation shim to surrogates/__init__.py
```
File: optimization_engine/surrogates/__init__.py
Replace contents with:

"""
DEPRECATED: This module has been moved to optimization_engine.processors.surrogates

Please update your imports:
    from optimization_engine.processors.surrogates import EnsembleSurrogate

This module will be removed in a future version.
"""

import warnings

warnings.warn(
    "optimization_engine.surrogates is deprecated. "
    "Use optimization_engine.processors.surrogates instead.",
    DeprecationWarning,
    stacklevel=2
)

# Redirect imports
from optimization_engine.processors.surrogates import (
    EnsembleSurrogate,
    OODDetector,
    create_and_train_ensemble
)

__all__ = ['EnsembleSurrogate', 'OODDetector', 'create_and_train_ensemble']
```

#### 3.4 Check future/ imports
```bash
cd c:\Users\antoi\Atomizer
grep -r "from optimization_engine.future" --include="*.py" | grep -v "future/" | head -20
```
Analyze the output and decide which modules need to move out of future/.

#### 3.5 Move workflow_decomposer.py (if imported by production code)
If grep shows imports from config/ or core/:
```bash
git mv optimization_engine/future/workflow_decomposer.py optimization_engine/config/workflow_decomposer.py
# Update imports in capability_matcher.py and any other files
```

#### 3.6 Create tests/ directory structure
```bash
cd c:\Users\antoi\Atomizer
mkdir -p tests/unit/gnn tests/unit/extractors tests/integration tests/fixtures/sample_data
```

#### 3.7 Move test files from archive/test_scripts/
```bash
cd c:\Users\antoi\Atomizer
git mv archive/test_scripts/test_neural_surrogate.py tests/unit/
git mv archive/test_scripts/test_nn_surrogate.py tests/unit/
git mv archive/test_scripts/test_parametric_surrogate.py tests/unit/
git mv archive/test_scripts/test_adaptive_characterization.py tests/unit/
git mv archive/test_scripts/test_training_data_export.py tests/unit/
git mv optimization_engine/gnn/test_*.py tests/unit/gnn/ 2>/dev/null || true
git mv optimization_engine/extractors/test_phase3_extractors.py tests/unit/extractors/ 2>/dev/null || true
```

#### 3.8 Create tests/conftest.py
```
File: tests/conftest.py
Content:

"""
Pytest configuration and shared fixtures for Atomizer tests.
"""

import pytest
import sys
from pathlib import Path

# Add project root to path
sys.path.insert(0, str(Path(__file__).parent.parent))

@pytest.fixture
def sample_study_dir(tmp_path):
    """Create a temporary study directory structure."""
    study = tmp_path / "test_study"
    (study / "1_setup").mkdir(parents=True)
    (study / "2_iterations").mkdir()
    (study / "3_results").mkdir()
    return study

@pytest.fixture
def sample_config():
    """Sample optimization config for testing."""
    return {
        "study_name": "test_study",
        "design_variables": [
            {"name": "param1", "lower": 0, "upper": 10, "type": "continuous"}
        ],
        "objectives": [
            {"name": "minimize_mass", "direction": "minimize"}
        ]
    }
```
|
||||
|
||||
#### 3.9 Rename bracket_displacement_maximizing/results to 3_results

```bash
cd /c/Users/antoi/Atomizer
# Check if results/ exists first
if [ -d "studies/bracket_displacement_maximizing/results" ]; then
    git mv studies/bracket_displacement_maximizing/results studies/bracket_displacement_maximizing/3_results
fi
```

#### 3.10 Rename Drone_Gimbal/2_results to 3_results

```bash
cd /c/Users/antoi/Atomizer
# Check if 2_results/ exists first
if [ -d "studies/Drone_Gimbal/2_results" ]; then
    git mv studies/Drone_Gimbal/2_results studies/Drone_Gimbal/3_results
fi
```

#### 3.11 Commit Phase 3 Changes

```bash
cd /c/Users/antoi/Atomizer
git add -A
git commit -m "$(cat <<'EOF'
refactor: Reorganize code structure and create tests directory

- Consolidate surrogates module to processors/surrogates/
- Add deprecation shim for old import path
- Create tests/ directory with pytest structure
- Move test files from archive/test_scripts/
- Standardize study folder naming (3_results/)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
EOF
)"
```

---

### PHASE 4: Dependency Management

#### 4.1-4.2 Add neural and gnn optional deps to pyproject.toml

```
File: pyproject.toml
After the [project.optional-dependencies] section, add:

neural = [
    "torch>=2.0.0",
    "torch-geometric>=2.3.0",
    "tensorboard>=2.13.0",
]

gnn = [
    "torch>=2.0.0",
    "torch-geometric>=2.3.0",
]

all = ["atomizer[neural,gnn,dev,dashboard]"]
```

#### 4.3 Remove mcp optional deps

```
File: pyproject.toml
Delete this section:

mcp = [
    "mcp>=0.1.0",
]
```

#### 4.4 Remove mcp_server from packages.find

```
File: pyproject.toml
Change:
include = ["mcp_server*", "optimization_engine*", "nx_journals*"]
To:
include = ["optimization_engine*", "nx_journals*"]
```

#### 4.5 Commit Phase 4 Changes

```bash
cd /c/Users/antoi/Atomizer
git add pyproject.toml
git commit -m "$(cat <<'EOF'
build: Add optional dependency groups and clean up pyproject.toml

- Add neural optional group (torch, torch-geometric, tensorboard)
- Add gnn optional group (torch, torch-geometric)
- Add all optional group for convenience
- Remove mcp optional group (not implemented)
- Remove mcp_server from packages.find (not implemented)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
EOF
)"
```

---

### PHASE 5: Study Organization

#### 5.1 Create archive directory

```bash
cd /c/Users/antoi/Atomizer
mkdir -p studies/M1_Mirror/_archive
```

#### 5.2 Move V2-V8 cost_reduction studies

```bash
cd /c/Users/antoi/Atomizer/studies/M1_Mirror
# Move cost_reduction V2-V8 (V1 doesn't exist as base is just "cost_reduction")
for v in V2 V3 V4 V5 V6 V7 V8; do
    if [ -d "m1_mirror_cost_reduction_$v" ]; then
        mv "m1_mirror_cost_reduction_$v" _archive/
    fi
done
```

#### 5.3 Move V1-V8 flat_back studies

```bash
cd /c/Users/antoi/Atomizer/studies/M1_Mirror
# Move flat_back V1-V8 (note: V2 may not exist)
for v in V1 V3 V4 V5 V6 V7 V8; do
    if [ -d "m1_mirror_cost_reduction_flat_back_$v" ]; then
        mv "m1_mirror_cost_reduction_flat_back_$v" _archive/
    fi
done
```

#### 5.4 Create MANIFEST.md

```
File: studies/M1_Mirror/_archive/MANIFEST.md
Content:

# M1 Mirror Archived Studies

**Archived:** 2026-01-06
**Reason:** Repository cleanup - keeping only V9+ studies active

## Archived Studies

### Cost Reduction Series
| Study | Trials | Best WS | Notes |
|-------|--------|---------|-------|
| V2 | TBD | TBD | Early exploration |
| V3 | TBD | TBD | - |
| V4 | TBD | TBD | - |
| V5 | TBD | TBD | - |
| V6 | TBD | TBD | - |
| V7 | TBD | TBD | - |
| V8 | TBD | TBD | - |

### Cost Reduction Flat Back Series
| Study | Trials | Best WS | Notes |
|-------|--------|---------|-------|
| V1 | TBD | TBD | Initial flat back design |
| V3 | TBD | TBD | V2 was skipped |
| V4 | TBD | TBD | - |
| V5 | TBD | TBD | - |
| V6 | TBD | TBD | - |
| V7 | TBD | TBD | - |
| V8 | TBD | TBD | - |

## Restoration Instructions

To restore a study:
1. Move from _archive/ to parent directory
2. Verify database integrity: `sqlite3 3_results/study.db ".tables"`
3. Check optimization_config.json exists
```

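The integrity check in step 2 can also be scripted; a sketch using the stdlib `sqlite3` module, equivalent to the `.tables` command (the helper name is illustrative):

```python
import sqlite3
from pathlib import Path

def list_tables(db_path: Path) -> list[str]:
    """Return table names, like `sqlite3 study.db ".tables"`."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    finally:
        con.close()
    return [name for (name,) in rows]
```

An empty list (or an `sqlite3.DatabaseError`) indicates a corrupt or incomplete restore.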
#### 5.5 Commit Phase 5 Changes

```bash
cd /c/Users/antoi/Atomizer
git add -A
git commit -m "$(cat <<'EOF'
chore: Archive old M1_Mirror studies (V1-V8)

- Create studies/M1_Mirror/_archive/ directory
- Move cost_reduction V2-V8 to archive
- Move flat_back V1-V8 to archive
- Create MANIFEST.md documenting archived studies
- Keep V9+ studies active for reference

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
EOF
)"
```

---

### PHASE 6: Documentation Polish

#### 6.1 Update README.md with LLM section

```
File: README.md
Add this section after the main description:

## For AI Assistants

Atomizer is designed for LLM-first interaction. Key resources:

- **[CLAUDE.md](CLAUDE.md)** - System instructions for Claude Code
- **[.claude/skills/](.claude/skills/)** - LLM skill modules
- **[docs/protocols/](docs/protocols/)** - Protocol Operating System

### Knowledge Base (LAC)

The Learning Atomizer Core (`knowledge_base/lac/`) accumulates optimization knowledge:
- `session_insights/` - Learnings from past sessions
- `optimization_memory/` - Optimization outcomes by geometry type
- `playbook.json` - ACE framework knowledge store

For detailed AI interaction guidance, see CLAUDE.md.
```

#### 6.2-6.4 Create optimization_memory JSONL files

```bash
cd /c/Users/antoi/Atomizer
mkdir -p knowledge_base/lac/optimization_memory
```

```
File: knowledge_base/lac/optimization_memory/bracket.jsonl
Content (one JSON per line):
{"geometry_type": "bracket", "study_name": "example", "method": "TPE", "objectives": ["mass"], "trials": 0, "converged": false, "notes": "Schema file - replace with real data"}
```

```
File: knowledge_base/lac/optimization_memory/beam.jsonl
Content:
{"geometry_type": "beam", "study_name": "example", "method": "TPE", "objectives": ["mass"], "trials": 0, "converged": false, "notes": "Schema file - replace with real data"}
```

```
File: knowledge_base/lac/optimization_memory/mirror.jsonl
Content:
{"geometry_type": "mirror", "study_name": "m1_mirror_adaptive_V14", "method": "IMSO", "objectives": ["wfe_40_20", "mass_kg"], "trials": 100, "converged": true, "notes": "SAT v3 achieved WS=205.58"}
```

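The JSONL layout above (one record per line) can be read and extended with a few lines of stdlib code. The helper names here are illustrative, not existing Atomizer APIs:

```python
import json
from pathlib import Path

def load_memory(path: Path) -> list[dict]:
    """Parse one JSON object per non-blank line."""
    return [json.loads(line)
            for line in path.read_text(encoding="utf-8").splitlines()
            if line.strip()]

def append_record(path: Path, record: dict) -> None:
    """Append a single outcome record as a new line."""
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Append-only JSONL keeps each write cheap and avoids rewriting the whole file per optimization run.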
#### 6.5 Move implementation plans to docs/plans

```bash
cd /c/Users/antoi/Atomizer
git mv .claude/skills/modules/DYNAMIC_RESPONSE_IMPLEMENTATION_PLAN.md docs/plans/
git mv .claude/skills/modules/OPTIMIZATION_ENGINE_MIGRATION_PLAN.md docs/plans/
git mv .claude/skills/modules/atomizer_fast_solver_technologies.md docs/plans/
```

#### 6.6 Final consistency verification

```bash
cd /c/Users/antoi/Atomizer
# Verify protocol files exist
ls docs/protocols/operations/OP_0*.md
ls docs/protocols/system/SYS_1*.md

# Verify imports work
python -c "import optimization_engine; print('OK')"

# Verify no broken references
grep -r "SYS_16_STUDY" . --include="*.md" | head -5   # Should be empty
grep -r "SYS_17_CONTEXT" . --include="*.md" | head -5 # Should be empty

echo "Verification complete"
```

#### 6.7 Commit Phase 6 Changes

```bash
cd /c/Users/antoi/Atomizer
git add -A
git commit -m "$(cat <<'EOF'
docs: Final documentation polish and consistency fixes

- Update README.md with LLM assistant section
- Create optimization_memory JSONL structure
- Move implementation plans from skills/modules to docs/plans
- Verify all protocol references are consistent

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
EOF
)"
```

---

### FINAL: Push to Both Remotes

```bash
cd /c/Users/antoi/Atomizer
git push origin main
git push github main
```

---

## Quick Reference

### Files Modified in This Restructuring

**Documentation (Phase 2):**
- `docs/protocols/README.md` - Updated protocol listings
- `docs/protocols/system/SYS_17_STUDY_INSIGHTS.md` - Renamed from SYS_16
- `docs/protocols/system/SYS_18_CONTEXT_ENGINEERING.md` - Renamed from SYS_17
- `CLAUDE.md` - Updated routing tables
- `.claude/skills/00_BOOTSTRAP.md` - Replaced with V3.0
- `.claude/skills/01_CHEATSHEET.md` - Added OP_08
- `knowledge_base/playbook.json` - Created

**Code (Phase 3):**
- `optimization_engine/processors/surrogates/__init__.py` - Added exports
- `optimization_engine/surrogates/__init__.py` - Deprecation shim
- `tests/conftest.py` - Created

**Dependencies (Phase 4):**
- `pyproject.toml` - Updated optional groups

**Studies (Phase 5):**
- `studies/M1_Mirror/_archive/` - Created with V1-V8 studies

**Final Polish (Phase 6):**
- `README.md` - Added LLM section
- `knowledge_base/lac/optimization_memory/` - Created structure
- `docs/plans/` - Moved implementation plans

---

## Success Criteria Checklist

- [ ] All imports work: `python -c "import optimization_engine"`
- [ ] Dashboard starts: `python launch_dashboard.py`
- [ ] No SYS_16 duplication (only SELF_AWARE_TURBO)
- [ ] Bootstrap V3.0 is active version
- [ ] OP_08 discoverable in all routing tables
- [ ] Studies use consistent 3_results/ naming
- [ ] Tests directory exists with conftest.py
- [ ] All changes pushed to both remotes

docs/protocols/README.md

```diff
@@ -1,7 +1,7 @@
 # Atomizer Protocol Operating System (POS)
 
-**Version**: 1.0
-**Last Updated**: 2025-12-05
+**Version**: 1.1
+**Last Updated**: 2026-01-06
 
 ---
```

```diff
@@ -22,13 +22,19 @@ protocols/
 │   ├── OP_03_MONITOR_PROGRESS.md
 │   ├── OP_04_ANALYZE_RESULTS.md
 │   ├── OP_05_EXPORT_TRAINING_DATA.md
-│   └── OP_06_TROUBLESHOOT.md
+│   ├── OP_06_TROUBLESHOOT.md
+│   ├── OP_07_DISK_OPTIMIZATION.md
+│   └── OP_08_GENERATE_REPORT.md
 ├── system/                  # Layer 3: Core specifications
 │   ├── SYS_10_IMSO.md
 │   ├── SYS_11_MULTI_OBJECTIVE.md
 │   ├── SYS_12_EXTRACTOR_LIBRARY.md
 │   ├── SYS_13_DASHBOARD_TRACKING.md
-│   └── SYS_14_NEURAL_ACCELERATION.md
+│   ├── SYS_14_NEURAL_ACCELERATION.md
+│   ├── SYS_15_METHOD_SELECTOR.md
+│   ├── SYS_16_SELF_AWARE_TURBO.md
+│   ├── SYS_17_STUDY_INSIGHTS.md
+│   └── SYS_18_CONTEXT_ENGINEERING.md
 └── extensions/              # Layer 4: Extensibility guides
     ├── EXT_01_CREATE_EXTRACTOR.md
     ├── EXT_02_CREATE_HOOK.md
```

```diff
@@ -56,6 +62,8 @@ Day-to-day how-to guides:
 - **OP_04**: Analyze results
 - **OP_05**: Export training data
 - **OP_06**: Troubleshoot issues
+- **OP_07**: Disk optimization (free space)
+- **OP_08**: Generate study report
 
 ### Layer 3: System (`system/`)
 Core technical specifications:
```

```diff
@@ -65,6 +73,9 @@ Core technical specifications:
 - **SYS_13**: Real-Time Dashboard Tracking
 - **SYS_14**: Neural Network Acceleration
 - **SYS_15**: Method Selector
+- **SYS_16**: Self-Aware Turbo (SAT) Method
+- **SYS_17**: Study Insights (Physics Visualization)
+- **SYS_18**: Context Engineering (ACE Framework)
 
 ### Layer 4: Extensions (`extensions/`)
 Guides for extending Atomizer:
```

```diff
@@ -130,6 +141,8 @@ LOAD_WITH: [{dependencies}]
 | Analyze results | [OP_04](operations/OP_04_ANALYZE_RESULTS.md) |
 | Export neural data | [OP_05](operations/OP_05_EXPORT_TRAINING_DATA.md) |
 | Fix errors | [OP_06](operations/OP_06_TROUBLESHOOT.md) |
+| Free disk space | [OP_07](operations/OP_07_DISK_OPTIMIZATION.md) |
+| Generate report | [OP_08](operations/OP_08_GENERATE_REPORT.md) |
 | Add extractor | [EXT_01](extensions/EXT_01_CREATE_EXTRACTOR.md) |
 
 ### By Protocol Number
```

```diff
@@ -142,6 +155,9 @@ LOAD_WITH: [{dependencies}]
 | 13 | Dashboard | [System](system/SYS_13_DASHBOARD_TRACKING.md) |
 | 14 | Neural | [System](system/SYS_14_NEURAL_ACCELERATION.md) |
 | 15 | Method Selector | [System](system/SYS_15_METHOD_SELECTOR.md) |
+| 16 | Self-Aware Turbo | [System](system/SYS_16_SELF_AWARE_TURBO.md) |
+| 17 | Study Insights | [System](system/SYS_17_STUDY_INSIGHTS.md) |
+| 18 | Context Engineering | [System](system/SYS_18_CONTEXT_ENGINEERING.md) |
 
 ---
 
```

```diff
@@ -160,3 +176,4 @@ LOAD_WITH: [{dependencies}]
 | Version | Date | Changes |
 |---------|------|---------|
 | 1.0 | 2025-12-05 | Initial Protocol Operating System |
+| 1.1 | 2026-01-06 | Added OP_07, OP_08; SYS_16, SYS_17, SYS_18; Fixed SYS_16 duplication |
```

docs/protocols/operations/OP_08_GENERATE_REPORT.md (new file, 276 lines)

# OP_08: Generate Study Report

<!--
PROTOCOL: Automated Study Report Generation
LAYER: Operations
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2026-01-06
PRIVILEGE: user
LOAD_WITH: []
-->

## Overview

This protocol covers automated generation of comprehensive study reports via the Dashboard API or CLI. Reports include executive summaries, optimization metrics, best solutions, and engineering recommendations.

---

## When to Use

| Trigger | Action |
|---------|--------|
| "generate report" | Follow this protocol |
| Dashboard "Report" button | API endpoint called |
| Optimization complete | Auto-generate option |
| CLI `atomizer report <study>` | Direct generation |

---

## Quick Reference

**API Endpoint**: `POST /api/optimization/studies/{study_id}/report/generate`

**Output**: `STUDY_REPORT.md` in study root directory

**Formats Supported**: Markdown (default), JSON (data export)

---

## Generation Methods

### 1. Via Dashboard

Click the "Generate Report" button in the study control panel. The report will be generated and displayed in the Reports tab.

### 2. Via API

```bash
# Generate report
curl -X POST http://localhost:8003/api/optimization/studies/my_study/report/generate

# Response
{
  "success": true,
  "content": "# Study Report: ...",
  "path": "/path/to/STUDY_REPORT.md",
  "generated_at": "2026-01-06T12:00:00"
}
```

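From Python, the same endpoint can be reached with the stdlib `urllib`; a sketch (the helper name is illustrative, and actually sending the request requires a dashboard running on port 8003):

```python
import urllib.request

def build_report_request(base_url: str, study_id: str) -> urllib.request.Request:
    """Build the POST documented above; pass to urllib.request.urlopen() to send."""
    url = f"{base_url}/api/optimization/studies/{study_id}/report/generate"
    return urllib.request.Request(url, method="POST")

req = build_report_request("http://localhost:8003", "my_study")
```

Parse the JSON response body with `json.load()` to pick out `content` and `path`.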
### 3. Via CLI

```bash
# Using Claude Code
"Generate a report for the bracket_optimization study"

# Direct Python
python -m optimization_engine.reporting.markdown_report studies/bracket_optimization
```

---

## Report Sections

### Executive Summary

Generated automatically from trial data:
- Total trials completed
- Best objective value achieved
- Improvement percentage from initial design
- Key findings

### Results Table

| Metric | Initial | Final | Change |
|--------|---------|-------|--------|
| Objective 1 | X | Y | Z% |
| Objective 2 | X | Y | Z% |

### Best Solution

- Trial number
- All design variable values
- All objective values
- Constraint satisfaction status
- User attributes (source, validation status)

### Design Variables Summary

| Variable | Min | Max | Best Value | Sensitivity |
|----------|-----|-----|------------|-------------|
| var_1 | 0.0 | 10.0 | 5.23 | High |
| var_2 | 0.0 | 20.0 | 12.87 | Medium |

### Convergence Analysis

- Trials to 50% improvement
- Trials to 90% improvement
- Convergence rate assessment
- Phase breakdown (exploration, exploitation, refinement)

### Recommendations

Auto-generated based on results:
- Further optimization suggestions
- Sensitivity observations
- Next steps for validation

---

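The trials-to-X%-improvement metrics can be computed from the running best value per trial; a minimal sketch for a minimization objective (the function name is illustrative, not an existing Atomizer API):

```python
def trials_to_fraction(best_so_far: list[float], fraction: float) -> int:
    """First trial (1-based) at which `fraction` of the total improvement is reached.

    `best_so_far[i]` is the running minimum objective after trial i+1.
    """
    initial, final = best_so_far[0], best_so_far[-1]
    # Target value lying `fraction` of the way from initial to final
    target = initial - fraction * (initial - final)
    for trial, value in enumerate(best_so_far, start=1):
        if value <= target:
            return trial
    return len(best_so_far)
```

For running bests `[10, 9, 9, 6, 5, 5]`, 90% of the total improvement is reached at trial 5 and 50% at trial 4.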
## Backend Implementation

**File**: `atomizer-dashboard/backend/api/routes/optimization.py`

```python
@router.post("/studies/{study_id}/report/generate")
async def generate_report(study_id: str, format: str = "markdown"):
    """
    Generate comprehensive study report.

    Args:
        study_id: Study identifier
        format: Output format (markdown, json)

    Returns:
        Generated report content and file path
    """
    # (study_dir and db are resolved from study_id; helpers elided in this excerpt)
    # Load configuration
    config = load_config(study_dir)

    # Query database for all trials
    trials = get_all_completed_trials(db)
    best_trial = get_best_trial(db)

    # Calculate metrics
    stats = calculate_statistics(trials)

    # Generate markdown
    report = generate_markdown_report(study_id, config, trials, best_trial, stats)

    # Save to file
    report_path = study_dir / "STUDY_REPORT.md"
    report_path.write_text(report)

    return {
        "success": True,
        "content": report,
        "path": str(report_path),
        "generated_at": datetime.now().isoformat()
    }
```

---

## Report Template

The generated report follows this structure:

```markdown
# {Study Name} - Optimization Report

**Generated:** {timestamp}
**Status:** {Completed/In Progress}

---

## Executive Summary

This optimization study completed **{n_trials} trials** and achieved a
**{improvement}%** improvement in the primary objective.

| Metric | Value |
|--------|-------|
| Total Trials | {n} |
| Best Value | {best} |
| Initial Value | {initial} |
| Improvement | {pct}% |

---

## Objectives

| Name | Direction | Weight | Best Value |
|------|-----------|--------|------------|
| {obj_name} | minimize | 1.0 | {value} |

---

## Design Variables

| Name | Min | Max | Best Value |
|------|-----|-----|------------|
| {var_name} | {min} | {max} | {best} |

---

## Best Solution

**Trial #{n}** achieved the optimal result.

### Parameters
- var_1: {value}
- var_2: {value}

### Objectives
- objective_1: {value}

### Constraints
- All constraints satisfied: Yes/No

---

## Convergence Analysis

- Initial best: {value} (trial 1)
- Final best: {value} (trial {n})
- 90% improvement reached at trial {n}

---

## Recommendations

1. Validate best solution with high-fidelity FEA
2. Consider sensitivity analysis around optimal design point
3. Check manufacturing feasibility of optimal parameters

---

*Generated by Atomizer Dashboard*
```

---

## Prerequisites

Before generating a report:
- [ ] Study must have at least 1 completed trial
- [ ] study.db must exist in results directory
- [ ] optimization_config.json must be present

---

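These checks can be scripted before calling the endpoint; a sketch (the function name is illustrative, and the paths follow the study layout used elsewhere in this plan):

```python
from pathlib import Path

def report_prerequisites(study_dir: Path) -> list[str]:
    """Return a list of problems; an empty list means a report can be attempted."""
    problems = []
    if not (study_dir / "optimization_config.json").exists():
        problems.append("optimization_config.json missing")
    if not (study_dir / "3_results" / "study.db").exists():
        problems.append("study.db missing in 3_results/")
    return problems
```

The "at least 1 completed trial" check additionally requires querying study.db.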
## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| "No trials found" | Empty database | Run optimization first |
| "Config not found" | Missing config file | Verify study setup |
| "Database locked" | Optimization running | Wait or pause first |
| "Invalid study" | Study path not found | Check study ID |

---

## Cross-References

- **Preceded By**: [OP_04_ANALYZE_RESULTS](./OP_04_ANALYZE_RESULTS.md)
- **Related**: [SYS_13_DASHBOARD_TRACKING](../system/SYS_13_DASHBOARD_TRACKING.md)
- **Triggered By**: Dashboard Report button

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2026-01-06 | Initial release - Dashboard integration |

knowledge_base/playbook.json (new file, 94 lines)

```json
{
  "version": 1,
  "last_updated": "2026-01-06T12:00:00",
  "items": {
    "str-00001": {
      "id": "str-00001",
      "category": "str",
      "content": "Use TPE sampler for single-objective optimization with <4 design variables",
      "helpful_count": 5,
      "harmful_count": 0,
      "created_at": "2026-01-06T12:00:00",
      "last_used": null,
      "source_trials": [],
      "tags": ["optimization", "sampler"]
    },
    "str-00002": {
      "id": "str-00002",
      "category": "str",
      "content": "Use CMA-ES for continuous optimization with 4+ design variables",
      "helpful_count": 3,
      "harmful_count": 0,
      "created_at": "2026-01-06T12:00:00",
      "last_used": null,
      "source_trials": [],
      "tags": ["optimization", "sampler"]
    },
    "mis-00001": {
      "id": "mis-00001",
      "category": "mis",
      "content": "Always close NX process when done to avoid zombie processes consuming resources",
      "helpful_count": 10,
      "harmful_count": 0,
      "created_at": "2026-01-06T12:00:00",
      "last_used": null,
      "source_trials": [],
      "tags": ["nx", "process", "critical"]
    },
    "mis-00002": {
      "id": "mis-00002",
      "category": "mis",
      "content": "Never trust surrogate predictions with confidence < 0.7 for production trials",
      "helpful_count": 5,
      "harmful_count": 0,
      "created_at": "2026-01-06T12:00:00",
      "last_used": null,
      "source_trials": [],
      "tags": ["surrogate", "validation"]
    },
    "cal-00001": {
      "id": "cal-00001",
      "category": "cal",
      "content": "Relative WFE = (WFE_current - WFE_baseline) / WFE_baseline, NOT WFE_baseline / WFE_current",
      "helpful_count": 3,
      "harmful_count": 0,
      "created_at": "2026-01-06T12:00:00",
      "last_used": null,
      "source_trials": [],
      "tags": ["zernike", "calculation", "critical"]
    },
    "tool-00001": {
      "id": "tool-00001",
      "category": "tool",
      "content": "Use extract_zernike_figure for surface figure analysis (E20), extract_zernike_opd for optical path difference (E21)",
      "helpful_count": 4,
      "harmful_count": 0,
      "created_at": "2026-01-06T12:00:00",
      "last_used": null,
      "source_trials": [],
      "tags": ["extractor", "zernike"]
    },
    "dom-00001": {
      "id": "dom-00001",
      "category": "dom",
      "content": "For mirror optimization: WFE = 2 * surface figure RMS (reflection doubles error)",
      "helpful_count": 3,
      "harmful_count": 0,
      "created_at": "2026-01-06T12:00:00",
      "last_used": null,
      "source_trials": [],
      "tags": ["mirror", "optics", "fundamental"]
    },
    "wf-00001": {
      "id": "wf-00001",
      "category": "wf",
      "content": "Always run 5-10 initial FEA trials before enabling surrogate to establish baseline",
      "helpful_count": 4,
      "harmful_count": 0,
      "created_at": "2026-01-06T12:00:00",
      "last_used": null,
      "source_trials": [],
      "tags": ["surrogate", "workflow"]
    }
  }
}
```

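The `helpful_count`/`harmful_count` fields drive the ACE feedback loop; a sketch of how a session might bump them after acting on a playbook item (the function name is illustrative, not an existing Atomizer API):

```python
import json
from datetime import datetime
from pathlib import Path

def record_feedback(playbook_path: Path, item_id: str, helpful: bool) -> None:
    """Increment the helpful/harmful counter for one playbook item and save."""
    playbook = json.loads(playbook_path.read_text(encoding="utf-8"))
    item = playbook["items"][item_id]
    item["helpful_count" if helpful else "harmful_count"] += 1
    item["last_used"] = datetime.now().isoformat(timespec="seconds")
    playbook_path.write_text(json.dumps(playbook, indent=2), encoding="utf-8")
```

Keeping both counters (rather than a single score) lets the curator later retire items whose harmful count grows.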