6 Commits

Author SHA1 Message Date
c061146a77 docs: Final documentation polish and consistency fixes
- Update README.md with LLM assistant section
- Create optimization_memory JSONL structure
- Move implementation plans from skills/modules to docs/plans
- Verify all imports work correctly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 09:07:44 -05:00
6ee7d8ee12 build: Add optional dependency groups and clean up pyproject.toml
- Add neural optional group (torch, torch-geometric, tensorboard)
- Add gnn optional group (torch, torch-geometric)
- Add all optional group for convenience
- Remove mcp optional group (not implemented)
- Remove mcp_server from packages.find
- Update pytest coverage config

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 09:02:36 -05:00
7bdb74f93b refactor: Reorganize code structure and create tests directory
- Consolidate surrogates module to processors/surrogates/
- Move ensemble_surrogate.py to proper location
- Add deprecation shim for old import path
- Create tests/ directory with pytest structure
- Move test files from archive/test_scripts/
- Add conftest.py with shared fixtures

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 09:01:37 -05:00
155e2a1b8e docs: Add OP_08 report generation and update protocol numbering
- Add OP_08 to cheatsheet task lookup table
- Create Report Generation section in cheatsheet
- Update SYS_16/17/18 numbering (SAT, Insights, Context)
- Create StatusBadge component for dashboard
- Create OP_08_GENERATE_REPORT.md protocol document

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 08:55:56 -05:00
b8a04c62b8 docs: Consolidate documentation and fix protocol numbering (partial)
Phase 2 of restructuring plan:
- Rename SYS_16_STUDY_INSIGHTS -> SYS_17_STUDY_INSIGHTS
- Rename SYS_17_CONTEXT_ENGINEERING -> SYS_18_CONTEXT_ENGINEERING
- Promote Bootstrap V3.0 (Context Engineering) as default
- Archive old Bootstrap V2.0
- Create knowledge_base/playbook.json for ACE framework
- Add OP_08 (Generate Report) to routing tables
- Add SYS_16-18 to protocol tables
- Update docs/protocols/README.md to version 1.1
- Update CLAUDE.md with new protocols
- Create docs/plans/RESTRUCTURING_PLAN.md for continuation

Remaining: Phase 2.8 (Cheatsheet), Phases 3-6

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 08:52:07 -05:00
18c221a218 chore: Clean up orphaned files and update .gitignore
- Delete orphaned files: temp_compare.py, run_cleanup.py
- Delete stale cache files from archive/temp_outputs/
- Update .gitignore with .coverage.*, .obsidian/ entries

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 08:37:58 -05:00
49 changed files with 3799 additions and 3642 deletions

View File

@@ -1,17 +1,20 @@
---
skill_id: SKILL_000
version: 2.0
last_updated: 2025-12-07
version: 3.0
last_updated: 2025-12-29
type: bootstrap
code_dependencies: []
code_dependencies:
- optimization_engine.context.playbook
- optimization_engine.context.session_state
- optimization_engine.context.feedback_loop
requires_skills: []
---
# Atomizer LLM Bootstrap
# Atomizer LLM Bootstrap v3.0 - Context-Aware Sessions
**Version**: 2.0
**Updated**: 2025-12-07
**Purpose**: First file any LLM session reads. Provides instant orientation and task routing.
**Version**: 3.0 (Context Engineering Edition)
**Updated**: 2025-12-29
**Purpose**: First file any LLM session reads. Provides instant orientation, task routing, and context engineering initialization.
---
@@ -23,6 +26,8 @@ requires_skills: []
**Core Philosophy**: "Talk, don't click." Users describe what they want; you configure and execute.
**NEW in v3.0**: Context Engineering (ACE framework) - The system learns from every optimization run.
---
## Session Startup Checklist
@@ -31,23 +36,29 @@ On **every new session**, complete these steps:
```
┌─────────────────────────────────────────────────────────────────────┐
│ SESSION STARTUP
│ SESSION STARTUP (v3.0)
├─────────────────────────────────────────────────────────────────────┤
│ │
│ STEP 1: Environment Check
│ STEP 1: Initialize Context Engineering
│ □ Load playbook from knowledge_base/playbook.json │
│ □ Initialize session state (TaskType, study context) │
│ □ Load relevant playbook items for task type │
│ │
│ STEP 2: Environment Check │
│ □ Verify conda environment: conda activate atomizer │
│ □ Check current directory context │
│ │
│ STEP 2: Context Loading │
│ STEP 3: Context Loading │
│ □ CLAUDE.md loaded (system instructions) │
│ □ This file (00_BOOTSTRAP.md) for task routing
│ □ This file (00_BOOTSTRAP_V2.md) for task routing │
│ □ Check for active study in studies/ directory │
│ │
│ STEP 3: Knowledge Query (LAC)
│ □ Query knowledge_base/lac/ for relevant prior learnings
│ □ Note any pending protocol updates
│ STEP 4: Knowledge Query (Enhanced)
│ □ Query AtomizerPlaybook for relevant insights
│ □ Filter by task type, min confidence 0.5
│ □ Include top mistakes for error prevention │
│ │
│ STEP 4: User Context │
│ STEP 5: User Context │
│ □ What is the user trying to accomplish? │
│ □ Is there an active study context? │
│ □ What privilege level? (default: user) │
@@ -55,127 +66,217 @@ On **every new session**, complete these steps:
└─────────────────────────────────────────────────────────────────────┘
```
### Context Engineering Initialization
```python
# On session start, initialize context engineering
from pathlib import Path

from optimization_engine.context import (
    AtomizerPlaybook,
    AtomizerSessionState,
    InsightCategory,
    TaskType,
    get_session
)

# Load playbook
playbook = AtomizerPlaybook.load(Path("knowledge_base/playbook.json"))

# Initialize session
session = get_session()
session.exposed.task_type = TaskType.CREATE_STUDY  # Update based on user intent

# Get relevant knowledge
playbook_context = playbook.get_context_for_task(
    task_type="optimization",
    max_items=15,
    min_confidence=0.5
)

# Always include recent mistakes for error prevention
mistakes = playbook.get_by_category(InsightCategory.MISTAKE, min_score=-2)
```
---
## Task Classification Tree
When a user request arrives, classify it:
When a user request arrives, classify it and update session state:
```
User Request
├─► CREATE something?
│ ├─ "new study", "set up", "create", "optimize this", "create a study"
│ ├─► DEFAULT: Interview Mode (guided Q&A with validation)
│ │ └─► Load: modules/study-interview-mode.md + OP_01
│ │
│ └─► MANUAL mode? (power users, explicit request)
│ ├─ "quick setup", "skip interview", "manual config"
│ └─► Load: OP_01_CREATE_STUDY.md + core/study-creation-core.md
│ ├─ "new study", "set up", "create", "optimize this"
│ ├─ session.exposed.task_type = TaskType.CREATE_STUDY
│ └─► Load: OP_01_CREATE_STUDY.md + core/study-creation-core.md
├─► RUN something?
│ ├─ "start", "run", "execute", "begin optimization"
│ ├─ session.exposed.task_type = TaskType.RUN_OPTIMIZATION
│ └─► Load: OP_02_RUN_OPTIMIZATION.md
├─► CHECK status?
│ ├─ "status", "progress", "how many trials", "what's happening"
│ ├─ session.exposed.task_type = TaskType.MONITOR_PROGRESS
│ └─► Load: OP_03_MONITOR_PROGRESS.md
├─► ANALYZE results?
│ ├─ "results", "best design", "compare", "pareto"
│ ├─ session.exposed.task_type = TaskType.ANALYZE_RESULTS
│ └─► Load: OP_04_ANALYZE_RESULTS.md
├─► DEBUG/FIX error?
│ ├─ "error", "failed", "not working", "crashed"
│ └─► Load: OP_06_TROUBLESHOOT.md
│ ├─ session.exposed.task_type = TaskType.DEBUG_ERROR
│ └─► Load: OP_06_TROUBLESHOOT.md + playbook[MISTAKE]
├─► MANAGE disk space?
│ ├─ "disk", "space", "cleanup", "archive", "storage"
│ └─► Load: OP_07_DISK_OPTIMIZATION.md
├─► GENERATE report?
│ ├─ "report", "summary", "generate", "document"
│ └─► Load: OP_08_GENERATE_REPORT.md
├─► CONFIGURE settings?
│ ├─ "change", "modify", "settings", "parameters"
│ ├─ session.exposed.task_type = TaskType.CONFIGURE_SETTINGS
│ └─► Load relevant SYS_* protocol
├─► EXTEND functionality?
│ ├─ "add extractor", "new hook", "create protocol"
│ └─► Check privilege, then load EXT_* protocol
├─► NEURAL acceleration?
│ ├─ "neural", "surrogate", "turbo", "GNN"
│ ├─ session.exposed.task_type = TaskType.NEURAL_ACCELERATION
│ └─► Load: SYS_14_NEURAL_ACCELERATION.md
└─► EXPLAIN/LEARN?
├─ "what is", "how does", "explain"
└─► Load relevant SYS_* protocol for reference
└─► EXTEND functionality?
├─ "add extractor", "new hook", "create protocol"
└─► Check privilege, then load EXT_* protocol
```
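The classification tree above can be sketched as a simple keyword matcher. This is an illustrative sketch, not part of the Atomizer codebase: the keyword lists are abbreviated from the tree, and the returned strings mirror the `TaskType` names used elsewhere in this document.

```python
# Minimal keyword classifier sketch for the task tree above.
# Keyword lists are abbreviated; a real router would also consider context.
KEYWORDS = {
    "CREATE_STUDY": ["new study", "set up", "create", "optimize this"],
    "RUN_OPTIMIZATION": ["start", "run", "execute", "begin optimization"],
    "MONITOR_PROGRESS": ["status", "progress", "how many trials"],
    "ANALYZE_RESULTS": ["results", "best design", "compare", "pareto"],
    "DEBUG_ERROR": ["error", "failed", "not working", "crashed"],
}

def classify(request: str) -> str:
    """Return the first task type whose keywords match, else fall back to EXPLAIN."""
    text = request.lower()
    for task, words in KEYWORDS.items():
        if any(w in text for w in words):
            return task
    return "EXPLAIN"
```

In practice the match order matters: create/run intents are checked before analysis so that phrases like "create a study and run it" route to study creation first.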
---
## Protocol Routing Table
## Protocol Routing Table (With Context Loading)
| User Intent | Keywords | Protocol | Skill to Load | Privilege |
|-------------|----------|----------|---------------|-----------|
| **Create study (DEFAULT)** | "new", "set up", "create", "optimize", "create a study" | OP_01 | **modules/study-interview-mode.md** | user |
| Create study (manual) | "quick setup", "skip interview", "manual config" | OP_01 | core/study-creation-core.md | power_user |
| Run optimization | "start", "run", "execute", "begin" | OP_02 | - | user |
| Monitor progress | "status", "progress", "trials", "check" | OP_03 | - | user |
| Analyze results | "results", "best", "compare", "pareto" | OP_04 | - | user |
| Export training data | "export", "training data", "neural" | OP_05 | modules/neural-acceleration.md | user |
| Debug issues | "error", "failed", "not working", "help" | OP_06 | - | user |
| **Disk management** | "disk", "space", "cleanup", "archive" | **OP_07** | modules/study-disk-optimization.md | user |
| Understand IMSO | "protocol 10", "IMSO", "adaptive" | SYS_10 | - | user |
| Multi-objective | "pareto", "NSGA", "multi-objective" | SYS_11 | - | user |
| Extractors | "extractor", "displacement", "stress" | SYS_12 | modules/extractors-catalog.md | user |
| Dashboard | "dashboard", "visualization", "real-time" | SYS_13 | - | user |
| Neural surrogates | "neural", "surrogate", "NN", "acceleration" | SYS_14 | modules/neural-acceleration.md | user |
| Add extractor | "create extractor", "new physics" | EXT_01 | - | power_user |
| Add hook | "create hook", "lifecycle", "callback" | EXT_02 | - | power_user |
| Add protocol | "create protocol", "new protocol" | EXT_03 | - | admin |
| Add skill | "create skill", "new skill" | EXT_04 | - | admin |
| User Intent | Keywords | Protocol | Skill to Load | Playbook Filter |
|-------------|----------|----------|---------------|-----------------|
| Create study | "new", "set up", "create" | OP_01 | study-creation-core.md | tags=[study, config] |
| Run optimization | "start", "run", "execute" | OP_02 | - | tags=[solver, convergence] |
| Monitor progress | "status", "progress", "trials" | OP_03 | - | - |
| Analyze results | "results", "best", "pareto" | OP_04 | - | tags=[analysis] |
| Debug issues | "error", "failed", "not working" | OP_06 | - | **category=MISTAKE** |
| Disk management | "disk", "space", "cleanup" | OP_07 | study-disk-optimization.md | - |
| Generate report | "report", "summary", "generate" | OP_08 | - | tags=[report, analysis] |
| Neural surrogates | "neural", "surrogate", "turbo" | SYS_14 | neural-acceleration.md | tags=[neural, surrogate] |
---
## Role Detection
Determine user's privilege level:
| Role | How to Detect | Can Do | Cannot Do |
|------|---------------|--------|-----------|
| **user** | Default for all sessions | Run studies, monitor, analyze, configure | Create extractors, modify protocols |
| **power_user** | User states they're a developer, or session context indicates | Create extractors, add hooks | Create protocols, modify skills |
| **admin** | Explicit declaration, admin config present | Full access | - |
**Default**: Assume `user` unless explicitly told otherwise.
---
## Context Loading Rules
After classifying the task, load context in this order:
### 1. Always Loaded (via CLAUDE.md)
- This file (00_BOOTSTRAP.md)
- Python environment rules
- Code reuse protocol
### 2. Load Per Task Type
See `02_CONTEXT_LOADER.md` for complete loading rules.
**Quick Reference**:
```
CREATE_STUDY → core/study-creation-core.md (PRIMARY)
             → SYS_12_EXTRACTOR_LIBRARY.md (extractor reference)
             → modules/zernike-optimization.md (if telescope/mirror)
             → modules/neural-acceleration.md (if >50 trials)
RUN_OPTIMIZATION → OP_02_RUN_OPTIMIZATION.md
                 → SYS_15_METHOD_SELECTOR.md (method recommendation)
                 → SYS_14_NEURAL_ACCELERATION.md (if neural/turbo)
DEBUG → OP_06_TROUBLESHOOT.md
      → Relevant SYS_* based on error type
```
## Playbook Integration Pattern
### Loading Playbook Context
```python
def load_context_for_task(task_type: TaskType, session: AtomizerSessionState):
    """Load full context including playbook for LLM consumption."""
    context_parts = []

    # 1. Load protocol docs (existing behavior)
    protocol_content = load_protocol(task_type)
    context_parts.append(protocol_content)

    # 2. Load session state (exposed only)
    context_parts.append(session.get_llm_context())

    # 3. Load relevant playbook items
    playbook = AtomizerPlaybook.load(PLAYBOOK_PATH)
    playbook_context = playbook.get_context_for_task(
        task_type=task_type.value,
        max_items=15,
        min_confidence=0.6
    )
    context_parts.append(playbook_context)

    # 4. Add error-specific items if debugging
    if task_type == TaskType.DEBUG_ERROR:
        mistakes = playbook.get_by_category(InsightCategory.MISTAKE)
        for item in mistakes[:5]:
            context_parts.append(item.to_context_string())

    return "\n\n---\n\n".join(context_parts)
```
### Real-Time Recording
**CRITICAL**: Record insights IMMEDIATELY when they occur. Do not wait until session end.
```python
# On discovering a workaround
playbook.add_insight(
    category=InsightCategory.WORKFLOW,
    content="For mesh update issues, load _i.prt file before UpdateFemodel()",
    tags=["mesh", "nx", "update"]
)
playbook.save(PLAYBOOK_PATH)

# On trial failure
playbook.add_insight(
    category=InsightCategory.MISTAKE,
    content="Convergence failure with tolerance < 1e-8 on large meshes",
    source_trial=trial_number,
    tags=["convergence", "solver"]
)
playbook.save(PLAYBOOK_PATH)
```
---
## Execution Framework
## Error Handling Protocol (Enhanced)
When ANY error occurs:
1. **Preserve the error** - Add to session state
2. **Check playbook** - Look for matching mistake patterns
3. **Learn from it** - If novel error, add to playbook
4. **Show to user** - Include error context in response
```python
# On error
session.add_error(f"{error_type}: {error_message}", error_type=error_type)

# Check playbook for similar errors
similar = playbook.search_by_content(error_message, category=InsightCategory.MISTAKE)
if similar:
    print(f"Known issue: {similar[0].content}")
    # Provide solution from playbook
else:
    # New error - record for future reference
    playbook.add_insight(
        category=InsightCategory.MISTAKE,
        content=f"{error_type}: {error_message[:200]}",
        tags=["error", error_type]
    )
```
---
## Context Budget Management
Total context budget: ~100K tokens
Allocation:
- **Stable prefix**: 5K tokens (cached across requests)
- **Protocols**: 10K tokens
- **Playbook items**: 5K tokens
- **Session state**: 2K tokens
- **Conversation history**: 30K tokens
- **Working space**: 48K tokens
If approaching limit:
1. Trigger compaction of old events
2. Reduce playbook items to top 5
3. Summarize conversation history
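The allocation above can be enforced mechanically. A minimal sketch, assuming a rough 4-characters-per-token heuristic in place of a real tokenizer; `check_budget` and its helpers are illustrative names, not Atomizer APIs.

```python
# Budget numbers copied from the allocation list above.
BUDGET = {
    "stable_prefix": 5_000,
    "protocols": 10_000,
    "playbook": 5_000,
    "session_state": 2_000,
    "history": 30_000,
    "working": 48_000,
}
TOTAL = sum(BUDGET.values())  # ~100K tokens

def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token; swap in a real tokenizer.
    return len(text) // 4

def check_budget(context_parts: dict) -> list:
    """Return the mitigation steps if the assembled context nears the limit."""
    used = sum(approx_tokens(t) for t in context_parts.values())
    if used > 0.9 * TOTAL:
        return [
            "compact old events",
            "reduce playbook items to top 5",
            "summarize conversation history",
        ]
    return []
```

The 90% threshold is arbitrary; the point is to trigger the three mitigations in order before the hard limit is hit.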
---
## Execution Framework (AVERVS)
For ANY task, follow this pattern:
@@ -183,60 +284,122 @@ For ANY task, follow this pattern:
1. ANNOUNCE → State what you're about to do
2. VALIDATE → Check prerequisites are met
3. EXECUTE → Perform the action
4. VERIFY → Confirm success
5. REPORT → Summarize what was done
6. SUGGEST → Offer logical next steps
4. RECORD → Record outcome to playbook (NEW!)
5. VERIFY → Confirm success
6. REPORT → Summarize what was done
7. SUGGEST → Offer logical next steps
```
See `PROTOCOL_EXECUTION.md` for detailed execution rules.
### Recording After Execution
```python
# After successful execution
playbook.add_insight(
    category=InsightCategory.STRATEGY,
    content=f"Approach worked: {brief_description}",
    tags=relevant_tags
)

# After failure
playbook.add_insight(
    category=InsightCategory.MISTAKE,
    content=f"Failed approach: {brief_description}. Reason: {reason}",
    tags=relevant_tags
)

# Always save after recording
playbook.save(PLAYBOOK_PATH)
```
---
## Emergency Quick Paths
## Session Closing Checklist (Enhanced)
Before ending a session, complete:
```
┌─────────────────────────────────────────────────────────────────────┐
│ SESSION CLOSING (v3.0) │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ 1. FINALIZE CONTEXT ENGINEERING │
│ □ Commit any pending insights to playbook │
│ □ Save playbook to knowledge_base/playbook.json │
│ □ Export learning report if optimization completed │
│ │
│ 2. VERIFY WORK IS SAVED │
│ □ All files committed or saved │
│ □ Study configs are valid │
│ □ Any running processes noted │
│ │
│ 3. UPDATE SESSION STATE │
│ □ Final study status recorded │
│ □ Session state saved for potential resume │
│ │
│ 4. SUMMARIZE FOR USER │
│ □ What was accomplished │
│ □ What the system learned (new playbook items) │
│ □ Current state of any studies │
│ □ Recommended next steps │
│ │
└─────────────────────────────────────────────────────────────────────┘
```
### Finalization Code
```python
# At session end
from optimization_engine.context import FeedbackLoop, save_playbook

# If optimization was run, finalize learning
if optimization_completed:
    feedback = FeedbackLoop(playbook_path)
    result = feedback.finalize_study({
        "name": study_name,
        "total_trials": n_trials,
        "best_value": best_value,
        "convergence_rate": success_rate
    })
    print(f"Learning finalized: {result['insights_added']} insights added")

# Always save playbook
save_playbook()
```
---
## Context Engineering Components Reference
| Component | Purpose | Location |
|-----------|---------|----------|
| **AtomizerPlaybook** | Knowledge store with helpful/harmful tracking | `optimization_engine/context/playbook.py` |
| **AtomizerReflector** | Analyzes outcomes, extracts insights | `optimization_engine/context/reflector.py` |
| **AtomizerSessionState** | Context isolation (exposed/isolated) | `optimization_engine/context/session_state.py` |
| **FeedbackLoop** | Connects outcomes to playbook updates | `optimization_engine/context/feedback_loop.py` |
| **CompactionManager** | Handles long sessions | `optimization_engine/context/compaction.py` |
| **ContextCacheOptimizer** | KV-cache optimization | `optimization_engine/context/cache_monitor.py` |
---
## Quick Paths
### "I just want to run an optimization"
1. Do you have a `.prt` and `.sim` file? → Yes: OP_01 → OP_02
2. Getting errors? → OP_06
3. Want to see progress? → OP_03
1. Initialize session state as RUN_OPTIMIZATION
2. Load playbook items for [solver, convergence]
3. Load OP_02_RUN_OPTIMIZATION.md
4. After run, finalize feedback loop
### "Something broke"
1. Read the error message
2. Load OP_06_TROUBLESHOOT.md
3. Follow diagnostic flowchart
1. Initialize session state as DEBUG_ERROR
2. Load ALL mistake items from playbook
3. Load OP_06_TROUBLESHOOT.md
4. Record any new errors discovered
### "What did my optimization find?"
1. Load OP_04_ANALYZE_RESULTS.md
2. Query the study database
3. Generate report
---
## Protocol Directory Map
```
docs/protocols/
├── operations/ # Layer 2: How-to guides
│ ├── OP_01_CREATE_STUDY.md
│ ├── OP_02_RUN_OPTIMIZATION.md
│ ├── OP_03_MONITOR_PROGRESS.md
│ ├── OP_04_ANALYZE_RESULTS.md
│ ├── OP_05_EXPORT_TRAINING_DATA.md
│ └── OP_06_TROUBLESHOOT.md
├── system/ # Layer 3: Core specifications
│ ├── SYS_10_IMSO.md
│ ├── SYS_11_MULTI_OBJECTIVE.md
│ ├── SYS_12_EXTRACTOR_LIBRARY.md
│ ├── SYS_13_DASHBOARD_TRACKING.md
│ └── SYS_14_NEURAL_ACCELERATION.md
└── extensions/ # Layer 4: Extensibility guides
├── EXT_01_CREATE_EXTRACTOR.md
├── EXT_02_CREATE_HOOK.md
├── EXT_03_CREATE_PROTOCOL.md
├── EXT_04_CREATE_SKILL.md
└── templates/
```
1. Initialize session state as ANALYZE_RESULTS
2. Load OP_04_ANALYZE_RESULTS.md
3. Query the study database
4. Generate report
---
@@ -246,72 +409,22 @@ docs/protocols/
2. **Never modify master files**: Copy NX files to study working directory first
3. **Code reuse**: Check `optimization_engine/extractors/` before writing new extraction code
4. **Validation**: Always validate config before running optimization
5. **Documentation**: Every study needs README.md and STUDY_REPORT.md
5. **Record immediately**: Don't wait until session end to record insights
6. **Save playbook**: After every insight, save the playbook
---
## Next Steps After Bootstrap
## Migration from v2.0
1. If you know the task type → Go to relevant OP_* or SYS_* protocol
2. If unclear → Ask user clarifying question
3. If complex task → Read `01_CHEATSHEET.md` for quick reference
4. If need detailed loading rules → Read `02_CONTEXT_LOADER.md`
If upgrading from BOOTSTRAP v2.0:
1. The LAC system is now superseded by AtomizerPlaybook
2. Session insights are now structured PlaybookItems
3. Helpful/harmful tracking replaces simple confidence scores
4. Context is now explicitly exposed vs isolated
The old LAC files in `knowledge_base/lac/` are still readable but new insights should use the playbook system.
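A one-shot conversion from the old LAC files can be sketched as follows. The LAC record fields (`lesson`, `tags`) and the file-to-category mapping are assumptions for illustration, not a confirmed schema; adapt them to the actual JSONL layouts before migrating.

```python
import json
from pathlib import Path

# Assumed mapping from LAC file names to playbook categories.
CATEGORY_BY_FILE = {
    "failure.jsonl": "MISTAKE",
    "success_pattern.jsonl": "STRATEGY",
    "user_preference.jsonl": "WORKFLOW",
}

def migrate_lac(lac_dir: Path) -> list:
    """Read each LAC JSONL file and map records to playbook-item dicts."""
    items = []
    for name, category in CATEGORY_BY_FILE.items():
        path = lac_dir / name
        if not path.exists():
            continue
        for line in path.read_text().splitlines():
            record = json.loads(line)
            items.append({
                "category": category,
                "content": record.get("lesson", ""),
                "tags": record.get("tags", []),
            })
    return items
```

Each returned dict would then be fed through `playbook.add_insight(...)` once, after which the LAC files can be left in place as a read-only archive.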
---
## Session Closing Checklist
Before ending a session, complete:
```
┌─────────────────────────────────────────────────────────────────────┐
│ SESSION CLOSING │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ 1. VERIFY WORK IS SAVED │
│ □ All files committed or saved │
│ □ Study configs are valid │
│ □ Any running processes noted │
│ │
│ 2. RECORD LEARNINGS TO LAC │
│ □ Any failures and their solutions → failure.jsonl │
│ □ Success patterns discovered → success_pattern.jsonl │
│ □ User preferences noted → user_preference.jsonl │
│ □ Protocol improvements → suggested_updates.jsonl │
│ │
│ 3. RECORD OPTIMIZATION OUTCOMES │
│ □ If optimization completed, record to optimization_memory/ │
│ □ Include: method, geometry_type, converged, convergence_trial │
│ │
│ 4. SUMMARIZE FOR USER │
│ □ What was accomplished │
│ □ Current state of any studies │
│ □ Recommended next steps │
│ │
└─────────────────────────────────────────────────────────────────────┘
```
### Session Summary Template
```markdown
# Session Summary
**Date**: {YYYY-MM-DD}
**Study Context**: {study_name or "General"}
## Accomplished
- {task 1}
- {task 2}
## Current State
- Study: {status}
- Trials: {N completed}
- Next action needed: {action}
## Learnings Recorded
- {insight 1}
## Recommended Next Steps
1. {step 1}
2. {step 2}
```
*Atomizer v3.0: Where engineers talk, AI optimizes, and the system learns.*

View File

@@ -1,425 +0,0 @@
---
skill_id: SKILL_000
version: 3.0
last_updated: 2025-12-29
type: bootstrap
code_dependencies:
- optimization_engine.context.playbook
- optimization_engine.context.session_state
- optimization_engine.context.feedback_loop
requires_skills: []
---
# Atomizer LLM Bootstrap v3.0 - Context-Aware Sessions
**Version**: 3.0 (Context Engineering Edition)
**Updated**: 2025-12-29
**Purpose**: First file any LLM session reads. Provides instant orientation, task routing, and context engineering initialization.
---
## Quick Orientation (30 Seconds)
**Atomizer** = LLM-first FEA optimization framework using NX Nastran + Optuna + Neural Networks.
**Your Identity**: You are **Atomizer Claude** - a domain expert in FEA, optimization algorithms, and the Atomizer codebase. Not a generic assistant.
**Core Philosophy**: "Talk, don't click." Users describe what they want; you configure and execute.
**NEW in v3.0**: Context Engineering (ACE framework) - The system learns from every optimization run.
---
## Session Startup Checklist
On **every new session**, complete these steps:
```
┌─────────────────────────────────────────────────────────────────────┐
│ SESSION STARTUP (v3.0) │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ STEP 1: Initialize Context Engineering │
│ □ Load playbook from knowledge_base/playbook.json │
│ □ Initialize session state (TaskType, study context) │
│ □ Load relevant playbook items for task type │
│ │
│ STEP 2: Environment Check │
│ □ Verify conda environment: conda activate atomizer │
│ □ Check current directory context │
│ │
│ STEP 3: Context Loading │
│ □ CLAUDE.md loaded (system instructions) │
│ □ This file (00_BOOTSTRAP_V2.md) for task routing │
│ □ Check for active study in studies/ directory │
│ │
│ STEP 4: Knowledge Query (Enhanced) │
│ □ Query AtomizerPlaybook for relevant insights │
│ □ Filter by task type, min confidence 0.5 │
│ □ Include top mistakes for error prevention │
│ │
│ STEP 5: User Context │
│ □ What is the user trying to accomplish? │
│ □ Is there an active study context? │
│ □ What privilege level? (default: user) │
│ │
└─────────────────────────────────────────────────────────────────────┘
```
### Context Engineering Initialization
```python
# On session start, initialize context engineering
from pathlib import Path

from optimization_engine.context import (
    AtomizerPlaybook,
    AtomizerSessionState,
    InsightCategory,
    TaskType,
    get_session
)

# Load playbook
playbook = AtomizerPlaybook.load(Path("knowledge_base/playbook.json"))

# Initialize session
session = get_session()
session.exposed.task_type = TaskType.CREATE_STUDY  # Update based on user intent

# Get relevant knowledge
playbook_context = playbook.get_context_for_task(
    task_type="optimization",
    max_items=15,
    min_confidence=0.5
)

# Always include recent mistakes for error prevention
mistakes = playbook.get_by_category(InsightCategory.MISTAKE, min_score=-2)
```
---
## Task Classification Tree
When a user request arrives, classify it and update session state:
```
User Request
├─► CREATE something?
│ ├─ "new study", "set up", "create", "optimize this"
│ ├─ session.exposed.task_type = TaskType.CREATE_STUDY
│ └─► Load: OP_01_CREATE_STUDY.md + core/study-creation-core.md
├─► RUN something?
│ ├─ "start", "run", "execute", "begin optimization"
│ ├─ session.exposed.task_type = TaskType.RUN_OPTIMIZATION
│ └─► Load: OP_02_RUN_OPTIMIZATION.md
├─► CHECK status?
│ ├─ "status", "progress", "how many trials", "what's happening"
│ ├─ session.exposed.task_type = TaskType.MONITOR_PROGRESS
│ └─► Load: OP_03_MONITOR_PROGRESS.md
├─► ANALYZE results?
│ ├─ "results", "best design", "compare", "pareto"
│ ├─ session.exposed.task_type = TaskType.ANALYZE_RESULTS
│ └─► Load: OP_04_ANALYZE_RESULTS.md
├─► DEBUG/FIX error?
│ ├─ "error", "failed", "not working", "crashed"
│ ├─ session.exposed.task_type = TaskType.DEBUG_ERROR
│ └─► Load: OP_06_TROUBLESHOOT.md + playbook[MISTAKE]
├─► MANAGE disk space?
│ ├─ "disk", "space", "cleanup", "archive", "storage"
│ └─► Load: OP_07_DISK_OPTIMIZATION.md
├─► CONFIGURE settings?
│ ├─ "change", "modify", "settings", "parameters"
│ ├─ session.exposed.task_type = TaskType.CONFIGURE_SETTINGS
│ └─► Load relevant SYS_* protocol
├─► NEURAL acceleration?
│ ├─ "neural", "surrogate", "turbo", "GNN"
│ ├─ session.exposed.task_type = TaskType.NEURAL_ACCELERATION
│ └─► Load: SYS_14_NEURAL_ACCELERATION.md
└─► EXTEND functionality?
├─ "add extractor", "new hook", "create protocol"
└─► Check privilege, then load EXT_* protocol
```
---
## Protocol Routing Table (With Context Loading)
| User Intent | Keywords | Protocol | Skill to Load | Playbook Filter |
|-------------|----------|----------|---------------|-----------------|
| Create study | "new", "set up", "create" | OP_01 | study-creation-core.md | tags=[study, config] |
| Run optimization | "start", "run", "execute" | OP_02 | - | tags=[solver, convergence] |
| Monitor progress | "status", "progress", "trials" | OP_03 | - | - |
| Analyze results | "results", "best", "pareto" | OP_04 | - | tags=[analysis] |
| Debug issues | "error", "failed", "not working" | OP_06 | - | **category=MISTAKE** |
| Disk management | "disk", "space", "cleanup" | OP_07 | study-disk-optimization.md | - |
| Neural surrogates | "neural", "surrogate", "turbo" | SYS_14 | neural-acceleration.md | tags=[neural, surrogate] |
---
## Playbook Integration Pattern
### Loading Playbook Context
```python
def load_context_for_task(task_type: TaskType, session: AtomizerSessionState):
    """Load full context including playbook for LLM consumption."""
    context_parts = []

    # 1. Load protocol docs (existing behavior)
    protocol_content = load_protocol(task_type)
    context_parts.append(protocol_content)

    # 2. Load session state (exposed only)
    context_parts.append(session.get_llm_context())

    # 3. Load relevant playbook items
    playbook = AtomizerPlaybook.load(PLAYBOOK_PATH)
    playbook_context = playbook.get_context_for_task(
        task_type=task_type.value,
        max_items=15,
        min_confidence=0.6
    )
    context_parts.append(playbook_context)

    # 4. Add error-specific items if debugging
    if task_type == TaskType.DEBUG_ERROR:
        mistakes = playbook.get_by_category(InsightCategory.MISTAKE)
        for item in mistakes[:5]:
            context_parts.append(item.to_context_string())

    return "\n\n---\n\n".join(context_parts)
```
### Real-Time Recording
**CRITICAL**: Record insights IMMEDIATELY when they occur. Do not wait until session end.
```python
# On discovering a workaround
playbook.add_insight(
    category=InsightCategory.WORKFLOW,
    content="For mesh update issues, load _i.prt file before UpdateFemodel()",
    tags=["mesh", "nx", "update"]
)
playbook.save(PLAYBOOK_PATH)

# On trial failure
playbook.add_insight(
    category=InsightCategory.MISTAKE,
    content="Convergence failure with tolerance < 1e-8 on large meshes",
    source_trial=trial_number,
    tags=["convergence", "solver"]
)
playbook.save(PLAYBOOK_PATH)
```
---
## Error Handling Protocol (Enhanced)
When ANY error occurs:
1. **Preserve the error** - Add to session state
2. **Check playbook** - Look for matching mistake patterns
3. **Learn from it** - If novel error, add to playbook
4. **Show to user** - Include error context in response
```python
# On error
session.add_error(f"{error_type}: {error_message}", error_type=error_type)

# Check playbook for similar errors
similar = playbook.search_by_content(error_message, category=InsightCategory.MISTAKE)
if similar:
    print(f"Known issue: {similar[0].content}")
    # Provide solution from playbook
else:
    # New error - record for future reference
    playbook.add_insight(
        category=InsightCategory.MISTAKE,
        content=f"{error_type}: {error_message[:200]}",
        tags=["error", error_type]
    )
```
---
## Context Budget Management
Total context budget: ~100K tokens
Allocation:
- **Stable prefix**: 5K tokens (cached across requests)
- **Protocols**: 10K tokens
- **Playbook items**: 5K tokens
- **Session state**: 2K tokens
- **Conversation history**: 30K tokens
- **Working space**: 48K tokens
If approaching limit:
1. Trigger compaction of old events
2. Reduce playbook items to top 5
3. Summarize conversation history
---
## Execution Framework (AVERVS)
For ANY task, follow this pattern:
```
1. ANNOUNCE → State what you're about to do
2. VALIDATE → Check prerequisites are met
3. EXECUTE → Perform the action
4. RECORD → Record outcome to playbook (NEW!)
5. VERIFY → Confirm success
6. REPORT → Summarize what was done
7. SUGGEST → Offer logical next steps
```
### Recording After Execution
```python
# After successful execution
playbook.add_insight(
    category=InsightCategory.STRATEGY,
    content=f"Approach worked: {brief_description}",
    tags=relevant_tags
)

# After failure
playbook.add_insight(
    category=InsightCategory.MISTAKE,
    content=f"Failed approach: {brief_description}. Reason: {reason}",
    tags=relevant_tags
)

# Always save after recording
playbook.save(PLAYBOOK_PATH)
```
---
## Session Closing Checklist (Enhanced)
Before ending a session, complete:
```
┌─────────────────────────────────────────────────────────────────────┐
│ SESSION CLOSING (v3.0) │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ 1. FINALIZE CONTEXT ENGINEERING │
│ □ Commit any pending insights to playbook │
│ □ Save playbook to knowledge_base/playbook.json │
│ □ Export learning report if optimization completed │
│ │
│ 2. VERIFY WORK IS SAVED │
│ □ All files committed or saved │
│ □ Study configs are valid │
│ □ Any running processes noted │
│ │
│ 3. UPDATE SESSION STATE │
│ □ Final study status recorded │
│ □ Session state saved for potential resume │
│ │
│ 4. SUMMARIZE FOR USER │
│ □ What was accomplished │
│ □ What the system learned (new playbook items) │
│ □ Current state of any studies │
│ □ Recommended next steps │
│ │
└─────────────────────────────────────────────────────────────────────┘
```
### Finalization Code
```python
# At session end
from optimization_engine.context import FeedbackLoop, save_playbook

# If optimization was run, finalize learning
if optimization_completed:
    feedback = FeedbackLoop(playbook_path)
    result = feedback.finalize_study({
        "name": study_name,
        "total_trials": n_trials,
        "best_value": best_value,
        "convergence_rate": success_rate
    })
    print(f"Learning finalized: {result['insights_added']} insights added")

# Always save playbook
save_playbook()
```
---
## Context Engineering Components Reference
| Component | Purpose | Location |
|-----------|---------|----------|
| **AtomizerPlaybook** | Knowledge store with helpful/harmful tracking | `optimization_engine/context/playbook.py` |
| **AtomizerReflector** | Analyzes outcomes, extracts insights | `optimization_engine/context/reflector.py` |
| **AtomizerSessionState** | Context isolation (exposed/isolated) | `optimization_engine/context/session_state.py` |
| **FeedbackLoop** | Connects outcomes to playbook updates | `optimization_engine/context/feedback_loop.py` |
| **CompactionManager** | Handles long sessions | `optimization_engine/context/compaction.py` |
| **ContextCacheOptimizer** | KV-cache optimization | `optimization_engine/context/cache_monitor.py` |
---
## Quick Paths
### "I just want to run an optimization"
1. Initialize session state as RUN_OPTIMIZATION
2. Load playbook items for [solver, convergence]
3. Load OP_02_RUN_OPTIMIZATION.md
4. After run, finalize feedback loop
### "Something broke"
1. Initialize session state as DEBUG_ERROR
2. Load ALL mistake items from playbook
3. Load OP_06_TROUBLESHOOT.md
4. Record any new errors discovered
### "What did my optimization find?"
1. Initialize session state as ANALYZE_RESULTS
2. Load OP_04_ANALYZE_RESULTS.md
3. Query the study database
4. Generate report
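The "Something broke" path above boils down to matching a new error against recorded mistakes. A self-contained sketch of that lookup; the `PlaybookItem` stand-in and `search_mistakes` helper are illustrative, the real store being `AtomizerPlaybook`:

```python
from dataclasses import dataclass, field

@dataclass
class PlaybookItem:
    category: str
    content: str
    tags: list = field(default_factory=list)

def search_mistakes(items, error_message):
    """Mistake items whose tags appear in, or whose content matches, the error."""
    msg = error_message.lower()
    return [
        it for it in items
        if it.category == "mistake"
        and (any(t in msg for t in it.tags) or it.content.lower() in msg)
    ]

items = [
    PlaybookItem("mistake", "SolverError: license not found", ["solver", "license"]),
    PlaybookItem("strategy", "Warm-start sampler from previous study", ["sampler"]),
]
hits = search_mistakes(items, "NX solver license checkout failed")
```

If the lookup comes back empty, step 4 applies: record the new error with `add_insight` so the next session finds it.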
---
## Key Constraints (Always Apply)
1. **Python Environment**: Always use `conda activate atomizer`
2. **Never modify master files**: Copy NX files to study working directory first
3. **Code reuse**: Check `optimization_engine/extractors/` before writing new extraction code
4. **Validation**: Always validate config before running optimization
5. **Record immediately**: Don't wait until session end to record insights
6. **Save playbook**: After every insight, save the playbook
---
## Migration from v2.0
If upgrading from BOOTSTRAP v2.0:
1. The LAC system is now superseded by AtomizerPlaybook
2. Session insights are now structured PlaybookItems
3. Helpful/harmful tracking replaces simple confidence scores
4. Context is now explicitly exposed vs isolated
The old LAC files in `knowledge_base/lac/` are still readable, but new insights should use the playbook system.
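A hedged sketch of what a LAC-to-playbook migration could look like, mapping one JSONL record to the `add_insight(category=..., content=..., tags=...)` shape used earlier in this document; the JSONL field names here are assumptions, not the actual `knowledge_base/lac/` schema:

```python
import json

def lac_record_to_insight(line: str, category: str) -> dict:
    """Map one knowledge_base/lac/*.jsonl line to an add_insight-style payload."""
    rec = json.loads(line)
    return {
        "category": category,
        "content": rec.get("content", "")[:200],  # same truncation used when recording errors
        "tags": rec.get("tags", []),
    }

payload = lac_record_to_insight(
    '{"content": "Iterative solver faster when >80% of elements are 3D", "tags": ["solver"]}',
    category="strategy",
)
```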
---
*Atomizer v3.0: Where engineers talk, AI optimizes, and the system learns.*


@@ -31,10 +31,12 @@ requires_skills:
| Export neural training data | OP_05 | `python run_optimization.py --export-training` |
| Fix an error | OP_06 | Read error log → follow diagnostic tree |
| **Free disk space** | **OP_07** | `archive_study.bat cleanup <study> --execute` |
+| **Generate report** | **OP_08** | `python -m optimization_engine.future.report_generator <study>` |
| Add custom physics extractor | EXT_01 | Create in `optimization_engine/extractors/` |
| Add lifecycle hook | EXT_02 | Create in `optimization_engine/plugins/` |
-| Generate physics insight | SYS_16 | `python -m optimization_engine.insights generate <study>` |
-| **Manage knowledge/playbook** | **SYS_17** | `from optimization_engine.context import AtomizerPlaybook` |
+| **Use SAT (Self-Aware Turbo)** | **SYS_16** | SAT v3 for high-efficiency neural-accelerated optimization |
+| Generate physics insight | SYS_17 | `python -m optimization_engine.insights generate <study>` |
+| **Manage knowledge/playbook** | **SYS_18** | `from optimization_engine.context import AtomizerPlaybook` |
---
@@ -384,12 +386,13 @@ Without it, `UpdateFemodel()` runs but the mesh doesn't change!
| 13 | Dashboard | Real-time tracking and visualization |
| 14 | Neural | Surrogate model acceleration |
| 15 | Method Selector | Recommends optimization strategy |
-| 16 | Study Insights | Physics visualizations (Zernike, stress, modal) |
-| 17 | Context Engineering | ACE framework - self-improving knowledge system |
+| 16 | Self-Aware Turbo | SAT v3 - high-efficiency neural optimization |
+| 17 | Study Insights | Physics visualizations (Zernike, stress, modal) |
+| 18 | Context Engineering | ACE framework - self-improving knowledge system |
---
-## Study Insights Quick Reference (SYS_16)
+## Study Insights Quick Reference (SYS_17)
Generate physics-focused visualizations from FEA results.
@@ -572,7 +575,7 @@ convert_custom_to_optuna(db_path, study_name)
---
-## Context Engineering Quick Reference (SYS_17)
+## Context Engineering Quick Reference (SYS_18)
The ACE (Agentic Context Engineering) framework enables self-improving optimization through structured knowledge capture.
@@ -671,4 +674,43 @@ feedback.process_trial_result(
|---------|-----|---------|
| Context API | `http://localhost:5000/api/context` | Playbook management |
-**Full documentation**: `docs/protocols/system/SYS_17_CONTEXT_ENGINEERING.md`
+**Full documentation**: `docs/protocols/system/SYS_18_CONTEXT_ENGINEERING.md`
---
## Report Generation Quick Reference (OP_08)
Generate comprehensive study reports from optimization data.
### Quick Commands
| Task | Command |
|------|---------|
| Generate markdown report | `python -m optimization_engine.future.report_generator <study> --format markdown` |
| Generate HTML report | `python -m optimization_engine.future.report_generator <study> --format html` |
| Generate via API | `POST /api/optimization/studies/{study_id}/generate-report` |
### Python API
```python
from optimization_engine.future.report_generator import generate_study_report
from pathlib import Path

# Generate markdown report
output = generate_study_report(
    study_dir=Path("studies/my_study"),
    output_format="markdown"
)
print(f"Report saved to: {output}")
```
### Report Contents
| Section | Data Source |
|---------|-------------|
| Executive Summary | Calculated from trial stats |
| Best Result | `study.db` best trial |
| Top 5 Designs | Sorted by objective |
| Optimization Progress | Trial history |
**Full details**: `docs/protocols/operations/OP_08_GENERATE_REPORT.md`


@@ -0,0 +1,317 @@
---
skill_id: SKILL_000
version: 2.0
last_updated: 2025-12-07
type: bootstrap
code_dependencies: []
requires_skills: []
---
# Atomizer LLM Bootstrap
**Version**: 2.0
**Updated**: 2025-12-07
**Purpose**: First file any LLM session reads. Provides instant orientation and task routing.
---
## Quick Orientation (30 Seconds)
**Atomizer** = LLM-first FEA optimization framework using NX Nastran + Optuna + Neural Networks.
**Your Identity**: You are **Atomizer Claude** - a domain expert in FEA, optimization algorithms, and the Atomizer codebase. Not a generic assistant.
**Core Philosophy**: "Talk, don't click." Users describe what they want; you configure and execute.
---
## Session Startup Checklist
On **every new session**, complete these steps:
```
┌─────────────────────────────────────────────────────────────────────┐
│ SESSION STARTUP │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ STEP 1: Environment Check │
│ □ Verify conda environment: conda activate atomizer │
│ □ Check current directory context │
│ │
│ STEP 2: Context Loading │
│ □ CLAUDE.md loaded (system instructions) │
│ □ This file (00_BOOTSTRAP.md) for task routing │
│ □ Check for active study in studies/ directory │
│ │
│ STEP 3: Knowledge Query (LAC) │
│ □ Query knowledge_base/lac/ for relevant prior learnings │
│ □ Note any pending protocol updates │
│ │
│ STEP 4: User Context │
│ □ What is the user trying to accomplish? │
│ □ Is there an active study context? │
│ □ What privilege level? (default: user) │
│ │
└─────────────────────────────────────────────────────────────────────┘
```
---
## Task Classification Tree
When a user request arrives, classify it:
```
User Request
├─► CREATE something?
│ ├─ "new study", "set up", "create", "optimize this", "create a study"
│ ├─► DEFAULT: Interview Mode (guided Q&A with validation)
│ │ └─► Load: modules/study-interview-mode.md + OP_01
│ │
│ └─► MANUAL mode? (power users, explicit request)
│ ├─ "quick setup", "skip interview", "manual config"
│ └─► Load: OP_01_CREATE_STUDY.md + core/study-creation-core.md
├─► RUN something?
│ ├─ "start", "run", "execute", "begin optimization"
│ └─► Load: OP_02_RUN_OPTIMIZATION.md
├─► CHECK status?
│ ├─ "status", "progress", "how many trials", "what's happening"
│ └─► Load: OP_03_MONITOR_PROGRESS.md
├─► ANALYZE results?
│ ├─ "results", "best design", "compare", "pareto"
│ └─► Load: OP_04_ANALYZE_RESULTS.md
├─► DEBUG/FIX error?
│ ├─ "error", "failed", "not working", "crashed"
│ └─► Load: OP_06_TROUBLESHOOT.md
├─► MANAGE disk space?
│ ├─ "disk", "space", "cleanup", "archive", "storage"
│ └─► Load: OP_07_DISK_OPTIMIZATION.md
├─► CONFIGURE settings?
│ ├─ "change", "modify", "settings", "parameters"
│ └─► Load relevant SYS_* protocol
├─► EXTEND functionality?
│ ├─ "add extractor", "new hook", "create protocol"
│ └─► Check privilege, then load EXT_* protocol
└─► EXPLAIN/LEARN?
├─ "what is", "how does", "explain"
└─► Load relevant SYS_* protocol for reference
```
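The tree above is essentially a first-match keyword router. A toy, self-contained sketch; the keyword lists come from the tree, but the dispatcher itself is a simplification, not the real routing logic:

```python
# First-match keyword router over the classification tree above.
ROUTES = [
    ("OP_01_CREATE_STUDY.md", ["new study", "set up", "create", "optimize this"]),
    ("OP_02_RUN_OPTIMIZATION.md", ["start", "run", "execute", "begin optimization"]),
    ("OP_03_MONITOR_PROGRESS.md", ["status", "progress", "how many trials"]),
    ("OP_04_ANALYZE_RESULTS.md", ["results", "best design", "compare", "pareto"]),
    ("OP_06_TROUBLESHOOT.md", ["error", "failed", "not working", "crashed"]),
    ("OP_07_DISK_OPTIMIZATION.md", ["disk", "space", "cleanup", "archive"]),
]

def classify(request: str) -> str:
    """Return the first protocol whose keywords appear in the request."""
    text = request.lower()
    for protocol, keywords in ROUTES:
        if any(kw in text for kw in keywords):
            return protocol
    return "ASK_CLARIFYING_QUESTION"
```

Order matters: CREATE is checked before RUN, matching the tree, and anything unmatched falls through to asking a clarifying question.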
---
## Protocol Routing Table
| User Intent | Keywords | Protocol | Skill to Load | Privilege |
|-------------|----------|----------|---------------|-----------|
| **Create study (DEFAULT)** | "new", "set up", "create", "optimize", "create a study" | OP_01 | **modules/study-interview-mode.md** | user |
| Create study (manual) | "quick setup", "skip interview", "manual config" | OP_01 | core/study-creation-core.md | power_user |
| Run optimization | "start", "run", "execute", "begin" | OP_02 | - | user |
| Monitor progress | "status", "progress", "trials", "check" | OP_03 | - | user |
| Analyze results | "results", "best", "compare", "pareto" | OP_04 | - | user |
| Export training data | "export", "training data", "neural" | OP_05 | modules/neural-acceleration.md | user |
| Debug issues | "error", "failed", "not working", "help" | OP_06 | - | user |
| **Disk management** | "disk", "space", "cleanup", "archive" | **OP_07** | modules/study-disk-optimization.md | user |
| Understand IMSO | "protocol 10", "IMSO", "adaptive" | SYS_10 | - | user |
| Multi-objective | "pareto", "NSGA", "multi-objective" | SYS_11 | - | user |
| Extractors | "extractor", "displacement", "stress" | SYS_12 | modules/extractors-catalog.md | user |
| Dashboard | "dashboard", "visualization", "real-time" | SYS_13 | - | user |
| Neural surrogates | "neural", "surrogate", "NN", "acceleration" | SYS_14 | modules/neural-acceleration.md | user |
| Add extractor | "create extractor", "new physics" | EXT_01 | - | power_user |
| Add hook | "create hook", "lifecycle", "callback" | EXT_02 | - | power_user |
| Add protocol | "create protocol", "new protocol" | EXT_03 | - | admin |
| Add skill | "create skill", "new skill" | EXT_04 | - | admin |
---
## Role Detection
Determine user's privilege level:
| Role | How to Detect | Can Do | Cannot Do |
|------|---------------|--------|-----------|
| **user** | Default for all sessions | Run studies, monitor, analyze, configure | Create extractors, modify protocols |
| **power_user** | User states they're a developer, or session context indicates it | Create extractors, add hooks | Create protocols, modify skills |
| **admin** | Explicit declaration, admin config present | Full access | - |
**Default**: Assume `user` unless explicitly told otherwise.
---
## Context Loading Rules
After classifying the task, load context in this order:
### 1. Always Loaded (via CLAUDE.md)
- This file (00_BOOTSTRAP.md)
- Python environment rules
- Code reuse protocol
### 2. Load Per Task Type
See `02_CONTEXT_LOADER.md` for complete loading rules.
**Quick Reference**:
```
CREATE_STUDY → core/study-creation-core.md (PRIMARY)
→ SYS_12_EXTRACTOR_LIBRARY.md (extractor reference)
→ modules/zernike-optimization.md (if telescope/mirror)
→ modules/neural-acceleration.md (if >50 trials)
RUN_OPTIMIZATION → OP_02_RUN_OPTIMIZATION.md
→ SYS_15_METHOD_SELECTOR.md (method recommendation)
→ SYS_14_NEURAL_ACCELERATION.md (if neural/turbo)
DEBUG → OP_06_TROUBLESHOOT.md
→ Relevant SYS_* based on error type
```
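The quick reference above can be read as a task-to-files table plus a few conditional modules. An illustrative sketch under those assumptions; the file lists are the ones named above, but the `loading_order` helper is hypothetical:

```python
# Loading order per task, taken from the quick reference above.
CONTEXT_FILES = {
    "CREATE_STUDY": [
        "core/study-creation-core.md",   # PRIMARY
        "SYS_12_EXTRACTOR_LIBRARY.md",   # extractor reference
    ],
    "RUN_OPTIMIZATION": [
        "OP_02_RUN_OPTIMIZATION.md",
        "SYS_15_METHOD_SELECTOR.md",     # method recommendation
    ],
    "DEBUG": ["OP_06_TROUBLESHOOT.md"],
}

def loading_order(task: str, flags=()) -> list:
    """Files to load, in order, plus the table's conditional modules."""
    files = list(CONTEXT_FILES.get(task, []))
    if task == "CREATE_STUDY" and "telescope" in flags:
        files.append("modules/zernike-optimization.md")   # if telescope/mirror
    if "neural" in flags:
        files.append("modules/neural-acceleration.md")    # if >50 trials / neural
    return files
```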
---
## Execution Framework
For ANY task, follow this pattern:
```
1. ANNOUNCE → State what you're about to do
2. VALIDATE → Check prerequisites are met
3. EXECUTE → Perform the action
4. VERIFY → Confirm success
5. REPORT → Summarize what was done
6. SUGGEST → Offer logical next steps
```
See `PROTOCOL_EXECUTION.md` for detailed execution rules.
---
## Emergency Quick Paths
### "I just want to run an optimization"
1. Do you have a `.prt` and `.sim` file? → Yes: OP_01 → OP_02
2. Getting errors? → OP_06
3. Want to see progress? → OP_03
### "Something broke"
1. Read the error message
2. Load OP_06_TROUBLESHOOT.md
3. Follow diagnostic flowchart
### "What did my optimization find?"
1. Load OP_04_ANALYZE_RESULTS.md
2. Query the study database
3. Generate report
---
## Protocol Directory Map
```
docs/protocols/
├── operations/ # Layer 2: How-to guides
│ ├── OP_01_CREATE_STUDY.md
│ ├── OP_02_RUN_OPTIMIZATION.md
│ ├── OP_03_MONITOR_PROGRESS.md
│ ├── OP_04_ANALYZE_RESULTS.md
│ ├── OP_05_EXPORT_TRAINING_DATA.md
│ └── OP_06_TROUBLESHOOT.md
├── system/ # Layer 3: Core specifications
│ ├── SYS_10_IMSO.md
│ ├── SYS_11_MULTI_OBJECTIVE.md
│ ├── SYS_12_EXTRACTOR_LIBRARY.md
│ ├── SYS_13_DASHBOARD_TRACKING.md
│ └── SYS_14_NEURAL_ACCELERATION.md
└── extensions/ # Layer 4: Extensibility guides
├── EXT_01_CREATE_EXTRACTOR.md
├── EXT_02_CREATE_HOOK.md
├── EXT_03_CREATE_PROTOCOL.md
├── EXT_04_CREATE_SKILL.md
└── templates/
```
---
## Key Constraints (Always Apply)
1. **Python Environment**: Always use `conda activate atomizer`
2. **Never modify master files**: Copy NX files to study working directory first
3. **Code reuse**: Check `optimization_engine/extractors/` before writing new extraction code
4. **Validation**: Always validate config before running optimization
5. **Documentation**: Every study needs README.md and STUDY_REPORT.md
---
## Next Steps After Bootstrap
1. If you know the task type → Go to relevant OP_* or SYS_* protocol
2. If unclear → Ask user clarifying question
3. If complex task → Read `01_CHEATSHEET.md` for quick reference
4. If need detailed loading rules → Read `02_CONTEXT_LOADER.md`
---
## Session Closing Checklist
Before ending a session, complete:
```
┌─────────────────────────────────────────────────────────────────────┐
│ SESSION CLOSING │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ 1. VERIFY WORK IS SAVED │
│ □ All files committed or saved │
│ □ Study configs are valid │
│ □ Any running processes noted │
│ │
│ 2. RECORD LEARNINGS TO LAC │
│ □ Any failures and their solutions → failure.jsonl │
│ □ Success patterns discovered → success_pattern.jsonl │
│ □ User preferences noted → user_preference.jsonl │
│ □ Protocol improvements → suggested_updates.jsonl │
│ │
│ 3. RECORD OPTIMIZATION OUTCOMES │
│ □ If optimization completed, record to optimization_memory/ │
│ □ Include: method, geometry_type, converged, convergence_trial │
│ │
│ 4. SUMMARIZE FOR USER │
│ □ What was accomplished │
│ □ Current state of any studies │
│ □ Recommended next steps │
│ │
└─────────────────────────────────────────────────────────────────────┘
```
### Session Summary Template
```markdown
# Session Summary
**Date**: {YYYY-MM-DD}
**Study Context**: {study_name or "General"}
## Accomplished
- {task 1}
- {task 2}
## Current State
- Study: {status}
- Trials: {N completed}
- Next action needed: {action}
## Learnings Recorded
- {insight 1}
## Recommended Next Steps
1. {step 1}
2. {step 2}
```


@@ -283,7 +283,7 @@ python -m optimization_engine.insights recommend studies/my_study
## Related Documentation
-- **Protocol Specification**: `docs/protocols/system/SYS_16_STUDY_INSIGHTS.md`
+- **Protocol Specification**: `docs/protocols/system/SYS_17_STUDY_INSIGHTS.md`
- **OPD Method Physics**: `docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md`
- **Zernike Integration**: `docs/ZERNIKE_INTEGRATION.md`
- **Extractor Catalog**: `.claude/skills/modules/extractors-catalog.md`

.gitignore vendored

@@ -22,6 +22,7 @@ wheels/
MANIFEST
.pytest_cache/
.coverage
+.coverage.*
htmlcov/
*.cover
.hypothesis/
@@ -35,6 +36,7 @@ env/
# IDEs
.vscode/
.idea/
+.obsidian/
*.swp
*.swo
*~


@@ -93,6 +93,7 @@ The Protocol Operating System (POS) provides layered documentation:
| Export neural data | OP_05 | `docs/protocols/operations/OP_05_EXPORT_TRAINING_DATA.md` |
| Debug issues | OP_06 | `docs/protocols/operations/OP_06_TROUBLESHOOT.md` |
| **Free disk space** | OP_07 | `docs/protocols/operations/OP_07_DISK_OPTIMIZATION.md` |
+| **Generate report** | OP_08 | `docs/protocols/operations/OP_08_GENERATE_REPORT.md` |
## System Protocols (Technical Specs)
@@ -104,6 +105,9 @@ The Protocol Operating System (POS) provides layered documentation:
| 13 | Dashboard | "dashboard", "real-time", monitoring |
| 14 | Neural Acceleration | >50 trials, "neural", "surrogate" |
| 15 | Method Selector | "which method", "recommend", "turbo vs" |
+| 16 | Self-Aware Turbo | "SAT", "turbo v3", high-efficiency optimization |
+| 17 | Study Insights | "insight", "visualization", physics analysis |
+| 18 | Context Engineering | "ACE", "playbook", session context |
**Full specs**: `docs/protocols/system/SYS_{N}_{NAME}.md`
@@ -156,8 +160,8 @@ git remote | xargs -L1 git push --all
Atomizer/
├── .claude/skills/ # LLM skills (Bootstrap + Core + Modules)
├── docs/protocols/ # Protocol Operating System
-│ ├── operations/ # OP_01 - OP_07
-│ ├── system/ # SYS_10 - SYS_15
+│ ├── operations/ # OP_01 - OP_08
+│ ├── system/ # SYS_10 - SYS_18
│ └── extensions/ # EXT_01 - EXT_04
├── optimization_engine/ # Core Python modules (v2.0)
│ ├── core/ # Optimization runners, method_selector, gradient_optimizer


@@ -315,6 +315,25 @@ Atomizer/
---
## For AI Assistants
Atomizer is designed for LLM-first interaction. Key resources:
- **[CLAUDE.md](CLAUDE.md)** - System instructions for Claude Code
- **[.claude/skills/](/.claude/skills/)** - LLM skill modules
- **[docs/protocols/](docs/protocols/)** - Protocol Operating System
### Knowledge Base (LAC)
The Learning Atomizer Core (`knowledge_base/lac/`) accumulates optimization knowledge:
- `session_insights/` - Learnings from past sessions
- `optimization_memory/` - Optimization outcomes by geometry type
- `playbook.json` - ACE framework knowledge store
For detailed AI interaction guidance, see CLAUDE.md.
---
## Environment
**Critical**: Always use the `atomizer` conda environment:


@@ -1,209 +0,0 @@
# Morning Summary - November 17, 2025
## Good Morning! Here's What Was Done While You Slept 🌅
### Critical Bugs Fixed ✅
**1. Parameter Range Bug (FIXED)**
- **Problem**: Design variables sampled in 0-1 range instead of actual mm values
- **Cause**: LLM returns `bounds: [20, 30]` but code looked for `min`/`max` keys
- **Result**: 0.27mm thickness instead of 27mm!
- **Fix**: Updated [llm_optimization_runner.py](optimization_engine/llm_optimization_runner.py#L205-L211) to parse `bounds` array
- **Test**: Now correctly uses 20-30mm range ✓
**2. Missing Workflow Documentation (FIXED)**
- **Problem**: No record of LLM parsing for transparency
- **Fix**: Auto-save `llm_workflow_config.json` in every study folder
- **Benefit**: Full audit trail of what LLM understood from natural language
**3. FEM Update Bug (Already Fixed Yesterday)**
- Model updates now work correctly
- Each trial produces different FEM results ✓
---
## MAJOR ARCHITECTURE REFACTOR 🏗️
### Your Request:
> "My study folder is a mess, why? I want some order and real structure to develop an insanely good engineering software that evolve with time."
### Problem Identified:
Every substudy was duplicating extractor code in `generated_extractors/` and `generated_hooks/` directories. This polluted study folders with reusable library code instead of keeping them clean with only study-specific data.
### Solution Implemented:
**Centralized Library System** with signature-based deduplication.
#### Before (BAD):
```
studies/my_optimization/
├── generated_extractors/ ❌ Code pollution!
│ ├── extract_displacement.py
│ ├── extract_von_mises_stress.py
│ └── extract_mass.py
├── generated_hooks/ ❌ Code pollution!
├── llm_workflow_config.json
└── optimization_results.json
```
#### After (GOOD):
```
optimization_engine/extractors/ ✓ Core library
├── extract_displacement.py
├── extract_von_mises_stress.py
├── extract_mass.py
└── catalog.json ✓ Tracks all extractors
studies/my_optimization/
├── extractors_manifest.json ✓ Just references!
├── llm_workflow_config.json ✓ Study config
├── optimization_results.json ✓ Results only
└── optimization_history.json ✓ History only
```
### Key Benefits:
1. **Clean Study Folders**: Only metadata (no code!)
2. **Code Reuse**: Same extractor = single file in library
3. **Deduplication**: Automatic via signature matching
4. **Professional**: Production-grade architecture
5. **Evolves**: Library grows, studies stay clean
---
## New Components Created
### 1. ExtractorLibrary (`extractor_library.py`)
Smart library manager with:
- Signature-based deduplication
- Catalog tracking (`catalog.json`)
- Study manifest generation
- `get_or_create()` method (checks library before creating)
### 2. Updated Components
- **ExtractorOrchestrator**: Now uses core library instead of per-study generation
- **LLMOptimizationRunner**: Removed `generated_extractors/` and `generated_hooks/` directory creation
- **Test Suite**: Updated to verify clean study structure
---
## Test Results
### E2E Test: ✅ PASSED (18/18 checks)
**Study folder now contains**:
- ✅ `extractors_manifest.json` (references to core library)
- ✅ `llm_workflow_config.json` (what LLM understood)
- ✅ `optimization_results.json` (best design)
- ✅ `optimization_history.json` (all trials)
- ✅ `.db` file (Optuna database)
**Core library now contains**:
- ✅ `extract_displacement.py` (reusable!)
- ✅ `extract_von_mises_stress.py` (reusable!)
- ✅ `extract_mass.py` (reusable!)
- ✅ `catalog.json` (tracks everything)
**NO MORE**:
- ❌ `generated_extractors/` pollution
- ❌ `generated_hooks/` pollution
- ❌ Duplicate extractor code
---
## Commits Made
**1. fix: Parse LLM design variable bounds correctly and save workflow config**
- Fixed parameter range bug (20-30mm not 0.2-1.0mm)
- Added workflow config saving for transparency
**2. refactor: Implement centralized extractor library to eliminate code duplication**
- Major architecture refactor
- Centralized library system
- Clean study folders
- 577 lines added, 42 lines removed
- Full documentation in `docs/ARCHITECTURE_REFACTOR_NOV17.md`
---
## Documentation Created
### [ARCHITECTURE_REFACTOR_NOV17.md](docs/ARCHITECTURE_REFACTOR_NOV17.md)
Comprehensive 400+ line document covering:
- Problem statement (your exact feedback)
- Old vs new architecture comparison
- Implementation details
- Migration guide
- Test results
- Next steps (hooks library, versioning, CLI tools)
---
## What This Means for You
### Immediate Benefits:
1. **Clean Studies**: Your study folders are now professional and minimal
2. **Code Reuse**: Core library grows, studies don't duplicate code
3. **Transparency**: Every study has full audit trail (workflow config)
4. **Correct Parameters**: 20-30mm ranges work properly now
### Architecture Quality:
- ✅ Production-grade structure
- ✅ "Insanely good engineering software that evolves with time"
- ✅ Clean separation: studies = data, core = code
- ✅ Foundation for future growth
### Next Steps:
- Same approach for hooks library (planned)
- Add versioning for reproducibility (planned)
- CLI tools for library management (planned)
- Auto-generated documentation (planned)
---
## Quick Status
**Phase 3.2 Week 1**: ✅ COMPLETE
- [OK] Task 1.2: Wire LLMOptimizationRunner to production
- [OK] Task 1.3: Create minimal working example
- [OK] Task 1.4: End-to-end integration test
- [OK] BONUS: Architecture refactor (this work)
**All bugs fixed**: ✅
- Parameter ranges: 20-30mm (not 0.2-1.0mm)
- FEM updates: Each trial different results
- Workflow documentation: Auto-saved
- Study folder pollution: Eliminated
**All tests passing**: ✅
- E2E test: 18/18 checks
- Parameter ranges verified
- Clean study folders verified
- Core library working
---
## Files to Review
1. **docs/ARCHITECTURE_REFACTOR_NOV17.md** - Full architecture explanation
2. **optimization_engine/extractor_library.py** - New library manager
3. **studies/.../extractors_manifest.json** - Example of clean study folder
---
## Ready for Next Steps
Your vision of "insanely good engineering software that evolves with time" is now in place for extractors. The architecture is production-grade, clean, and ready to scale.
Same approach can be applied to hooks, then documentation generation, then versioning - all building on this solid foundation.
Have a great morning! ☕
---
**Commits**: 3 total today
**Files Changed**: 8 (5 modified, 3 created)
**Lines Added**: 600+
**Architecture**: Transformed from messy to production-grade
**Tests**: All passing ✅
**Documentation**: Comprehensive ✅
Sleep well achieved! 😴 → 🎯


@@ -1,29 +0,0 @@
[
{
"iteration": 1,
"n_training_samples": 55,
"confidence_score": 0.48,
"mass_mape": 5.199446351686856,
"freq_mape": 46.23527454811865,
"avg_selected_uncertainty": 0.3559015095233917,
"status": "LOW_CONFIDENCE"
},
{
"iteration": 2,
"n_training_samples": 60,
"confidence_score": 0.6,
"mass_mape": 5.401324621678541,
"freq_mape": 88.80499920325646,
"avg_selected_uncertainty": 0.23130142092704772,
"status": "MEDIUM_CONFIDENCE"
},
{
"iteration": 3,
"n_training_samples": 65,
"confidence_score": 0.6,
"mass_mape": 4.867728649442469,
"freq_mape": 76.78009245481465,
"avg_selected_uncertainty": 0.17344236522912979,
"status": "MEDIUM_CONFIDENCE"
}
]


@@ -1,530 +0,0 @@
fatal: not a git repository (or any of the parent directories): .git
================================================================================
ATOMIZER - INTERACTIVE OPTIMIZATION SETUP
================================================================================
[Atomizer] Welcome to Atomizer! I'll help you set up your optimization.
[Atomizer] First, I need to know about your model files.
[User] I have a bracket model:
- Part file: tests\Bracket.prt
- Simulation file: tests\Bracket_sim1.sim
[Atomizer] Great! Let me initialize the Setup Wizard to analyze your model...
================================================================================
STEP 1: MODEL INTROSPECTION
================================================================================
[Atomizer] I'm reading your NX model to find available design parameters...
[Atomizer] Found 4 expressions in your model:
- support_angle: 32.0 degrees
- tip_thickness: 24.0 mm
- p3: 10.0 mm
- support_blend_radius: 10.0 mm
[Atomizer] Which parameters would you like to use as design variables?
[User] I want to optimize tip_thickness and support_angle
[Atomizer] Perfect! Now, what's your optimization goal?
[User] I want to maximize displacement while keeping stress below
a safety factor of 4. The material is Aluminum 6061-T6.
================================================================================
STEP 2: BASELINE SIMULATION
================================================================================
[Atomizer] To validate your setup, I need to run ONE baseline simulation.
[Atomizer] This will generate an OP2 file that I can analyze to ensure
[Atomizer] the extraction pipeline will work correctly.
[Atomizer]
[Atomizer] Running baseline simulation with current parameter values...
[NX SOLVER] Starting simulation...
Input file: Bracket_sim1.sim
Working dir: tests
Mode: Journal
Deleted 3 old result file(s) to force fresh solve
[JOURNAL OUTPUT]
Mesh-Based Errors Summary
-------------------------
Total: 0 errors and 0 warnings
Material-Based Errors Summary
-----------------------------
Total: 0 errors and 0 warnings
Solution-Based Errors Summary
-----------------------------
Iterative Solver Option
More than 80 percent of the elements in this model are 3D elements.
It is therefore recommended that you turn ON the Element Iterative Solver in the "Edit
Solution" dialog.
Total: 0 errors and 0 warnings
Load/BC-Based Errors Summary
----------------------------
Total: 0 errors and 0 warnings
Nastran Model Setup Check completed
*** 20:33:59 ***
Starting Nastran Exporter
*** 20:33:59 ***
Writing file
c:\Users\antoi\Documents\Atomaste\Atomizer\tests\bracket_sim1-solution_1.dat
*** 20:33:59 ***
Writing SIMCENTER NASTRAN 2412.0 compatible deck
*** 20:33:59 ***
Writing Nastran System section
*** 20:33:59 ***
Writing File Management section
*** 20:33:59 ***
Writing Executive Control section
*** 20:33:59 ***
Writing Case Control section
*** 20:33:59 ***
Writing Bulk Data section
*** 20:33:59 ***
Writing Nodes
*** 20:33:59 ***
Writing Elements
*** 20:33:59 ***
Writing Physical Properties
*** 20:33:59 ***
Writing Materials
*** 20:33:59 ***
Writing Degree-of-Freedom Sets
*** 20:33:59 ***
Writing Loads and Constraints
*** 20:33:59 ***
Writing Coordinate Systems
*** 20:33:59 ***
Validating Solution Setup
*** 20:33:59 ***
Summary of Bulk Data cards written
+----------+----------+
| NAME | NUMBER |
+----------+----------+
| CHEXA | 306 |
| CPENTA | 10 |
| FORCE | 3 |
| GRID | 585 |
| MAT1 | 1 |
| MATT1 | 1 |
| PARAM | 6 |
| PSOLID | 1 |
| SPC | 51 |
| TABLEM1 | 3 |
+----------+----------+
*** 20:33:59 ***
Nastran Deck Successfully Written
[JOURNAL] Opening simulation: c:\Users\antoi\Documents\Atomaste\Atomizer\tests\Bracket_sim1.sim
[JOURNAL] Checking for open parts...
[JOURNAL] Opening simulation fresh from disk...
[JOURNAL] STEP 1: Updating Bracket.prt geometry...
[JOURNAL] Rebuilding geometry with new expression values...
[JOURNAL] Bracket geometry updated (0 errors)
[JOURNAL] STEP 2: Opening Bracket_fem1.fem...
[JOURNAL] Updating FE Model...
[JOURNAL] FE Model updated with new geometry!
[JOURNAL] STEP 3: Switching back to sim part...
[JOURNAL] Switched back to sim part
[JOURNAL] Starting solve...
[JOURNAL] Solve submitted!
[JOURNAL] Solutions solved: -1779619210
[JOURNAL] Solutions failed: 32764
[JOURNAL] Solutions skipped: 1218183744
[JOURNAL] Saving simulation to ensure output files are written...
[JOURNAL] Save complete!
[JOURNAL ERRORS]
Journal execution results for c:\Users\antoi\Documents\Atomaste\Atomizer\tests\_temp_solve_journal.py...
Syntax errors:
Line 0: Traceback (most recent call last):
File "c:\Users\antoi\Documents\Atomaste\Atomizer\tests\_temp_solve_journal.py", line 247, in <module>
sys.exit(0 if success else 1)
[NX SOLVER] Waiting for solve to complete...
[NX SOLVER] Output files detected after 0.5s
[NX SOLVER] Complete in 4.3s
[NX SOLVER] Results: bracket_sim1-solution_1.op2
[Atomizer] Baseline simulation complete! OP2 file: bracket_sim1-solution_1.op2
================================================================================
STEP 3: OP2 INTROSPECTION
================================================================================
[Atomizer] Now I'll analyze the OP2 file to see what's actually in there...
DEBUG: op2.py:614 combine=True
DEBUG: op2.py:615 -------- reading op2 with read_mode=1 (array sizing) --------
INFO: op2_scalar.py:1960 op2_filename = 'tests\\bracket_sim1-solution_1.op2'
DEBUG: op2_reader.py:323 date = (11, 16, 25)
WARNING: version.py:88 nx version='2412' is not supported
DEBUG: op2_reader.py:403 mode='nx' version='2412'
DEBUG: op2_scalar.py:2173 table_name=b'IBULK' (explicit bulk data)
DEBUG: op2_scalar.py:2173 table_name=b'ICASE' (explicit case control)
DEBUG: op2_scalar.py:2173 table_name=b'CASECC' (case control)
DEBUG: op2_scalar.py:2173 table_name=b'PVT0' (PARAM cards)
DEBUG: op2_scalar.py:2173 table_name=b'GPL' (grid point list)
DEBUG: op2_scalar.py:2173 table_name=b'GPDT' (grid point locations)
DEBUG: op2_scalar.py:2173 table_name=b'EPT' (property cards)
DEBUG: op2_scalar.py:2173 table_name=b'MPT' (material cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM2' (element cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM3' (constraint cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM4' (load cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM1' (grid/coord cards)
DEBUG: op2_scalar.py:2173 table_name=b'BGPDT' (grid points in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'DIT' (TABLEx cards)
DEBUG: op2_scalar.py:2173 table_name=b'EQEXIN' (internal/external ids)
DEBUG: op2_reader.py:672 eqexin idata=(101, 585, 0, 0, 0, 0, 0)
DEBUG: op2_scalar.py:2173 table_name=b'OQG1' (spc/mpc forces)
DEBUG: op2_scalar.py:2173 table_name=b'BOUGV1' (g-set U in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'OES1' (linear stress)
DEBUG: oes.py:2840 numwide_real=193
DEBUG: oes.py:2840 numwide_real=151
DEBUG: op2.py:634 -------- reading op2 with read_mode=2 (array filling) --------
DEBUG: op2_reader.py:323 date = (11, 16, 25)
WARNING: version.py:88 nx version='2412' is not supported
DEBUG: op2_scalar.py:2173 table_name=b'IBULK' (explicit bulk data)
DEBUG: op2_scalar.py:2173 table_name=b'ICASE' (explicit case control)
DEBUG: op2_scalar.py:2173 table_name=b'CASECC' (case control)
DEBUG: op2_scalar.py:2173 table_name=b'PVT0' (PARAM cards)
DEBUG: op2_scalar.py:2173 table_name=b'GPL' (grid point list)
DEBUG: op2_scalar.py:2173 table_name=b'GPDT' (grid point locations)
DEBUG: op2_scalar.py:2173 table_name=b'EPT' (property cards)
DEBUG: op2_scalar.py:2173 table_name=b'MPT' (material cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM2' (element cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM3' (constraint cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM4' (load cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM1' (grid/coord cards)
DEBUG: op2_scalar.py:2173 table_name=b'BGPDT' (grid points in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'DIT' (TABLEx cards)
DEBUG: op2_scalar.py:2173 table_name=b'EQEXIN' (internal/external ids)
DEBUG: op2_reader.py:672 eqexin idata=(101, 585, 0, 0, 0, 0, 0)
DEBUG: op2_scalar.py:2173 table_name=b'OQG1' (spc/mpc forces)
DEBUG: op2_scalar.py:2173 table_name=b'BOUGV1' (g-set U in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'OES1' (linear stress)
DEBUG: oes.py:2840 numwide_real=193
DEBUG: oes.py:2840 numwide_real=151
DEBUG: op2.py:932 combine_results
DEBUG: op2.py:648 finished reading op2
[Atomizer] Here's what I found in your OP2 file:
- Element types with stress: CHEXA, CPENTA
- Available results: displacement, stress
- Number of elements: 0
- Number of nodes: 0
================================================================================
STEP 4: LLM-GUIDED CONFIGURATION
================================================================================
[Atomizer] Based on your goal and the OP2 contents, here's what I recommend:
[Atomizer]
[Atomizer] OBJECTIVE:
[Atomizer] - Maximize displacement (minimize negative displacement)
[Atomizer]
[Atomizer] EXTRACTIONS:
[Atomizer] - Extract displacement from OP2
[Atomizer] - Extract stress from CHEXA elements
[Atomizer] (I detected these element types in your model)
[Atomizer]
[Atomizer] CALCULATIONS:
[Atomizer] - Calculate safety factor: SF = 276 MPa / max_stress
[Atomizer] - Negate displacement for minimization
[Atomizer]
[Atomizer] CONSTRAINT:
[Atomizer] - Enforce SF >= 4.0 with penalty
[Atomizer]
[Atomizer] DESIGN VARIABLES:
[Atomizer] - tip_thickness: 24.0 mm (suggest range: 15-25 mm)
[Atomizer] - support_angle: 32.0 degrees (suggest range: 20-40 deg)
[User] That looks good! Let's use those ranges.
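The objective and constraint logic proposed above (negated displacement, SF = 276 MPa / max_stress, SF >= 4.0 enforced with a penalty) can be sketched as follows. This is a minimal illustration, not the engine's actual API: the function name, argument names, and the penalty weight of 1000 are assumptions.

```python
def evaluate_objective(max_stress_mpa: float, max_displacement_mm: float,
                       yield_strength_mpa: float = 276.0,
                       sf_limit: float = 4.0,
                       penalty_weight: float = 1000.0) -> float:
    """Negated-displacement objective with a safety-factor penalty.

    Minimizers expect a value to minimize, so maximizing displacement
    becomes minimizing its negation; constraint violations are penalized
    proportionally to their magnitude.
    """
    safety_factor = yield_strength_mpa / max_stress_mpa
    objective = -max_displacement_mm
    if safety_factor < sf_limit:
        # Penalty grows with how far the design is below the SF limit.
        objective += penalty_weight * (sf_limit - safety_factor)
    return objective
```

A design at exactly SF = 4.0 incurs no penalty; one at SF = 2.0 is dominated by the penalty term, which steers the optimizer back toward feasible designs.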
================================================================================
STEP 5: PIPELINE VALIDATION (DRY RUN)
================================================================================
[Atomizer] Before running 20-30 optimization trials, let me validate that
[Atomizer] EVERYTHING works correctly with your baseline OP2 file...
[Atomizer]
[Atomizer] Running dry-run validation...
DEBUG: op2.py:614 combine=True
DEBUG: op2.py:615 -------- reading op2 with read_mode=1 (array sizing) --------
INFO: op2_scalar.py:1960 op2_filename = 'tests\\bracket_sim1-solution_1.op2'
DEBUG: op2_reader.py:323 date = (11, 16, 25)
WARNING: version.py:88 nx version='2412' is not supported
DEBUG: op2_reader.py:403 mode='nx' version='2412'
DEBUG: op2_scalar.py:2173 table_name=b'IBULK' (explicit bulk data)
DEBUG: op2_scalar.py:2173 table_name=b'ICASE' (explicit case control)
DEBUG: op2_scalar.py:2173 table_name=b'CASECC' (case control)
DEBUG: op2_scalar.py:2173 table_name=b'PVT0' (PARAM cards)
DEBUG: op2_scalar.py:2173 table_name=b'GPL' (grid point list)
DEBUG: op2_scalar.py:2173 table_name=b'GPDT' (grid point locations)
DEBUG: op2_scalar.py:2173 table_name=b'EPT' (property cards)
DEBUG: op2_scalar.py:2173 table_name=b'MPT' (material cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM2' (element cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM3' (constraint cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM4' (load cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM1' (grid/coord cards)
DEBUG: op2_scalar.py:2173 table_name=b'BGPDT' (grid points in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'DIT' (TABLEx cards)
DEBUG: op2_scalar.py:2173 table_name=b'EQEXIN' (internal/external ids)
DEBUG: op2_reader.py:672 eqexin idata=(101, 585, 0, 0, 0, 0, 0)
DEBUG: op2_scalar.py:2173 table_name=b'OQG1' (spc/mpc forces)
DEBUG: op2_scalar.py:2173 table_name=b'BOUGV1' (g-set U in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'OES1' (linear stress)
DEBUG: oes.py:2840 numwide_real=193
DEBUG: oes.py:2840 numwide_real=151
DEBUG: op2.py:634 -------- reading op2 with read_mode=2 (array filling) --------
DEBUG: op2_reader.py:323 date = (11, 16, 25)
WARNING: version.py:88 nx version='2412' is not supported
DEBUG: op2_scalar.py:2173 table_name=b'IBULK' (explicit bulk data)
DEBUG: op2_scalar.py:2173 table_name=b'ICASE' (explicit case control)
DEBUG: op2_scalar.py:2173 table_name=b'CASECC' (case control)
DEBUG: op2_scalar.py:2173 table_name=b'PVT0' (PARAM cards)
DEBUG: op2_scalar.py:2173 table_name=b'GPL' (grid point list)
DEBUG: op2_scalar.py:2173 table_name=b'GPDT' (grid point locations)
DEBUG: op2_scalar.py:2173 table_name=b'EPT' (property cards)
DEBUG: op2_scalar.py:2173 table_name=b'MPT' (material cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM2' (element cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM3' (constraint cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM4' (load cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM1' (grid/coord cards)
DEBUG: op2_scalar.py:2173 table_name=b'BGPDT' (grid points in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'DIT' (TABLEx cards)
DEBUG: op2_scalar.py:2173 table_name=b'EQEXIN' (internal/external ids)
DEBUG: op2_reader.py:672 eqexin idata=(101, 585, 0, 0, 0, 0, 0)
DEBUG: op2_scalar.py:2173 table_name=b'OQG1' (spc/mpc forces)
DEBUG: op2_scalar.py:2173 table_name=b'BOUGV1' (g-set U in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'OES1' (linear stress)
DEBUG: oes.py:2840 numwide_real=193
DEBUG: oes.py:2840 numwide_real=151
DEBUG: op2.py:932 combine_results
DEBUG: op2.py:648 finished reading op2
DEBUG: op2.py:614 combine=True
DEBUG: op2.py:615 -------- reading op2 with read_mode=1 (array sizing) --------
INFO: op2_scalar.py:1960 op2_filename = 'tests\\bracket_sim1-solution_1.op2'
DEBUG: op2_reader.py:323 date = (11, 16, 25)
Extraction failed: extract_solid_stress - No ctetra stress results in OP2
❌ extract_solid_stress: No ctetra stress results in OP2
❌ calculate_safety_factor: name 'max_von_mises' is not defined
Required input 'min_force' not found in context
Hook 'ratio_hook' failed: Missing required input: min_force
Traceback (most recent call last):
File "c:\Users\antoi\Documents\Atomaste\Atomizer\optimization_engine\plugins\hooks.py", line 72, in execute
result = self.function(context)
^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\antoi\Documents\Atomaste\Atomizer\optimization_engine\plugins\post_calculation\min_to_avg_ratio_hook.py", line 38, in ratio_hook
raise ValueError(f"Missing required input: min_force")
ValueError: Missing required input: min_force
Hook 'ratio_hook' failed: Missing required input: min_force
Required input 'max_stress' not found in context
Hook 'safety_factor_hook' failed: Missing required input: max_stress
Traceback (most recent call last):
File "c:\Users\antoi\Documents\Atomaste\Atomizer\optimization_engine\plugins\hooks.py", line 72, in execute
result = self.function(context)
^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\antoi\Documents\Atomaste\Atomizer\optimization_engine\plugins\post_calculation\safety_factor_hook.py", line 38, in safety_factor_hook
raise ValueError(f"Missing required input: max_stress")
ValueError: Missing required input: max_stress
Hook 'safety_factor_hook' failed: Missing required input: max_stress
Required input 'norm_stress' not found in context
Hook 'weighted_objective_hook' failed: Missing required input: norm_stress
Traceback (most recent call last):
File "c:\Users\antoi\Documents\Atomaste\Atomizer\optimization_engine\plugins\hooks.py", line 72, in execute
result = self.function(context)
^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\antoi\Documents\Atomaste\Atomizer\optimization_engine\plugins\post_calculation\weighted_objective_test.py", line 38, in weighted_objective_hook
raise ValueError(f"Missing required input: norm_stress")
ValueError: Missing required input: norm_stress
Hook 'weighted_objective_hook' failed: Missing required input: norm_stress
⚠️ No explicit objective, using: max_displacement
WARNING: version.py:88 nx version='2412' is not supported
DEBUG: op2_reader.py:403 mode='nx' version='2412'
DEBUG: op2_scalar.py:2173 table_name=b'IBULK' (explicit bulk data)
DEBUG: op2_scalar.py:2173 table_name=b'ICASE' (explicit case control)
DEBUG: op2_scalar.py:2173 table_name=b'CASECC' (case control)
DEBUG: op2_scalar.py:2173 table_name=b'PVT0' (PARAM cards)
DEBUG: op2_scalar.py:2173 table_name=b'GPL' (grid point list)
DEBUG: op2_scalar.py:2173 table_name=b'GPDT' (grid point locations)
DEBUG: op2_scalar.py:2173 table_name=b'EPT' (property cards)
DEBUG: op2_scalar.py:2173 table_name=b'MPT' (material cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM2' (element cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM3' (constraint cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM4' (load cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM1' (grid/coord cards)
DEBUG: op2_scalar.py:2173 table_name=b'BGPDT' (grid points in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'DIT' (TABLEx cards)
DEBUG: op2_scalar.py:2173 table_name=b'EQEXIN' (internal/external ids)
DEBUG: op2_reader.py:672 eqexin idata=(101, 585, 0, 0, 0, 0, 0)
DEBUG: op2_scalar.py:2173 table_name=b'OQG1' (spc/mpc forces)
DEBUG: op2_scalar.py:2173 table_name=b'BOUGV1' (g-set U in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'OES1' (linear stress)
DEBUG: oes.py:2840 numwide_real=193
DEBUG: oes.py:2840 numwide_real=151
DEBUG: op2.py:634 -------- reading op2 with read_mode=2 (array filling) --------
DEBUG: op2_reader.py:323 date = (11, 16, 25)
WARNING: version.py:88 nx version='2412' is not supported
DEBUG: op2_scalar.py:2173 table_name=b'IBULK' (explicit bulk data)
DEBUG: op2_scalar.py:2173 table_name=b'ICASE' (explicit case control)
DEBUG: op2_scalar.py:2173 table_name=b'CASECC' (case control)
DEBUG: op2_scalar.py:2173 table_name=b'PVT0' (PARAM cards)
DEBUG: op2_scalar.py:2173 table_name=b'GPL' (grid point list)
DEBUG: op2_scalar.py:2173 table_name=b'GPDT' (grid point locations)
DEBUG: op2_scalar.py:2173 table_name=b'EPT' (property cards)
DEBUG: op2_scalar.py:2173 table_name=b'MPT' (material cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM2' (element cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM3' (constraint cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM4' (load cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM1' (grid/coord cards)
DEBUG: op2_scalar.py:2173 table_name=b'BGPDT' (grid points in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'DIT' (TABLEx cards)
DEBUG: op2_scalar.py:2173 table_name=b'EQEXIN' (internal/external ids)
DEBUG: op2_reader.py:672 eqexin idata=(101, 585, 0, 0, 0, 0, 0)
DEBUG: op2_scalar.py:2173 table_name=b'OQG1' (spc/mpc forces)
DEBUG: op2_scalar.py:2173 table_name=b'BOUGV1' (g-set U in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'OES1' (linear stress)
DEBUG: oes.py:2840 numwide_real=193
DEBUG: oes.py:2840 numwide_real=151
DEBUG: op2.py:932 combine_results
DEBUG: op2.py:648 finished reading op2
[Wizard] VALIDATION RESULTS:
[Wizard] [OK] extractor: ['max_displacement', 'max_disp_node', 'max_disp_x', 'max_disp_y', 'max_disp_z']
[Wizard] [FAIL] extractor: No ctetra stress results in OP2
[Wizard] [FAIL] calculation: name 'max_von_mises' is not defined
[Wizard] [OK] calculation: Created ['neg_displacement']
[Wizard] [OK] hook: 3 results
[Wizard] [OK] objective: 0.36178338527679443
================================================================================
VALIDATION FAILED!
================================================================================
[Atomizer] The validation found issues that need to be fixed:
Traceback (most recent call last):
File "c:\Users\antoi\Documents\Atomaste\Atomizer\tests\interactive_optimization_setup.py", line 324, in <module>
main()
File "c:\Users\antoi\Documents\Atomaste\Atomizer\tests\interactive_optimization_setup.py", line 316, in main
print(f" [ERROR] {result.message}")
File "C:\Users\antoi\anaconda3\envs\test_env\Lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'charmap' codec can't encode character '\u274c' in position 19: character maps to <undefined>
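The UnicodeEncodeError above comes from printing emoji status markers (U+274C) through the Windows cp1252 console codec. One common workaround — a sketch, not necessarily how Atomizer resolves it — is to reconfigure stdout to UTF-8 with replacement fallback before any printing:

```python
import sys

def make_stdout_unicode_safe() -> None:
    """Reconfigure stdout so characters like '\u274c' don't crash cp1252 consoles."""
    if hasattr(sys.stdout, "reconfigure"):  # available on TextIOWrapper, Python 3.7+
        # errors="replace" keeps printing alive even when the terminal
        # encoding cannot represent a character.
        sys.stdout.reconfigure(encoding="utf-8", errors="replace")

make_stdout_unicode_safe()
print("\u274c extract_solid_stress: no CTETRA stress results")
```

Setting the `PYTHONIOENCODING=utf-8` environment variable before launching the script achieves the same effect without code changes.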

File diff suppressed because it is too large


@@ -1,122 +0,0 @@
fatal: not a git repository (or any of the parent directories): .git
================================================================================
HYBRID MODE - AUTOMATED STUDY CREATION
================================================================================
[1/5] Creating study structure...
[OK] Study directory: circular_plate_frequency_tuning
[2/5] Copying model files...
[OK] Copied 4 files
[3/5] Installing workflow configuration...
[OK] Workflow: circular_plate_frequency_tuning
[OK] Variables: 2
[OK] Objectives: 1
[4/5] Running benchmarking (validating simulation setup)...
Running INTELLIGENT benchmarking...
- Solving ALL solutions in .sim file
- Discovering all available results
- Matching objectives to results
================================================================================
INTELLIGENT SETUP - COMPLETE ANALYSIS
================================================================================
[Phase 1/4] Extracting ALL expressions from model...
[NX] Exporting expressions from Circular_Plate.prt to .exp format...
[OK] Expressions exported to: c:\Users\antoi\Documents\Atomaste\Atomizer\studies\circular_plate_frequency_tuning\1_setup\model\Circular_Plate_expressions.exp
[OK] Found 4 expressions
- inner_diameter: 130.24581665835925 MilliMeter
- p0: None MilliMeter
- p1: 0.0 MilliMeter
- plate_thickness: 5.190705791851906 MilliMeter
[Phase 2/4] Solving ALL solutions in .sim file...
[OK] Solved 0 solutions
[Phase 3/4] Analyzing ALL result files...
DEBUG: op2.py:614 combine=True
DEBUG: op2.py:615 -------- reading op2 with read_mode=1 (array sizing) --------
INFO: op2_scalar.py:1960 op2_filename = 'c:\\Users\\antoi\\Documents\\Atomaste\\Atomizer\\studies\\circular_plate_frequency_tuning\\1_setup\\model\\circular_plate_sim1-solution_1.op2'
DEBUG: op2_reader.py:323 date = (11, 18, 25)
WARNING: version.py:88 nx version='2412' is not supported
DEBUG: op2_reader.py:403 mode='nx' version='2412'
DEBUG: op2_scalar.py:2173 table_name=b'IBULK' (explicit bulk data)
DEBUG: op2_scalar.py:2173 table_name=b'ICASE' (explicit case control)
DEBUG: op2_scalar.py:2173 table_name=b'CASECC' (case control)
DEBUG: op2_scalar.py:2173 table_name=b'PVT0' (PARAM cards)
DEBUG: op2_scalar.py:2173 table_name=b'GPL' (grid point list)
DEBUG: op2_scalar.py:2173 table_name=b'GPDT' (grid point locations)
DEBUG: op2_scalar.py:2173 table_name=b'EPT' (property cards)
DEBUG: op2_scalar.py:2173 table_name=b'MPT' (material cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM2' (element cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM4' (load cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM1' (grid/coord cards)
DEBUG: op2_scalar.py:2173 table_name=b'BGPDT' (grid points in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'EQEXIN' (internal/external ids)
DEBUG: op2_reader.py:672 eqexin idata=(101, 613, 0, 0, 0, 0, 0)
DEBUG: op2_scalar.py:2173 table_name=b'OQG1' (spc/mpc forces)
DEBUG: op2_scalar.py:2173 table_name=b'BOUGV1' (g-set U in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'OES1' (linear stress)
DEBUG: op2.py:634 -------- reading op2 with read_mode=2 (array filling) --------
DEBUG: op2_reader.py:323 date = (11, 18, 25)
WARNING: version.py:88 nx version='2412' is not supported
DEBUG: op2_scalar.py:2173 table_name=b'IBULK' (explicit bulk data)
DEBUG: op2_scalar.py:2173 table_name=b'ICASE' (explicit case control)
DEBUG: op2_scalar.py:2173 table_name=b'CASECC' (case control)
DEBUG: op2_scalar.py:2173 table_name=b'PVT0' (PARAM cards)
DEBUG: op2_scalar.py:2173 table_name=b'GPL' (grid point list)
DEBUG: op2_scalar.py:2173 table_name=b'GPDT' (grid point locations)
DEBUG: op2_scalar.py:2173 table_name=b'EPT' (property cards)
DEBUG: op2_scalar.py:2173 table_name=b'MPT' (material cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM2' (element cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM4' (load cards)
DEBUG: op2_scalar.py:2173 table_name=b'GEOM1' (grid/coord cards)
DEBUG: op2_scalar.py:2173 table_name=b'BGPDT' (grid points in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'EQEXIN' (internal/external ids)
DEBUG: op2_reader.py:672 eqexin idata=(101, 613, 0, 0, 0, 0, 0)
DEBUG: op2_scalar.py:2173 table_name=b'OQG1' (spc/mpc forces)
DEBUG: op2_scalar.py:2173 table_name=b'BOUGV1' (g-set U in cid=0 frame)
DEBUG: op2_scalar.py:2173 table_name=b'OES1' (linear stress)
DEBUG: op2.py:932 combine_results
DEBUG: op2.py:648 finished reading op2
[OK] Found 1 result files
- displacements: 613 entries in circular_plate_sim1-solution_1.op2
[Phase 4/4] Matching objectives to available results...
[OK] Objective mapping complete
- frequency_error
Solution: NONE
Result type: eigenvalues
Extractor: extract_first_frequency
================================================================================
ANALYSIS COMPLETE
================================================================================
[OK] Expressions found: 4
[OK] Solutions found: 4
[OK] Results discovered: 1
[OK] Objectives matched: 1
- frequency_error: eigenvalues from 'NONE' (ERROR confidence)
[OK] Simulation validated
[OK] Extracted 0 results
[4.5/5] Generating configuration report...
Traceback (most recent call last):
File "c:\Users\antoi\Documents\Atomaste\Atomizer\create_circular_plate_study.py", line 70, in <module>
main()
File "c:\Users\antoi\Documents\Atomaste\Atomizer\create_circular_plate_study.py", line 52, in main
study_dir = creator.create_from_workflow(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\antoi\Documents\Atomaste\Atomizer\optimization_engine\hybrid_study_creator.py", line 100, in create_from_workflow
self._generate_configuration_report(study_dir, workflow, benchmark_results)
File "c:\Users\antoi\Documents\Atomaste\Atomizer\optimization_engine\hybrid_study_creator.py", line 757, in _generate_configuration_report
f.write(content)
File "C:\Users\antoi\anaconda3\envs\test_env\Lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'charmap' codec can't encode characters in position 1535-1536: character maps to <undefined>


@@ -1,19 +0,0 @@
{
"stats": {
"total_time": 3.4194347858428955,
"avg_trial_time_ms": 0.25935888290405273,
"trials_per_second": 584.8919851550838,
"extrapolation_count": 1757,
"extrapolation_pct": 87.85
},
"pareto_summary": {
"total": 407,
"confident": 0,
"needs_fea": 407
},
"top_designs": [
{
"mass": 717.7724576426426,
"frequency": 15.794277791999079,
"uncertainty": 0.3945587883026621,
"needs_fea":


@@ -0,0 +1,109 @@
import React from 'react';
import { CheckCircle, PauseCircle, StopCircle, PlayCircle, AlertCircle } from 'lucide-react';
export type OptimizationStatus = 'running' | 'paused' | 'stopped' | 'completed' | 'error' | 'not_started';
interface StatusBadgeProps {
status: OptimizationStatus;
showLabel?: boolean;
size?: 'sm' | 'md' | 'lg';
pulse?: boolean;
}
const statusConfig = {
running: {
label: 'Running',
icon: PlayCircle,
dotClass: 'bg-green-500',
textClass: 'text-green-400',
bgClass: 'bg-green-500/10 border-green-500/30',
},
paused: {
label: 'Paused',
icon: PauseCircle,
dotClass: 'bg-yellow-500',
textClass: 'text-yellow-400',
bgClass: 'bg-yellow-500/10 border-yellow-500/30',
},
stopped: {
label: 'Stopped',
icon: StopCircle,
dotClass: 'bg-dark-500',
textClass: 'text-dark-400',
bgClass: 'bg-dark-700 border-dark-600',
},
completed: {
label: 'Completed',
icon: CheckCircle,
dotClass: 'bg-primary-500',
textClass: 'text-primary-400',
bgClass: 'bg-primary-500/10 border-primary-500/30',
},
error: {
label: 'Error',
icon: AlertCircle,
dotClass: 'bg-red-500',
textClass: 'text-red-400',
bgClass: 'bg-red-500/10 border-red-500/30',
},
not_started: {
label: 'Not Started',
icon: StopCircle,
dotClass: 'bg-dark-600',
textClass: 'text-dark-500',
bgClass: 'bg-dark-800 border-dark-700',
},
};
const sizeConfig = {
sm: { dot: 'w-2 h-2', text: 'text-xs', padding: 'px-2 py-0.5', icon: 'w-3 h-3' },
md: { dot: 'w-3 h-3', text: 'text-sm', padding: 'px-3 py-1', icon: 'w-4 h-4' },
lg: { dot: 'w-4 h-4', text: 'text-base', padding: 'px-4 py-2', icon: 'w-5 h-5' },
};
export function StatusBadge({
status,
showLabel = true,
size = 'md',
pulse = true
}: StatusBadgeProps) {
const config = statusConfig[status];
const sizes = sizeConfig[size];
return (
<div
className={`inline-flex items-center gap-2 rounded-full border ${config.bgClass} ${sizes.padding}`}
>
<div
className={`rounded-full ${config.dotClass} ${sizes.dot} ${
pulse && status === 'running' ? 'animate-pulse' : ''
}`}
/>
{showLabel && (
<span className={`font-semibold uppercase tracking-wide ${config.textClass} ${sizes.text}`}>
{config.label}
</span>
)}
</div>
);
}
/**
* Helper function to determine status from process state
*/
export function getStatusFromProcess(
isRunning: boolean,
isPaused: boolean,
completedTrials: number,
totalTrials: number,
hasError?: boolean
): OptimizationStatus {
if (hasError) return 'error';
if (completedTrials >= totalTrials && totalTrials > 0) return 'completed';
if (isPaused) return 'paused';
if (isRunning) return 'running';
if (completedTrials === 0) return 'not_started';
return 'stopped';
}
export default StatusBadge;


@@ -39,7 +39,7 @@ This folder contains detailed physics and domain-specific documentation for Atom
| `.claude/skills/modules/extractors-catalog.md` | Quick extractor lookup |
| `.claude/skills/modules/insights-catalog.md` | Quick insight lookup |
| `docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md` | Extractor specifications |
-| `docs/protocols/system/SYS_16_STUDY_INSIGHTS.md` | Insight specifications |
+| `docs/protocols/system/SYS_17_STUDY_INSIGHTS.md` | Insight specifications |
---


@@ -315,7 +315,7 @@ studies/
### Protocol Documentation
- `docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md` - Extractor specifications (E8-E10: Standard Zernike, E20-E21: OPD method)
-- `docs/protocols/system/SYS_16_STUDY_INSIGHTS.md` - Insight specifications (`zernike_wfe`, `zernike_opd_comparison`)
+- `docs/protocols/system/SYS_17_STUDY_INSIGHTS.md` - Insight specifications (`zernike_wfe`, `zernike_opd_comparison`)
### Skill Modules (Quick Lookup)


@@ -558,7 +558,7 @@ The `concave` parameter in the code handles this sign flip.
| `.claude/skills/modules/extractors-catalog.md` | Quick extractor lookup |
| `.claude/skills/modules/insights-catalog.md` | Quick insight lookup |
| `docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md` | Extractor specifications (E8-E10, E20-E21) |
-| `docs/protocols/system/SYS_16_STUDY_INSIGHTS.md` | Insight specifications |
+| `docs/protocols/system/SYS_17_STUDY_INSIGHTS.md` | Insight specifications |
---


@@ -944,5 +944,5 @@ Error response format:
## See Also
- [Context Engineering Report](../CONTEXT_ENGINEERING_REPORT.md) - Full implementation report
-- [SYS_17 Protocol](../protocols/system/SYS_17_CONTEXT_ENGINEERING.md) - System protocol
+- [SYS_18 Protocol](../protocols/system/SYS_18_CONTEXT_ENGINEERING.md) - System protocol
- [Cheatsheet](../../.claude/skills/01_CHEATSHEET.md) - Quick reference

File diff suppressed because it is too large


@@ -497,7 +497,7 @@ docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md (lines 71-851 - MANY reference
docs/protocols/system/SYS_13_DASHBOARD_TRACKING.md (lines 60, 85, 315)
docs/protocols/system/SYS_14_NEURAL_ACCELERATION.md (lines 231-1080)
docs/protocols/system/SYS_15_METHOD_SELECTOR.md (lines 42-422)
-docs/protocols/system/SYS_16_STUDY_INSIGHTS.md (lines 62-498)
+docs/protocols/system/SYS_17_STUDY_INSIGHTS.md (lines 62-498)
docs/protocols/extensions/EXT_01_CREATE_EXTRACTOR.md (lines 35-287)
docs/protocols/extensions/EXT_02_CREATE_HOOK.md (lines 45-357)
docs/protocols/extensions/EXT_03_CREATE_INSIGHT.md


@@ -0,0 +1,549 @@
# Atomizer Massive Restructuring Plan
**Created:** 2026-01-06
**Purpose:** Comprehensive TODO list for Ralph mode execution with skip permissions
**Status:** IN PROGRESS (Phase 2 partially complete)
---
## Progress Summary
**Completed:**
- [x] Phase 1: Safe Cleanup (FULLY DONE)
- [x] Phase 2.1-2.7: Protocol renaming, Bootstrap V3.0 promotion, routing updates
**In Progress:**
- Phase 2.8-2.10: Cheatsheet updates and commit
**Remaining:**
- Phases 3-6 and final push
---
## RALPH MODE TODO LIST
### PHASE 2 (Remaining - Documentation)
#### 2.8 Add OP_08 to 01_CHEATSHEET.md
```
File: .claude/skills/01_CHEATSHEET.md
Action: Add row to "I want to..." table after OP_07 entry (around line 33)
Add this line:
| **Generate report** | **OP_08** | `python -m optimization_engine.reporting.report_generator <study>` |
Also add a section around line 280:
## Report Generation (OP_08)
### Quick Commands
| Task | Command |
|------|---------|
| Generate markdown report | `python -m optimization_engine.reporting.markdown_report <study>` |
| Generate HTML visualization | `python tools/zernike_html_generator.py <study>` |
**Full details**: `docs/protocols/operations/OP_08_GENERATE_REPORT.md`
```
#### 2.9 SKIP (Already verified V3.0 Bootstrap has no circular refs)
#### 2.10 Commit Phase 2 Changes
```bash
cd c:\Users\antoi\Atomizer
git add -A
git commit -m "$(cat <<'EOF'
docs: Consolidate documentation and fix protocol numbering
- Rename SYS_16_STUDY_INSIGHTS -> SYS_17_STUDY_INSIGHTS
- Rename SYS_17_CONTEXT_ENGINEERING -> SYS_18_CONTEXT_ENGINEERING
- Promote Bootstrap V3.0 (Context Engineering) as default
- Create knowledge_base/playbook.json for ACE framework
- Add OP_08 (Generate Report) to all routing tables
- Add SYS_16-18 to all protocol tables
- Update docs/protocols/README.md version 1.1
- Update CLAUDE.md with new protocols
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
EOF
)"
```
---
### PHASE 3: Code Organization
#### 3.1 Move ensemble_surrogate.py
```bash
cd c:\Users\antoi\Atomizer
git mv optimization_engine/surrogates/ensemble_surrogate.py optimization_engine/processors/surrogates/ensemble_surrogate.py
```
#### 3.2 Update processors/surrogates/__init__.py
```
File: optimization_engine/processors/surrogates/__init__.py
Action: Add to __getattr__ function and __all__ list:
In __getattr__, add these elif blocks:
elif name == 'EnsembleSurrogate':
from .ensemble_surrogate import EnsembleSurrogate
return EnsembleSurrogate
elif name == 'OODDetector':
from .ensemble_surrogate import OODDetector
return OODDetector
elif name == 'create_and_train_ensemble':
from .ensemble_surrogate import create_and_train_ensemble
return create_and_train_ensemble
In __all__, add:
'EnsembleSurrogate',
'OODDetector',
'create_and_train_ensemble',
```
#### 3.3 Add deprecation shim to surrogates/__init__.py
```
File: optimization_engine/surrogates/__init__.py
Replace contents with:
"""
DEPRECATED: This module has been moved to optimization_engine.processors.surrogates
Please update your imports:
from optimization_engine.processors.surrogates import EnsembleSurrogate
This module will be removed in a future version.
"""
import warnings
warnings.warn(
"optimization_engine.surrogates is deprecated. "
"Use optimization_engine.processors.surrogates instead.",
DeprecationWarning,
stacklevel=2
)
# Redirect imports
from optimization_engine.processors.surrogates import (
EnsembleSurrogate,
OODDetector,
create_and_train_ensemble
)
__all__ = ['EnsembleSurrogate', 'OODDetector', 'create_and_train_ensemble']
```
#### 3.4 Check future/ imports
```bash
cd c:\Users\antoi\Atomizer
grep -r "from optimization_engine.future" --include="*.py" | grep -v "future/" | head -20
```
Analyze output and decide which modules need to move out of future/
#### 3.5 Move workflow_decomposer.py (if imported by production code)
If grep shows imports from config/ or core/:
```bash
git mv optimization_engine/future/workflow_decomposer.py optimization_engine/config/workflow_decomposer.py
# Update imports in capability_matcher.py and any other files
```
#### 3.6 Create tests/ directory structure
```bash
cd c:\Users\antoi\Atomizer
mkdir -p tests/unit/gnn tests/unit/extractors tests/integration tests/fixtures/sample_data
```
#### 3.7 Move test files from archive/test_scripts/
```bash
cd c:\Users\antoi\Atomizer
git mv archive/test_scripts/test_neural_surrogate.py tests/unit/
git mv archive/test_scripts/test_nn_surrogate.py tests/unit/
git mv archive/test_scripts/test_parametric_surrogate.py tests/unit/
git mv archive/test_scripts/test_adaptive_characterization.py tests/unit/
git mv archive/test_scripts/test_training_data_export.py tests/unit/
git mv optimization_engine/gnn/test_*.py tests/unit/gnn/ 2>/dev/null || true
git mv optimization_engine/extractors/test_phase3_extractors.py tests/unit/extractors/ 2>/dev/null || true
```
#### 3.8 Create tests/conftest.py
```
File: tests/conftest.py
Content:
"""
Pytest configuration and shared fixtures for Atomizer tests.
"""
import pytest
import sys
from pathlib import Path
# Add project root to path
sys.path.insert(0, str(Path(__file__).parent.parent))
@pytest.fixture
def sample_study_dir(tmp_path):
"""Create a temporary study directory structure."""
study = tmp_path / "test_study"
(study / "1_setup").mkdir(parents=True)
(study / "2_iterations").mkdir()
(study / "3_results").mkdir()
return study
@pytest.fixture
def sample_config():
"""Sample optimization config for testing."""
return {
"study_name": "test_study",
"design_variables": [
{"name": "param1", "lower": 0, "upper": 10, "type": "continuous"}
],
"objectives": [
{"name": "minimize_mass", "direction": "minimize"}
]
}
```
#### 3.9 Rename bracket_displacement_maximizing/results to 3_results
```bash
cd c:\Users\antoi\Atomizer
# Check if results/ exists first
if [ -d "studies/bracket_displacement_maximizing/results" ]; then
git mv studies/bracket_displacement_maximizing/results studies/bracket_displacement_maximizing/3_results
fi
```
#### 3.10 Rename Drone_Gimbal/2_results to 3_results
```bash
cd "c:\Users\antoi\Atomizer"
# Check if 2_results/ exists first
if [ -d "studies/Drone_Gimbal/2_results" ]; then
git mv studies/Drone_Gimbal/2_results studies/Drone_Gimbal/3_results
fi
```
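Rather than checking each study by hand, a small helper can list any study whose results folder still uses a legacy name (a sketch; the function name is ours, the folder names come from the two steps above):

```python
# Find studies whose results folder has not been renamed to 3_results/ (sketch)
from pathlib import Path

LEGACY_NAMES = ("results", "2_results")

def find_legacy_results_dirs(studies_root: str) -> list:
    """Return 'study/legacy_name' entries for studies needing a rename."""
    hits = []
    for study in sorted(Path(studies_root).iterdir()):
        if not study.is_dir():
            continue
        for name in LEGACY_NAMES:
            if (study / name).is_dir():
                hits.append(f"{study.name}/{name}")
    return hits
```

An empty return value confirms the 3_results/ convention holds across studies/.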
#### 3.11 Commit Phase 3 Changes
```bash
cd "c:\Users\antoi\Atomizer"
git add -A
git commit -m "$(cat <<'EOF'
refactor: Reorganize code structure and create tests directory
- Consolidate surrogates module to processors/surrogates/
- Add deprecation shim for old import path
- Create tests/ directory with pytest structure
- Move test files from archive/test_scripts/
- Standardize study folder naming (3_results/)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
EOF
)"
```
---
### PHASE 4: Dependency Management
#### 4.1-4.2 Add neural and gnn optional deps to pyproject.toml
```
File: pyproject.toml
Inside the [project.optional-dependencies] table, add these groups:
neural = [
"torch>=2.0.0",
"torch-geometric>=2.3.0",
"tensorboard>=2.13.0",
]
gnn = [
"torch>=2.0.0",
"torch-geometric>=2.3.0",
]
all = ["atomizer[neural,gnn,dev,dashboard]"]
```
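Because torch and torch-geometric become optional, code that touches them should degrade gracefully when the `neural`/`gnn` extras are not installed. A common guard pattern (a sketch; `HAS_TORCH` and `require_torch` are illustrative names, not Atomizer API):

```python
# Guard pattern for optional neural dependencies (sketch)
try:
    import torch  # present only when installed via atomizer[neural] or atomizer[gnn]
    HAS_TORCH = True
except ImportError:
    torch = None
    HAS_TORCH = False

def require_torch(feature: str) -> None:
    """Raise a helpful error when a neural-only feature is requested without torch."""
    if not HAS_TORCH:
        raise ImportError(
            f"{feature} requires torch; install with: pip install atomizer[neural]"
        )
```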
#### 4.3 Remove mcp optional deps
```
File: pyproject.toml
Delete this section:
mcp = [
"mcp>=0.1.0",
]
```
#### 4.4 Remove mcp_server from packages.find
```
File: pyproject.toml
Change:
include = ["mcp_server*", "optimization_engine*", "nx_journals*"]
To:
include = ["optimization_engine*", "nx_journals*"]
```
#### 4.5 Commit Phase 4 Changes
```bash
cd "c:\Users\antoi\Atomizer"
git add pyproject.toml
git commit -m "$(cat <<'EOF'
build: Add optional dependency groups and clean up pyproject.toml
- Add neural optional group (torch, torch-geometric, tensorboard)
- Add gnn optional group (torch, torch-geometric)
- Add all optional group for convenience
- Remove mcp optional group (not implemented)
- Remove mcp_server from packages.find (not implemented)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
EOF
)"
```
---
### PHASE 5: Study Organization
#### 5.1 Create archive directory
```bash
cd "c:\Users\antoi\Atomizer"
mkdir -p studies/M1_Mirror/_archive
```
#### 5.2 Move V1-V8 cost_reduction studies
```bash
cd "c:\Users\antoi\Atomizer\studies\M1_Mirror"
# Move cost_reduction V2-V8 (there is no V1; the base study is named just "cost_reduction")
for v in V2 V3 V4 V5 V6 V7 V8; do
if [ -d "m1_mirror_cost_reduction_$v" ]; then
mv "m1_mirror_cost_reduction_$v" _archive/
fi
done
```
#### 5.3 Move V1-V8 flat_back studies
```bash
cd "c:\Users\antoi\Atomizer\studies\M1_Mirror"
# Move flat_back V1-V8 (V2 was skipped in this series)
for v in V1 V3 V4 V5 V6 V7 V8; do
if [ -d "m1_mirror_cost_reduction_flat_back_$v" ]; then
mv "m1_mirror_cost_reduction_flat_back_$v" _archive/
fi
done
```
#### 5.4 Create MANIFEST.md
```
File: studies/M1_Mirror/_archive/MANIFEST.md
Content:
# M1 Mirror Archived Studies
**Archived:** 2026-01-06
**Reason:** Repository cleanup - keeping only V9+ studies active
## Archived Studies
### Cost Reduction Series
| Study | Trials | Best WS | Notes |
|-------|--------|---------|-------|
| V2 | TBD | TBD | Early exploration |
| V3 | TBD | TBD | - |
| V4 | TBD | TBD | - |
| V5 | TBD | TBD | - |
| V6 | TBD | TBD | - |
| V7 | TBD | TBD | - |
| V8 | TBD | TBD | - |
### Cost Reduction Flat Back Series
| Study | Trials | Best WS | Notes |
|-------|--------|---------|-------|
| V1 | TBD | TBD | Initial flat back design |
| V3 | TBD | TBD | V2 was skipped |
| V4 | TBD | TBD | - |
| V5 | TBD | TBD | - |
| V6 | TBD | TBD | - |
| V7 | TBD | TBD | - |
| V8 | TBD | TBD | - |
## Restoration Instructions
To restore a study:
1. Move from _archive/ to parent directory
2. Verify database integrity: `sqlite3 3_results/study.db ".tables"`
3. Check optimization_config.json exists
```
#### 5.5 Commit Phase 5 Changes
```bash
cd "c:\Users\antoi\Atomizer"
git add -A
git commit -m "$(cat <<'EOF'
chore: Archive old M1_Mirror studies (V1-V8)
- Create studies/M1_Mirror/_archive/ directory
- Move cost_reduction V2-V8 to archive
- Move flat_back V1-V8 to archive
- Create MANIFEST.md documenting archived studies
- Keep V9+ studies active for reference
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
EOF
)"
```
---
### PHASE 6: Documentation Polish
#### 6.1 Update README.md with LLM section
```
File: README.md
Add this section after the main description:
## For AI Assistants
Atomizer is designed for LLM-first interaction. Key resources:
- **[CLAUDE.md](CLAUDE.md)** - System instructions for Claude Code
- **[.claude/skills/](.claude/skills/)** - LLM skill modules
- **[docs/protocols/](docs/protocols/)** - Protocol Operating System
### Knowledge Base (LAC)
The Learning Atomizer Core (`knowledge_base/lac/`) accumulates optimization knowledge:
- `session_insights/` - Learnings from past sessions
- `optimization_memory/` - Optimization outcomes by geometry type
- `playbook.json` - ACE framework knowledge store
For detailed AI interaction guidance, see CLAUDE.md.
```
#### 6.2-6.4 Create optimization_memory JSONL files
```bash
cd "c:\Users\antoi\Atomizer"
mkdir -p knowledge_base/lac/optimization_memory
```
```
File: knowledge_base/lac/optimization_memory/bracket.jsonl
Content (one JSON per line):
{"geometry_type": "bracket", "study_name": "example", "method": "TPE", "objectives": ["mass"], "trials": 0, "converged": false, "notes": "Schema file - replace with real data"}
```
```
File: knowledge_base/lac/optimization_memory/beam.jsonl
Content:
{"geometry_type": "beam", "study_name": "example", "method": "TPE", "objectives": ["mass"], "trials": 0, "converged": false, "notes": "Schema file - replace with real data"}
```
```
File: knowledge_base/lac/optimization_memory/mirror.jsonl
Content:
{"geometry_type": "mirror", "study_name": "m1_mirror_adaptive_V14", "method": "IMSO", "objectives": ["wfe_40_20", "mass_kg"], "trials": 100, "converged": true, "notes": "SAT v3 achieved WS=205.58"}
```
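These files follow an append-only JSON Lines convention: one complete JSON object per line. Reading and appending records can be sketched as follows (helper names are illustrative, not Atomizer API):

```python
# Append-only access to optimization_memory JSONL files (sketch)
import json
from pathlib import Path

def load_records(path: str) -> list:
    """Read one JSON object per non-empty line; missing file yields an empty list."""
    p = Path(path)
    if not p.exists():
        return []
    return [
        json.loads(line)
        for line in p.read_text(encoding="utf-8").splitlines()
        if line.strip()
    ]

def append_record(path: str, record: dict) -> None:
    """Append a single record as one JSON line."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```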
#### 6.5 Move implementation plans to docs/plans
```bash
cd "c:\Users\antoi\Atomizer"
git mv .claude/skills/modules/DYNAMIC_RESPONSE_IMPLEMENTATION_PLAN.md docs/plans/
git mv .claude/skills/modules/OPTIMIZATION_ENGINE_MIGRATION_PLAN.md docs/plans/
git mv .claude/skills/modules/atomizer_fast_solver_technologies.md docs/plans/
```
#### 6.6 Final consistency verification
```bash
cd "c:\Users\antoi\Atomizer"
# Verify protocol files exist
ls docs/protocols/operations/OP_0*.md
ls docs/protocols/system/SYS_1*.md
# Verify imports work
python -c "import optimization_engine; print('OK')"
# Verify no broken references
grep -r "SYS_16_STUDY" . --include="*.md" | head -5 # Should be empty
grep -r "SYS_17_CONTEXT" . --include="*.md" | head -5 # Should be empty
# Count todos completed
echo "Verification complete"
```
#### 6.7 Commit Phase 6 Changes
```bash
cd "c:\Users\antoi\Atomizer"
git add -A
git commit -m "$(cat <<'EOF'
docs: Final documentation polish and consistency fixes
- Update README.md with LLM assistant section
- Create optimization_memory JSONL structure
- Move implementation plans from skills/modules to docs/plans
- Verify all protocol references are consistent
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
EOF
)"
```
---
### FINAL: Push to Both Remotes
```bash
cd "c:\Users\antoi\Atomizer"
git push origin main
git push github main
```
---
## Quick Reference
### Files Modified in This Restructuring
**Documentation (Phase 2):**
- `docs/protocols/README.md` - Updated protocol listings
- `docs/protocols/system/SYS_17_STUDY_INSIGHTS.md` - Renamed from SYS_16
- `docs/protocols/system/SYS_18_CONTEXT_ENGINEERING.md` - Renamed from SYS_17
- `CLAUDE.md` - Updated routing tables
- `.claude/skills/00_BOOTSTRAP.md` - Replaced with V3.0
- `.claude/skills/01_CHEATSHEET.md` - Added OP_08
- `knowledge_base/playbook.json` - Created
**Code (Phase 3):**
- `optimization_engine/processors/surrogates/__init__.py` - Added exports
- `optimization_engine/surrogates/__init__.py` - Deprecation shim
- `tests/conftest.py` - Created
**Dependencies (Phase 4):**
- `pyproject.toml` - Updated optional groups
**Studies (Phase 5):**
- `studies/M1_Mirror/_archive/` - Created with V1-V8 studies
**Final Polish (Phase 6):**
- `README.md` - Added LLM section
- `knowledge_base/lac/optimization_memory/` - Created structure
- `docs/plans/` - Moved implementation plans
---
## Success Criteria Checklist
- [ ] All imports work: `python -c "import optimization_engine"`
- [ ] Dashboard starts: `python launch_dashboard.py`
- [ ] No SYS_16 duplication (only SELF_AWARE_TURBO)
- [ ] Bootstrap V3.0 is active version
- [ ] OP_08 discoverable in all routing tables
- [ ] Studies use consistent 3_results/ naming
- [ ] Tests directory exists with conftest.py
- [ ] All changes pushed to both remotes


@@ -1,7 +1,7 @@
# Atomizer Protocol Operating System (POS)
**Version**: 1.0
**Last Updated**: 2025-12-05
**Version**: 1.1
**Last Updated**: 2026-01-06
---
@@ -22,13 +22,19 @@ protocols/
│ ├── OP_03_MONITOR_PROGRESS.md
│ ├── OP_04_ANALYZE_RESULTS.md
│ ├── OP_05_EXPORT_TRAINING_DATA.md
│ ├── OP_06_TROUBLESHOOT.md
│ ├── OP_07_DISK_OPTIMIZATION.md
│ └── OP_08_GENERATE_REPORT.md
├── system/ # Layer 3: Core specifications
│ ├── SYS_10_IMSO.md
│ ├── SYS_11_MULTI_OBJECTIVE.md
│ ├── SYS_12_EXTRACTOR_LIBRARY.md
│ ├── SYS_13_DASHBOARD_TRACKING.md
│ ├── SYS_14_NEURAL_ACCELERATION.md
│ ├── SYS_15_METHOD_SELECTOR.md
│ ├── SYS_16_SELF_AWARE_TURBO.md
│ ├── SYS_17_STUDY_INSIGHTS.md
│ └── SYS_18_CONTEXT_ENGINEERING.md
└── extensions/ # Layer 4: Extensibility guides
├── EXT_01_CREATE_EXTRACTOR.md
├── EXT_02_CREATE_HOOK.md
@@ -56,6 +62,8 @@ Day-to-day how-to guides:
- **OP_04**: Analyze results
- **OP_05**: Export training data
- **OP_06**: Troubleshoot issues
- **OP_07**: Disk optimization (free space)
- **OP_08**: Generate study report
### Layer 3: System (`system/`)
Core technical specifications:
@@ -65,6 +73,9 @@ Core technical specifications:
- **SYS_13**: Real-Time Dashboard Tracking
- **SYS_14**: Neural Network Acceleration
- **SYS_15**: Method Selector
- **SYS_16**: Self-Aware Turbo (SAT) Method
- **SYS_17**: Study Insights (Physics Visualization)
- **SYS_18**: Context Engineering (ACE Framework)
### Layer 4: Extensions (`extensions/`)
Guides for extending Atomizer:
@@ -130,6 +141,8 @@ LOAD_WITH: [{dependencies}]
| Analyze results | [OP_04](operations/OP_04_ANALYZE_RESULTS.md) |
| Export neural data | [OP_05](operations/OP_05_EXPORT_TRAINING_DATA.md) |
| Fix errors | [OP_06](operations/OP_06_TROUBLESHOOT.md) |
| Free disk space | [OP_07](operations/OP_07_DISK_OPTIMIZATION.md) |
| Generate report | [OP_08](operations/OP_08_GENERATE_REPORT.md) |
| Add extractor | [EXT_01](extensions/EXT_01_CREATE_EXTRACTOR.md) |
### By Protocol Number
@@ -142,6 +155,9 @@ LOAD_WITH: [{dependencies}]
| 13 | Dashboard | [System](system/SYS_13_DASHBOARD_TRACKING.md) |
| 14 | Neural | [System](system/SYS_14_NEURAL_ACCELERATION.md) |
| 15 | Method Selector | [System](system/SYS_15_METHOD_SELECTOR.md) |
| 16 | Self-Aware Turbo | [System](system/SYS_16_SELF_AWARE_TURBO.md) |
| 17 | Study Insights | [System](system/SYS_17_STUDY_INSIGHTS.md) |
| 18 | Context Engineering | [System](system/SYS_18_CONTEXT_ENGINEERING.md) |
---
@@ -160,3 +176,4 @@ LOAD_WITH: [{dependencies}]
| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial Protocol Operating System |
| 1.1 | 2026-01-06 | Added OP_07, OP_08; SYS_16, SYS_17, SYS_18; Fixed SYS_16 duplication |


@@ -0,0 +1,276 @@
# OP_08: Generate Study Report
<!--
PROTOCOL: Automated Study Report Generation
LAYER: Operations
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2026-01-06
PRIVILEGE: user
LOAD_WITH: []
-->
## Overview
This protocol covers automated generation of comprehensive study reports via the Dashboard API or CLI. Reports include executive summaries, optimization metrics, best solutions, and engineering recommendations.
---
## When to Use
| Trigger | Action |
|---------|--------|
| "generate report" | Follow this protocol |
| Dashboard "Report" button | API endpoint called |
| Optimization complete | Auto-generate option |
| CLI `atomizer report <study>` | Direct generation |
---
## Quick Reference
**API Endpoint**: `POST /api/optimization/studies/{study_id}/report/generate`
**Output**: `STUDY_REPORT.md` in study root directory
**Formats Supported**: Markdown (default), JSON (data export)
---
## Generation Methods
### 1. Via Dashboard
Click the "Generate Report" button in the study control panel. The report will be generated and displayed in the Reports tab.
### 2. Via API
```bash
# Generate report
curl -X POST http://localhost:8003/api/optimization/studies/my_study/report/generate
# Response
{
"success": true,
"content": "# Study Report: ...",
"path": "/path/to/STUDY_REPORT.md",
"generated_at": "2026-01-06T12:00:00"
}
```
### 3. Via CLI
```bash
# Using Claude Code
"Generate a report for the bracket_optimization study"
# Direct Python
python -m optimization_engine.reporting.markdown_report studies/bracket_optimization
```
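From Python, the same endpoint can be called with the standard library. A minimal client sketch, assuming the dashboard backend from the curl example is listening on localhost:8003 (both function names are illustrative):

```python
# Build and send the report-generation request to the dashboard API (sketch)
import json
import urllib.request

def build_report_request(study_id: str, host: str = "http://localhost:8003"):
    url = f"{host}/api/optimization/studies/{study_id}/report/generate"
    return urllib.request.Request(url, method="POST")

def generate_report(study_id: str) -> dict:
    """POST to the endpoint and decode the JSON response shown above."""
    with urllib.request.urlopen(build_report_request(study_id)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```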
---
## Report Sections
### Executive Summary
Generated automatically from trial data:
- Total trials completed
- Best objective value achieved
- Improvement percentage from initial design
- Key findings
### Results Table
| Metric | Initial | Final | Change |
|--------|---------|-------|--------|
| Objective 1 | X | Y | Z% |
| Objective 2 | X | Y | Z% |
### Best Solution
- Trial number
- All design variable values
- All objective values
- Constraint satisfaction status
- User attributes (source, validation status)
### Design Variables Summary
| Variable | Min | Max | Best Value | Sensitivity |
|----------|-----|-----|------------|-------------|
| var_1 | 0.0 | 10.0 | 5.23 | High |
| var_2 | 0.0 | 20.0 | 12.87 | Medium |
### Convergence Analysis
- Trials to 50% improvement
- Trials to 90% improvement
- Convergence rate assessment
- Phase breakdown (exploration, exploitation, refinement)
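The "trials to X% improvement" figures can be computed directly from the per-trial objective history. A minimal sketch for a minimization objective (the function name is illustrative):

```python
# Trials needed to reach a given fraction of the total improvement (sketch)
def trials_to_fraction(values, fraction=0.9):
    """values: per-trial objective values (minimization), in trial order.
    Returns the 1-based trial index at which the running best first covers
    `fraction` of the total improvement, or None if there is no improvement."""
    if not values:
        return None
    running = []
    best = float("inf")
    for v in values:
        best = min(best, v)
        running.append(best)
    total = values[0] - running[-1]
    if total <= 0:
        return None
    target = values[0] - fraction * total
    for i, b in enumerate(running, start=1):
        if b <= target:
            return i
    return None
```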
### Recommendations
Auto-generated based on results:
- Further optimization suggestions
- Sensitivity observations
- Next steps for validation
---
## Backend Implementation
**File**: `atomizer-dashboard/backend/api/routes/optimization.py`
```python
@router.post("/studies/{study_id}/report/generate")
async def generate_report(study_id: str, format: str = "markdown"):
    """
    Generate comprehensive study report.

    Args:
        study_id: Study identifier
        format: Output format (markdown, json)

    Returns:
        Generated report content and file path
    """
    # study_dir and db are resolved by the route's helpers (omitted in this excerpt)
    # Load configuration
    config = load_config(study_dir)

    # Query database for all trials
    trials = get_all_completed_trials(db)
    best_trial = get_best_trial(db)

    # Calculate metrics
    stats = calculate_statistics(trials)

    # Generate markdown
    report = generate_markdown_report(study_id, config, trials, best_trial, stats)

    # Save to file
    report_path = study_dir / "STUDY_REPORT.md"
    report_path.write_text(report)

    return {
        "success": True,
        "content": report,
        "path": str(report_path),
        "generated_at": datetime.now().isoformat()
    }
```
---
## Report Template
The generated report follows this structure:
```markdown
# {Study Name} - Optimization Report
**Generated:** {timestamp}
**Status:** {Completed/In Progress}
---
## Executive Summary
This optimization study completed **{n_trials} trials** and achieved a
**{improvement}%** improvement in the primary objective.
| Metric | Value |
|--------|-------|
| Total Trials | {n} |
| Best Value | {best} |
| Initial Value | {initial} |
| Improvement | {pct}% |
---
## Objectives
| Name | Direction | Weight | Best Value |
|------|-----------|--------|------------|
| {obj_name} | minimize | 1.0 | {value} |
---
## Design Variables
| Name | Min | Max | Best Value |
|------|-----|-----|------------|
| {var_name} | {min} | {max} | {best} |
---
## Best Solution
**Trial #{n}** achieved the optimal result.
### Parameters
- var_1: {value}
- var_2: {value}
### Objectives
- objective_1: {value}
### Constraints
- All constraints satisfied: Yes/No
---
## Convergence Analysis
- Initial best: {value} (trial 1)
- Final best: {value} (trial {n})
- 90% improvement reached at trial {n}
---
## Recommendations
1. Validate best solution with high-fidelity FEA
2. Consider sensitivity analysis around optimal design point
3. Check manufacturing feasibility of optimal parameters
---
*Generated by Atomizer Dashboard*
```
---
## Prerequisites
Before generating a report:
- [ ] Study must have at least 1 completed trial
- [ ] study.db must exist in results directory
- [ ] optimization_config.json must be present
---
## Error Handling
| Error | Cause | Solution |
|-------|-------|----------|
| "No trials found" | Empty database | Run optimization first |
| "Config not found" | Missing config file | Verify study setup |
| "Database locked" | Optimization running | Wait or pause first |
| "Invalid study" | Study path not found | Check study ID |
---
## Cross-References
- **Preceded By**: [OP_04_ANALYZE_RESULTS](./OP_04_ANALYZE_RESULTS.md)
- **Related**: [SYS_13_DASHBOARD_TRACKING](../system/SYS_13_DASHBOARD_TRACKING.md)
- **Triggered By**: Dashboard Report button
---
## Version History
| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2026-01-06 | Initial release - Dashboard integration |


@@ -0,0 +1 @@
{"geometry_type": "beam", "study_name": "example", "method": "TPE", "objectives": ["mass"], "trials": 0, "converged": false, "notes": "Schema file - replace with real data"}


@@ -0,0 +1 @@
{"geometry_type": "bracket", "study_name": "example", "method": "TPE", "objectives": ["mass"], "trials": 0, "converged": false, "notes": "Schema file - replace with real data"}


@@ -0,0 +1 @@
{"geometry_type": "mirror", "study_name": "m1_mirror_cost_reduction_flat_back_V9", "method": "SAT_v3", "objectives": ["wfe_40_20", "mass_kg"], "trials": 100, "converged": true, "best_weighted_sum": 205.58, "notes": "SAT v3 achieved WS=205.58 (new campaign record)"}


@@ -0,0 +1,94 @@
{
"version": 1,
"last_updated": "2026-01-06T12:00:00",
"items": {
"str-00001": {
"id": "str-00001",
"category": "str",
"content": "Use TPE sampler for single-objective optimization with <4 design variables",
"helpful_count": 5,
"harmful_count": 0,
"created_at": "2026-01-06T12:00:00",
"last_used": null,
"source_trials": [],
"tags": ["optimization", "sampler"]
},
"str-00002": {
"id": "str-00002",
"category": "str",
"content": "Use CMA-ES for continuous optimization with 4+ design variables",
"helpful_count": 3,
"harmful_count": 0,
"created_at": "2026-01-06T12:00:00",
"last_used": null,
"source_trials": [],
"tags": ["optimization", "sampler"]
},
"mis-00001": {
"id": "mis-00001",
"category": "mis",
"content": "Always close NX process when done to avoid zombie processes consuming resources",
"helpful_count": 10,
"harmful_count": 0,
"created_at": "2026-01-06T12:00:00",
"last_used": null,
"source_trials": [],
"tags": ["nx", "process", "critical"]
},
"mis-00002": {
"id": "mis-00002",
"category": "mis",
"content": "Never trust surrogate predictions with confidence < 0.7 for production trials",
"helpful_count": 5,
"harmful_count": 0,
"created_at": "2026-01-06T12:00:00",
"last_used": null,
"source_trials": [],
"tags": ["surrogate", "validation"]
},
"cal-00001": {
"id": "cal-00001",
"category": "cal",
"content": "Relative WFE = (WFE_current - WFE_baseline) / WFE_baseline, NOT WFE_baseline / WFE_current",
"helpful_count": 3,
"harmful_count": 0,
"created_at": "2026-01-06T12:00:00",
"last_used": null,
"source_trials": [],
"tags": ["zernike", "calculation", "critical"]
},
"tool-00001": {
"id": "tool-00001",
"category": "tool",
"content": "Use extract_zernike_figure for surface figure analysis (E20), extract_zernike_opd for optical path difference (E21)",
"helpful_count": 4,
"harmful_count": 0,
"created_at": "2026-01-06T12:00:00",
"last_used": null,
"source_trials": [],
"tags": ["extractor", "zernike"]
},
"dom-00001": {
"id": "dom-00001",
"category": "dom",
"content": "For mirror optimization: WFE = 2 * surface figure RMS (reflection doubles error)",
"helpful_count": 3,
"harmful_count": 0,
"created_at": "2026-01-06T12:00:00",
"last_used": null,
"source_trials": [],
"tags": ["mirror", "optics", "fundamental"]
},
"wf-00001": {
"id": "wf-00001",
"category": "wf",
"content": "Always run 5-10 initial FEA trials before enabling surrogate to establish baseline",
"helpful_count": 4,
"harmful_count": 0,
"created_at": "2026-01-06T12:00:00",
"last_used": null,
"source_trials": [],
"tags": ["surrogate", "workflow"]
}
}
}


@@ -59,6 +59,15 @@ def __getattr__(name):
elif name == 'create_exporter_from_config':
from .training_data_exporter import create_exporter_from_config
return create_exporter_from_config
elif name == 'EnsembleSurrogate':
from .ensemble_surrogate import EnsembleSurrogate
return EnsembleSurrogate
elif name == 'OODDetector':
from .ensemble_surrogate import OODDetector
return OODDetector
elif name == 'create_and_train_ensemble':
from .ensemble_surrogate import create_and_train_ensemble
return create_and_train_ensemble
raise AttributeError(f"module 'optimization_engine.processors.surrogates' has no attribute '{name}'")
@@ -76,4 +85,7 @@ __all__ = [
'AutoTrainer',
'TrainingDataExporter',
'create_exporter_from_config',
'EnsembleSurrogate',
'OODDetector',
'create_and_train_ensemble',
]


@@ -1,19 +1,26 @@
"""
Surrogate models for FEA acceleration.
DEPRECATED: This module has been moved to optimization_engine.processors.surrogates
Available surrogates:
- EnsembleSurrogate: Multiple MLPs with uncertainty quantification
- OODDetector: Out-of-distribution detection
Please update your imports:
from optimization_engine.processors.surrogates import EnsembleSurrogate
This module will be removed in a future version.
"""
from .ensemble_surrogate import (
import warnings
warnings.warn(
"optimization_engine.surrogates is deprecated. "
"Use optimization_engine.processors.surrogates instead.",
DeprecationWarning,
stacklevel=2
)
# Redirect imports
from optimization_engine.processors.surrogates import (
EnsembleSurrogate,
OODDetector,
create_and_train_ensemble
)
__all__ = [
'EnsembleSurrogate',
'OODDetector',
'create_and_train_ensemble'
]
__all__ = ['EnsembleSurrogate', 'OODDetector', 'create_and_train_ensemble']


@@ -35,8 +35,15 @@ dev = [
"pre-commit>=3.6.0",
]
mcp = [
"mcp>=0.1.0",
neural = [
"torch>=2.0.0",
"torch-geometric>=2.3.0",
"tensorboard>=2.13.0",
]
gnn = [
"torch>=2.0.0",
"torch-geometric>=2.3.0",
]
dashboard = [
@@ -44,13 +51,17 @@ dashboard = [
"dash-bootstrap-components>=1.5.0",
]
all = [
"atomizer[neural,gnn,dev,dashboard]",
]
[build-system]
requires = ["setuptools>=68.0", "wheel"]
build-backend = "setuptools.build_meta"
[tool.setuptools.packages.find]
where = ["."]
include = ["mcp_server*", "optimization_engine*", "nx_journals*"]
include = ["optimization_engine*", "nx_journals*"]
[tool.black]
line-length = 100
@@ -73,4 +84,4 @@ testpaths = ["tests"]
python_files = ["test_*.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
addopts = "-v --cov=mcp_server --cov=optimization_engine --cov-report=html --cov-report=term"
addopts = "-v --cov=optimization_engine --cov-report=html --cov-report=term"


@@ -1,67 +0,0 @@
"""Run cleanup excluding protected studies."""
import sys
from pathlib import Path
# Add project root to path
sys.path.insert(0, str(Path(__file__).parent))
from optimization_engine.utils.study_cleanup import cleanup_study, get_study_info
m1_dir = Path(r"C:\Users\antoi\Atomizer\studies\M1_Mirror")
# Studies to SKIP (user requested)
skip_patterns = [
"cost_reduction_V10",
"cost_reduction_V11",
"cost_reduction_V12",
"flat_back",
]
# Parse args
dry_run = "--execute" not in sys.argv
keep_best = 5
total_saved = 0
studies_to_clean = []
print("=" * 75)
print(f"CLEANUP (excluding V10-V12 and flat_back studies)")
print(f"Mode: {'DRY RUN' if dry_run else 'EXECUTE'}")
print("=" * 75)
print(f"{'Study':<45} {'Trials':>7} {'Size':>8} {'Savings':>8}")
print("-" * 75)
for study_path in sorted(m1_dir.iterdir()):
if not study_path.is_dir():
continue
# Check if has iterations
if not (study_path / "2_iterations").exists():
continue
# Skip protected studies
skip = False
for pattern in skip_patterns:
if pattern in study_path.name:
skip = True
break
if skip:
info = get_study_info(study_path)
print(f"{study_path.name:<45} {info['trial_count']:>7} SKIPPED")
continue
# This study will be cleaned
result = cleanup_study(study_path, dry_run=dry_run, keep_best=keep_best)
saved = result["space_saved_gb"]
total_saved += saved
status = "would save" if dry_run else "saved"
print(f"{study_path.name:<45} {result['trial_count']:>7} {result['total_size_before']/(1024**3):>7.1f}G {saved:>7.1f}G")
studies_to_clean.append(study_path.name)
print("-" * 75)
print(f"{'TOTAL SAVINGS:':<45} {' '*15} {total_saved:>7.1f}G")
if dry_run:
print(f"\n[!] This was a dry run. Run with --execute to actually delete files.")
else:
print(f"\n[OK] Cleanup complete! Freed {total_saved:.1f} GB")


@@ -1,74 +0,0 @@
#!/usr/bin/env python
"""Compare V8 and V11 lateral parameter convergence"""
import optuna
import statistics
# Load V8 study
v8_study = optuna.load_study(
study_name='m1_mirror_cost_reduction_V8',
storage='sqlite:///studies/M1_Mirror/m1_mirror_cost_reduction_V8/3_results/study.db'
)
# Load V11 study
v11_study = optuna.load_study(
study_name='m1_mirror_cost_reduction_V11',
storage='sqlite:///studies/M1_Mirror/m1_mirror_cost_reduction_V11/3_results/study.db'
)
print("="*70)
print("V8 BEST TRIAL (Z-only Zernike)")
print("="*70)
v8_best = v8_study.best_trial
print(f"Trial: {v8_best.number}")
print(f"WS: {v8_best.value:.2f}")
print("\nLateral Parameters:")
for k, v in sorted(v8_best.params.items()):
print(f" {k}: {v:.4f}")
print("\nObjectives:")
for k, v in v8_best.user_attrs.items():
if isinstance(v, (int, float)):
print(f" {k}: {v:.4f}")
print("\n" + "="*70)
print("V11 BEST TRIAL (ZernikeOPD + extract_relative)")
print("="*70)
v11_best = v11_study.best_trial
print(f"Trial: {v11_best.number}")
print(f"WS: {v11_best.value:.2f}")
print("\nLateral Parameters:")
for k, v in sorted(v11_best.params.items()):
print(f" {k}: {v:.4f}")
print("\nObjectives:")
for k, v in v11_best.user_attrs.items():
if isinstance(v, (int, float)):
print(f" {k}: {v:.4f}")
# Compare parameter ranges explored
print("\n" + "="*70)
print("PARAMETER EXPLORATION COMPARISON")
print("="*70)
params = ['lateral_inner_angle', 'lateral_outer_angle', 'lateral_outer_pivot',
'lateral_inner_pivot', 'lateral_middle_pivot', 'lateral_closeness']
for p in params:
v8_vals = [t.params.get(p) for t in v8_study.trials if t.state.name == 'COMPLETE' and p in t.params]
v11_vals = [t.params.get(p) for t in v11_study.trials if t.state.name == 'COMPLETE' and p in t.params]
if v8_vals and v11_vals:
print(f"\n{p}:")
print(f" V8: mean={statistics.mean(v8_vals):.2f}, std={statistics.stdev(v8_vals) if len(v8_vals) > 1 else 0:.2f}, range=[{min(v8_vals):.2f}, {max(v8_vals):.2f}]")
print(f" V11: mean={statistics.mean(v11_vals):.2f}, std={statistics.stdev(v11_vals) if len(v11_vals) > 1 else 0:.2f}, range=[{min(v11_vals):.2f}, {max(v11_vals):.2f}]")
print(f" Best V8: {v8_best.params.get(p, 'N/A'):.2f}")
print(f" Best V11: {v11_best.params.get(p, 'N/A'):.2f}")
# Lateral displacement comparison (V11 has this data)
print("\n" + "="*70)
print("V11 LATERAL DISPLACEMENT DATA (not available in V8)")
print("="*70)
for t in v11_study.trials:
if t.state.name == 'COMPLETE':
lat_rms = t.user_attrs.get('lateral_rms_um', None)
lat_max = t.user_attrs.get('lateral_max_um', None)
if lat_rms is not None:
print(f"Trial {t.number}: RMS={lat_rms:.2f} um, Max={lat_max:.2f} um, WS={t.value:.2f}")

tests/conftest.py (new file)

@@ -0,0 +1,32 @@
"""
Pytest configuration and shared fixtures for Atomizer tests.
"""
import pytest
import sys
from pathlib import Path
# Add project root to path
sys.path.insert(0, str(Path(__file__).parent.parent))
@pytest.fixture
def sample_study_dir(tmp_path):
"""Create a temporary study directory structure."""
study = tmp_path / "test_study"
(study / "1_setup").mkdir(parents=True)
(study / "2_iterations").mkdir()
(study / "3_results").mkdir()
return study
@pytest.fixture
def sample_config():
"""Sample optimization config for testing."""
return {
"study_name": "test_study",
"design_variables": [
{"name": "param1", "lower": 0, "upper": 10, "type": "continuous"}
],
"objectives": [
{"name": "minimize_mass", "direction": "minimize"}
]
}