feat: Add Zernike GNN surrogate module and M1 mirror V12/V13 studies
This commit introduces the GNN-based surrogate for Zernike mirror optimization and the M1 mirror study progression from V12 (GNN validation) to V13 (pure NSGA-II).

## GNN Surrogate Module (optimization_engine/gnn/)

New module for Graph Neural Network surrogate prediction of mirror deformations:

- `polar_graph.py`: PolarMirrorGraph - fixed 3000-node polar grid structure
- `zernike_gnn.py`: ZernikeGNN with design-conditioned message passing
- `differentiable_zernike.py`: GPU-accelerated Zernike fitting and objectives
- `train_zernike_gnn.py`: ZernikeGNNTrainer with multi-task loss
- `gnn_optimizer.py`: ZernikeGNNOptimizer for turbo mode (~900k trials/hour)
- `extract_displacement_field.py`: OP2 to HDF5 field extraction
- `backfill_field_data.py`: Extract fields from existing FEA trials

Key innovation: design-conditioned convolutions that modulate message passing based on structural design parameters, enabling accurate field prediction.

## M1 Mirror Studies

### V12: GNN Field Prediction + FEA Validation

- Zernike GNN trained on V10/V11 FEA data (238 samples)
- Turbo mode: 5000 GNN predictions → top candidates → FEA validation
- Calibration workflow for GNN-to-FEA error correction
- Scripts: run_gnn_turbo.py, validate_gnn_best.py, compute_full_calibration.py

### V13: Pure NSGA-II FEA (Ground Truth)

- Seeds 217 FEA trials from V11+V12
- Pure multi-objective NSGA-II without any surrogate
- Establishes the ground-truth Pareto front for GNN accuracy evaluation
- Narrowed the blank_backface_angle range to [4.0, 5.0]

## Documentation Updates

- SYS_14: Added Zernike GNN section with architecture diagrams
- CLAUDE.md: Added GNN module reference and quick start
- V13 README: Study documentation with seeding strategy

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
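The Zernike-fitting objective behind `differentiable_zernike.py` amounts to projecting a displacement field onto a Zernike basis. A minimal NumPy sketch, using only a low-order basis subset (the function names and basis ordering here are illustrative assumptions, not the module's actual API; the real implementation is GPU-accelerated and differentiable):

```python
import numpy as np

def zernike_basis(rho, theta):
    """A few low-order Zernike polynomials (piston, tilts, defocus,
    astigmatism) evaluated at polar coordinates (rho, theta)."""
    return np.stack([
        np.ones_like(rho),             # piston
        rho * np.cos(theta),           # x-tilt
        rho * np.sin(theta),           # y-tilt
        2 * rho**2 - 1,                # defocus
        rho**2 * np.cos(2 * theta),    # astigmatism 0 deg
        rho**2 * np.sin(2 * theta),    # astigmatism 45 deg
    ], axis=-1)

def fit_zernike(rho, theta, w):
    """Least-squares Zernike coefficients for a displacement field w."""
    B = zernike_basis(rho, theta)                  # (N, K) design matrix
    coeffs, *_ = np.linalg.lstsq(B, w, rcond=None)
    return coeffs
```

On noiseless synthetic data the least-squares fit recovers the generating coefficients exactly, which is a useful sanity check for any fitting backend.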
CLAUDE.md
@@ -2,6 +2,39 @@
You are the AI orchestrator for **Atomizer**, an LLM-first FEA optimization framework. Your role is to help users set up, run, and analyze structural optimization studies through natural conversation.

## Session Initialization (CRITICAL - Read on Every New Session)

On **EVERY new Claude session**, perform these initialization steps:

### Step 1: Load Context

1. Read `.claude/ATOMIZER_CONTEXT.md` for unified context (if not already loaded via this file)
2. This file (CLAUDE.md) provides system instructions
3. Use `.claude/skills/00_BOOTSTRAP.md` for task routing

### Step 2: Detect Study Context

If the working directory is inside a study (`studies/*/`):

1. Read `optimization_config.json` to understand the study
2. Check `2_results/study.db` for optimization status (trial count, state)
3. Summarize the study state to the user in the first response
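The status check in Step 2 can be sketched as a small SQLite query. This assumes an Optuna-style schema in `study.db` (a `trials` table with a `state` column); the actual schema may differ, so treat the table and column names as assumptions:

```python
import sqlite3
from pathlib import Path

def summarize_study(study_dir: str) -> dict:
    """Report trial counts from a study's results database.

    Assumes study.db has a `trials` table with a `state` column
    (Optuna-style); adjust the queries if the real schema differs.
    """
    db_path = Path(study_dir) / "2_results" / "study.db"
    if not db_path.exists():
        return {"exists": False, "trials": 0}
    con = sqlite3.connect(db_path)
    try:
        (total,) = con.execute("SELECT COUNT(*) FROM trials").fetchone()
        (done,) = con.execute(
            "SELECT COUNT(*) FROM trials WHERE state = 'COMPLETE'"
        ).fetchone()
    finally:
        con.close()
    return {"exists": True, "trials": total, "complete": done}
```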
### Step 3: Route by User Intent

| User Keywords | Load Protocol | Subagent Type |
|---------------|---------------|---------------|
| "create", "new", "set up" | OP_01, SYS_12 | general-purpose |
| "run", "start", "trials" | OP_02, SYS_15 | - (direct execution) |
| "status", "progress" | OP_03 | - (DB query) |
| "results", "analyze", "Pareto" | OP_04 | - (analysis) |
| "neural", "surrogate", "turbo" | SYS_14, SYS_15 | general-purpose |
| "NX", "model", "expression" | MCP siemens-docs | general-purpose |
| "error", "fix", "debug" | OP_06 | Explore |
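The routing table above can be sketched as a first-match keyword lookup. This is purely illustrative: the real orchestrator routes on LLM intent, not substring matching, and the mapping below just mirrors the table:

```python
# First-match keyword routing; each entry maps keyword triggers to
# (protocols to load, subagent type or None for direct execution).
ROUTES = {
    ("create", "new", "set up"): ("OP_01, SYS_12", "general-purpose"),
    ("run", "start", "trials"): ("OP_02, SYS_15", None),
    ("status", "progress"): ("OP_03", None),
    ("results", "analyze", "pareto"): ("OP_04", None),
    ("neural", "surrogate", "turbo"): ("SYS_14, SYS_15", "general-purpose"),
    ("nx", "model", "expression"): ("MCP siemens-docs", "general-purpose"),
    ("error", "fix", "debug"): ("OP_06", "Explore"),
}

def route(message: str):
    """Return (protocols, subagent) for the first keyword group matched."""
    text = message.lower()
    for keywords, target in ROUTES.items():
        if any(k in text for k in keywords):
            return target
    return (None, None)
```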
### Step 4: Proactive Actions

- If optimization is running: report progress automatically
- If no study context: offer to create one or list available studies
- After code changes: update documentation proactively (SYS_12, cheatsheet)

---
## Quick Start - Protocol Operating System

**For ANY task, first check**: `.claude/skills/00_BOOTSTRAP.md`

@@ -76,11 +109,37 @@ Atomizer/
│ ├── system/ # SYS_10 - SYS_15
│ └── extensions/ # EXT_01 - EXT_04
├── optimization_engine/ # Core Python modules
│ ├── extractors/ # Physics extraction library
│ └── gnn/ # GNN surrogate module (Zernike)
├── studies/ # User studies
└── atomizer-dashboard/ # React dashboard
```
## GNN Surrogate for Zernike Optimization

The `optimization_engine/gnn/` module provides Graph Neural Network surrogates for mirror optimization:

| Component | Purpose |
|-----------|---------|
| `polar_graph.py` | PolarMirrorGraph - fixed 3000-node polar grid |
| `zernike_gnn.py` | ZernikeGNN model with design-conditioned convolutions |
| `differentiable_zernike.py` | GPU-accelerated Zernike fitting |
| `train_zernike_gnn.py` | Training pipeline with multi-task loss |
| `gnn_optimizer.py` | ZernikeGNNOptimizer for turbo mode |
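The design-conditioned convolution idea — modulating message passing with the structural design parameters — can be illustrated with a single FiLM-style message-passing step in NumPy. This is a sketch only; the actual ZernikeGNN layer is a learned PyTorch module, and every name and shape here is an assumption:

```python
import numpy as np

def design_conditioned_message_pass(h, edges, design, w_msg, w_film):
    """One message-passing step where messages are scaled/shifted
    (FiLM-style) by a global design-parameter vector.

    h:      (N, F) node features on the polar mirror graph
    edges:  (E, 2) int array of (src, dst) node indices
    design: (D,)   structural design parameters
    w_msg:  (F, F) message weights
    w_film: (D, 2F) maps design params to per-feature scale and shift
    """
    F = h.shape[1]
    gamma_beta = design @ w_film                     # (2F,)
    gamma, beta = gamma_beta[:F], gamma_beta[F:]
    msg = (h @ w_msg) * (1.0 + gamma) + beta         # design-modulated messages
    out = np.zeros_like(h)
    np.add.at(out, edges[:, 1], msg[edges[:, 0]])    # sum messages at dst nodes
    return np.tanh(h + out)                          # residual update
```

The point of the conditioning is that the same fixed graph (the 3000-node polar grid) can produce different deformation fields for different designs, because the design vector reshapes every message.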
### Quick Start

```bash
# Train GNN on existing FEA data
python -m optimization_engine.gnn.train_zernike_gnn V11 V12 --epochs 200

# Run turbo optimization (5000 GNN trials)
cd studies/m1_mirror_adaptive_V12
python run_gnn_turbo.py --trials 5000
```

**Full documentation**: `docs/protocols/system/SYS_14_NEURAL_ACCELERATION.md`
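The turbo-mode workflow (many cheap GNN predictions, then FEA validation of only the best candidates) reduces to a score-and-select step. A minimal sketch, assuming a single scalar objective for simplicity — the real ZernikeGNNOptimizer is multi-objective, and `gnn_predict` here is a hypothetical callable:

```python
import numpy as np

def turbo_select(designs, gnn_predict, top_k=20):
    """Score candidate designs with the GNN surrogate and keep the
    best few for expensive FEA validation.

    designs:     (N, D) candidate design parameters
    gnn_predict: callable mapping (N, D) -> (N,) objective values
                 (lower is better); stand-in for the trained surrogate
    """
    scores = gnn_predict(designs)      # cheap surrogate evaluation
    order = np.argsort(scores)         # ascending: best candidates first
    keep = order[:top_k]
    return designs[keep], scores[keep]
```

This is the shape of the V12 loop: ~5000 surrogate evaluations, then FEA on only the selected candidates, with calibration correcting the residual GNN-to-FEA error.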
## CRITICAL: NX Open Development Protocol

### Always Use Official Documentation First

@@ -213,4 +272,66 @@ See `docs/protocols/operations/OP_06_TROUBLESHOOT.md` for full troubleshooting g

---
## Subagent Architecture

For complex tasks, spawn specialized subagents using the Task tool:

### Available Subagent Patterns

| Task Type | Subagent | Context to Provide |
|-----------|----------|-------------------|
| **Create Study** | `general-purpose` | Load `core/study-creation-core.md`, SYS_12. Task: Create complete study from description. |
| **NX Automation** | `general-purpose` | Use MCP siemens-docs tools. Query NXOpen classes before writing journals. |
| **Codebase Search** | `Explore` | Search for patterns, extractors, or understand existing code |
| **Architecture** | `Plan` | Design implementation approach for complex features |
| **Protocol Audit** | `general-purpose` | Validate config against SYS_12 extractors, check for issues |
### When to Use Subagents

**Use subagents for**:
- Creating new studies (complex, multi-file generation)
- NX API lookups and journal development
- Searching for patterns across multiple files
- Planning complex architectural changes

**Don't use subagents for**:
- Simple file reads/edits
- Running Python scripts
- Quick DB queries
- Direct user questions
### Subagent Prompt Template

When spawning a subagent, provide comprehensive context:

```
Context: [What the user wants]
Study: [Current study name if applicable]
Files to check: [Specific paths]
Task: [Specific deliverable expected]
Output: [What to return - files created, analysis, etc.]
```

---
## Auto-Documentation Protocol

When creating or modifying extractors/protocols, **proactively update docs**:

1. **New extractor created** →
   - Add to `optimization_engine/extractors/__init__.py`
   - Update `SYS_12_EXTRACTOR_LIBRARY.md`
   - Update `.claude/skills/01_CHEATSHEET.md`
   - Commit with: `feat: Add E{N} {name} extractor`

2. **Protocol updated** →
   - Update version in protocol header
   - Update `ATOMIZER_CONTEXT.md` version table
   - Mention in commit message

3. **New study template** →
   - Add to `optimization_engine/templates/registry.json`
   - Update `ATOMIZER_CONTEXT.md` template table

---
*Atomizer: Where engineers talk, AI optimizes.*