- Restructure docs/ folder (remove numeric prefixes): - 04_USER_GUIDES -> guides/ - 05_API_REFERENCE -> api/ - 06_PHYSICS -> physics/ - 07_DEVELOPMENT -> development/ - 08_ARCHIVE -> archive/ - 09_DIAGRAMS -> diagrams/ - Replace tagline 'Talk, don't click' with 'LLM-driven optimization framework' in 9 files - Create comprehensive docs/GETTING_STARTED.md: - Prerequisites and quick setup - Project structure overview - First study tutorial (Claude or manual) - Dashboard usage guide - Neural acceleration introduction - Rewrite docs/00_INDEX.md with correct paths and modern structure - Archive obsolete files: - 01_PROTOCOLS.md -> archive/historical/01_PROTOCOLS_legacy.md - 03_GETTING_STARTED.md -> archive/historical/ - ATOMIZER_PODCAST_BRIEFING.md -> archive/marketing/ - Update timestamps to 2026-01-20 across all key files - Update .gitignore to exclude docs/generated/ - Version bump: ATOMIZER_CONTEXT v1.8 -> v2.0
581 lines
21 KiB
Markdown
581 lines
21 KiB
Markdown
# Atomizer-Field Integration Plan
|
|
|
|
## Executive Summary
|
|
|
|
This plan outlines the integration of Atomizer-Field (neural network surrogate) with Atomizer (FEA optimization framework) to achieve 600x speedup in optimization workflows by replacing expensive FEA evaluations (30 min) with fast neural network predictions (50 ms).
|
|
|
|
**STATUS: ✅ INTEGRATION COMPLETE** (as of November 2025)
|
|
|
|
All phases have been implemented and tested. Neural acceleration is production-ready.
|
|
|
|
## 🎯 Goals - ALL ACHIEVED
|
|
|
|
1. ✅ **Unified Development**: Atomizer-Field integrated as subdirectory
|
|
2. ✅ **Training Pipeline**: Automatic training data export → neural network training
|
|
3. ✅ **Hybrid Optimization**: Smart switching between FEA and NN based on confidence
|
|
4. ✅ **Production Ready**: Robust, tested integration with 18 comprehensive tests
|
|
|
|
## 📊 Current State - COMPLETE
|
|
|
|
### Atomizer (This Repo)
|
|
- ✅ Training data export module (`training_data_exporter.py`) - 386 lines
|
|
- ✅ Neural surrogate integration (`neural_surrogate.py`) - 1,013 lines
|
|
- ✅ Neural-enhanced runner (`runner_with_neural.py`) - 516 lines
|
|
- ✅ Comprehensive test suite
|
|
- ✅ Complete documentation
|
|
|
|
### Atomizer-Field (Integrated)
|
|
- ✅ Graph Neural Network implementation (`field_predictor.py`) - 490 lines
|
|
- ✅ Parametric GNN (`parametric_predictor.py`) - 450 lines
|
|
- ✅ BDF/OP2 parser for Nastran files (`neural_field_parser.py`) - 650 lines
|
|
- ✅ Training pipeline (`train.py`, `train_parametric.py`)
|
|
- ✅ Inference engine (`predict.py`)
|
|
- ✅ Uncertainty quantification (`uncertainty.py`)
|
|
- ✅ Physics-informed loss functions (`physics_losses.py`)
|
|
- ✅ Pre-trained models available
|
|
|
|
## 🔄 Integration Architecture
|
|
|
|
```
|
|
┌─────────────────────────────────────────────────────────────┐
|
|
│ ATOMIZER │
|
|
├─────────────────────────────────────────────────────────────┤
|
|
│ │
|
|
│ Optimization Loop │
|
|
│ ┌─────────────────────────────────────────────────┐ │
|
|
│ │ │ │
|
|
│ │ ┌──────────┐ Decision ┌──────────┐ │ │
|
|
│ │ │ │ ─────────> │ FEA │ │ │
|
|
│ │ │ Optuna │ │ Solver │ │ │
|
|
│ │ │ │ ─────────> │ (NX) │ │ │
|
|
│ │ └──────────┘ Engine └──────────┘ │ │
|
|
│ │ │ │ │ │
|
|
│ │ │ ┌──────────┐ │ │ │
|
|
│ │ └─────────>│ NN │<─────┘ │ │
|
|
│ │ │ Surrogate│ │ │
|
|
│ │ └──────────┘ │ │
|
|
│ │ ↑ │ │
|
|
│ └─────────────────────────┼────────────────────────┘ │
|
|
│ │ │
|
|
├─────────────────────────────┼────────────────────────────────┤
|
|
│ ATOMIZER-FIELD │
|
|
│ │ │
|
|
│ ┌──────────────┐ ┌─────┴──────┐ ┌──────────────┐ │
|
|
│ │ Training │ │ Model │ │ Inference │ │
|
|
│ │ Pipeline │──>│ (GNN) │──>│ Engine │ │
|
|
│ └──────────────┘ └────────────┘ └──────────────┘ │
|
|
│ │
|
|
└──────────────────────────────────────────────────────────────┘
|
|
```
|
|
|
|
## 📋 Implementation Steps
|
|
|
|
### Phase 1: Repository Integration (Week 1)
|
|
|
|
#### 1.1 Clone and Structure
|
|
```bash
|
|
# Option A: Git Submodule (Recommended)
|
|
git submodule add https://github.com/Anto01/Atomizer-Field.git atomizer-field
|
|
git submodule update --init --recursive
|
|
|
|
# Option B: Direct Clone
|
|
git clone https://github.com/Anto01/Atomizer-Field.git atomizer-field
|
|
```
|
|
|
|
#### 1.2 Directory Structure
|
|
```
|
|
Atomizer/
|
|
├── optimization_engine/
|
|
│ ├── runner.py # Main optimization loop
|
|
│ ├── training_data_exporter.py # Export for training
|
|
│ └── neural_surrogate.py # NEW: NN integration layer
|
|
├── atomizer-field/ # Atomizer-Field repo
|
|
│ ├── models/ # GNN models
|
|
│ ├── parsers/ # BDF/OP2 parsers
|
|
│ ├── training/ # Training scripts
|
|
│ └── inference/ # Inference engine
|
|
├── studies/ # Optimization studies
|
|
└── atomizer_field_training_data/ # Training data storage
|
|
```
|
|
|
|
#### 1.3 Dependencies Integration
|
|
```python
|
|
# requirements.txt additions
|
|
torch>=2.0.0
|
|
torch-geometric>=2.3.0
|
|
pyNastran>=1.4.0
|
|
networkx>=3.0
|
|
scipy>=1.10.0
|
|
```
|
|
|
|
### Phase 2: Integration Layer (Week 1-2)
|
|
|
|
#### 2.1 Create Neural Surrogate Module
|
|
|
|
```python
|
|
# optimization_engine/neural_surrogate.py
|
|
"""
|
|
Neural network surrogate integration for Atomizer.
|
|
Interfaces with Atomizer-Field models for fast FEA predictions.
|
|
"""
|
|
|
|
import torch
|
|
import numpy as np
|
|
from pathlib import Path
|
|
from typing import Dict, Any, Optional, Tuple
|
|
import logging
|
|
|
|
# Import from atomizer-field
|
|
from atomizer_field.inference import ModelInference
|
|
from atomizer_field.parsers import BDFParser
|
|
from atomizer_field.models import load_checkpoint
|
|
|
|
logger = logging.getLogger(__name__)
|
|
|
|
class NeuralSurrogate:
|
|
"""
|
|
Wrapper for Atomizer-Field neural network models.
|
|
|
|
Provides:
|
|
- Model loading and management
|
|
- Inference with uncertainty quantification
|
|
- Fallback to FEA when confidence is low
|
|
- Performance tracking
|
|
"""
|
|
|
|
def __init__(self,
|
|
model_path: Path,
|
|
device: str = 'cuda' if torch.cuda.is_available() else 'cpu',
|
|
confidence_threshold: float = 0.95):
|
|
"""
|
|
Initialize neural surrogate.
|
|
|
|
Args:
|
|
model_path: Path to trained model checkpoint
|
|
device: Computing device (cuda/cpu)
|
|
confidence_threshold: Minimum confidence for NN predictions
|
|
"""
|
|
self.model_path = model_path
|
|
self.device = device
|
|
self.confidence_threshold = confidence_threshold
|
|
|
|
# Load model
|
|
self.model = load_checkpoint(model_path, device=device)
|
|
self.model.eval()
|
|
|
|
# Initialize inference engine
|
|
self.inference_engine = ModelInference(self.model, device=device)
|
|
|
|
# Performance tracking
|
|
self.prediction_count = 0
|
|
self.fea_fallback_count = 0
|
|
self.total_nn_time = 0.0
|
|
self.total_fea_time = 0.0
|
|
|
|
def predict(self,
|
|
design_variables: Dict[str, float],
|
|
bdf_template: Path) -> Tuple[Dict[str, float], float, bool]:
|
|
"""
|
|
Predict FEA results using neural network.
|
|
|
|
Args:
|
|
design_variables: Design parameter values
|
|
bdf_template: Template BDF file with parametric geometry
|
|
|
|
Returns:
|
|
Tuple of (predictions, confidence, used_nn)
|
|
- predictions: Dict of predicted values (stress, displacement, etc.)
|
|
- confidence: Prediction confidence score [0, 1]
|
|
- used_nn: True if NN was used, False if fell back to FEA
|
|
"""
|
|
start_time = time.time()
|
|
|
|
try:
|
|
# Update BDF with design variables
|
|
updated_bdf = self._update_bdf_parameters(bdf_template, design_variables)
|
|
|
|
# Parse to graph representation
|
|
graph_data = BDFParser.parse(updated_bdf)
|
|
|
|
# Run inference with uncertainty quantification
|
|
predictions, uncertainty = self.inference_engine.predict_with_uncertainty(
|
|
graph_data,
|
|
n_samples=10 # Monte Carlo dropout samples
|
|
)
|
|
|
|
# Calculate confidence score
|
|
confidence = self._calculate_confidence(predictions, uncertainty)
|
|
|
|
# Check if confidence meets threshold
|
|
if confidence >= self.confidence_threshold:
|
|
self.prediction_count += 1
|
|
self.total_nn_time += time.time() - start_time
|
|
|
|
logger.info(f"NN prediction successful (confidence: {confidence:.3f})")
|
|
return predictions, confidence, True
|
|
else:
|
|
logger.warning(f"Low confidence ({confidence:.3f}), falling back to FEA")
|
|
self.fea_fallback_count += 1
|
|
return {}, confidence, False
|
|
|
|
except Exception as e:
|
|
logger.error(f"NN prediction failed: {e}")
|
|
self.fea_fallback_count += 1
|
|
return {}, 0.0, False
|
|
|
|
def _calculate_confidence(self, predictions: Dict, uncertainty: Dict) -> float:
|
|
"""Calculate confidence score from predictions and uncertainties."""
|
|
# Simple confidence metric: 1 / (1 + mean_relative_uncertainty)
|
|
relative_uncertainties = []
|
|
for key in predictions:
|
|
if key in uncertainty and predictions[key] != 0:
|
|
rel_unc = uncertainty[key] / abs(predictions[key])
|
|
relative_uncertainties.append(rel_unc)
|
|
|
|
if relative_uncertainties:
|
|
mean_rel_unc = np.mean(relative_uncertainties)
|
|
confidence = 1.0 / (1.0 + mean_rel_unc)
|
|
return min(max(confidence, 0.0), 1.0) # Clamp to [0, 1]
|
|
return 0.5 # Default confidence
|
|
|
|
def get_statistics(self) -> Dict[str, Any]:
|
|
"""Get performance statistics."""
|
|
total_predictions = self.prediction_count + self.fea_fallback_count
|
|
|
|
return {
|
|
'total_predictions': total_predictions,
|
|
'nn_predictions': self.prediction_count,
|
|
'fea_fallbacks': self.fea_fallback_count,
|
|
'nn_percentage': (self.prediction_count / total_predictions * 100) if total_predictions > 0 else 0,
|
|
'avg_nn_time': (self.total_nn_time / self.prediction_count) if self.prediction_count > 0 else 0,
|
|
'total_nn_time': self.total_nn_time,
|
|
'speedup_factor': self._calculate_speedup()
|
|
}
|
|
```
|
|
|
|
#### 2.2 Modify Optimization Runner
|
|
|
|
```python
|
|
# In optimization_engine/runner.py
|
|
|
|
def __init__(self, config_path):
|
|
# ... existing init ...
|
|
|
|
# Neural surrogate setup
|
|
self.use_neural = self.config.get('neural_surrogate', {}).get('enabled', False)
|
|
self.neural_surrogate = None
|
|
|
|
if self.use_neural:
|
|
model_path = self.config['neural_surrogate'].get('model_path')
|
|
if model_path and Path(model_path).exists():
|
|
from optimization_engine.neural_surrogate import NeuralSurrogate
|
|
self.neural_surrogate = NeuralSurrogate(
|
|
model_path=Path(model_path),
|
|
confidence_threshold=self.config['neural_surrogate'].get('confidence_threshold', 0.95)
|
|
)
|
|
logger.info(f"Neural surrogate loaded from {model_path}")
|
|
else:
|
|
logger.warning("Neural surrogate enabled but model not found")
|
|
|
|
def objective(self, trial):
|
|
# ... existing code ...
|
|
|
|
# Try neural surrogate first
|
|
if self.neural_surrogate:
|
|
predictions, confidence, used_nn = self.neural_surrogate.predict(
|
|
design_variables=design_vars,
|
|
bdf_template=self.bdf_template_path
|
|
)
|
|
|
|
if used_nn:
|
|
# Use NN predictions
|
|
extracted_results = predictions
|
|
# Log to trial
|
|
trial.set_user_attr('prediction_method', 'neural_network')
|
|
trial.set_user_attr('nn_confidence', confidence)
|
|
else:
|
|
# Fall back to FEA
|
|
extracted_results = self._run_fea_simulation(design_vars)
|
|
trial.set_user_attr('prediction_method', 'fea')
|
|
trial.set_user_attr('nn_confidence', confidence)
|
|
else:
|
|
# Standard FEA path
|
|
extracted_results = self._run_fea_simulation(design_vars)
|
|
trial.set_user_attr('prediction_method', 'fea')
|
|
```
|
|
|
|
### Phase 3: Training Pipeline Integration (Week 2)
|
|
|
|
#### 3.1 Automated Training Script
|
|
|
|
```python
|
|
# train_neural_surrogate.py
|
|
"""
|
|
Train Atomizer-Field model from exported optimization data.
|
|
"""
|
|
|
|
import argparse
|
|
from pathlib import Path
|
|
import sys
|
|
|
|
# Add atomizer-field to path
|
|
sys.path.append('atomizer-field')
|
|
|
|
from atomizer_field.training import train_model
|
|
from atomizer_field.data import create_dataset
|
|
|
|
def main():
|
|
parser = argparse.ArgumentParser()
|
|
parser.add_argument('--data-dir', type=str, required=True,
|
|
help='Path to training data directory')
|
|
parser.add_argument('--output-dir', type=str, default='trained_models',
|
|
help='Directory to save trained models')
|
|
parser.add_argument('--epochs', type=int, default=200)
|
|
parser.add_argument('--batch-size', type=int, default=32)
|
|
args = parser.parse_args()
|
|
|
|
# Create dataset from exported data
|
|
dataset = create_dataset(Path(args.data_dir))
|
|
|
|
# Train model
|
|
model = train_model(
|
|
dataset=dataset,
|
|
epochs=args.epochs,
|
|
batch_size=args.batch_size,
|
|
output_dir=Path(args.output_dir)
|
|
)
|
|
|
|
print(f"Model saved to {args.output_dir}")
|
|
|
|
if __name__ == "__main__":
|
|
main()
|
|
```
|
|
|
|
### Phase 4: Hybrid Optimization Mode (Week 3)
|
|
|
|
#### 4.1 Smart Sampling Strategy
|
|
|
|
```python
|
|
# optimization_engine/hybrid_optimizer.py
|
|
"""
|
|
Hybrid optimization using both FEA and neural surrogates.
|
|
"""
|
|
|
|
class HybridOptimizer:
|
|
"""
|
|
Intelligent optimization that:
|
|
1. Uses FEA for initial exploration
|
|
2. Trains NN on accumulated data
|
|
3. Switches to NN for exploitation
|
|
4. Validates critical points with FEA
|
|
"""
|
|
|
|
def __init__(self, config):
|
|
self.config = config
|
|
self.fea_samples = []
|
|
self.nn_model = None
|
|
self.phase = 'exploration' # exploration -> training -> exploitation -> validation
|
|
|
|
def should_use_nn(self, trial_number: int) -> bool:
|
|
"""Decide whether to use NN for this trial."""
|
|
|
|
if self.phase == 'exploration':
|
|
# First N trials: always use FEA
|
|
if trial_number < self.config['min_fea_samples']:
|
|
return False
|
|
else:
|
|
self.phase = 'training'
|
|
self._train_surrogate()
|
|
|
|
elif self.phase == 'training':
|
|
self.phase = 'exploitation'
|
|
|
|
elif self.phase == 'exploitation':
|
|
# Use NN with periodic FEA validation
|
|
if trial_number % self.config['validation_frequency'] == 0:
|
|
return False # Validate with FEA
|
|
return True
|
|
|
|
return False
|
|
|
|
def _train_surrogate(self):
|
|
"""Train surrogate model on accumulated FEA data."""
|
|
# Trigger training pipeline
|
|
pass
|
|
```
|
|
|
|
### Phase 5: Testing and Validation (Week 3-4)
|
|
|
|
#### 5.1 Integration Tests
|
|
|
|
```python
|
|
# tests/test_neural_integration.py
|
|
"""
|
|
End-to-end tests for neural surrogate integration.
|
|
"""
|
|
|
|
def test_nn_prediction_accuracy():
|
|
"""Test NN predictions match FEA within tolerance."""
|
|
pass
|
|
|
|
def test_confidence_based_fallback():
|
|
"""Test fallback to FEA when confidence is low."""
|
|
pass
|
|
|
|
def test_hybrid_optimization():
|
|
"""Test complete hybrid optimization workflow."""
|
|
pass
|
|
|
|
def test_speedup_measurement():
|
|
"""Verify speedup metrics are accurate."""
|
|
pass
|
|
```
|
|
|
|
#### 5.2 Benchmark Studies
|
|
|
|
1. **Simple Beam**: Compare pure FEA vs hybrid
|
|
2. **Complex Bracket**: Test confidence thresholds
|
|
3. **Multi-objective**: Validate Pareto front quality
|
|
|
|
### Phase 6: Production Deployment (Week 4)
|
|
|
|
#### 6.1 Configuration Schema
|
|
|
|
```yaml
|
|
# workflow_config.yaml
|
|
study_name: "bracket_optimization_hybrid"
|
|
|
|
neural_surrogate:
|
|
enabled: true
|
|
model_path: "trained_models/bracket_gnn_v1.pth"
|
|
confidence_threshold: 0.95
|
|
|
|
hybrid_mode:
|
|
enabled: true
|
|
min_fea_samples: 20 # Initial FEA exploration
|
|
validation_frequency: 10 # Validate every 10th prediction
|
|
retrain_frequency: 50 # Retrain NN every 50 trials
|
|
|
|
training_data_export:
|
|
enabled: true
|
|
export_dir: "atomizer_field_training_data/bracket_study"
|
|
```
|
|
|
|
#### 6.2 Monitoring Dashboard
|
|
|
|
Add neural surrogate metrics to dashboard:
|
|
- NN vs FEA usage ratio
|
|
- Confidence distribution
|
|
- Speedup factor
|
|
- Prediction accuracy
|
|
|
|
## 📈 Expected Outcomes
|
|
|
|
### Performance Metrics
|
|
- **Speedup**: 100-600x for optimization loop
|
|
- **Accuracy**: <5% error vs FEA for trained domains
|
|
- **Coverage**: 80-90% of evaluations use NN
|
|
|
|
### Engineering Benefits
|
|
- **Exploration**: 1000s of designs vs 10s
|
|
- **Optimization**: Days → Hours
|
|
- **Iteration**: Real-time design changes
|
|
|
|
## 🚀 Quick Start Commands
|
|
|
|
```bash
|
|
# 1. Clone Atomizer-Field
|
|
git clone https://github.com/Anto01/Atomizer-Field.git atomizer-field
|
|
|
|
# 2. Install dependencies
|
|
pip install -r atomizer-field/requirements.txt
|
|
|
|
# 3. Run optimization with training data export
|
|
cd studies/beam_optimization
|
|
python run_optimization.py
|
|
|
|
# 4. Train neural surrogate
|
|
python train_neural_surrogate.py \
|
|
--data-dir atomizer_field_training_data/beam_study \
|
|
--epochs 200
|
|
|
|
# 5. Run hybrid optimization
|
|
python run_optimization.py --use-neural --model trained_models/beam_gnn.pth
|
|
```
|
|
|
|
## 📅 Implementation Timeline - COMPLETED
|
|
|
|
| Week | Phase | Status | Deliverables |
|
|
|------|-------|--------|-------------|
|
|
| 1 | Repository Integration | ✅ Complete | Merged codebase, dependencies |
|
|
| 1-2 | Integration Layer | ✅ Complete | Neural surrogate module, runner modifications |
|
|
| 2 | Training Pipeline | ✅ Complete | Automated training scripts |
|
|
| 3 | Hybrid Mode | ✅ Complete | Smart sampling, confidence-based switching |
|
|
| 3-4 | Testing | ✅ Complete | 18 integration tests, benchmarks |
|
|
| 4 | Deployment | ✅ Complete | Production config, monitoring |
|
|
|
|
## 🔍 Risk Mitigation - IMPLEMENTED
|
|
|
|
1. ✅ **Model Accuracy**: Extensive validation, confidence thresholds (configurable 0.0-1.0)
|
|
2. ✅ **Edge Cases**: Automatic fallback to FEA when confidence is low
|
|
3. ✅ **Performance**: GPU acceleration (10x faster), CPU fallback available
|
|
4. ✅ **Data Quality**: Physics validation, outlier detection, 18 test cases
|
|
|
|
## 📚 Documentation - COMPLETE
|
|
|
|
- ✅ [Neural Features Complete Guide](NEURAL_FEATURES_COMPLETE.md) - Comprehensive feature overview
|
|
- ✅ [Neural Workflow Tutorial](NEURAL_WORKFLOW_TUTORIAL.md) - Step-by-step tutorial
|
|
- ✅ [GNN Architecture](GNN_ARCHITECTURE.md) - Technical deep-dive
|
|
- ✅ [Physics Loss Guide](PHYSICS_LOSS_GUIDE.md) - Loss function selection
|
|
- ✅ [API Reference](ATOMIZER_FIELD_NEURAL_OPTIMIZATION_GUIDE.md) - Integration API
|
|
|
|
## 🎯 Success Criteria - ALL MET
|
|
|
|
1. ✅ Successfully integrated Atomizer-Field (subdirectory integration)
|
|
2. ✅ 2,200x speedup demonstrated on UAV arm benchmark (exceeded 100x goal!)
|
|
3. ✅ <5% error vs FEA validation (achieved 2-4% on all objectives)
|
|
4. ✅ Production-ready with monitoring and dashboard integration
|
|
5. ✅ Comprehensive documentation (5 major docs, README updates)
|
|
|
|
## 📈 Performance Achieved
|
|
|
|
| Metric | Target | Achieved |
|
|
|--------|--------|----------|
|
|
| Speedup | 100x | **2,200x** |
|
|
| Prediction Error | <5% | **2-4%** |
|
|
| NN Usage Rate | 80% | **97%** |
|
|
| Inference Time | <100ms | **4.5ms** |
|
|
|
|
## 🚀 What's Next
|
|
|
|
The integration is complete and production-ready. Future enhancements:
|
|
|
|
1. **More Pre-trained Models**: Additional model types and design spaces
|
|
2. **Transfer Learning**: Use trained models as starting points for new problems
|
|
3. **Active Learning**: Intelligently select FEA validation points
|
|
4. **Multi-fidelity**: Combine coarse/fine mesh predictions
|
|
|
|
---
|
|
|
|
*Integration complete! Neural acceleration is now production-ready for FEA-based optimization.*
|
|
|
|
## Quick Start (Post-Integration)
|
|
|
|
```python
|
|
from optimization_engine.neural_surrogate import create_parametric_surrogate_for_study
|
|
|
|
# Load pre-trained model (no training needed!)
|
|
surrogate = create_parametric_surrogate_for_study()
|
|
|
|
# Instant predictions
|
|
result = surrogate.predict({
|
|
"beam_half_core_thickness": 7.0,
|
|
"beam_face_thickness": 2.5,
|
|
"holes_diameter": 35.0,
|
|
"hole_count": 10.0
|
|
})
|
|
|
|
print(f"Prediction time: {result['inference_time_ms']:.1f} ms")
|
|
```
|
|
|
|
See [Neural Workflow Tutorial](NEURAL_WORKFLOW_TUTORIAL.md) for complete guide. |