# AtomizerField Neural Optimization Guide

## 🚀 Overview

This guide explains how to use AtomizerField neural network surrogates with Atomizer to achieve **600x-500,000x speedup** in FEA-based optimization. By replacing expensive 30-minute FEA simulations with 50 ms neural network predictions, you can explore roughly 1000x more design configurations in the same time.
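
Those headline figures follow directly from the ratio of solver wall times. A quick sanity check (the 30 min and 50 ms figures are the pairing quoted above; the wider 600x-500,000x range corresponds to faster or slower FEA solves at either end, and is illustrative here):

```python
# Speedup is just the ratio of FEA wall time to NN inference time.
def speedup(fea_seconds: float, nn_seconds: float) -> float:
    return fea_seconds / nn_seconds

print(f"{speedup(30 * 60, 0.050):,.0f}x")  # → 36,000x
```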

## Table of Contents

1. [Quick Start](#quick-start)
2. [Architecture](#architecture)
3. [Configuration](#configuration)
4. [Training Data Collection](#training-data-collection)
5. [Neural Model Training](#neural-model-training)
6. [Hybrid Optimization Strategies](#hybrid-optimization-strategies)
7. [Performance Monitoring](#performance-monitoring)
8. [Troubleshooting](#troubleshooting)
9. [Best Practices](#best-practices)

## Quick Start

### 1. Enable Neural Surrogate in Your Study

Add the following to your `workflow_config.json`:

```json
{
  "neural_surrogate": {
    "enabled": true,
    "model_checkpoint": "atomizer-field/checkpoints/your_model.pt",
    "confidence_threshold": 0.85
  }
}
```

### 2. Use Neural-Enhanced Runner

```python
# In your run_optimization.py
from pathlib import Path

from optimization_engine.runner_with_neural import create_neural_runner

# Create neural-enhanced runner instead of standard runner
runner = create_neural_runner(
    config_path=Path("workflow_config.json"),
    model_updater=update_nx_model,
    simulation_runner=run_simulation,
    result_extractors=extractors
)

# Run optimization with automatic neural acceleration
study = runner.run(n_trials=1000)  # Can now afford 1000s of trials!
```

### 3. Monitor Speedup

After optimization, you'll see:

```
============================================================
NEURAL NETWORK SPEEDUP SUMMARY
============================================================
Trials using neural network: 950/1000 (95.0%)
Average NN inference time: 0.052 seconds
Average NN confidence: 92.3%
Estimated speedup: 34,615x
Time saved: ~475.0 hours
============================================================
```

## Architecture

### Component Overview

```
┌─────────────────────────────────────────────────────────┐
│                   Atomizer Framework                    │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  ┌──────────────────┐        ┌─────────────────────┐    │
│  │   Optimization   │  ───>  │  Neural-Enhanced    │    │
│  │      Runner      │        │       Runner        │    │
│  └──────────────────┘        └─────────────────────┘    │
│           │                             │               │
│           │                             ▼               │
│           │                  ┌─────────────────────┐    │
│           │                  │  Neural Surrogate   │    │
│           │                  │       Manager       │    │
│           │                  └─────────────────────┘    │
│           │                             │               │
│           ▼                             ▼               │
│  ┌──────────────────┐        ┌─────────────────────┐    │
│  │  NX FEA Solver   │        │  AtomizerField NN   │    │
│  │   (30 minutes)   │        │       (50 ms)       │    │
│  └──────────────────┘        └─────────────────────┘    │
│                                                         │
└─────────────────────────────────────────────────────────┘
```

### Key Components

1. **NeuralOptimizationRunner** (`optimization_engine/runner_with_neural.py`)
   - Extends base runner with neural capabilities
   - Manages hybrid FEA/NN decisions
   - Tracks performance metrics

2. **NeuralSurrogate** (`optimization_engine/neural_surrogate.py`)
   - Loads AtomizerField models
   - Converts design variables to graph format
   - Provides confidence-based predictions

3. **HybridOptimizer** (`optimization_engine/neural_surrogate.py`)
   - Implements smart switching strategies
   - Manages exploration/exploitation phases
   - Handles model retraining

4. **TrainingDataExporter** (`optimization_engine/training_data_exporter.py`)
   - Exports FEA results for neural training
   - Saves .dat/.op2 files with metadata

## Configuration

### Complete Neural Configuration

```json
{
  "study_name": "advanced_optimization_with_neural",

  "neural_surrogate": {
    "enabled": true,
    "model_checkpoint": "atomizer-field/checkpoints/model_v2.0/best.pt",
    "confidence_threshold": 0.85,
    "fallback_to_fea": true,

    "ensemble_models": [
      "atomizer-field/checkpoints/model_v2.0/fold_1.pt",
      "atomizer-field/checkpoints/model_v2.0/fold_2.pt",
      "atomizer-field/checkpoints/model_v2.0/fold_3.pt"
    ],

    "device": "cuda",
    "batch_size": 32,
    "cache_predictions": true,
    "cache_size": 10000
  },

  "hybrid_optimization": {
    "enabled": true,
    "exploration_trials": 30,
    "training_interval": 100,
    "validation_frequency": 20,
    "min_training_samples": 50,

    "phases": [
      {
        "name": "exploration",
        "trials": [0, 30],
        "use_nn": false,
        "description": "Initial FEA exploration"
      },
      {
        "name": "exploitation",
        "trials": [31, 950],
        "use_nn": true,
        "description": "Neural network exploitation"
      },
      {
        "name": "validation",
        "trials": [951, 1000],
        "use_nn": false,
        "description": "Final FEA validation"
      }
    ],

    "adaptive_switching": true,
    "drift_threshold": 0.15,
    "retrain_on_drift": true
  },

  "training_data_export": {
    "enabled": true,
    "export_dir": "atomizer_field_training_data/my_study",
    "include_failed_trials": false,
    "compression": "gzip"
  }
}
```

### Configuration Parameters

#### neural_surrogate
- `enabled`: Enable/disable the neural surrogate
- `model_checkpoint`: Path to the trained PyTorch model
- `confidence_threshold`: Minimum confidence to use the NN (0.0-1.0)
- `fallback_to_fea`: Use FEA when confidence is low
- `ensemble_models`: List of models for ensemble predictions
- `device`: `"cuda"` or `"cpu"`
- `batch_size`: Batch size for neural inference
- `cache_predictions`: Cache NN predictions for repeated designs
- `cache_size`: Maximum number of cached predictions

#### hybrid_optimization
- `exploration_trials`: Number of initial FEA trials
- `training_interval`: Retrain the NN every N trials
- `validation_frequency`: Validate the NN against FEA every N trials
- `min_training_samples`: Minimum number of samples required before training
- `phases`: List of optimization phases with different strategies
- `adaptive_switching`: Dynamically adjust FEA/NN usage
- `drift_threshold`: Maximum prediction error before retraining
- `retrain_on_drift`: Retrain automatically when drift exceeds the threshold
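
Misconfigured paths or thresholds usually surface mid-run; a cheap pre-flight check catches them earlier. The key names below match the reference above, but the checks themselves are an illustrative sketch, not part of Atomizer:

```python
import json
from pathlib import Path

def check_neural_config(path: Path) -> list:
    """Collect obvious problems in the neural_surrogate block before a run."""
    cfg = json.loads(path.read_text())
    ns = cfg.get("neural_surrogate", {})
    problems = []
    if ns.get("enabled"):
        ckpt = ns.get("model_checkpoint")
        if not ckpt:
            problems.append("model_checkpoint is missing")
        elif not Path(ckpt).exists():
            problems.append(f"checkpoint not found: {ckpt}")
        thr = ns.get("confidence_threshold", 0.85)
        if not 0.0 <= thr <= 1.0:
            problems.append(f"confidence_threshold out of range: {thr}")
        if not ns.get("fallback_to_fea", True):
            problems.append("fallback_to_fea is disabled; low-confidence "
                            "predictions will never be re-checked with FEA")
    return problems
```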

## Training Data Collection

### Automatic Export During Optimization

Training data is automatically exported when enabled:

```json
{
  "training_data_export": {
    "enabled": true,
    "export_dir": "atomizer_field_training_data/beam_study"
  }
}
```

Directory structure created:
```
atomizer_field_training_data/beam_study/
├── trial_0001/
│   ├── input/
│   │   └── model.bdf        # NX Nastran input
│   ├── output/
│   │   └── model.op2        # Binary results
│   └── metadata.json        # Design vars, objectives
├── trial_0002/
│   └── ...
├── study_summary.json
└── README.md
```
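
An exported study can be scanned through the per-trial `metadata.json` files without touching the heavy `.bdf`/`.op2` files. A minimal sketch, assuming the `trial_NNNN/metadata.json` layout shown above (the keys inside `metadata.json` may differ between `TrainingDataExporter` versions):

```python
import json
from pathlib import Path

def load_trial_metadata(study_dir: Path):
    """Yield (trial_number, metadata) for each exported trial directory."""
    for trial_dir in sorted(study_dir.glob("trial_*")):
        meta_file = trial_dir / "metadata.json"
        if meta_file.exists():
            # trial_0001 -> 1
            number = int(trial_dir.name.split("_")[1])
            yield number, json.loads(meta_file.read_text())
```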

### Manual Export of Existing Studies

```python
from pathlib import Path

from optimization_engine.training_data_exporter import TrainingDataExporter

exporter = TrainingDataExporter(
    export_dir=Path("training_data/my_study"),
    study_name="beam_optimization",
    design_variable_names=["width", "height", "thickness"],
    objective_names=["max_stress", "mass"]
)

# Export each trial
for trial in existing_trials:
    exporter.export_trial(
        trial_number=trial.number,
        design_variables=trial.params,
        results=trial.values,
        simulation_files={
            'dat_file': Path(f"sim_{trial.number}.dat"),
            'op2_file': Path(f"sim_{trial.number}.op2")
        }
    )

exporter.finalize()
```

## Neural Model Training

### 1. Prepare Training Data

```bash
cd atomizer-field
python batch_parser.py --data-dir ../atomizer_field_training_data/beam_study
```

This converts BDF/OP2 files to PyTorch Geometric format.

### 2. Train Neural Network

```bash
python train.py \
    --data-dir training_data/parsed/ \
    --epochs 200 \
    --model GraphUNet \
    --hidden-channels 128 \
    --num-layers 4 \
    --physics-loss-weight 0.3
```

### 3. Validate Model

```bash
python validate.py --checkpoint checkpoints/model_v2.0/best.pt
```

Expected output:
```
Validation Results:
- Mean Absolute Error: 2.34 MPa (1.2%)
- R² Score: 0.987
- Inference Time: 52ms ± 8ms
- Physics Constraint Violations: 0.3%
```
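
If you want to reproduce the MAE and R² figures yourself from raw predictions (e.g. on a held-out set loaded as NumPy arrays), the formulas are short. This is a generic sketch, not `validate.py` itself:

```python
import numpy as np

def regression_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Mean absolute error and R² score, the two headline numbers above."""
    err = y_pred - y_true
    mae = float(np.mean(np.abs(err)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    return {"mae": mae, "r2": 1.0 - ss_res / ss_tot}
```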

### 4. Deploy Model

Copy trained model to Atomizer:
```bash
cp checkpoints/model_v2.0/best.pt ../studies/my_study/neural_model.pt
```

## Hybrid Optimization Strategies

### Strategy 1: Phased Optimization

```python
# Pseudocode: use_fea(), use_nn() and train_neural_network() stand in
# for the runner's solver dispatch.

# Phase 1: Exploration (FEA)
# Collect diverse training data
for trial in range(30):
    use_fea()  # Always use FEA

# Phase 2: Training
# Train neural network on collected data
train_neural_network()

# Phase 3: Exploitation (NN)
# Use NN for rapid optimization
for trial in range(30, 950):
    if confidence > 0.85:
        use_nn()   # Fast neural network
    else:
        use_fea()  # Fallback to FEA

# Phase 4: Validation (FEA)
# Validate best designs with FEA
for trial in range(950, 1000):
    use_fea()  # Final validation
```

### Strategy 2: Adaptive Switching

```python
class AdaptiveStrategy:
    def should_use_nn(self, trial_number):
        # Start with exploration
        if trial_number < 20:
            return False

        # Check prediction accuracy
        if self.recent_error > 0.15:
            self.retrain_model()
            return False

        # Periodic validation
        if trial_number % 50 == 0:
            return False  # Validate with FEA

        # High-stakes decisions
        if self.near_optimal_region():
            return False  # Use FEA for critical designs

        return True  # Use NN for everything else
```

### Strategy 3: Uncertainty-Based

```python
import random

import numpy as np

def decide_solver(design_vars, ensemble_models):
    # Get predictions from ensemble
    predictions = [model.predict(design_vars) for model in ensemble_models]

    # Calculate uncertainty (coefficient of variation)
    mean_pred = np.mean(predictions)
    std_pred = np.std(predictions)
    confidence = 1.0 - (std_pred / abs(mean_pred))

    if confidence > 0.9:
        return "neural", mean_pred
    elif confidence > 0.7:
        # Mixed strategy
        if random.random() < confidence:
            return "neural", mean_pred
        else:
            return "fea", None
    else:
        return "fea", None
```

## Performance Monitoring

### Real-Time Metrics

The neural runner tracks performance automatically:

```
Trial 42: Used neural network (confidence: 94.2%, time: 0.048s)
Trial 43: Neural confidence too low (72.1%), using FEA
Trial 44: Used neural network (confidence: 91.8%, time: 0.051s)
```

### Post-Optimization Analysis

```python
import numpy as np

# Access performance metrics
metrics = runner.neural_speedup_tracker

# Calculate statistics
avg_speedup = np.mean([m['speedup'] for m in metrics])
total_time_saved = sum(m['time_saved'] for m in metrics)

# Export detailed report
runner.export_performance_report("neural_performance.json")
```

### Visualization

```python
import matplotlib.pyplot as plt

# Plot confidence over trials
plt.figure(figsize=(10, 6))
plt.plot(trials, confidences, 'b-', label='NN Confidence')
plt.axhline(y=0.85, color='r', linestyle='--', label='Threshold')
plt.xlabel('Trial Number')
plt.ylabel('Confidence')
plt.title('Neural Network Confidence During Optimization')
plt.legend()
plt.savefig('nn_confidence.png')

# Plot speedup
plt.figure(figsize=(10, 6))
plt.bar(phases, speedups, color=['red', 'yellow', 'green'])
plt.xlabel('Optimization Phase')
plt.ylabel('Speedup Factor')
plt.title('Speedup by Optimization Phase')
plt.savefig('speedup_by_phase.png')
```

## Troubleshooting

### Common Issues and Solutions

#### 1. Low Neural Network Confidence
```
WARNING: Neural confidence below threshold (65.3% < 85%)
```

**Solutions:**
- Train with more diverse data
- Reduce confidence threshold
- Use ensemble models
- Check if design is out of training distribution
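
The last point can be checked mechanically if you record the range of each design variable seen during training. A box-bounds sketch (ensemble variance is a more robust out-of-distribution signal, but this catches the obvious cases; none of these names are Atomizer API):

```python
def out_of_distribution(design: dict, train_bounds: dict, margin: float = 0.0) -> list:
    """Names of design variables outside the training range.

    train_bounds maps variable name -> (min, max) observed in training;
    margin widens each range by a fraction of its span.
    """
    flagged = []
    for name, value in design.items():
        lo, hi = train_bounds[name]
        span = hi - lo
        if value < lo - margin * span or value > hi + margin * span:
            flagged.append(name)
    return flagged
```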

#### 2. Model Loading Error
```
ERROR: Could not load model checkpoint: file not found
```

**Solutions:**
- Verify path in config
- Check file permissions
- Ensure model is compatible with current AtomizerField version

#### 3. Slow Neural Inference
```
WARNING: Neural inference taking 2.3s (expected <100ms)
```

**Solutions:**
- Use GPU acceleration (`device: "cuda"`)
- Reduce batch size
- Enable prediction caching
- Check model complexity

#### 4. Prediction Drift
```
WARNING: Neural predictions drifting from FEA (error: 18.2%)
```

**Solutions:**
- Retrain model with recent data
- Increase validation frequency
- Adjust drift threshold
- Check for distribution shift

### Debugging Tips

1. **Enable Verbose Logging**
   ```python
   import logging
   logging.basicConfig(level=logging.DEBUG)
   ```

2. **Test Neural Model Standalone**
   ```python
   from optimization_engine.neural_surrogate import NeuralSurrogate

   surrogate = NeuralSurrogate(model_path="model.pt")
   test_design = {"width": 50, "height": 75, "thickness": 5}
   pred, conf, used_nn = surrogate.predict(test_design)
   print(f"Prediction: {pred}, Confidence: {conf}")
   ```

3. **Compare NN vs FEA**
   ```python
   # Force FEA for comparison
   fea_result = runner.simulation_runner(design_vars)
   nn_result, _, _ = runner.neural_surrogate.predict(design_vars)
   error = abs(fea_result - nn_result) / fea_result * 100
   print(f"Relative error: {error:.1f}%")
   ```

## Best Practices

### 1. Data Quality
- **Diverse Training Data**: Ensure training covers the full design space
- **Quality Control**: Validate FEA results before training
- **Incremental Training**: Continuously improve the model with new data

### 2. Model Selection
- **Start Simple**: Begin with smaller models; increase complexity as needed
- **Ensemble Methods**: Use 3-5 models for robust predictions
- **Physics Constraints**: Include a physics loss for better generalization

### 3. Optimization Strategy
- **Conservative Start**: Use a high confidence threshold initially
- **Adaptive Approach**: Adjust the strategy based on performance
- **Validation**: Always validate final designs with FEA

### 4. Performance Optimization
- **GPU Acceleration**: Use CUDA for ~10x faster inference
- **Batch Processing**: Process multiple designs simultaneously
- **Caching**: Cache predictions for repeated designs
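
If your runner version does not cache for you, a small wrapper is enough; designs are rounded and sorted before hashing so near-identical floats and reordered keys share an entry. A sketch only — `cache_predictions` in the config is the built-in route:

```python
from functools import lru_cache

def make_cached_predictor(predict_fn, decimals: int = 6, maxsize: int = 10_000):
    """Wrap a surrogate's predict(design_dict) call with an LRU cache.

    Dicts are not hashable, so each design is converted to a sorted,
    rounded tuple first; `decimals` controls how close two designs must
    be to share a cache entry.
    """
    @lru_cache(maxsize=maxsize)
    def _cached(key):
        return predict_fn(dict(key))

    def predict(design: dict):
        key = tuple(sorted((k, round(v, decimals)) for k, v in design.items()))
        return _cached(key)

    predict.cache_info = _cached.cache_info
    return predict
```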

### 5. Safety and Reliability
- **Fallback Mechanism**: Always have an FEA fallback
- **Confidence Monitoring**: Track and log confidence levels
- **Periodic Validation**: Regularly check NN accuracy

## Example: Complete Workflow

### Step 1: Initial FEA Study
```bash
# Run initial optimization with training data export
python run_optimization.py --trials 50 --export-training-data
```

### Step 2: Train Neural Model
```bash
cd atomizer-field
python batch_parser.py --data-dir ../training_data
python train.py --epochs 200
```

### Step 3: Neural-Enhanced Optimization
```python
# Update config to use neural model
config["neural_surrogate"]["enabled"] = True
config["neural_surrogate"]["model_checkpoint"] = "model.pt"

# Run with 1000s of trials
runner = create_neural_runner(config_path, ...)
study = runner.run(n_trials=5000)  # Now feasible!
```

### Step 4: Validate Results
```python
# Get top 10 designs
best_designs = study.best_trials[:10]

# Validate with FEA
for design in best_designs:
    fea_result = validate_with_fea(design.params)
    print(f"Design {design.number}: NN={design.value:.2f}, FEA={fea_result:.2f}")
```

## Parametric Surrogate Model (NEW)

The **ParametricSurrogate** is a design-conditioned GNN that predicts **all four optimization objectives** directly from design parameters, providing a future-proof solution for neural-accelerated optimization.

### Key Features

- **Predicts all objectives**: mass, frequency, max_displacement, max_stress
- **Design-conditioned**: Takes design variables as explicit input
- **Ultra-fast inference**: ~4.5 ms per prediction (vs ~10 s FEA, a ~2000x speedup)
- **GPU accelerated**: Uses CUDA for fast inference

### Quick Start

```python
from optimization_engine.neural_surrogate import create_parametric_surrogate_for_study

# Create surrogate with auto-detection
surrogate = create_parametric_surrogate_for_study()

# Predict all objectives
test_params = {
    "beam_half_core_thickness": 7.0,
    "beam_face_thickness": 2.5,
    "holes_diameter": 35.0,
    "hole_count": 10.0
}

results = surrogate.predict(test_params)
print(f"Mass: {results['mass']:.2f} g")
print(f"Frequency: {results['frequency']:.2f} Hz")
print(f"Max displacement: {results['max_displacement']:.6f} mm")
print(f"Max stress: {results['max_stress']:.2f} MPa")
print(f"Inference time: {results['inference_time_ms']:.2f} ms")
```

### Architecture

```
Design Parameters (4)
         │
    ┌────▼────┐
    │ Design  │
    │ Encoder │  (MLP: 4 -> 64 -> 128)
    └────┬────┘
         │
    ┌────▼────┐
    │  GNN    │  (Design-conditioned message passing)
    │ Layers  │  (4 layers, 128 hidden channels)
    └────┬────┘
         │
    ┌────▼────┐
    │ Global  │  (Mean + Max pooling)
    │  Pool   │
    └────┬────┘
         │
    ┌────▼────┐
    │ Scalar  │  (MLP: 384 -> 128 -> 64 -> 4)
    │ Heads   │
    └────┬────┘
         │
         ▼
4 Objectives: [mass, frequency, displacement, stress]
```

### Training the Parametric Model

```bash
cd atomizer-field
python train_parametric.py \
    --train_dir ../atomizer_field_training_data/uav_arm_train \
    --val_dir ../atomizer_field_training_data/uav_arm_val \
    --epochs 200 \
    --output_dir runs/parametric_model
```

### Model Location

Trained models are stored in:
```
atomizer-field/runs/parametric_uav_arm_v2/
├── checkpoint_best.pt    # Best validation loss model
├── config.json           # Model configuration
└── training_log.csv      # Training history
```

### Integration with Optimization

The ParametricSurrogate can be used as a drop-in replacement for FEA during optimization:

```python
from optimization_engine.neural_surrogate import create_parametric_surrogate_for_study

# In your objective function
surrogate = create_parametric_surrogate_for_study()

def fast_objective(design_params):
    """Use neural network instead of FEA"""
    results = surrogate.predict(design_params)
    return results['mass'], results['frequency']
```

## Advanced Topics

### Custom Neural Architectures
See the [AtomizerField documentation](https://github.com/Anto01/Atomizer-Field/docs) for implementing custom GNN architectures.

### Multi-Fidelity Optimization
Combine low-fidelity (coarse mesh) and high-fidelity (fine mesh) simulations with neural surrogates.

### Transfer Learning
Use pre-trained models from similar problems to accelerate training.

### Active Learning
Intelligently select which designs to evaluate with FEA for maximum learning.
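
In its simplest form this means sending FEA the candidate the ensemble disagrees on most. A sketch under the assumption that `ensemble_predict(design)` returns one value per ensemble member (both names are hypothetical):

```python
import numpy as np

def pick_next_fea_candidate(candidates, ensemble_predict):
    """Choose the design the ensemble is least sure about.

    The candidate with the largest prediction spread (std across ensemble
    members) is sent to FEA so the next retraining round learns the most.
    """
    spreads = [float(np.std(ensemble_predict(c))) for c in candidates]
    return candidates[int(np.argmax(spreads))]
```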

## Summary

AtomizerField neural surrogates enable:

- ✅ **600x-500,000x speedup** over traditional FEA
- ✅ **Explore 1000x more designs** in the same time
- ✅ **Maintain accuracy** with confidence-based fallback
- ✅ **Seamless integration** with existing Atomizer workflows
- ✅ **Continuous improvement** through online learning

Start with the Quick Start section and gradually adopt more advanced features as needed.

For questions or issues, see the [AtomizerField GitHub](https://github.com/Anto01/Atomizer-Field) or the [Atomizer documentation](../README.md).