# Neural Acceleration Module

**Last Updated**: December 5, 2025
**Version**: 1.0
**Type**: Optional Module

This module provides guidance for AtomizerField neural network surrogate acceleration, enabling roughly 1000x faster optimization by replacing expensive FEA evaluations with near-instant neural predictions.

---

## When to Load

- User needs >50 optimization trials
- User mentions "neural", "surrogate", "NN", "machine learning"
- User wants faster optimization
- Exporting training data for neural networks

---

## Overview

**Key Innovation**: Train once on FEA data, then explore 50,000+ designs in the time it takes to run 50 FEA trials.

| Metric | Traditional FEA | Neural Network | Improvement |
|--------|-----------------|----------------|-------------|
| Time per evaluation | 10-30 minutes | 4.5 milliseconds | **~130,000-400,000x** |
| Trials per hour | 2-6 | 800,000+ | **>100,000x** |
| Design exploration | ~50 designs | ~50,000 designs | **~1000x** |

---

## Training Data Export (PR.9)

Enable training data export in your optimization config:

```json
{
  "training_data_export": {
    "enabled": true,
    "export_dir": "atomizer_field_training_data/my_study"
  }
}
```

### Using TrainingDataExporter

```python
from optimization_engine.training_data_exporter import TrainingDataExporter

training_exporter = TrainingDataExporter(
    export_dir=export_dir,
    study_name=study_name,
    design_variable_names=['param1', 'param2'],
    objective_names=['stiffness', 'mass'],
    constraint_names=['mass_limit'],
    metadata={'atomizer_version': '2.0', 'optimization_algorithm': 'NSGA-II'}
)

# In objective function:
training_exporter.export_trial(
    trial_number=trial.number,
    design_variables=design_vars,
    results={'objectives': {...}, 'constraints': {...}},
    simulation_files={'dat_file': dat_path, 'op2_file': op2_path}
)

# After optimization:
training_exporter.finalize()
```

### Training Data Structure

```
atomizer_field_training_data/{study_name}/
├── trial_0001/
│   ├── input/model.bdf       # Nastran input (mesh + params)
│   ├── output/model.op2      # Binary results
│   └── metadata.json         # Design params + objectives
├── trial_0002/
│   └── ...
└── study_summary.json        # Study-level metadata
```

**Recommended**: 100-500 FEA samples for good generalization.

---

## Neural Configuration

### Full Configuration Example

```json
{
  "study_name": "bracket_neural_optimization",
  "surrogate_settings": {
    "enabled": true,
    "model_type": "parametric_gnn",
    "model_path": "models/bracket_surrogate.pt",
    "confidence_threshold": 0.85,
    "validation_frequency": 10,
    "fallback_to_fea": true
  },
  "training_data_export": {
    "enabled": true,
    "export_dir": "atomizer_field_training_data/bracket_study",
    "export_bdf": true,
    "export_op2": true,
    "export_fields": ["displacement", "stress"]
  },
  "neural_optimization": {
    "initial_fea_trials": 50,
    "neural_trials": 5000,
    "retraining_interval": 500,
    "uncertainty_threshold": 0.15
  }
}
```

### Configuration Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `enabled` | bool | false | Enable neural surrogate |
| `model_type` | string | "parametric_gnn" | Model architecture |
| `model_path` | string | - | Path to trained model |
| `confidence_threshold` | float | 0.85 | Min confidence for predictions |
| `validation_frequency` | int | 10 | FEA validation every N trials |
| `fallback_to_fea` | bool | true | Use FEA when uncertain |

---

## Model Types

### Parametric Predictor GNN (Recommended)

Direct optimization objective prediction; the fastest option.

```
Design Parameters (ND) → Design Encoder (MLP) → GNN Backbone → Scalar Heads

Output (objectives):
├── mass (grams)
├── frequency (Hz)
├── max_displacement (mm)
└── max_stress (MPa)
```

**Use When**: You only need scalar objectives, not full field predictions.

### Field Predictor GNN

Full displacement/stress field prediction.
```
Input Features (12D per node):
├── Node coordinates (x, y, z)
├── Material properties (E, nu, rho)
├── Boundary conditions (fixed/free per DOF)
└── Load information (force magnitude, direction)

Output (per node):
├── Displacement (6 DOF: Tx, Ty, Tz, Rx, Ry, Rz)
└── Von Mises stress (1 value)
```

**Use When**: You need field visualization or complex derived quantities.

### Ensemble Models

Run multiple independently trained models and use their disagreement for uncertainty quantification.

```python
import numpy as np

# Run all N models in the ensemble on the same design
predictions = [model_i(x) for model_i in ensemble]

# Statistics: the ensemble mean is the prediction, the spread is the uncertainty
mean_prediction = np.mean(predictions, axis=0)
uncertainty = np.std(predictions, axis=0)

# Decision: trust the ensemble only when the models agree
if np.max(uncertainty) > threshold:
    result = run_fea(x)  # Fall back to FEA
else:
    result = mean_prediction
```

---

## Hybrid FEA/Neural Workflow

### Phase 1: FEA Exploration (50-100 trials)
- Run standard FEA optimization
- Export training data automatically
- Build landscape understanding

### Phase 2: Neural Training
- Parse collected data
- Train parametric predictor
- Validate accuracy

### Phase 3: Neural Acceleration (1000s of trials)
- Use neural network for rapid exploration
- Periodic FEA validation
- Retrain if distribution shifts

### Phase 4: FEA Refinement (10-20 trials)
- Validate top candidates with FEA
- Ensure results are physically accurate
- Generate final Pareto front

---

## Training Pipeline

### Step 1: Collect Training Data

Run optimization with export enabled:

```bash
python run_optimization.py --train --trials 100
```

### Step 2: Parse to Neural Format

```bash
cd atomizer-field
python batch_parser.py ../atomizer_field_training_data/my_study
```

### Step 3: Train Model

**Parametric Predictor** (recommended):

```bash
python train_parametric.py \
    --train_dir ../training_data/parsed \
    --val_dir ../validation_data/parsed \
    --epochs 200 \
    --hidden_channels 128 \
    --num_layers 4
```

**Field Predictor**:

```bash
python train.py \
    --train_dir ../training_data/parsed \
    --epochs 200 \
    --model FieldPredictorGNN \
    --hidden_channels 128 \
    --num_layers 6 \
    --physics_loss_weight 0.3
```

### Step 4: Validate

```bash
python validate.py --checkpoint runs/my_model/checkpoint_best.pt
```

Expected output:

```
Validation Results:
├── Mean Absolute Error: 2.3% (mass), 1.8% (frequency)
├── R² Score: 0.987
├── Inference Time: 4.5ms ± 0.8ms
└── Physics Violations: 0.2%
```

### Step 5: Deploy

Update config to use the trained model:

```json
{
  "neural_surrogate": {
    "enabled": true,
    "model_checkpoint": "atomizer-field/runs/my_model/checkpoint_best.pt",
    "confidence_threshold": 0.85
  }
}
```

---

## Uncertainty Thresholds

| Uncertainty | Action |
|-------------|--------|
| < 5% | Use neural prediction |
| 5-15% | Use neural, flag for validation |
| > 15% | Fall back to FEA |

---

## Accuracy Expectations

| Problem Type | Expected R² | Samples Needed |
|--------------|-------------|----------------|
| Well-behaved | > 0.95 | 50-100 |
| Moderate nonlinear | > 0.90 | 100-200 |
| Highly nonlinear | > 0.85 | 200-500 |

---

## AtomizerField Components

```
atomizer-field/
├── neural_field_parser.py     # BDF/OP2 parsing
├── field_predictor.py         # Field GNN
├── parametric_predictor.py    # Parametric GNN
├── train.py                   # Field training
├── train_parametric.py        # Parametric training
├── validate.py                # Model validation
├── physics_losses.py          # Physics-informed loss
└── batch_parser.py            # Batch data conversion

optimization_engine/
├── neural_surrogate.py        # Atomizer integration
└── runner_with_neural.py      # Neural runner
```

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| High prediction error | Insufficient training data | Collect more FEA samples |
| Out-of-distribution warnings | Design outside training range | Retrain with expanded range |
| Slow inference | Large mesh | Use parametric predictor instead |
| Physics violations | Low physics loss weight | Increase `physics_loss_weight` |

---

## Cross-References

- **System Protocol**: [SYS_14_NEURAL_ACCELERATION](../../docs/protocols/system/SYS_14_NEURAL_ACCELERATION.md)
- **Operations**: [OP_05_EXPORT_TRAINING_DATA](../../docs/protocols/operations/OP_05_EXPORT_TRAINING_DATA.md)
- **Core Skill**: [study-creation-core](../core/study-creation-core.md)
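
---

## Appendix: Example Sketches

The uncertainty-threshold routing described in the hybrid workflow can be wired into the evaluation loop. Below is a minimal sketch, assuming an ensemble of callable models and a `run_fea` function; these names are illustrative, not the shipped `optimization_engine` API.

```python
import numpy as np

def evaluate_design(x, ensemble, run_fea, low=0.05, high=0.15):
    """Route one design to the surrogate or to FEA based on ensemble spread.

    Thresholds mirror the uncertainty table: below 5%, trust the surrogate;
    5-15%, use it but flag the trial for later FEA validation; above 15%,
    fall back to FEA immediately. (Names here are illustrative.)
    """
    preds = np.array([model(x) for model in ensemble])
    mean = preds.mean(axis=0)
    # Relative uncertainty: worst-case ensemble std, normalized by the mean scale
    rel_unc = float(preds.std(axis=0).max() / (np.abs(mean).max() + 1e-12))
    if rel_unc > high:
        return run_fea(x), "fea"
    status = "validate" if rel_unc > low else "neural"
    return mean, status
```

A trial flagged `"validate"` would be queued for the periodic FEA check rather than re-run immediately, so the fast path stays fast.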
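
Phase 3's "retrain if distribution shifts" step needs a concrete trigger. One simple approach, sketched below, is to track the surrogate's relative error on the periodic FEA validation trials and signal retraining when a rolling window of errors drifts past a threshold. `ValidationTracker` is a hypothetical helper, not part of the AtomizerField codebase.

```python
class ValidationTracker:
    """Rolling neural-vs-FEA error on validation trials; signals retraining
    when the recent mean error exceeds a threshold (e.g. the 5% band).
    Illustrative sketch only."""

    def __init__(self, window=10, error_threshold=0.05):
        self.window = window
        self.error_threshold = error_threshold
        self.errors = []

    def record(self, neural_pred, fea_result):
        # Relative error of the surrogate against the FEA ground truth
        rel_err = abs(neural_pred - fea_result) / (abs(fea_result) + 1e-12)
        self.errors.append(rel_err)

    def needs_retraining(self):
        # Only decide once a full window of validation trials is available
        recent = self.errors[-self.window:]
        if len(recent) < self.window:
            return False
        return sum(recent) / len(recent) > self.error_threshold
```

Pairing this with the `retraining_interval` setting gives two independent triggers: retrain on a fixed schedule, or earlier if validation error drifts.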