# AtomizerField - Getting Started Guide

Welcome to AtomizerField! This guide will get you up and running with neural field learning for structural optimization.

## Overview

AtomizerField transforms structural optimization from hours per design to milliseconds per design by using Graph Neural Networks to predict complete FEA field results.

### The Two-Phase Approach

```
Phase 1: Data Pipeline
NX Nastran Files → Parser → Neural Field Format

Phase 2: Neural Network Training
Neural Field Data → GNN Training → Fast Field Predictor
```

## Installation

### Prerequisites

- Python 3.8 or higher
- NX Nastran (for generating FEA data)
- NVIDIA GPU (recommended for Phase 2 training)

### Setup

```bash
# Clone or navigate to the project directory
cd Atomizer-Field

# Create a virtual environment
python -m venv atomizer_env

# Activate the environment
# On Windows:
atomizer_env\Scripts\activate
# On Linux/Mac:
source atomizer_env/bin/activate

# Install dependencies
pip install -r requirements.txt
```

## Phase 1: Parse Your FEA Data

### Step 1: Generate FEA Results in NX

1. Create your model in NX
2. Generate the mesh
3. Apply materials, BCs, and loads
4. Run **SOL 101** (Linear Static)
5. Request output: `DISPLACEMENT=ALL`, `STRESS=ALL`, `STRAIN=ALL`
6. Ensure these files are generated:
   - `model.bdf` (input deck)
   - `model.op2` (results)
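
The output requests from step 5 live in the case control section of the input deck. A minimal, illustrative fragment might look like the following (the load/constraint set IDs and `PARAM,POST,-1` for `.op2` output are assumptions; check your NX Nastran documentation for the options your model needs):

```
SOL 101
CEND
TITLE = AtomizerField training case
SUBCASE 1
    LOAD = 1
    SPC = 2
    DISPLACEMENT = ALL
    STRESS = ALL
    STRAIN = ALL
BEGIN BULK
PARAM,POST,-1
$ ... grid, element, property, material, load, and SPC cards ...
ENDDATA
```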

### Step 2: Organize Files

```bash
mkdir -p training_case_001/input
mkdir -p training_case_001/output

# Copy files
cp your_model.bdf training_case_001/input/model.bdf
cp your_model.op2 training_case_001/output/model.op2
```
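
If you have many solver runs in one folder, a short script can lay out the case directories for you. This is a sketch that assumes each run produced a matching `NAME.bdf`/`NAME.op2` pair; the target layout matches the commands above:

```python
import shutil
from pathlib import Path

def organize_cases(raw_dir, cases_dir):
    """Copy each NAME.bdf/NAME.op2 pair into cases_dir/NAME/{input,output}/."""
    raw, cases = Path(raw_dir), Path(cases_dir)
    count = 0
    for bdf in sorted(raw.glob("*.bdf")):
        op2 = bdf.with_suffix(".op2")
        if not op2.exists():
            continue  # skip runs that have no results file
        case = cases / bdf.stem
        (case / "input").mkdir(parents=True, exist_ok=True)
        (case / "output").mkdir(parents=True, exist_ok=True)
        shutil.copy2(bdf, case / "input" / "model.bdf")
        shutil.copy2(op2, case / "output" / "model.op2")
        count += 1
    return count
```

Runs without a matching `.op2` are skipped rather than copied, so a failed FEA job never produces a half-populated case directory.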

### Step 3: Parse

```bash
# Single case
python neural_field_parser.py training_case_001

# Validate
python validate_parsed_data.py training_case_001

# Batch processing (for multiple cases)
python batch_parser.py ./all_training_cases
```

**Output:**

- `neural_field_data.json` - Metadata
- `neural_field_data.h5` - Field data
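
For a quick sanity check of a parsed case, you can peek at the JSON metadata with the standard library. The exact schema is defined by `neural_field_parser.py`, so the keys shown in the comment are purely illustrative:

```python
import json
from pathlib import Path

def load_metadata(case_dir):
    """Read the parser's JSON metadata for one case and return it as a dict."""
    path = Path(case_dir) / "neural_field_data.json"
    with open(path) as f:
        return json.load(f)

# Hypothetical usage -- inspect your own files to see the real schema:
# meta = load_metadata("training_case_001")
# print(sorted(meta.keys()))
```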

See [README.md](README.md) for detailed Phase 1 documentation.

## Phase 2: Train Neural Network

### Step 1: Prepare Dataset

You need:

- **Minimum:** 50-100 parsed FEA cases
- **Recommended:** 500+ cases for production use
- **Variation:** different geometries, loads, and BCs

Organize into train/val splits (80/20):

```bash
mkdir training_data
mkdir validation_data

# Move 80% of cases to training_data/
# Move 20% of cases to validation_data/
```
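
The split can be scripted instead of done by hand. This sketch shuffles the case directories with a fixed seed and moves them into the two folders; the directory names match the commands above and the 80/20 ratio is a parameter:

```python
import random
import shutil
from pathlib import Path

def split_cases(cases_dir, train_dir, val_dir, train_frac=0.8, seed=0):
    """Randomly assign each case directory to the train or validation folder."""
    cases = sorted(p for p in Path(cases_dir).iterdir() if p.is_dir())
    random.Random(seed).shuffle(cases)  # fixed seed keeps the split reproducible
    n_train = round(len(cases) * train_frac)
    for dest, subset in ((train_dir, cases[:n_train]), (val_dir, cases[n_train:])):
        Path(dest).mkdir(parents=True, exist_ok=True)
        for case in subset:
            shutil.move(str(case), str(Path(dest) / case.name))
    return n_train, len(cases) - n_train
```

Re-running with the same seed reproduces the same split, which matters when you retrain later and want comparable validation numbers.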

### Step 2: Train Model

```bash
# Basic training
python train.py \
    --train_dir ./training_data \
    --val_dir ./validation_data \
    --epochs 100 \
    --batch_size 4

# Monitor progress
tensorboard --logdir runs/tensorboard
```

Training will:

- Create checkpoints in `runs/`
- Log metrics to TensorBoard
- Save the best model as `checkpoint_best.pt`

**Expected Time:** 2-24 hours depending on dataset size and GPU.

### Step 3: Run Inference

```bash
# Predict on a new case
python predict.py \
    --model runs/checkpoint_best.pt \
    --input test_case_001 \
    --compare

# Batch prediction
python predict.py \
    --model runs/checkpoint_best.pt \
    --input ./test_cases \
    --batch
```

**Result:** 5-50 milliseconds per prediction!

See [PHASE2_README.md](PHASE2_README.md) for detailed Phase 2 documentation.

## Typical Workflow

### For Development (Learning the System)

```bash
# 1. Parse a few test cases
python batch_parser.py ./test_cases

# 2. Quick training test (small dataset)
python train.py \
    --train_dir ./test_cases \
    --val_dir ./test_cases \
    --epochs 10 \
    --batch_size 2

# 3. Test inference
python predict.py \
    --model runs/checkpoint_best.pt \
    --input test_cases/case_001
```

### For Production (Real Optimization)

```bash
# 1. Generate a comprehensive training dataset:
#    - Vary all design parameters
#    - Include diverse loading conditions
#    - Cover the full design space

# 2. Parse all cases
python batch_parser.py ./all_fea_cases

# 3. Split into train/val
#    (use a script or organize manually)

# 4. Train the production model
python train.py \
    --train_dir ./training_data \
    --val_dir ./validation_data \
    --epochs 200 \
    --batch_size 8 \
    --hidden_dim 256 \
    --num_layers 8 \
    --loss_type physics

# 5. Validate on a held-out test set
python predict.py \
    --model runs/checkpoint_best.pt \
    --input ./test_data \
    --batch \
    --compare

# 6. Use for optimization!
```

## Key Files Reference

| File | Purpose |
|------|---------|
| **Phase 1** | |
| `neural_field_parser.py` | Parse NX Nastran results into the neural field format |
| `validate_parsed_data.py` | Validate parsed data quality |
| `batch_parser.py` | Batch-process multiple cases |
| `metadata_template.json` | Template for design parameters |
| **Phase 2** | |
| `train.py` | Train the GNN model |
| `predict.py` | Run inference with a trained model |
| `neural_models/field_predictor.py` | GNN architecture |
| `neural_models/physics_losses.py` | Loss functions |
| `neural_models/data_loader.py` | Data pipeline |
| **Documentation** | |
| `README.md` | Phase 1 detailed guide |
| `PHASE2_README.md` | Phase 2 detailed guide |
| `Context.md` | Project vision and architecture |
| `Instructions.md` | Original implementation spec |

## Common Issues & Solutions

### "No cases found"

- Check the directory structure: `case_dir/input/model.bdf` and `case_dir/output/model.op2`
- Ensure the files are named exactly `model.bdf` and `model.op2`
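
A small checker can report exactly which case directories are malformed before you re-run the parser. This sketch validates the layout described in the two points above:

```python
from pathlib import Path

# Files every case directory must contain, relative to the case root.
REQUIRED = ("input/model.bdf", "output/model.op2")

def check_cases(root):
    """Return (valid, broken) lists of (case_name, missing_files) tuples."""
    valid, broken = [], []
    for case in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        missing = [rel for rel in REQUIRED if not (case / rel).is_file()]
        (broken if missing else valid).append((case.name, missing))
    return valid, broken
```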

### "Out of memory during training"

- Reduce `--batch_size` (try 2 or 1)
- Use a smaller model: `--hidden_dim 64 --num_layers 4`
- Process larger models in chunks

### "Poor prediction accuracy"

- Need more training data (aim for 500+ cases)
- Increase model capacity: `--hidden_dim 256 --num_layers 8`
- Use the physics-informed loss: `--loss_type physics`
- Check whether the test case is within the training distribution

### "Training loss not decreasing"

- Lower the learning rate: `--lr 0.0001`
- Check data normalization (should be automatic)
- Start with the simple MSE loss: `--loss_type mse`
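
Conceptually, the physics-informed option combines the plain data term with a weighted physics-residual penalty, which is why falling back to pure MSE is a useful debugging step. This toy sketch is illustrative only; the real loss lives in `neural_models/physics_losses.py` and the residual there is computed from the predicted fields, not passed in as a number:

```python
def mse(pred, target):
    """Mean squared error over flat lists of field values."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def physics_informed_loss(pred, target, residual, weight=0.1):
    """Data term plus a weighted penalty on a physics residual.

    `residual` stands in for an equilibrium/compatibility violation
    measured on the predicted fields (hypothetical scalar here).
    """
    return mse(pred, target) + weight * residual ** 2
```

With `weight=0` this reduces to plain MSE, which is the `--loss_type mse` behaviour the troubleshooting step suggests starting from.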

## Example: End-to-End Workflow

Let's say you want to optimize a bracket design:

```bash
# 1. Generate 100 bracket variants in NX with different:
#    - Wall thicknesses (1-5mm)
#    - Rib heights (5-20mm)
#    - Hole diameters (6-12mm)
#    and run FEA on each

# 2. Parse all variants
python batch_parser.py ./bracket_variants

# 3. Split the dataset
#    training_data: 80 cases
#    validation_data: 20 cases

# 4. Train the model
python train.py \
    --train_dir ./training_data \
    --val_dir ./validation_data \
    --epochs 150 \
    --batch_size 4 \
    --output_dir ./bracket_model

# 5. Test the model (after training completes)
python predict.py \
    --model bracket_model/checkpoint_best.pt \
    --input new_bracket_design \
    --compare

# 6. Optimize: generate 10,000 design variants and predict on each
#    in seconds instead of weeks, keeping the lightest design whose
#    predicted max stress stays within limits
```
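
The selection loop in step 6 can be sketched in plain Python. Here `predict` is a hypothetical wrapper around the trained model that returns a dict of scalar results, and the keys `max_stress` and `weight` are illustrative, not part of the project's actual API:

```python
def find_best_design(designs, predict, stress_limit=300.0):
    """Return the lightest design whose predicted max stress is within limits."""
    best_design, best_weight = None, float("inf")
    for design in designs:
        results = predict(design)  # milliseconds per call with the GNN surrogate
        if results["max_stress"] < stress_limit and results["weight"] < best_weight:
            best_design, best_weight = design, results["weight"]
    return best_design
```

Because each evaluation is a surrogate call rather than an FEA run, sweeping 10,000 variants is a matter of seconds.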

## Next Steps

1. **Start Small:** parse 5-10 test cases and train a small model
2. **Validate:** compare predictions with FEA ground truth
3. **Scale Up:** gradually increase the dataset size
4. **Production:** train the final model on a comprehensive dataset
5. **Optimize:** use the trained model for rapid design exploration

## Resources

- **Phase 1 Detailed Docs:** [README.md](README.md)
- **Phase 2 Detailed Docs:** [PHASE2_README.md](PHASE2_README.md)
- **Project Context:** [Context.md](Context.md)
- **Example Data:** check the `Models/` folder

## Getting Help

If you encounter issues:

1. Check the documentation (README.md, PHASE2_README.md)
2. Verify the file structure and naming
3. Review error messages carefully
4. Test with a smaller dataset first
5. Check GPU memory and batch size

## Success Metrics

You'll know it's working when:

- ✓ The parser processes cases without errors
- ✓ Validation shows no critical issues
- ✓ Training loss decreases steadily
- ✓ Validation loss follows training loss
- ✓ Predictions are within 5-10% of FEA
- ✓ Inference takes milliseconds
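
The 5-10% target can be checked with a simple relative-error metric over predicted and FEA field values. This is a sketch; `predict.py --compare` may already report something similar:

```python
def max_relative_error(pred, truth, eps=1e-12):
    """Largest elementwise relative error between prediction and FEA truth."""
    return max(abs(p - t) / (abs(t) + eps) for p, t in zip(pred, truth))

def within_tolerance(pred, truth, tol=0.10):
    """True if every predicted value is within tol (default 10%) of FEA."""
    return max_relative_error(pred, truth) <= tol
```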

---

**Ready to revolutionize your optimization workflow?**

Start with Phase 1 parsing, then move to Phase 2 training. Within days, you'll have a neural network that predicts FEA results 1000x faster!