Atomizer/atomizer-field/Context.md
# Context Instructions for Claude 3.5 Sonnet

**Project:** AtomizerField - Neural Field Learning for Structural Optimization

## System Context

You are helping develop AtomizerField, a branch of the Atomizer optimization platform that uses neural networks to learn and predict complete FEA field results (stress, displacement, and strain at every node/element) rather than single scalar values. The aim is optimization on the order of 1000x faster, with physics understanding built in.

## Core Objective

Transform structural optimization from black-box number crunching into intelligent, field-aware design exploration by training neural networks on complete FEA data, not just maximum values.
## Technical Foundation

**Current stack:**

- FEA: NX Nastran (BDF input, OP2/F06 output)
- Python libraries: pyNastran, PyTorch, NumPy, h5py
- Parent project: Atomizer (optimization platform with dashboard)
- Data format: custom schema v1.0 for future-proof field storage

**Key innovation:**

- Instead of: parameters → FEA → max_stress (one scalar)
- We learn: parameters → neural network → complete stress field (e.g. 45,000 values)
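As a shape-level sketch of that difference (all functions and numbers here are hypothetical toy stand-ins, not the real models), the two approaches differ only in what the surrogate returns:

```python
def scalar_surrogate(params):
    """Classic approach: parameters -> one scalar (e.g. max von Mises stress)."""
    # Toy stand-in: any regressor that collapses the design to a single number.
    return sum(p * p for p in params)

def field_surrogate(params, n_points=45_000):
    """Field approach: parameters -> a stress value at every node/element."""
    # Toy stand-in: any network emitting one value per mesh point.
    base = sum(params)
    return [base * (i % 7 + 1) for i in range(n_points)]

field = field_surrogate([1.0, 2.0, 0.5])
assert len(field) == 45_000        # the complete field is preserved
peak = max(field)                  # max stress is recoverable as a reduction
```

The key property: the scalar view is always recoverable from the field view (as a reduction), but never the reverse.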
Project Structure
AtomizerField/
├── data_pipeline/
│ ├── parser/ # BDF/OP2 to neural field format
│ ├── generator/ # Automated FEA case generation
│ └── validator/ # Data quality checks
├── neural_models/
│ ├── field_predictor/ # Core neural network
│ ├── physics_layers/ # Physics-informed constraints
│ └── training/ # Training scripts
├── integration/
│ └── atomizer_bridge/ # Integration with main Atomizer
└── data/
└── training_cases/ # FEA data repository
## Current Development Phase

**Phase 1 (current): data pipeline development**

- Parsing NX Nastran files (BDF/OP2) into training data
- Creating a standardized data format
- Building automated case generation

**Next phases:**

- Phase 2: Neural network architecture
- Phase 3: Training pipeline
- Phase 4: Integration with Atomizer
- Phase 5: Production deployment
## Key Technical Concepts to Understand

- **Field learning:** we are teaching networks to predict stress/displacement at *every* point in a structure, not just max values.
- **Physics-informed:** the network must respect equilibrium, compatibility, and constitutive laws.
- **Graph neural networks:** mesh topology matters, so we use GNNs to capture how forces flow through elements.
- **Transfer learning:** knowledge from one project speeds up optimization on similar structures.
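To make the GNN point concrete, here is a minimal stdlib sketch (the element connectivity is hypothetical) that turns mesh connectivity into the node-adjacency structure a graph network would consume — two nodes are neighbors when they share an element, which is exactly where load can flow directly:

```python
from collections import defaultdict

# Hypothetical connectivity: element id -> node ids (two quad shells sharing an edge)
elements = {
    1: [1, 2, 5, 4],
    2: [2, 3, 6, 5],
}

def mesh_to_adjacency(elements):
    """Nodes of the same element are graph neighbors; edges follow the load paths."""
    adj = defaultdict(set)
    for nodes in elements.values():
        for a in nodes:
            for b in nodes:
                if a != b:
                    adj[a].add(b)
    return {n: sorted(nbrs) for n, nbrs in sorted(adj.items())}

adj = mesh_to_adjacency(elements)
# Node 2 sits on the shared edge, so it neighbors nodes from both elements:
assert adj[2] == [1, 3, 4, 5, 6]
```

A real pipeline would feed this adjacency (as an edge index) plus per-node features into a GNN library; the graph construction itself is this simple.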
## Code Style & Principles

- **Future-proof data:** all data structures are versioned and backwards compatible.
- **Modular design:** each component (parser, trainer, predictor) stands alone.
- **Validation first:** every data point is validated for physics consistency.
- **Progressive enhancement:** start simple (max stress), expand to full fields.
- **Documentation:** every function is documented with its clear physical meaning.
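A minimal sketch of the versioning principle, using only the stdlib (the field names and defaults are illustrative, not the real schema v1.0): metadata carries an explicit `schema_version`, and readers tolerate older versions instead of failing hard.

```python
import json

SCHEMA_VERSION = "1.0"

def make_case_metadata(case_id, params):
    """Metadata travels as JSON; the large field arrays live in HDF5 alongside it."""
    return {
        "schema_version": SCHEMA_VERSION,
        "case_id": case_id,
        "parameters": params,
        "units": "SI",
    }

def load_case_metadata(raw):
    """Backwards-compatible read: fill in fields that older versions lacked."""
    meta = json.loads(raw)
    # String comparison is a sketch; a real reader would parse version tuples.
    if meta.get("schema_version", "0.0") < SCHEMA_VERSION:
        meta.setdefault("units", "SI")  # hypothetical field added in v1.0
    return meta

raw = json.dumps(make_case_metadata("bracket_007", {"thickness_mm": 4.0}))
meta = load_case_metadata(raw)
assert meta["schema_version"] == "1.0" and meta["units"] == "SI"
```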
## Specific Instructions for Implementation

When implementing code for AtomizerField:

1. **Always preserve field dimensionality** - don't reduce to scalars unless explicitly needed.
2. **Use pyNastran's existing methods** - don't reinvent BDF/OP2 parsing.
3. **Store data efficiently** - HDF5 for arrays, JSON for metadata.
4. **Validate physics** - check equilibrium and energy balance.
5. **Think in fields** - treat operations as field transformations.
6. **Enable incremental learning** - new data should improve existing models.
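The "validate physics" point can be sketched with a plain stdlib check (the load case below is hypothetical): in a static solution, applied loads plus reactions must sum to approximately zero on every axis, so a case that fails this residual test should be rejected from the training set.

```python
def equilibrium_residual(applied_forces, reaction_forces):
    """Static equilibrium: applied + reaction loads must sum to ~zero per axis."""
    totals = [0.0, 0.0, 0.0]
    for fx, fy, fz in list(applied_forces) + list(reaction_forces):
        totals[0] += fx
        totals[1] += fy
        totals[2] += fz
    return totals

# Hypothetical case: 1000 N downward load, reacted at two supports
applied   = [(0.0, 0.0, -1000.0)]
reactions = [(0.0, 0.0, 600.0), (0.0, 0.0, 400.0)]
residual = equilibrium_residual(applied, reactions)
assert all(abs(r) < 1e-6 for r in residual)  # case passes the equilibrium check
```

The same idea extends to energy balance (external work vs. strain energy); the validator module is the natural home for both checks.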
## Current Task Context

The user has:

- Set up NX Nastran analyses with full field outputs
- Generated BDF (input) and OP2 (output) files
- Now needs to parse these into neural network training data

The parser must:

- Extract the complete mesh (nodes, elements, connectivity)
- Capture all boundary conditions and loads
- Store complete field results (not just max values)
- Maintain the relationships between parameters and results
- Be robust to different element types (solid, shell, beam)
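One way to pin down the parser's target, sketched with stdlib dataclasses (every field name here is illustrative, not the finalized schema): a single container that keeps mesh, boundary conditions, design parameters, and full field results together, so the parameter-to-field relationship is never lost.

```python
from dataclasses import dataclass, field

@dataclass
class FieldCase:
    """One FEA case in 'neural field' form. All names are illustrative."""
    case_id: str
    nodes: dict            # node id -> (x, y, z) in the global frame
    elements: dict         # element id -> (element type, [node ids])
    loads: list            # e.g. [("FORCE", node_id, (fx, fy, fz)), ...]
    spcs: list             # constrained node ids / DOFs
    parameters: dict       # design parameters that produced this case
    fields: dict = field(default_factory=dict)  # "stress" -> per-element values, etc.

case = FieldCase(
    case_id="bracket_007",
    nodes={1: (0.0, 0.0, 0.0), 2: (1.0, 0.0, 0.0)},
    elements={1: ("CROD", [1, 2])},
    loads=[("FORCE", 2, (0.0, 0.0, -1000.0))],
    spcs=[1],
    parameters={"thickness_mm": 4.0},
)
case.fields["stress"] = {1: 12.5}   # the full field is stored, not just the max
assert max(case.fields["stress"].values()) == 12.5
```

In practice the parser would populate such a structure from pyNastran's BDF/OP2 readers, then serialize the arrays to HDF5 and the metadata to JSON.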
## Expected Outputs

When asked about AtomizerField, provide:

- **Practical, runnable code** - no pseudocode unless requested
- **Clear data flow** - show how data moves from FEA to the network
- **Physics explanations** - why certain approaches work or fail
- **Incremental steps** - break complex tasks into testable chunks
- **Validation methods** - how to verify data and model correctness
## Common Challenges & Solutions

- **Large data:** use HDF5 chunking and compression.
- **Mixed element types:** handle each type separately, then combine for training.
- **Coordinate systems:** always transform to the global frame before storage.
- **Units:** standardize early (SI units recommended).
- **Missing data:** an OP2 may not contain all requested fields - handle this gracefully.
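The units point deserves a concrete habit: convert at the pipeline boundary, once, with an explicit table, and refuse to guess unknown units rather than silently passing them through. A minimal stdlib sketch (the factor table is an assumed starting set, not exhaustive):

```python
# Conversion factors into SI base units (assumed source units; extend as needed)
TO_SI = {
    "mm": 1e-3,   # millimetres -> metres
    "MPa": 1e6,   # megapascals -> pascals
    "N": 1.0,     # newtons are already SI
}

def to_si(value, unit):
    """Normalize a quantity before it is written to the training store."""
    try:
        return value * TO_SI[unit]
    except KeyError:
        raise ValueError(f"Unknown unit {unit!r}; refusing to guess") from None

assert to_si(250.0, "MPa") == 250e6
assert abs(to_si(4.0, "mm") - 0.004) < 1e-15
```

Doing this once in the parser means every downstream component (validator, trainer, dashboard) can assume SI without re-checking.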
## Integration Notes

AtomizerField will eventually merge into the main Atomizer:

- Keep interfaces clean and documented
- Use data formats consistent with Atomizer
- Prepare for dashboard visualization needs
- Support both standalone and integrated operation
## Key Questions to Ask

When implementing features, consider:

- Will this work with 1-million-element meshes?
- Can models be updated incrementally with new data?
- Does this respect physical laws?
- Is the data format forward-compatible?
- Can non-experts understand and use this?
## Ultimate Goal

Create a system where engineers can:

1. Run normal FEA analyses
2. Automatically build neural surrogates from the results
3. Explore millions of designs instantly
4. Understand *why* designs work, through field visualization
5. Optimize with physical insight, not blind search