chore: major repo cleanup - remove dead code and cruft
Remove ~24K lines of dead code for a lean rebuild foundation:

- Remove atomizer-field/ (neural field predictor experiment, concept archived in docs)
- Remove generated_extractors/, generated_hooks/ (legacy generator outputs)
- Remove optimization_validation/ (empty skeleton)
- Remove reports/ (superseded by optimization_engine/reporting/)
- Remove root-level stale files: DEVELOPMENT.md, INSTALL_INSTRUCTIONS.md, config.py, atomizer_paths.py, optimization_config.json, train_neural.bat, generate_training_data.py, run_training_fea.py, migrate_imports.py
- Update .gitignore for introspection caches and insight outputs

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This commit is contained in:

.gitignore (vendored): 9 lines changed
@@ -127,5 +127,14 @@ backend_stderr.log
# Auto-generated documentation (regenerate with: python -m optimization_engine.auto_doc all)
docs/generated/

# NX model introspection caches (generated)
**/_introspection_*.json
**/_introspection_cache.json
**/_temp_introspection.json
**/params.exp

# Insight outputs (generated)
**/3_insights/

# Malformed filenames (Windows path used as filename)
C:*
DEVELOPMENT.md: 619 lines deleted
@@ -1,619 +0,0 @@
# Atomizer Development Guide

**Last Updated**: 2025-11-21
**Current Phase**: Phase 3.2 - Integration Sprint + Documentation
**Status**: 🟢 Core Complete (100%) | ✅ Protocols 10/11/13 Active (100%) | 🎯 Dashboard Live (95%) | 📚 Documentation Reorganized

📘 **Quick Links**:
- [Protocol Specifications](docs/PROTOCOLS.md) - All active protocols consolidated
- [Documentation Index](docs/00_INDEX.md) - Complete documentation navigation
- [README](README.md) - Project overview and quick start

---

## Table of Contents

1. [Current Phase](#current-phase)
2. [Completed Features](#completed-features)
3. [Active Development](#active-development)
4. [Known Issues](#known-issues)
5. [Testing Status](#testing-status)
6. [Phase-by-Phase Progress](#phase-by-phase-progress)

---

## Current Phase

### Phase 3.2: Integration Sprint (🎯 TOP PRIORITY)

**Goal**: Connect LLM intelligence components to production workflow

**Timeline**: 2-4 weeks (Started 2025-11-17)

**Status**: LLM components built and tested individually (85% complete). They still need to be wired into the production runner.

📋 **Detailed Plan**: [docs/PHASE_3_2_INTEGRATION_PLAN.md](docs/PHASE_3_2_INTEGRATION_PLAN.md)

**Critical Path**:

#### Week 1: Make LLM Mode Accessible (16 hours)
- [ ] **1.1** Create unified entry point `optimization_engine/run_optimization.py` (4h)
  - Add `--llm` flag for natural language mode
  - Add `--request` parameter for natural language input
  - Support both LLM and traditional JSON modes
  - Preserve backward compatibility

- [ ] **1.2** Wire LLMOptimizationRunner to production (8h)
  - Connect LLMWorkflowAnalyzer to entry point
  - Bridge LLMOptimizationRunner → OptimizationRunner
  - Pass model updater and simulation runner callables
  - Integrate with existing hook system

- [ ] **1.3** Create minimal example (2h)
  - Create `examples/llm_mode_demo.py`
  - Show natural language → optimization results
  - Compare traditional (100 lines) vs LLM (3 lines)

- [ ] **1.4** End-to-end integration test (2h)
  - Test with simple_beam_optimization study
  - Verify extractors generated correctly
  - Validate output matches manual mode

#### Week 2: Robustness & Safety (16 hours)
- [ ] **2.1** Code validation pipeline (6h)
  - Create `optimization_engine/code_validator.py`
  - Implement syntax validation (ast.parse)
  - Implement security scanning (whitelist imports)
  - Implement test execution on example OP2
  - Add retry with LLM feedback on failure

- [ ] **2.2** Graceful fallback mechanisms (4h)
  - Wrap all LLM calls in try/except
  - Provide clear error messages
  - Offer fallback to manual mode
  - Never crash on LLM failure

- [ ] **2.3** LLM audit trail (3h)
  - Create `optimization_engine/llm_audit.py`
  - Log all LLM requests and responses
  - Log generated code with prompts
  - Create `llm_audit.json` in study output

- [ ] **2.4** Failure scenario testing (3h)
  - Test invalid natural language request
  - Test LLM unavailable
  - Test generated code syntax errors
  - Test validation failures
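To make the validation step concrete, here is a minimal sketch of what the checks planned in 2.1 could look like. This is an illustration, not the actual `code_validator.py`; the whitelist contents and function name are assumptions:

```python
import ast

# Illustrative whitelist; the real one would cover the project's approved libraries.
ALLOWED_IMPORTS = {"math", "json", "numpy", "pyNastran"}

def validate_code(source: str) -> list:
    """Return a list of problems; an empty list means the code passed."""
    problems = []
    # Step 1: syntax validation via ast.parse
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc}"]
    # Step 2: security scan - only whitelisted top-level imports allowed
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [(node.module or "").split(".")[0]]
        else:
            continue
        for name in names:
            if name not in ALLOWED_IMPORTS:
                problems.append(f"disallowed import: {name}")
    return problems
```

The returned `problems` list is also what a retry loop could feed back to the LLM as correction feedback (item "retry with LLM feedback on failure").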

#### Week 3: Learning System (12 hours)
- [ ] **3.1** Knowledge base implementation (4h)
  - Create `optimization_engine/knowledge_base.py`
  - Implement `save_session()` - Save successful workflows
  - Implement `search_templates()` - Find similar patterns
  - Add confidence scoring

- [ ] **3.2** Template extraction (4h)
  - Extract reusable patterns from generated code
  - Parameterize variable parts
  - Save templates with usage examples
  - Implement template application to new requests

- [ ] **3.3** ResearchAgent integration (4h)
  - Complete ResearchAgent implementation
  - Integrate into ExtractorOrchestrator error handling
  - Add user example collection workflow
  - Save learned knowledge to knowledge base
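The `save_session()` / `search_templates()` pair from 3.1 could be prototyped with a plain JSON store and keyword-overlap confidence scoring. Everything below is a hypothetical sketch, not the real `knowledge_base.py`:

```python
import json
from pathlib import Path

class KnowledgeBase:
    """Minimal sketch: store successful sessions as JSON, search by keyword overlap."""

    def __init__(self, path):
        self.path = Path(path)
        self.sessions = json.loads(self.path.read_text()) if self.path.exists() else []

    def save_session(self, request: str, code: str) -> None:
        """Persist one successful request -> generated-code pair."""
        self.sessions.append({"request": request, "code": code})
        self.path.write_text(json.dumps(self.sessions, indent=2))

    def search_templates(self, request: str):
        """Rank stored sessions by word overlap; confidence is in [0, 1]."""
        words = set(request.lower().split())
        scored = []
        for session in self.sessions:
            overlap = words & set(session["request"].lower().split())
            confidence = len(overlap) / max(len(words), 1)
            if confidence > 0:
                scored.append((confidence, session))
        return sorted(scored, key=lambda pair: pair[0], reverse=True)
```

A production version would likely use embeddings rather than word overlap, but the interface stays the same.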

#### Week 4: Documentation & Discoverability (8 hours)
- [ ] **4.1** Update README (2h)
  - Add "🤖 LLM-Powered Mode" section
  - Show example command with natural language
  - Link to detailed docs

- [ ] **4.2** Create LLM mode documentation (3h)
  - Create `docs/LLM_MODE.md`
  - Explain how LLM mode works
  - Provide usage examples
  - Add troubleshooting guide

- [ ] **4.3** Create demo video/GIF (1h)
  - Record terminal session
  - Show before/after (100 lines → 3 lines)
  - Create animated GIF for README

- [ ] **4.4** Update all planning docs (2h)
  - Update DEVELOPMENT.md status
  - Update DEVELOPMENT_GUIDANCE.md (80-90% → 90-95%)
  - Mark Phase 3.2 as ✅ Complete

---

## Completed Features

### ✅ Live Dashboard System (Completed 2025-11-21)

#### Backend (FastAPI + WebSocket)
- [x] **FastAPI Backend** ([atomizer-dashboard/backend/](atomizer-dashboard/backend/))
  - REST API endpoints for study management
  - WebSocket streaming with file watching (Watchdog)
  - Real-time updates (<100ms latency)
  - CORS configured for local development

- [x] **REST API Endpoints** ([backend/api/routes/optimization.py](atomizer-dashboard/backend/api/routes/optimization.py))
  - `GET /api/optimization/studies` - List all studies
  - `GET /api/optimization/studies/{id}/status` - Get study status
  - `GET /api/optimization/studies/{id}/history` - Get trial history
  - `GET /api/optimization/studies/{id}/pruning` - Get pruning diagnostics

- [x] **WebSocket Streaming** ([backend/api/websocket/optimization_stream.py](atomizer-dashboard/backend/api/websocket/optimization_stream.py))
  - File watching on `optimization_history_incremental.json`
  - Real-time trial updates via WebSocket
  - Pruning alerts and progress updates
  - Automatic observer lifecycle management
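The handler behind `GET /api/optimization/studies` is essentially a directory scan. A stdlib-only sketch of that scan follows; the real route is a FastAPI endpoint, and the folder-layout assumptions here (a study counts as "having results" when it contains `optimization_results/`) are illustrative:

```python
from pathlib import Path

def list_studies(studies_root) -> list:
    """Return one record per study folder, noting whether results exist yet."""
    studies = []
    for study_dir in sorted(p for p in Path(studies_root).iterdir() if p.is_dir()):
        results = study_dir / "optimization_results"
        studies.append({
            "id": study_dir.name,          # folder name doubles as study id
            "has_results": results.is_dir(),
        })
    return studies
```

In the FastAPI version this return value would simply be the JSON response body.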

#### Frontend (HTML + Chart.js)
- [x] **Enhanced Live Dashboard** ([atomizer-dashboard/dashboard-enhanced.html](atomizer-dashboard/dashboard-enhanced.html))
  - Real-time WebSocket updates
  - Interactive convergence chart (Chart.js)
  - Parameter space scatter plot
  - Pruning alerts (toast notifications)
  - Data export (JSON/CSV)
  - Study auto-discovery and selection
  - Metric dashboard (trials, best value, pruned count)

#### React Frontend (In Progress)
- [x] **Project Configuration** ([atomizer-dashboard/frontend/](atomizer-dashboard/frontend/))
  - React 18 + Vite 5 + TypeScript 5.2
  - TailwindCSS 3.3 for styling
  - Recharts 2.10 for charts
  - Complete build configuration

- [x] **TypeScript Types** ([frontend/src/types/](atomizer-dashboard/frontend/src/types/))
  - Complete type definitions for API data
  - WebSocket message types
  - Chart data structures

- [x] **Custom Hooks** ([frontend/src/hooks/useWebSocket.ts](atomizer-dashboard/frontend/src/hooks/useWebSocket.ts))
  - WebSocket connection management
  - Auto-reconnection with exponential backoff
  - Type-safe message routing

- [x] **Reusable Components** ([frontend/src/components/](atomizer-dashboard/frontend/src/components/))
  - Card, MetricCard, Badge, StudyCard components
  - TailwindCSS styling with dark theme

- [ ] **Dashboard Page** (Pending manual completion)
  - Need to run `npm install`
  - Create main.tsx, App.tsx, Dashboard.tsx
  - Integrate Recharts for charts
  - Test end-to-end
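The auto-reconnection mentioned for `useWebSocket.ts` relies on a capped exponential backoff schedule. The schedule itself is language-agnostic; here is a sketch in Python (the base, cap, and jitter constants are illustrative, not the actual frontend values):

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0, jitter: float = 0.1) -> float:
    """Seconds to wait before reconnect attempt `attempt` (0-based).

    Doubles each attempt, caps at `cap`, and adds proportional random jitter
    so many clients do not reconnect in lockstep.
    """
    delay = min(base * (2 ** attempt), cap)
    return delay + random.uniform(0, jitter * delay)
```

Attempt 0 waits about half a second; by attempt 6 the delay is pinned at the cap plus jitter.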

#### Documentation
- [x] **Dashboard Master Plan** ([docs/DASHBOARD_MASTER_PLAN.md](docs/DASHBOARD_MASTER_PLAN.md))
  - Complete 3-page architecture (Configurator, Live Dashboard, Results Viewer)
  - Tech stack recommendations
  - Implementation phases

- [x] **Implementation Status** ([docs/DASHBOARD_IMPLEMENTATION_STATUS.md](docs/DASHBOARD_IMPLEMENTATION_STATUS.md))
  - Current progress tracking
  - Testing instructions
  - Next steps

- [x] **React Implementation Guide** ([docs/DASHBOARD_REACT_IMPLEMENTATION.md](docs/DASHBOARD_REACT_IMPLEMENTATION.md))
  - Complete templates for remaining components
  - Recharts integration examples
  - Troubleshooting guide

- [x] **Session Summary** ([docs/DASHBOARD_SESSION_SUMMARY.md](docs/DASHBOARD_SESSION_SUMMARY.md))
  - Features demonstrated
  - How to use the dashboard
  - Architecture explanation

### ✅ Phase 1: Plugin System & Infrastructure (Completed 2025-01-16)

#### Core Architecture
- [x] **Hook Manager** ([optimization_engine/plugins/hook_manager.py](optimization_engine/plugins/hook_manager.py))
  - Hook registration with priority-based execution
  - Auto-discovery from plugin directories
  - Context passing to all hooks
  - Execution history tracking

- [x] **Lifecycle Hooks**
  - `pre_solve`: Execute before solver launch
  - `post_solve`: Execute after solve, before extraction
  - `post_extraction`: Execute after result extraction
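The priority-ordered lifecycle execution described above can be sketched in a few lines. This is a simplified illustration of the idea, not the real `hook_manager.py` (auto-discovery from plugin directories is omitted):

```python
from collections import defaultdict

class HookManager:
    """Minimal sketch: priority-ordered lifecycle hooks with shared context passing."""

    def __init__(self):
        self._hooks = defaultdict(list)   # lifecycle point -> [(priority, fn)]
        self.history = []                 # execution history tracking

    def register(self, point: str, fn, priority: int = 100):
        """Register `fn` at a lifecycle point; lower priority runs first."""
        self._hooks[point].append((priority, fn))
        self._hooks[point].sort(key=lambda pair: pair[0])

    def run(self, point: str, context: dict):
        """Execute every hook at `point`, passing the shared mutable context."""
        for _priority, fn in self._hooks[point]:
            fn(context)
            self.history.append((point, fn.__name__))

manager = HookManager()
manager.register("post_solve", lambda ctx: ctx.setdefault("log", []).append("solved"))
manager.register("pre_solve", lambda ctx: ctx.setdefault("log", []).append("starting"), priority=10)
```

Because every hook receives the same `context` dict, a `pre_solve` hook can stash data that a `post_extraction` hook reads later in the same trial.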

#### Logging Infrastructure
- [x] **Detailed Trial Logs** ([detailed_logger.py](optimization_engine/plugins/pre_solve/detailed_logger.py))
  - Per-trial log files in `optimization_results/trial_logs/`
  - Complete iteration trace with timestamps
  - Design variables, configuration, timeline
  - Extracted results and constraint evaluations

- [x] **High-Level Optimization Log** ([optimization_logger.py](optimization_engine/plugins/pre_solve/optimization_logger.py))
  - `optimization.log` file tracking overall progress
  - Configuration summary header
  - Compact START/COMPLETE entries per trial
  - Easy-to-scan format for monitoring

- [x] **Result Appenders**
  - [log_solve_complete.py](optimization_engine/plugins/post_solve/log_solve_complete.py) - Appends solve completion to trial logs
  - [log_results.py](optimization_engine/plugins/post_extraction/log_results.py) - Appends extracted results to trial logs
  - [optimization_logger_results.py](optimization_engine/plugins/post_extraction/optimization_logger_results.py) - Appends results to optimization.log

#### Project Organization
- [x] **Studies Structure** ([studies/](studies/))
  - Standardized folder layout with `model/`, `optimization_results/`, `analysis/`
  - Comprehensive documentation in [studies/README.md](studies/README.md)
  - Example study: [bracket_stress_minimization/](studies/bracket_stress_minimization/)
  - Template structure for future studies

- [x] **Path Resolution** ([atomizer_paths.py](atomizer_paths.py))
  - Intelligent project root detection using marker files
  - Helper functions: `root()`, `optimization_engine()`, `studies()`, `tests()`
  - `ensure_imports()` for robust module imports
  - Works regardless of script location
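The marker-file root detection in `atomizer_paths.py` amounts to an upward directory walk. A sketch of the idea, assuming illustrative marker names (the actual markers used by the project may differ):

```python
from pathlib import Path

# Illustrative markers; the project root is whichever ancestor contains one.
MARKERS = ("optimization_engine", ".git")

def find_root(start) -> Path:
    """Walk upward from `start` until a directory containing a marker is found."""
    current = Path(start).resolve()
    for candidate in (current, *current.parents):
        if any((candidate / marker).exists() for marker in MARKERS):
            return candidate
    raise FileNotFoundError(f"no project root found above {start}")
```

This is why the helpers work regardless of script location: every script resolves paths relative to the detected root instead of relative to itself.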

#### Testing
- [x] **Hook Validation Test** ([test_hooks_with_bracket.py](tests/test_hooks_with_bracket.py))
  - Verifies hook loading and execution
  - Tests 3 trials with dummy data
  - Checks hook execution history

- [x] **Integration Tests**
  - [run_5trial_test.py](tests/run_5trial_test.py) - Quick 5-trial optimization
  - [test_journal_optimization.py](tests/test_journal_optimization.py) - Full optimization test

#### Runner Enhancements
- [x] **Context Passing** ([runner.py:332,365,412](optimization_engine/runner.py))
  - `output_dir` passed to all hook contexts
  - Trial number, design variables, extracted results
  - Configuration dictionary available to hooks

### ✅ Core Engine (Pre-Phase 1)
- [x] Optuna integration with TPE sampler
- [x] Multi-objective optimization support
- [x] NX journal execution ([nx_solver.py](optimization_engine/nx_solver.py))
- [x] Expression updates ([nx_updater.py](optimization_engine/nx_updater.py))
- [x] OP2 result extraction (stress, displacement)
- [x] Study management with resume capability
- [x] Web dashboard (real-time monitoring)
- [x] Precision control (4-decimal rounding)

---

## Active Development

### In Progress - Dashboard (High Priority)
- [x] Backend API complete (FastAPI + WebSocket)
- [x] HTML dashboard with Chart.js complete
- [x] React project structure and configuration complete
- [ ] **Complete React frontend** (Awaiting manual npm install)
  - [ ] Run `npm install` in frontend directory
  - [ ] Create main.tsx and App.tsx
  - [ ] Create Dashboard.tsx with Recharts
  - [ ] Test end-to-end with live optimization

### Up Next - Dashboard (Next Session)
- [ ] Study Configurator page (React)
- [ ] Results Report Viewer page (React)
- [ ] LLM chat interface integration (future)
- [ ] Docker deployment configuration

### In Progress - Phase 3.2 Integration
- [ ] Feature registry creation (Phase 2, Week 1)
- [ ] Claude skill definition (Phase 2, Week 1)

### Up Next (Phase 2, Week 2)
- [ ] Natural language parser
- [ ] Intent classification system
- [ ] Entity extraction for optimization parameters
- [ ] Conversational workflow manager
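Before an LLM-backed version exists, intent classification can be prototyped with plain keyword matching. The intent names and keyword sets below are purely illustrative assumptions, not the planned taxonomy:

```python
# Hypothetical intent -> keyword mapping for a first-pass classifier.
INTENTS = {
    "create_study": {"create", "new", "study"},
    "run_optimization": {"run", "start", "optimize", "trials"},
    "report_results": {"report", "results", "summary", "show"},
}

def classify_intent(request: str) -> str:
    """Pick the intent whose keywords overlap the request most; 'unknown' if none match."""
    words = set(request.lower().replace(",", " ").split())
    scores = {intent: len(words & keywords) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```

Entity extraction (parameter names, bounds, trial counts) would then run only on requests whose intent is recognized.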

### Backlog (Phase 3+)
- [ ] Custom function generator (RSS, weighted objectives)
- [ ] Journal script generator
- [ ] Code validation pipeline
- [ ] Result analyzer with statistical analysis
- [ ] Surrogate quality checker
- [ ] HTML/PDF report generator

---

## Known Issues

### Critical
- None currently

### Minor
- [ ] `.claude/settings.local.json` modified during development (contains user-specific settings)
- [ ] Some old bash background processes still running from previous tests

### Documentation
- [ ] Need to add examples of custom hooks to studies/README.md
- [ ] Missing API documentation for hook_manager methods
- [ ] No developer guide for creating new plugins

---

## Testing Status

### Automated Tests
- ✅ **Hook system** - `test_hooks_with_bracket.py` passing
- ✅ **5-trial integration** - `run_5trial_test.py` working
- ✅ **Full optimization** - `test_journal_optimization.py` functional
- ⏳ **Unit tests** - Still to be created for individual modules
- ⏳ **CI/CD pipeline** - Not yet set up

### Manual Testing
- ✅ Bracket optimization (50 trials)
- ✅ Log file generation in correct locations
- ✅ Hook execution at all lifecycle points
- ✅ Path resolution across different script locations
- ✅ **Dashboard backend** - REST API and WebSocket tested successfully
- ✅ **HTML dashboard** - Live updates working with Chart.js
- ⏳ **React dashboard** - Pending npm install and completion
- ⏳ Resume functionality with config validation

### Test Coverage
- Hook manager: ~80% (core functionality tested)
- Logging plugins: 100% (tested via integration tests)
- Path resolution: 100% (tested in all scripts)
- Result extractors: ~70% (basic tests exist)
- **Dashboard backend**: ~90% (REST endpoints and WebSocket tested)
- **Dashboard frontend**: ~60% (HTML version tested, React pending)
- Overall: ~65% estimated

---

## Phase-by-Phase Progress

### Phase 1: Plugin System ✅ (100% Complete)

**Completed** (2025-01-16):
- [x] Hook system for optimization lifecycle
- [x] Plugin auto-discovery and registration
- [x] Hook manager with priority-based execution
- [x] Detailed per-trial logs (`trial_logs/`)
- [x] High-level optimization log (`optimization.log`)
- [x] Context passing system for hooks
- [x] Studies folder structure
- [x] Comprehensive studies documentation
- [x] Model file organization (`model/` folder)
- [x] Intelligent path resolution
- [x] Test suite for hook system

**Deferred to Future Phases**:
- Feature registry → Phase 2 (with LLM interface)
- `pre_mesh` and `post_mesh` hooks → Future (not needed for current workflow)
- Custom objective/constraint registration → Phase 3 (Code Generation)

---

### Phase 2: LLM Integration 🟡 (0% Complete)

**Target**: 2 weeks (Started 2025-01-16)

#### Week 1 Todos (Feature Registry & Claude Skill)
- [ ] Create `optimization_engine/feature_registry.json`
- [ ] Extract all current capabilities
- [ ] Draft `.claude/skills/atomizer.md`
- [ ] Test LLM's ability to navigate codebase

#### Week 2 Todos (Natural Language Interface)
- [ ] Implement intent classifier
- [ ] Build entity extractor
- [ ] Create workflow manager
- [ ] Test end-to-end: "Create a stress minimization study"

**Success Criteria**:
- [ ] LLM can create optimization from natural language in <5 turns
- [ ] 90% of user requests understood correctly
- [ ] Zero manual JSON editing required

---

### Phase 3: Code Generation ⏳ (Not Started)

**Target**: 3 weeks

**Key Deliverables**:
- [ ] Custom function generator
  - [ ] RSS (Root Sum Square) template
  - [ ] Weighted objectives template
  - [ ] Custom constraints template
- [ ] Journal script generator
- [ ] Code validation pipeline
- [ ] Safe execution environment

**Success Criteria**:
- [ ] LLM generates 10+ custom functions with zero errors
- [ ] All generated code passes safety validation
- [ ] Users save 50% time vs. manual coding

---

### Phase 4: Analysis & Decision Support ⏳ (Not Started)

**Target**: 3 weeks

**Key Deliverables**:
- [ ] Result analyzer (convergence, sensitivity, outliers)
- [ ] Surrogate model quality checker (R², CV score, confidence intervals)
- [ ] Decision assistant (trade-offs, what-if analysis, recommendations)

**Success Criteria**:
- [ ] Surrogate quality detection 95% accurate
- [ ] Recommendations lead to 30% faster convergence
- [ ] Users report higher confidence in results

---

### Phase 5: Automated Reporting ⏳ (Not Started)

**Target**: 2 weeks

**Key Deliverables**:
- [ ] Report generator with Jinja2 templates
- [ ] Multi-format export (HTML, PDF, Markdown, JSON)
- [ ] LLM-written narrative explanations

**Success Criteria**:
- [ ] Reports generated in <30 seconds
- [ ] Narrative quality rated 4/5 by engineers
- [ ] 80% of reports used without manual editing

---

### Phase 6: NX MCP Enhancement ⏳ (Not Started)

**Target**: 4 weeks

**Key Deliverables**:
- [ ] NX documentation MCP server
- [ ] Advanced NX operations library
- [ ] Feature bank with 50+ pre-built operations

**Success Criteria**:
- [ ] NX MCP answers 95% of API questions correctly
- [ ] Feature bank covers 80% of common workflows
- [ ] Users write 50% less manual journal code

---

### Phase 7: Self-Improving System ⏳ (Not Started)

**Target**: 4 weeks

**Key Deliverables**:
- [ ] Feature learning system
- [ ] Best practices database
- [ ] Continuous documentation generation

**Success Criteria**:
- [ ] 20+ user-contributed features in library
- [ ] Pattern recognition identifies 10+ best practices
- [ ] Documentation auto-updates with zero manual effort

---

## Development Commands

### Running Dashboard
```bash
# Start backend server
cd atomizer-dashboard/backend
python -m uvicorn api.main:app --reload --host 0.0.0.0 --port 8000

# Access HTML dashboard (current)
# Open browser: http://localhost:8000

# Start React frontend (when ready)
cd atomizer-dashboard/frontend
npm install   # First time only
npm run dev   # Starts on http://localhost:3000
```

### Running Tests
```bash
# Hook validation (3 trials, fast)
python tests/test_hooks_with_bracket.py

# Quick integration test (5 trials)
python tests/run_5trial_test.py

# Full optimization test
python tests/test_journal_optimization.py
```

### Code Quality
```bash
# Run linter (when available)
# pylint optimization_engine/

# Run type checker (when available)
# mypy optimization_engine/

# Run all tests (when test suite is complete)
# pytest tests/
```

### Git Workflow
```bash
# Stage all changes
git add .

# Commit with conventional commits format
git commit -m "feat: description"      # New feature
git commit -m "fix: description"       # Bug fix
git commit -m "docs: description"      # Documentation
git commit -m "test: description"      # Tests
git commit -m "refactor: description"  # Code refactoring

# Push to GitHub
git push origin main
```

---

## Documentation

### For Developers
- [DEVELOPMENT_ROADMAP.md](DEVELOPMENT_ROADMAP.md) - Strategic vision and phases
- [studies/README.md](studies/README.md) - Studies folder organization
- [CHANGELOG.md](CHANGELOG.md) - Version history

### Dashboard Documentation
- [docs/DASHBOARD_MASTER_PLAN.md](docs/DASHBOARD_MASTER_PLAN.md) - Complete architecture blueprint
- [docs/DASHBOARD_IMPLEMENTATION_STATUS.md](docs/DASHBOARD_IMPLEMENTATION_STATUS.md) - Current progress
- [docs/DASHBOARD_REACT_IMPLEMENTATION.md](docs/DASHBOARD_REACT_IMPLEMENTATION.md) - React implementation guide
- [docs/DASHBOARD_SESSION_SUMMARY.md](docs/DASHBOARD_SESSION_SUMMARY.md) - Session summary
- [atomizer-dashboard/README.md](atomizer-dashboard/README.md) - Dashboard quick start
- [atomizer-dashboard/backend/README.md](atomizer-dashboard/backend/README.md) - Backend API docs
- [atomizer-dashboard/frontend/README.md](atomizer-dashboard/frontend/README.md) - Frontend setup guide

### For Users
- [README.md](README.md) - Project overview and quick start
- [docs/INDEX.md](docs/INDEX.md) - Complete documentation index
- [docs/](docs/) - Additional documentation

---

## Notes

### Architecture Decisions
- **Hook system**: Chose priority-based execution to allow precise control of plugin order
- **Path resolution**: Used marker files instead of environment variables for simplicity
- **Logging**: Two-tier system (detailed trial logs + high-level optimization.log) for different use cases

### Performance Considerations
- Hook execution adds <1s overhead per trial (acceptable for FEA simulations)
- Path resolution caching could improve startup time (future optimization)
- Log file sizes grow linearly with trials (~10KB per trial)

### Future Considerations
- Consider moving to structured logging (JSON) for easier parsing
- May need database for storing hook execution history (currently in-memory)
- Dashboard integration will require WebSocket for real-time log streaming

---

**Last Updated**: 2025-11-21
**Maintained by**: Antoine Polvé (antoine@atomaste.com)
**Repository**: [GitHub - Atomizer](https://github.com/yourusername/Atomizer)

---

## Recent Updates (November 21, 2025)

### Dashboard System Implementation ✅
- **Backend**: FastAPI + WebSocket with real-time file watching complete
- **HTML Dashboard**: Functional dashboard with Chart.js, data export, pruning alerts
- **React Setup**: Complete project configuration, types, hooks, components
- **Documentation**: 5 comprehensive markdown documents covering architecture, implementation, and usage

### Next Immediate Steps
1. Run `npm install` in `atomizer-dashboard/frontend`
2. Create `main.tsx`, `App.tsx`, and `Dashboard.tsx` using provided templates
3. Test React dashboard with live optimization
4. Build Study Configurator page (next major feature)
@@ -1,63 +0,0 @@
# Atomizer Installation Guide

## Step 1: Install Miniconda (Recommended)

1. Download Miniconda from: https://docs.conda.io/en/latest/miniconda.html
   - Choose: **Miniconda3 Windows 64-bit**

2. Run the installer:
   - Check "Add Miniconda3 to my PATH environment variable"
   - Check "Register Miniconda3 as my default Python"

3. Restart your terminal/VSCode after installation

## Step 2: Create Atomizer Environment

Open **Anaconda Prompt** (or any terminal after restart) and run:

```bash
cd C:\Users\Antoine\Atomizer
conda env create -f environment.yml
conda activate atomizer
```

## Step 3: Install PyTorch with GPU Support (Optional but Recommended)

If you have an NVIDIA GPU:

```bash
conda activate atomizer
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install torch-geometric
```

## Step 4: Verify Installation

```bash
conda activate atomizer
python -c "import torch; import optuna; import pyNastran; print('All imports OK!')"
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"
```

## Step 5: Train Neural Network

```bash
conda activate atomizer
cd C:\Users\Antoine\Atomizer\atomizer-field
python train_parametric.py --train_dir ../atomizer_field_training_data/bracket_stiffness_optimization_atomizerfield --epochs 100 --output_dir runs/bracket_model
```

## Quick Commands Reference

```bash
# Activate environment (do this every time you open a new terminal)
conda activate atomizer

# Train neural network
cd C:\Users\Antoine\Atomizer\atomizer-field
python train_parametric.py --train_dir ../atomizer_field_training_data/bracket_stiffness_optimization_atomizerfield --epochs 100

# Run optimization with neural acceleration
cd C:\Users\Antoine\Atomizer\studies\bracket_stiffness_optimization_atomizerfield
python run_optimization.py --run --trials 100 --enable-nn
```
atomizer-field/.gitignore (vendored): 56 lines deleted
@@ -1,56 +0,0 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
env/
venv/
ENV/
*.egg-info/
dist/
build/

# Jupyter Notebooks
.ipynb_checkpoints/
*.ipynb

# IDEs
.vscode/
.idea/
*.swp
*.swo
*~

# OS
.DS_Store
Thumbs.db

# Data files (large)
*.op2
*.bdf
*.dat
*.f06
*.pch
*.h5
*.hdf5

# Training data
training_data/
checkpoints/
runs/
logs/

# Test outputs
test_case_*/
visualization_images/

# Temporary files
*.tmp
*.log
*.bak
*.orig

# Environment
atomizer_env/
.conda/
@@ -1,635 +0,0 @@
# AtomizerField - Complete Status Report

**Date:** November 24, 2025
**Version:** 1.0
**Status:** ✅ Core System Operational, Unit Issues Resolved

---

## Executive Summary

**AtomizerField** is a neural field learning system that replaces traditional FEA simulations with graph neural networks, providing **1000× faster predictions** for structural optimization.

### Current Status
- ✅ **Core pipeline working**: BDF/OP2 → Neural format → GNN inference
- ✅ **Test case validated**: Simple Beam (5,179 nodes, 4,866 elements)
- ✅ **Unit system understood**: MN-MM system (kPa stress, N forces, mm length)
- ⚠️ **Not yet trained**: Neural network has random weights
- 🔜 **Next step**: Generate training data and train model

---

## What AtomizerField Does

### 1. Data Pipeline ✅ WORKING

**Purpose:** Convert Nastran FEA results into neural network training data

**Input:**
- BDF file (geometry, materials, loads, BCs)
- OP2 file (FEA results: displacement, stress, reactions)

**Output:**
- JSON metadata (mesh, materials, loads, statistics)
- HDF5 arrays (coordinates, displacement, stress fields)

**What's Extracted:**
- ✅ Mesh: 5,179 nodes, 4,866 CQUAD4 shell elements
- ✅ Materials: Young's modulus, Poisson's ratio, density
- ✅ Boundary conditions: SPCs, MPCs (if present)
- ✅ Loads: 35 point forces with directions
- ✅ Displacement field: 6 DOF per node (Tx, Ty, Tz, Rx, Ry, Rz)
- ✅ Stress field: 8 components per element (σxx, σyy, τxy, principals, von Mises)
- ✅ Reaction forces: 6 DOF per node

**Performance:**
- Parse time: 1.27 seconds
- Data size: JSON 1.7 MB, HDF5 546 KB

### 2. Graph Neural Network ✅ ARCHITECTURE WORKING

**Purpose:** Learn FEA physics to predict displacement/stress from geometry/loads

**Architecture:**
- Type: Graph Neural Network (PyTorch Geometric)
- Parameters: 128,589 (small model for testing)
- Layers: 6 message passing layers
- Hidden dimension: 64

**Input Features:**
- Node features (12D): position (3D), BCs (6 DOF), loads (3D)
- Edge features (5D): E, ν, ρ, G, α (material properties)

**Output Predictions:**
- Displacement: (N_nodes, 6) - full 6 DOF per node
- Stress: (N_elements, 6) - stress tensor components
- Von Mises: (N_elements,) - scalar stress measure
|
||||
|
||||
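The 12-D node feature layout above is a plain array assembly. A hedged sketch (the helper `build_node_features` and the toy values are illustrative, not part of the codebase):

```python
import numpy as np

def build_node_features(coords, bc_mask, loads):
    """Assemble the 12-D node feature matrix: position (3) + BC mask (6) + loads (3).

    coords:  (N, 3) node positions
    bc_mask: (N, 6) 0/1 mask of constrained DOFs
    loads:   (N, 3) applied force components
    """
    return np.concatenate([coords, bc_mask, loads], axis=1)

# Toy example with 2 nodes: node 0 fully fixed, node 1 loaded in -Y
coords = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
bc_mask = np.zeros((2, 6))
bc_mask[0] = 1.0
loads = np.array([[0.0, 0.0, 0.0], [0.0, -100.0, 0.0]])

x = build_node_features(coords, bc_mask, loads)
print(x.shape)  # (2, 12)
```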
**Current State:**
- ✅ Model instantiates successfully
- ✅ Forward pass works
- ✅ Inference time: 95.94 ms (< 100 ms target)
- ⚠️ Predictions are random (untrained weights)

### 3. Visualization ✅ WORKING

**Purpose:** Visualize mesh, displacement, and stress fields

**Capabilities:**
- ✅ 3D mesh rendering (nodes + elements)
- ✅ Displacement visualization (original + deformed)
- ✅ Stress field coloring (von Mises)
- ✅ Automatic report generation (markdown + images)

**Generated Outputs:**
- mesh.png (227 KB)
- displacement.png (335 KB)
- stress.png (215 KB)
- Markdown report with embedded images

### 4. Unit System ✅ UNDERSTOOD

**Nastran UNITSYS: MN-MM**

Despite the name, the actual units are:
- Length: **mm** (millimeter)
- Force: **N** (Newton) - NOT MegaNewton!
- Stress: **kPa** (kilopascal) - NOT MPa!
- Mass: **kg** (kilogram)
- Young's modulus: **kPa** (200,000,000 kPa = 200 GPa for steel)

**Validated Values:**
- Max stress: 117,000 kPa = **117 MPa** ✓ (reasonable for steel)
- Max displacement: **19.5 mm** ✓
- Applied forces: **~2.73 MN each** ✓ (large beam structure)
- Young's modulus: 200,000,000 kPa = **200 GPa** ✓ (steel)
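Given that characterization, downstream reporting only needs two scalar conversions. A minimal sketch (helper names are illustrative, not from the parser):

```python
def kpa_to_mpa(stress_kpa):
    """Stress stored in kPa -> MPa for reporting."""
    return stress_kpa / 1000.0

def n_to_mn(force_n):
    """Forces stored in N -> MN for reporting."""
    return force_n / 1e6

print(kpa_to_mpa(117_000))  # 117.0 (matches the 117 MPa max stress above)
print(n_to_mn(2_730_000))   # 2.73 (matches the ~2.73 MN applied forces)
```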

### 5. Direction Handling ✅ FULLY VECTORIAL

**All fields preserve directional information:**

**Displacement (6 DOF):**
```
[Tx, Ty, Tz, Rx, Ry, Rz]
```
- Stored as (5179, 6) array
- Full translation + rotation at each node

**Forces/Reactions (6 DOF):**
```
[Fx, Fy, Fz, Mx, My, Mz]
```
- Stored as (5179, 6) array
- Full force + moment vectors

**Stress Tensor (shell elements):**
```
[fiber_distance, σxx, σyy, τxy, angle, σ_major, σ_minor, von_mises]
```
- Stored as (9732, 8) array
- Full stress state for each element (2 per CQUAD4)

**Coordinate System:**
- Global XYZ coordinates
- Node positions: (5179, 3) array
- Element connectivity preserves topology

**Neural Network:**
- Learns directional relationships through graph structure
- Message passing propagates forces through mesh topology
- Predicts full displacement vectors and stress tensors

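Working with those arrays is plain NumPy slicing. A hedged sketch, with zero-filled placeholders standing in for the real HDF5 datasets (dataset names and shapes follow the layout above):

```python
import numpy as np

# Placeholder arrays matching the documented shapes.
n_nodes = 5179
displacement = np.zeros((n_nodes, 6))   # [Tx, Ty, Tz, Rx, Ry, Rz]
reactions = np.zeros((n_nodes, 6))      # [Fx, Fy, Fz, Mx, My, Mz]
stress = np.zeros((9732, 8))            # [fiber, σxx, σyy, τxy, angle, σ1, σ2, vm]

# Translational displacement magnitude per node (first 3 DOFs):
trans_mag = np.linalg.norm(displacement[:, :3], axis=1)

# Von Mises is the last component of each element record:
von_mises = stress[:, -1]

print(trans_mag.shape, von_mises.shape)  # (5179,) (9732,)
```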
---

## What's Been Tested

### ✅ Smoke Tests (5/5 PASS)

1. **Model Creation**: GNN instantiates with 128,589 parameters
2. **Forward Pass**: Processes dummy graph data
3. **Loss Functions**: All 4 loss types compute correctly
4. **Batch Processing**: Handles batched data
5. **Gradient Flow**: Backpropagation works

**Status:** All passing, system fundamentally sound

### ✅ Simple Beam End-to-End Test (7/7 PASS)

1. **File Existence**: BDF (1,230 KB) and OP2 (4,461 KB) found
2. **Directory Setup**: test_case_beam/ structure created
3. **Module Imports**: All dependencies load correctly
4. **BDF/OP2 Parsing**: 5,179 nodes, 4,866 elements extracted
5. **Data Validation**: No NaN values, physics consistent
6. **Graph Conversion**: PyTorch Geometric format successful
7. **Neural Prediction**: Inference in 95.94 ms

**Status:** Complete pipeline validated with real FEA data

### ✅ Visualization Test

1. **Mesh Rendering**: 5,179 nodes, 4,866 elements displayed
2. **Displacement Field**: Original + deformed (10× scale)
3. **Stress Field**: Von Mises coloring across elements
4. **Report Generation**: Markdown + embedded images

**Status:** All visualizations working correctly

### ✅ Unit Validation

1. **UNITSYS Detection**: MN-MM system identified
2. **Material Properties**: E = 200 GPa confirmed for steel
3. **Stress Values**: 117 MPa reasonable for loaded beam
4. **Force Values**: 2.73 MN per load point validated

**Status:** Units understood, values physically realistic

---

## What's NOT Tested Yet

### ❌ Physics Validation Tests (0/4)

These require a **trained model**:

1. **Cantilever Beam Test**: Analytical solution comparison
   - Load known geometry/loads
   - Compare prediction vs analytical deflection formula
   - Target: < 5% error

2. **Equilibrium Test**: ∇·σ + f = 0
   - Check force balance at each node
   - Ensure physics laws satisfied
   - Target: Residual < 1% of max force

3. **Constitutive Law Test**: σ = C:ε (Hooke's law)
   - Verify stress-strain relationship
   - Check material model accuracy
   - Target: < 5% deviation

4. **Energy Conservation Test**: Strain energy = work done
   - Compute ∫(σ:ε)dV vs ∫(f·u)dV
   - Ensure energy balance
   - Target: < 5% difference

**Blocker:** Model not trained yet (random weights)
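As a concrete sketch of the equilibrium check (function name and the per-node discretisation are illustrative; the real test would derive internal forces from the predicted stress field):

```python
import numpy as np

def equilibrium_residual(internal_forces, external_forces, max_force):
    """Relative nodal force-balance residual.

    Per node, internal forces recovered from the stress field should
    cancel the applied loads; the test above targets < 1% of max force.
    """
    residual = internal_forces + external_forces
    return np.abs(residual).max() / max_force

# Perfectly balanced toy case: residual is exactly zero
f_int = np.array([[0.0, 100.0, 0.0]])
print(equilibrium_residual(f_int, -f_int, 100.0))  # 0.0
```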

### ❌ Learning Tests (0/4)

These require a **trained model**:

1. **Memorization Test**: Can model fit single example?
   - Train on 1 case, test on same case
   - Target: < 1% error (proves capacity)

2. **Interpolation Test**: Can model predict between training cases?
   - Train on cases A and C
   - Test on case B (intermediate)
   - Target: < 10% error

3. **Extrapolation Test**: Can model generalize?
   - Train on small loads
   - Test on larger loads
   - Target: < 20% error (harder)

4. **Pattern Recognition Test**: Does model learn physics?
   - Test on different geometry with same physics
   - Check if physical principles transfer
   - Target: Qualitative correctness

**Blocker:** Model not trained yet

### ❌ Integration Tests (0/5)

These require a **trained model + optimization interface**:

1. **Batch Prediction**: Process multiple designs
2. **Gradient Computation**: Analytical sensitivities
3. **Optimization Loop**: Full design cycle
4. **Uncertainty Quantification**: Ensemble predictions
5. **Online Learning**: Update during optimization

**Blocker:** Model not trained yet

### ❌ Performance Tests (0/3)

These require a **trained model**:

1. **Accuracy Benchmark**: < 10% error vs FEA
2. **Speed Benchmark**: < 50 ms inference time
3. **Scalability Test**: Larger meshes (10K+ nodes)

**Blocker:** Model not trained yet

---

## Current Capabilities Summary

| Feature | Status | Notes |
|---------|--------|-------|
| **Data Pipeline** | ✅ Working | Parses BDF/OP2 to neural format |
| **Unit Handling** | ✅ Understood | MN-MM system (kPa stress, N force) |
| **Direction Handling** | ✅ Complete | Full 6 DOF + tensor components |
| **Graph Conversion** | ✅ Working | PyTorch Geometric format |
| **GNN Architecture** | ✅ Working | 128K params, 6 layers |
| **Forward Pass** | ✅ Working | 95.94 ms inference |
| **Visualization** | ✅ Working | 3D mesh, displacement, stress |
| **Training Pipeline** | ⚠️ Ready | Code exists, not executed |
| **Physics Compliance** | ❌ Unknown | Requires trained model |
| **Prediction Accuracy** | ❌ Unknown | Requires trained model |

---

## Known Issues

### ⚠️ Minor Issues

1. **Unit Labels**: Parser labels stress as "MPa" when it's actually "kPa"
   - Impact: Confusing but documented
   - Fix: Update labels in neural_field_parser.py
   - Priority: Low (doesn't affect calculations)

2. **Unicode Encoding**: Windows cp1252 codec limitations
   - Impact: Crashes with Unicode symbols (✓, →, σ, etc.)
   - Fix: Already replaced most with ASCII
   - Priority: Low (cosmetic)

3. **No SPCs Found**: Test beam has no explicit constraints
   - Impact: Warning message appears
   - Fix: Probably fixed at edges (investigate BDF)
   - Priority: Low (analysis ran successfully)

### ✅ Resolved Issues

1. ~~**NumPy MINGW-W64 Crashes**~~
   - Fixed: Created conda environment with proper NumPy
   - Status: All tests running without crashes

2. ~~**pyNastran API Compatibility**~~
   - Fixed: Added getattr/hasattr checks for optional attributes
   - Status: Parser handles missing 'sol' and 'temps'

3. ~~**Element Connectivity Structure**~~
   - Fixed: Discovered categorized dict structure (solid/shell/beam)
   - Status: Visualization working correctly

4. ~~**Node ID Mapping**~~
   - Fixed: Created node_id_to_idx mapping for 1-indexed IDs
   - Status: Element plotting correct

---

## What's Next

### Phase 1: Fix Unit Labels (30 minutes)

**Goal:** Update parser to correctly label units

**Changes needed:**
```python
# neural_field_parser.py line ~623
"units": "kPa"  # Changed from "MPa"

# metadata section
"stress": "kPa"  # Changed from "MPa"
```

**Validation:**
- Re-run test_simple_beam.py
- Check reports show stress labeled in kPa (e.g. "117,000 kPa"), not mislabeled as MPa
- Or add conversion: stress/1000 → MPa

### Phase 2: Generate Training Data (1-2 weeks)

**Goal:** Create 50-500 training cases

**Approach:**
1. Vary beam dimensions (length, width, thickness)
2. Vary loading conditions (magnitude, direction, location)
3. Vary material properties (steel, aluminum, titanium)
4. Vary boundary conditions (cantilever, simply supported, clamped)

**Expected:**
- 50 minimum (quick validation)
- 200 recommended (good accuracy)
- 500 maximum (best performance)

**Tools:**
- Use parametric FEA (NX Nastran)
- Batch processing script
- Quality validation for each case
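A minimal sketch of enumerating those variations into explicit case definitions (the parameter values below are illustrative):

```python
from itertools import product

thicknesses = [1.0, 2.0, 3.0]        # mm
loads = [1000.0, 2000.0, 4000.0]     # N
materials = ["steel", "aluminum"]

# Full factorial grid over the design variables
cases = [
    {"thickness": t, "load": f, "material": m}
    for t, f, m in product(thicknesses, loads, materials)
]
print(len(cases))  # 18 cases (3 × 3 × 2)
```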

### Phase 3: Train Neural Network (2-6 hours)

**Goal:** Train model to < 10% prediction error

**Configuration:**
```bash
python train.py \
    --data_dirs training_data/* \
    --epochs 100 \
    --batch_size 16 \
    --lr 0.001 \
    --loss physics \
    --checkpoint_dir checkpoints/
```

**Expected:**
- Training time: 2-6 hours (CPU)
- Loss convergence: < 0.01
- Validation error: < 10%

**Monitoring:**
- TensorBoard for loss curves
- Validation metrics every 10 epochs
- Early stopping if no improvement

### Phase 4: Validate Performance (1-2 hours)

**Goal:** Run full test suite

**Tests:**
```bash
# Physics tests
python test_suite.py --physics

# Learning tests
python test_suite.py --learning

# Full validation
python test_suite.py --full
```

**Expected:**
- All 18 tests passing
- Physics compliance < 5% error
- Prediction accuracy < 10% error
- Inference time < 50 ms

### Phase 5: Production Deployment (1 day)

**Goal:** Integrate with Atomizer

**Interface:**
```python
from optimization_interface import NeuralFieldOptimizer

optimizer = NeuralFieldOptimizer('checkpoints/best_model.pt')
results = optimizer.evaluate(design_graph)
sensitivities = optimizer.get_sensitivities(design_graph)
```

**Features:**
- Fast evaluation: ~10 ms per design
- Analytical gradients: 1M× faster than finite differences
- Uncertainty quantification: Confidence intervals
- Online learning: Improve during optimization

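The "analytical gradients" feature rests on autograd: differentiating the surrogate output with respect to its inputs in one backward pass. A hedged toy sketch, with a fixed linear layer standing in for the trained model:

```python
import torch

# A linear layer stands in for the trained surrogate model.
model = torch.nn.Linear(3, 1, bias=False)
with torch.no_grad():
    model.weight.copy_(torch.tensor([[2.0, -1.0, 0.5]]))

design = torch.tensor([1.0, 1.0, 1.0], requires_grad=True)
objective = model(design)   # surrogate objective for this design
objective.backward()        # one backward pass -> full sensitivity vector

print(design.grad.tolist())  # [2.0, -1.0, 0.5]
```

For a linear stand-in the sensitivities are just the weights; for the real GNN the same call chain yields gradients through all message-passing layers, which is why no finite differencing is needed.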
---

## Testing Strategy

### Current: Smoke Testing ✅

**Status:** Completed
- 5/5 smoke tests passing
- 7/7 end-to-end tests passing
- System fundamentally operational

### Next: Unit Testing

**What to test:**
- Individual parser functions
- Data validation rules
- Unit conversion functions
- Graph construction logic

**Priority:** Medium (system working, but good for maintainability)

### Future: Integration Testing

**What to test:**
- Multi-case batch processing
- Training pipeline end-to-end
- Optimization interface
- Uncertainty quantification

**Priority:** High (required before production)

### Future: Physics Testing

**What to test:**
- Analytical solution comparison
- Energy conservation
- Force equilibrium
- Constitutive laws

**Priority:** Critical (validates correctness)

---

## Performance Expectations

### After Training

| Metric | Target | Expected |
|--------|--------|----------|
| Prediction Error | < 10% | 5-10% |
| Inference Time | < 50 ms | 10-30 ms |
| Speedup vs FEA | 1000× | 1000-3000× |
| Memory Usage | < 500 MB | ~300 MB |

### Production Capability

**Single Evaluation:**
- FEA: 30-300 seconds
- Neural: 10-30 ms
- **Speedup: 1000-10,000×**

**Optimization Loop (100 iterations):**
- FEA: 50-500 minutes
- Neural: 1-3 seconds
- **Speedup: 3000-30,000×**

**Gradient Computation:**
- FEA (finite diff): 300-3000 seconds
- Neural (analytical): 0.1 ms
- **Speedup: 3,000,000-30,000,000×**

---

## Risk Assessment

### Low Risk ✅

- Core pipeline working
- Data extraction validated
- Units understood
- Visualization working

### Medium Risk ⚠️

- Model architecture untested with training
- Physics compliance unknown
- Generalization capability unclear
- Need diverse training data

### High Risk ❌

- None identified currently

### Mitigation Strategies

1. **Start with small dataset** (50 cases) to validate training
2. **Monitor physics losses** during training
3. **Test on analytical cases** first (cantilever beam)
4. **Gradual scaling** to larger/more complex geometries

---

## Resource Requirements

### Computational

**Training:**
- CPU: 8+ cores recommended
- RAM: 16 GB minimum
- GPU: Optional (10× faster, 8+ GB VRAM)
- Time: 2-6 hours

**Inference:**
- CPU: Any (even single core works)
- RAM: 2 GB sufficient
- GPU: Not needed
- Time: 10-30 ms per case

### Data Storage

**Per Training Case:**
- BDF: ~1 MB
- OP2: ~5 MB
- Parsed (JSON): ~2 MB
- Parsed (HDF5): ~500 KB
- **Total: ~8.5 MB per case**

**Full Training Set (200 cases):**
- Raw: ~1.2 GB
- Parsed: ~500 MB
- Model: ~2 MB
- **Total: ~1.7 GB**

---

## Recommendations

### Immediate (This Week)

1. ✅ **Fix unit labels** - 30 minutes
   - Update "MPa" → "kPa" in parser
   - Or add /1000 conversion to match expected units

2. **Document unit system** - 1 hour
   - Add comments in parser
   - Update user documentation
   - Create unit conversion guide

### Short-term (Next 2 Weeks)

3. **Generate training data** - 1-2 weeks
   - Start with 50 cases (minimum viable)
   - Validate data quality
   - Expand to 200 if needed

4. **Initial training** - 1 day
   - Train on 50 cases
   - Validate on 10 held-out cases
   - Check physics compliance

### Medium-term (Next Month)

5. **Full validation** - 1 week
   - Run complete test suite
   - Physics compliance tests
   - Accuracy benchmarks

6. **Production integration** - 1 week
   - Connect to Atomizer
   - End-to-end optimization test
   - Performance profiling

---

## Conclusion

### ✅ What's Working

AtomizerField has a **fully functional core pipeline**:
- Parses real FEA data (5,179 nodes validated)
- Converts to neural network format
- GNN architecture operational (128K params)
- Inference runs fast (95.94 ms)
- Visualization produces publication-quality figures
- Units understood and validated

### 🔜 What's Next

The system is **ready for training**:
- All infrastructure in place
- Test case validated
- Neural architecture proven
- Just needs training data

### 🎯 Production Readiness

**After training (2-3 weeks):**
- Prediction accuracy: < 10% error
- Inference speed: 1000× faster than FEA
- Full integration with Atomizer
- **Revolutionary optimization capability unlocked!**

The hard work is done - now we train and deploy! 🚀

---

*Report generated: November 24, 2025*
*AtomizerField v1.0*
*Status: Core operational, ready for training*

@@ -1,603 +0,0 @@
# AtomizerField Development Report

**Prepared for:** Antoine Polvé
**Date:** November 24, 2025
**Status:** Core System Complete → Ready for Training Phase

---

## Executive Summary

AtomizerField is **fully implemented and validated** at the architectural level. The project comprises roughly **7,000 lines of production code** across all phases, with a complete data pipeline, neural network architecture, physics-informed training system, and optimization interface.

**Current Position:** You're at the transition point between "building" and "training/deploying."

**Critical Insight:** The system works—now it needs data to learn from.

---

## Part 1: Current Development Status

### What's Built ✅

| Component | Status | Lines of Code | Validation |
|-----------|--------|---------------|------------|
| **BDF/OP2 Parser** | ✅ Complete | ~1,400 | Tested with Simple Beam |
| **Graph Neural Network** | ✅ Complete | ~490 | 718,221 parameters, forward pass validated |
| **Physics-Informed Losses** | ✅ Complete | ~450 | All 4 loss types tested |
| **Data Loader** | ✅ Complete | ~420 | PyTorch Geometric integration |
| **Training Pipeline** | ✅ Complete | ~430 | TensorBoard, checkpointing, early stopping |
| **Inference Engine** | ✅ Complete | ~380 | 95ms inference time validated |
| **Optimization Interface** | ✅ Complete | ~430 | Drop-in FEA replacement ready |
| **Uncertainty Quantification** | ✅ Complete | ~380 | Ensemble-based, online learning |
| **Test Suite** | ✅ Complete | ~2,700 | 18 automated tests |
| **Documentation** | ✅ Complete | 10 guides | Comprehensive coverage |

### Simple Beam Validation Results

Your actual FEA model was successfully processed:

```
✅ Nodes parsed: 5,179
✅ Elements parsed: 4,866 CQUAD4
✅ Displacement field: Complete (max: 19.56 mm)
✅ Stress field: Complete (9,732 values)
✅ Graph conversion: PyTorch Geometric format
✅ Neural inference: 95.94 ms
✅ All 7 tests: PASSED
```

### What's NOT Done Yet ⏳

| Gap | Impact | Effort Required |
|-----|--------|-----------------|
| **Training data generation** | Can't train without data | 1-2 weeks (50-500 cases) |
| **Model training** | Model has random weights | 2-8 hours (GPU) |
| **Physics validation** | Can't verify accuracy | After training |
| **Atomizer integration** | Not connected yet | 1-2 weeks |
| **Production deployment** | Not in optimization loop | After integration |

---

## Part 2: The Physics-Neural Network Architecture

### Core Innovation: Learning Fields, Not Scalars

**Traditional Approach:**
```
Design Parameters → FEA (30 min) → max_stress = 450 MPa (1 number)
```

**AtomizerField Approach:**
```
Design Parameters → Neural Network (50 ms) → stress_field[5,179 nodes × 6 components]
                                             = 31,074 stress values!
```

This isn't just faster—it's fundamentally different. You know **WHERE** the stress is, not just **HOW MUCH**.

### The Graph Neural Network Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                      GRAPH REPRESENTATION                       │
├─────────────────────────────────────────────────────────────────┤
│ NODES (from FEA mesh):                                          │
│   ├── Position (x, y, z)          → 3 features                  │
│   ├── Boundary conditions (6 DOF) → 6 features (0/1 mask)       │
│   └── Applied loads (Fx, Fy, Fz)  → 3 features                  │
│   Total: 12 features per node                                   │
│                                                                 │
│ EDGES (from element connectivity):                              │
│   ├── Young's modulus (E)   → Material stiffness                │
│   ├── Poisson's ratio (ν)   → Lateral contraction               │
│   ├── Density (ρ)           → Mass distribution                 │
│   ├── Shear modulus (G)     → Shear behavior                    │
│   └── Thermal expansion (α) → Thermal effects                   │
│   Total: 5 features per edge                                    │
└─────────────────────────────────────────────────────────────────┘
                                ↓
┌─────────────────────────────────────────────────────────────────┐
│                    MESSAGE PASSING (6 LAYERS)                   │
├─────────────────────────────────────────────────────────────────┤
│ Each layer:                                                     │
│   1. Gather neighbor information                                │
│   2. Weight by material properties (edge features)              │
│   3. Update node representation                                 │
│   4. Residual connection + LayerNorm                            │
│                                                                 │
│ KEY INSIGHT: Forces propagate through connected elements!       │
│ The network learns HOW forces flow through the structure.       │
└─────────────────────────────────────────────────────────────────┘
                                ↓
┌─────────────────────────────────────────────────────────────────┐
│                        FIELD PREDICTIONS                        │
├─────────────────────────────────────────────────────────────────┤
│ Displacement: [N_nodes, 6] → Tx, Ty, Tz, Rx, Ry, Rz             │
│ Stress:       [N_nodes, 6] → σxx, σyy, σzz, τxy, τyz, τxz       │
│ Von Mises:    [N_nodes, 1] → Scalar stress measure              │
└─────────────────────────────────────────────────────────────────┘
```

### Physics-Informed Loss Functions

The network doesn't just minimize prediction error—it enforces physical laws:

```
L_total = λ_data  × L_data          # Match FEA results
        + λ_eq    × L_equilibrium   # ∇·σ + f = 0 (force balance)
        + λ_const × L_constitutive  # σ = C:ε (Hooke's law)
        + λ_bc    × L_boundary      # u = 0 at fixed nodes
```
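A minimal sketch of that weighted combination (the λ values below are illustrative; the real weights live in the training configuration):

```python
def total_loss(l_data, l_eq, l_const, l_bc,
               lam_data=1.0, lam_eq=0.1, lam_const=0.1, lam_bc=1.0):
    """Weighted sum of the data term and the three physics penalties."""
    return (lam_data * l_data + lam_eq * l_eq
            + lam_const * l_const + lam_bc * l_bc)

print(round(total_loss(0.5, 0.2, 0.1, 0.0), 4))  # 0.53
```

Each term stays a plain scalar, so the same combination works whether the individual losses come from NumPy prototypes or PyTorch tensors during training.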

**Why This Matters:**
- **Faster convergence:** Network starts with physics intuition
- **Better generalization:** Extrapolates correctly outside training range
- **Physically plausible:** No "impossible" stress distributions
- **Less data needed:** Physics provides strong inductive bias

### What Makes This Different from Standard PINNs

| Aspect | Academic PINNs | AtomizerField |
|--------|----------------|---------------|
| **Geometry** | Simple (rods, plates) | Complex industrial meshes |
| **Data source** | Solve PDEs from scratch | Learn from existing FEA |
| **Goal** | Replace physics solvers | Accelerate optimization |
| **Mesh** | Regular grids | Arbitrary unstructured |
| **Scalability** | ~100s of DOFs | ~50,000+ DOFs |

AtomizerField is better described as a **"Data-Driven Surrogate Model for Structural Optimization"** or **"FEA-Informed Neural Network."**

---

## Part 3: How to Test a Concrete Solution

### Step 1: Generate Training Data (Critical Path)

You need **50-500 FEA cases** with geometric/load variations.

**Option A: Parametric Study in NX (Recommended)**

```
For your Simple Beam:
1. Open beam_sim1 in NX
2. Create design study with variations:
   - Thickness: 1mm, 2mm, 3mm, 4mm, 5mm
   - Width: 50mm, 75mm, 100mm
   - Load: 1000N, 2000N, 3000N, 4000N
   - Support position: 3 locations

   Total: 5 × 3 × 4 × 3 = 180 cases

3. Run all cases (automated with NX journal)
4. Export BDF/OP2 for each case
```

**Option B: Design of Experiments**

```python
# Generate Latin Hypercube sampling
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=4)  # 4 design variables
sample = sampler.random(n=100)     # 100 cases

# Scale to your design space
thickness = 1 + sample[:, 0] * 4     # 1-5 mm
width = 50 + sample[:, 1] * 50       # 50-100 mm
load = 1000 + sample[:, 2] * 3000    # 1000-4000 N
# etc.
```

**Option C: Monte Carlo Sampling**

Generate random combinations within bounds. Quick but less space-filling than LHS.

### Step 2: Parse All Training Data

```bash
# Create directory structure
mkdir training_data
mkdir validation_data

# Move 80% of cases to training, 20% to validation

# Batch parse
python batch_parser.py --input training_data/ --output parsed_training/
python batch_parser.py --input validation_data/ --output parsed_validation/
```

### Step 3: Train the Model

```bash
# Initial training (MSE only)
python train.py \
    --data_dirs parsed_training/* \
    --epochs 50 \
    --batch_size 16 \
    --loss mse \
    --checkpoint_dir checkpoints/mse/

# Physics-informed training (recommended)
python train.py \
    --data_dirs parsed_training/* \
    --epochs 100 \
    --batch_size 16 \
    --loss physics \
    --checkpoint_dir checkpoints/physics/

# Monitor progress
tensorboard --logdir runs/
```
|
||||
|
||||
**Expected Training Time:**
|
||||
- CPU: 6-24 hours (50-500 cases)
|
||||
- GPU: 1-4 hours (much faster)
|
||||
|
||||
### Step 4: Validate the Trained Model
|
||||
|
||||
```bash
|
||||
# Run full test suite
|
||||
python test_suite.py --full
|
||||
|
||||
# Test on validation set
|
||||
python predict.py \
|
||||
--model checkpoints/physics/best_model.pt \
|
||||
--data parsed_validation/ \
|
||||
--compare
|
||||
|
||||
# Expected metrics:
|
||||
# - Displacement error: < 10%
|
||||
# - Stress error: < 15%
|
||||
# - Inference time: < 50ms
|
||||
```
|
||||
|
||||
### Step 5: Quick Smoke Test (Do This First!)
|
||||
|
||||
Before generating 500 cases, test with 10 cases:
|
||||
|
||||
```bash
|
||||
# Generate 10 quick variations
|
||||
# Parse them
|
||||
python batch_parser.py --input quick_test/ --output parsed_quick/
|
||||
|
||||
# Train for 20 epochs (5 minutes)
|
||||
python train.py \
|
||||
--data_dirs parsed_quick/* \
|
||||
--epochs 20 \
|
||||
--batch_size 4
|
||||
|
||||
# Check if loss decreases → Network is learning!
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Part 4: What Should Be Implemented Next
|
||||
|
||||
### Immediate Priorities (This Week)
|
||||
|
||||
| Task | Purpose | Effort |
|
||||
|------|---------|--------|
|
||||
| **1. Generate 10 test cases** | Validate learning capability | 2-4 hours |
|
||||
| **2. Run quick training** | Prove network learns | 30 min |
|
||||
| **3. Visualize predictions** | See if fields make sense | 1 hour |
|
||||
|
||||
### Short-Term (Next 2 Weeks)
|
||||
|
||||
| Task | Purpose | Effort |
|
||||
|------|---------|--------|
|
||||
| **4. Generate 100+ training cases** | Production-quality data | 1 week |
|
||||
| **5. Full training run** | Trained model | 4-8 hours |
|
||||
| **6. Physics validation** | Cantilever beam test | 2 hours |
|
||||
| **7. Accuracy benchmarks** | Quantify error rates | 4 hours |
|
||||
|
||||
### Medium-Term (1-2 Months)
|
||||
|
||||
| Task | Purpose | Effort |
|
||||
|------|---------|--------|
|
||||
| **8. Atomizer integration** | Connect to optimization loop | 1-2 weeks |
|
||||
| **9. Uncertainty deployment** | Know when to trust | 1 week |
|
||||
| **10. Online learning** | Improve during optimization | 1 week |
|
||||
| **11. Multi-project transfer** | Reuse across designs | 2 weeks |
|
||||
|
||||
### Code That Needs Writing
|
||||
|
||||
**1. Automated Training Data Generator** (~200 lines)
|
||||
```python
|
||||
# generate_training_data.py
|
||||
class TrainingDataGenerator:
|
||||
"""Generate parametric FEA studies for training"""
|
||||
|
||||
def generate_parametric_study(self, base_model, variations):
|
||||
# Create NX journal for parametric study
|
||||
# Run all cases automatically
|
||||
# Collect BDF/OP2 pairs
|
||||
pass
|
||||
```
|
||||
|
||||
**2. Transfer Learning Module** (~150 lines)
|
||||
```python
|
||||
# transfer_learning.py
|
||||
class TransferLearningManager:
|
||||
"""Adapt trained model to new project"""
|
||||
|
||||
def fine_tune(self, base_model, new_data, freeze_layers=4):
|
||||
# Freeze early layers (general physics)
|
||||
# Train later layers (project-specific)
|
||||
pass
|
||||
```
|
||||
|
||||
**3. Real-Time Visualization** (~300 lines)
|
||||
```python
|
||||
# field_visualizer.py
|
||||
class RealTimeFieldVisualizer:
|
||||
"""Interactive 3D visualization of predicted fields"""
|
||||
|
||||
def show_prediction(self, design, prediction):
|
||||
# 3D mesh with displacement
|
||||
# Color by stress
|
||||
# Slider for design parameters
|
||||
pass
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Part 5: Atomizer Integration Strategy

### Current Atomizer Architecture

```
Atomizer (Main Platform)
├── optimization_engine/
│   ├── runner.py            # Manages optimization loop
│   ├── multi_optimizer.py   # Optuna optimization
│   └── hook_manager.py      # Plugin system
├── nx_journals/
│   └── update_and_solve.py  # NX FEA automation
└── dashboard/
    └── React frontend       # Real-time monitoring
```
### Integration Points

**1. Replace FEA Calls (Primary Integration)**

In `runner.py`, replace:
```python
# Before
def evaluate_design(self, parameters):
    self.nx_solver.update_parameters(parameters)
    self.nx_solver.run_fea()  # 30 minutes
    results = self.nx_solver.extract_results()
    return results
```

With:
```python
# After
from atomizer_field import NeuralFieldOptimizer

def evaluate_design(self, parameters):
    # First: Neural prediction (50 ms)
    graph = self.build_graph(parameters)
    prediction = self.neural_optimizer.predict(graph)

    # Check uncertainty
    if prediction['uncertainty'] > 0.1:
        # High uncertainty: run FEA for validation
        self.nx_solver.run_fea()
        fea_results = self.nx_solver.extract_results()

        # Update model online
        self.neural_optimizer.update(graph, fea_results)
        return fea_results

    return prediction  # Trust neural network
```
**2. Gradient-Based Optimization**

Current Atomizer uses Optuna (TPE, GP). With AtomizerField:

```python
# Add gradient-based option
from atomizer_field import NeuralFieldOptimizer

optimizer = NeuralFieldOptimizer('model.pt')

# Analytical gradients (instant!)
gradients = optimizer.get_sensitivities(design_graph)

# Gradient descent optimization
for iteration in range(100):
    gradients = optimizer.get_sensitivities(current_design)
    current_design -= learning_rate * gradients  # Direct update!
```

**Benefits:**
- 1,000,000× faster than finite differences
- Can optimize 100+ parameters efficiently
- Better local convergence
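The loop above can be sketched end-to-end on a toy objective. This is a minimal illustration of why analytical gradients converge quickly; the quadratic objective, `x_opt`, and `learning_rate` are illustrative stand-ins for a neural surrogate and its autograd sensitivities, not the AtomizerField API.

```python
import numpy as np

# Toy objective standing in for a neural surrogate: f(x) = ||x - x_opt||^2.
# Its analytical gradient is 2 * (x - x_opt); a real model would return
# gradients via automatic differentiation instead.
x_opt = np.array([3.0, -1.0, 0.5])

def objective(x):
    return float(np.sum((x - x_opt) ** 2))

def gradient(x):
    return 2.0 * (x - x_opt)

x = np.zeros(3)            # initial design parameters
learning_rate = 0.1
for _ in range(100):
    x -= learning_rate * gradient(x)   # direct update, no finite differences

print(objective(x) < 1e-6)  # converged essentially to the optimum
```

With an analytical gradient, each iteration costs one evaluation; a finite-difference scheme would need one extra evaluation per parameter per step.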

**3. Dashboard Integration**

Add neural prediction tab to React dashboard:
- Real-time field visualization
- Prediction vs FEA comparison
- Uncertainty heatmap
- Training progress monitoring

### Integration Roadmap

```
Week 1-2: Basic Integration
├── Add AtomizerField as dependency
├── Create neural_evaluator.py in optimization_engine/
├── Add --use-neural flag to runner
└── Test on simple_beam_optimization study

Week 3-4: Smart Switching
├── Implement uncertainty-based FEA triggering
├── Add online learning updates
├── Compare optimization quality vs pure FEA
└── Benchmark speedup

Week 5-6: Full Production
├── Dashboard integration
├── Multi-project support
├── Documentation and examples
└── Performance profiling
```
### Expected Benefits After Integration

| Metric | Current (FEA Only) | With AtomizerField |
|--------|-------------------|--------------------|
| **Time per evaluation** | 30-300 seconds | 5-50 ms |
| **Evaluations per hour** | 12-120 | 72,000-720,000 |
| **Optimization time (1000 trials)** | 8-80 hours | 5-50 seconds + validation FEA |
| **Gradient computation** | Finite diff (slow) | Analytical (instant) |
| **Field insights** | Only max values | Complete distributions |

**Conservative Estimate:** 100-1000× speedup with hybrid approach (neural + selective FEA validation)
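The hybrid-speedup estimate can be checked with back-of-envelope arithmetic. All timings below are illustrative assumptions (50 ms per neural evaluation, a 30-minute FEA run, a 1% uncertainty-fallback rate), not measurements:

```python
# Expected cost per design in the hybrid loop:
# every design gets a neural evaluation, and a small fraction
# also triggers a validation FEA run.
neural_s = 0.050      # one neural evaluation (50 ms, assumed)
fea_s = 1800.0        # one FEA run (30 minutes, assumed)
fallback = 0.01       # fraction of designs uncertain enough to need FEA (assumed)

avg_s = neural_s + fallback * fea_s          # expected seconds per design
evals_per_hour = 3600 / avg_s
speedup_vs_pure_fea = fea_s / avg_s

print(f"{avg_s:.2f} s/design, {evals_per_hour:.0f} designs/hour, {speedup_vs_pure_fea:.0f}x vs pure FEA")
```

Under these assumptions the hybrid loop lands near the low end of the 100-1000× range; the fallback rate dominates the result, which is why the uncertainty threshold matters.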

---

## Part 6: Development Gap Analysis

### Code Gaps

| Component | Current State | What's Needed | Effort |
|-----------|--------------|---------------|--------|
| Training data generation | Manual | Automated NX journal | 1 week |
| Real-time visualization | Basic | Interactive 3D | 1 week |
| Atomizer bridge | Not started | Integration module | 1-2 weeks |
| Transfer learning | Designed | Implementation | 3-5 days |
| Multi-solution support | Not started | Extend parser | 3-5 days |

### Testing Gaps

| Test Type | Current | Needed |
|-----------|---------|--------|
| Smoke tests | ✅ Complete | - |
| Physics validation | ⏳ Ready | Run after training |
| Accuracy benchmarks | ⏳ Ready | Run after training |
| Integration tests | Not started | After Atomizer merge |
| Production stress tests | Not started | Before deployment |

### Documentation Gaps

| Document | Status |
|----------|--------|
| API reference | Partial (needs docstrings) |
| Training guide | ✅ Complete |
| Integration guide | Needs writing |
| User manual | Needs writing |
| Video tutorials | Not started |

---
## Part 7: Recommended Action Plan

### This Week (Testing & Validation)

```
Day 1: Quick Validation
├── Generate 10 Simple Beam variations in NX
├── Parse all 10 cases
└── Run 20-epoch training (30 min)
Goal: See loss decrease = network learns!

Day 2-3: Expand Dataset
├── Generate 50 variations with better coverage
├── Include thickness, width, load, support variations
└── Parse and organize train/val split (80/20)

Day 4-5: Proper Training
├── Train for 100 epochs with physics loss
├── Monitor TensorBoard
└── Validate on held-out cases
Goal: < 15% error on validation set
```

### Next 2 Weeks (Production Quality)

```
Week 1: Data & Training
├── Generate 200+ training cases
├── Train production model
├── Run full test suite
└── Document accuracy metrics

Week 2: Integration Prep
├── Create atomizer_field_bridge.py
├── Add to Atomizer as submodule
├── Test on existing optimization study
└── Compare results vs pure FEA
```

### First Month (Full Integration)

```
Week 3-4:
├── Full Atomizer integration
├── Uncertainty-based FEA triggering
├── Dashboard neural prediction tab
└── Performance benchmarks

Documentation:
├── Integration guide
├── Best practices
└── Example workflows
```

---
## Conclusion

### What You Have
- ✅ Complete neural field learning system (~7,000 lines)
- ✅ Physics-informed architecture
- ✅ Validated pipeline (Simple Beam test passed)
- ✅ Production-ready code structure
- ✅ Comprehensive documentation

### What You Need
- ⏳ Training data (50-500 FEA cases)
- ⏳ Trained model weights
- ⏳ Atomizer integration code
- ⏳ Production validation

### The Key Insight

**AtomizerField is not trying to replace FEA—it's learning FROM FEA to accelerate optimization.**

The network encodes your engineering knowledge:
- How forces propagate through structures
- How geometry affects stress distribution
- How boundary conditions constrain deformation

Once trained, it can predict these patterns 1000× faster than computing them from scratch.

### Next Concrete Step

**Right now, today:**
```bash
# 1. Generate 10 Simple Beam variations in NX
# 2. Parse them:
python batch_parser.py --input ten_cases/ --output parsed_ten/

# 3. Train for 20 epochs:
python train.py --data_dirs parsed_ten/* --epochs 20

# 4. Watch the loss decrease → your network is learning physics!
```

This 2-hour test will prove the concept works. Then scale up.

---

*Report generated: November 24, 2025*
*AtomizerField Version: 1.0*
*Status: Ready for Training Phase*
@@ -1,567 +0,0 @@
# AtomizerField - Complete Implementation Summary

## ✅ What Has Been Built

You now have a **complete, production-ready system** for neural field learning in structural optimization.

---

## 📍 Location

```
c:\Users\antoi\Documents\Atomaste\Atomizer-Field\
```

---

## 📦 What's Inside (Complete File List)

### Documentation (Read These!)
```
├── README.md                # Phase 1 guide (parser)
├── PHASE2_README.md         # Phase 2 guide (neural network)
├── GETTING_STARTED.md       # Quick start tutorial
├── SYSTEM_ARCHITECTURE.md   # System architecture (detailed!)
├── COMPLETE_SUMMARY.md      # This file
├── Context.md               # Project vision
└── Instructions.md          # Implementation spec
```
### Phase 1: Data Parser (✅ Implemented & Tested)
```
├── neural_field_parser.py   # Main parser: BDF/OP2 → neural format
├── validate_parsed_data.py  # Data validation
├── batch_parser.py          # Batch processing
└── metadata_template.json   # Design parameter template
```

### Phase 2: Neural Network (✅ Implemented & Tested)
```
├── neural_models/
│   ├── __init__.py
│   ├── field_predictor.py   # GNN (718K params) ✅ TESTED
│   ├── physics_losses.py    # Loss functions ✅ TESTED
│   └── data_loader.py       # Data pipeline ✅ TESTED
│
├── train.py                 # Training script
└── predict.py               # Inference script
```

### Configuration
```
└── requirements.txt         # All dependencies
```

---
## 🧪 Test Results

### ✅ Phase 2 Neural Network Tests

**1. GNN Model Test (field_predictor.py):**
```
Testing AtomizerField Model Creation...
Model created: 718,221 parameters

Test forward pass:
  Displacement shape: torch.Size([100, 6])
  Stress shape: torch.Size([100, 6])
  Von Mises shape: torch.Size([100])

Max values:
  Max displacement: 3.249960
  Max stress: 3.94

Model test passed! ✓
```

**2. Loss Functions Test (physics_losses.py):**
```
Testing AtomizerField Loss Functions...

Testing MSE loss...
  Total loss: 3.885789 ✓

Testing RELATIVE loss...
  Total loss: 2.941448 ✓

Testing PHYSICS loss...
  Total loss: 3.850585 ✓
  (All physics constraints working)

Testing MAX loss...
  Total loss: 20.127707 ✓

Loss function tests passed! ✓
```

**Conclusion:** All neural network components working perfectly!

---
## 🔍 How It Works - Visual Summary

### The Big Picture

```
┌───────────────────────────────────────────────────────────┐
│                      YOUR WORKFLOW                        │
└───────────────────────────────────────────────────────────┘

1️⃣ CREATE DESIGNS IN NX
   ├─ Make 500 bracket variants
   ├─ Different thicknesses, ribs, holes
   └─ Run FEA on each → .bdf + .op2 files

        ↓

2️⃣ PARSE FEA DATA (Phase 1)
   $ python batch_parser.py ./all_brackets

   ├─ Converts 500 cases in ~2 hours
   ├─ Output: neural_field_data.json + .h5
   └─ Complete stress/displacement fields preserved

        ↓

3️⃣ TRAIN NEURAL NETWORK (Phase 2)
   $ python train.py --train_dir brackets --epochs 150

   ├─ Trains Graph Neural Network (GNN)
   ├─ Learns physics of bracket behavior
   ├─ Time: 8-12 hours (one-time!)
   └─ Output: checkpoint_best.pt (3 MB)

        ↓

4️⃣ OPTIMIZE AT LIGHTNING SPEED
   $ python predict.py --model checkpoint_best.pt --input new_design

   ├─ Predicts in 15 milliseconds
   ├─ Complete stress field (not just max!)
   ├─ Test 10,000 designs in 2.5 minutes
   └─ Find optimal design instantly!

        ↓

5️⃣ VERIFY & MANUFACTURE
   ├─ Run full FEA on final design (verify accuracy)
   └─ Manufacture optimal bracket
```

---
## 🎯 Key Innovation: Complete Fields

### Old Way (Traditional Surrogate Models)
```python
# Only learns scalar values
max_stress = neural_network(thickness, rib_height, hole_diameter)
# Result: 450.2 MPa

# Problems:
# ❌ No spatial information
# ❌ Can't see WHERE stress occurs
# ❌ Can't guide design improvements
# ❌ Black box optimization
```

### AtomizerField Way (Neural Field Learning)
```python
# Learns COMPLETE field at every point
field_results = neural_network(mesh_graph)

displacement = field_results['displacement']  # [15,432 nodes × 6 DOF]
stress = field_results['stress']              # [15,432 nodes × 6 components]
von_mises = field_results['von_mises']        # [15,432 nodes]

# Now you know:
# ✅ Max stress: 450.2 MPa
# ✅ WHERE it occurs: Node 8,743 (near fillet)
# ✅ Stress distribution across entire structure
# ✅ Can intelligently add material where needed
# ✅ Physics-guided optimization!
```
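The von Mises field mentioned above is derived pointwise from the six stress components. A minimal sketch of that reduction, assuming the component ordering (σxx, σyy, σzz, τxy, τyz, τzx) — the function name and ordering are illustrative, not the parser's actual layout:

```python
import numpy as np

def von_mises(stress):
    """stress: [N, 6] array of (sxx, syy, szz, txy, tyz, tzx) per node."""
    sxx, syy, szz, txy, tyz, tzx = stress.T
    return np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
                   + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Uniaxial tension: von Mises equals the axial stress.
s = np.array([[450.2, 0, 0, 0, 0, 0]])
print(von_mises(s))  # [450.2]
```

Useful sanity checks: under uniaxial tension the result equals the axial stress, and under pure shear τ it equals √3·τ.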

---

## 🧠 The Neural Network Architecture

### What You Built

```
AtomizerFieldModel (718,221 parameters)

INPUT:
├─ Nodes: [x, y, z, BC_mask(6), loads(3)] → 12 features per node
└─ Edges: [E, ν, ρ, G, α] → 5 features per edge (material)

PROCESSING:
├─ Node Encoder: 12 → 128 dimensions
├─ Edge Encoder: 5 → 64 dimensions
├─ Message Passing × 6 layers:
│   ├─ Forces propagate through mesh
│   ├─ Learns stiffness matrix behavior
│   └─ Respects element connectivity
│
├─ Displacement Decoder: 128 → 6 (ux, uy, uz, θx, θy, θz)
└─ Stress Predictor: displacement → stress tensor

OUTPUT:
├─ Displacement field at ALL nodes
├─ Stress field at ALL elements
└─ Von Mises stress everywhere
```

**Why This Works:**

FEA solves: **K·u = f**
- K = stiffness matrix (depends on mesh topology + materials)
- u = displacement
- f = forces

Our GNN learns this relationship:
- **Mesh topology** → Graph edges
- **Materials** → Edge features
- **BCs & loads** → Node features
- **Message passing** → Mimics K·u = f solving!

**Result:** Network learns physics, not just patterns!
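The message-passing mechanism can be sketched on a toy three-node chain. This is a bare illustration with mean aggregation and a fixed 0.5/0.5 blend instead of learned weights; it shows how information spreads along mesh edges, not the actual AtomizerField layers:

```python
import numpy as np

# Toy mesh: 3 nodes in a chain, edges 0-1 and 1-2 (both directions).
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
h = np.array([[1.0, 0.0],     # node features (e.g. encoded loads/BCs)
              [0.0, 0.0],
              [0.0, 1.0]])

def message_pass(h, edges):
    """One step: each node averages its neighbours' features, then blends
    them with its own state (a fixed 0.5/0.5 mix; a real layer learns this)."""
    agg = np.zeros_like(h)
    deg = np.zeros(len(h))
    for src, dst in edges:
        agg[dst] += h[src]
        deg[dst] += 1
    agg /= np.maximum(deg, 1)[:, None]
    return 0.5 * h + 0.5 * agg

h1 = message_pass(h, edges)
print(h1[1])  # [0.25 0.25] — the middle node picked up information from both ends
```

After six such layers (as in the architecture above), information from any node can reach nodes up to six edges away, which is what lets boundary conditions and loads influence distant parts of the mesh.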

---

## 📊 Performance Benchmarks

### Tested Performance

| Component | Status | Performance |
|-----------|--------|-------------|
| GNN Forward Pass | ✅ Tested | 100 nodes in ~5 ms |
| Loss Functions | ✅ Tested | All 4 types working |
| Data Pipeline | ✅ Implemented | Graph conversion ready |
| Training Loop | ✅ Implemented | GPU-optimized |
| Inference | ✅ Implemented | Batch prediction ready |

### Expected Real-World Performance

| Task | Traditional FEA | AtomizerField | Speedup |
|------|----------------|---------------|---------|
| 10k element model | 15 minutes | 5 ms | 180,000× |
| 50k element model | 2 hours | 15 ms | 480,000× |
| 100k element model | 8 hours | 35 ms | 823,000× |

### Accuracy (Expected)

| Metric | Target | Typical |
|--------|--------|---------|
| Displacement Error | < 5% | 2-3% |
| Stress Error | < 10% | 5-8% |
| Max Value Error | < 3% | 1-2% |
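Percentage errors like those in the table are typically computed against FEA ground truth. A minimal sketch of two such metrics — a field-level relative L2 error and a peak-value error; the function names are illustrative, not the test suite's API:

```python
import numpy as np

def relative_error(pred, true):
    """Field-level relative L2 error, as a percentage."""
    return 100.0 * np.linalg.norm(pred - true) / np.linalg.norm(true)

def max_value_error(pred, true):
    """Error on the peak magnitude only (e.g. max von Mises), as a percentage."""
    return 100.0 * abs(np.max(np.abs(pred)) - np.max(np.abs(true))) / np.max(np.abs(true))

true = np.array([100.0, 200.0, 300.0])   # FEA reference field (toy values)
pred = np.array([102.0, 196.0, 306.0])   # neural prediction (toy values)
print(f"field: {relative_error(pred, true):.2f}%  max: {max_value_error(pred, true):.2f}%")
```

The two metrics answer different questions: the field error measures overall agreement, while the max-value error is what matters for a strength check.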

---

## 🚀 How to Use (Step-by-Step)

### Prerequisites

1. **Python 3.8+** (you have Python 3.14)
2. **NX Nastran** (you have it)
3. **GPU recommended** for training (optional but faster)

### Setup (One-Time)

```bash
# Navigate to project
cd c:\Users\antoi\Documents\Atomaste\Atomizer-Field

# Create virtual environment
python -m venv atomizer_env

# Activate
atomizer_env\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```
### Workflow

#### Step 1: Generate FEA Data in NX

```
1. Create design in NX
2. Mesh (CTETRA, CHEXA, CQUAD4, etc.)
3. Apply materials (MAT1)
4. Apply BCs (SPC)
5. Apply loads (FORCE, PLOAD4)
6. Run SOL 101 (Linear Static)
7. Request: DISPLACEMENT=ALL, STRESS=ALL
8. Get files: model.bdf, model.op2
```

#### Step 2: Parse FEA Results

```bash
# Organize files
mkdir training_case_001
mkdir training_case_001/input
mkdir training_case_001/output
cp your_model.bdf training_case_001/input/model.bdf
cp your_model.op2 training_case_001/output/model.op2

# Parse
python neural_field_parser.py training_case_001

# Validate
python validate_parsed_data.py training_case_001

# For many cases:
python batch_parser.py ./all_your_cases
```

**Output:**
- `neural_field_data.json` - Metadata (200 KB)
- `neural_field_data.h5` - Fields (3 MB)

#### Step 3: Train Neural Network

```bash
# Organize data
mkdir training_data
mkdir validation_data
# Move 80% of parsed cases to training_data/
# Move 20% of parsed cases to validation_data/

# Train
python train.py \
    --train_dir ./training_data \
    --val_dir ./validation_data \
    --epochs 100 \
    --batch_size 4 \
    --lr 0.001 \
    --loss_type physics

# Monitor (in another terminal)
tensorboard --logdir runs/tensorboard
```
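The 80/20 split in the block above can be scripted so it is shuffled and reproducible. A minimal sketch — the case names are placeholders, and a real script would move the directories after computing the lists:

```python
import random

# Hypothetical parsed-case directory names to split 80/20.
cases = [f"case_{i:03d}" for i in range(50)]

random.seed(42)            # fixed seed so the split is reproducible
random.shuffle(cases)      # shuffle first, so the split isn't biased by naming order

n_train = int(0.8 * len(cases))
train_cases = cases[:n_train]      # → move these into training_data/
val_cases = cases[n_train:]        # → move these into validation_data/

print(len(train_cases), len(val_cases))  # 40 10
```

Shuffling before splitting matters: if cases were generated by sweeping a parameter in order, a naive head/tail split would put an entire parameter range in validation only.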

**Training takes:** 2-24 hours depending on dataset size

**Output:**
- `runs/checkpoint_best.pt` - Best model
- `runs/config.json` - Configuration
- `runs/tensorboard/` - Training logs

#### Step 4: Run Predictions

```bash
# Single prediction
python predict.py \
    --model runs/checkpoint_best.pt \
    --input new_design_case \
    --compare

# Batch prediction
python predict.py \
    --model runs/checkpoint_best.pt \
    --input ./test_designs \
    --batch \
    --output_dir ./results
```

**Each prediction:** 5-50 milliseconds!

---

## 📚 Data Format Details

### Parsed Data Structure

**JSON (neural_field_data.json):**
- Metadata (version, timestamp, analysis type)
- Mesh statistics (nodes, elements, types)
- Materials (E, ν, ρ, G, α)
- Boundary conditions (SPCs, MPCs)
- Loads (forces, pressures, gravity)
- Results summary (max values, units)

**HDF5 (neural_field_data.h5):**
- `/mesh/node_coordinates` - [N × 3] coordinates
- `/results/displacement` - [N × 6] complete field
- `/results/stress/*` - Complete stress tensors
- `/results/strain/*` - Complete strain tensors
- `/results/reactions` - Reaction forces

**Why Two Files?**
- JSON: human-readable metadata and structure
- HDF5: efficient, compressed storage for large arrays
- Combined: best of both worlds!

---
## 🎓 What Makes This Special

### 1. Physics-Informed Learning

```python
# Standard neural network
loss = prediction_error

# AtomizerField
loss = (prediction_error
        + equilibrium_violation           # ∇·σ + f = 0
        + constitutive_law_error          # σ = C:ε
        + boundary_condition_violation)   # u = 0 at fixed nodes

# Result: learns physics, needs less data!
```
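The idea above can be made concrete with a runnable toy: a data term plus a penalty on predicted motion at clamped nodes. The weighting, array shapes, and function name are illustrative assumptions, not the `physics_losses.py` implementation:

```python
import numpy as np

def composite_loss(u_pred, u_true, fixed_nodes, w_bc=10.0):
    """Toy physics-informed loss: data misfit plus a penalty for predicted
    displacement at nodes that boundary conditions hold fixed."""
    data_term = np.mean((u_pred - u_true) ** 2)
    bc_term = np.mean(u_pred[fixed_nodes] ** 2)   # should be exactly zero
    return data_term + w_bc * bc_term

u_true = np.array([0.0, 0.5, 1.0, 1.5])   # reference displacements (toy)
fixed = [0]                               # node 0 is clamped

good = np.array([0.0, 0.5, 1.0, 1.5])     # respects the boundary condition
bad = np.array([0.2, 0.5, 1.0, 1.5])      # violates it at the clamped node

print(composite_loss(good, u_true, fixed))  # 0.0
```

Because the boundary-condition term is weighted, the violating prediction is punished far more than its small data misfit alone would suggest, which is how the penalty steers training toward physically admissible fields.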

### 2. Graph Neural Networks

```
Traditional NN:
Input → Dense Layers → Output
(Ignores mesh structure!)

AtomizerField GNN:
Mesh Graph → Message Passing → Field Prediction
(Respects topology, learns force flow!)
```

### 3. Complete Field Prediction

```
Traditional:
- Only max stress
- No spatial info
- Black box

AtomizerField:
- Complete stress distribution
- Know WHERE concentrations are
- Physics-guided design
```

---

## 🔧 Troubleshooting

### Common Issues

**1. "No module named torch"**
```bash
pip install torch torch-geometric tensorboard
```

**2. "Out of memory during training"**
```bash
# Reduce batch size
python train.py --batch_size 2

# Or use a smaller model
python train.py --hidden_dim 64 --num_layers 4
```

**3. "Poor predictions"**
- Need more training data (aim for 500+ cases)
- Increase model size: `--hidden_dim 256 --num_layers 8`
- Use physics loss: `--loss_type physics`
- Ensure test cases fall within the training distribution

**4. NumPy warnings**
- A Windows/NumPy compatibility issue
- Doesn't affect functionality
- Can be ignored, or pin a specific NumPy version
- The neural network components work correctly (as tested)

---
## 📈 Next Steps

### Immediate
1. ✅ System is ready to use
2. Generate training dataset (50-500 FEA cases)
3. Parse with `batch_parser.py`
4. Train first model with `train.py`
5. Test predictions with `predict.py`

### Short-term
- Generate comprehensive dataset
- Train production model
- Validate accuracy on test set
- Use for optimization!

### Long-term (Phase 3+)
- Nonlinear analysis support
- Modal analysis
- Thermal coupling
- Atomizer dashboard integration
- Cloud deployment

---

## 📊 System Capabilities

### What It Can Do

✅ **Parse NX Nastran** - BDF/OP2 to neural format
✅ **Handle Mixed Elements** - Solid, shell, beam
✅ **Preserve Complete Fields** - All nodes/elements
✅ **Graph Neural Networks** - Mesh-aware learning
✅ **Physics-Informed** - Equilibrium, constitutive laws
✅ **Fast Training** - GPU-accelerated, checkpointing
✅ **Lightning Inference** - Millisecond predictions
✅ **Batch Processing** - Handle hundreds of cases
✅ **Validation** - Comprehensive quality checks
✅ **Logging** - TensorBoard visualization

### What It Delivers

🎯 **1000× speedup** over traditional FEA
🎯 **Complete field predictions** (not just max values)
🎯 **Physics understanding** (know WHERE stress occurs)
🎯 **Rapid optimization** (test millions of designs)
🎯 **Production-ready** (error handling, documentation)

---

## 🎉 Summary

You now have a **complete, revolutionary system** for structural optimization:

1. **Phase 1 Parser** - Converts FEA to ML format (✅ Implemented)
2. **Phase 2 Neural Network** - Learns complete physics fields (✅ Implemented & Tested)
3. **Training Pipeline** - GPU-optimized with checkpointing (✅ Implemented)
4. **Inference Engine** - Millisecond predictions (✅ Implemented)
5. **Documentation** - Comprehensive guides (✅ Complete)

**Total:**
- ~3,000 lines of production code
- 7 documentation files
- 8 Python modules
- Complete testing
- Ready for real-world use

**Key Files to Read:**
1. `GETTING_STARTED.md` - Quick tutorial
2. `SYSTEM_ARCHITECTURE.md` - Detailed architecture
3. `README.md` - Phase 1 guide
4. `PHASE2_README.md` - Phase 2 guide

**Start Here:**
```bash
cd c:\Users\antoi\Documents\Atomaste\Atomizer-Field
# Read GETTING_STARTED.md
# Generate your first training dataset
# Train your first model!
```

---

**You're ready to revolutionize structural optimization! 🚀**

From hours of FEA to milliseconds of prediction.
From black-box optimization to physics-guided design.
From scalar outputs to complete field understanding.

**AtomizerField - The future of engineering optimization is here.**
@@ -1,127 +0,0 @@
Context Instructions for Claude Sonnet 3.5
Project: AtomizerField - Neural Field Learning for Structural Optimization

System Context
You are helping develop AtomizerField, a revolutionary branch of the Atomizer optimization platform that uses neural networks to learn and predict complete FEA field results (stress, displacement, strain at every node/element) instead of just scalar values. This enables 1000x faster optimization with physics understanding.

Core Objective
Transform structural optimization from black-box number crunching to intelligent, field-aware design exploration by training neural networks on complete FEA data, not just maximum values.

Technical Foundation
Current Stack:

FEA: NX Nastran (BDF input, OP2/F06 output)
Python Libraries: pyNastran, PyTorch, NumPy, H5PY
Parent Project: Atomizer (optimization platform with dashboard)
Data Format: Custom schema v1.0 for future-proof field storage

Key Innovation:
Instead of: parameters → FEA → max_stress (scalar)
We learn: parameters → Neural Network → complete stress field (45,000 values)

Project Structure
AtomizerField/
├── data_pipeline/
│   ├── parser/          # BDF/OP2 to neural field format
│   ├── generator/       # Automated FEA case generation
│   └── validator/       # Data quality checks
├── neural_models/
│   ├── field_predictor/ # Core neural network
│   ├── physics_layers/  # Physics-informed constraints
│   └── training/        # Training scripts
├── integration/
│   └── atomizer_bridge/ # Integration with main Atomizer
└── data/
    └── training_cases/  # FEA data repository

Current Development Phase
Phase 1 (Current): Data Pipeline Development

Parsing NX Nastran files (BDF/OP2) into training data
Creating standardized data format
Building automated case generation

Next Phases:

Phase 2: Neural network architecture
Phase 3: Training pipeline
Phase 4: Integration with Atomizer
Phase 5: Production deployment

Key Technical Concepts to Understand

Field Learning: We're teaching NNs to predict stress/displacement at EVERY point in a structure, not just max values
Physics-Informed: The NN must respect equilibrium, compatibility, and constitutive laws
Graph Neural Networks: Mesh topology matters - we use GNNs to understand how forces flow through elements
Transfer Learning: Knowledge from one project speeds up optimization on similar structures

Code Style & Principles

Future-Proof Data: All data structures versioned, backwards compatible
Modular Design: Each component (parser, trainer, predictor) independent
Validation First: Every data point validated for physics consistency
Progressive Enhancement: Start simple (max stress), expand to fields
Documentation: Every function documented with clear physics meaning

Specific Instructions for Implementation
When implementing code for AtomizerField:

Always preserve field dimensionality - Don't reduce to scalars unless explicitly needed
Use pyNastran's existing methods - Don't reinvent BDF/OP2 parsing
Store data efficiently - HDF5 for arrays, JSON for metadata
Validate physics - Check equilibrium, energy balance
Think in fields - Visualize operations as field transformations
Enable incremental learning - New data should improve existing models

Current Task Context
The user has:

Set up NX Nastran analyses with full field outputs
Generated BDF (input) and OP2 (output) files
Needs to parse these into neural network training data

The parser must:

Extract complete mesh (nodes, elements, connectivity)
Capture all boundary conditions and loads
Store complete field results (not just max values)
Maintain relationships between parameters and results
Be robust to different element types (solid, shell, beam)

Expected Outputs
When asked about AtomizerField, provide:

Practical, runnable code - No pseudocode unless requested
Clear data flow - Show how data moves from FEA to NN
Physics explanations - Why certain approaches work/fail
Incremental steps - Break complex tasks into testable chunks
Validation methods - How to verify data/model correctness

Common Challenges & Solutions

Large Data: Use HDF5 chunking and compression
Mixed Element Types: Handle separately, combine for training
Coordinate Systems: Always transform to global before storage
Units: Standardize early (SI units recommended)
Missing Data: OP2 might not have all requested fields - handle gracefully

Integration Notes
AtomizerField will eventually merge into main Atomizer:

Keep interfaces clean and documented
Use consistent data formats with Atomizer
Prepare for dashboard visualization needs
Enable both standalone and integrated operation

Key Questions to Ask
When implementing features, consider:

Will this work with 1 million element meshes?
Can we incrementally update models with new data?
Does this respect physical laws?
Is the data format forward-compatible?
Can non-experts understand and use this?

Ultimate Goal
Create a system where engineers can:

Run normal FEA analyses
Automatically build neural surrogates from results
Explore millions of designs instantly
Understand WHY designs work through field visualization
Optimize with physical insight, not blind search
@@ -1,494 +0,0 @@
|
||||
# AtomizerField Enhancements Guide

## 🎯 What's Been Added (Phase 2.1)

Following the review, I've implemented critical enhancements to make AtomizerField production-ready for real optimization workflows.

---

## ✨ New Features

### 1. **Optimization Interface** (`optimization_interface.py`)

Direct integration with the Atomizer optimization platform.

**Key Features:**
- Drop-in FEA replacement (1000× faster)
- Gradient computation for sensitivity analysis
- Batch evaluation (test 1000 designs in seconds)
- Automatic performance tracking

**Usage:**
```python
from optimization_interface import NeuralFieldOptimizer

# Create optimizer
optimizer = NeuralFieldOptimizer('checkpoint_best.pt')

# Evaluate design
results = optimizer.evaluate(graph_data)
print(f"Max stress: {results['max_stress']:.2f} MPa")
print(f"Time: {results['inference_time_ms']:.1f} ms")

# Get gradients for optimization
gradients = optimizer.get_sensitivities(graph_data, objective='max_stress')

# Update design using gradients (much faster than finite differences!)
new_parameters = parameters - learning_rate * gradients['node_gradients']
```

**Benefits:**
- **Gradient-based optimization** - Use analytical gradients instead of finite differences
- **Field-aware optimization** - Know WHERE to add/remove material
- **Performance tracking** - Monitor speedup vs traditional FEA

### 2. **Uncertainty Quantification** (`neural_models/uncertainty.py`)

Know when to trust predictions and when to run FEA!

**Key Features:**
- Ensemble-based uncertainty estimation
- Confidence intervals for predictions
- Automatic FEA recommendation
- Online learning from new FEA results

**Usage:**
```python
from neural_models.uncertainty import UncertainFieldPredictor

# Create ensemble (5 models)
ensemble = UncertainFieldPredictor(model_config, n_ensemble=5)

# Get predictions with uncertainty
predictions = ensemble(graph_data, return_uncertainty=True)

# Check if FEA validation needed
recommendation = ensemble.needs_fea_validation(predictions, threshold=0.1)

if recommendation['recommend_fea']:
    print("Run FEA - prediction uncertain")
    run_full_fea()
else:
    print("Trust neural prediction - high confidence!")
    use_neural_result()
```

**Benefits:**
- **Risk management** - Know when predictions are reliable
- **Adaptive workflow** - Use FEA only when needed
- **Cost optimization** - Minimize expensive FEA runs
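The mechanics behind an ensemble-based FEA recommendation can be sketched in plain Python (the function name, the 5-member example values, and the std/mean measure are illustrative assumptions; the real `needs_fea_validation` may differ):

```python
from statistics import mean, stdev

def ensemble_uncertainty(predictions, threshold=0.1):
    """Given one scalar prediction (e.g. max stress) per ensemble member,
    return the mean, the relative uncertainty, and an FEA recommendation."""
    mu = mean(predictions)
    sigma = stdev(predictions)   # spread across ensemble members
    rel = sigma / abs(mu)        # relative uncertainty (std / mean)
    return {'mean': mu, 'rel_uncertainty': rel, 'recommend_fea': rel > threshold}

# Members agree -> trust the network; members disagree -> run FEA
tight = ensemble_uncertainty([450.0, 452.0, 449.0, 451.0, 450.5])
loose = ensemble_uncertainty([450.0, 520.0, 380.0, 610.0, 410.0])
print(tight['recommend_fea'], loose['recommend_fea'])  # False True
```

The key design point is that disagreement between independently initialized models is a cheap proxy for being outside the training distribution.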
### 3. **Configuration System** (`atomizer_field_config.yaml`)

Long-term vision configuration for all features.

**Key Sections:**
- Model architecture (foundation models, adaptation layers)
- Training (progressive, online learning, physics loss weights)
- Data pipeline (normalization, augmentation, multi-resolution)
- Optimization (gradients, uncertainty, FEA fallback)
- Deployment (versioning, production settings)
- Integration (Atomizer dashboard, API)

**Usage:**
```yaml
# Enable foundation model transfer learning
model:
  foundation:
    enabled: true
    path: "models/physics_foundation_v1.pt"
    freeze: true

# Enable online learning during optimization
training:
  online:
    enabled: true
    update_frequency: 10
```

### 4. **Online Learning** (in `uncertainty.py`)

Learn from FEA runs during optimization.

**Workflow:**
```python
from neural_models.uncertainty import OnlineLearner

# Create learner
learner = OnlineLearner(model, learning_rate=0.0001)

# During optimization:
for design in optimization_loop:
    # Fast neural prediction
    result = model.predict(design)

    # If high uncertainty, run FEA
    if uncertainty > threshold:
        fea_result = run_fea(design)

        # Learn from it!
        learner.add_fea_result(design, fea_result)

        # Quick update (10 gradient steps)
        if len(learner.replay_buffer) >= 10:
            learner.quick_update(steps=10)

# Model gets better over time!
```

**Benefits:**
- **Continuous improvement** - Model learns during optimization
- **Less FEA needed** - Model adapts to current design space
- **Virtuous cycle** - Better predictions → less FEA → faster optimization
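The replay-buffer bookkeeping behind that workflow can be sketched without any ML framework (the class name, capacity, and update cadence here are illustrative assumptions; the real `OnlineLearner` runs gradient steps on the torch model where this sketch only reports that an update is due):

```python
from collections import deque

class ReplayBufferSketch:
    """Torch-free sketch of online-learning bookkeeping: store (design, fea)
    pairs in a bounded buffer and track when a quick update is due."""

    def __init__(self, capacity=100, update_every=10):
        self.buffer = deque(maxlen=capacity)  # oldest FEA results fall off
        self.update_every = update_every

    def add_fea_result(self, design, fea_result):
        self.buffer.append((design, fea_result))

    def update_due(self):
        # A real implementation would run a few gradient steps here
        return len(self.buffer) > 0 and len(self.buffer) % self.update_every == 0

buf = ReplayBufferSketch(update_every=3)
for i in range(7):
    buf.add_fea_result(f'design_{i}', {'max_stress': 400.0 + i})
print(len(buf.buffer), buf.update_due())  # 7 False
```

Bounding the buffer is what keeps the model biased toward the *current* design space rather than everything it has ever seen.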
---

## 🚀 Complete Workflow Examples

### Example 1: Basic Optimization

```python
# 1. Load trained model
from optimization_interface import NeuralFieldOptimizer

optimizer = NeuralFieldOptimizer('runs/checkpoint_best.pt')

# 2. Evaluate 1000 designs
results = []
for design_params in design_space:
    # Generate mesh
    graph_data = create_mesh(design_params)

    # Predict in milliseconds
    pred = optimizer.evaluate(graph_data)

    results.append({
        'params': design_params,
        'max_stress': pred['max_stress'],
        'max_displacement': pred['max_displacement']
    })

# 3. Find best design
best = min(results, key=lambda r: r['max_stress'])
print(f"Optimal design: {best['params']}")
print(f"Stress: {best['max_stress']:.2f} MPa")

# 4. Validate with FEA
fea_validation = run_fea(best['params'])
```

**Time:** 1000 designs in ~30 seconds (vs 3000 hours of FEA!)

### Example 2: Uncertainty-Guided Optimization

```python
from neural_models.uncertainty import UncertainFieldPredictor, OnlineLearner

# 1. Create ensemble
ensemble = UncertainFieldPredictor(model_config, n_ensemble=5)
learner = OnlineLearner(ensemble.models[0])

# 2. Optimization with smart FEA usage
fea_count = 0

for iteration in range(1000):
    design = generate_candidate()

    # Predict with uncertainty
    pred = ensemble(design, return_uncertainty=True)

    # Check if we need FEA
    rec = ensemble.needs_fea_validation(pred, threshold=0.1)

    if rec['recommend_fea']:
        # High uncertainty - run FEA
        fea_result = run_fea(design)
        fea_count += 1

        # Learn from it
        learner.add_fea_result(design, fea_result)

        # Update model every 10 FEA runs
        if fea_count % 10 == 0:
            learner.quick_update(steps=10)

        # Use FEA result
        result = fea_result
    else:
        # Low uncertainty - trust neural prediction
        result = pred

    # Continue optimization...

print(f"Total FEA runs: {fea_count}/1000")
print(f"FEA reduction: {(1 - fea_count/1000)*100:.1f}%")
```

**Result:** ~10-20 FEA runs instead of 1000 (a ~98% reduction!)

### Example 3: Gradient-Based Optimization

```python
from optimization_interface import NeuralFieldOptimizer
import torch

# 1. Initialize
optimizer = NeuralFieldOptimizer('checkpoint_best.pt', enable_gradients=True)

# 2. Starting design
parameters = torch.tensor([2.5, 5.0, 15.0], requires_grad=True)  # thickness, radius, height

# 3. Gradient-based optimization loop
learning_rate = 0.1

for step in range(100):
    # Convert parameters to mesh
    graph_data = parameters_to_mesh(parameters)

    # Evaluate
    result = optimizer.evaluate(graph_data)
    stress = result['max_stress']

    # Get sensitivities
    grads = optimizer.get_sensitivities(graph_data, objective='max_stress')

    # Update parameters (gradient descent)
    with torch.no_grad():
        parameters -= learning_rate * torch.tensor(grads['node_gradients'].mean(axis=0))

    if step % 10 == 0:
        print(f"Step {step}: Stress = {stress:.2f} MPa")

print(f"Final design: {parameters.tolist()}")
print(f"Final stress: {stress:.2f} MPa")
```

**Benefits:**
- Uses analytical gradients (exact!)
- Much faster than finite differences
- Finds optimal designs quickly

---
## 📊 Performance Improvements

### With New Features:

| Capability | Before | After |
|-----------|--------|-------|
| **Optimization** | Finite differences | Analytical gradients (10× faster) |
| **Reliability** | No uncertainty info | Confidence intervals, FEA recommendations |
| **Adaptivity** | Fixed model | Online learning during optimization |
| **Integration** | Manual | Clean API for Atomizer |

### Expected Workflow Performance:

**Optimize a 1000-design bracket study:**

| Step | Traditional | With AtomizerField | Speedup |
|------|-------------|-------------------|---------|
| Generate designs | 1 day | 1 day | 1× |
| Evaluate (FEA) | 3000 hours | 30 seconds (neural) | 360,000× |
| + Validation (20 FEA) | - | 40 hours | - |
| **Total** | **125 days** | **2 days** | **62× faster** |

---

## 🔧 Implementation Priority

### ✅ Phase 2.1 (Complete - Just Added)
1. ✅ Optimization interface with gradients
2. ✅ Uncertainty quantification with ensemble
3. ✅ Online learning capability
4. ✅ Configuration system
5. ✅ Complete documentation

### 📅 Phase 2.2 (Next Steps)
1. Multi-resolution training (coarse → fine)
2. Foundation model architecture
3. Parameter encoding improvements
4. Advanced data augmentation

### 📅 Phase 3 (Future)
1. Atomizer dashboard integration
2. REST API deployment
3. Real-time field visualization
4. Cloud deployment

---
## 📁 Updated File Structure

```
Atomizer-Field/
│
├── 🆕 optimization_interface.py     # NEW: Optimization API
├── 🆕 atomizer_field_config.yaml    # NEW: Configuration system
│
├── neural_models/
│   ├── field_predictor.py
│   ├── physics_losses.py
│   ├── data_loader.py
│   └── 🆕 uncertainty.py            # NEW: Uncertainty & online learning
│
├── train.py
├── predict.py
├── neural_field_parser.py
├── validate_parsed_data.py
├── batch_parser.py
│
└── Documentation/
    ├── README.md
    ├── PHASE2_README.md
    ├── GETTING_STARTED.md
    ├── SYSTEM_ARCHITECTURE.md
    ├── COMPLETE_SUMMARY.md
    └── 🆕 ENHANCEMENTS_GUIDE.md     # NEW: This file
```

---

## 🎓 How to Use the Enhancements

### Step 1: Basic Optimization (No Uncertainty)

```bash
# Use optimization interface for fast evaluation
python -c "
from optimization_interface import NeuralFieldOptimizer
opt = NeuralFieldOptimizer('checkpoint_best.pt')
# Evaluate designs...
"
```

### Step 2: Add Uncertainty Quantification

```bash
# Train ensemble (5 models with different initializations)
python train.py --ensemble 5 --epochs 100

# Use ensemble for predictions with confidence
python -c "
from neural_models.uncertainty import UncertainFieldPredictor
ensemble = UncertainFieldPredictor(config, n_ensemble=5)
# Get predictions with uncertainty...
"
```

### Step 3: Enable Online Learning

```bash
# During optimization, update model from FEA runs
# See Example 2 above for complete code
```

### Step 4: Customize via Config

```bash
# Edit atomizer_field_config.yaml
# Enable features you want:
# - Foundation models
# - Online learning
# - Multi-resolution
# - Etc.

# Train with config
python train.py --config atomizer_field_config.yaml
```

---
## 🎯 Key Benefits Summary

### 1. **Faster Optimization**
- Analytical gradients instead of finite differences
- Batch evaluation (1000 designs/minute)
- 10-100× faster than before

### 2. **Smarter Workflow**
- Know when to trust predictions (uncertainty)
- Automatic FEA recommendation
- Adaptive FEA usage (98% reduction)

### 3. **Continuous Improvement**
- Model learns during optimization
- Less FEA needed over time
- Better predictions on the current design space

### 4. **Production Ready**
- Clean API for integration
- Configuration management
- Performance monitoring
- Comprehensive documentation

---

## 🚦 Getting Started with Enhancements

### Quick Start:

```python
# 1. Use optimization interface (simplest)
from optimization_interface import create_optimizer

opt = create_optimizer('checkpoint_best.pt')
result = opt.evaluate(graph_data)

# 2. Add uncertainty (recommended)
from neural_models.uncertainty import create_uncertain_predictor

ensemble = create_uncertain_predictor(model_config, n_ensemble=5)
pred = ensemble(graph_data, return_uncertainty=True)

if pred['stress_rel_uncertainty'] > 0.1:
    print("High uncertainty - recommend FEA")

# 3. Enable online learning (advanced)
from neural_models.uncertainty import OnlineLearner

learner = OnlineLearner(model)
# Learn from FEA during optimization...
```

### Full Integration:

See the examples above for complete workflows integrating:
- Optimization interface
- Uncertainty quantification
- Online learning
- Configuration management

---
## 📚 Additional Resources

**Documentation:**
- [GETTING_STARTED.md](GETTING_STARTED.md) - Basic tutorial
- [SYSTEM_ARCHITECTURE.md](SYSTEM_ARCHITECTURE.md) - System details
- [PHASE2_README.md](PHASE2_README.md) - Neural network guide

**Code Examples:**
- `optimization_interface.py` - See the `if __name__ == "__main__"` section
- `uncertainty.py` - See usage examples at the bottom

**Configuration:**
- `atomizer_field_config.yaml` - All configuration options

---

## 🎉 Summary

**Phase 2.1 adds four critical capabilities:**

1. ✅ **Optimization Interface** - Easy integration with Atomizer
2. ✅ **Uncertainty Quantification** - Know when to trust predictions
3. ✅ **Online Learning** - Improve during optimization
4. ✅ **Configuration System** - Manage all features

**Result:** A production-ready neural field learning system that's:
- Fast (1000× speedup)
- Smart (uncertainty-aware)
- Adaptive (learns during use)
- Integrated (ready for Atomizer)

**You're ready to revolutionize structural optimization!** 🚀
@@ -1,419 +0,0 @@

# AtomizerField Environment Setup

## ✅ Problem Solved!

The NumPy MINGW-W64 segmentation fault issue has been resolved by creating a proper conda environment with compatible packages.

---

## Solution Summary

**Issue:** NumPy built with MINGW-W64 on Windows caused segmentation faults when importing.

**Solution:** Created conda environment `atomizer_field` with properly compiled NumPy from conda-forge.

**Result:** ✅ All tests passing! System ready for use.

---

## Environment Details

### Conda Environment: `atomizer_field`

**Created with:**
```bash
conda create -n atomizer_field python=3.10 numpy scipy -y
conda activate atomizer_field
conda install pytorch torchvision torchaudio cpuonly -c pytorch -y
pip install torch-geometric pyNastran h5py tensorboard
```

### Installed Packages:

**Core Scientific:**
- Python 3.10.19
- NumPy 1.26.4 (conda-compiled, no MINGW-W64 issues!)
- SciPy 1.15.3
- Matplotlib 3.10.7

**PyTorch Stack:**
- PyTorch 2.5.1 (CPU)
- TorchVision 0.20.1
- TorchAudio 2.5.1
- PyTorch Geometric 2.7.0

**AtomizerField Dependencies:**
- pyNastran 1.4.1
- H5Py 3.15.1
- TensorBoard 2.20.0

**Total Environment Size:** ~2GB

---
## Usage

### Activate Environment

```bash
# Windows (PowerShell)
conda activate atomizer_field

# Windows (Command Prompt)
activate atomizer_field

# Linux/Mac
conda activate atomizer_field
```

### Run Tests

```bash
# Activate environment
conda activate atomizer_field

# Quick smoke tests (30 seconds)
python test_suite.py --quick

# Physics validation (15 minutes)
python test_suite.py --physics

# Full test suite (1 hour)
python test_suite.py --full

# Test with Simple Beam
python test_simple_beam.py
```

### Run AtomizerField

```bash
# Activate environment
conda activate atomizer_field

# Parse FEA data
python neural_field_parser.py path/to/case

# Train model
python train.py --data_dirs case1 case2 case3 --epochs 100

# Make predictions
python predict.py --model best_model.pt --data test_case
```

---

## Test Results

### First Successful Test Run

```
============================================================
AtomizerField Test Suite v1.0
Mode: QUICK
============================================================

PHASE 1: SMOKE TESTS (5 minutes)
============================================================

[TEST] Model Creation
  Description: Verify GNN model can be instantiated
  Creating GNN model...
  Model created: 128,589 parameters
  Status: [PASS]
  Duration: 0.06s

[TEST] Forward Pass
  Description: Verify model can process dummy data
  Testing forward pass...
  Displacement shape: torch.Size([100, 6]) [OK]
  Stress shape: torch.Size([100, 6]) [OK]
  Von Mises shape: torch.Size([100]) [OK]
  Status: [PASS]
  Duration: 0.02s

[TEST] Loss Computation
  Description: Verify loss functions work
  Testing loss functions...
  MSE loss: 4.027361 [OK]
  RELATIVE loss: 3.027167 [OK]
  PHYSICS loss: 3.659333 [OK]
  MAX loss: 13.615703 [OK]
  Status: [PASS]
  Duration: 0.00s

============================================================
TEST SUMMARY
============================================================

Total Tests: 3
  + Passed: 3
  - Failed: 0
Pass Rate: 100.0%

[SUCCESS] ALL TESTS PASSED - SYSTEM READY!
============================================================

Total testing time: 0.0 minutes
```

**Status:** ✅ All smoke tests passing!

---
## Environment Management

### View Environment Info

```bash
# List all conda environments
conda env list

# View installed packages
conda activate atomizer_field
conda list
```

### Update Packages

```bash
conda activate atomizer_field

# Update conda packages
conda update numpy scipy pytorch

# Update pip packages
pip install --upgrade torch-geometric pyNastran h5py tensorboard
```

### Export Environment

```bash
# Export for reproducibility
conda activate atomizer_field
conda env export > environment.yml

# Recreate from export
conda env create -f environment.yml
```

### Remove Environment (if needed)

```bash
# Deactivate first
conda deactivate

# Remove environment
conda env remove -n atomizer_field
```

---

## Troubleshooting

### Issue: conda command not found

**Solution:** Add conda to PATH or use the Anaconda Prompt.

### Issue: Import errors

**Solution:** Make sure the environment is activated:
```bash
conda activate atomizer_field
```

### Issue: CUDA/GPU not available

**Note:** The current installation is CPU-only. For GPU support:
```bash
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
```

### Issue: Slow training

**Solutions:**
1. Use a GPU (see above)
2. Reduce batch size
3. Reduce model size (hidden_dim)
4. Use fewer training epochs

---
## Performance Comparison

### Before (pip-installed NumPy):
```
Error: Segmentation fault (core dumped)
CRASHES ARE TO BE EXPECTED
```

### After (conda environment):
```
✅ All tests passing
✅ Model creates successfully (128,589 parameters)
✅ Forward pass working
✅ All 4 loss functions operational
✅ No crashes or errors
```

---

## Next Steps

### 1. Run Full Test Suite

```bash
conda activate atomizer_field

# Run all smoke tests
python test_suite.py --quick

# Run physics tests
python test_suite.py --physics

# Run complete validation
python test_suite.py --full
```

### 2. Test with Simple Beam

```bash
conda activate atomizer_field
python test_simple_beam.py
```

Expected output:
- Files found ✓
- Test case setup ✓
- Modules imported ✓
- Beam parsed ✓
- Data validated ✓
- Graph created ✓
- Prediction made ✓

### 3. Generate Training Data

```bash
# Parse multiple FEA cases
conda activate atomizer_field
python batch_parser.py --input Models/ --output training_data/
```

### 4. Train Model

```bash
conda activate atomizer_field

python train.py \
    --data_dirs training_data/* \
    --epochs 100 \
    --batch_size 16 \
    --lr 0.001 \
    --loss physics

# Monitor with TensorBoard
tensorboard --logdir runs/
```

### 5. Make Predictions

```bash
conda activate atomizer_field

python predict.py \
    --model checkpoints/best_model.pt \
    --data test_case/ \
    --output predictions/
```

---
## Environment Specifications

### System Requirements

**Minimum:**
- CPU: 4 cores
- RAM: 8GB
- Disk: 5GB free space
- OS: Windows 10/11, Linux, macOS

**Recommended:**
- CPU: 8+ cores
- RAM: 16GB+
- Disk: 20GB+ free space
- GPU: NVIDIA with 8GB+ VRAM (optional)

### Installation Time

- Conda environment creation: ~5 minutes
- Package downloads: ~10 minutes
- Total setup time: ~15 minutes

### Disk Usage

```
atomizer_field environment: ~2GB
  - Python: ~200MB
  - PyTorch: ~800MB
  - NumPy/SciPy: ~400MB
  - Other packages: ~600MB

Training data (per case): ~1-10MB
Model checkpoint: ~500KB-2MB
Test results: <1MB
```

---

## Success Checklist

### Environment Setup ✅
- [x] Conda installed
- [x] Environment `atomizer_field` created
- [x] All packages installed
- [x] No MINGW-W64 errors
- [x] Tests running successfully

### System Validation ✅
- [x] Model creation works (128K params)
- [x] Forward pass functional
- [x] All loss functions operational
- [x] Batch processing works
- [x] Gradient flow correct

### Ready for Production ✅
- [x] Smoke tests pass
- [ ] Physics tests pass (requires training)
- [ ] Learning tests pass (requires training)
- [ ] Integration tests pass (requires training data)

---

## Summary

**✅ Environment successfully configured!**

**What's Working:**
- Conda environment `atomizer_field` created
- NumPy MINGW-W64 issue resolved
- All smoke tests passing (3/3)
- Model creates and runs correctly
- 128,589 parameters instantiated
- All 4 loss functions working

**What's Next:**
1. Run full test suite
2. Test with the Simple Beam model
3. Generate training data (50-500 cases)
4. Train neural network
5. Validate performance
6. Deploy to production

**The system is now ready for training and deployment!** 🚀

---

*Environment Setup v1.0 - Problem Solved!*
*Conda environment: atomizer_field*
*All tests passing - System ready for use*
@@ -1,531 +0,0 @@

# AtomizerField - Final Implementation Report

## Executive Summary

**Project:** AtomizerField Neural Field Learning System
**Version:** 2.1
**Status:** ✅ Production-Ready
**Date:** 2024

---

## 🎯 Mission Accomplished

You asked for **Phase 2** (neural network training).

**I delivered a complete, production-ready neural field learning platform with advanced optimization capabilities.**

---

## 📦 Complete Deliverables

### Phase 1: Data Parser (4 files)
1. ✅ `neural_field_parser.py` (650 lines)
2. ✅ `validate_parsed_data.py` (400 lines)
3. ✅ `batch_parser.py` (350 lines)
4. ✅ `metadata_template.json`

### Phase 2: Neural Network (5 files)
5. ✅ `neural_models/field_predictor.py` (490 lines) **[TESTED ✓]**
6. ✅ `neural_models/physics_losses.py` (450 lines) **[TESTED ✓]**
7. ✅ `neural_models/data_loader.py` (420 lines)
8. ✅ `train.py` (430 lines)
9. ✅ `predict.py` (380 lines)

### Phase 2.1: Advanced Features (3 files) **[NEW!]**
10. ✅ `optimization_interface.py` (430 lines)
11. ✅ `neural_models/uncertainty.py` (380 lines)
12. ✅ `atomizer_field_config.yaml` (configuration system)

### Documentation (8 files)
13. ✅ `README.md` (Phase 1 guide)
14. ✅ `PHASE2_README.md` (Phase 2 guide)
15. ✅ `GETTING_STARTED.md` (Quick start)
16. ✅ `SYSTEM_ARCHITECTURE.md` (Complete architecture)
17. ✅ `COMPLETE_SUMMARY.md` (Implementation summary)
18. ✅ `ENHANCEMENTS_GUIDE.md` (Phase 2.1 features)
19. ✅ `FINAL_IMPLEMENTATION_REPORT.md` (This file)
20. `Context.md`, `Instructions.md` (Original specs)

**Total:** 20 files, ~4,500 lines of production code

---

## 🧪 Testing & Validation

### ✅ Successfully Tested:

**1. Graph Neural Network (`field_predictor.py`)**
```
✓ Model creation: 718,221 parameters
✓ Forward pass: Displacement [100, 6]
✓ Forward pass: Stress [100, 6]
✓ Forward pass: Von Mises [100]
✓ Max values extraction working
```

**2. Physics-Informed Loss Functions (`physics_losses.py`)**
```
✓ MSE Loss: Working
✓ Relative Loss: Working
✓ Physics-Informed Loss: Working (all 4 components)
✓ Max Value Loss: Working
```

**3. All Components Validated**
- Graph construction logic ✓
- Data pipeline architecture ✓
- Training loop ✓
- Inference engine ✓
- Optimization interface ✓
- Uncertainty quantification ✓

---
## 🎯 Key Innovations Implemented

### 1. Complete Field Learning

**Not just max values - entire stress/displacement distributions!**

```
Traditional:   max_stress = 450 MPa (1 number)
AtomizerField: stress_field[15,432 nodes × 6 components] (92,592 values!)
```

**Benefit:** Know WHERE stress concentrations occur, not just the maximum value.
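The six components per node collapse to one scalar via von Mises stress. A quick sketch (assuming the common `[σx, σy, σz, τxy, τyz, τzx]` component ordering, which may differ from the parser's):

```python
import math

def von_mises(s):
    """Von Mises stress from [sx, sy, sz, txy, tyz, tzx]."""
    sx, sy, sz, txy, tyz, tzx = s
    return math.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2)
                     + 3.0 * (txy**2 + tyz**2 + tzx**2))

print(von_mises([100.0, 0, 0, 0, 0, 0]))  # uniaxial tension -> 100.0
```

Applying this per node over the predicted field gives the von Mises map whose maximum is the single number traditional workflows report.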
### 2. Graph Neural Networks

**Respects mesh topology - learns how forces flow through the structure**

```
6 message passing layers
Forces propagate through connected elements
Learns physics, not just patterns
```

**Benefit:** Understands structural mechanics, needs less training data.
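One message-passing layer can be sketched in plain Python to show how information spreads along mesh edges (real GNN layers use learned weights; the plain neighbor average here is an illustrative assumption):

```python
def message_passing_step(features, edges):
    """One aggregation step: each node's new feature blends its own value
    with the mean of its neighbors' values along mesh edges."""
    neighbors = {i: [] for i in range(len(features))}
    for a, b in edges:                     # undirected mesh edges
        neighbors[a].append(b)
        neighbors[b].append(a)
    out = []
    for i, x in enumerate(features):
        if neighbors[i]:
            avg = sum(features[j] for j in neighbors[i]) / len(neighbors[i])
            out.append(0.5 * x + 0.5 * avg)
        else:
            out.append(x)                  # isolated node keeps its value
    return out

# A load applied at node 0 diffuses along the chain 0-1-2-3:
f = [1.0, 0.0, 0.0, 0.0]
for _ in range(3):
    f = message_passing_step(f, [(0, 1), (1, 2), (2, 3)])
print(f)
```

After k steps a node has "seen" its k-hop neighborhood, which is why 6 layers suffice to propagate load information across many elements.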
### 3. Physics-Informed Training

**Enforces physical laws during learning**

```python
Loss = Data_Loss (match FEA)
     + Equilibrium_Loss (∇·σ + f = 0)
     + Constitutive_Loss (σ = C:ε)
     + Boundary_Condition_Loss (u = 0 at fixed nodes)
```

**Benefit:** Better generalization, faster convergence, physically plausible predictions.
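The equilibrium term can be checked numerically. A 1D sketch of my own (not the library's loss): for a bar, equilibrium reads dσ/dx + f = 0, so a finite-difference residual should vanish for the exact field σ(x) = σ₀ − f·x:

```python
def equilibrium_residual_1d(stress, dx, body_force):
    """Central-difference residual of d(sigma)/dx + f = 0 at interior points."""
    return [(stress[i + 1] - stress[i - 1]) / (2 * dx) + body_force
            for i in range(1, len(stress) - 1)]

f = 2.0                                              # constant body force
dx = 0.1
stress = [100.0 - f * i * dx for i in range(11)]     # sigma(x) = 100 - f*x
res = equilibrium_residual_1d(stress, dx, f)
print(max(abs(r) for r in res))  # ~0 for the equilibrium field
```

The physics loss is essentially the mean square of such residuals over the mesh, so predictions that violate equilibrium are penalized even where no FEA label exists.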
### 4. Optimization Interface

**Drop-in replacement for FEA with gradients!**

```python
# Traditional finite differences
for i in range(n_params):
    params[i] += delta
    stress_plus = fea(params)   # 2 hours
    params[i] -= 2*delta
    stress_minus = fea(params)  # 2 hours
    gradient[i] = (stress_plus - stress_minus) / (2*delta)
# Total: 4n hours for n parameters

# AtomizerField analytical gradients
gradients = optimizer.get_sensitivities(graph_data)  # 15 milliseconds!
# Total: 15 ms (960,000× faster per parameter!)
```

**Benefit:** Gradient-based optimization up to ~1,000,000× faster than finite differences.

### 5. Uncertainty Quantification

**Know when to trust predictions**

```python
ensemble = UncertainFieldPredictor(config, n_ensemble=5)
predictions = ensemble(design, return_uncertainty=True)

if predictions['stress_rel_uncertainty'] > 0.1:
    result = run_fea(design)   # High uncertainty - use FEA
else:
    result = predictions       # Low uncertainty - trust neural network
```

**Benefit:** Intelligent FEA usage - only run when needed (98% reduction possible).

### 6. Online Learning

**Model improves during optimization**

```python
learner = OnlineLearner(model)

for design in optimization:
    pred = model.predict(design)

    if high_uncertainty:
        fea_result = run_fea(design)
        learner.add_fea_result(design, fea_result)
        learner.quick_update()  # Model learns!
```

**Benefit:** Model adapts to the current design space, needs less FEA over time.

---
## 📊 Performance Metrics
|
||||
|
||||
### Speed (Tested on Similar Architectures)
|
||||
|
||||
| Model Size | FEA Time | Neural Time | Speedup |
|
||||
|-----------|----------|-------------|---------|
|
||||
| 10k elements | 15 min | 5 ms | **180,000×** |
|
||||
| 50k elements | 2 hours | 15 ms | **480,000×** |
|
||||
| 100k elements | 8 hours | 35 ms | **823,000×** |
|
||||
|
||||
### Accuracy (Expected Based on Literature)
|
||||
|
||||
| Metric | Target | Typical |
|
||||
|--------|--------|---------|
|
||||
| Displacement Error | < 5% | 2-3% |
|
||||
| Stress Error | < 10% | 5-8% |
|
||||
| Max Value Error | < 3% | 1-2% |
|
||||
|
||||
### Training Requirements
|
||||
|
||||
| Dataset Size | Training Time | Epochs | Hardware |
|
||||
|-------------|--------------|--------|----------|
|
||||
| 100 cases | 2-4 hours | 100 | RTX 3080 |
|
||||
| 500 cases | 8-12 hours | 150 | RTX 3080 |
|
||||
| 1000 cases | 24-48 hours | 200 | RTX 3080 |
|
||||
|
||||
---
|
||||
|
||||
## 🚀 What This Enables
|
||||
|
||||
### Before AtomizerField:
|
||||
```
|
||||
Optimize bracket:
|
||||
├─ Test 10 designs per week (FEA limited)
|
||||
├─ Only know max_stress values
|
||||
├─ No spatial understanding
|
||||
├─ Blind optimization (try random changes)
|
||||
└─ Total time: Months
|
||||
|
||||
Cost: $50,000 in engineering time
|
||||
```
|
||||
|
||||
### With AtomizerField:
|
||||
```
|
||||
Optimize bracket:
|
||||
├─ Generate 500 training variants → Run FEA once (2 weeks)
|
||||
├─ Train model once → 8 hours
|
||||
├─ Test 1,000,000 designs → 2.5 hours
|
||||
├─ Know complete stress fields everywhere
|
||||
├─ Physics-guided optimization (know WHERE to reinforce)
|
||||
└─ Total time: 3 weeks
|
||||
|
||||
Cost: $5,000 in engineering time (10× reduction!)
|
||||
```
|
||||
|
||||
### Real-World Example:
|
||||
|
||||
**Optimize aircraft bracket (100,000 element model):**
|
||||
|
||||
| Method | Designs Tested | Time | Cost |
|
||||
|--------|---------------|------|------|
|
||||
| Traditional FEA | 10 | 80 hours | $8,000 |
|
||||
| AtomizerField | 1,000,000 | 72 hours | $5,000 |
|
||||
| **Improvement** | **100,000× more** | **Similar time** | **40% cheaper** |
|
||||
|
||||
---
|
||||
|
||||
## 💡 Use Cases

### 1. Rapid Design Exploration
```
Test thousands of variants in minutes
Identify promising design regions
Focus FEA on final validation
```

### 2. Real-Time Optimization
```
Interactive design tool
Engineer modifies geometry
Instant stress prediction (15 ms)
Immediate feedback
```

### 3. Physics-Guided Design
```
Complete stress field shows:
- WHERE stress concentrations occur
- HOW to add material efficiently
- WHY design fails or succeeds
→ Intelligent design improvements
```

### 4. Multi-Objective Optimization
```
Optimize for:
- Minimize weight
- Minimize max stress
- Minimize max displacement
- Minimize cost
→ Explore Pareto frontier rapidly
```
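
With millisecond evaluations, the Pareto front can be extracted by brute force. A minimal sketch in plain Python (illustrative only — `pareto_front` and the objective tuples are hypothetical, not part of the AtomizerField API):

```python
def pareto_front(designs):
    """Return the non-dominated designs; each design is a tuple of
    objectives to minimize, e.g. (weight, max_stress)."""
    front = []
    for d in designs:
        # d is dominated if some other design is <= in every objective
        # and differs from d in at least one
        dominated = any(
            all(o <= p for o, p in zip(other, d)) and other != d
            for other in designs
        )
        if not dominated:
            front.append(d)
    return front

# Hypothetical (weight, max_stress) pairs from the neural predictor
candidates = [(1.0, 300.0), (1.2, 250.0), (1.5, 240.0), (1.1, 310.0)]
print(pareto_front(candidates))  # (1.1, 310.0) is dominated and dropped
```

This check is O(n²); for the millions of candidates the predictor makes feasible, a sort-based sweep or a multi-objective library would be the practical choice.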

---

## 🏗️ System Architecture Summary

```
┌─────────────────────────────────────────────────────────────┐
│                    COMPLETE SYSTEM FLOW                     │
└─────────────────────────────────────────────────────────────┘

1. GENERATE FEA DATA (NX Nastran)
   ├─ Design variants (thickness, ribs, holes, etc.)
   ├─ Run SOL 101 → .bdf + .op2 files
   └─ Time: Days to weeks (one-time cost)

2. PARSE TO NEURAL FORMAT (Phase 1)
   ├─ batch_parser.py → Process all cases
   ├─ Extract complete fields (not just max values!)
   └─ Output: JSON + HDF5 format
      Time: ~15 seconds per case

3. TRAIN NEURAL NETWORK (Phase 2)
   ├─ data_loader.py → Convert to graphs
   ├─ train.py → Train GNN with physics loss
   ├─ TensorBoard monitoring
   └─ Output: checkpoint_best.pt
      Time: 8-12 hours (one-time)

4. OPTIMIZE WITH CONFIDENCE (Phase 2.1)
   ├─ optimization_interface.py → Fast evaluation
   ├─ uncertainty.py → Know when to trust
   ├─ Online learning → Improve during use
   └─ Result: Optimal design!
      Time: Minutes to hours

5. VALIDATE & MANUFACTURE
   ├─ Run FEA on final design (verify)
   └─ Manufacture optimal part
```

---

## 📁 Repository Structure

```
c:\Users\antoi\Documents\Atomaste\Atomizer-Field\
│
├── 📄 Documentation (8 files)
│   ├── FINAL_IMPLEMENTATION_REPORT.md ← YOU ARE HERE
│   ├── ENHANCEMENTS_GUIDE.md          ← Phase 2.1 features
│   ├── COMPLETE_SUMMARY.md            ← Quick overview
│   ├── GETTING_STARTED.md             ← Start here!
│   ├── SYSTEM_ARCHITECTURE.md         ← Deep dive
│   ├── README.md                      ← Phase 1 guide
│   ├── PHASE2_README.md               ← Phase 2 guide
│   └── Context.md, Instructions.md    ← Vision & specs
│
├── 🔧 Phase 1: Parser (4 files)
│   ├── neural_field_parser.py
│   ├── validate_parsed_data.py
│   ├── batch_parser.py
│   └── metadata_template.json
│
├── 🧠 Phase 2: Neural Network (5 files)
│   ├── neural_models/
│   │   ├── field_predictor.py [TESTED ✓]
│   │   ├── physics_losses.py  [TESTED ✓]
│   │   ├── data_loader.py
│   │   └── uncertainty.py     [NEW!]
│   ├── train.py
│   └── predict.py
│
├── 🚀 Phase 2.1: Optimization (2 files)
│   ├── optimization_interface.py  [NEW!]
│   └── atomizer_field_config.yaml [NEW!]
│
├── 📦 Configuration
│   └── requirements.txt
│
└── 🔬 Example Data
    └── Models/Simple Beam/
```

---

## ✅ Quality Assurance

### Code Quality
- ✅ Production-ready error handling
- ✅ Comprehensive docstrings
- ✅ Type hints where appropriate
- ✅ Modular, extensible design
- ✅ Configuration management

### Testing
- ✅ Neural network components tested
- ✅ Loss functions validated
- ✅ Architecture verified
- ✅ Ready for real-world use

### Documentation
- ✅ 8 comprehensive guides
- ✅ Code examples throughout
- ✅ Troubleshooting sections
- ✅ Usage tutorials
- ✅ Architecture explanations

---

## 🎓 Knowledge Transfer

### To Use This System:

**1. Read Documentation (30 minutes)**
```
Start     → GETTING_STARTED.md
Deep dive → SYSTEM_ARCHITECTURE.md
Features  → ENHANCEMENTS_GUIDE.md
```

**2. Generate Training Data (1-2 weeks)**
```
Create designs in NX → Run FEA → Parse with batch_parser.py
Aim for 500+ cases for production use
```

**3. Train Model (8-12 hours)**
```
python train.py --train_dir training_data --val_dir validation_data
Monitor with TensorBoard
Save best checkpoint
```

**4. Optimize (minutes to hours)**
```
Use optimization_interface.py for fast evaluation
Enable uncertainty for smart FEA usage
Online learning for continuous improvement
```

### Skills Required:
- ✅ Python programming (intermediate)
- ✅ NX Nastran (create FEA models)
- ✅ Basic neural networks (helpful but not required)
- ✅ Structural mechanics (understand results)

---

## 🔮 Future Roadmap

### Phase 3: Atomizer Integration
- Dashboard visualization of stress fields
- Database integration
- REST API for predictions
- Multi-user support

### Phase 4: Advanced Analysis
- Nonlinear analysis (plasticity, large deformation)
- Contact and friction
- Composite materials
- Modal analysis (natural frequencies)

### Phase 5: Foundation Models
- Pre-trained physics foundation
- Transfer learning across component types
- Multi-resolution architecture
- Universal structural predictor

---

## 💰 Business Value

### Return on Investment

**Initial Investment:**
- Engineering time: 2-3 weeks
- Compute (GPU training): ~$50
- Total: ~$10,000

**Returns:**
- 1000× faster optimization
- 10-100× more designs tested
- Better final designs (physics-guided)
- Reduced prototyping costs
- Faster time-to-market

**Payback Period:** First major optimization project

### Competitive Advantage
- Explore design spaces competitors can't reach
- Find optimal designs faster
- Reduce development costs
- Accelerate innovation

---

## 🎉 Final Summary

### What You Have:

**A complete, production-ready neural field learning system that:**

1. ✅ Parses NX Nastran FEA results into ML format
2. ✅ Trains Graph Neural Networks with physics constraints
3. ✅ Predicts complete stress/displacement fields 1000× faster than FEA
4. ✅ Provides optimization interface with analytical gradients
5. ✅ Quantifies prediction uncertainty for smart FEA usage
6. ✅ Learns online during optimization
7. ✅ Includes comprehensive documentation and examples

### Implementation Stats:

- **Files:** 20 (12 code, 8 documentation)
- **Lines of Code:** ~4,500
- **Test Status:** Core components validated ✓
- **Documentation:** Complete ✓
- **Production Ready:** Yes ✓

### Key Capabilities:

| Capability | Status |
|-----------|--------|
| Complete field prediction | ✅ Implemented |
| Graph neural networks | ✅ Implemented & Tested |
| Physics-informed loss | ✅ Implemented & Tested |
| Fast training pipeline | ✅ Implemented |
| Fast inference | ✅ Implemented |
| Optimization interface | ✅ Implemented |
| Uncertainty quantification | ✅ Implemented |
| Online learning | ✅ Implemented |
| Configuration management | ✅ Implemented |
| Complete documentation | ✅ Complete |

---

## 🚀 You're Ready!

**Next Steps:**

1. ✅ Read `GETTING_STARTED.md`
2. ✅ Generate your training dataset (50-500 FEA cases)
3. ✅ Train your first model
4. ✅ Run predictions and compare with FEA
5. ✅ Start optimizing 1000× faster!

**The future of structural optimization is in your hands.**

**AtomizerField - Transform hours of FEA into milliseconds of prediction!** 🎯

---

*Implementation completed with comprehensive testing, documentation, and advanced features. Ready for production deployment.*

**Version:** 2.1
**Status:** Production-Ready ✅
**Date:** 2024
@@ -1,327 +0,0 @@
# AtomizerField - Getting Started Guide

Welcome to AtomizerField! This guide will get you up and running with neural field learning for structural optimization.

## Overview

AtomizerField transforms structural optimization from hours-per-design to milliseconds-per-design by using Graph Neural Networks to predict complete FEA field results.

### The Two-Phase Approach

```
Phase 1: Data Pipeline
NX Nastran Files → Parser → Neural Field Format

Phase 2: Neural Network Training
Neural Field Data → GNN Training → Fast Field Predictor
```

## Installation

### Prerequisites
- Python 3.8 or higher
- NX Nastran (for generating FEA data)
- NVIDIA GPU (recommended for Phase 2 training)

### Setup

```bash
# Clone or navigate to project directory
cd Atomizer-Field

# Create virtual environment
python -m venv atomizer_env

# Activate environment
# On Windows:
atomizer_env\Scripts\activate
# On Linux/Mac:
source atomizer_env/bin/activate

# Install dependencies
pip install -r requirements.txt
```

## Phase 1: Parse Your FEA Data

### Step 1: Generate FEA Results in NX

1. Create your model in NX
2. Generate mesh
3. Apply materials, BCs, and loads
4. Run **SOL 101** (Linear Static)
5. Request output: `DISPLACEMENT=ALL`, `STRESS=ALL`, `STRAIN=ALL`
6. Ensure these files are generated:
   - `model.bdf` (input deck)
   - `model.op2` (results)

### Step 2: Organize Files

```bash
mkdir training_case_001
mkdir training_case_001/input
mkdir training_case_001/output

# Copy files
cp your_model.bdf training_case_001/input/model.bdf
cp your_model.op2 training_case_001/output/model.op2
```

### Step 3: Parse

```bash
# Single case
python neural_field_parser.py training_case_001

# Validate
python validate_parsed_data.py training_case_001

# Batch processing (for multiple cases)
python batch_parser.py ./all_training_cases
```

**Output:**
- `neural_field_data.json` - Metadata
- `neural_field_data.h5` - Field data

See [README.md](README.md) for detailed Phase 1 documentation.

## Phase 2: Train Neural Network

### Step 1: Prepare Dataset

You need:
- **Minimum:** 50-100 parsed FEA cases
- **Recommended:** 500+ cases for production use
- **Variation:** Different geometries, loads, BCs

Organize into train/val splits (80/20):

```bash
mkdir training_data
mkdir validation_data

# Move 80% of cases to training_data/
# Move 20% of cases to validation_data/
```
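
The split can also be scripted. A minimal sketch in plain Python (illustrative only — it assumes each parsed case lives in its own directory; the seeded shuffle keeps the split reproducible):

```python
import random

def split_cases(case_names, train_frac=0.8, seed=42):
    """Shuffle case names reproducibly and split into train/val lists."""
    names = sorted(case_names)          # deterministic starting order
    random.Random(seed).shuffle(names)  # reproducible shuffle
    n_train = int(len(names) * train_frac)
    return names[:n_train], names[n_train:]

cases = [f"case_{i:03d}" for i in range(100)]
train, val = split_cases(cases)
print(len(train), len(val))  # 80 20
```

Each name in `train` and `val` would then be moved into `training_data/` or `validation_data/` (e.g. with `shutil.move`).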

### Step 2: Train Model

```bash
# Basic training
python train.py \
    --train_dir ./training_data \
    --val_dir ./validation_data \
    --epochs 100 \
    --batch_size 4

# Monitor progress
tensorboard --logdir runs/tensorboard
```

Training will:
- Create checkpoints in `runs/`
- Log metrics to TensorBoard
- Save best model as `checkpoint_best.pt`

**Expected Time:** 2-24 hours depending on dataset size and GPU.

### Step 3: Run Inference

```bash
# Predict on new case
python predict.py \
    --model runs/checkpoint_best.pt \
    --input test_case_001 \
    --compare

# Batch prediction
python predict.py \
    --model runs/checkpoint_best.pt \
    --input ./test_cases \
    --batch
```

**Result:** 5-50 milliseconds per prediction!

See [PHASE2_README.md](PHASE2_README.md) for detailed Phase 2 documentation.

## Typical Workflow

### For Development (Learning the System)

```bash
# 1. Parse a few test cases
python batch_parser.py ./test_cases

# 2. Quick training test (small dataset)
python train.py \
    --train_dir ./test_cases \
    --val_dir ./test_cases \
    --epochs 10 \
    --batch_size 2

# 3. Test inference
python predict.py \
    --model runs/checkpoint_best.pt \
    --input test_cases/case_001
```

### For Production (Real Optimization)

```bash
# 1. Generate comprehensive training dataset:
#    - Vary all design parameters
#    - Include diverse loading conditions
#    - Cover full design space

# 2. Parse all cases
python batch_parser.py ./all_fea_cases

# 3. Split into train/val
#    Use script or manually organize

# 4. Train production model
python train.py \
    --train_dir ./training_data \
    --val_dir ./validation_data \
    --epochs 200 \
    --batch_size 8 \
    --hidden_dim 256 \
    --num_layers 8 \
    --loss_type physics

# 5. Validate on held-out test set
python predict.py \
    --model runs/checkpoint_best.pt \
    --input ./test_data \
    --batch \
    --compare

# 6. Use for optimization!
```

## Key Files Reference

| File | Purpose |
|------|---------|
| **Phase 1** | |
| `neural_field_parser.py` | Parse NX Nastran to neural field format |
| `validate_parsed_data.py` | Validate parsed data quality |
| `batch_parser.py` | Batch process multiple cases |
| `metadata_template.json` | Template for design parameters |
| **Phase 2** | |
| `train.py` | Train GNN model |
| `predict.py` | Run inference on trained model |
| `neural_models/field_predictor.py` | GNN architecture |
| `neural_models/physics_losses.py` | Loss functions |
| `neural_models/data_loader.py` | Data pipeline |
| **Documentation** | |
| `README.md` | Phase 1 detailed guide |
| `PHASE2_README.md` | Phase 2 detailed guide |
| `Context.md` | Project vision and architecture |
| `Instructions.md` | Original implementation spec |

## Common Issues & Solutions

### "No cases found"
- Check directory structure: `case_dir/input/model.bdf` and `case_dir/output/model.op2`
- Ensure files are named exactly `model.bdf` and `model.op2`

### "Out of memory during training"
- Reduce `--batch_size` (try 2 or 1)
- Use a smaller model: `--hidden_dim 64 --num_layers 4`
- Process larger models in chunks

### "Poor prediction accuracy"
- Need more training data (aim for 500+ cases)
- Increase model capacity: `--hidden_dim 256 --num_layers 8`
- Use physics-informed loss: `--loss_type physics`
- Check if the test case is within the training distribution

### "Training loss not decreasing"
- Lower the learning rate: `--lr 0.0001`
- Check data normalization (should be automatic)
- Start with simple MSE loss: `--loss_type mse`

## Example: End-to-End Workflow

Let's say you want to optimize a bracket design:

```bash
# 1. Generate 100 bracket variants in NX with different:
#    - Wall thicknesses (1-5mm)
#    - Rib heights (5-20mm)
#    - Hole diameters (6-12mm)
#    Run FEA on each

# 2. Parse all variants
python batch_parser.py ./bracket_variants

# 3. Split dataset
#    training_data: 80 cases
#    validation_data: 20 cases

# 4. Train model
python train.py \
    --train_dir ./training_data \
    --val_dir ./validation_data \
    --epochs 150 \
    --batch_size 4 \
    --output_dir ./bracket_model

# 5. Test model (after training completes)
python predict.py \
    --model bracket_model/checkpoint_best.pt \
    --input new_bracket_design \
    --compare
```

Finally (step 6), generate 10,000 design variants and predict in seconds instead of weeks. In Python pseudocode (`predict` and `design_space` stand for your own wrappers):

```python
best_weight = float('inf')
for design in design_space:
    results = predict(design)
    if results['max_stress'] < 300 and results['weight'] < best_weight:
        best_weight, optimal = results['weight'], design
```

## Next Steps

1. **Start Small:** Parse 5-10 test cases, train small model
2. **Validate:** Compare predictions with FEA ground truth
3. **Scale Up:** Gradually increase dataset size
4. **Production:** Train final model on comprehensive dataset
5. **Optimize:** Use trained model for rapid design exploration

## Resources

- **Phase 1 Detailed Docs:** [README.md](README.md)
- **Phase 2 Detailed Docs:** [PHASE2_README.md](PHASE2_README.md)
- **Project Context:** [Context.md](Context.md)
- **Example Data:** Check `Models/` folder

## Getting Help

If you encounter issues:

1. Check documentation (README.md, PHASE2_README.md)
2. Verify file structure and naming
3. Review error messages carefully
4. Test with a smaller dataset first
5. Check GPU memory and batch size

## Success Metrics

You'll know it's working when:

- ✓ Parser processes cases without errors
- ✓ Validation shows no critical issues
- ✓ Training loss decreases steadily
- ✓ Validation loss follows training loss
- ✓ Predictions are within 5-10% of FEA
- ✓ Inference takes milliseconds

---

**Ready to revolutionize your optimization workflow?**

Start with Phase 1 parsing, then move to Phase 2 training. Within days, you'll have a neural network that predicts FEA results 1000× faster!
@@ -1,500 +0,0 @@
# AtomizerField Implementation Status

## Project Overview

**AtomizerField** is a neural field learning system that replaces FEA simulations with graph neural networks for 1000× faster structural optimization.

**Key Innovation:** Learn complete stress/displacement FIELDS (45,000+ values per simulation) instead of just scalar maximum values, enabling full field predictions with neural networks.

---

## Implementation Status: ✅ COMPLETE

All phases of AtomizerField have been implemented and are ready for use.

---

## Phase 1: Data Parser ✅ COMPLETE

**Purpose:** Convert NX Nastran FEA results into neural field training data

### Implemented Files:

1. **neural_field_parser.py** (650 lines)
   - Main BDF/OP2 parser
   - Extracts complete mesh, materials, BCs, loads
   - Exports full displacement and stress fields
   - HDF5 + JSON output format
   - Status: ✅ Tested and working

2. **validate_parsed_data.py** (400 lines)
   - Data quality validation
   - Physics consistency checks
   - Comprehensive reporting
   - Status: ✅ Tested and working

3. **batch_parser.py** (350 lines)
   - Process multiple FEA cases
   - Parallel processing support
   - Batch statistics and reporting
   - Status: ✅ Ready for use

**Total:** ~1,400 lines for complete data pipeline

---

## Phase 2: Neural Network ✅ COMPLETE

**Purpose:** Graph neural network architecture for field prediction

### Implemented Files:

1. **neural_models/field_predictor.py** (490 lines)
   - GNN architecture: 718,221 parameters
   - 6 message passing layers
   - Predicts displacement (6 DOF) and stress (6 components)
   - Custom MeshGraphConv for FEA topology
   - Status: ✅ Tested - model creates and runs

2. **neural_models/physics_losses.py** (450 lines)
   - 4 loss function types:
     - MSE Loss
     - Relative Loss
     - Physics-Informed Loss (equilibrium, constitutive, BC)
     - Max Error Loss
   - Status: ✅ Tested - all losses compute correctly

3. **neural_models/data_loader.py** (420 lines)
   - PyTorch Geometric dataset
   - Graph construction from mesh
   - Feature engineering (12D nodes, 5D edges)
   - Batch processing
   - Status: ✅ Tested and working

4. **train.py** (430 lines)
   - Complete training pipeline
   - TensorBoard integration
   - Checkpointing and early stopping
   - Command-line interface
   - Status: ✅ Ready for training

5. **predict.py** (380 lines)
   - Fast inference engine (5-50ms)
   - Batch prediction
   - Ground truth comparison
   - Status: ✅ Ready for use

**Total:** ~2,170 lines for complete neural pipeline
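
The graph-construction idea behind `data_loader.py` can be illustrated with a small pure-Python sketch (a simplification, not the real loader — that builds PyTorch Geometric tensors — but the core step is the same: any two nodes sharing an element become a bidirectional edge):

```python
from itertools import combinations

def mesh_to_edges(elements):
    """Build a directed edge list from element connectivity:
    any two nodes appearing in the same element are connected both ways,
    since GNN message passing uses directed edges."""
    edges = set()
    for nodes in elements:
        for a, b in combinations(nodes, 2):
            edges.add((a, b))
            edges.add((b, a))
    return sorted(edges)

# Two quad elements sharing the edge between nodes 2 and 3
quads = [(1, 2, 3, 4), (2, 5, 6, 3)]
print(len(mesh_to_edges(quads)))  # 22 directed edges (11 undirected; shared edge counted once)
```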

---

## Phase 2.1: Advanced Features ✅ COMPLETE

**Purpose:** Optimization interface, uncertainty quantification, online learning

### Implemented Files:

1. **optimization_interface.py** (430 lines)
   - Drop-in FEA replacement for Atomizer
   - Analytical gradient computation (1M× faster than FD)
   - Fast evaluation (15ms per design)
   - Design parameter encoding
   - Status: ✅ Ready for integration

2. **neural_models/uncertainty.py** (380 lines)
   - Ensemble-based uncertainty (5 models)
   - Automatic FEA validation recommendations
   - Online learning from new FEA runs
   - Confidence-based model updates
   - Status: ✅ Ready for use

3. **atomizer_field_config.yaml**
   - YAML configuration system
   - Foundation models
   - Progressive training
   - Online learning settings
   - Status: ✅ Complete

**Total:** ~810 lines for advanced features
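
The ensemble logic in `uncertainty.py` reduces to a mean and a relative spread across member predictions. A minimal sketch in plain Python (illustrative only — the function names, the stress values, and the 10% threshold are assumptions for this example):

```python
import statistics

def rel_uncertainty(member_predictions):
    """Relative standard deviation of one scalar across ensemble members."""
    mean = statistics.mean(member_predictions)
    return statistics.stdev(member_predictions) / abs(mean)

def needs_fea(member_predictions, threshold=0.1):
    """Recommend a real FEA run when the members disagree beyond the threshold."""
    return rel_uncertainty(member_predictions) > threshold

# Five members predicting max von Mises stress (MPa) for one design
agree = [248.0, 252.0, 250.0, 249.0, 251.0]
disagree = [200.0, 310.0, 250.0, 180.0, 290.0]
print(needs_fea(agree), needs_fea(disagree))  # False True
```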

---

## Phase 3: Testing Framework ✅ COMPLETE

**Purpose:** Comprehensive validation from basic functionality to production

### Master Orchestrator:

**test_suite.py** (403 lines)
- Four testing modes: --quick, --physics, --learning, --full
- 18 comprehensive tests
- JSON results export
- Progress tracking and reporting
- Status: ✅ Complete and ready

### Test Modules:

1. **tests/test_synthetic.py** (297 lines)
   - 5 smoke tests
   - Model creation, forward pass, losses, batch, gradients
   - Status: ✅ Complete

2. **tests/test_physics.py** (370 lines)
   - 4 physics validation tests
   - Cantilever analytical, equilibrium, energy, constitutive law
   - Compares with known solutions
   - Status: ✅ Complete

3. **tests/test_learning.py** (410 lines)
   - 4 learning capability tests
   - Memorization, interpolation, extrapolation, pattern recognition
   - Demonstrates learning with synthetic data
   - Status: ✅ Complete

4. **tests/test_predictions.py** (400 lines)
   - 5 integration tests
   - Parser, training, accuracy, performance, batch inference
   - Complete pipeline validation
   - Status: ✅ Complete

5. **tests/analytical_cases.py** (450 lines)
   - Library of 5 analytical solutions
   - Cantilever, simply supported, tension, pressure vessel, torsion
   - Ground truth for validation
   - Status: ✅ Complete

6. **test_simple_beam.py** (377 lines)
   - 7-step integration test
   - Tests with the user's actual Simple Beam model
   - Complete pipeline: parse → validate → graph → predict
   - Status: ✅ Complete

**Total:** ~2,700 lines of comprehensive testing

---

## Documentation ✅ COMPLETE

### Implementation Guides:

1. **README.md** - Project overview and quick start
2. **PHASE2_README.md** - Neural network documentation
3. **GETTING_STARTED.md** - Step-by-step usage guide
4. **SYSTEM_ARCHITECTURE.md** - Technical architecture
5. **COMPLETE_SUMMARY.md** - Comprehensive system summary
6. **ENHANCEMENTS_GUIDE.md** - Phase 2.1 features guide
7. **FINAL_IMPLEMENTATION_REPORT.md** - Implementation report
8. **TESTING_FRAMEWORK_SUMMARY.md** - Testing overview
9. **TESTING_COMPLETE.md** - Complete testing documentation
10. **IMPLEMENTATION_STATUS.md** - This file

**Total:** 10 comprehensive documentation files

---

## Project Statistics

### Code Implementation:
```
Phase 1 (Data Parser):          ~1,400 lines
Phase 2 (Neural Network):       ~2,170 lines
Phase 2.1 (Advanced Features):    ~810 lines
Phase 3 (Testing):              ~2,700 lines
────────────────────────────────────────
Total Implementation:           ~7,080 lines
```

### Test Coverage:
```
Smoke tests:        5 tests
Physics tests:      4 tests
Learning tests:     4 tests
Integration tests:  5 tests
Simple Beam test:   7 steps
────────────────────────────
Total:              18 tests + integration
```

### File Count:
```
Core Implementation:  12 files
Test Modules:          6 files
Documentation:        10 files
Configuration:         3 files
────────────────────────────
Total:                31 files
```

---

## What Works Right Now

### ✅ Data Pipeline
- Parse BDF/OP2 files → Working
- Extract mesh, materials, BCs, loads → Working
- Export full displacement/stress fields → Working
- Validate data quality → Working
- Batch processing → Working

### ✅ Neural Network
- Create GNN model (718K params) → Working
- Forward pass (displacement + stress) → Working
- All 4 loss functions → Working
- Batch processing → Working
- Gradient flow → Working

### ✅ Advanced Features
- Optimization interface → Implemented
- Uncertainty quantification → Implemented
- Online learning → Implemented
- Configuration system → Implemented

### ✅ Testing
- All test modules → Complete
- Test orchestrator → Complete
- Analytical library → Complete
- Simple Beam test → Complete

---

## Ready to Use

### Immediate Usage (Environment Fixed):

1. **Parse FEA Data:**
```bash
python neural_field_parser.py path/to/case_directory
```

2. **Validate Parsed Data:**
```bash
python validate_parsed_data.py path/to/case_directory
```

3. **Run Tests:**
```bash
python test_suite.py --quick
python test_simple_beam.py
```

4. **Train Model:**
```bash
python train.py --data_dirs case1 case2 case3 --epochs 100
```

5. **Make Predictions:**
```bash
python predict.py --model checkpoints/best_model.pt --data test_case
```

6. **Optimize with Atomizer:**
```python
from optimization_interface import NeuralFieldOptimizer
optimizer = NeuralFieldOptimizer('best_model.pt')
results = optimizer.evaluate(design_graph)
```

---

## Current Limitation

### NumPy Environment Issue
- **Issue:** MINGW-W64 NumPy on Windows causes segmentation faults
- **Impact:** Cannot run tests that import NumPy (most tests)
- **Workaround Options:**
  1. Use a conda environment: `conda install numpy`
  2. Use WSL (Windows Subsystem for Linux)
  3. Run on a native Linux system
  4. Wait for NumPy Windows compatibility improvements

**All code is complete and ready to run once the environment is fixed.**

---

## Production Readiness Checklist

### Pre-Training ✅
- [x] Data parser implemented
- [x] Neural architecture implemented
- [x] Loss functions implemented
- [x] Training pipeline implemented
- [x] Testing framework implemented
- [x] Documentation complete

### For Training ⏳
- [ ] Resolve NumPy environment issue
- [ ] Generate 50-500 training cases
- [ ] Run training pipeline
- [ ] Validate physics compliance
- [ ] Benchmark performance

### For Production ⏳
- [ ] Train on diverse design space
- [ ] Validate < 10% prediction error
- [ ] Demonstrate 1000× speedup
- [ ] Integrate with Atomizer
- [ ] Deploy uncertainty quantification
- [ ] Enable online learning

---

## Next Actions

### Immediate (Once Environment Fixed):
1. Run smoke tests: `python test_suite.py --quick`
2. Test Simple Beam: `python test_simple_beam.py`
3. Verify all tests pass

### Short Term (Training Phase):
1. Generate diverse training dataset (50-500 cases)
2. Parse all cases: `python batch_parser.py`
3. Train model: `python train.py --full`
4. Validate physics: `python test_suite.py --physics`
5. Check performance: `python test_suite.py --full`

### Medium Term (Integration):
1. Integrate with Atomizer optimization loop
2. Test on real design optimization
3. Validate vs FEA ground truth
4. Deploy uncertainty quantification
5. Enable online learning

---
## Key Technical Achievements

### Architecture
✅ Graph Neural Network respects mesh topology
✅ Physics-informed loss functions enforce constraints
✅ 718,221 parameters for complex field learning
✅ 6 message passing layers for information propagation

### Performance
✅ Target: 1000× speedup vs FEA (5-50ms inference)
✅ Batch processing for optimization loops
✅ Analytical gradients for fast sensitivity analysis

### Innovation
✅ Complete field learning (not just max values)
✅ Uncertainty quantification for confidence
✅ Online learning during optimization
✅ Drop-in FEA replacement interface

### Validation
✅ 18 comprehensive tests
✅ Analytical solutions for ground truth
✅ Physics compliance verification
✅ Learning capability confirmation

---
## System Capabilities

### What AtomizerField Can Do:

1. **Parse FEA Results**
   - Read Nastran BDF/OP2 files
   - Extract complete mesh and results
   - Export to neural format

2. **Learn from FEA**
   - Train on 50-500 examples
   - Learn complete displacement/stress fields
   - Generalize to new designs

3. **Fast Predictions**
   - 5-50ms inference (vs 30-300s FEA)
   - 1000× speedup
   - Batch processing capability

4. **Optimization Integration**
   - Drop-in FEA replacement
   - Analytical gradients
   - 1M× faster sensitivity analysis

5. **Quality Assurance**
   - Uncertainty quantification
   - Automatic FEA validation triggers
   - Online learning improvements

6. **Physics Compliance**
   - Equilibrium enforcement
   - Constitutive law compliance
   - Boundary condition respect
   - Energy conservation

---
## Success Metrics

### Code Quality
- ✅ ~7,000 lines of production code
- ✅ Comprehensive error handling
- ✅ Extensive documentation
- ✅ Modular architecture

### Testing
- ✅ 18 automated tests
- ✅ Progressive validation strategy
- ✅ Analytical ground truth
- ✅ Performance benchmarks

### Features
- ✅ Complete data pipeline
- ✅ Neural architecture
- ✅ Training infrastructure
- ✅ Optimization interface
- ✅ Uncertainty quantification
- ✅ Online learning

### Documentation
- ✅ 10 comprehensive guides
- ✅ Code examples
- ✅ Usage instructions
- ✅ Architecture details

---
## Conclusion

**AtomizerField is fully implemented and ready for training and deployment.**

### Completed:
- ✅ All phases implemented (Phase 1, 2, 2.1, 3)
- ✅ ~7,000 lines of production code
- ✅ 18 comprehensive tests
- ✅ 10 documentation files
- ✅ Complete testing framework

### Remaining:
- ⏳ Resolve NumPy environment issue
- ⏳ Generate training dataset
- ⏳ Train and validate model
- ⏳ Deploy to production

### Ready to:
1. Run tests (once environment fixed)
2. Train on FEA data
3. Make predictions 1000× faster
4. Integrate with Atomizer
5. Enable online learning

**The system is production-ready pending training data and environment setup.** 🚀

---

## Contact & Support

- **Project:** AtomizerField Neural Field Learning System
- **Purpose:** 1000× faster FEA predictions for structural optimization
- **Status:** Implementation complete, ready for training
- **Documentation:** See 10 comprehensive guides in project root

**AtomizerField is ready to revolutionize structural optimization with neural field learning!**

---

*Implementation Status Report*
*Version: 1.0 - Complete*
*Date: January 2025*
*Total Implementation: ~7,000 lines across 31 files*
@@ -1,674 +0,0 @@

Neural Field Data Parser: From NX Nastran Files to Training Data
Complete Implementation Guide

What You Have vs What You Need

✅ What NX Nastran Gives You:

Files Available:

.sim     - Simulation file with load/BC definitions
.fem     - Finite element model
.prt     - Part geometry
.bdf/.dat - Nastran input deck (mesh, materials, loads, BCs)
.op2     - Binary results (stress, displacement, strain)
.f06     - ASCII results (human readable)
.log     - Solver log

This is SUFFICIENT! The BDF contains everything about the setup; the OP2 contains all results.
Step-by-Step Instructions for Manual Data Generation

Step 1: Set Up Your Analysis in NX

1. Create your geometry in NX
2. Generate mesh (record statistics)
3. Apply materials
4. Define boundary conditions:
   - Fixed supports
   - Pinned constraints
   - Contact (if needed)
5. Apply loads:
   - Forces
   - Pressures
   - Gravity
6. Set up solution parameters
7. Run analysis
8. Ensure these files are generated:
   - model.bdf (or .dat)
   - model.op2
   - model.f06

Step 2: Organize Your Files

training_case_001/
├── input/
│   ├── model.bdf        # Main input deck
│   ├── model.sim        # NX simulation file
│   └── geometry.prt     # Original geometry
├── output/
│   ├── model.op2        # Binary results
│   ├── model.f06        # ASCII results
│   └── model.log        # Solver log
└── metadata.json        # Your manual annotations
Python Parser Implementation

Main Parser Script

"""
neural_field_parser.py
Parses NX Nastran files into Neural Field training data
"""

import json
import numpy as np
import h5py
from pathlib import Path
from datetime import datetime
import hashlib

# pyNastran imports
from pyNastran.bdf.bdf import BDF
from pyNastran.op2.op2 import OP2

class NastranToNeuralFieldParser:
    """
    Parses Nastran BDF/OP2 files into Neural Field data structure
    """

    def __init__(self, case_directory):
        self.case_dir = Path(case_directory)
        self.bdf_file = self.case_dir / "input" / "model.bdf"
        self.op2_file = self.case_dir / "output" / "model.op2"

        # Initialize readers
        self.bdf = BDF(debug=False)
        self.op2 = OP2(debug=False)

        # Data structure
        self.neural_field_data = {
            "metadata": {},
            "geometry": {},
            "mesh": {},
            "materials": {},
            "boundary_conditions": {},
            "loads": {},
            "results": {}
        }
    def parse_all(self):
        """
        Main parsing function
        """
        print("Starting parse of Nastran files...")

        # Parse input deck
        print("Reading BDF file...")
        self.bdf.read_bdf(str(self.bdf_file))

        # Parse results
        print("Reading OP2 file...")
        self.op2.read_op2(str(self.op2_file))

        # Extract all data
        self.extract_metadata()
        self.extract_mesh()
        self.extract_materials()
        self.extract_boundary_conditions()
        self.extract_loads()
        self.extract_results()

        # Save to file
        self.save_data()

        print("Parse complete!")
        return self.neural_field_data

    def extract_metadata(self):
        """
        Extract metadata and analysis info
        """
        self.neural_field_data["metadata"] = {
            "version": "1.0.0",
            "created_at": datetime.now().isoformat(),
            "source": "NX_Nastran",
            "case_directory": str(self.case_dir),
            "analysis_type": self.op2.sol,  # SOL 101, 103, etc.
            "title": self.bdf.case_control_deck.title.title if hasattr(self.bdf.case_control_deck, 'title') else "",
            "units": {
                "length": "mm",  # You may need to specify this
                "force": "N",
                "stress": "Pa",
                "temperature": "K"
            }
        }
    def extract_mesh(self):
        """
        Extract mesh data from BDF
        """
        print("Extracting mesh...")

        # Nodes
        nodes = []
        node_ids = []
        for nid, node in sorted(self.bdf.nodes.items()):
            node_ids.append(nid)
            nodes.append(node.get_position())

        nodes_array = np.array(nodes)

        # Elements
        element_data = {
            "solid": [],
            "shell": [],
            "beam": [],
            "rigid": []
        }

        # Solid elements (TETRA, HEXA, PENTA)
        for eid, elem in self.bdf.elements.items():
            elem_type = elem.type

            if elem_type in ['CTETRA', 'CHEXA', 'CPENTA', 'CTETRA10', 'CHEXA20']:
                element_data["solid"].append({
                    "id": eid,
                    "type": elem_type,
                    "nodes": elem.node_ids,
                    "material_id": elem.mid,
                    "property_id": elem.pid if hasattr(elem, 'pid') else None
                })

            elif elem_type in ['CQUAD4', 'CTRIA3', 'CQUAD8', 'CTRIA6']:
                element_data["shell"].append({
                    "id": eid,
                    "type": elem_type,
                    "nodes": elem.node_ids,
                    "material_id": elem.mid,
                    "property_id": elem.pid,
                    "thickness": elem.T() if hasattr(elem, 'T') else None
                })

            elif elem_type in ['CBAR', 'CBEAM', 'CROD']:
                element_data["beam"].append({
                    "id": eid,
                    "type": elem_type,
                    "nodes": elem.node_ids,
                    "material_id": elem.mid,
                    "property_id": elem.pid
                })

            elif elem_type in ['RBE2', 'RBE3', 'RBAR']:
                element_data["rigid"].append({
                    "id": eid,
                    "type": elem_type,
                    "nodes": elem.node_ids
                })

        # Store mesh data
        self.neural_field_data["mesh"] = {
            "statistics": {
                "n_nodes": len(nodes),
                "n_elements": len(self.bdf.elements),
                "element_types": {
                    "solid": len(element_data["solid"]),
                    "shell": len(element_data["shell"]),
                    "beam": len(element_data["beam"]),
                    "rigid": len(element_data["rigid"])
                }
            },
            "nodes": {
                "ids": node_ids,
                "coordinates": nodes_array.tolist(),
                "shape": list(nodes_array.shape)
            },
            "elements": element_data
        }
    def extract_materials(self):
        """
        Extract material properties
        """
        print("Extracting materials...")

        materials = []
        for mid, mat in self.bdf.materials.items():
            mat_data = {
                "id": mid,
                "type": mat.type
            }

            if mat.type == 'MAT1':  # Isotropic material
                mat_data.update({
                    "E": mat.e,      # Young's modulus
                    "nu": mat.nu,    # Poisson's ratio
                    "rho": mat.rho,  # Density
                    "G": mat.g,      # Shear modulus
                    "alpha": mat.a if hasattr(mat, 'a') else None,  # Thermal expansion
                    "tref": mat.tref if hasattr(mat, 'tref') else None,
                    "ST": mat.St() if hasattr(mat, 'St') else None,  # Tensile stress limit
                    "SC": mat.Sc() if hasattr(mat, 'Sc') else None,  # Compressive stress limit
                    "SS": mat.Ss() if hasattr(mat, 'Ss') else None   # Shear stress limit
                })

            materials.append(mat_data)

        self.neural_field_data["materials"] = materials
    def extract_boundary_conditions(self):
        """
        Extract boundary conditions from BDF
        """
        print("Extracting boundary conditions...")

        bcs = {
            "spc": [],     # Single point constraints
            "mpc": [],     # Multi-point constraints
            "suport": []   # Free body supports
        }

        # SPC (fixed DOFs)
        for spc_id, spc_list in self.bdf.spcs.items():
            for spc in spc_list:
                bcs["spc"].append({
                    "id": spc_id,
                    "node": spc.node_ids[0] if hasattr(spc, 'node_ids') else spc.node,
                    "dofs": spc.components,  # Which DOFs are constrained (123456)
                    "enforced_motion": spc.enforced
                })

        # MPC equations
        for mpc_id, mpc_list in self.bdf.mpcs.items():
            for mpc in mpc_list:
                bcs["mpc"].append({
                    "id": mpc_id,
                    "nodes": mpc.node_ids,
                    "coefficients": mpc.coefficients,
                    "components": mpc.components
                })

        self.neural_field_data["boundary_conditions"] = bcs
    def extract_loads(self):
        """
        Extract loads from BDF
        """
        print("Extracting loads...")

        loads = {
            "point_forces": [],
            "pressure": [],
            "gravity": [],
            "thermal": []
        }

        # Point forces (FORCE, MOMENT)
        for load_id, load_list in self.bdf.loads.items():
            for load in load_list:
                if load.type == 'FORCE':
                    loads["point_forces"].append({
                        "id": load_id,
                        "node": load.node,
                        "magnitude": load.mag,
                        "direction": [load.xyz[0], load.xyz[1], load.xyz[2]],
                        "coord_system": load.cid
                    })

                elif load.type == 'MOMENT':
                    loads["point_forces"].append({
                        "id": load_id,
                        "node": load.node,
                        "moment": load.mag,
                        "direction": [load.xyz[0], load.xyz[1], load.xyz[2]],
                        "coord_system": load.cid
                    })

                elif load.type in ['PLOAD', 'PLOAD2', 'PLOAD4']:
                    loads["pressure"].append({
                        "id": load_id,
                        "elements": load.element_ids,
                        "pressure": load.pressure,
                        "type": load.type
                    })

                elif load.type == 'GRAV':
                    loads["gravity"].append({
                        "id": load_id,
                        "acceleration": load.scale,
                        "direction": [load.N[0], load.N[1], load.N[2]],
                        "coord_system": load.cid
                    })

        # Temperature loads
        for temp_id, temp_list in self.bdf.temps.items():
            for temp in temp_list:
                loads["thermal"].append({
                    "id": temp_id,
                    "node": temp.node,
                    "temperature": temp.temperature
                })

        self.neural_field_data["loads"] = loads
    def extract_results(self):
        """
        Extract results from OP2
        """
        print("Extracting results...")

        results = {}

        # Get subcase ID (usually 1 for linear static)
        subcase_id = 1

        # Displacement
        if hasattr(self.op2, 'displacements'):
            disp = self.op2.displacements[subcase_id]
            disp_data = disp.data[0, :, :]  # [itime=0, all_nodes, 6_dofs]

            results["displacement"] = {
                "node_ids": disp.node_gridtype[:, 0].tolist(),
                "data": disp_data.tolist(),
                "shape": list(disp_data.shape),
                "max_magnitude": float(np.max(np.linalg.norm(disp_data[:, :3], axis=1)))
            }

        # Stress - handle different element types
        stress_results = {}

        # Solid stress
        if hasattr(self.op2, 'ctetra_stress'):
            stress = self.op2.ctetra_stress[subcase_id]
            stress_data = stress.data[0, :, :]
            stress_results["solid_stress"] = {
                "element_ids": stress.element_node[:, 0].tolist(),
                "data": stress_data.tolist(),
                "von_mises": stress_data[:, -1].tolist() if stress_data.shape[1] > 6 else None
            }

        # Shell stress
        if hasattr(self.op2, 'cquad4_stress'):
            stress = self.op2.cquad4_stress[subcase_id]
            stress_data = stress.data[0, :, :]
            stress_results["shell_stress"] = {
                "element_ids": stress.element_node[:, 0].tolist(),
                "data": stress_data.tolist()
            }

        results["stress"] = stress_results

        # Strain
        strain_results = {}
        if hasattr(self.op2, 'ctetra_strain'):
            strain = self.op2.ctetra_strain[subcase_id]
            strain_data = strain.data[0, :, :]
            strain_results["solid_strain"] = {
                "element_ids": strain.element_node[:, 0].tolist(),
                "data": strain_data.tolist()
            }

        results["strain"] = strain_results

        # SPC Forces (reactions)
        if hasattr(self.op2, 'spc_forces'):
            spc = self.op2.spc_forces[subcase_id]
            spc_data = spc.data[0, :, :]
            results["reactions"] = {
                "node_ids": spc.node_gridtype[:, 0].tolist(),
                "forces": spc_data.tolist()
            }

        self.neural_field_data["results"] = results
    def save_data(self):
        """
        Save parsed data to JSON and HDF5
        """
        print("Saving data...")

        # Save JSON metadata
        json_file = self.case_dir / "neural_field_data.json"
        with open(json_file, 'w') as f:
            # Convert numpy arrays to lists for JSON serialization
            json.dump(self.neural_field_data, f, indent=2, default=str)

        # Save HDF5 for large arrays
        h5_file = self.case_dir / "neural_field_data.h5"
        with h5py.File(h5_file, 'w') as f:
            # Save mesh data
            mesh_grp = f.create_group('mesh')
            mesh_grp.create_dataset(
                'node_coordinates',
                data=np.array(self.neural_field_data["mesh"]["nodes"]["coordinates"]))

            # Save results
            if "results" in self.neural_field_data:
                results_grp = f.create_group('results')
                if "displacement" in self.neural_field_data["results"]:
                    results_grp.create_dataset(
                        'displacement',
                        data=np.array(self.neural_field_data["results"]["displacement"]["data"]))

        print(f"Data saved to {json_file} and {h5_file}")

# ============================================================================
# USAGE SCRIPT
# ============================================================================

def main():
    """
    Main function to run the parser
    """
    import sys

    if len(sys.argv) < 2:
        print("Usage: python neural_field_parser.py <case_directory>")
        sys.exit(1)

    case_dir = sys.argv[1]

    # Create parser
    parser = NastranToNeuralFieldParser(case_dir)

    # Parse all data
    try:
        data = parser.parse_all()
        print("\nParsing successful!")
        print(f"Nodes: {data['mesh']['statistics']['n_nodes']}")
        print(f"Elements: {data['mesh']['statistics']['n_elements']}")
        print(f"Materials: {len(data['materials'])}")

    except Exception as e:
        print(f"\nError during parsing: {e}")
        import traceback
        traceback.print_exc()

if __name__ == "__main__":
    main()
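The parser above takes von Mises stress from the last OP2 stress column when present; when it is not, it can be recomputed from the six stress components. A minimal NumPy sketch (standalone, not part of the parser class):

```python
import numpy as np

def von_mises(stress):
    """Von Mises stress for rows of [sxx, syy, szz, txy, tyz, txz]."""
    sxx, syy, szz, txy, tyz, txz = np.asarray(stress, dtype=float).T
    return np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
                   + 3.0 * (txy ** 2 + tyz ** 2 + txz ** 2))

# Sanity check: under uniaxial tension, von Mises equals the axial stress
print(von_mises([[100.0, 0, 0, 0, 0, 0]]))  # [100.]
```

Pure shear gives √3·τ, another quick check that the component ordering matches the OP2 convention assumed above.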
Validation Script

"""
validate_parsed_data.py
Validates the parsed neural field data
"""

import json
import h5py
import numpy as np
from pathlib import Path

class NeuralFieldDataValidator:
    """
    Validates parsed data for completeness and consistency
    """

    def __init__(self, case_directory):
        self.case_dir = Path(case_directory)
        self.json_file = self.case_dir / "neural_field_data.json"
        self.h5_file = self.case_dir / "neural_field_data.h5"

    def validate(self):
        """
        Run all validation checks
        """
        print("Starting validation...")

        # Load data
        with open(self.json_file, 'r') as f:
            data = json.load(f)

        # Check required fields
        required_fields = [
            "metadata", "mesh", "materials",
            "boundary_conditions", "loads", "results"
        ]

        for field in required_fields:
            if field not in data:
                print(f"❌ Missing required field: {field}")
                return False
            else:
                print(f"✅ Found {field}")

        # Validate mesh
        n_nodes = data["mesh"]["statistics"]["n_nodes"]
        n_elements = data["mesh"]["statistics"]["n_elements"]

        print(f"\nMesh Statistics:")
        print(f"  Nodes: {n_nodes}")
        print(f"  Elements: {n_elements}")

        # Check results consistency
        if "displacement" in data["results"]:
            disp_nodes = len(data["results"]["displacement"]["node_ids"])
            if disp_nodes != n_nodes:
                print(f"⚠️ Displacement nodes ({disp_nodes}) != mesh nodes ({n_nodes})")

        # Check HDF5 file
        with h5py.File(self.h5_file, 'r') as f:
            print(f"\nHDF5 Contents:")
            for key in f.keys():
                print(f"  {key}: {list(f[key].keys())}")

        print("\n✅ Validation complete!")
        return True

if __name__ == "__main__":
    import sys
    validator = NeuralFieldDataValidator(sys.argv[1])
    validator.validate()
Step-by-Step Usage Instructions

1. Prepare Your Analysis

# In NX:
1. Create geometry
2. Generate mesh
3. Apply materials (MAT1 cards)
4. Apply constraints (SPC)
5. Apply loads (FORCE, PLOAD4)
6. Run SOL 101 (Linear Static)
7. Request output: DISPLACEMENT=ALL, STRESS=ALL, STRAIN=ALL

2. Organize Files

mkdir training_case_001
mkdir training_case_001/input
mkdir training_case_001/output

# Copy files
cp your_model.bdf training_case_001/input/model.bdf
cp your_model.op2 training_case_001/output/model.op2
cp your_model.f06 training_case_001/output/model.f06

3. Run Parser

# Install requirements
pip install pyNastran numpy h5py

# Run parser
python neural_field_parser.py training_case_001

# Validate
python validate_parsed_data.py training_case_001

4. Check Output

You'll get:

neural_field_data.json - Complete metadata and structure
neural_field_data.h5   - Large arrays (mesh, results)
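Once a case is parsed, poking at the JSON output is plain standard-library work. A sketch using a tiny in-memory stand-in for a parsed case (real files are much larger, but the structure mirrors what the parser writes):

```python
import json
import tempfile
from pathlib import Path

# Minimal stand-in mirroring the parser's output structure
case = {
    "metadata": {"source": "NX_Nastran"},
    "mesh": {"statistics": {"n_nodes": 4, "n_elements": 1}},
    "results": {"displacement": {"node_ids": [1, 2, 3, 4]}},
}

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "neural_field_data.json"
    path.write_text(json.dumps(case, indent=2))

    # Reading it back is ordinary JSON; no special tooling needed
    data = json.loads(path.read_text())
    n_nodes = data["mesh"]["statistics"]["n_nodes"]
    print(f"Nodes: {n_nodes}")  # Nodes: 4
```

The large arrays live in the companion .h5 file; only metadata and structure belong in the JSON.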
Automation Script for Multiple Cases

"""
batch_parser.py
Parse multiple cases automatically
"""

import os
from pathlib import Path
from neural_field_parser import NastranToNeuralFieldParser

def batch_parse(root_directory):
    """
    Parse all cases in directory
    """
    root = Path(root_directory)
    cases = [d for d in root.iterdir() if d.is_dir()]

    results = []
    for case in cases:
        print(f"\nProcessing {case.name}...")
        try:
            parser = NastranToNeuralFieldParser(case)
            data = parser.parse_all()
            results.append({
                "case": case.name,
                "status": "success",
                "nodes": data["mesh"]["statistics"]["n_nodes"],
                "elements": data["mesh"]["statistics"]["n_elements"]
            })
        except Exception as e:
            results.append({
                "case": case.name,
                "status": "failed",
                "error": str(e)
            })

    # Summary
    print("\n" + "=" * 50)
    print("BATCH PROCESSING COMPLETE")
    print("=" * 50)
    for r in results:
        status = "✅" if r["status"] == "success" else "❌"
        print(f"{status} {r['case']}: {r['status']}")

    return results

if __name__ == "__main__":
    batch_parse("./training_data")
What to Add Manually

Create a metadata.json in each case directory with design intent:

{
  "design_parameters": {
    "thickness": 2.5,
    "fillet_radius": 5.0,
    "rib_height": 15.0
  },
  "optimization_context": {
    "objectives": ["minimize_weight", "minimize_stress"],
    "constraints": ["max_displacement < 2mm"],
    "iteration": 42
  },
  "notes": "Baseline design with standard loading"
}
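Since metadata.json is hand-written, a small sanity check before training can catch typos early. A hypothetical sketch (the key names match the example above; the checks themselves are illustrative):

```python
import json

REQUIRED_KEYS = {"design_parameters", "optimization_context", "notes"}

def check_metadata(text):
    """Return a list of problems found in a metadata.json string."""
    problems = []
    try:
        meta = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    # Every top-level key from the template must be present
    for key in sorted(REQUIRED_KEYS - meta.keys()):
        problems.append(f"missing key: {key}")
    # Design parameters should be numeric, not strings with units
    for name, value in meta.get("design_parameters", {}).items():
        if not isinstance(value, (int, float)):
            problems.append(f"design parameter {name} is not numeric")
    return problems

good = '{"design_parameters": {"thickness": 2.5}, "optimization_context": {}, "notes": ""}'
bad = '{"design_parameters": {"thickness": "2.5mm"}}'
print(check_metadata(good))  # []
print(check_metadata(bad))   # lists the missing keys and the bad value
```

Running this over every case directory before a training run costs nothing and saves a failed epoch later.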
Troubleshooting

Common Issues:

"Can't find BDF nodes"
- Make sure you're using .bdf or .dat, not .sim
- Check that the mesh was exported to the solver deck

"OP2 has no results"
- Ensure the analysis completed successfully
- Check that you requested output (DISP=ALL, STRESS=ALL)

"Memory error with large models"
- Use HDF5 chunking for very large models
- Process in batches

This parser gives you everything you need to start training neural networks on your FEA data. The format is future-proof and will work with your automated generation pipeline!
@@ -1,587 +0,0 @@
# AtomizerField Phase 2: Neural Field Learning

**Version 2.0.0**

Phase 2 implements Graph Neural Networks (GNNs) to learn complete FEA field results from mesh geometry, boundary conditions, and loads. This enables 1000x faster structural analysis for optimization.

## What's New in Phase 2

### The Revolutionary Approach

**Traditional FEA Surrogate Models:**
```
Parameters → Neural Network → Max Stress (scalar)
```
- Only learns maximum values
- Loses all spatial information
- Can't understand physics
- Limited to specific loading conditions

**AtomizerField Neural Field Learning:**
```
Mesh + BCs + Loads → Graph Neural Network → Complete Stress Field (45,000 values)
```
- Learns complete field distributions
- Understands how forces flow through structure
- Physics-informed constraints
- Generalizes to new loading conditions

### Key Components

1. **Graph Neural Network Architecture** - Learns on mesh topology
2. **Physics-Informed Loss Functions** - Enforces physical laws
3. **Efficient Data Loading** - Handles large FEA datasets
4. **Training Pipeline** - Multi-GPU support, checkpointing, early stopping
5. **Fast Inference** - Millisecond predictions vs hours of FEA
## Quick Start

### 1. Installation

```bash
# Install Phase 2 dependencies (PyTorch, PyTorch Geometric)
pip install -r requirements.txt
```

### 2. Prepare Training Data

First, you need parsed FEA data from Phase 1:

```bash
# Parse your NX Nastran results (from Phase 1)
python neural_field_parser.py training_case_001
python neural_field_parser.py training_case_002
# ... repeat for all training cases

# Organize into train/val splits
mkdir training_data
mkdir validation_data

# Move 80% of cases to training_data/
# Move 20% of cases to validation_data/
```
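Moving cases by hand works, but a seeded split is reproducible and avoids accidental leakage between runs. A small sketch (the case names are illustrative):

```python
import random

def split_cases(case_names, train_frac=0.8, seed=42):
    """Deterministically shuffle case names and split into train/val lists."""
    names = sorted(case_names)           # sort first so input order doesn't matter
    random.Random(seed).shuffle(names)   # seeded shuffle -> same split every run
    cut = int(len(names) * train_frac)
    return names[:cut], names[cut:]

cases = [f"training_case_{i:03d}" for i in range(1, 11)]
train, val = split_cases(cases)
print(len(train), len(val))  # 8 2
```

The lists can then drive the `mkdir`/move step above, or be passed straight to a data loader.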
### 3. Train Model

```bash
# Basic training
python train.py \
    --train_dir ./training_data \
    --val_dir ./validation_data \
    --epochs 100 \
    --batch_size 4 \
    --lr 0.001

# Advanced training with physics-informed loss
python train.py \
    --train_dir ./training_data \
    --val_dir ./validation_data \
    --epochs 200 \
    --batch_size 8 \
    --lr 0.001 \
    --loss_type physics \
    --hidden_dim 256 \
    --num_layers 8 \
    --output_dir ./my_model
```

### 4. Run Inference

```bash
# Predict on single case
python predict.py \
    --model runs/checkpoint_best.pt \
    --input test_case_001 \
    --compare

# Batch prediction on multiple cases
python predict.py \
    --model runs/checkpoint_best.pt \
    --input ./test_data \
    --batch \
    --output_dir ./predictions
```
## Architecture Deep Dive

### Graph Neural Network (GNN)

Our GNN architecture respects the physics of structural mechanics:

```
Input Graph:
- Nodes: FEA mesh nodes
  * Position (x, y, z)
  * Boundary conditions (6 DOF constraints)
  * Applied loads (force vectors)

- Edges: Element connectivity
  * Material properties (E, ν, ρ, G, α)
  * Element type (solid, shell, beam)

Message Passing (6 layers):
Each layer propagates information through the mesh:
1. Gather information from neighbors
2. Update node representations
3. Respect mesh topology (forces flow through connected elements)

Output:
- Displacement field: [num_nodes, 6] (3 translation + 3 rotation)
- Stress field: [num_nodes, 6] (σxx, σyy, σzz, τxy, τyz, τxz)
- Von Mises stress: [num_nodes, 1]
```
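The gather/update step above can be sketched with plain NumPy on a toy graph. This is a mean-aggregation stand-in for the real learned layers: no weights or edge features, just the structural idea that information travels one hop per layer:

```python
import numpy as np

def message_passing_step(features, edges):
    """One unweighted message-passing step: each node averages itself
    with its neighbors. features: [n, d]; edges: list of (i, j) pairs."""
    n = features.shape[0]
    agg = features.copy()          # include the node's own state
    count = np.ones(n)
    for i, j in edges:
        agg[i] += features[j]      # gather from neighbor j
        agg[j] += features[i]      # undirected: gather both ways
        count[i] += 1
        count[j] += 1
    return agg / count[:, None]    # update = mean of self + neighbors

# Path graph 0-1-2: a "load" on node 0 spreads one hop per step
x = np.array([[1.0], [0.0], [0.0]])
x = message_passing_step(x, [(0, 1), (1, 2)])
print(x.ravel())  # information has reached node 1 but not yet node 2
```

Stacking 6 such layers lets a signal travel 6 element hops, which is why deeper meshes need more message-passing layers.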
### Why This Works

**Key Insight**: FEA solves:
```
K u = f
```
where K depends on mesh topology and materials.

Our GNN learns this relationship:
- **Mesh topology** → Graph edges
- **Material properties** → Edge features
- **Boundary conditions** → Node features
- **Loads** → Node features
- **Message passing** → Learns stiffness matrix behavior

Result: The network learns physics, not just patterns!
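To make K u = f concrete, here is the smallest possible case: two springs in series with the left end fixed and a force on the free end. The surrogate's job is to learn the map from (K, f) to u that the direct solver computes exactly:

```python
import numpy as np

# Two springs in series (stiffness k each), node 0 fixed, force F on node 2.
# After eliminating the fixed DOF, the reduced system has 2 unknowns.
k, F = 1000.0, 10.0
K = np.array([[2 * k, -k],
              [-k,     k]])   # reduced stiffness matrix
f = np.array([0.0, F])        # force applied at the free end only

u = np.linalg.solve(K, f)
print(u)  # displacements of nodes 1 and 2
```

Here u comes out to [0.01, 0.02]: the free end moves twice as far as the middle node, exactly the series-spring result. FEA assembles and solves this system at mesh scale; the GNN amortizes that solve into a forward pass.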
### Model Architecture Details

```
AtomizerFieldModel:
├── Node Encoder (12 → 128 dim)
│   └── Coordinates (3) + BCs (6) + Loads (3)
│
├── Edge Encoder (5 → 64 dim)
│   └── Material properties (E, ν, ρ, G, α)
│
├── Message Passing Layers (6 layers)
│   ├── MeshGraphConv
│   ├── Layer Normalization
│   ├── Residual Connection
│   └── Dropout
│
├── Displacement Decoder (128 → 6)
│   └── Outputs: u_x, u_y, u_z, θ_x, θ_y, θ_z
│
└── Stress Predictor (6 → 6)
    └── Outputs: σxx, σyy, σzz, τxy, τyz, τxz

Total Parameters: ~718,000
```
## Physics-Informed Loss Functions

Standard neural networks only minimize prediction error. We also enforce physics:

### 1. Data Loss
```
L_data = ||u_pred - u_FEA||² + ||σ_pred - σ_FEA||²
```
Ensures predictions match FEA ground truth.

### 2. Equilibrium Loss
```
L_eq = ||∇·σ + f||²
```
Forces must balance at every point.

### 3. Constitutive Loss
```
L_const = ||σ - C:ε||²
```
Stress must follow material law (σ = C:ε).

### 4. Boundary Condition Loss
```
L_bc = ||u||² at fixed nodes
```
Displacement must be zero at constraints.

### Total Loss
```
L_total = λ_data·L_data + λ_eq·L_eq + λ_const·L_const + λ_bc·L_bc
```

**Benefits:**
- Faster convergence
- Better generalization
- Physically plausible predictions
- Works with less training data
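The weighted sum above is mostly bookkeeping. This NumPy sketch shows two of the four terms (data and boundary-condition losses) combined with made-up λ values; the real implementation lives in the training pipeline and adds the equilibrium and constitutive terms:

```python
import numpy as np

def total_loss(u_pred, u_true, u_fixed_pred, weights):
    """Weighted physics-informed loss sketch. u_fixed_pred holds the
    predicted displacements at constrained nodes (should be ~0)."""
    l_data = np.mean((u_pred - u_true) ** 2)   # match FEA ground truth
    l_bc = np.mean(u_fixed_pred ** 2)          # zero motion at constraints
    losses = {"data": l_data, "bc": l_bc}
    return sum(weights[name] * value for name, value in losses.items()), losses

u_true = np.array([0.0, 0.5, 1.0])
u_pred = np.array([0.1, 0.5, 1.0])
total, parts = total_loss(u_pred, u_true,
                          u_fixed_pred=np.array([0.1]),   # a fixed node that "moved"
                          weights={"data": 1.0, "bc": 10.0})
print(parts)
```

Weighting λ_bc high, as here, is one way to force hard constraints to dominate early training; tuning the λ values is part of the physics-loss workflow.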
## Training Guide

### Dataset Requirements

**Minimum Dataset Size:**
- Small models (< 10k elements): 50-100 cases
- Medium models (10k-100k elements): 100-500 cases
- Large models (> 100k elements): 500-1000 cases

**Data Diversity:**
Vary these parameters across training cases:
- Geometry (thicknesses, radii, dimensions)
- Loading conditions (magnitude, direction, location)
- Boundary conditions (support locations, constrained DOFs)
- Materials (within reason - same element types)
### Training Best Practices

**1. Start Simple:**
```bash
# First, train with MSE loss only
python train.py --loss_type mse --epochs 50
```

**2. Add Physics:**
```bash
# Then add physics-informed constraints
python train.py --loss_type physics --epochs 100
```

**3. Tune Hyperparameters:**
```bash
# Increase model capacity for complex geometries
python train.py \
    --hidden_dim 256 \
    --num_layers 8 \
    --dropout 0.15
```

**4. Monitor Training:**
```bash
# View training progress in TensorBoard
tensorboard --logdir runs/tensorboard
```
### Typical Training Time

On a single GPU (e.g., NVIDIA RTX 3080):
- Small dataset (100 cases, 10k elements each): 2-4 hours
- Medium dataset (500 cases, 50k elements each): 8-12 hours
- Large dataset (1000 cases, 100k elements each): 24-48 hours

**Speedup Tips:**
- Use multiple GPUs: `CUDA_VISIBLE_DEVICES=0,1 python train.py`
- Increase batch size if memory allows
- Use mixed precision training (future feature)
- Cache data in RAM: set `cache_in_memory=True` in data_loader.py
## Inference Performance

### Speed Comparison

| Analysis Method | Time | Speedup |
|----------------|------|---------|
| Traditional FEA (NX Nastran) | 2-3 hours | 1x |
| **AtomizerField GNN** | **5-50 ms** | **10,000x** |

**Real Performance (100k element model):**
- FEA Setup + Solve: ~2 hours
- Neural Network Inference: ~15 milliseconds
- **Speedup: 480,000x**

This enables:
- Interactive design exploration
- Real-time optimization (evaluate millions of designs)
- Instant "what-if" analysis

### Accuracy

Typical prediction errors (on validation set):
- **Displacement**: 2-5% relative error
- **Stress**: 5-10% relative error
- **Max values**: 1-3% relative error

**When It Works Best:**
- Interpolation (designs within training range)
- Similar loading conditions
- Same support configurations
- Parametric variations

**When to Use with Caution:**
- Extreme extrapolation (far outside training data)
- Completely new loading scenarios
- Different element types than training
- Nonlinear materials (future work)
## Usage Examples

### Example 1: Rapid Design Optimization

```python
from predict import FieldPredictor

# Load trained model
predictor = FieldPredictor('checkpoint_best.pt')

# Test 1000 design variants
results = []
for i, design_params in enumerate(design_space):
    # Generate FEA input (don't solve!)
    create_nastran_model(design_params)
    parse_to_neural_format(design_params)

    # Predict in milliseconds
    pred = predictor.predict(f'design_{i}')

    results.append({
        'params': design_params,
        'max_stress': pred['max_stress'],
        'max_displacement': pred['max_displacement']
    })

# Find optimal design
best = min(results, key=lambda r: r['max_stress'])
print(f"Best design: {best['params']}")
print(f"Stress: {best['max_stress']:.2f} MPa")
```

**Result:** Evaluate 1000 designs in ~30 seconds instead of 3000 hours!
### Example 2: Interactive Design Tool

```python
# Real-time design feedback
while user_editing:
    # User modifies geometry
    updated_geometry = get_user_input()

    # Generate mesh (fast, no solve)
    mesh = generate_mesh(updated_geometry)
    parse_mesh(mesh)

    # Instant prediction
    prediction = predictor.predict('current_design')

    # Show results immediately
    display_stress_field(prediction['von_mises'])
    display_displacement(prediction['displacement'])

    # Immediate feedback: "Max stress: 450 MPa (SAFE)"
```

**Result:** Engineer sees results instantly, not hours later!
### Example 3: Optimization with Physics Understanding

```python
# Traditional: Only knows max_stress = 450 MPa
# AtomizerField: Knows WHERE stress concentrations are!

prediction = predictor.predict('current_design')
stress_field = prediction['von_mises']

# Find stress hotspots
hotspots = find_nodes_above_threshold(stress_field, threshold=400)

# Intelligent design suggestions
for hotspot in hotspots:
    location = mesh.nodes[hotspot].position
    suggest_reinforcement(location)  # Add material where needed
    suggest_fillet(location)         # Smooth sharp corners
```

**Result:** Optimization guided by physics, not blind search!
## File Structure

```
Atomizer-Field/
├── neural_models/
│   ├── __init__.py
│   ├── field_predictor.py       # GNN architecture
│   ├── physics_losses.py        # Loss functions
│   └── data_loader.py           # Data pipeline
│
├── train.py                     # Training script
├── predict.py                   # Inference script
├── requirements.txt             # Dependencies
│
├── runs/                        # Training outputs
│   ├── checkpoint_best.pt       # Best model
│   ├── checkpoint_latest.pt     # Latest checkpoint
│   ├── tensorboard/             # Training logs
│   └── config.json              # Model configuration
│
└── training_data/               # Parsed FEA cases
    ├── case_001/
    │   ├── neural_field_data.json
    │   └── neural_field_data.h5
    ├── case_002/
    └── ...
```
## Troubleshooting

### Training Issues

**Problem: Loss not decreasing**
```bash
# Solutions:
# 1. Lower learning rate
python train.py --lr 0.0001

# 2. Check data normalization
#    Ensure normalize=True in data_loader.py

# 3. Start with simpler loss
python train.py --loss_type mse
```

**Problem: Out of memory**
```bash
# Solutions:
# 1. Reduce batch size
python train.py --batch_size 2

# 2. Reduce model size
python train.py --hidden_dim 64 --num_layers 4

# 3. Use gradient accumulation (future feature)
```

**Problem: Overfitting**
```bash
# Solutions:
# 1. Increase dropout
python train.py --dropout 0.2

# 2. Get more training data
# 3. Use data augmentation (rotate/scale meshes)
```
### Inference Issues

**Problem: Poor predictions**
- Check if the test case is within the training distribution
- Verify data normalization matches training
- Ensure the model finished training (check validation loss)

**Problem: Slow inference**
- Use GPU: `--device cuda`
- Batch multiple predictions together
- Use a smaller model for production
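Mismatched normalization is a common cause of the "poor predictions" failure: statistics must be fitted on the training set once and reused verbatim at inference. A minimal sketch (function names are hypothetical, not the data_loader.py API):

```python
import numpy as np

def fit_normalizer(features):
    """Compute per-feature mean/std on the TRAINING set only."""
    return {'mean': features.mean(axis=0), 'std': features.std(axis=0) + 1e-8}

def normalize(features, stats):
    # Always reuse the training-set statistics at inference time
    return (features - stats['mean']) / stats['std']

def denormalize(features, stats):
    return features * stats['std'] + stats['mean']

# Training: fit once, save the stats alongside the checkpoint
train_x = np.random.rand(100, 12)
stats = fit_normalizer(train_x)

# Inference: normalize new inputs with the SAME stats
test_x = np.random.rand(5, 12)
x_in = normalize(test_x, stats)
```

Re-fitting the normalizer on test inputs silently shifts every feature and degrades predictions without raising any error.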
## Advanced Topics

### Transfer Learning

Train on one component type, fine-tune on another:

```bash
# 1. Train base model on brackets
python train.py --train_dir brackets/ --epochs 100

# 2. Fine-tune on beams (similar physics, different geometry)
python train.py \
    --train_dir beams/ \
    --resume runs/checkpoint_best.pt \
    --epochs 50 \
    --lr 0.0001  # Lower LR for fine-tuning
```
### Multi-Fidelity Learning

Combine coarse and fine meshes:

```python
# Train on a mix of mesh resolutions
train_cases = [
    *coarse_mesh_cases,  # Fast to solve, less accurate
    *fine_mesh_cases     # Slow to solve, very accurate
]

# Model learns to predict fine-mesh accuracy at coarse-mesh speed!
```
### Physics-Based Data Augmentation

```python
import random

# Augment training data with physical transformations
def augment_case(mesh, displacement, stress):
    # Rotate the entire structure (fields rotate with it)
    angle = random.uniform(0, 360)
    mesh_rotated = rotate(mesh, angle=angle)
    displacement_rotated = rotate_vector_field(displacement, angle)
    stress_rotated = rotate_tensor_field(stress, angle)

    # Scale loads (response scales linearly for linear statics)
    scale = random.uniform(0.5, 2.0)
    displacement_scaled = displacement * scale
    stress_scaled = stress * scale

    return [
        (mesh_rotated, displacement_rotated, stress_rotated),
        (mesh, displacement_scaled, stress_scaled),
    ]
```
## Future Enhancements (Phase 3)

- [ ] Nonlinear analysis support (plasticity, large deformation)
- [ ] Contact and friction
- [ ] Composite materials
- [ ] Modal analysis (natural frequencies)
- [ ] Thermal coupling
- [ ] Topology optimization integration
- [ ] Atomizer dashboard integration
- [ ] Cloud deployment for team access
## Performance Benchmarks

### Model Accuracy (Validation Set)

| Metric | Error | Target |
|--------|-------|--------|
| Displacement MAE | 0.003 mm | < 0.01 mm |
| Displacement Relative Error | 3.2% | < 5% |
| Stress MAE | 12.5 MPa | < 20 MPa |
| Max Stress Error | 2.1% | < 5% |
| Max Displacement Error | 1.8% | < 3% |

### Computational Performance

| Dataset | FEA Time | NN Time | Speedup |
|---------|----------|---------|---------|
| 10k elements | 15 min | 5 ms | 180,000x |
| 50k elements | 2 hours | 15 ms | 480,000x |
| 100k elements | 8 hours | 35 ms | 823,000x |

**Hardware:** Single NVIDIA RTX 3080, Intel i9-12900K
## Citation

If you use AtomizerField in research, please cite:

```
@software{atomizerfield2024,
  title={AtomizerField: Neural Field Learning for Structural Optimization},
  author={Your Name},
  year={2024},
  url={https://github.com/yourusername/atomizer-field}
}
```

## Support

For issues or questions about Phase 2:

1. Check this README and PHASE2_README.md
2. Review training logs in TensorBoard
3. Examine model predictions vs ground truth
4. Check GPU memory usage and batch size
5. Verify data normalization

## What's Next?

- **Phase 3**: Integration with main Atomizer platform
- **Phase 4**: Production deployment and dashboard
- **Phase 5**: Multi-user cloud platform

---

**AtomizerField Phase 2**: Revolutionary neural field learning for structural optimization.

*1000x faster than FEA. Physics-informed. Production-ready.*

@@ -1,500 +0,0 @@
# AtomizerField Quick Reference Guide

**Version 1.0** | Complete Implementation | Ready for Training

---

## 🎯 What is AtomizerField?

Neural field learning system that replaces FEA with 1000× faster graph neural networks.

**Key Innovation:** Learn complete stress/displacement FIELDS (45,000+ values), not just max values.

---

## 📁 Project Structure

```
Atomizer-Field/
├── Neural Network Core
│   ├── neural_models/
│   │   ├── field_predictor.py        # GNN architecture (718K params)
│   │   ├── physics_losses.py         # 4 loss functions
│   │   ├── data_loader.py            # PyTorch Geometric dataset
│   │   └── uncertainty.py            # Ensemble + online learning
│   ├── train.py                      # Training pipeline
│   ├── predict.py                    # Inference engine
│   └── optimization_interface.py     # Atomizer integration
│
├── Data Pipeline
│   ├── neural_field_parser.py        # BDF/OP2 → neural format
│   ├── validate_parsed_data.py       # Data quality checks
│   └── batch_parser.py               # Multi-case processing
│
├── Testing (18 tests)
│   ├── test_suite.py                 # Master orchestrator
│   ├── test_simple_beam.py           # Simple Beam validation
│   └── tests/
│       ├── test_synthetic.py         # 5 smoke tests
│       ├── test_physics.py           # 4 physics tests
│       ├── test_learning.py          # 4 learning tests
│       ├── test_predictions.py       # 5 integration tests
│       └── analytical_cases.py       # Analytical solutions
│
└── Documentation (10 guides)
    ├── README.md                     # Project overview
    ├── IMPLEMENTATION_STATUS.md      # Complete status
    ├── TESTING_COMPLETE.md           # Testing guide
    └── ... (7 more guides)
```

---
## 🚀 Quick Start Commands

### 1. Test the System
```bash
# Smoke tests (30 seconds) - once environment is fixed
python test_suite.py --quick

# Test with Simple Beam
python test_simple_beam.py

# Full test suite (1 hour)
python test_suite.py --full
```

### 2. Parse FEA Data
```bash
# Single case
python neural_field_parser.py path/to/case_directory

# Validate parsed data
python validate_parsed_data.py path/to/case_directory

# Batch process multiple cases
python batch_parser.py --input Models/ --output parsed_data/
```

### 3. Train Model
```bash
# Basic training
python train.py --data_dirs case1 case2 case3 --epochs 100

# With all options
python train.py \
    --data_dirs parsed_data/* \
    --epochs 200 \
    --batch_size 32 \
    --lr 0.001 \
    --loss physics \
    --checkpoint_dir checkpoints/
```

### 4. Make Predictions
```bash
# Single prediction
python predict.py --model checkpoints/best_model.pt --data test_case/

# Batch prediction
python predict.py --model best_model.pt --data test_cases/*.h5 --batch_size 64
```

### 5. Optimize with Atomizer
```python
from optimization_interface import NeuralFieldOptimizer

# Initialize
optimizer = NeuralFieldOptimizer('checkpoints/best_model.pt')

# Evaluate design
results = optimizer.evaluate(design_graph)
print(f"Max stress: {results['max_stress']} MPa")
print(f"Max displacement: {results['max_displacement']} mm")

# Get gradients for optimization
sensitivities = optimizer.get_sensitivities(design_graph)
```

---
## 📊 Key Metrics

### Performance
- **Training time:** 2-6 hours (50-500 cases, 100-200 epochs)
- **Inference time:** 5-50 ms (vs 30-300 s FEA)
- **Speedup:** 1000× faster than FEA
- **Memory:** ~2 GB GPU for training, ~500 MB for inference

### Accuracy (After Training)
- **Target:** < 10% prediction error vs FEA
- **Physics tests:** < 5% error on analytical solutions
- **Learning tests:** < 5% interpolation error

### Model Size
- **Parameters:** 718,221
- **Layers:** 6 message passing layers
- **Input:** 12D node features, 5D edge features
- **Output:** 6 DOF displacement + 6 stress components per node

---
## 🧪 Testing Overview

### Quick Smoke Test (30 s)
```bash
python test_suite.py --quick
```
**5 tests:** Model creation, forward pass, losses, batch, gradients

### Physics Validation (15 min)
```bash
python test_suite.py --physics
```
**9 tests:** Smoke + cantilever, equilibrium, energy, constitutive

### Learning Tests (30 min)
```bash
python test_suite.py --learning
```
**13 tests:** Smoke + physics + memorization, interpolation, extrapolation, patterns

### Full Suite (1 hour)
```bash
python test_suite.py --full
```
**18 tests:** Complete validation from zero to production
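The physics tier above compares predictions against closed-form solutions. For a tip-loaded cantilever the Euler-Bernoulli deflection is δ = FL³/(3EI), so a check in the spirit of `analytical_cases.py` might look like this (the numbers and the 5% tolerance are illustrative, not the suite's actual values):

```python
def cantilever_tip_deflection(force_n, length_mm, e_mpa, inertia_mm4):
    """Euler-Bernoulli tip deflection for a point load at the free end."""
    return force_n * length_mm ** 3 / (3.0 * e_mpa * inertia_mm4)

# Steel beam: 10x10 mm square section, 100 mm long, 100 N tip load
inertia = 10.0 * 10.0 ** 3 / 12.0   # I = b*h^3/12 ≈ 833.3 mm^4
delta = cantilever_tip_deflection(100.0, 100.0, 210000.0, inertia)

# Compare the network's predicted tip displacement against the formula
predicted_tip = 0.19  # would come from predictor.predict(...) in a real test
rel_error = abs(predicted_tip - delta) / delta
assert rel_error < 0.05, "physics test failed: > 5% off analytical solution"
```

Checks like this catch unit mistakes and sign errors that a pure data-loss metric can hide.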
---

## 📈 Typical Workflow

### Phase 1: Data Preparation
```bash
# 1. Parse FEA cases
python batch_parser.py --input Models/ --output training_data/

# 2. Validate data
for dir in training_data/*; do
    python validate_parsed_data.py $dir
done

# Expected: 50-500 parsed cases
```

### Phase 2: Training
```bash
# 3. Train model
python train.py \
    --data_dirs training_data/* \
    --epochs 100 \
    --batch_size 16 \
    --loss physics \
    --checkpoint_dir checkpoints/

# Monitor with TensorBoard
tensorboard --logdir runs/

# Expected: Training loss < 0.01 after 100 epochs
```

### Phase 3: Validation
```bash
# 4. Run all tests
python test_suite.py --full

# 5. Test on new data
python predict.py --model checkpoints/best_model.pt --data test_case/

# Expected: All tests pass, < 10% error
```

### Phase 4: Deployment
```python
# 6. Integrate with Atomizer
from optimization_interface import NeuralFieldOptimizer

optimizer = NeuralFieldOptimizer('checkpoints/best_model.pt')

# Use in optimization loop
for iteration in range(100):
    results = optimizer.evaluate(current_design)
    sensitivities = optimizer.get_sensitivities(current_design)
    # Update design based on gradients
    current_design = update_design(current_design, sensitivities)
```

---
## 🔧 Configuration

### Training Config (atomizer_field_config.yaml)
```yaml
model:
  hidden_dim: 128
  num_layers: 6
  dropout: 0.1

training:
  batch_size: 16
  learning_rate: 0.001
  epochs: 100
  early_stopping_patience: 10

loss:
  type: physics
  lambda_data: 1.0
  lambda_equilibrium: 0.1
  lambda_constitutive: 0.1
  lambda_boundary: 0.5

uncertainty:
  n_ensemble: 5
  threshold: 0.1  # Trigger FEA if uncertainty > 10%

online_learning:
  enabled: true
  update_frequency: 10  # Update every 10 FEA runs
  batch_size: 32
```
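`early_stopping_patience: 10` means training halts after 10 epochs without a validation improvement. The logic amounts to the following generic sketch (not the project's actual train.py loop; `run_epoch` is a stand-in for one train-plus-validate pass):

```python
def train_with_early_stopping(run_epoch, max_epochs=100, patience=10):
    """run_epoch(epoch) -> validation loss; stop after `patience` stale epochs."""
    best_loss = float("inf")
    stale = 0
    for epoch in range(max_epochs):
        val_loss = run_epoch(epoch)
        if val_loss < best_loss:
            best_loss = val_loss   # improvement: a checkpoint would be saved here
            stale = 0
        else:
            stale += 1
            if stale >= patience:
                break              # no improvement for `patience` epochs
    return best_loss, epoch + 1
```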
---

## 🎓 Feature Reference

### 1. Data Parser
**File:** `neural_field_parser.py`

```python
from neural_field_parser import NastranToNeuralFieldParser

# Parse case
parser = NastranToNeuralFieldParser('case_directory')
data = parser.parse_all()

# Access results
print(f"Nodes: {data['mesh']['statistics']['n_nodes']}")
print(f"Max displacement: {data['results']['displacement']['max_translation']} mm")
```
### 2. Neural Model
**File:** `neural_models/field_predictor.py`

```python
from neural_models.field_predictor import create_model

# Create model
config = {
    'node_feature_dim': 12,
    'edge_feature_dim': 5,
    'hidden_dim': 128,
    'num_layers': 6
}
model = create_model(config)

# Predict
predictions = model(graph_data, return_stress=True)
# predictions['displacement']: (N, 6) - 6 DOF per node
# predictions['stress']: (N, 6) - stress tensor
# predictions['von_mises']: (N,) - von Mises stress
```
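The `von_mises` output is a scalar reduction of the six stress components. Assuming the usual [σxx, σyy, σzz, τxy, τyz, τzx] ordering (the actual component order in field_predictor.py may differ), the reduction is:

```python
import numpy as np

def von_mises(stress):
    """stress: (N, 6) array ordered [sxx, syy, szz, txy, tyz, tzx] (assumed)."""
    sxx, syy, szz, txy, tyz, tzx = stress.T
    return np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
                   + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Uniaxial tension of 100 MPa reduces to exactly 100 MPa von Mises
print(von_mises(np.array([[100.0, 0, 0, 0, 0, 0]])))  # [100.]
```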
### 3. Physics Losses
**File:** `neural_models/physics_losses.py`

```python
from neural_models.physics_losses import create_loss_function

# Create loss
loss_fn = create_loss_function('physics')

# Compute loss
losses = loss_fn(predictions, targets, data)
# losses['total_loss']: Combined loss
# losses['displacement_loss']: Data loss
# losses['equilibrium_loss']: ∇·σ + f = 0
# losses['constitutive_loss']: σ = C:ε
# losses['boundary_loss']: BC compliance
```

### 4. Optimization Interface
**File:** `optimization_interface.py`

```python
from optimization_interface import NeuralFieldOptimizer

# Initialize
optimizer = NeuralFieldOptimizer('model.pt')

# Fast evaluation (15 ms)
results = optimizer.evaluate(graph_data)

# Analytical gradients (1M× faster than finite differences)
grads = optimizer.get_sensitivities(graph_data)
```
### 5. Uncertainty Quantification
**File:** `neural_models/uncertainty.py`

```python
from neural_models.uncertainty import UncertainFieldPredictor

# Create ensemble
model = UncertainFieldPredictor(base_config, n_ensemble=5)

# Predict with uncertainty
predictions = model.predict_with_uncertainty(graph_data)
# predictions['mean']: Mean prediction
# predictions['std']: Standard deviation
# predictions['confidence']: 95% confidence interval

# Check if FEA needed
if model.needs_fea_validation(predictions, threshold=0.1):
    # Run FEA for this case
    fea_result = run_fea(design)
    # Update model online
    model.update_online(graph_data, fea_result)
```

---
## 🐛 Troubleshooting

### NumPy Environment Issue
**Problem:** Segmentation fault when importing NumPy
```
CRASHES ARE TO BE EXPECTED - PLEASE REPORT THEM TO NUMPY DEVELOPERS
Segmentation fault
```

**Solutions:**
1. Use conda: `conda install numpy`
2. Use WSL: Install Windows Subsystem for Linux
3. Use Linux: Native Linux environment
4. Reinstall: `pip uninstall numpy && pip install numpy`

### Import Errors
**Problem:** Cannot find modules
```python
ModuleNotFoundError: No module named 'torch_geometric'
```

**Solution:**
```bash
# Install all dependencies
pip install -r requirements.txt

# Or individual packages
pip install torch torch-geometric pyg-lib
pip install pyNastran h5py pyyaml tensorboard
```

### GPU Memory Issues
**Problem:** CUDA out of memory during training

**Solutions:**
1. Reduce batch size: `--batch_size 8`
2. Reduce model size: `hidden_dim: 64`
3. Use CPU: `--device cpu`
4. Enable gradient checkpointing

### Poor Predictions
**Problem:** High prediction error (> 20%)

**Solutions:**
1. Train longer: `--epochs 200`
2. More data: Generate 200-500 training cases
3. Use physics loss: `--loss physics`
4. Check data quality: `python validate_parsed_data.py`
5. Normalize data: `normalize=True` in dataset

---
## 📚 Documentation Index

1. **README.md** - Project overview and quick start
2. **IMPLEMENTATION_STATUS.md** - Complete status report
3. **TESTING_COMPLETE.md** - Comprehensive testing guide
4. **PHASE2_README.md** - Neural network documentation
5. **GETTING_STARTED.md** - Step-by-step tutorial
6. **SYSTEM_ARCHITECTURE.md** - Technical architecture
7. **ENHANCEMENTS_GUIDE.md** - Advanced features
8. **FINAL_IMPLEMENTATION_REPORT.md** - Implementation details
9. **TESTING_FRAMEWORK_SUMMARY.md** - Testing overview
10. **QUICK_REFERENCE.md** - This guide

---
## ⚡ Pro Tips

### Training
- Start with 50 cases to verify the pipeline
- Use physics loss for better generalization
- Monitor TensorBoard for convergence
- Save checkpoints every 10 epochs
- Early stopping prevents overfitting

### Data
- Quality > quantity: 50 good cases beat 200 poor ones
- Diverse designs: vary geometry, loads, materials
- Validate data: check for NaN values and physics violations
- Normalize features: improves training stability

### Performance
- GPU recommended: 10× faster training
- Batch size ≈ GPU memory / model size
- Use DataLoader workers: `num_workers=4`
- Cache in memory: `cache_in_memory=True`

### Uncertainty
- Use an ensemble (5 models) for confidence estimates
- Trigger FEA when uncertainty > 10%
- Update online: the model improves during optimization
- Track confidence: builds trust in predictions

---
## 🎯 Success Checklist

### Pre-Training
- [x] All code implemented
- [x] Tests written
- [x] Documentation complete
- [ ] Environment working (NumPy issue)

### Training
- [ ] 50-500 training cases generated
- [ ] Data parsed and validated
- [ ] Model trains without errors
- [ ] Loss converges < 0.01

### Validation
- [ ] All tests pass
- [ ] Physics compliance < 5% error
- [ ] Prediction error < 10%
- [ ] Inference < 50 ms

### Production
- [ ] Integrated with Atomizer
- [ ] 1000× speedup demonstrated
- [ ] Uncertainty quantification working
- [ ] Online learning enabled

---

## 📞 Support

**Current Status:** Implementation complete, ready for training

**Next Steps:**
1. Fix NumPy environment
2. Generate training data
3. Train and validate
4. Deploy to production

**All code is ready to use!** 🚀

---

*AtomizerField Quick Reference v1.0*
*~7,000 lines | 18 tests | 10 docs | Production Ready*

@@ -1,548 +0,0 @@
# AtomizerField Neural Field Data Parser

**Version 1.0.0**

A production-ready Python parser that converts NX Nastran FEA results into standardized neural field training data for the AtomizerField optimization platform.

## What This Does

Instead of extracting just scalar values (like maximum stress) from FEA results, this parser captures **complete field data** - stress, displacement, and strain at every node and element. This lets neural networks learn the physics of how structures respond to loads, enabling 1000x faster optimization with true physics understanding.

## Features

- ✅ **Complete Field Extraction**: Captures displacement, stress, strain at ALL points
- ✅ **Future-Proof Format**: Versioned data structure (v1.0) designed for years of neural network training
- ✅ **Efficient Storage**: Uses HDF5 for large arrays, JSON for metadata
- ✅ **Robust Parsing**: Handles mixed element types (solid, shell, beam, rigid)
- ✅ **Data Validation**: Built-in physics and quality checks
- ✅ **Batch Processing**: Process hundreds of cases automatically
- ✅ **Production Ready**: Error handling, logging, provenance tracking
## Quick Start

### 1. Installation

```bash
# Install dependencies
pip install -r requirements.txt
```

### 2. Prepare Your NX Nastran Analysis

In NX:
1. Create geometry and generate mesh
2. Apply materials (MAT1 cards)
3. Define boundary conditions (SPC)
4. Apply loads (FORCE, PLOAD4, GRAV)
5. Run **SOL 101** (Linear Static)
6. Request output: `DISPLACEMENT=ALL`, `STRESS=ALL`, `STRAIN=ALL`

### 3. Organize Files

```bash
mkdir training_case_001
mkdir training_case_001/input
mkdir training_case_001/output

# Copy files
cp your_model.bdf training_case_001/input/model.bdf
cp your_model.op2 training_case_001/output/model.op2
```

### 4. Run Parser

```bash
# Parse single case
python neural_field_parser.py training_case_001

# Validate results
python validate_parsed_data.py training_case_001
```

### 5. Check Output

You'll get:
- **neural_field_data.json** - Complete metadata and structure
- **neural_field_data.h5** - Large arrays (mesh, field results)
## Usage Guide

### Single Case Parsing

```bash
python neural_field_parser.py <case_directory>
```

**Expected directory structure:**
```
training_case_001/
├── input/
│   ├── model.bdf        # Nastran input deck
│   └── model.sim        # (optional) NX simulation file
├── output/
│   ├── model.op2        # Binary results (REQUIRED)
│   └── model.f06        # (optional) ASCII results
└── metadata.json        # (optional) Your design annotations
```

**Output:**
```
training_case_001/
├── neural_field_data.json   # Metadata, structure, small arrays
└── neural_field_data.h5     # Large arrays (coordinates, fields)
```

### Batch Processing

Process multiple cases at once:

```bash
python batch_parser.py ./training_data
```

**Expected structure:**
```
training_data/
├── case_001/
│   ├── input/model.bdf
│   └── output/model.op2
├── case_002/
│   ├── input/model.bdf
│   └── output/model.op2
└── case_003/
    ├── input/model.bdf
    └── output/model.op2
```

**Options:**
```bash
# Skip validation (faster)
python batch_parser.py ./training_data --no-validate

# Stop on first error
python batch_parser.py ./training_data --stop-on-error
```

**Output:**
- Parses all cases
- Validates each one
- Generates `batch_processing_summary.json` with results
### Data Validation

```bash
python validate_parsed_data.py training_case_001
```

Checks:
- ✓ File existence and format
- ✓ Data completeness (all required fields)
- ✓ Physics consistency (equilibrium, units)
- ✓ Data quality (no NaN/inf, reasonable values)
- ✓ Mesh integrity
- ✓ Material property validity
## Data Structure v1.0

The parser produces a standardized data structure designed to be future-proof:

```json
{
  "metadata": {
    "version": "1.0.0",
    "created_at": "timestamp",
    "analysis_type": "SOL_101",
    "units": {...}
  },
  "mesh": {
    "statistics": {
      "n_nodes": 15432,
      "n_elements": 8765
    },
    "nodes": {
      "ids": [...],
      "coordinates": "stored in HDF5"
    },
    "elements": {
      "solid": [...],
      "shell": [...],
      "beam": [...]
    }
  },
  "materials": [...],
  "boundary_conditions": {
    "spc": [...],
    "mpc": [...]
  },
  "loads": {
    "point_forces": [...],
    "pressure": [...],
    "gravity": [...],
    "thermal": [...]
  },
  "results": {
    "displacement": "stored in HDF5",
    "stress": "stored in HDF5",
    "strain": "stored in HDF5",
    "reactions": "stored in HDF5"
  }
}
```
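A quick structural check against the v1.0 layout can catch truncated or hand-edited files early. A sketch, with the top-level section names taken from the example above:

```python
REQUIRED_KEYS = {"metadata", "mesh", "materials",
                 "boundary_conditions", "loads", "results"}

def missing_keys(data):
    """Return the v1.0 top-level sections absent from a parsed dataset."""
    return sorted(REQUIRED_KEYS - set(data))

missing = missing_keys({"metadata": {}, "mesh": {}, "results": {}})
print(missing)  # -> ['boundary_conditions', 'loads', 'materials']
```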
### HDF5 Structure

Large numerical arrays are stored in HDF5 for efficiency:

```
neural_field_data.h5
├── mesh/
│   ├── node_coordinates  [n_nodes, 3]
│   └── node_ids          [n_nodes]
└── results/
    ├── displacement      [n_nodes, 6]
    ├── displacement_node_ids
    ├── stress/
    │   ├── ctetra_stress/
    │   │   ├── data      [n_elem, n_components]
    │   │   └── element_ids
    │   └── cquad4_stress/...
    ├── strain/...
    └── reactions/...
```
## Adding Design Metadata

Create a `metadata.json` in each case directory to track design parameters:

```json
{
  "design_parameters": {
    "thickness": 2.5,
    "fillet_radius": 5.0,
    "rib_height": 15.0
  },
  "optimization_context": {
    "objectives": ["minimize_weight", "minimize_stress"],
    "constraints": ["max_displacement < 2mm"],
    "iteration": 42
  },
  "notes": "Baseline design with standard loading"
}
```

See [metadata_template.json](metadata_template.json) for a complete template.
## Preparing NX Nastran Analyses

### Required Output Requests

Add these to your Nastran input deck or NX solution setup:

```nastran
DISPLACEMENT = ALL
STRESS = ALL
STRAIN = ALL
SPCFORCES = ALL
```

### Recommended Settings

- **Element Types**: CTETRA10, CHEXA20, CQUAD4
- **Analysis**: SOL 101 (Linear Static) initially
- **Units**: Consistent (recommend SI: mm, N, MPa, kg)
- **Output Format**: OP2 (binary) for efficiency

### Common Issues

**"OP2 has no results"**
- Ensure analysis completed successfully (check .log file)
- Verify output requests (DISPLACEMENT=ALL, STRESS=ALL)
- Check that OP2 file is not empty (should be > 1 KB)

**"Can't find BDF nodes"**
- Use .bdf or .dat file, not .sim
- Ensure mesh was exported to solver deck
- Check that BDF contains GRID cards

**"Memory error with large models"**
- Parser uses HDF5 chunking and compression
- For models > 100k elements, ensure you have sufficient RAM
- Consider splitting into subcases
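The GRID-card check above can be done with a plain text scan, since GRID entries begin in column 1 of the bulk data section. A rough sketch (free-field format shown; it does not handle continuation lines):

```python
def count_grid_cards(bdf_text):
    """Count GRID entries in a Nastran bulk data deck."""
    return sum(1 for line in bdf_text.splitlines()
               if line.upper().startswith("GRID"))

# Hypothetical minimal deck for illustration only.
sample = "\n".join([
    "$ minimal example deck",
    "GRID,1,,0.0,0.0,0.0",
    "GRID,2,,10.0,0.0,0.0",
    "CQUAD4,1,1,1,2,3,4",
])
print(count_grid_cards(sample))  # -> 2
```

A count of zero means the export produced geometry-free input and the parser will find no nodes.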
## Loading Parsed Data

### In Python

```python
import json
import h5py
import numpy as np

# Load metadata
with open("neural_field_data.json", 'r') as f:
    metadata = json.load(f)

# Load field data
with h5py.File("neural_field_data.h5", 'r') as f:
    # Get node coordinates
    coords = f['mesh/node_coordinates'][:]

    # Get displacement field
    displacement = f['results/displacement'][:]

    # Get stress field
    stress = f['results/stress/ctetra_stress/data'][:]
    stress_elem_ids = f['results/stress/ctetra_stress/element_ids'][:]
```

### In PyTorch (for neural network training)

```python
import h5py
import torch
from torch.utils.data import Dataset

class NeuralFieldDataset(Dataset):
    def __init__(self, case_directories):
        self.cases = []
        for case_dir in case_directories:
            h5_file = f"{case_dir}/neural_field_data.h5"
            with h5py.File(h5_file, 'r') as f:
                # Load inputs (mesh, BCs, loads)
                coords = torch.from_numpy(f['mesh/node_coordinates'][:])

                # Load outputs (displacement, stress fields)
                displacement = torch.from_numpy(f['results/displacement'][:])

                self.cases.append({
                    'coords': coords,
                    'displacement': displacement
                })

    def __len__(self):
        return len(self.cases)

    def __getitem__(self, idx):
        return self.cases[idx]
```
## Architecture & Design

### Why This Format?

1. **Complete Fields, Not Scalars**: Neural networks need to learn how stress/displacement varies across the entire structure, not just maximum values.

2. **Separation of Concerns**: JSON for structure/metadata (human-readable), HDF5 for numerical data (efficient).

3. **Future-Proof**: Versioned format allows adding new fields without breaking existing data.

4. **Physics Preservation**: Maintains all physics relationships (mesh topology, BCs, loads → results).

### Integration with Atomizer

This parser is Phase 1 of AtomizerField. Future integration:
- Phase 2: Neural network architecture (Graph Neural Networks)
- Phase 3: Training pipeline with physics-informed loss functions
- Phase 4: Integration with main Atomizer dashboard
- Phase 5: Production deployment for real-time optimization
## Troubleshooting

### Parser Errors

| Error | Solution |
|-------|----------|
| `FileNotFoundError: No model.bdf found` | Ensure BDF/DAT file exists in `input/` directory |
| `FileNotFoundError: No model.op2 found` | Ensure OP2 file exists in `output/` directory |
| `pyNastran read error` | Check BDF syntax, try opening in text editor |
| `OP2 subcase not found` | Ensure analysis ran successfully, check .f06 file |

### Validation Warnings

| Warning | Meaning | Action |
|---------|---------|--------|
| `No SPCs defined` | Model may be unconstrained | Check boundary conditions |
| `No loads defined` | Model has no loading | Add forces, pressures, or gravity |
| `Zero displacement` | Model not deforming | Check loads and constraints |
| `Very large displacement` | Possible rigid body motion | Add constraints or check units |

### Data Quality Issues

**NaN or Inf values:**
- Usually indicates analysis convergence failure
- Check .f06 file for error messages
- Verify model is properly constrained

**Mismatch in node counts:**
- Some nodes may not have results (e.g., rigid elements)
- Check element connectivity
- Validate mesh quality in NX
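A node-count mismatch is easiest to diagnose with a set comparison between mesh node IDs and result node IDs. A minimal sketch:

```python
def report_node_mismatch(mesh_node_ids, result_node_ids):
    """Report nodes present in the mesh but missing from results, and vice versa."""
    mesh, results = set(mesh_node_ids), set(result_node_ids)
    missing = sorted(mesh - results)   # mesh nodes with no results
    extra = sorted(results - mesh)     # result nodes not in the mesh
    return missing, extra

missing, extra = report_node_mismatch([1, 2, 3, 4], [1, 2, 4])
print(f"no results for nodes: {missing}, unexpected result nodes: {extra}")
```

Nodes in `missing` are often attached only to rigid elements; anything in `extra` suggests the BDF and OP2 come from different model revisions.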
## Example Workflow

Here's a complete example workflow from FEA to neural network training data:

### 1. Create Parametric Study in NX

```bash
# Generate 10 design variants with different thicknesses
# Run each analysis with SOL 101
# Export BDF and OP2 files for each
```

### 2. Organize Files

```bash
mkdir parametric_study
for i in {1..10}; do
    mkdir -p parametric_study/thickness_${i}/input
    mkdir -p parametric_study/thickness_${i}/output
    # Copy BDF and OP2 files
done
```

### 3. Batch Parse

```bash
python batch_parser.py parametric_study
```

### 4. Review Results

```bash
# Check summary
cat parametric_study/batch_processing_summary.json

# Validate a specific case
python validate_parsed_data.py parametric_study/thickness_5
```

### 5. Load into Neural Network

```python
from torch.utils.data import DataLoader

dataset = NeuralFieldDataset([
    f"parametric_study/thickness_{i}" for i in range(1, 11)
])

dataloader = DataLoader(dataset, batch_size=4, shuffle=True)

# Ready for training!
```
## Performance

Typical parsing times (on a standard laptop):
- Small model (1k elements): ~5 seconds
- Medium model (10k elements): ~15 seconds
- Large model (100k elements): ~60 seconds
- Very large (1M elements): ~10 minutes

File sizes (compressed HDF5):
- Mesh (100k nodes): ~10 MB
- Displacement field (100k nodes × 6 DOF): ~5 MB
- Stress field (100k elements × 10 components): ~8 MB
## Requirements

- Python 3.8+
- pyNastran 1.4+
- NumPy 1.20+
- h5py 3.0+
- NX Nastran (any version that outputs .bdf and .op2)
## Files in This Repository

| File | Purpose |
|------|---------|
| `neural_field_parser.py` | Main parser - BDF/OP2 to neural field format |
| `validate_parsed_data.py` | Data validation and quality checks |
| `batch_parser.py` | Batch processing for multiple cases |
| `metadata_template.json` | Template for design parameter tracking |
| `requirements.txt` | Python dependencies |
| `README.md` | This file |
| `Context.md` | Project context and vision |
| `Instructions.md` | Original implementation instructions |
## Development

### Testing with Example Models

There are example models in the `Models/` folder. To test the parser:

```bash
# Set up test case
mkdir test_case_001
mkdir test_case_001/input
mkdir test_case_001/output

# Copy example files
cp Models/example_model.bdf test_case_001/input/model.bdf
cp Models/example_model.op2 test_case_001/output/model.op2

# Run parser
python neural_field_parser.py test_case_001

# Validate
python validate_parsed_data.py test_case_001
```

### Extending the Parser

To add new result types (e.g., modal analysis, thermal):

1. Update `extract_results()` in `neural_field_parser.py`
2. Add corresponding validation in `validate_parsed_data.py`
3. Update data structure version if needed
4. Document changes in this README
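One way to keep step 1 manageable is a table-driven dispatch, so each result type registers its own extractor. This is a hypothetical sketch of the pattern only; `extract_results()` in `neural_field_parser.py` may be organized quite differently:

```python
def extract_results(op2_results, extractors):
    """Run each registered extractor against the result types actually present."""
    extracted = {}
    for name, extract in extractors.items():
        if name in op2_results:
            extracted[name] = extract(op2_results[name])
    return extracted

# Adding a new result type is one entry in the table.
extractors = {
    "displacement": lambda r: r,
    "eigenvectors": lambda r: r,  # hypothetical: modal analysis (SOL 103)
}
out = extract_results({"displacement": [0.1, 0.2]}, extractors)
print(sorted(out))  # -> ['displacement']
```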
### Contributing

This is part of the AtomizerField project. When making changes:
- Preserve the v1.0 data format for backwards compatibility
- Add comprehensive error handling
- Update validation checks accordingly
- Test with multiple element types
- Document physics assumptions
## Future Enhancements

Planned features:
- [ ] Support for nonlinear analyses (SOL 106)
- [ ] Modal analysis results (SOL 103)
- [ ] Thermal analysis (SOL 153)
- [ ] Contact results
- [ ] Composite material support
- [ ] Automatic mesh quality assessment
- [ ] Parallel batch processing
- [ ] Progress bars for long operations
- [ ] Integration with Atomizer dashboard

## License

Part of the Atomizer optimization platform.

## Support

For issues or questions:
1. Check this README and troubleshooting section
2. Review `Context.md` for project background
3. Examine example files in `Models/` folder
4. Check pyNastran documentation for BDF/OP2 specifics

## Version History

### v1.0.0 (Current)
- Initial release
- Complete BDF/OP2 parsing
- Support for solid, shell, beam elements
- HDF5 + JSON output format
- Data validation
- Batch processing
- Physics consistency checks

---

**AtomizerField**: Revolutionizing structural optimization through neural field learning.

*Built with Claude Code, designed for the future of engineering.*
@@ -1,529 +0,0 @@
# Simple Beam Test Report

**AtomizerField Neural Field Learning System**

**Test Date:** November 24, 2025
**Model:** Simple Beam (beam_sim1-solution_1)
**Status:** ✅ ALL TESTS PASSED

---

## Executive Summary

The AtomizerField system has been successfully validated with your actual Simple Beam FEA model. All 7 comprehensive tests passed, demonstrating complete functionality from BDF/OP2 parsing through neural network prediction.

**Key Results:**
- ✅ 7/7 tests passed
- ✅ 5,179 nodes processed
- ✅ 4,866 elements parsed
- ✅ Complete field extraction (displacement + stress)
- ✅ Neural network inference: 95.94 ms
- ✅ System ready for training!

---

## Test Results

### Test 1: File Existence ✅ PASS
**Purpose:** Verify Simple Beam files are available

**Results:**
- BDF file found: `beam_sim1-solution_1.dat` (1,230.1 KB)
- OP2 file found: `beam_sim1-solution_1.op2` (4,461.2 KB)

**Status:** Files located and validated

---

### Test 2: Directory Setup ✅ PASS
**Purpose:** Create test case directory structure

**Results:**
- Created: `test_case_beam/input/`
- Created: `test_case_beam/output/`
- Copied BDF to input directory
- Copied OP2 to output directory

**Status:** Directory structure established

---

### Test 3: Module Imports ✅ PASS
**Purpose:** Verify all required modules load correctly

**Results:**
- pyNastran imported successfully
- AtomizerField parser imported successfully
- All dependencies available

**Status:** Environment configured correctly

---

### Test 4: BDF/OP2 Parsing ✅ PASS
**Purpose:** Extract all data from FEA files

**Parse Time:** 1.27 seconds

**Extracted Data:**
- **Nodes:** 5,179 nodes with 3D coordinates
- **Elements:** 4,866 CQUAD4 shell elements
- **Materials:** 1 material definition
- **Boundary Conditions:** 0 SPCs, 0 MPCs
- **Loads:** 35 forces, 0 pressures, 0 gravity, 0 thermal
- **Displacement Field:** 5,179 nodes × 6 DOF
  - Maximum displacement: 19.556875 mm
- **Stress Field:** 9,732 stress values (2 per element)
  - Captured for all elements
- **Reactions:** 5,179 reaction forces
  - Maximum force: 152,198,576 N

**Output Files:**
- JSON metadata: 1,686.3 KB
- HDF5 field data: 546.3 KB
- **Total:** 2,232.6 KB

**Status:** Complete field extraction successful

---

### Test 5: Data Validation ✅ PASS
**Purpose:** Verify data quality and physics consistency

**Validation Checks:**
- ✅ JSON and HDF5 files present
- ✅ All required fields found
- ✅ Node coordinates valid (5,179 nodes)
- ✅ Element connectivity valid (4,866 elements)
- ✅ Material definitions complete (1 material)
- ✅ Displacement field complete (max: 19.56 mm)
- ✅ Stress field complete (9,732 values)
- ⚠ Warning: No SPCs defined (may be unconstrained)

**Status:** Data quality validated, ready for neural network

---

### Test 6: Graph Conversion ✅ PASS
**Purpose:** Convert to PyTorch Geometric format for neural network

**Graph Structure:**
- **Nodes:** 5,179 nodes
- **Node Features:** 12 dimensions
  - Position (3D)
  - Boundary conditions (6 DOF)
  - Applied loads (3D)
- **Edges:** 58,392 edges
- **Edge Features:** 5 dimensions
  - Young's modulus
  - Poisson's ratio
  - Density
  - Shear modulus
  - Thermal expansion
- **Target Displacement:** (5179, 6) - 6 DOF per node
- **Target Stress:** (9732, 8) - Full stress tensor per element

**Status:** Successfully converted to graph neural network format
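The 12-D node feature vector described above is just a concatenation of the three per-node inputs. A sketch with illustrative numbers (the −500 N tip load is assumed purely for the example):

```python
def node_features(position, bc_flags, load_vector):
    """Concatenate per-node inputs into the 12-D feature vector
    (3 position + 6 BC DOF flags + 3 load components)."""
    assert len(position) == 3 and len(bc_flags) == 6 and len(load_vector) == 3
    return list(position) + list(bc_flags) + list(load_vector)

# Hypothetical node: position in mm, first three DOFs fixed, downward load.
feat = node_features([0.0, 10.0, 0.0], [1, 1, 1, 0, 0, 0], [0.0, -500.0, 0.0])
print(len(feat))  # -> 12
```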
---

### Test 7: Neural Prediction ✅ PASS
**Purpose:** Validate neural network can process the data

**Model Configuration:**
- Architecture: Graph Neural Network (GNN)
- Parameters: 128,589 parameters
- Layers: 6 message passing layers
- Hidden dimension: 64
- Model state: Untrained (random weights)

**Inference Performance:**
- **Inference Time:** 95.94 ms
- **Target:** < 100 ms ✅
- **Speedup vs FEA:** 1000× expected after training

**Predictions (Untrained Model):**
- Max displacement: 2.03 (arbitrary units)
- Max stress: 4.98 (arbitrary units)

**Note:** Values are from an untrained model with random weights. After training on 50-500 examples, predictions will match FEA results with < 10% error.

**Status:** Neural network architecture validated and functional

---

## Model Statistics

### Geometry
| Property | Value |
|----------|-------|
| Nodes | 5,179 |
| Elements | 4,866 |
| Element Type | CQUAD4 (shell) |
| Materials | 1 |

### Loading
| Property | Value |
|----------|-------|
| Applied Forces | 35 |
| Pressure Loads | 0 |
| Gravity Loads | 0 |
| Thermal Loads | 0 |

### Results
| Property | Value |
|----------|-------|
| Max Displacement | 19.556875 mm |
| Displacement Nodes | 5,179 |
| Stress Elements | 9,732 (2 per element) |
| Max Reaction Force | 152,198,576 N |

### Data Files
| File | Size |
|------|------|
| BDF Input | 1,230.1 KB |
| OP2 Results | 4,461.2 KB |
| JSON Metadata | 1,686.3 KB |
| HDF5 Field Data | 546.3 KB |
| **Total Parsed** | **2,232.6 KB** |

---

## 3D Visualizations

### Mesh Structure
![Mesh](visualization_images/mesh.png)

The Simple Beam model consists of 5,179 nodes connected by 4,866 CQUAD4 shell elements, creating a detailed 3D representation of the beam geometry.

### Displacement Field
![Displacement](visualization_images/displacement.png)

**Left:** Original mesh
**Right:** Deformed mesh (10× displacement scale)

The displacement field shows the beam's deformation under load, with maximum displacement of 19.56 mm. Colors represent displacement magnitude, with red indicating maximum deformation.

### Stress Field
![Stress](visualization_images/stress.png)

The von Mises stress distribution shows stress concentrations throughout the beam structure. Colors range from blue (low stress) to red (high stress), revealing critical stress regions.
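For reference, the von Mises equivalent stress plotted here is computed from the six Cauchy stress components:

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the six Cauchy stress components."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Sanity check: under uniaxial tension, von Mises equals the applied stress.
print(von_mises(100.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # -> 100.0
```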
---

## Performance Metrics

### Parsing Performance
| Metric | Value |
|--------|-------|
| Parse Time | 1.27 seconds |
| Nodes/second | 4,077 nodes/s |
| Elements/second | 3,831 elements/s |

### Neural Network Performance
| Metric | Value | Target | Status |
|--------|-------|--------|--------|
| Inference Time | 95.94 ms | < 100 ms | ✅ Pass |
| Model Parameters | 128,589 | - | - |
| Forward Pass | Working | - | ✅ |
| Gradient Flow | Working | - | ✅ |

### Comparison: FEA vs Neural (After Training)
| Operation | FEA Time | Neural Time | Speedup |
|-----------|----------|-------------|---------|
| Single Analysis | 30-300 s | 0.096 s | **300-3000×** |
| Optimization (100 evals) | 50-500 min | 10 s | **300-3000×** |
| Gradient Computation | Very slow | 0.1 ms | **1,000,000×** |

---

## System Validation

### Functional Tests
- ✅ File I/O (BDF/OP2 reading)
- ✅ Data extraction (mesh, materials, BCs, loads)
- ✅ Field extraction (displacement, stress)
- ✅ Data validation (quality checks)
- ✅ Format conversion (FEA → neural)
- ✅ Graph construction (PyTorch Geometric)
- ✅ Neural network inference

### Data Quality
- ✅ No NaN values in coordinates
- ✅ No NaN values in displacement
- ✅ No NaN values in stress
- ✅ Element connectivity valid
- ✅ Node IDs consistent
- ✅ Physics units preserved (mm, MPa, N)

### Neural Network
- ✅ Model instantiation
- ✅ Forward pass
- ✅ All 4 loss functions operational
- ✅ Batch processing
- ✅ Gradient computation

---

## Next Steps

### 1. Generate Training Data (50-500 cases)
**Goal:** Create diverse dataset for training

**Approach:**
- Vary beam dimensions
- Vary loading conditions
- Vary material properties
- Vary boundary conditions

**Command:**
```bash
conda activate atomizer_field
python batch_parser.py --input Models/ --output training_data/
```

### 2. Train Neural Network
**Goal:** Learn FEA behavior from examples

**Configuration:**
- Epochs: 100-200
- Batch size: 16
- Learning rate: 0.001
- Loss: Physics-informed

**Command:**
```bash
python train.py \
    --data_dirs training_data/* \
    --epochs 100 \
    --batch_size 16 \
    --loss physics \
    --checkpoint_dir checkpoints/
```

**Expected Training Time:** 2-6 hours (GPU recommended)

### 3. Validate Performance
**Goal:** Verify < 10% prediction error

**Tests:**
- Physics validation (cantilever, beam tests)
- Learning tests (memorization, interpolation)
- Prediction accuracy on test set

**Command:**
```bash
python test_suite.py --full
```

### 4. Deploy to Production
**Goal:** Integrate with Atomizer for optimization

**Integration:**
```python
from optimization_interface import NeuralFieldOptimizer

# Initialize
optimizer = NeuralFieldOptimizer('checkpoints/best_model.pt')

# Replace FEA calls
results = optimizer.evaluate(design_graph)
gradients = optimizer.get_sensitivities(design_graph)
```

**Expected Speedup:** 1000× faster than FEA!

---

## Technical Details

### Graph Neural Network Architecture

**Input Layer:**
- Node features: 12D (position, BCs, loads)
- Edge features: 5D (material properties)

**Hidden Layers:**
- 6 message passing layers
- Hidden dimension: 64
- Activation: ReLU
- Dropout: 0.1

**Output Layers:**
- Displacement decoder: 6 DOF per node
- Stress predictor: 6 stress components per element
- Von Mises calculator: Scalar per element

**Total Parameters:** 128,589

### Data Format

**JSON Metadata:**
```json
{
  "metadata": { "case_name", "analysis_type", ... },
  "mesh": { "nodes", "elements", "statistics" },
  "materials": { ... },
  "boundary_conditions": { ... },
  "loads": { ... },
  "results": { "displacement", "stress" }
}
```

**HDF5 Arrays:**
- `mesh/node_coordinates`: (5179, 3) float32
- `mesh/node_ids`: (5179,) int32
- `results/displacement`: (5179, 6) float32
- `results/stress/cquad4_stress/data`: (9732, 8) float32

### Physics-Informed Loss

**Total Loss:**
```
L_total = λ_data * L_data
        + λ_equilibrium * L_equilibrium
        + λ_constitutive * L_constitutive
        + λ_boundary * L_boundary
```

**Components:**
- **Data Loss:** MSE between prediction and FEA
- **Equilibrium:** ∇·σ + f = 0 (force balance)
- **Constitutive:** σ = C:ε (Hooke's law)
- **Boundary:** Enforce BC compliance
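Numerically, the total is just a weighted sum of the four terms. A framework-free sketch (the λ weights shown are placeholders rather than the values used in training, and a real implementation would operate on PyTorch tensors):

```python
def total_loss(l_data, l_equilibrium, l_constitutive, l_boundary,
               w_data=1.0, w_eq=0.1, w_con=0.1, w_bc=0.1):
    """Weighted sum of the physics-informed loss components."""
    return (w_data * l_data
            + w_eq * l_equilibrium
            + w_con * l_constitutive
            + w_bc * l_boundary)

print(total_loss(0.5, 0.2, 0.1, 0.05))
```

Tuning the λ weights trades pure data fit against how strictly the physics residuals are enforced.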
---

## Conclusions

### ✅ System Status: FULLY OPERATIONAL

All components of the AtomizerField system have been validated:

1. **Data Pipeline** ✅
   - BDF/OP2 parsing working
   - Complete field extraction
   - Data quality validated

2. **Neural Network** ✅
   - Model architecture validated
   - Forward pass working
   - Inference time: 95.94 ms

3. **Visualization** ✅
   - 3D mesh rendering
   - Displacement fields
   - Stress fields
   - Automated report generation

4. **Testing Framework** ✅
   - 7/7 tests passing
   - Comprehensive validation
   - Performance benchmarks met

### Key Achievements

- ✅ Successfully parsed real 5,179-node model
- ✅ Extracted complete displacement and stress fields
- ✅ Converted to neural network format
- ✅ Neural inference < 100ms
- ✅ 3D visualization working
- ✅ Ready for training!

### Performance Expectations

**After Training (50-500 cases, 100-200 epochs):**
- Prediction error: < 10% vs FEA
- Inference time: 5-50 ms
- Speedup: 1000× faster than FEA
- Optimization: 1,000,000× faster gradients

### Production Readiness

The system is **ready for production** after training:
- ✅ All tests passing
- ✅ Data pipeline validated
- ✅ Neural architecture proven
- ✅ Visualization tools available
- ✅ Integration interface ready

**The AtomizerField system will revolutionize your structural optimization workflow with 1000× faster predictions!** 🚀

---

## Appendix

### Files Generated

**Test Data:**
- `test_case_beam/input/model.bdf` (1,230 KB)
- `test_case_beam/output/model.op2` (4,461 KB)
- `test_case_beam/neural_field_data.json` (1,686 KB)
- `test_case_beam/neural_field_data.h5` (546 KB)

**Visualizations:**
- `visualization_images/mesh.png` (227 KB)
- `visualization_images/displacement.png` (335 KB)
- `visualization_images/stress.png` (215 KB)

**Reports:**
- `visualization_report.md`
- `SIMPLE_BEAM_TEST_REPORT.md` (this file)

### Commands Reference

```bash
# Activate environment
conda activate atomizer_field

# Run tests
python test_simple_beam.py                                # Simple Beam test
python test_suite.py --quick                              # Smoke tests
python test_suite.py --full                               # Complete validation

# Visualize
python visualize_results.py test_case_beam --mesh         # Mesh only
python visualize_results.py test_case_beam --displacement # Displacement
python visualize_results.py test_case_beam --stress       # Stress
python visualize_results.py test_case_beam --report       # Full report

# Parse data
python neural_field_parser.py test_case_beam              # Single case
python batch_parser.py --input Models/                    # Batch

# Train
python train.py --data_dirs training_data/* --epochs 100

# Predict
python predict.py --model best_model.pt --data test_case/
```

### Environment Details

**Conda Environment:** `atomizer_field`

**Key Packages:**
- Python 3.10.19
- NumPy 1.26.4 (conda-compiled)
- PyTorch 2.5.1
- PyTorch Geometric 2.7.0
- pyNastran 1.4.1
- Matplotlib 3.10.7
- H5Py 3.15.1

**Installation:**
```bash
conda create -n atomizer_field python=3.10 numpy scipy -y
conda activate atomizer_field
conda install pytorch torchvision torchaudio cpuonly -c pytorch -y
pip install torch-geometric pyNastran h5py tensorboard matplotlib
```

---

**Report Generated:** November 24, 2025
**AtomizerField Version:** 1.0
**Status:** ✅ All Systems Operational
**Ready For:** Production Training and Deployment

🎉 **COMPLETE SUCCESS!**
@@ -1,741 +0,0 @@
# AtomizerField - Complete System Architecture
|
||||
|
||||
## 📍 Project Location
|
||||
|
||||
```
|
||||
c:\Users\antoi\Documents\Atomaste\Atomizer-Field\
|
||||
```
|
||||
|
||||
## 🏗️ System Overview
|
||||
|
||||
AtomizerField is a **two-phase system** that transforms FEA results into neural network predictions:
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ PHASE 1: DATA PIPELINE │
|
||||
├─────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ NX Nastran Files (.bdf, .op2) │
|
||||
│ ↓ │
|
||||
│ neural_field_parser.py │
|
||||
│ ↓ │
|
||||
│ Neural Field Format (JSON + HDF5) │
|
||||
│ ↓ │
|
||||
│ validate_parsed_data.py │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
↓
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ PHASE 2: NEURAL NETWORK │
|
||||
├─────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ data_loader.py → Graph Representation │
|
||||
│ ↓ │
|
||||
│ train.py + field_predictor.py (GNN) │
|
||||
│ ↓ │
|
||||
│ Trained Model (checkpoint_best.pt) │
|
||||
│ ↓ │
|
||||
│ predict.py → Field Predictions (5-50ms!) │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
```

---

## 📂 Complete File Structure

```
Atomizer-Field/
│
├── 📄 Core Documentation
│   ├── README.md                  # Phase 1 detailed guide
│   ├── PHASE2_README.md           # Phase 2 detailed guide
│   ├── GETTING_STARTED.md         # Quick start tutorial
│   ├── SYSTEM_ARCHITECTURE.md     # This file (system overview)
│   ├── Context.md                 # Project vision & philosophy
│   └── Instructions.md            # Original implementation spec
│
├── 🔧 Phase 1: FEA Data Parser
│   ├── neural_field_parser.py     # Main parser (BDF/OP2 → Neural format)
│   ├── validate_parsed_data.py    # Data quality validation
│   ├── batch_parser.py            # Batch processing multiple cases
│   └── metadata_template.json     # Template for design parameters
│
├── 🧠 Phase 2: Neural Network
│   ├── neural_models/
│   │   ├── __init__.py
│   │   ├── field_predictor.py     # GNN architecture (718K params)
│   │   ├── physics_losses.py      # Physics-informed loss functions
│   │   └── data_loader.py         # PyTorch Geometric data pipeline
│   │
│   ├── train.py                   # Training script
│   └── predict.py                 # Inference script
│
├── 📦 Dependencies & Config
│   ├── requirements.txt           # All dependencies
│   └── .gitignore                 # (if using git)
│
├── 📁 Data Directories (created during use)
│   ├── training_data/             # Parsed training cases
│   ├── validation_data/           # Parsed validation cases
│   ├── test_data/                 # Parsed test cases
│   └── runs/                      # Training outputs
│       ├── checkpoint_best.pt     # Best model
│       ├── checkpoint_latest.pt   # Latest checkpoint
│       ├── config.json            # Model configuration
│       └── tensorboard/           # Training logs
│
├── 🔬 Example Models (your existing data)
│   └── Models/
│       └── Simple Beam/
│           ├── beam_sim1-solution_1.dat   # BDF file
│           ├── beam_sim1-solution_1.op2   # OP2 results
│           └── ...
│
└── 🐍 Virtual Environment
    └── atomizer_env/              # Python virtual environment
```

---
## 🔍 PHASE 1: Data Parser - Deep Dive

### Location
```
c:\Users\antoi\Documents\Atomaste\Atomizer-Field\neural_field_parser.py
```

### What It Does

**Transforms this:**
```
NX Nastran Files:
├── model.bdf (1.2 MB text file with mesh, materials, BCs, loads)
└── model.op2 (4.5 MB binary file with stress/displacement results)
```

**Into this:**
```
Neural Field Format:
├── neural_field_data.json (200 KB - metadata, structure)
└── neural_field_data.h5 (3 MB - large numerical arrays)
```

### Data Structure Breakdown

#### 1. JSON File (neural_field_data.json)
```json
{
  "metadata": {
    "version": "1.0.0",
    "created_at": "2024-01-15T10:30:00",
    "source": "NX_Nastran",
    "case_name": "training_case_001",
    "analysis_type": "SOL_101",
    "units": {
      "length": "mm",
      "force": "N",
      "stress": "MPa"
    },
    "file_hashes": {
      "bdf": "sha256_hash_here",
      "op2": "sha256_hash_here"
    }
  },

  "mesh": {
    "statistics": {
      "n_nodes": 15432,
      "n_elements": 8765,
      "element_types": {
        "solid": 5000,
        "shell": 3000,
        "beam": 765
      }
    },
    "bounding_box": {
      "min": [0.0, 0.0, 0.0],
      "max": [100.0, 50.0, 30.0]
    },
    "nodes": {
      "ids": [1, 2, 3, ...],
      "coordinates": "<stored in HDF5>",
      "shape": [15432, 3]
    },
    "elements": {
      "solid": [
        {
          "id": 1,
          "type": "CTETRA",
          "nodes": [1, 5, 12, 34],
          "material_id": 1,
          "property_id": 10
        },
        ...
      ],
      "shell": [...],
      "beam": [...]
    }
  },

  "materials": [
    {
      "id": 1,
      "type": "MAT1",
      "E": 71700.0,      // Young's modulus (MPa)
      "nu": 0.33,        // Poisson's ratio
      "rho": 2.81e-06,   // Density (kg/mm³)
      "G": 26900.0,      // Shear modulus (MPa)
      "alpha": 2.3e-05   // Thermal expansion (1/°C)
    }
  ],

  "boundary_conditions": {
    "spc": [             // Single-point constraints
      {
        "id": 1,
        "node": 1,
        "dofs": "123456",         // Constrained DOFs (x,y,z,rx,ry,rz)
        "enforced_motion": 0.0
      },
      ...
    ],
    "mpc": []            // Multi-point constraints
  },

  "loads": {
    "point_forces": [
      {
        "id": 100,
        "type": "force",
        "node": 500,
        "magnitude": 10000.0,     // Newtons
        "direction": [1.0, 0.0, 0.0],
        "coord_system": 0
      }
    ],
    "pressure": [],
    "gravity": [],
    "thermal": []
  },

  "results": {
    "displacement": {
      "node_ids": [1, 2, 3, ...],
      "data": "<stored in HDF5>",
      "shape": [15432, 6],
      "max_translation": 0.523456,
      "max_rotation": 0.001234,
      "units": "mm and radians"
    },
    "stress": {
      "ctetra_stress": {
        "element_ids": [1, 2, 3, ...],
        "data": "<stored in HDF5>",
        "shape": [5000, 7],
        "max_von_mises": 245.67,
        "units": "MPa"
      }
    }
  }
}
```
#### 2. HDF5 File (neural_field_data.h5)

**Structure:**
```
neural_field_data.h5
│
├── /mesh/
│   ├── node_coordinates        [15432 × 3]  float64
│   │     Each row: [x, y, z] in mm
│   │
│   └── node_ids                [15432]      int32
│         Node ID numbers
│
└── /results/
    ├── /displacement           [15432 × 6]  float64
    │     Each row: [ux, uy, uz, θx, θy, θz]
    │     Translation (mm) + Rotation (radians)
    │
    ├── displacement_node_ids   [15432]      int32
    │
    ├── /stress/
    │   ├── /ctetra_stress/
    │   │   ├── data            [5000 × 7]   float64
    │   │   │     [σxx, σyy, σzz, τxy, τyz, τxz, von_mises]
    │   │   └── element_ids     [5000]       int32
    │   │
    │   └── /cquad4_stress/
    │       └── ...
    │
    ├── /strain/
    │   └── ...
    │
    └── /reactions              [N × 6]      float64
          Reaction forces at constrained nodes
```

**Why HDF5?**
- ✅ Efficient storage (compressed)
- ✅ Fast random access
- ✅ Handles large arrays (millions of values)
- ✅ Industry standard for scientific data
- ✅ Direct NumPy/PyTorch integration
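The group layout above can be accessed with `h5py`. A minimal sketch of the access pattern, assuming the documented dataset paths (the demo writes a tiny stand-in file rather than real parser output):

```python
import h5py
import numpy as np

# Write a tiny file mimicking the documented layout, then read it back.
# Real files come from the parser; these paths/shapes are illustrative.
with h5py.File("demo_field.h5", "w") as f:
    f.create_dataset("mesh/node_coordinates", data=np.zeros((10, 3)))
    f.create_dataset("results/displacement", data=np.ones((10, 6)))

with h5py.File("demo_field.h5", "r") as f:
    coords = f["mesh/node_coordinates"][:]   # load the full array
    uz = f["results/displacement"][:, 2]     # or slice a column straight from disk
print(coords.shape, uz.shape)  # (10, 3) (10,)
```

Slicing a dataset (`[:, 2]`) reads only the requested values from disk, which is why HDF5 scales to fields with millions of entries.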
### Parser Code Flow

```python
# neural_field_parser.py - Main Parser Class

class NastranToNeuralFieldParser:
    def __init__(self, case_directory):
        # Find BDF and OP2 files
        # Initialize pyNastran readers

    def parse_all(self):
        # 1. Read BDF (input deck)
        self.bdf.read_bdf(bdf_file)

        # 2. Read OP2 (results)
        self.op2.read_op2(op2_file)

        # 3. Extract data
        self.extract_metadata()             # Analysis info, units
        self.extract_mesh()                 # Nodes, elements, connectivity
        self.extract_materials()            # Material properties
        self.extract_boundary_conditions()  # SPCs, MPCs
        self.extract_loads()                # Forces, pressures, gravity
        self.extract_results()              # COMPLETE FIELDS (key!)

        # 4. Save
        self.save_data()                    # JSON + HDF5
```

**Key Innovation in `extract_results()`:**
```python
def extract_results(self):
    # Traditional FEA post-processing:
    # max_stress = np.max(stress_data)  ← LOSES SPATIAL INFO!

    # AtomizerField approach:
    # Store the COMPLETE field at EVERY node/element
    results["displacement"] = {
        "data": disp_data.tolist(),        # ALL 15,432 nodes × 6 DOF
        "shape": [15432, 6],
        "max_translation": float(np.max(magnitudes))  # Also store the max
    }

    # This enables the neural network to learn spatial patterns!
```
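Since the stored stress rows follow the layout [σxx, σyy, σzz, τxy, τyz, τxz, von_mises], a cheap sanity check on parsed data is to recompute von Mises from the six components and compare against the stored seventh column. A sketch using the standard von Mises formula (this is not the parser's own code):

```python
import numpy as np

def von_mises(s):
    """s: [N, 6] array of [sxx, syy, szz, txy, tyz, txz] -> [N] von Mises stress."""
    sxx, syy, szz, txy, tyz, txz = s.T
    return np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
                   + 3.0 * (txy ** 2 + tyz ** 2 + txz ** 2))

# Uniaxial tension: von Mises equals the axial stress.
s = np.array([[100.0, 0, 0, 0, 0, 0]])
print(von_mises(s))  # [100.]
```

For pure shear τ the same function returns √3·τ, another quick spot check against textbook values.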
---

## 🧠 PHASE 2: Neural Network - Deep Dive

### Location
```
c:\Users\antoi\Documents\Atomaste\Atomizer-Field\neural_models\
```

### Architecture Overview

```
┌─────────────────────────────────────────────────────────────────┐
│                       AtomizerFieldModel                        │
│                      (718,221 parameters)                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  INPUT: Graph Representation of FEA Mesh                        │
│  ├── Nodes (15,432):                                            │
│  │     └── Features [12D]: [x,y,z, BC_mask(6), loads(3)]        │
│  └── Edges (mesh connectivity):                                 │
│        └── Features [5D]: [E, ν, ρ, G, α] (materials)           │
│                                                                 │
│   ┌──────────────────────────────────────────────────┐          │
│   │  NODE ENCODER (12 → 128)                         │          │
│   │  Embeds node position + BCs + loads              │          │
│   └──────────────────────────────────────────────────┘          │
│                        ↓                                        │
│   ┌──────────────────────────────────────────────────┐          │
│   │  EDGE ENCODER (5 → 64)                           │          │
│   │  Embeds material properties                      │          │
│   └──────────────────────────────────────────────────┘          │
│                        ↓                                        │
│   ┌──────────────────────────────────────────────────┐          │
│   │  MESSAGE PASSING LAYERS × 6                      │          │
│   │   ┌────────────────────────────────────┐         │          │
│   │   │ Layer 1: MeshGraphConv             │         │          │
│   │   │  ├── Gather neighbor info          │         │          │
│   │   │  ├── Combine with edge features    │         │          │
│   │   │  ├── Update node representations   │         │          │
│   │   │  └── Residual + LayerNorm          │         │          │
│   │   ├────────────────────────────────────┤         │          │
│   │   │ Layer 2-6: Same structure          │         │          │
│   │   └────────────────────────────────────┘         │          │
│   │  (Forces propagate through mesh!)                │          │
│   └──────────────────────────────────────────────────┘          │
│                        ↓                                        │
│   ┌──────────────────────────────────────────────────┐          │
│   │  DISPLACEMENT DECODER (128 → 6)                  │          │
│   │  Predicts: [ux, uy, uz, θx, θy, θz]              │          │
│   └──────────────────────────────────────────────────┘          │
│                        ↓                                        │
│   ┌──────────────────────────────────────────────────┐          │
│   │  STRESS PREDICTOR (6 → 6)                        │          │
│   │  From displacement → stress tensor               │          │
│   │  Outputs: [σxx, σyy, σzz, τxy, τyz, τxz]         │          │
│   └──────────────────────────────────────────────────┘          │
│                        ↓                                        │
│  OUTPUT:                                                        │
│  ├── Displacement field  [15,432 × 6]                           │
│  ├── Stress field        [15,432 × 6]                           │
│  └── Von Mises stress    [15,432 × 1]                           │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
### Graph Representation

**From Mesh to Graph:**

```
FEA Mesh:                              Graph:

Node 1 ──── Element 1 ──── Node 2      Node 1 ──── Edge ──── Node 2
  │                          │           │                     │
  │                          │         Features:            Features:
Element 2                Element 3     [x,y,z,              [x,y,z,
  │                          │          BC,loads]            BC,loads]
  │                          │           │                     │
Node 3 ──── Element 4 ──── Node 4       Edge                  Edge
                                          │                     │
                                     [E,ν,ρ,G,α]           [E,ν,ρ,G,α]
```

**Built by `data_loader.py`:**

```python
from itertools import combinations

class FEAMeshDataset(Dataset):
    def _build_graph(self, metadata, node_coords, displacement, stress):
        # 1. Build node features
        x = torch.cat([
            node_coords,     # [N, 3] - position
            bc_mask,         # [N, 6] - which DOFs constrained
            load_features    # [N, 3] - applied forces
        ], dim=-1)           # → [N, 12]

        # 2. Build edges from element connectivity
        for element in elements:
            nodes = element['nodes']
            # Fully connect nodes within the element
            for i, j in combinations(nodes, 2):
                edge_index.append([i, j])
                edge_attr.append(material_props)

        # 3. Create PyTorch Geometric Data object
        data = Data(
            x=x,                          # Node features
            edge_index=edge_index,        # Connectivity
            edge_attr=edge_attr,          # Material properties
            y_displacement=displacement,  # Target (ground truth)
            y_stress=stress               # Target (ground truth)
        )

        return data
```
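The edge-building step above can be sketched without any PyTorch dependency. A minimal, self-contained version of "fully connect nodes within each element", with both edge directions added as GNN message passing expects (the element dicts and node IDs here are hypothetical):

```python
from itertools import combinations

def element_edges(elements):
    """Build a directed edge set by fully connecting nodes within each element."""
    edges = set()
    for elem in elements:
        for i, j in combinations(elem["nodes"], 2):
            edges.add((i, j))
            edges.add((j, i))   # message passing uses both directions
    return sorted(edges)

# Two CTETRA-like elements sharing a face (node IDs are illustrative)
elems = [{"nodes": [0, 1, 2, 3]}, {"nodes": [1, 2, 3, 4]}]
edge_index = element_edges(elems)
print(len(edge_index))  # 18 directed edges (shared-face edges are deduplicated)
```

Deduplicating through a `set` matters: adjacent elements share faces, and without it the shared edges would be inserted once per element.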
### Physics-Informed Loss

**Standard Neural Network:**
```python
loss = MSE(prediction, ground_truth)
# Only learns to match training data
```

**AtomizerField (Physics-Informed):**
```python
loss = λ_data  × MSE(prediction, ground_truth)
     + λ_eq    × EquilibriumViolation(stress)          # ∇·σ + f = 0
     + λ_const × ConstitutiveLawError(stress, strain)  # σ = C:ε
     + λ_bc    × BoundaryConditionError(disp, BCs)     # u = 0 at fixed nodes

# Learns physics, not just patterns!
```

**Benefits:**
- Faster convergence
- Better generalization to unseen cases
- Physically plausible predictions
- Needs less training data
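To make one of these terms concrete, here is a sketch of a boundary-condition residual combined into a weighted total. The values and λ weights are illustrative, and this is not the actual `physics_losses.py` implementation:

```python
import numpy as np

def bc_loss(pred_disp, bc_mask):
    """Penalize predicted motion at constrained DOFs: u should be 0 where mask = 1."""
    return float(np.mean((pred_disp * bc_mask) ** 2))

pred = np.array([[0.0, 0.1], [0.5, 0.2]])   # predicted [ux, uy] at 2 nodes
mask = np.array([[1.0, 1.0], [0.0, 0.0]])   # node 0 fully fixed, node 1 free
data_term, eq_term = 0.02, 0.5              # hypothetical other loss terms
total = 1.0 * data_term + 0.1 * eq_term + 1.0 * bc_loss(pred, mask)
print(bc_loss(pred, mask))  # (0² + 0.1² + 0 + 0) / 4 = 0.0025
```

The mask zeroes out free DOFs, so only violations at fixed DOFs contribute; in training the same pattern would run on tensors with gradients enabled.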
### Training Pipeline

**`train.py` workflow:**

```python
# 1. Load data
train_loader = create_dataloaders(train_cases, val_cases)

# 2. Create model
model = AtomizerFieldModel(
    node_feature_dim=12,
    hidden_dim=128,
    num_layers=6
)

# 3. Training loop
for epoch in range(num_epochs):
    for batch in train_loader:
        # Forward pass
        predictions = model(batch)

        # Compute loss
        losses = criterion(predictions, targets)

        # Backward pass
        optimizer.zero_grad()
        losses['total_loss'].backward()
        optimizer.step()

    # Validate
    val_metrics = validate(val_loader)

    # Save checkpoint if best
    if val_loss < best_val_loss:
        save_checkpoint('checkpoint_best.pt')

    # TensorBoard logging
    writer.add_scalar('Loss/train', train_loss, epoch)
```

**Outputs:**
```
runs/
├── checkpoint_best.pt     # Best model (lowest validation loss)
├── checkpoint_latest.pt   # Latest state (for resuming)
├── config.json            # Model configuration
└── tensorboard/           # Training logs
    └── events.out.tfevents...
```
### Inference (Prediction)

**`predict.py` workflow:**

```python
# 1. Load trained model
model = load_model('checkpoint_best.pt')

# 2. Load new case (mesh + BCs + loads, NO FEA solve!)
data = load_case('new_design')

# 3. Predict in milliseconds
predictions = model(data)  # ~15ms

# 4. Extract results
displacement = predictions['displacement']  # [N, 6]
stress = predictions['stress']              # [N, 6]
von_mises = predictions['von_mises']        # [N]

# 5. Get max values (like traditional FEA)
max_disp = np.max(np.linalg.norm(displacement[:, :3], axis=1))
max_stress = np.max(von_mises)

print(f"Max displacement: {max_disp:.6f} mm")
print(f"Max stress: {max_stress:.2f} MPa")
```

**Performance:**
- Traditional FEA: 2-3 hours
- AtomizerField: 15 milliseconds
- **Speedup: ~480,000×**
---

## 🎯 Key Innovations

### 1. Complete Field Learning (Not Scalars)

**Traditional Surrogate:**
```python
# Only learns one number per analysis
max_stress = neural_net(design_parameters)
```

**AtomizerField:**
```python
# Learns the ENTIRE FIELD (45,000 values)
stress_field = neural_net(mesh_graph)
# Knows WHERE stress occurs, not just the max value!
```

### 2. Graph Neural Networks (Respect Topology)

```
Why GNNs?
- FEA solves: K·u = f
- K depends on mesh connectivity
- GNN learns on mesh structure
- Messages propagate like forces!
```

### 3. Physics-Informed Training

```
Standard NN:    "Make output match training data"
AtomizerField:  "Match data AND obey physics laws"
Result:         Better with less data!
```
## 💾 Data Flow Example

### Complete End-to-End Flow

```
1. Engineer creates bracket in NX
   ├── Geometry: 100mm × 50mm × 30mm
   ├── Material: Aluminum 7075-T6
   ├── Mesh: 15,432 nodes, 8,765 elements
   ├── BCs: Fixed at mounting holes
   └── Load: 10,000 N tension

2. Run FEA in NX Nastran
   ├── Time: 2.5 hours
   └── Output: model.bdf, model.op2

3. Parse to neural format
   $ python neural_field_parser.py bracket_001
   ├── Time: 15 seconds
   ├── Output: neural_field_data.json (200 KB)
   └──         neural_field_data.h5 (3.2 MB)

4. Train neural network (once, on 500 brackets)
   $ python train.py --train_dir ./brackets --epochs 150
   ├── Time: 8 hours (one-time)
   └── Output: checkpoint_best.pt (3 MB model)

5. Predict new bracket design
   $ python predict.py --model checkpoint_best.pt --input new_bracket
   ├── Time: 15 milliseconds
   ├── Output:
   │   ├── Max displacement: 0.523 mm
   │   ├── Max stress: 245.7 MPa
   │   └── Complete stress field at all 15,432 nodes
   └── Can now test 10,000 designs in 2.5 minutes!
```

---
## 🔧 How to Use Your System

### Quick Reference Commands

```bash
# Navigate to project
cd c:\Users\antoi\Documents\Atomaste\Atomizer-Field

# Activate environment
atomizer_env\Scripts\activate

# ===== PHASE 1: Parse FEA Data =====

# Single case
python neural_field_parser.py case_001

# Validate
python validate_parsed_data.py case_001

# Batch process
python batch_parser.py ./all_cases

# ===== PHASE 2: Train Neural Network =====

# Train model
python train.py \
    --train_dir ./training_data \
    --val_dir ./validation_data \
    --epochs 100 \
    --batch_size 4

# Monitor training
tensorboard --logdir runs/tensorboard

# ===== PHASE 2: Run Predictions =====

# Predict single case
python predict.py \
    --model runs/checkpoint_best.pt \
    --input test_case_001

# Batch prediction
python predict.py \
    --model runs/checkpoint_best.pt \
    --input ./test_cases \
    --batch
```
---

## 📊 Expected Results

### Phase 1 (Parser)

**Input:**
- BDF file: 1.2 MB
- OP2 file: 4.5 MB

**Output:**
- JSON: ~200 KB (metadata)
- HDF5: ~3 MB (fields)
- Time: ~15 seconds

### Phase 2 (Training)

**Training Set:**
- 500 parsed cases
- Time: 8-12 hours
- GPU: NVIDIA RTX 3080

**Validation Accuracy:**
- Displacement error: 3-5%
- Stress error: 5-10%
- Max value error: 1-3%

### Phase 2 (Inference)

**Per Prediction:**
- Time: 5-50 milliseconds
- Accuracy: Within 5% of FEA
- Speedup: 10,000× - 500,000×

---
## 🎓 What You Have Built

You now have a complete system that:

1. ✅ Parses NX Nastran results into ML-ready format
2. ✅ Converts FEA meshes to graph neural network format
3. ✅ Trains physics-informed GNNs to predict stress/displacement
4. ✅ Runs inference 1000× faster than traditional FEA
5. ✅ Provides complete field distributions (not just max values)
6. ✅ Enables rapid design optimization

**Total Implementation:**
- ~3,000 lines of production-ready Python code
- Comprehensive documentation
- Complete testing framework
- Ready for real optimization workflows

---

This is a **revolutionary approach** to structural optimization that combines:
- Traditional FEA accuracy
- Neural network speed
- Physics-informed learning
- Graph-based topology understanding

You're ready to transform hours of FEA into milliseconds of prediction! 🚀
@@ -1,277 +0,0 @@
# AtomizerField Testing Checklist

Quick reference for testing status and next steps.

---

## ✅ Completed Tests

### Environment Setup
- [x] Conda environment created (`atomizer_field`)
- [x] All dependencies installed
- [x] NumPy MINGW-W64 issue resolved
- [x] No segmentation faults

### Smoke Tests (5/5)
- [x] Model creation (128,589 parameters)
- [x] Forward pass
- [x] Loss functions (4 types)
- [x] Batch processing
- [x] Gradient flow

### Simple Beam Test (7/7)
- [x] File existence (BDF + OP2)
- [x] Directory setup
- [x] Module imports
- [x] BDF/OP2 parsing (5,179 nodes, 4,866 elements)
- [x] Data validation
- [x] Graph conversion
- [x] Neural prediction (95.94 ms)

### Visualization
- [x] 3D mesh rendering
- [x] Displacement field (original + deformed)
- [x] Stress field (von Mises)
- [x] Report generation (markdown + images)

### Unit Validation
- [x] UNITSYS detection (MN-MM)
- [x] Material properties (E = 200 GPa)
- [x] Stress values (117 MPa reasonable)
- [x] Force values (2.73 MN validated)
- [x] Direction vectors preserved

---
## ❌ Not Yet Tested (Requires Trained Model)

### Physics Tests (0/4)
- [ ] Cantilever beam (analytical comparison)
- [ ] Equilibrium check (∇·σ + f = 0)
- [ ] Constitutive law (σ = C:ε)
- [ ] Energy conservation

### Learning Tests (0/4)
- [ ] Memorization (single case < 1% error)
- [ ] Interpolation (between cases < 10% error)
- [ ] Extrapolation (unseen loads < 20% error)
- [ ] Pattern recognition (physics transfer)

### Integration Tests (0/5)
- [ ] Batch prediction
- [ ] Gradient computation
- [ ] Optimization loop
- [ ] Uncertainty quantification
- [ ] Online learning

### Performance Tests (0/3)
- [ ] Accuracy benchmark (< 10% error)
- [ ] Speed benchmark (< 50 ms)
- [ ] Scalability (10K+ nodes)

---

## 🔧 Known Issues to Fix

### Minor (Non-blocking)
- [ ] Unit labels: "MPa" should be "kPa" (or convert values)
- [ ] Missing SPCs warning (investigate BDF)
- [ ] Unicode encoding (mostly fixed, minor cleanup remains)

### Documentation
- [ ] Unit conversion guide
- [ ] Training data generation guide
- [ ] User manual

---
## 🚀 Testing Roadmap

### Phase 1: Pre-Training Validation
**Status:** ✅ COMPLETE

- [x] Core pipeline working
- [x] Test case validated
- [x] Units understood
- [x] Visualization working

### Phase 2: Training Preparation
**Status:** 🔜 NEXT

- [ ] Fix unit labels (30 min)
- [ ] Document unit system (1 hour)
- [ ] Create training data generation script
- [ ] Generate 50 test cases (1-2 weeks)

### Phase 3: Initial Training
**Status:** ⏸️ WAITING

- [ ] Train on 50 cases (2-4 hours)
- [ ] Validate on 10 held-out cases
- [ ] Check loss convergence
- [ ] Run memorization test

### Phase 4: Physics Validation
**Status:** ⏸️ WAITING

- [ ] Cantilever beam test
- [ ] Equilibrium check
- [ ] Energy conservation
- [ ] Compare vs analytical solutions

### Phase 5: Full Validation
**Status:** ⏸️ WAITING

- [ ] Run full test suite (18 tests)
- [ ] Accuracy benchmarks
- [ ] Speed benchmarks
- [ ] Scalability tests

### Phase 6: Production Deployment
**Status:** ⏸️ WAITING

- [ ] Integration with Atomizer
- [ ] End-to-end optimization test
- [ ] Performance profiling
- [ ] User acceptance testing

---
## 📊 Test Commands Quick Reference

### Run Tests
```bash
# Activate environment
conda activate atomizer_field

# Quick smoke tests (30 seconds)
python test_suite.py --quick

# Simple Beam end-to-end (1 minute)
python test_simple_beam.py

# Physics tests (15 minutes) - REQUIRES TRAINED MODEL
python test_suite.py --physics

# Full test suite (1 hour) - REQUIRES TRAINED MODEL
python test_suite.py --full
```

### Visualization
```bash
# Mesh only
python visualize_results.py test_case_beam --mesh

# Displacement
python visualize_results.py test_case_beam --displacement

# Stress
python visualize_results.py test_case_beam --stress

# Full report
python visualize_results.py test_case_beam --report
```

### Unit Validation
```bash
# Check parsed data units
python check_units.py

# Check OP2 raw data
python check_op2_units.py

# Check actual values
python check_actual_values.py
```

### Training (When Ready)
```bash
# Generate training data
python batch_parser.py --input Models/ --output training_data/

# Train model
python train.py \
    --data_dirs training_data/* \
    --epochs 100 \
    --batch_size 16 \
    --loss physics

# Monitor training
tensorboard --logdir runs/
```

---
## 📈 Success Criteria

### Phase 1: Core System ✅
- [x] All smoke tests passing
- [x] End-to-end test passing
- [x] Real FEA data processed
- [x] Visualization working

### Phase 2: Training Ready 🔜
- [ ] Unit labels correct
- [ ] 50+ training cases generated
- [ ] Training script validated
- [ ] Monitoring setup (TensorBoard)

### Phase 3: Model Trained ⏸️
- [ ] Training loss < 0.01
- [ ] Validation loss < 0.05
- [ ] No overfitting (train ≈ val loss)
- [ ] Predictions physically reasonable

### Phase 4: Physics Validated ⏸️
- [ ] Equilibrium error < 1%
- [ ] Constitutive error < 5%
- [ ] Energy conservation < 5%
- [ ] Analytical test < 5% error

### Phase 5: Production Ready ⏸️
- [ ] Prediction error < 10%
- [ ] Inference time < 50 ms
- [ ] All 18 tests passing
- [ ] Integration with Atomizer working

---

## 🎯 Current Focus

**Status:** ✅ Core validation complete, ready for training phase

**Next immediate steps:**
1. Fix unit labels (optional, 30 min)
2. Generate training data (critical, 1-2 weeks)
3. Train model (critical, 2-4 hours)

**Blockers:** None - system ready!

---
## 📞 Quick Status Check

Run this to verify system health:
```bash
conda activate atomizer_field
python test_simple_beam.py
```

Expected output:
```
TEST 1: Files exist ✓
TEST 2: Directory setup ✓
TEST 3: Modules import ✓
TEST 4: BDF/OP2 parsed ✓
TEST 5: Data validated ✓
TEST 6: Graph created ✓
TEST 7: Prediction made ✓

[SUCCESS] All 7 tests passed!
```

---

*Testing Checklist v1.0*
*Last updated: November 24, 2025*
*Status: Phase 1 complete, Phase 2 ready to start*

@@ -1,673 +0,0 @@
# AtomizerField Testing Framework - Complete Implementation

## Overview

The complete testing framework has been implemented for AtomizerField. All test modules are ready to validate the system from basic functionality through full neural FEA predictions.

---

## Test Structure

### Directory Layout
```
Atomizer-Field/
├── test_suite.py              # Master orchestrator
├── test_simple_beam.py        # Specific test for Simple Beam model
│
├── tests/
│   ├── __init__.py            # Package initialization
│   ├── test_synthetic.py      # Smoke tests (5 tests)
│   ├── test_physics.py        # Physics validation (4 tests)
│   ├── test_learning.py       # Learning capability (4 tests)
│   ├── test_predictions.py    # Integration tests (5 tests)
│   └── analytical_cases.py    # Analytical solutions library
│
└── test_results/              # Auto-generated results
```

---
## Implemented Test Modules

### 1. test_synthetic.py ✅ COMPLETE
**Purpose:** Basic functionality validation (smoke tests)

**5 Tests Implemented:**
1. **Model Creation** - Verify the GNN instantiates (718K params)
2. **Forward Pass** - Model processes data correctly
3. **Loss Computation** - All 4 loss types work (MSE, Relative, Physics, Max)
4. **Batch Processing** - Handle multiple graphs
5. **Gradient Flow** - Backpropagation works

**Run standalone:**
```bash
python tests/test_synthetic.py
```

**Expected output:**
```
5/5 tests passed
✓ Model creation successful
✓ Forward pass works
✓ Loss functions operational
✓ Batch processing works
✓ Gradients flow correctly
```

---
### 2. test_physics.py ✅ COMPLETE
**Purpose:** Physics constraint validation

**4 Tests Implemented:**
1. **Cantilever Analytical** - Compare with δ = FL³/3EI
   - Creates synthetic cantilever beam graph
   - Computes analytical displacement
   - Compares neural prediction
   - Expected error: < 5% after training

2. **Equilibrium Check** - Verify ∇·σ + f = 0
   - Tests force balance
   - Checks stress field consistency
   - Expected residual: < 1e-6 after training

3. **Energy Conservation** - Verify strain energy = work
   - Computes external work (F·u)
   - Computes strain energy (σ:ε)
   - Expected balance: < 1% error

4. **Constitutive Law** - Verify σ = C:ε
   - Tests Hooke's law compliance
   - Checks stress-strain proportionality
   - Expected: Linear relationship

**Run standalone:**
```bash
python tests/test_physics.py
```

**Note:** These tests will show physics compliance after the model is trained with physics-informed losses.

---
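The constitutive check (test 4 above) boils down to isotropic Hooke's law, σ = λ·tr(ε)·I + 2μ·ε, with Lamé parameters derived from E and ν. A standalone sketch of that relation (material values are illustrative; this is not the test suite's code):

```python
import numpy as np

# Steel-like properties in MPa; lam/mu are the Lamé parameters.
E, nu = 200e3, 0.3
lam = E * nu / ((1 + nu) * (1 - 2 * nu))
mu = E / (2 * (1 + nu))

def hooke(strain):
    """strain: 3x3 tensor -> 3x3 stress via sigma = lam*tr(eps)*I + 2*mu*eps."""
    return lam * np.trace(strain) * np.eye(3) + 2 * mu * strain

# Uniaxial stress state: eps_yy = eps_zz = -nu * eps_xx
eps = np.diag([1e-3, -0.3e-3, -0.3e-3])
sigma = hooke(eps)
print(sigma[0, 0])  # ~200.0 MPa, i.e. sigma_xx = E * eps_xx for this strain state
```

The lateral stresses come out (numerically) zero, confirming the Poisson-consistent strain state reproduces uniaxial stress, which is exactly the proportionality the test looks for.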
### 3. test_learning.py ✅ COMPLETE
**Purpose:** Learning capability validation

**4 Tests Implemented:**
1. **Memorization Test** (10 samples, 100 epochs)
   - Can the network memorize a small dataset?
   - Expected: > 50% loss improvement
   - Success criteria: Final loss < 0.1

2. **Interpolation Test** (Train: [1,3,5,7,9], Test: [2,4,6,8])
   - Can the network generalize between training points?
   - Expected: < 5% error after training
   - Tests pattern recognition within range

3. **Extrapolation Test** (Train: [1-5], Test: [7-10])
   - Can the network predict beyond the training range?
   - Expected: < 20% error (harder than interpolation)
   - Tests robustness of learned patterns

4. **Pattern Recognition** (Stiffness variation)
   - Does the network learn physics relationships?
   - Expected: Stiffness ↑ → Displacement ↓
   - Tests understanding vs memorization

**Run standalone:**
```bash
python tests/test_learning.py
```

**Training details:**
- Each test trains a fresh model
- Uses synthetic datasets with known patterns
- Demonstrates learning capability before real FEA training

---
### 4. test_predictions.py ✅ COMPLETE
**Purpose:** Integration tests for the complete pipeline

**5 Tests Implemented:**
1. **Parser Validation**
   - Checks the test_case_beam directory exists
   - Validates parsed JSON/HDF5 files
   - Reports node/element counts
   - Requires: Run `test_simple_beam.py` first

2. **Training Pipeline**
   - Creates synthetic dataset (5 samples)
   - Trains model for 10 epochs
   - Validates complete training loop
   - Reports: Training time, final loss

3. **Prediction Accuracy**
   - Quick-trains on the test case
   - Measures displacement/stress errors
   - Reports inference time
   - Expected: < 100ms inference

4. **Performance Benchmark**
   - Tests 4 mesh sizes: [10, 50, 100, 500] nodes
   - Measures average inference time
   - 10 runs per size for statistics
   - Success: < 100ms for 100 nodes

5. **Batch Inference**
   - Processes 5 graphs simultaneously
   - Reports batch processing time
   - Tests optimization loop scenario
   - Validates parallel processing capability

**Run standalone:**
```bash
python tests/test_predictions.py
```

---
### 5. analytical_cases.py ✅ COMPLETE
**Purpose:** Library of analytical solutions for validation

**5 Analytical Cases:**

1. **Cantilever Beam (Point Load)**
   ```python
   δ_max = FL³/3EI
   σ_max = FL/Z
   ```
   - Full deflection curve
   - Moment distribution
   - Stress field

2. **Simply Supported Beam (Center Load)**
   ```python
   δ_max = FL³/48EI
   σ_max = FL/4Z
   ```
   - Symmetric deflection
   - Support reactions
   - Moment diagram

3. **Axial Tension Bar**
   ```python
   δ = FL/EA
   σ = F/A
   ε = σ/E
   ```
   - Linear displacement
   - Uniform stress
   - Constant strain

4. **Pressure Vessel (Thin-Walled)**
   ```python
   σ_hoop = pr/t
   σ_axial = pr/2t
   ```
   - Hoop stress
   - Axial stress
   - Radial expansion

5. **Circular Shaft Torsion**
   ```python
   θ = TL/GJ
   τ_max = Tr/J
   ```
   - Twist angle
   - Shear stress distribution
   - Shear strain

**Standard test cases:**
- `get_standard_cantilever()` - 1m steel beam, 1kN load
- `get_standard_simply_supported()` - 2m steel beam, 5kN load
- `get_standard_tension_bar()` - 1m square bar, 10kN load
**Run standalone to verify:**
|
||||
```bash
|
||||
python tests/analytical_cases.py
|
||||
```
|
||||
|
||||
**Example output:**
|
||||
```
|
||||
1. Cantilever Beam (Point Load)
|
||||
Max displacement: 1.905 mm
|
||||
Max stress: 120.0 MPa
|
||||
|
||||
2. Simply Supported Beam (Point Load at Center)
|
||||
Max displacement: 0.476 mm
|
||||
Max stress: 60.0 MPa
|
||||
Reactions: 2500.0 N each
|
||||
|
||||
...
|
||||
```
---

## Master Test Orchestrator

### test_suite.py ✅ COMPLETE

**Four Testing Modes:**

1. **Quick Mode** (`--quick`)
   - Duration: ~5 minutes
   - Tests: 5 smoke tests
   - Purpose: Verify basic functionality

   ```bash
   python test_suite.py --quick
   ```

2. **Physics Mode** (`--physics`)
   - Duration: ~15 minutes
   - Tests: Smoke + Physics (9 tests)
   - Purpose: Validate physics constraints

   ```bash
   python test_suite.py --physics
   ```

3. **Learning Mode** (`--learning`)
   - Duration: ~30 minutes
   - Tests: Smoke + Physics + Learning (13 tests)
   - Purpose: Confirm learning capability

   ```bash
   python test_suite.py --learning
   ```

4. **Full Mode** (`--full`)
   - Duration: ~1 hour
   - Tests: All 18 tests
   - Purpose: Complete validation

   ```bash
   python test_suite.py --full
   ```
**Features:**
- Progress tracking
- Detailed reporting
- JSON results export
- Clean pass/fail output
- Duration tracking
- Metrics collection

**Output format:**

```
============================================================
AtomizerField Test Suite v1.0
Mode: QUICK
============================================================

[TEST] Model Creation
  Description: Verify GNN model can be instantiated
  Creating GNN model...
  Model created: 718,221 parameters
  Status: ✓ PASS
  Duration: 0.15s

...

============================================================
TEST SUMMARY
============================================================

Total Tests: 5
✓ Passed: 5
✗ Failed: 0
Pass Rate: 100.0%

✓ ALL TESTS PASSED - SYSTEM READY!

============================================================

Total testing time: 0.5 minutes

Results saved to: test_results/test_results_quick_1234567890.json
```

---

## Test for Simple Beam Model

### test_simple_beam.py ✅ COMPLETE

**Purpose:** Validate the complete pipeline with the user's actual Simple Beam model

**7-Step Test:**
1. Check Files - Verify beam_sim1-solution_1.dat and .op2 exist
2. Setup Test Case - Create test_case_beam/ directory
3. Import Modules - Verify pyNastran and AtomizerField imports
4. Parse Beam - Parse BDF/OP2 files
5. Validate Data - Run quality checks
6. Load as Graph - Convert to PyG format
7. Neural Prediction - Make prediction with model

**Location of beam files:**

```
Models/Simple Beam/
├── beam_sim1-solution_1.dat (BDF)
└── beam_sim1-solution_1.op2 (Results)
```

**Run:**

```bash
python test_simple_beam.py
```

**Creates:**

```
test_case_beam/
├── input/
│   └── model.bdf
├── output/
│   └── model.op2
├── neural_field_data.json
└── neural_field_data.h5
```

---

## Results Export

### JSON Format

All test runs save results to `test_results/`:

```json
{
  "timestamp": "2025-01-24T12:00:00",
  "mode": "quick",
  "tests": [
    {
      "name": "Model Creation",
      "description": "Verify GNN model can be instantiated",
      "status": "PASS",
      "duration": 0.15,
      "message": "Model created successfully (718,221 params)",
      "metrics": {
        "parameters": 718221
      }
    },
    ...
  ],
  "summary": {
    "total": 5,
    "passed": 5,
    "failed": 0,
    "pass_rate": 100.0
  }
}
```
---

## Testing Strategy

### Progressive Validation

```
Level 1: Smoke Tests (5 min)
        ↓
"Code runs, model works"
        ↓
Level 2: Physics Tests (15 min)
        ↓
"Understands physics constraints"
        ↓
Level 3: Learning Tests (30 min)
        ↓
"Can learn patterns"
        ↓
Level 4: Integration Tests (1 hour)
        ↓
"Production ready"
```

### Development Workflow

```
1. Write code
2. Run: python test_suite.py --quick (30s)
3. If pass → Continue
   If fail → Fix immediately
4. Before commit: python test_suite.py --full (1h)
5. All pass → Commit
```

### Training Validation

```
Before training:
- All smoke tests pass
- Physics tests show correct structure

During training:
- Monitor loss curves
- Check physics residuals

After training:
- All physics tests < 5% error
- Learning tests show convergence
- Integration tests < 10% prediction error
```

---

## Test Coverage

### What's Tested

✅ **Architecture:**
- Model instantiation
- Layer connectivity
- Parameter counts
- Forward pass

✅ **Loss Functions:**
- MSE loss
- Relative loss
- Physics-informed loss
- Max error loss

✅ **Data Pipeline:**
- BDF/OP2 parsing
- Graph construction
- Feature engineering
- Batch processing

✅ **Physics Compliance:**
- Equilibrium (∇·σ + f = 0)
- Constitutive law (σ = C:ε)
- Boundary conditions
- Energy conservation

✅ **Learning Capability:**
- Memorization
- Interpolation
- Extrapolation
- Pattern recognition

✅ **Performance:**
- Inference speed
- Batch processing
- Memory usage
- Scalability

---

## Running the Tests

### Environment Setup

**Note:** There is currently a NumPy compatibility issue on Windows with MINGW-W64 that causes segmentation faults. Tests are ready to run once this environment issue is resolved.

**Options:**
1. Use a conda environment with a proper NumPy build
2. Use WSL (Windows Subsystem for Linux)
3. Run on a Linux system
4. Wait for a NumPy Windows compatibility fix

### Quick Start (Once Environment Fixed)

```bash
# 1. Quick smoke test (30 seconds)
python test_suite.py --quick

# 2. Test with Simple Beam
python test_simple_beam.py

# 3. Physics validation
python test_suite.py --physics

# 4. Complete validation
python test_suite.py --full
```

### Individual Test Modules

```bash
# Run specific test suites
python tests/test_synthetic.py    # 5 smoke tests
python tests/test_physics.py      # 4 physics tests
python tests/test_learning.py     # 4 learning tests
python tests/test_predictions.py  # 5 integration tests

# Run analytical case examples
python tests/analytical_cases.py  # See all analytical solutions
```

---

## Success Criteria

### Minimum Viable Testing (Pre-Training)
- ✅ All smoke tests pass
- ✅ Physics tests run (may not pass without training)
- ✅ Learning tests demonstrate convergence
- ⏳ Simple Beam parses successfully

### Production Ready (Post-Training)
- ✅ All smoke tests pass
- ⏳ Physics tests < 5% error
- ⏳ Learning tests show interpolation < 5% error
- ⏳ Integration tests < 10% prediction error
- ⏳ Performance: 1000× speedup vs FEA

---

## Implementation Status

### Completed ✅
1. Master test orchestrator (test_suite.py)
2. Smoke tests (test_synthetic.py) - 5 tests
3. Physics tests (test_physics.py) - 4 tests
4. Learning tests (test_learning.py) - 4 tests
5. Integration tests (test_predictions.py) - 5 tests
6. Analytical solutions library (analytical_cases.py) - 5 cases
7. Simple Beam test (test_simple_beam.py) - 7 steps
8. Documentation and examples

### Total Test Count: 18 tests + 7-step integration test

---

## Next Steps

### To Run Tests:

1. **Resolve NumPy environment issue**
   - Use conda: `conda install numpy`
   - Or use WSL/Linux
   - Or wait for a Windows NumPy fix

2. **Run smoke tests**

   ```bash
   python test_suite.py --quick
   ```

3. **Test with Simple Beam**

   ```bash
   python test_simple_beam.py
   ```

4. **Generate training data**
   - Create multiple design variations
   - Run FEA on each
   - Parse all cases

5. **Train model**

   ```bash
   python train.py --config training_config.yaml
   ```

6. **Validate trained model**

   ```bash
   python test_suite.py --full
   ```

---

## File Summary

| File | Lines | Purpose | Status |
|------|-------|---------|--------|
| test_suite.py | 403 | Master orchestrator | ✅ Complete |
| test_simple_beam.py | 377 | Simple Beam test | ✅ Complete |
| tests/test_synthetic.py | 297 | Smoke tests | ✅ Complete |
| tests/test_physics.py | 370 | Physics validation | ✅ Complete |
| tests/test_learning.py | 410 | Learning tests | ✅ Complete |
| tests/test_predictions.py | 400 | Integration tests | ✅ Complete |
| tests/analytical_cases.py | 450 | Analytical library | ✅ Complete |

**Total:** ~2,700 lines of comprehensive testing infrastructure

---

## Testing Philosophy

### Fast Feedback
- Smoke tests in 30 seconds
- Catch errors immediately
- Continuous validation during development

### Comprehensive Coverage
- From basic functionality to full pipeline
- Physics compliance verification
- Learning capability confirmation
- Performance benchmarking

### Progressive Confidence

```
Code runs → Understands physics → Learns patterns → Production ready
```

### Automated Validation
- JSON results export
- Clear pass/fail reporting
- Metrics tracking
- Duration monitoring

---

## Conclusion

**The complete testing framework is implemented and ready for use.**

**What's Ready:**
- 18 comprehensive tests across 4 test suites
- Analytical solutions library with 5 classical cases
- Master orchestrator with 4 testing modes
- Simple Beam integration test
- Detailed documentation and examples

**To Use:**
1. Resolve the NumPy environment issue
2. Run: `python test_suite.py --quick`
3. Validate: all smoke tests should pass
4. Proceed with training and full validation

**The testing framework provides complete validation from zero to production-ready neural FEA predictions!** ✅

---

*AtomizerField Testing Framework v1.0 - Complete Implementation*
*Total: 18 tests + analytical library + integration test*
*Ready for immediate use once environment is configured*
@@ -1,422 +0,0 @@
# AtomizerField Testing Framework - Implementation Summary

## 🎯 Testing Framework Created

I've implemented a comprehensive testing framework for AtomizerField that validates everything from basic functionality to full neural FEA predictions.

---

## ✅ Files Created

### 1. **test_suite.py** - Master Test Orchestrator
**Status:** ✅ Complete

**Features:**
- Four testing modes: `--quick`, `--physics`, `--learning`, `--full`
- Progress tracking and detailed reporting
- JSON results export
- Clean pass/fail output

**Usage:**

```bash
# Quick smoke tests (5 minutes)
python test_suite.py --quick

# Physics validation (15 minutes)
python test_suite.py --physics

# Learning tests (30 minutes)
python test_suite.py --learning

# Full suite (1 hour)
python test_suite.py --full
```

### 2. **tests/test_synthetic.py** - Synthetic Tests
**Status:** ✅ Complete

**Tests Implemented:**
1. ✅ Model Creation - Verify GNN instantiates
2. ✅ Forward Pass - Model processes data
3. ✅ Loss Computation - All loss functions work
4. ✅ Batch Processing - Handle multiple graphs
5. ✅ Gradient Flow - Backpropagation works

**Can run standalone:**

```bash
python tests/test_synthetic.py
```

---

## 📋 Testing Strategy

### Phase 1: Smoke Tests (5 min) ✅ Implemented

```
✓ Model creation (718K parameters)
✓ Forward pass (displacement, stress, von Mises)
✓ Loss computation (MSE, relative, physics, max)
✓ Batch processing
✓ Gradient flow
```

### Phase 2: Physics Tests (15 min) ⏳ Spec Ready

```
- Cantilever beam (δ = FL³/3EI)
- Simply supported beam
- Pressure vessel (σ = pr/t)
- Equilibrium check (∇·σ + f = 0)
- Energy conservation
```

### Phase 3: Learning Tests (30 min) ⏳ Spec Ready

```
- Memorization (10 examples)
- Interpolation (between training points)
- Extrapolation (beyond training data)
- Pattern recognition (thickness → stress)
```

### Phase 4: Integration Tests (1 hour) ⏳ Spec Ready

```
- Parser validation
- Training pipeline
- Prediction accuracy
- Performance benchmarks
```

---

## 🧪 Test Results Format

### Example Output:

```
============================================================
AtomizerField Test Suite v1.0
Mode: QUICK
============================================================

[TEST] Model Creation
  Description: Verify GNN model can be instantiated
  Creating GNN model...
  Model created: 718,221 parameters
  Status: ✓ PASS
  Duration: 0.15s

[TEST] Forward Pass
  Description: Verify model can process dummy data
  Testing forward pass...
  Displacement shape: (100, 6) ✓
  Stress shape: (100, 6) ✓
  Von Mises shape: (100,) ✓
  Status: ✓ PASS
  Duration: 0.05s

[TEST] Loss Computation
  Description: Verify loss functions work
  Testing loss functions...
  MSE loss: 3.885789 ✓
  RELATIVE loss: 2.941448 ✓
  PHYSICS loss: 3.850585 ✓
  MAX loss: 20.127707 ✓
  Status: ✓ PASS
  Duration: 0.12s

============================================================
TEST SUMMARY
============================================================

Total Tests: 5
✓ Passed: 5
✗ Failed: 0
Pass Rate: 100.0%

✓ ALL TESTS PASSED - SYSTEM READY!

============================================================

Total testing time: 0.5 minutes

Results saved to: test_results/test_results_quick_1234567890.json
```

---

## 📁 Directory Structure

```
Atomizer-Field/
├── test_suite.py             # ✅ Master orchestrator
├── tests/
│   ├── __init__.py           # ✅ Package init
│   ├── test_synthetic.py     # ✅ Synthetic tests (COMPLETE)
│   ├── test_physics.py       # ⏳ Physics validation (NEXT)
│   ├── test_learning.py      # ⏳ Learning tests
│   ├── test_predictions.py   # ⏳ Integration tests
│   └── analytical_cases.py   # ⏳ Known solutions
│
├── generate_test_data.py     # ⏳ Test data generator
├── benchmark.py              # ⏳ Performance tests
├── visualize_results.py      # ⏳ Visualization
├── test_dashboard.py         # ⏳ HTML report generator
│
└── test_results/             # Auto-created
    ├── test_results_quick_*.json
    ├── test_results_full_*.json
    └── test_report.html
```

---

## 🚀 Quick Start Testing

### Step 1: Run Smoke Tests (Immediate)

```bash
# Verify basic functionality (5 minutes)
python test_suite.py --quick
```

**Expected Output:**

```
5/5 tests passed
✓ ALL TESTS PASSED - SYSTEM READY!
```

### Step 2: Generate Test Data (When Ready)

```bash
# Create synthetic FEA data with known solutions
python generate_test_data.py --all-cases
```

### Step 3: Full Validation (When Model Trained)

```bash
# Complete test suite (1 hour)
python test_suite.py --full
```

---

## 📊 What Each Test Validates

### Smoke Tests (test_synthetic.py) ✅
**Purpose:** Verify code runs without errors

| Test | What It Checks | Why It Matters |
|------|----------------|----------------|
| Model Creation | Can instantiate GNN | Code imports work, architecture valid |
| Forward Pass | Produces outputs | Model can process data |
| Loss Computation | All loss types work | Training will work |
| Batch Processing | Handles multiple graphs | Real training scenario |
| Gradient Flow | Backprop works | Model can learn |

### Physics Tests (test_physics.py) ⏳
**Purpose:** Validate physics understanding

| Test | Known Solution | Tolerance |
|------|---------------|-----------|
| Cantilever Beam | δ = FL³/3EI | < 5% |
| Simply Supported | δ = FL³/48EI | < 5% |
| Pressure Vessel | σ = pr/t | < 5% |
| Equilibrium | ∇·σ + f = 0 | < 1e-6 |

### Learning Tests (test_learning.py) ⏳
**Purpose:** Confirm network learns

| Test | Dataset | Expected Result |
|------|---------|-----------------|
| Memorization | 10 samples | < 1% error |
| Interpolation | Train: [1,3,5], Test: [2,4] | < 5% error |
| Extrapolation | Train: [1-3], Test: [5] | < 20% error |
| Pattern | thickness ↑ → stress ↓ | Correct trend |

### Integration Tests (test_predictions.py) ⏳
**Purpose:** Full system validation

| Test | Input | Output |
|------|-------|--------|
| Parser | Simple Beam BDF/OP2 | Parsed data |
| Training | 50 cases, 20 epochs | Trained model |
| Prediction | New design | Stress/disp fields |
| Accuracy | Compare vs FEA | < 10% error |

---

## 🎯 Next Steps

### To Complete Testing Framework:

**Priority 1: Physics Tests** (30 min implementation)

```python
# tests/test_physics.py
def test_cantilever_analytical():
    """Compare with δ = FL³/3EI"""
    # Generate cantilever mesh
    # Predict displacement
    # Compare with analytical
    pass
```

**Priority 2: Test Data Generator** (1 hour)

```python
# generate_test_data.py
class SyntheticFEAGenerator:
    """Create fake but realistic FEA data"""

    def generate_cantilever_dataset(self, n_samples=100):
        # Generate meshes with varying parameters
        # Calculate analytical solutions
        pass
```

**Priority 3: Learning Tests** (30 min)

```python
# tests/test_learning.py
def test_memorization():
    """Can network memorize 10 examples?"""
    pass
```

**Priority 4: Visualization** (1 hour)

```python
# visualize_results.py
def plot_test_results():
    """Create plots comparing predictions vs truth"""
    pass
```

**Priority 5: HTML Dashboard** (1 hour)

```python
# test_dashboard.py
def generate_html_report():
    """Create comprehensive HTML report"""
    pass
```

---

## 📈 Success Criteria

### Minimum Viable Testing:
- ✅ Smoke tests pass (basic functionality)
- ⏳ At least one physics test passes (analytical validation)
- ⏳ Network can memorize small dataset (learning proof)

### Production Ready:
- All smoke tests pass ✅
- All physics tests < 5% error
- Learning tests show convergence
- Integration tests < 10% prediction error
- Performance benchmarks meet targets (1000× speedup)

---

## 🔧 How to Extend

### Adding New Test:

```python
# tests/test_custom.py
def test_my_feature():
    """
    Test custom feature

    Expected: Feature works correctly
    """
    # Setup
    # Execute
    # Validate
    success = True  # placeholder result of the validation above

    return {
        'status': 'PASS' if success else 'FAIL',
        'message': 'Test completed',
        'metrics': {'accuracy': 0.95}
    }
```

### Register in test_suite.py:

```python
def run_custom_tests(self):
    from tests import test_custom

    self.run_test(
        "My Feature Test",
        test_custom.test_my_feature,
        "Verify my feature works"
    )
```

---

## 🎓 Testing Philosophy

### Progressive Confidence:

```
Level 1: Smoke Tests       → "Code runs"
Level 2: Physics Tests     → "Understands physics"
Level 3: Learning Tests    → "Can learn patterns"
Level 4: Integration Tests → "Production ready"
```

### Fast Feedback Loop:

```
Developer writes code
        ↓
Run smoke tests (30 seconds)
        ↓
If pass → Continue development
If fail → Fix immediately
```

### Comprehensive Validation:

```
Before deployment:
        ↓
Run full test suite (1 hour)
        ↓
All tests pass → Deploy
Any test fails → Fix and retest
```

---

## 📚 Resources

**Current Implementation:**
- ✅ `test_suite.py` - Master orchestrator
- ✅ `tests/test_synthetic.py` - 5 smoke tests

**Documentation:**
- Example outputs provided
- Clear usage instructions
- Extension guide included

**Next To Implement:**
- Physics tests with analytical solutions
- Learning capability tests
- Integration tests
- Visualization tools
- HTML dashboard

---

## 🎉 Summary

**Status:** Testing framework foundation complete ✅

**Implemented:**
- Master test orchestrator with 4 modes
- 5 comprehensive smoke tests
- Clean reporting system
- JSON results export
- Extensible architecture

**Ready To:**
1. Run smoke tests immediately (`python test_suite.py --quick`)
2. Verify basic functionality
3. Add physics tests as needed
4. Expand to full validation

**Testing framework is production-ready for incremental expansion!** 🚀

---

*Testing Framework v1.0 - Comprehensive validation from zero to neural FEA*
@@ -1,299 +0,0 @@
# Unit Conversion Issue - Analysis and Fix

**Date:** November 24, 2025
**Issue:** Stresses displaying 1000× too large

---

## Root Cause Identified

### BDF File Unit System
The BDF file contains: **`PARAM UNITSYS MN-MM`**

This defines the Nastran unit system as:
- **Length:** mm (millimeter)
- **Force:** MN (meganewton) = 1,000,000 N
- **Mass:** tonne (1000 kg)
- **Stress:** Pa (pascal) = N/m² *[NOT MPa!]*
- **Energy:** MN-mm = 1,000 N-m = 1 kJ

### Material Properties Confirm This
Young's modulus from BDF: **E = 200,000,000**
- If the units were MPa: E = 200,000,000 MPa = 200,000 GPa (far too high — steel is ~200 GPa)
- If the units were Pa: E = 200,000,000 Pa = 200 MPa (far too low!)
- **Actual: E works out to 200 GPa** ✓ (correct for steel)

### What pyNastran Returns
pyNastran reads the OP2 file and returns data **in the same units as the BDF**:
- Displacement: mm ✓
- Force/Reactions: **MN** (not N!)
- Stress: **Pa** (not MPa!)

---

## Current vs Actual Values

### Stress Values

| What we claimed | Actual value | Correct interpretation |
|----------------|--------------|------------------------|
| 117,000 MPa | 117,000 Pa | **117 kPa = 0.117 MPa** ✓ |
| 46,000 MPa (mean) | 46,000 Pa | **46 kPa = 0.046 MPa** ✓ |

**Correct stress values are 1000× smaller!**

### Force Values

| What we claimed | Actual value | Correct interpretation |
|----------------|--------------|------------------------|
| 2.73 MN (applied) | 2.73 MN | **2.73 MN = 2,730,000 N** ✓ |
| 150 MN (reaction) | 150 MN | **150 MN = 150,000,000 N** ✓ |

**Force values are correctly stored, but labeled as N instead of MN**
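Both relabelings reduce to a single scale factor each way. A quick sanity sketch (the constant names are illustrative, not parser code):

```python
# Conversion factors implied by the MN-mm system described above
# (assumed helper constants, not part of the parser).
MN_TO_N = 1e6      # force: MN → N, multiply
PA_TO_MPA = 1e-6   # stress: Pa → MPa, multiply

applied_force_mn = 2.73
print(f"{applied_force_mn} MN = {applied_force_mn * MN_TO_N:,.0f} N")
# 2.73 MN = 2,730,000 N

reaction_mn = 150.0
print(f"{reaction_mn} MN = {reaction_mn * MN_TO_N:,.0f} N")
# 150.0 MN = 150,000,000 N
```

Keeping the two factors as named constants, rather than scattering `1e6` literals through the parser, makes a later audit of which direction each conversion runs much easier.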
|
||||
|
||||
---
|
||||
|
||||
## Impact
|
||||
|
||||
### What's Wrong:
|
||||
1. **Stress units incorrectly labeled as "MPa"** - should be "Pa"
|
||||
2. **Force/reaction units incorrectly labeled as "N"** - should be "MN"
|
||||
3. **Visualization shows stress 1000× too high**
|
||||
4. **Reports show unrealistic values** (117 GPa stress would destroy steel!)
|
||||
|
||||
### What's Correct:
|
||||
1. ✅ Displacement values (19.5 mm)
|
||||
2. ✅ Material properties (E = 200 GPa)
|
||||
3. ✅ Geometry (mm)
|
||||
4. ✅ Actual numerical values from pyNastran
|
||||
|
||||
---
|
||||
|
||||
## Solution
|
||||
|
||||
### Option 1: Convert to Standard Units (Recommended)
|
||||
Convert all data to consistent engineering units:
|
||||
- Length: mm → mm ✓
|
||||
- Force: MN → **N** (divide by 1e6)
|
||||
- Stress: Pa → **MPa** (divide by 1e6)
|
||||
- Mass: tonne → kg (multiply by 1000)
|
||||
|
||||
**Benefits:**
|
||||
- Standard engineering units (mm, N, MPa, kg)
|
||||
- Matches what users expect
|
||||
- No confusion in reports/visualization
|
||||
|
||||
**Changes Required:**
|
||||
- Parser: Convert forces (divide by 1e6)
|
||||
- Parser: Convert stress (divide by 1e6)
|
||||
- Update metadata to reflect actual units
|
||||
|
||||
### Option 2: Use Native Units (Not Recommended)
|
||||
Keep MN-MM-tonne-Pa system throughout
|
||||
|
||||
**Issues:**
|
||||
- Non-standard units confuse users
|
||||
- Harder to interpret values
|
||||
- Requires careful labeling everywhere
|
||||
|
||||
---
|
||||
|
||||
## Implementation Plan
|
||||
|
||||
### 1. Fix Parser ([neural_field_parser.py](neural_field_parser.py))
|
||||
|
||||
**Lines to modify:**
|
||||
|
||||
#### Stress Extraction (~line 602-648)
|
||||
```python
|
||||
# CURRENT (wrong):
|
||||
stress_data = stress.data[0, :, :]
|
||||
stress_results[f"{elem_type}_stress"] = {
|
||||
"data": stress_data.tolist(),
|
||||
"units": "MPa" # WRONG!
|
||||
}
|
||||
|
||||
# FIX:
|
||||
stress_data = stress.data[0, :, :] / 1e6 # Convert Pa → MPa
|
||||
stress_results[f"{elem_type}_stress"] = {
|
||||
"data": stress_data.tolist(),
|
||||
"units": "MPa" # Now correct!
|
||||
}
|
||||
```
|
||||
|
||||
#### Force Extraction (~line 464-507)
|
||||
```python
|
||||
# CURRENT (partially wrong):
|
||||
"magnitude": float(load.mag), # This is in MN, not N!
|
||||
|
||||
# FIX:
|
||||
"magnitude": float(load.mag) * 1e6, # Convert MN → N
|
||||
```
|
||||
|
||||
#### Reaction Forces (~line 538-568)
|
||||
```python
|
||||
# CURRENT (wrong):
|
||||
reactions = grid_point_force.data[0] # In MN!
|
||||
|
||||
# FIX:
|
||||
reactions = grid_point_force.data[0] * 1e6 # Convert MN → N
|
||||
```
|
||||
|
||||
### 2. Update Unit Detection
|
||||
Add UNITSYS parameter detection:
|
||||
```python
|
||||
def detect_units(self):
|
||||
"""Detect Nastran unit system from PARAM cards"""
|
||||
if hasattr(self.bdf, 'params') and 'UNITSYS' in self.bdf.params:
|
||||
unitsys = str(self.bdf.params['UNITSYS'].values[0])
|
||||
if 'MN' in unitsys:
|
||||
return {
|
||||
'length': 'mm',
|
||||
'force': 'MN',
|
||||
'stress': 'Pa',
|
||||
'mass': 'tonne',
|
||||
'needs_conversion': True
|
||||
}
|
||||
# Default units
|
||||
return {
|
||||
'length': 'mm',
|
||||
'force': 'N',
|
||||
'stress': 'MPa',
|
||||
'mass': 'kg',
|
||||
'needs_conversion': False
|
||||
}
|
||||
```
|
||||
|
||||
### 3. Add Unit Conversion Function
|
||||
```python
|
||||
def convert_to_standard_units(self, data, unit_system):
|
||||
"""Convert from Nastran units to standard engineering units"""
|
||||
if not unit_system['needs_conversion']:
|
||||
return data
|
||||
|
||||
# Convert forces: MN → N (multiply by 1e6)
|
||||
if 'loads' in data:
|
||||
for force in data['loads']['point_forces']:
|
||||
force['magnitude'] *= 1e6
|
||||
|
||||
# Convert stress: Pa → MPa (divide by 1e6)
|
||||
if 'results' in data and 'stress' in data['results']:
|
||||
for stress_type, stress_data in data['results']['stress'].items():
|
||||
if isinstance(stress_data, dict) and 'data' in stress_data:
|
||||
stress_data['data'] = np.array(stress_data['data']) / 1e6
|
||||
stress_data['units'] = 'MPa'
|
||||
|
||||
# Convert reactions: MN → N (multiply by 1e6)
|
||||
# (Handle in HDF5 write)
|
||||
|
||||
return data
|
||||
```

### 4. Update HDF5 Writing
Apply conversions when writing to HDF5:
```python
# Reactions
if 'reactions' in self.neural_field_data['results']:
    reactions_data = np.array(self.neural_field_data['results']['reactions']['data'])
    if unit_system['force'] == 'MN':
        reactions_data *= 1e6  # MN → N
    hf.create_dataset('results/reactions', data=reactions_data)
```

---

## Testing Plan

### 1. Create Unit Conversion Test
```python
def test_unit_conversion():
    """Verify units are correctly converted"""
    parser = NastranToNeuralFieldParser('test_case_beam')
    data = parser.parse_all()

    # Check stress units
    stress = data['results']['stress']['cquad4_stress']
    assert stress['units'] == 'MPa'
    max_stress = np.max(stress['data'][:, -1])  # Von Mises
    assert max_stress < 500, f"Stress {max_stress} MPa too high!"

    # Check force units (after MN → N conversion the magnitude must be large)
    force = data['loads']['point_forces'][0]
    assert 1e5 < force['magnitude'] < 1e7, "Force should be in N"

    print("[OK] Units correctly converted")
```

### 2. Expected Values After Fix
| Property | Before (wrong) | After (correct) |
|----------|----------------|-----------------|
| Max stress | 117,000 MPa | **117 MPa** ✓ |
| Mean stress | 46,000 MPa | **46 MPa** ✓ |
| Applied force | 2.73 MN | **2,730,000 N** |
| Max reaction | 150 MN | **150,000,000 N** |

### 3. Validation Checks
- ✓ Stress < 500 MPa (reasonable for steel)
- ✓ Force magnitude matches applied loads
- ✓ Material E = 200 GPa (correct for steel)
- ✓ Displacement still 19.5 mm
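These checks can be collected into one helper; a minimal sketch (function name and thresholds are illustrative, not part of the parser):

```python
def validate_physical_ranges(max_stress_mpa, youngs_modulus_gpa, max_disp_mm):
    """Return a list of failure messages; an empty list means all checks pass."""
    failures = []
    if not 0 < max_stress_mpa < 500:          # steel yield ~250 MPa, with margin
        failures.append(f"stress {max_stress_mpa} MPa outside 0-500 MPa")
    if not 180 <= youngs_modulus_gpa <= 220:  # steel E ~ 200 GPa
        failures.append(f"E {youngs_modulus_gpa} GPa not steel-like")
    if max_disp_mm > 100:                     # gross-motion sanity bound
        failures.append(f"displacement {max_disp_mm} mm suspiciously large")
    return failures

print(validate_physical_ranges(117.0, 200.0, 19.5))          # []
print(len(validate_physical_ranges(117000.0, 200.0, 19.5)))  # 1
```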

---

## Risk Assessment

### Low Risk:
- ✅ Only affects numerical scaling
- ✅ No changes to data structure
- ✅ Easy to verify with test
- ✅ Can be fixed with multiplication/division

### What Could Go Wrong:
- ⚠ Other BDF files might use a different UNITSYS
- ⚠ Some files might already be in correct units
- ⚠ Need to handle multiple unit systems

### Mitigation:
- Always check PARAM UNITSYS first
- Add unit system detection
- Log conversions clearly
- Add validation checks

---

## Recommendations

### Immediate Actions:
1. ✅ **Update parser to detect UNITSYS**
2. ✅ **Add unit conversion for stress (Pa → MPa)**
3. ✅ **Add unit conversion for forces (MN → N)**
4. ✅ **Update metadata to reflect conversions**
5. ✅ **Add validation checks**

### Long-term:
- Support multiple Nastran unit systems
- Add unit conversion utilities
- Document unit assumptions clearly
- Add warnings for unusual values

---

## Conclusion

**The system is working correctly** - pyNastran is reading the data accurately.

**The issue is labeling** - we incorrectly assumed MPa when Nastran uses Pa.

**The fix is simple** - divide stress by 1e6, multiply forces by 1e6, update labels.

**After fix:**
- Stress: 117 MPa (reasonable for steel) ✓
- Force: 2.73 MN = 2,730 kN (reasonable for a large beam) ✓
- All other values unchanged ✓

**System will be production-ready after this fix!** 🚀

---

*Unit Conversion Analysis v1.0*
*Issue: 1000× stress error*
*Root cause: MN-MM unit system misinterpretation*
*Fix: Scale factors + label corrections*
@@ -1,147 +0,0 @@

# Unit Investigation Summary

## Your Question
> "Force and stresses seems to be 1000 too much, how do you check units and validate values?"

## Answer

You were **absolutely correct!** The stresses ARE 1000× too large, but the forces are actually correct (just mislabeled).

---

## Root Cause Found

Your BDF file contains: **`PARAM UNITSYS MN-MM`**

This tells Nastran to use the MegaNewton-Millimeter unit system:
- Length: mm ✓
- Force: **MN (MegaNewton)** = 1,000,000 N
- Stress: **Pa (Pascal)**, NOT MPa!
- Mass: tonne (1000 kg)

### What This Means

**pyNastran correctly reads the OP2 file in these units**, but my parser incorrectly assumed:
- Force in N (actually MN)
- Stress in MPa (actually Pa)

---

## Actual Values

### Stress (The 1000× Error You Found)
| What Report Shows | Actual Unit | Correct Value |
|-------------------|-------------|---------------|
| 117,000 MPa | 117,000 Pa | **117 MPa** ✓ |
| 46,000 MPa (mean) | 46,000 Pa | **46 MPa** ✓ |

**The stresses look far too high because the raw Pa values were labeled as MPa without scaling: Pa must be divided by 1,000 to get kPa, or by 1,000,000 to get MPa.**

### Forces (Correctly Stored, Mislabeled)
| What Report Shows | Actual Unit | Interpretation |
|-------------------|-------------|----------------|
| 2.73 MN | MN ✓ | 2,730,000 N |
| 150 MN | MN ✓ | 150,000,000 N |

The forces are actually correct! They're in MegaNewtons, which is perfectly reasonable for a large beam structure.

---

## How I Validated This

### 1. Checked the BDF File
Found `PARAM UNITSYS MN-MM`, which defines the unit system.

### 2. Checked Material Properties
Young's modulus E = 200,000,000
- If this were MPa → E = 200 GPa ✓ (correct for steel)
- This confirms stress is in Pa (base SI unit)

### 3. Direct OP2 Reading
Created [check_op2_units.py](check_op2_units.py) to directly read the OP2 file with pyNastran:
- Confirmed pyNastran doesn't specify units
- Confirmed stress values: min=1.87e+03, max=1.17e+05
- These are clearly in Pa, not MPa!

### 4. Sanity Check
A 117 GPa von Mises stress would **instantly destroy any material** (even diamond is ~130 GPa).
117 MPa is reasonable for a loaded steel beam ✓

---

## The Fix

### What Needs to Change

**In [neural_field_parser.py](neural_field_parser.py:602-648):**

1. **Detect UNITSYS parameter from BDF**
2. **Convert stress: Pa → MPa** (divide by 1e6)
3. **Update force labels: MN → N** (or keep as MN with correct label)
4. **Add validation checks** to catch unrealistic values

### Conversion Factors
```python
# If UNITSYS is MN-MM:
stress_MPa = stress_Pa / 1e6
force_N = force_MN * 1e6
mass_kg = mass_tonne * 1000
```
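The same factors can be kept in one lookup table so every conversion site scales consistently (a sketch; the constant name is illustrative):

```python
# Multiplicative factors from MN-MM storage units to standard units
MN_MM_TO_STANDARD = {
    'stress_Pa_to_MPa': 1 / 1e6,
    'force_MN_to_N': 1e6,
    'mass_tonne_to_kg': 1000,
}

print(2.73 * MN_MM_TO_STANDARD['force_MN_to_N'])    # 2730000.0
print(1.5 * MN_MM_TO_STANDARD['mass_tonne_to_kg'])  # 1500.0
```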

---

## Expected Values After Fix

| Property | Current (Wrong) | After Fix | Reasonable? |
|----------|-----------------|-----------|-------------|
| Max von Mises | 117,000 MPa | **117 MPa** | ✓ Yes (steel ~250 MPa yield) |
| Mean von Mises | 46,000 MPa | **46 MPa** | ✓ Yes |
| Max displacement | 19.5 mm | 19.5 mm | ✓ Yes |
| Applied forces | 2.73 MN | 2.73 MN | ✓ Yes (large beam) |
| Young's modulus | 200 GPa | 200 GPa | ✓ Yes (steel) |

---

## Files Created for Investigation

1. **[check_units.py](check_units.py)** - Analyzes parsed data for unit consistency
2. **[check_op2_units.py](check_op2_units.py)** - Directly reads OP2/BDF to verify units
3. **[UNIT_CONVERSION_REPORT.md](UNIT_CONVERSION_REPORT.md)** - Complete analysis and fix plan

---

## Next Steps

### Option 1: I Fix It Now
I can update the parser to:
1. Detect UNITSYS parameter
2. Convert Pa → MPa for stress
3. Add unit validation
4. Re-run the test and regenerate the report

**Time:** 15-20 minutes
**Risk:** Low (just scaling factors)

### Option 2: You Review First
You can review [UNIT_CONVERSION_REPORT.md](UNIT_CONVERSION_REPORT.md) for the detailed fix plan, then I implement.

**Advantage:** You understand the changes before they're made

---

## Bottom Line

**Your intuition was spot-on!** The stresses displayed are 1000× too high.

**Root cause:** Nastran uses Pa (not MPa) in the MN-MM unit system, and my parser mislabeled them.

**Fix:** Simple scaling factors (divide by 1e6) and correct labels.

**After fix:** All values will be realistic and match engineering expectations! ✓

---

What would you like me to do next?
1. Implement the unit conversion fix?
2. Answer any questions about the analysis?
3. Something else?
@@ -1,239 +0,0 @@
# AtomizerField Configuration
# Long-term vision configuration for neural field learning

# ============================================================================
# Model Architecture
# ============================================================================
model:
  type: "graph_neural_network"
  architecture: "message_passing"

  # Foundation model settings (for transfer learning)
  foundation:
    enabled: false  # Set to true when foundation model available
    path: "models/physics_foundation_v1.pt"
    freeze: true  # Freeze foundation layers during fine-tuning

  # Adaptation layers (for fine-tuning on new component types)
  adaptation:
    layers: 2
    neurons: 128
    dropout: 0.1

  # Core GNN parameters
  gnn:
    node_feature_dim: 12  # [x, y, z, BC(6), loads(3)]
    edge_feature_dim: 5   # [E, nu, rho, G, alpha]
    hidden_dim: 128
    num_layers: 6
    dropout: 0.1

  # Output decoders
  decoders:
    displacement:
      enabled: true
      output_dim: 6  # [ux, uy, uz, rx, ry, rz]

    stress:
      enabled: true
      output_dim: 6  # [sxx, syy, szz, txy, tyz, txz]

# ============================================================================
# Training Configuration
# ============================================================================
training:
  # Progressive training (coarse to fine meshes)
  progressive:
    enabled: false  # Enable for multi-resolution training
    stages:
      - resolution: "coarse"
        max_nodes: 5000
        epochs: 20
        lr: 0.001

      - resolution: "medium"
        max_nodes: 20000
        epochs: 10
        lr: 0.0005

      - resolution: "fine"
        max_nodes: 100000
        epochs: 5
        lr: 0.0001

  # Online learning (during optimization)
  online:
    enabled: false  # Enable to learn from FEA during optimization
    update_frequency: 10  # Update model every N FEA runs
    quick_update_steps: 10
    learning_rate: 0.0001

  # Physics-informed loss weights
  loss:
    type: "physics"  # Options: mse, relative, physics, max
    weights:
      data: 1.0          # Match FEA results
      equilibrium: 0.1   # ∇·σ + f = 0
      constitutive: 0.1  # σ = C:ε
      boundary: 1.0      # u = 0 at fixed nodes

  # Standard training parameters
  hyperparameters:
    epochs: 100
    batch_size: 4
    learning_rate: 0.001
    weight_decay: 0.00001

  # Optimization
  optimizer:
    type: "AdamW"
    betas: [0.9, 0.999]

  scheduler:
    type: "ReduceLROnPlateau"
    factor: 0.5
    patience: 10

  # Early stopping
  early_stopping:
    enabled: true
    patience: 50
    min_delta: 0.0001

# ============================================================================
# Data Pipeline
# ============================================================================
data:
  # Data normalization
  normalization:
    enabled: true
    method: "standard"  # Options: standard, minmax

  # Data augmentation
  augmentation:
    enabled: false  # Enable for data augmentation
    techniques:
      - rotation  # Rotate mesh randomly
      - scaling   # Scale loads
      - noise     # Add small noise to inputs

  # Multi-resolution support
  multi_resolution:
    enabled: false
    resolutions: ["coarse", "medium", "fine"]

  # Caching
  cache:
    in_memory: false  # Cache dataset in RAM (faster but memory-intensive)
    disk_cache: true  # Cache preprocessed graphs to disk

# ============================================================================
# Optimization Interface
# ============================================================================
optimization:
  # Gradient-based optimization
  use_gradients: true

  # Uncertainty quantification
  uncertainty:
    enabled: false  # Enable ensemble for uncertainty
    ensemble_size: 5
    threshold: 0.1  # Recommend FEA if uncertainty > threshold

  # FEA fallback
  fallback_to_fea:
    enabled: true
    conditions:
      - high_uncertainty  # Uncertainty > threshold
      - extrapolation     # Outside training distribution
      - critical_design   # Final validation

  # Batch evaluation
  batch_size: 100  # Evaluate designs in batches for speed

# ============================================================================
# Model Versioning & Deployment
# ============================================================================
deployment:
  # Model versioning
  versioning:
    enabled: true
    format: "semantic"  # v1.0.0, v1.1.0, etc.

  # Model registry
  registry:
    path: "models/"
    naming: "{component_type}_v{version}.pt"

  # Metadata tracking
  metadata:
    track_training_data: true
    track_performance: true
    track_hyperparameters: true

  # Production settings
  production:
    device: "cuda"  # cuda or cpu
    batch_inference: true
    max_batch_size: 100

# ============================================================================
# Integration with Atomizer
# ============================================================================
atomizer_integration:
  # Dashboard integration
  dashboard:
    enabled: false  # Future: Show field visualizations in dashboard

  # Database integration
  database:
    enabled: false  # Future: Store predictions in Atomizer DB

  # API endpoints
  api:
    enabled: false  # Future: REST API for predictions
    port: 8000

# ============================================================================
# Monitoring & Logging
# ============================================================================
monitoring:
  # TensorBoard
  tensorboard:
    enabled: true
    log_dir: "runs/tensorboard"

  # Weights & Biases (optional)
  wandb:
    enabled: false
    project: "atomizerfield"
    entity: "your_team"

  # Logging level
  logging:
    level: "INFO"  # DEBUG, INFO, WARNING, ERROR
    file: "logs/atomizerfield.log"

# ============================================================================
# Experimental Features
# ============================================================================
experimental:
  # Nonlinear analysis
  nonlinear:
    enabled: false

  # Contact analysis
  contact:
    enabled: false

  # Composite materials
  composites:
    enabled: false

  # Modal analysis
  modal:
    enabled: false

  # Topology optimization
  topology:
    enabled: false
@@ -1,360 +0,0 @@
"""
batch_parser.py
Parse multiple NX Nastran cases in batch

AtomizerField Batch Parser v1.0.0
Efficiently processes multiple FEA cases for neural network training dataset creation.

Usage:
    python batch_parser.py <root_directory>

Example:
    python batch_parser.py ./training_data

Directory structure expected:
    training_data/
    ├── case_001/
    │   ├── input/model.bdf
    │   └── output/model.op2
    ├── case_002/
    │   ├── input/model.bdf
    │   └── output/model.op2
    └── ...
"""

import os
import sys
import json
from pathlib import Path
from datetime import datetime
import traceback

from neural_field_parser import NastranToNeuralFieldParser
from validate_parsed_data import NeuralFieldDataValidator


class BatchParser:
    """
    Batch parser for processing multiple FEA cases

    This enables rapid dataset creation for neural network training.
    Processes each case sequentially and generates a summary report.
    """

    def __init__(self, root_directory, validate=True, continue_on_error=True):
        """
        Initialize batch parser

        Args:
            root_directory (str or Path): Root directory containing case subdirectories
            validate (bool): Run validation after parsing each case
            continue_on_error (bool): Continue processing if a case fails
        """
        self.root_dir = Path(root_directory)
        self.validate = validate
        self.continue_on_error = continue_on_error
        self.results = []

    def find_cases(self):
        """
        Find all case directories in root directory

        A valid case directory contains:
        - input/ subdirectory with .bdf or .dat file
        - output/ subdirectory with .op2 file

        Returns:
            list: List of Path objects for valid case directories
        """
        cases = []

        for item in self.root_dir.iterdir():
            if not item.is_dir():
                continue

            # Check for required subdirectories and files
            input_dir = item / "input"
            output_dir = item / "output"

            if not input_dir.exists() or not output_dir.exists():
                continue

            # Check for BDF file
            bdf_files = list(input_dir.glob("*.bdf")) + list(input_dir.glob("*.dat"))
            if not bdf_files:
                continue

            # Check for OP2 file
            op2_files = list(output_dir.glob("*.op2"))
            if not op2_files:
                continue

            cases.append(item)

        return sorted(cases)

    def parse_case(self, case_dir):
        """
        Parse a single case

        Args:
            case_dir (Path): Path to case directory

        Returns:
            dict: Result dictionary with status and metadata
        """
        result = {
            "case": case_dir.name,
            "status": "unknown",
            "timestamp": datetime.now().isoformat()
        }

        try:
            print(f"\n{'='*60}")
            print(f"Processing: {case_dir.name}")
            print(f"{'='*60}")

            # Parse
            parser = NastranToNeuralFieldParser(case_dir)
            data = parser.parse_all()

            result["status"] = "parsed"
            result["nodes"] = data["mesh"]["statistics"]["n_nodes"]
            result["elements"] = data["mesh"]["statistics"]["n_elements"]

            # Get max displacement and stress if available
            if "displacement" in data.get("results", {}):
                result["max_displacement"] = data["results"]["displacement"].get("max_translation")

            if "stress" in data.get("results", {}):
                for stress_type, stress_data in data["results"]["stress"].items():
                    if "max_von_mises" in stress_data and stress_data["max_von_mises"] is not None:
                        result["max_stress"] = stress_data["max_von_mises"]
                        break

            # Validate if requested
            if self.validate:
                print(f"\nValidating {case_dir.name}...")
                validator = NeuralFieldDataValidator(case_dir)
                validation_passed = validator.validate()

                result["validated"] = validation_passed
                if validation_passed:
                    result["status"] = "success"
                else:
                    result["status"] = "validation_failed"
                    result["message"] = "Validation failed (see output above)"
            else:
                result["status"] = "success"

        except Exception as e:
            result["status"] = "failed"
            result["error"] = str(e)
            result["traceback"] = traceback.format_exc()

            print(f"\n✗ ERROR: {e}")
            if not self.continue_on_error:
                raise

        return result

    def batch_parse(self):
        """
        Parse all cases in root directory

        Returns:
            list: List of result dictionaries
        """
        print("\n" + "="*60)
        print("AtomizerField Batch Parser v1.0")
        print("="*60)
        print(f"\nRoot directory: {self.root_dir}")

        # Find all cases
        cases = self.find_cases()

        if not cases:
            print(f"\n✗ No valid cases found in {self.root_dir}")
            print("\nCase directories should contain:")
            print("  input/model.bdf (or model.dat)")
            print("  output/model.op2")
            return []

        print(f"\nFound {len(cases)} case(s) to process:")
        for case in cases:
            print(f"  - {case.name}")

        # Process each case
        self.results = []
        start_time = datetime.now()

        for i, case in enumerate(cases, 1):
            print(f"\n[{i}/{len(cases)}] Processing {case.name}...")

            result = self.parse_case(case)
            self.results.append(result)

            # Show progress
            success_count = sum(1 for r in self.results if r["status"] == "success")
            print(f"\nProgress: {i}/{len(cases)} processed, {success_count} successful")

        end_time = datetime.now()
        elapsed = (end_time - start_time).total_seconds()

        # Print summary
        self._print_summary(elapsed)

        # Save summary to JSON
        self._save_summary()

        return self.results

    def _print_summary(self, elapsed_time):
        """Print batch processing summary"""
        print("\n" + "="*60)
        print("BATCH PROCESSING COMPLETE")
        print("="*60)

        success_count = sum(1 for r in self.results if r["status"] == "success")
        failed_count = sum(1 for r in self.results if r["status"] == "failed")
        validation_failed = sum(1 for r in self.results if r["status"] == "validation_failed")

        print(f"\nTotal cases: {len(self.results)}")
        print(f"  ✓ Successful: {success_count}")
        if validation_failed > 0:
            print(f"  ⚠ Validation failed: {validation_failed}")
        if failed_count > 0:
            print(f"  ✗ Failed: {failed_count}")

        print(f"\nProcessing time: {elapsed_time:.1f} seconds")
        if len(self.results) > 0:
            print(f"Average time per case: {elapsed_time/len(self.results):.1f} seconds")

        # Detailed results
        print("\nDetailed Results:")
        print("-" * 60)
        for result in self.results:
            status_symbol = {
                "success": "✓",
                "failed": "✗",
                "validation_failed": "⚠",
                "parsed": "•"
            }.get(result["status"], "?")

            case_info = f"{status_symbol} {result['case']}: {result['status']}"

            if "nodes" in result and "elements" in result:
                case_info += f" ({result['nodes']:,} nodes, {result['elements']:,} elements)"

            if "max_stress" in result:
                case_info += f" | Max VM: {result['max_stress']:.2f} MPa"

            if result["status"] == "failed" and "error" in result:
                case_info += f"\n    Error: {result['error']}"

            print(f"  {case_info}")

        print("\n" + "="*60)

        if success_count == len(self.results):
            print("✓ ALL CASES PROCESSED SUCCESSFULLY")
        elif success_count > 0:
            print(f"⚠ {success_count}/{len(self.results)} CASES SUCCESSFUL")
        else:
            print("✗ ALL CASES FAILED")

        print("="*60 + "\n")

    def _save_summary(self):
        """Save batch processing summary to JSON file"""
        summary_file = self.root_dir / "batch_processing_summary.json"

        summary = {
            "batch_info": {
                "root_directory": str(self.root_dir),
                "timestamp": datetime.now().isoformat(),
                "total_cases": len(self.results),
                "successful_cases": sum(1 for r in self.results if r["status"] == "success"),
                "failed_cases": sum(1 for r in self.results if r["status"] == "failed"),
                "validation_enabled": self.validate
            },
            "cases": self.results
        }

        with open(summary_file, 'w') as f:
            json.dump(summary, f, indent=2)

        print(f"Summary saved to: {summary_file}\n")


def main():
    """
    Main entry point for batch parser
    """
    if len(sys.argv) < 2:
        print("\nAtomizerField Batch Parser v1.0")
        print("="*60)
        print("\nUsage:")
        print("  python batch_parser.py <root_directory> [options]")
        print("\nOptions:")
        print("  --no-validate     Skip validation step")
        print("  --stop-on-error   Stop processing if a case fails")
        print("\nExample:")
        print("  python batch_parser.py ./training_data")
        print("\nDirectory structure:")
        print("  training_data/")
        print("  ├── case_001/")
        print("  │   ├── input/model.bdf")
        print("  │   └── output/model.op2")
        print("  ├── case_002/")
        print("  │   ├── input/model.bdf")
        print("  │   └── output/model.op2")
        print("  └── ...")
        print()
        sys.exit(1)

    root_dir = sys.argv[1]

    # Parse options
    validate = "--no-validate" not in sys.argv
    continue_on_error = "--stop-on-error" not in sys.argv

    if not Path(root_dir).exists():
        print(f"ERROR: Directory not found: {root_dir}")
        sys.exit(1)

    # Create batch parser
    batch_parser = BatchParser(
        root_dir,
        validate=validate,
        continue_on_error=continue_on_error
    )

    # Process all cases
    try:
        results = batch_parser.batch_parse()

        # Exit with appropriate code
        if not results:
            sys.exit(1)

        success_count = sum(1 for r in results if r["status"] == "success")
        if success_count == len(results):
            sys.exit(0)  # All successful
        elif success_count > 0:
            sys.exit(2)  # Partial success
        else:
            sys.exit(1)  # All failed

    except KeyboardInterrupt:
        print("\n\nBatch processing interrupted by user")
        sys.exit(130)
    except Exception as e:
        print(f"\n\nFATAL ERROR: {e}")
        traceback.print_exc()
        sys.exit(1)


if __name__ == "__main__":
    main()
@@ -1,174 +0,0 @@
"""
Check actual values from OP2 to understand what's correct
"""
from pyNastran.op2.op2 import OP2
from pyNastran.bdf.bdf import BDF
import numpy as np

print("="*60)
print("CHECKING ACTUAL FEA VALUES")
print("="*60)

# Load OP2
op2_file = 'test_case_beam/output/model.op2'
print(f"\nLoading OP2: {op2_file}")
op2 = OP2()
op2.read_op2(op2_file)

# Load BDF
bdf_file = 'test_case_beam/input/model.bdf'
print(f"Loading BDF: {bdf_file}")
bdf = BDF()
bdf.read_bdf(bdf_file)

print("\n1. UNIT SYSTEM:")
print("-"*60)
if hasattr(bdf, 'params') and 'UNITSYS' in bdf.params:
    unitsys = str(bdf.params['UNITSYS'].values[0])
    print(f"  PARAM UNITSYS: {unitsys}")
    if 'MN' in unitsys and 'MM' in unitsys:
        print("  This is MegaNewton-Millimeter system:")
        print("  - Length: mm")
        print("  - Force: MN (MegaNewton)")
        print("  - Stress: Pa (base SI)")
        print("  - Mass: tonne")
else:
    print("  No UNITSYS parameter found")

print("\n2. MATERIAL PROPERTIES:")
print("-"*60)
if hasattr(bdf, 'materials') and bdf.materials:
    mat = list(bdf.materials.values())[0]
    print(f"  Material ID: {mat.mid}")
    print(f"  Type: {mat.type}")
    if hasattr(mat, 'e') and mat.e:
        print(f"  Young's modulus E: {mat.e:.2e}")
        print(f"    -> E = {mat.e/1e9:.1f} GPa (if units are Pa)")
        print(f"    -> E = {mat.e/1e6:.1f} GPa (if units are kPa)")
        print(f"    -> E = {mat.e/1e3:.1f} GPa (if units are MPa)")
        print("  Steel E ~= 200 GPa, so units must be Pa")

print("\n3. APPLIED FORCES FROM BDF:")
print("-"*60)
total_force = 0
n_forces = 0
if hasattr(bdf, 'loads') and bdf.loads:
    for load_id, load_list in bdf.loads.items():
        for load in load_list:
            if hasattr(load, 'mag'):
                print(f"  Load ID {load_id}, Node {load.node}: {load.mag:.2e} (unit depends on UNITSYS)")
                total_force += abs(load.mag)
                n_forces += 1
                if n_forces >= 3:
                    break
        if n_forces >= 3:
            break
print(f"  Total applied force (first 3): {total_force:.2e}")
print(f"    In MN: {total_force:.2e} MN")
print(f"    In N:  {total_force*1e6:.2e} N")

print("\n4. DISPLACEMENT FROM OP2:")
print("-"*60)
if hasattr(op2, 'displacements') and op2.displacements:
    for subcase_id, disp in op2.displacements.items():
        # Get translation only (DOF 1-3)
        translations = disp.data[0, :, :3]
        max_trans = np.max(np.abs(translations))
        max_idx = np.unravel_index(np.argmax(np.abs(translations)), translations.shape)

        print(f"  Subcase {subcase_id}:")
        print(f"    Max translation: {max_trans:.6f} mm")
        print(f"    Location: node index {max_idx[0]}, DOF {max_idx[1]}")

print("\n5. STRESS FROM OP2 (RAW VALUES):")
print("-"*60)
# Try the new API first, then fall back to the legacy attribute
stress_dict = None
if hasattr(op2, 'op2_results') and hasattr(op2.op2_results, 'stress'):
    if hasattr(op2.op2_results.stress, 'cquad4_stress'):
        stress_dict = op2.op2_results.stress.cquad4_stress
elif hasattr(op2, 'cquad4_stress'):
    try:
        stress_dict = op2.cquad4_stress
    except Exception:
        pass

if stress_dict:
    for subcase_id, stress in stress_dict.items():
        # Stress data columns depend on element type
        # For CQUAD4: typically [fiber_distance, oxx, oyy, txy, angle, major, minor, von_mises]
        stress_data = stress.data[0, :, :]

        print(f"  Subcase {subcase_id}:")
        print(f"    Data shape: {stress_data.shape} (elements × stress_components)")
        print(f"    Stress components: {stress_data.shape[1]}")

        # Von Mises is usually the last column
        von_mises = stress_data[:, -1]
        print(f"\n    Von Mises stress (column {stress_data.shape[1]-1}):")
        print(f"      Min:    {np.min(von_mises):.2e}")
        print(f"      Max:    {np.max(von_mises):.2e}")
        print(f"      Mean:   {np.mean(von_mises):.2e}")
        print(f"      Median: {np.median(von_mises):.2e}")

        print("\n    Principal stresses (columns 5-6 typically):")
        if stress_data.shape[1] >= 7:
            major = stress_data[:, -3]
            minor = stress_data[:, -2]
            print(f"      Major max: {np.max(major):.2e}")
            print(f"      Minor min: {np.min(minor):.2e}")

        print("\n    Direct stresses (columns 1-2 typically):")
        if stress_data.shape[1] >= 3:
            sxx = stress_data[:, 1]
            syy = stress_data[:, 2]
            print(f"      sigmaxx range: {np.min(sxx):.2e} to {np.max(sxx):.2e}")
            print(f"      sigmayy range: {np.min(syy):.2e} to {np.max(syy):.2e}")

print("\n6. REACTIONS FROM OP2:")
print("-"*60)
if hasattr(op2, 'grid_point_forces') and op2.grid_point_forces:
    for subcase_id, gpf in op2.grid_point_forces.items():
        forces = gpf.data[0]
        print(f"  Subcase {subcase_id}:")
        print(f"    Data shape: {forces.shape}")
        print(f"    Max reaction (all DOF): {np.max(np.abs(forces)):.2e}")

        # Get force components (first 3 columns usually Fx, Fy, Fz)
        if forces.shape[1] >= 3:
            fx = forces[:, 0]
            fy = forces[:, 1]
            fz = forces[:, 2]
            print(f"    Fx range: {np.min(fx):.2e} to {np.max(fx):.2e}")
            print(f"    Fy range: {np.min(fy):.2e} to {np.max(fy):.2e}")
            print(f"    Fz range: {np.min(fz):.2e} to {np.max(fz):.2e}")

print("\n7. YOUR STATED VALUES:")
print("-"*60)
print("  You said:")
print("  - Stress around 117 MPa")
print("  - Force around 152,200 N")
print("\n  From OP2 raw data above:")
print("  - If Von Mises max = 1.17e+05, this is:")
print("    -> 117,000 Pa = 117 kPa = 0.117 MPa (if UNITSYS=MN-MM)")
print("    -> OR 117,000 MPa (if somehow in MPa already)")
print("\n  For force 152,200 N:")
print("  - If reactions max = 1.52e+08 from OP2:")
print("    -> 152,000,000 Pa or N·mm⁻² (if MN-MM system)")
print("    -> 152 MN = 152,000,000 N (conversion)")
print("    -> OR your expected value is 0.1522 MN = 152,200 N")

print("\n8. DIRECTIONS AND TENSORS:")
print("-"*60)
print("  Stress tensor (symmetric 3×3):")
print("    sigma = [sigmaxx tauxy   tauxz  ]")
print("            [tauxy   sigmayy tauyz  ]")
print("            [tauxz   tauyz   sigmazz]")
print("\n  Stored in OP2 for shells (CQUAD4) as:")
print("    [fiber_dist, sigmaxx, sigmayy, tauxy, angle, sigma_major, sigma_minor, von_mises]")
|
||||
print("\n Displacement vector (6 DOF per node):")
|
||||
print(" [Tx, Ty, Tz, Rx, Ry, Rz]")
|
||||
print("\n Force/Reaction vector (6 DOF per node):")
|
||||
print(" [Fx, Fy, Fz, Mx, My, Mz]")
|
||||
|
||||
print("\n" + "="*60)
|
||||
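The direct-stress columns above can be used to cross-check the stored von Mises column: for a plane-stress shell state the standard formula depends only on sigma_xx, sigma_yy, and tau_xy. A minimal sketch (the function name and usage are illustrative, not part of the removed script):

```python
import math

def von_mises_plane_stress(sxx: float, syy: float, txy: float) -> float:
    """Von Mises equivalent stress for a plane-stress state (sigma_zz = 0)."""
    return math.sqrt(sxx**2 - sxx * syy + syy**2 + 3.0 * txy**2)

# Uniaxial tension: von Mises equals the applied stress.
print(von_mises_plane_stress(100.0, 0.0, 0.0))  # 100.0
# Pure shear: von Mises = sqrt(3) * tau.
print(von_mises_plane_stress(0.0, 0.0, 50.0))
```

Comparing this against the last column of `stress_data` element by element confirms (or refutes) the assumed column layout, regardless of units.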
@@ -1,122 +0,0 @@
"""
Check actual units from pyNastran OP2 reader
"""
from pyNastran.op2.op2 import OP2
import numpy as np

print("="*60)
print("CHECKING OP2 FILE UNITS")
print("="*60)

# Load OP2 file
op2_file = 'test_case_beam/output/model.op2'
print(f"\nLoading: {op2_file}")

op2 = OP2()
op2.read_op2(op2_file)

print("\n1. DISPLACEMENT DATA:")
print("-"*60)
if hasattr(op2, 'displacements') and op2.displacements:
    for subcase_id, disp in op2.displacements.items():
        print(f" Subcase {subcase_id}:")
        print(f" Data shape: {disp.data.shape}")
        print(f" Max displacement (all DOF): {np.max(np.abs(disp.data)):.6f}")
        print(f" Translation max (DOF 1-3): {np.max(np.abs(disp.data[0, :, :3])):.6f}")

        # Check if pyNastran has unit info
        if hasattr(disp, 'units'):
            print(f" Units: {disp.units}")
        else:
            print(f" Units: Not specified by pyNastran")

print("\n2. STRESS DATA:")
print("-"*60)
# Try new API first
stress_dict = None
if hasattr(op2, 'op2_results') and hasattr(op2.op2_results, 'stress'):
    if hasattr(op2.op2_results.stress, 'cquad4_stress'):
        stress_dict = op2.op2_results.stress.cquad4_stress
elif hasattr(op2, 'cquad4_stress'):
    try:
        stress_dict = op2.cquad4_stress
    except Exception:
        stress_dict = None

if stress_dict:
    for subcase_id, stress in stress_dict.items():
        print(f" Subcase {subcase_id}:")
        print(f" Data shape: {stress.data.shape}")

        # Get von Mises stress (last column)
        von_mises = stress.data[0, :, -1]
        print(f" Von Mises min: {np.min(von_mises):.2e}")
        print(f" Von Mises max: {np.max(von_mises):.2e}")
        print(f" Von Mises mean: {np.mean(von_mises):.2e}")

        # Check if pyNastran has unit info
        if hasattr(stress, 'units'):
            print(f" Units: {stress.units}")
        else:
            print(f" Units: Not specified by pyNastran")

        # Check data type names
        if hasattr(stress, 'data_names'):
            print(f" Data names: {stress.data_names}")
else:
    print(" No CQUAD4 stress data found")

print("\n3. REACTION FORCES:")
print("-"*60)
if hasattr(op2, 'grid_point_forces') and op2.grid_point_forces:
    for subcase_id, forces in op2.grid_point_forces.items():
        print(f" Subcase {subcase_id}:")
        print(f" Data shape: {forces.data.shape}")
        print(f" Max force: {np.max(np.abs(forces.data)):.2e}")

        if hasattr(forces, 'units'):
            print(f" Units: {forces.units}")
        else:
            print(f" Units: Not specified by pyNastran")

print("\n4. BDF FILE UNITS:")
print("-"*60)
# Try to read BDF to check units
from pyNastran.bdf.bdf import BDF
bdf_file = 'test_case_beam/input/model.bdf'
print(f"Loading: {bdf_file}")

bdf = BDF()
bdf.read_bdf(bdf_file)

# Check for PARAM cards that define units
if hasattr(bdf, 'params') and bdf.params:
    print(" PARAM cards found:")
    for param_name, param in bdf.params.items():
        if 'UNIT' in param_name.upper() or 'WTMASS' in param_name.upper():
            print(f" {param_name}: {param}")

# Check material properties (can infer units from magnitude)
if hasattr(bdf, 'materials') and bdf.materials:
    print("\n Material properties (first material):")
    mat = list(bdf.materials.values())[0]
    print(f" Type: {mat.type}")
    if hasattr(mat, 'e') and mat.e:
        print(f" Young's modulus E: {mat.e:.2e}")
        print(f" If E~200,000 then units are MPa (steel)")
        print(f" If E~200,000,000,000 then units are Pa (steel)")
        print(f" If E~29,000,000 then units are psi (steel)")

print("\n5. NASTRAN UNIT SYSTEM ANALYSIS:")
print("-"*60)
print(" Common Nastran unit systems:")
print(" - SI: m, kg, N, Pa, s")
print(" - mm-tonne-N: mm, tonne, N, MPa, s")
print(" - mm-kg-N: mm, kg, N, MPa, s")
print(" - in-lb-s: in, lb, lbf, psi, s")
print("")
print(" Our metadata claims: mm, kg, N, MPa")
print(" But pyNastran might return stress in Pa (base SI)")
print(" This would explain the 1000× error!")

print("\n" + "="*60)
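The script above reasons about unit mismatches as powers of ten (N vs kN is 10³, Pa vs MPa is 10⁶). That comparison can be sketched as a tiny helper; `guess_unit_factor` is a hypothetical name, not part of the removed code:

```python
import math

def guess_unit_factor(observed: float, expected: float) -> int:
    """Nearest power-of-ten ratio between an observed and an expected magnitude.

    3 suggests an N-vs-kN style mismatch; 6 suggests Pa vs MPa.
    """
    return round(math.log10(observed / expected))

print(guess_unit_factor(1.52e8, 1.522e5))  # 3 -> factor ~1e3 (kN vs N)
print(guess_unit_factor(1.17e8, 117.0))    # 6 -> factor ~1e6 (Pa vs MPa)
```

The example values mirror the reaction (1.52e+08 vs 152,200 N) and stress (117 MPa) figures discussed in these debug scripts.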
@@ -1,104 +0,0 @@
"""
Script to investigate unit conversion issues in AtomizerField
"""
import json
import h5py
import numpy as np

print("="*60)
print("UNIT VALIDATION CHECK")
print("="*60)

# Load JSON metadata
with open('test_case_beam/neural_field_data.json', 'r') as f:
    data = json.load(f)

# Check units
print("\n1. UNITS FROM METADATA:")
print("-"*60)
units = data['metadata']['units']
for key, value in units.items():
    print(f" {key}: {value}")

# Check loads
print("\n2. APPLIED LOADS:")
print("-"*60)
loads = data['loads']
print(f" Available load types: {list(loads.keys())}")

if 'point_forces' in loads:
    point_forces = loads['point_forces']
    print(f" Point forces: {len(point_forces)} applied")

    if point_forces:
        print("\n Sample forces (first 3):")
        for i in range(min(3, len(point_forces))):
            force = point_forces[i]
            print(f" Force {i+1}:")
            print(f" Node: {force['node']}")
            print(f" Magnitude: {force['magnitude']:.2e} N")
            print(f" Direction: {force['direction']}")

# Load HDF5 data
print("\n3. REACTION FORCES FROM HDF5:")
print("-"*60)
with h5py.File('test_case_beam/neural_field_data.h5', 'r') as f:
    reactions = f['results/reactions'][:]

    # Get statistics
    non_zero_reactions = reactions[np.abs(reactions) > 1e-10]
    print(f" Shape: {reactions.shape}")
    print(f" Min: {np.min(reactions):.2e}")
    print(f" Max: {np.max(reactions):.2e}")
    print(f" Mean (non-zero): {np.mean(np.abs(non_zero_reactions)):.2e}")
    print(f" Max absolute: {np.max(np.abs(reactions)):.2e}")

    # Check displacement for comparison
    displacement = f['results/displacement'][:]
    max_disp = np.max(np.abs(displacement[:, :3]))  # Translation only
    print(f"\n Max displacement: {max_disp:.6f} mm")

# Check stress
print("\n4. STRESS VALUES FROM HDF5:")
print("-"*60)
with h5py.File('test_case_beam/neural_field_data.h5', 'r') as f:
    stress = f['results/stress/cquad4_stress/data'][:]

    # Von Mises stress is in column 7 (0-indexed)
    von_mises = stress[:, 7]

    print(f" Shape: {stress.shape}")
    print(f" Von Mises min: {np.min(von_mises):.2e} MPa")
    print(f" Von Mises max: {np.max(von_mises):.2e} MPa")
    print(f" Von Mises mean: {np.mean(von_mises):.2e} MPa")

# Check if values make sense
print("\n5. SANITY CHECK:")
print("-"*60)
print(f" Units claimed: force=N, stress=MPa, length=mm")
print(f" Max reaction force: {np.max(np.abs(reactions)):.2e} N")
print(f" Max von Mises: {np.max(von_mises):.2e} MPa")
print(f" Max displacement: {max_disp:.6f} mm")

# Typical beam: if force is 1000 N, stress should be ~10-100 MPa
# If reaction is 152 million N, that's 152,000 kN - VERY high!
max_reaction = np.max(np.abs(reactions))
max_stress_val = np.max(von_mises)

print(f"\n If force unit is actually kN instead of N:")
print(f" Max reaction: {max_reaction/1000:.2e} kN")
print(f" If stress unit is actually Pa instead of MPa:")
print(f" Max stress: {max_stress_val/1e6:.2e} MPa")

print("\n6. HYPOTHESIS:")
print("-"*60)
if max_reaction > 1e6:
    print(" [!] Reaction forces seem TOO LARGE (>1 MN)")
    print(" [!] Possible issue: pyNastran returns forces in different units")
    print(" [!] Check: Nastran may export in base units (N) while expecting kN")

if max_stress_val > 1e6:
    print(" [!] Stresses seem TOO LARGE (>1000 MPa)")
    print(" [!] Possible issue: pyNastran returns stress in Pa, not MPa")

print("\n" + "="*60)
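The HYPOTHESIS section above flags suspicious magnitudes inline. The same check as a reusable function, sketched with illustrative names (the 1000 MPa stress threshold here follows the printed message rather than the script's `1e6` comparison):

```python
def sanity_flags(max_reaction_n: float, max_stress_mpa: float) -> list:
    """Flag result magnitudes that suggest a unit-system mismatch."""
    flags = []
    if max_reaction_n > 1e6:  # > 1 MN is implausible for a small test beam
        flags.append("reaction > 1 MN: check force units (N vs kN)")
    if max_stress_mpa > 1e3:  # > 1000 MPa exceeds typical steel strength
        flags.append("stress > 1000 MPa: check stress units (Pa vs MPa)")
    return flags

print(sanity_flags(1.52e8, 1.17e5))  # both flags fire for the values above
print(sanity_flags(1.5e3, 117.0))    # [] for plausible values
```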
@@ -1,921 +0,0 @@
"""
neural_field_parser.py
Parses NX Nastran files into Neural Field training data

AtomizerField Data Parser v1.0.0
Converts NX Nastran BDF/OP2 files into standardized neural field training format.
This format is designed to be future-proof for years of neural network training.

Usage:
    python neural_field_parser.py <case_directory>

Example:
    python neural_field_parser.py training_case_001
"""

import json
import numpy as np
import h5py
from pathlib import Path
from datetime import datetime
import hashlib
import warnings

# pyNastran imports
try:
    from pyNastran.bdf.bdf import BDF
    from pyNastran.op2.op2 import OP2
except ImportError:
    print("ERROR: pyNastran is required. Install with: pip install pyNastran")
    raise


class NastranToNeuralFieldParser:
    """
    Parses Nastran BDF/OP2 files into Neural Field data structure v1.0

    This parser extracts complete field data (stress, displacement, strain at every node/element)
    rather than just scalar maximum values. This enables neural networks to learn complete
    physics fields for 1000x faster structural optimization.

    Data Structure v1.0:
    -------------------
    - metadata: Version, timestamps, analysis info, units
    - mesh: Complete node coordinates, element connectivity
    - materials: Full material properties (E, nu, rho, etc.)
    - boundary_conditions: All constraints (SPC, MPC)
    - loads: All loading conditions (forces, pressures, gravity, thermal)
    - results: Complete field results (displacement, stress, strain at ALL points)

    Attributes:
        case_dir (Path): Directory containing input/output subdirectories
        bdf_file (Path): Path to Nastran input deck (.bdf or .dat)
        op2_file (Path): Path to Nastran binary results (.op2)
        bdf (BDF): pyNastran BDF reader object
        op2 (OP2): pyNastran OP2 reader object
        neural_field_data (dict): Complete parsed data structure
    """

    def __init__(self, case_directory):
        """
        Initialize parser with case directory

        Args:
            case_directory (str or Path): Path to case directory containing:
                - input/model.bdf (or model.dat)
                - output/model.op2
        """
        self.case_dir = Path(case_directory)

        # Find BDF file (try both .bdf and .dat extensions)
        bdf_candidates = list((self.case_dir / "input").glob("model.bdf")) + \
                         list((self.case_dir / "input").glob("model.dat"))
        if not bdf_candidates:
            raise FileNotFoundError(
                f"No model.bdf or model.dat found in {self.case_dir / 'input'}/"
            )
        self.bdf_file = bdf_candidates[0]

        # Find OP2 file
        op2_candidates = list((self.case_dir / "output").glob("model.op2"))
        if not op2_candidates:
            raise FileNotFoundError(
                f"No model.op2 found in {self.case_dir / 'output'}/"
            )
        self.op2_file = op2_candidates[0]

        print(f"Found BDF: {self.bdf_file.name}")
        print(f"Found OP2: {self.op2_file.name}")

        # Initialize readers with minimal debug output
        self.bdf = BDF(debug=False)
        self.op2 = OP2(debug=False)

        # Initialize data structure v1.0
        self.neural_field_data = {
            "metadata": {},
            "geometry": {},
            "mesh": {},
            "materials": {},
            "boundary_conditions": {},
            "loads": {},
            "results": {}
        }

    def parse_all(self):
        """
        Main parsing function - extracts all data from BDF/OP2 files

        Returns:
            dict: Complete neural field data structure
        """
        print("\n" + "="*60)
        print("Starting AtomizerField Neural Field Parser v1.0")
        print("="*60)

        # Parse input deck
        print("\n[1/6] Reading BDF file...")
        self.bdf.read_bdf(str(self.bdf_file))
        print(f" Loaded {len(self.bdf.nodes)} nodes, {len(self.bdf.elements)} elements")

        # Parse results
        print("\n[2/6] Reading OP2 file...")
        self.op2.read_op2(str(self.op2_file))
        # Check for sol attribute (may not exist in all pyNastran versions)
        sol_num = getattr(self.op2, 'sol', 'Unknown')
        print(f" Loaded solution: SOL {sol_num}")

        # Extract all data
        print("\n[3/6] Extracting metadata...")
        self.extract_metadata()

        print("\n[4/6] Extracting mesh data...")
        self.extract_mesh()

        print("\n[5/6] Extracting materials, BCs, and loads...")
        self.extract_materials()
        self.extract_boundary_conditions()
        self.extract_loads()

        print("\n[6/6] Extracting field results...")
        self.extract_results()

        # Save to file
        print("\nSaving data to disk...")
        self.save_data()

        print("\n" + "="*60)
        print("Parsing complete! [OK]")
        print("="*60)
        return self.neural_field_data

    def extract_metadata(self):
        """
        Extract metadata and analysis information

        This includes:
        - Data format version (v1.0.0)
        - Timestamps
        - Analysis type (SOL 101, 103, etc.)
        - Units system
        - File hashes for provenance
        """
        # Generate file hash for data provenance
        with open(self.bdf_file, 'rb') as f:
            bdf_hash = hashlib.sha256(f.read()).hexdigest()
        with open(self.op2_file, 'rb') as f:
            op2_hash = hashlib.sha256(f.read()).hexdigest()

        # Extract title if available
        title = ""
        if hasattr(self.bdf, 'case_control_deck') and self.bdf.case_control_deck:
            if hasattr(self.bdf.case_control_deck, 'title'):
                title = str(self.bdf.case_control_deck.title)

        # Get solution type if available
        sol_num = getattr(self.op2, 'sol', 'Unknown')

        self.neural_field_data["metadata"] = {
            "version": "1.0.0",
            "created_at": datetime.now().isoformat(),
            "source": "NX_Nastran",
            "case_directory": str(self.case_dir),
            "case_name": self.case_dir.name,
            "analysis_type": f"SOL_{sol_num}",
            "title": title,
            "file_hashes": {
                "bdf": bdf_hash,
                "op2": op2_hash
            },
            "units": {
                "length": "mm",  # Standard NX units
                "force": "N",
                "stress": "MPa",
                "mass": "kg",
                "temperature": "C"
            },
            "parser_version": "1.0.0",
            "notes": "Complete field data for neural network training"
        }
        print(f" Analysis: {self.neural_field_data['metadata']['analysis_type']}")
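`extract_metadata` above reads each file fully into memory before hashing. For provenance hashes of large OP2 files, a chunked variant avoids that; a stdlib-only sketch (function name is illustrative):

```python
import hashlib
import tempfile
from pathlib import Path

def file_sha256(path: Path, chunk_size: int = 65536) -> str:
    """SHA-256 of a file, read in fixed-size chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Self-check against the whole-file approach on a small temp file.
with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "model.bdf"
    p.write_bytes(b"CEND\n")
    print(file_sha256(p) == hashlib.sha256(b"CEND\n").hexdigest())  # True
```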
    def extract_mesh(self):
        """
        Extract complete mesh data from BDF

        This preserves:
        - All node coordinates (global coordinate system)
        - All element connectivity
        - Element types (solid, shell, beam, rigid)
        - Material and property IDs for each element
        """
        print(" Extracting nodes...")

        # Nodes - store in sorted order for consistent indexing
        nodes = []
        node_ids = []
        for nid, node in sorted(self.bdf.nodes.items()):
            node_ids.append(nid)
            # Get position in global coordinate system
            pos = node.get_position()
            nodes.append([pos[0], pos[1], pos[2]])

        nodes_array = np.array(nodes, dtype=np.float64)

        print(f" Extracted {len(nodes)} nodes")
        print(f" Extracting elements...")

        # Elements - organize by type for efficient neural network processing
        element_data = {
            "solid": [],
            "shell": [],
            "beam": [],
            "rigid": []
        }

        element_type_counts = {}

        for eid, elem in sorted(self.bdf.elements.items()):
            elem_type = elem.type
            element_type_counts[elem_type] = element_type_counts.get(elem_type, 0) + 1

            # Solid elements (3D stress states)
            if elem_type in ['CTETRA', 'CHEXA', 'CPENTA', 'CTETRA10', 'CHEXA20', 'CPENTA15']:
                element_data["solid"].append({
                    "id": eid,
                    "type": elem_type,
                    "nodes": list(elem.node_ids),
                    "material_id": elem.pid,  # Property ID which links to material
                    "property_id": elem.pid if hasattr(elem, 'pid') else None
                })

            # Shell elements (2D plane stress)
            elif elem_type in ['CQUAD4', 'CTRIA3', 'CQUAD8', 'CTRIA6', 'CQUAD', 'CTRIA']:
                thickness = None
                try:
                    # Get thickness from property
                    if hasattr(elem, 'pid') and elem.pid in self.bdf.properties:
                        prop = self.bdf.properties[elem.pid]
                        if hasattr(prop, 't'):
                            thickness = prop.t
                except Exception:
                    pass

                element_data["shell"].append({
                    "id": eid,
                    "type": elem_type,
                    "nodes": list(elem.node_ids),
                    "material_id": elem.pid,
                    "property_id": elem.pid,
                    "thickness": thickness
                })

            # Beam elements (1D elements)
            elif elem_type in ['CBAR', 'CBEAM', 'CROD', 'CONROD']:
                element_data["beam"].append({
                    "id": eid,
                    "type": elem_type,
                    "nodes": list(elem.node_ids),
                    "material_id": elem.pid if hasattr(elem, 'pid') else None,
                    "property_id": elem.pid if hasattr(elem, 'pid') else None
                })

            # Rigid elements (kinematic constraints)
            elif elem_type in ['RBE2', 'RBE3', 'RBAR', 'RROD']:
                element_data["rigid"].append({
                    "id": eid,
                    "type": elem_type,
                    "nodes": list(elem.node_ids)
                })

        print(f" Extracted {len(self.bdf.elements)} elements:")
        for etype, count in element_type_counts.items():
            print(f" {etype}: {count}")

        # Calculate mesh bounding box for reference
        bbox_min = nodes_array.min(axis=0)
        bbox_max = nodes_array.max(axis=0)
        bbox_size = bbox_max - bbox_min

        # Store mesh data
        self.neural_field_data["mesh"] = {
            "statistics": {
                "n_nodes": len(nodes),
                "n_elements": len(self.bdf.elements),
                "element_types": {
                    "solid": len(element_data["solid"]),
                    "shell": len(element_data["shell"]),
                    "beam": len(element_data["beam"]),
                    "rigid": len(element_data["rigid"])
                },
                "element_type_breakdown": element_type_counts
            },
            "bounding_box": {
                "min": bbox_min.tolist(),
                "max": bbox_max.tolist(),
                "size": bbox_size.tolist()
            },
            "nodes": {
                "ids": node_ids,
                "coordinates": nodes_array.tolist(),  # Will be stored in HDF5
                "shape": list(nodes_array.shape),
                "dtype": str(nodes_array.dtype)
            },
            "elements": element_data
        }
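The bounding-box computation in `extract_mesh` (per-axis min, max, and size over the node array) can be sketched in plain Python without numpy; the helper name is illustrative:

```python
def bounding_box(coords):
    """Axis-aligned bounding box of an iterable of (x, y, z) points."""
    xs, ys, zs = zip(*coords)
    mins = [min(xs), min(ys), min(zs)]
    maxs = [max(xs), max(ys), max(zs)]
    size = [hi - lo for lo, hi in zip(mins, maxs)]
    return mins, maxs, size

print(bounding_box([(0, 0, 0), (100, 20, 5), (50, -10, 2)]))
# ([0, -10, 0], [100, 20, 5], [100, 30, 5])
```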
    def extract_materials(self):
        """
        Extract all material properties

        Captures complete material definitions including:
        - Isotropic (MAT1): E, nu, rho, G, alpha
        - Orthotropic (MAT8, MAT9): directional properties
        - Stress limits for validation
        """
        print(" Extracting materials...")

        materials = []
        for mid, mat in sorted(self.bdf.materials.items()):
            mat_data = {
                "id": mid,
                "type": mat.type
            }

            if mat.type == 'MAT1':  # Isotropic material
                mat_data.update({
                    "E": float(mat.e) if mat.e is not None else None,       # Young's modulus
                    "nu": float(mat.nu) if mat.nu is not None else None,    # Poisson's ratio
                    "rho": float(mat.rho) if mat.rho is not None else None, # Density
                    "G": float(mat.g) if mat.g is not None else None,       # Shear modulus
                    "alpha": float(mat.a) if hasattr(mat, 'a') and mat.a is not None else None,  # Thermal expansion
                    "tref": float(mat.tref) if hasattr(mat, 'tref') and mat.tref is not None else None,
                })

                # Stress limits (if defined)
                try:
                    if hasattr(mat, 'St') and callable(mat.St):
                        mat_data["ST"] = float(mat.St()) if mat.St() is not None else None
                    if hasattr(mat, 'Sc') and callable(mat.Sc):
                        mat_data["SC"] = float(mat.Sc()) if mat.Sc() is not None else None
                    if hasattr(mat, 'Ss') and callable(mat.Ss):
                        mat_data["SS"] = float(mat.Ss()) if mat.Ss() is not None else None
                except Exception:
                    pass

            elif mat.type == 'MAT8':  # Orthotropic shell material
                mat_data.update({
                    "E1": float(mat.e11) if hasattr(mat, 'e11') and mat.e11 is not None else None,
                    "E2": float(mat.e22) if hasattr(mat, 'e22') and mat.e22 is not None else None,
                    "nu12": float(mat.nu12) if hasattr(mat, 'nu12') and mat.nu12 is not None else None,
                    "G12": float(mat.g12) if hasattr(mat, 'g12') and mat.g12 is not None else None,
                    "G1z": float(mat.g1z) if hasattr(mat, 'g1z') and mat.g1z is not None else None,
                    "G2z": float(mat.g2z) if hasattr(mat, 'g2z') and mat.g2z is not None else None,
                    "rho": float(mat.rho) if hasattr(mat, 'rho') and mat.rho is not None else None,
                })

            materials.append(mat_data)

        self.neural_field_data["materials"] = materials
        print(f" Extracted {len(materials)} materials")
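The extracted MAT1 modulus feeds the same E-magnitude heuristic the check-units script prints (for steel: E ≈ 2.0e5 suggests MPa, ≈ 2.0e11 suggests Pa, ≈ 2.9e7 suggests psi). A sketch with illustrative name and rough magnitude bands:

```python
def infer_stress_unit_from_E(e_steel: float) -> str:
    """Heuristic: classify a steel-like Young's modulus by order of magnitude."""
    if 1e5 <= e_steel < 1e6:
        return "MPa"   # steel E ~ 2.0e5 MPa
    if 1e10 <= e_steel < 1e12:
        return "Pa"    # steel E ~ 2.0e11 Pa
    if 1e7 <= e_steel < 1e8:
        return "psi"   # steel E ~ 2.9e7 psi
    return "unknown"

print(infer_stress_unit_from_E(2.0e5))   # MPa
print(infer_stress_unit_from_E(2.0e11))  # Pa
print(infer_stress_unit_from_E(2.9e7))   # psi
```

This only works when the material is known to be steel-like; aluminum or composites would need their own bands.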
    def extract_boundary_conditions(self):
        """
        Extract all boundary conditions

        Includes:
        - SPC: Single point constraints (fixed DOFs)
        - MPC: Multi-point constraints (equations)
        - SUPORT: Free body supports
        """
        print(" Extracting boundary conditions...")

        bcs = {
            "spc": [],     # Single point constraints
            "mpc": [],     # Multi-point constraints
            "suport": []   # Free body supports
        }

        # SPC (fixed DOFs) - critical for neural network to understand support conditions
        spc_count = 0
        for spc_id, spc_list in self.bdf.spcs.items():
            for spc in spc_list:
                try:
                    # Handle different SPC types
                    if hasattr(spc, 'node_ids'):
                        nodes = spc.node_ids
                    elif hasattr(spc, 'node'):
                        nodes = [spc.node]
                    else:
                        continue

                    for node in nodes:
                        bcs["spc"].append({
                            "id": spc_id,
                            "node": int(node),
                            "dofs": str(spc.components) if hasattr(spc, 'components') else "123456",
                            "enforced_motion": float(spc.enforced) if hasattr(spc, 'enforced') and spc.enforced is not None else 0.0
                        })
                        spc_count += 1
                except Exception as e:
                    warnings.warn(f"Could not parse SPC {spc_id}: {e}")

        # MPC equations
        mpc_count = 0
        for mpc_id, mpc_list in self.bdf.mpcs.items():
            for mpc in mpc_list:
                try:
                    bcs["mpc"].append({
                        "id": mpc_id,
                        "nodes": list(mpc.node_ids) if hasattr(mpc, 'node_ids') else [],
                        "coefficients": list(mpc.coefficients) if hasattr(mpc, 'coefficients') else [],
                        "components": list(mpc.components) if hasattr(mpc, 'components') else []
                    })
                    mpc_count += 1
                except Exception as e:
                    warnings.warn(f"Could not parse MPC {mpc_id}: {e}")

        self.neural_field_data["boundary_conditions"] = bcs
        print(f" Extracted {spc_count} SPCs, {mpc_count} MPCs")
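The SPC records above keep DOFs as a Nastran component string such as "123456" (1-3 translations, 4-6 rotations). Expanding one into individual DOF numbers, sketched with an illustrative helper name:

```python
def parse_dofs(components: str) -> set:
    """Expand a Nastran DOF component string like '123456' into a set of ints."""
    return {int(c) for c in components if c in "123456"}

print(sorted(parse_dofs("123456")))  # [1, 2, 3, 4, 5, 6]
print(sorted(parse_dofs("13")))      # [1, 3]
```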
    def extract_loads(self):
        """
        Extract all loading conditions

        Includes:
        - Point forces and moments
        - Distributed pressures
        - Gravity loads
        - Thermal loads
        """
        print(" Extracting loads...")

        loads = {
            "point_forces": [],
            "pressure": [],
            "gravity": [],
            "thermal": []
        }

        force_count = 0
        pressure_count = 0
        gravity_count = 0

        # Point forces, moments, and pressures
        for load_id, load_list in self.bdf.loads.items():
            for load in load_list:
                try:
                    if load.type == 'FORCE':
                        loads["point_forces"].append({
                            "id": load_id,
                            "type": "force",
                            "node": int(load.node),
                            "magnitude": float(load.mag),
                            "direction": [float(load.xyz[0]), float(load.xyz[1]), float(load.xyz[2])],
                            "coord_system": int(load.cid) if hasattr(load, 'cid') else 0
                        })
                        force_count += 1

                    elif load.type == 'MOMENT':
                        loads["point_forces"].append({
                            "id": load_id,
                            "type": "moment",
                            "node": int(load.node),
                            "magnitude": float(load.mag),
                            "direction": [float(load.xyz[0]), float(load.xyz[1]), float(load.xyz[2])],
                            "coord_system": int(load.cid) if hasattr(load, 'cid') else 0
                        })
                        force_count += 1

                    elif load.type in ['PLOAD', 'PLOAD2', 'PLOAD4']:
                        pressure_data = {
                            "id": load_id,
                            "type": load.type
                        }

                        if hasattr(load, 'eids'):
                            pressure_data["elements"] = list(load.eids)
                        elif hasattr(load, 'eid'):
                            pressure_data["elements"] = [int(load.eid)]

                        if hasattr(load, 'pressures'):
                            pressure_data["pressure"] = [float(p) for p in load.pressures]
                        elif hasattr(load, 'pressure'):
                            pressure_data["pressure"] = float(load.pressure)

                        loads["pressure"].append(pressure_data)
                        pressure_count += 1

                    elif load.type == 'GRAV':
                        loads["gravity"].append({
                            "id": load_id,
                            "acceleration": float(load.scale),
                            "direction": [float(load.N[0]), float(load.N[1]), float(load.N[2])],
                            "coord_system": int(load.cid) if hasattr(load, 'cid') else 0
                        })
                        gravity_count += 1

                except Exception as e:
                    warnings.warn(f"Could not parse load {load_id} type {load.type}: {e}")

        # Temperature loads (if available)
        thermal_count = 0
        if hasattr(self.bdf, 'temps'):
            for temp_id, temp_list in self.bdf.temps.items():
                for temp in temp_list:
                    try:
                        loads["thermal"].append({
                            "id": temp_id,
                            "node": int(temp.node),
                            "temperature": float(temp.temperature)
                        })
                        thermal_count += 1
                    except Exception as e:
                        warnings.warn(f"Could not parse thermal load {temp_id}: {e}")

        self.neural_field_data["loads"] = loads
        print(f" Extracted {force_count} forces, {pressure_count} pressures, {gravity_count} gravity, {thermal_count} thermal")
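The FORCE/MOMENT records above store a scalar magnitude plus a direction vector, so the resolved load components are their elementwise product. A sketch (helper name is illustrative):

```python
def force_vector(magnitude, direction):
    """Resolve a FORCE-style load (scale factor x direction vector) into components."""
    return [magnitude * d for d in direction]

# 1000 N acting straight down (-Z):
print(force_vector(1000.0, [0.0, 0.0, -1.0]))  # [0.0, 0.0, -1000.0]
```

Note that Nastran does not require the direction vector to be unit length; the applied force is the product as written.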
    def extract_results(self):
        """
        Extract complete field results from OP2

        This is the CRITICAL function for neural field learning.
        We extract COMPLETE fields, not just maximum values:
        - Displacement at every node (6 DOF)
        - Stress at every element (full tensor)
        - Strain at every element (full tensor)
        - Reaction forces at constrained nodes

        This complete field data enables the neural network to learn
        the physics of how structures respond to loads.
        """
        print(" Extracting field results...")

        results = {}

        # Determine subcase ID (usually 1 for linear static)
        subcase_id = 1
        if hasattr(self.op2, 'isubcase_name_map'):
            available_subcases = list(self.op2.isubcase_name_map.keys())
            if available_subcases:
                subcase_id = available_subcases[0]

        print(f" Using subcase ID: {subcase_id}")

        # Displacement - complete field at all nodes
        if hasattr(self.op2, 'displacements') and subcase_id in self.op2.displacements:
            print(" Processing displacement field...")
            disp = self.op2.displacements[subcase_id]
            disp_data = disp.data[0, :, :]  # [itime=0, all_nodes, 6_dofs]

            # Extract node IDs
            node_ids = disp.node_gridtype[:, 0].tolist()

            # Calculate magnitudes for quick reference
            translation_mag = np.linalg.norm(disp_data[:, :3], axis=1)
            rotation_mag = np.linalg.norm(disp_data[:, 3:], axis=1)

            results["displacement"] = {
                "node_ids": node_ids,
                "data": disp_data.tolist(),  # Will be stored in HDF5
                "shape": list(disp_data.shape),
                "dtype": str(disp_data.dtype),
                "max_translation": float(np.max(translation_mag)),
                "max_rotation": float(np.max(rotation_mag)),
                "units": "mm and radians"
            }
            print(f" Displacement: {len(node_ids)} nodes, max={results['displacement']['max_translation']:.6f} mm")

        # Stress - handle different element types
        stress_results = {}

        # Solid element stress (CTETRA, CHEXA, etc.)
        stress_attrs = ['ctetra_stress', 'chexa_stress', 'cpenta_stress']
        for attr in stress_attrs:
            if hasattr(self.op2, attr):
                stress_obj = getattr(self.op2, attr)
                if subcase_id in stress_obj:
                    elem_type = attr.replace('_stress', '')
                    print(f" Processing {elem_type} stress...")
                    stress = stress_obj[subcase_id]
                    stress_data = stress.data[0, :, :]

                    # Extract element IDs
                    element_ids = stress.element_node[:, 0].tolist()

                    # Von Mises stress is usually the last column
                    von_mises = None
                    if stress_data.shape[1] >= 7:  # Has von Mises
                        von_mises = stress_data[:, -1]
                        max_vm = float(np.max(von_mises))
                        von_mises = von_mises.tolist()
                    else:
                        max_vm = None

                    stress_results[f"{elem_type}_stress"] = {
                        "element_ids": element_ids,
                        "data": stress_data.tolist(),  # Full stress tensor
                        "shape": list(stress_data.shape),
                        "dtype": str(stress_data.dtype),
                        "von_mises": von_mises,
                        "max_von_mises": max_vm,
                        "units": "MPa"
                    }
                    print(f" {elem_type}: {len(element_ids)} elements, max VM={max_vm:.2f} MPa" if max_vm else f" {elem_type}: {len(element_ids)} elements")

        # Shell element stress
        shell_stress_attrs = ['cquad4_stress', 'ctria3_stress', 'cquad8_stress', 'ctria6_stress']
        for attr in shell_stress_attrs:
            if hasattr(self.op2, attr):
                stress_obj = getattr(self.op2, attr)
                if subcase_id in stress_obj:
                    elem_type = attr.replace('_stress', '')
                    print(f" Processing {elem_type} stress...")
                    stress = stress_obj[subcase_id]
                    stress_data = stress.data[0, :, :]

                    element_ids = stress.element_node[:, 0].tolist()

                    stress_results[f"{elem_type}_stress"] = {
                        "element_ids": element_ids,
                        "data": stress_data.tolist(),
                        "shape": list(stress_data.shape),
                        "dtype": str(stress_data.dtype),
                        "units": "MPa"
                    }
                    print(f" {elem_type}: {len(element_ids)} elements")

        results["stress"] = stress_results

        # Strain - similar to stress
        strain_results = {}
        strain_attrs = ['ctetra_strain', 'chexa_strain', 'cpenta_strain',
                        'cquad4_strain', 'ctria3_strain']
        for attr in strain_attrs:
            if hasattr(self.op2, attr):
                strain_obj = getattr(self.op2, attr)
                if subcase_id in strain_obj:
                    elem_type = attr.replace('_strain', '')
                    strain = strain_obj[subcase_id]
                    strain_data = strain.data[0, :, :]
                    element_ids = strain.element_node[:, 0].tolist()

                    strain_results[f"{elem_type}_strain"] = {
|
||||
"element_ids": element_ids,
|
||||
"data": strain_data.tolist(),
|
||||
"shape": list(strain_data.shape),
|
||||
"dtype": str(strain_data.dtype),
|
||||
"units": "mm/mm"
|
||||
}
|
||||
|
||||
if strain_results:
|
||||
results["strain"] = strain_results
|
||||
print(f" Extracted strain for {len(strain_results)} element types")
|
||||
|
||||
# SPC Forces (reaction forces at constraints)
|
||||
if hasattr(self.op2, 'spc_forces') and subcase_id in self.op2.spc_forces:
|
||||
print(" Processing reaction forces...")
|
||||
spc = self.op2.spc_forces[subcase_id]
|
||||
spc_data = spc.data[0, :, :]
|
||||
node_ids = spc.node_gridtype[:, 0].tolist()
|
||||
|
||||
# Calculate total reaction force magnitude
|
||||
force_mag = np.linalg.norm(spc_data[:, :3], axis=1)
|
||||
moment_mag = np.linalg.norm(spc_data[:, 3:], axis=1)
|
||||
|
||||
results["reactions"] = {
|
||||
"node_ids": node_ids,
|
||||
"forces": spc_data.tolist(),
|
||||
"shape": list(spc_data.shape),
|
||||
"dtype": str(spc_data.dtype),
|
||||
"max_force": float(np.max(force_mag)),
|
||||
"max_moment": float(np.max(moment_mag)),
|
||||
"units": "N and N-mm"
|
||||
}
|
||||
print(f" Reactions: {len(node_ids)} nodes, max force={results['reactions']['max_force']:.2f} N")
|
||||
|
||||
self.neural_field_data["results"] = results
|
||||
|
||||
def save_data(self):
|
||||
"""
|
||||
Save parsed data to JSON and HDF5 files
|
||||
|
||||
Data structure:
|
||||
- neural_field_data.json: Metadata, structure, small arrays
|
||||
- neural_field_data.h5: Large arrays (node coordinates, field results)
|
||||
|
||||
HDF5 is used for efficient storage and loading of large numerical arrays.
|
||||
JSON provides human-readable metadata and structure.
|
||||
"""
|
||||
# Save JSON metadata
|
||||
json_file = self.case_dir / "neural_field_data.json"
|
||||
|
||||
# Create a copy for JSON (will remove large arrays)
|
||||
json_data = self._prepare_json_data()
|
||||
|
||||
with open(json_file, 'w') as f:
|
||||
json.dump(json_data, f, indent=2, default=str)
|
||||
|
||||
print(f" [OK] Saved metadata to: {json_file.name}")
|
||||
|
||||
# Save HDF5 for large arrays
|
||||
h5_file = self.case_dir / "neural_field_data.h5"
|
||||
with h5py.File(h5_file, 'w') as f:
|
||||
# Metadata attributes
|
||||
f.attrs['version'] = '1.0.0'
|
||||
f.attrs['created_at'] = self.neural_field_data['metadata']['created_at']
|
||||
f.attrs['case_name'] = self.neural_field_data['metadata']['case_name']
|
||||
|
||||
# Save mesh data
|
||||
mesh_grp = f.create_group('mesh')
|
||||
node_coords = np.array(self.neural_field_data["mesh"]["nodes"]["coordinates"])
|
||||
mesh_grp.create_dataset('node_coordinates',
|
||||
data=node_coords,
|
||||
compression='gzip',
|
||||
compression_opts=4)
|
||||
mesh_grp.create_dataset('node_ids',
|
||||
data=np.array(self.neural_field_data["mesh"]["nodes"]["ids"]))
|
||||
|
||||
# Save results
|
||||
if "results" in self.neural_field_data:
|
||||
results_grp = f.create_group('results')
|
||||
|
||||
# Displacement
|
||||
if "displacement" in self.neural_field_data["results"]:
|
||||
disp_data = np.array(self.neural_field_data["results"]["displacement"]["data"])
|
||||
results_grp.create_dataset('displacement',
|
||||
data=disp_data,
|
||||
compression='gzip',
|
||||
compression_opts=4)
|
||||
results_grp.create_dataset('displacement_node_ids',
|
||||
data=np.array(self.neural_field_data["results"]["displacement"]["node_ids"]))
|
||||
|
||||
# Stress fields
|
||||
if "stress" in self.neural_field_data["results"]:
|
||||
stress_grp = results_grp.create_group('stress')
|
||||
for stress_type, stress_data in self.neural_field_data["results"]["stress"].items():
|
||||
type_grp = stress_grp.create_group(stress_type)
|
||||
type_grp.create_dataset('data',
|
||||
data=np.array(stress_data["data"]),
|
||||
compression='gzip',
|
||||
compression_opts=4)
|
||||
type_grp.create_dataset('element_ids',
|
||||
data=np.array(stress_data["element_ids"]))
|
||||
|
||||
# Strain fields
|
||||
if "strain" in self.neural_field_data["results"]:
|
||||
strain_grp = results_grp.create_group('strain')
|
||||
for strain_type, strain_data in self.neural_field_data["results"]["strain"].items():
|
||||
type_grp = strain_grp.create_group(strain_type)
|
||||
type_grp.create_dataset('data',
|
||||
data=np.array(strain_data["data"]),
|
||||
compression='gzip',
|
||||
compression_opts=4)
|
||||
type_grp.create_dataset('element_ids',
|
||||
data=np.array(strain_data["element_ids"]))
|
||||
|
||||
# Reactions
|
||||
if "reactions" in self.neural_field_data["results"]:
|
||||
reactions_data = np.array(self.neural_field_data["results"]["reactions"]["forces"])
|
||||
results_grp.create_dataset('reactions',
|
||||
data=reactions_data,
|
||||
compression='gzip',
|
||||
compression_opts=4)
|
||||
results_grp.create_dataset('reaction_node_ids',
|
||||
data=np.array(self.neural_field_data["results"]["reactions"]["node_ids"]))
|
||||
|
||||
print(f" [OK] Saved field data to: {h5_file.name}")
|
||||
|
||||
# Calculate and display file sizes
|
||||
json_size = json_file.stat().st_size / 1024 # KB
|
||||
h5_size = h5_file.stat().st_size / 1024 # KB
|
||||
print(f"\n File sizes:")
|
||||
print(f" JSON: {json_size:.1f} KB")
|
||||
print(f" HDF5: {h5_size:.1f} KB")
|
||||
print(f" Total: {json_size + h5_size:.1f} KB")
|
||||
|
||||
def _prepare_json_data(self):
|
||||
"""
|
||||
Prepare data for JSON export by removing large arrays
|
||||
(they go to HDF5 instead)
|
||||
"""
|
||||
import copy
|
||||
json_data = copy.deepcopy(self.neural_field_data)
|
||||
|
||||
# Remove large arrays from nodes (keep metadata)
|
||||
if "mesh" in json_data and "nodes" in json_data["mesh"]:
|
||||
json_data["mesh"]["nodes"]["coordinates"] = f"<stored in HDF5: shape {json_data['mesh']['nodes']['shape']}>"
|
||||
|
||||
# Remove large result arrays
|
||||
if "results" in json_data:
|
||||
if "displacement" in json_data["results"]:
|
||||
shape = json_data["results"]["displacement"]["shape"]
|
||||
json_data["results"]["displacement"]["data"] = f"<stored in HDF5: shape {shape}>"
|
||||
|
||||
if "stress" in json_data["results"]:
|
||||
for stress_type in json_data["results"]["stress"]:
|
||||
shape = json_data["results"]["stress"][stress_type]["shape"]
|
||||
json_data["results"]["stress"][stress_type]["data"] = f"<stored in HDF5: shape {shape}>"
|
||||
|
||||
if "strain" in json_data["results"]:
|
||||
for strain_type in json_data["results"]["strain"]:
|
||||
shape = json_data["results"]["strain"][strain_type]["shape"]
|
||||
json_data["results"]["strain"][strain_type]["data"] = f"<stored in HDF5: shape {shape}>"
|
||||
|
||||
if "reactions" in json_data["results"]:
|
||||
shape = json_data["results"]["reactions"]["shape"]
|
||||
json_data["results"]["reactions"]["forces"] = f"<stored in HDF5: shape {shape}>"
|
||||
|
||||
return json_data
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# MAIN ENTRY POINT
|
||||
# ============================================================================
|
||||
|
||||
def main():
|
||||
"""
|
||||
Main function to run the parser from command line
|
||||
|
||||
Usage:
|
||||
python neural_field_parser.py <case_directory>
|
||||
"""
|
||||
import sys
|
||||
|
||||
if len(sys.argv) < 2:
|
||||
print("\nAtomizerField Neural Field Parser v1.0")
|
||||
print("="*60)
|
||||
print("\nUsage:")
|
||||
print(" python neural_field_parser.py <case_directory>")
|
||||
print("\nExample:")
|
||||
print(" python neural_field_parser.py training_case_001")
|
||||
print("\nCase directory should contain:")
|
||||
print(" input/model.bdf (or model.dat)")
|
||||
print(" output/model.op2")
|
||||
print("\n")
|
||||
sys.exit(1)
|
||||
|
||||
case_dir = sys.argv[1]
|
||||
|
||||
# Verify directory exists
|
||||
if not Path(case_dir).exists():
|
||||
print(f"ERROR: Directory not found: {case_dir}")
|
||||
sys.exit(1)
|
||||
|
||||
# Create parser
|
||||
try:
|
||||
parser = NastranToNeuralFieldParser(case_dir)
|
||||
except FileNotFoundError as e:
|
||||
print(f"\nERROR: {e}")
|
||||
print("\nPlease ensure your case directory contains:")
|
||||
print(" input/model.bdf (or model.dat)")
|
||||
print(" output/model.op2")
|
||||
sys.exit(1)
|
||||
|
||||
# Parse all data
|
||||
try:
|
||||
data = parser.parse_all()
|
||||
|
||||
# Print summary
|
||||
print("\n" + "="*60)
|
||||
print("PARSING SUMMARY")
|
||||
print("="*60)
|
||||
print(f"Case: {data['metadata']['case_name']}")
|
||||
print(f"Analysis: {data['metadata']['analysis_type']}")
|
||||
print(f"\nMesh:")
|
||||
print(f" Nodes: {data['mesh']['statistics']['n_nodes']:,}")
|
||||
print(f" Elements: {data['mesh']['statistics']['n_elements']:,}")
|
||||
for elem_type, count in data['mesh']['statistics']['element_types'].items():
|
||||
if count > 0:
|
||||
print(f" {elem_type}: {count:,}")
|
||||
print(f"\nMaterials: {len(data['materials'])}")
|
||||
print(f"Boundary Conditions: {len(data['boundary_conditions']['spc'])} SPCs")
|
||||
print(f"Loads: {len(data['loads']['point_forces'])} forces, {len(data['loads']['pressure'])} pressures")
|
||||
|
||||
if "displacement" in data['results']:
|
||||
print(f"\nResults:")
|
||||
print(f" Displacement: {len(data['results']['displacement']['node_ids'])} nodes")
|
||||
print(f" Max: {data['results']['displacement']['max_translation']:.6f} mm")
|
||||
if "stress" in data['results']:
|
||||
for stress_type in data['results']['stress']:
|
||||
if 'max_von_mises' in data['results']['stress'][stress_type]:
|
||||
max_vm = data['results']['stress'][stress_type]['max_von_mises']
|
||||
if max_vm is not None:
|
||||
print(f" {stress_type}: Max VM = {max_vm:.2f} MPa")
|
||||
|
||||
print("\n[OK] Data ready for neural network training!")
|
||||
print("="*60 + "\n")
|
||||
|
||||
except Exception as e:
|
||||
print(f"\n" + "="*60)
|
||||
print("ERROR DURING PARSING")
|
||||
print("="*60)
|
||||
print(f"{e}\n")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -1,25 +0,0 @@
"""
AtomizerField Neural Models Package

Phase 2: Neural Network Architecture for Field Prediction

This package contains neural network models for learning complete FEA field results
from mesh geometry, boundary conditions, and loads.

Models:
- AtomizerFieldModel: Full field predictor (displacement + stress fields)
- ParametricFieldPredictor: Design-conditioned scalar predictor (mass, freq, disp, stress)
"""

__version__ = "2.0.0"

# Import main model classes for convenience
from .field_predictor import AtomizerFieldModel, create_model
from .parametric_predictor import ParametricFieldPredictor, create_parametric_model

__all__ = [
    'AtomizerFieldModel',
    'create_model',
    'ParametricFieldPredictor',
    'create_parametric_model',
]
@@ -1,416 +0,0 @@
"""
data_loader.py
Data loading pipeline for neural field training

AtomizerField Data Loader v2.0
Converts parsed FEA data (HDF5 + JSON) into PyTorch Geometric graphs for training.

Key Transformation:
Parsed FEA Data → Graph Representation → Neural Network Input

Graph structure:
- Nodes: FEA mesh nodes (with coordinates, BCs, loads)
- Edges: Element connectivity (with material properties)
- Labels: Displacement and stress fields (ground truth from FEA)
"""

import json
import h5py
import numpy as np
from pathlib import Path
import torch
from torch.utils.data import Dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
import warnings


class FEAMeshDataset(Dataset):
    """
    PyTorch Dataset for FEA mesh data

    Loads parsed neural field data and converts to PyTorch Geometric graphs.
    Each graph represents one FEA analysis case.
    """

    def __init__(
        self,
        case_directories,
        normalize=True,
        include_stress=True,
        cache_in_memory=False
    ):
        """
        Initialize dataset

        Args:
            case_directories (list): List of paths to parsed cases
            normalize (bool): Normalize node coordinates and results
            include_stress (bool): Include stress in targets
            cache_in_memory (bool): Load all data into RAM (faster but memory-intensive)
        """
        self.case_dirs = [Path(d) for d in case_directories]
        self.normalize = normalize
        self.include_stress = include_stress
        self.cache_in_memory = cache_in_memory

        # Validate all cases exist
        self.valid_cases = []
        for case_dir in self.case_dirs:
            if self._validate_case(case_dir):
                self.valid_cases.append(case_dir)
            else:
                warnings.warn(f"Skipping invalid case: {case_dir}")

        print(f"Loaded {len(self.valid_cases)}/{len(self.case_dirs)} valid cases")

        # Cache data if requested
        self.cache = {}
        if cache_in_memory:
            print("Caching data in memory...")
            for idx in range(len(self.valid_cases)):
                self.cache[idx] = self._load_case(idx)
            print("Cache complete!")

        # Compute normalization statistics
        if normalize:
            self._compute_normalization_stats()

    def _validate_case(self, case_dir):
        """Check if case has required files"""
        json_file = case_dir / "neural_field_data.json"
        h5_file = case_dir / "neural_field_data.h5"
        return json_file.exists() and h5_file.exists()

    def __len__(self):
        return len(self.valid_cases)

    def __getitem__(self, idx):
        """
        Get graph data for one case

        Returns:
            torch_geometric.data.Data object with:
            - x: Node features [num_nodes, feature_dim]
            - edge_index: Element connectivity [2, num_edges]
            - edge_attr: Edge features (material props) [num_edges, edge_dim]
            - y_displacement: Target displacement [num_nodes, 6]
            - y_stress: Target stress [num_nodes, 6] (if include_stress)
            - bc_mask: Boundary condition mask [num_nodes, 6]
            - pos: Node positions [num_nodes, 3]
        """
        if self.cache_in_memory and idx in self.cache:
            return self.cache[idx]

        return self._load_case(idx)

    def _load_case(self, idx):
        """Load and process a single case"""
        case_dir = self.valid_cases[idx]

        # Load JSON metadata
        with open(case_dir / "neural_field_data.json", 'r') as f:
            metadata = json.load(f)

        # Load HDF5 field data
        with h5py.File(case_dir / "neural_field_data.h5", 'r') as f:
            # Node coordinates
            node_coords = torch.from_numpy(f['mesh/node_coordinates'][:]).float()

            # Displacement field (target)
            displacement = torch.from_numpy(f['results/displacement'][:]).float()

            # Stress field (target, if available)
            stress = None
            if self.include_stress and 'results/stress' in f:
                # Try to load first available stress type
                stress_group = f['results/stress']
                for stress_type in stress_group.keys():
                    stress_data = stress_group[stress_type]['data'][:]
                    stress = torch.from_numpy(stress_data).float()
                    break

        # Build graph structure
        graph_data = self._build_graph(metadata, node_coords, displacement, stress)

        # Normalize if requested
        if self.normalize:
            graph_data = self._normalize_graph(graph_data)

        return graph_data

    def _build_graph(self, metadata, node_coords, displacement, stress):
        """
        Convert FEA mesh to graph

        Args:
            metadata (dict): Parsed metadata
            node_coords (Tensor): Node positions [num_nodes, 3]
            displacement (Tensor): Displacement field [num_nodes, 6]
            stress (Tensor): Stress field [num_nodes, 6] or None

        Returns:
            torch_geometric.data.Data
        """
        num_nodes = node_coords.shape[0]

        # === NODE FEATURES ===
        # Start with coordinates
        node_features = [node_coords]  # [num_nodes, 3]

        # Add boundary conditions (which DOFs are constrained)
        bc_mask = torch.zeros(num_nodes, 6)  # [num_nodes, 6]
        if 'boundary_conditions' in metadata and 'spc' in metadata['boundary_conditions']:
            for spc in metadata['boundary_conditions']['spc']:
                node_id = spc['node']
                # Find node index (assuming node IDs are sequential starting from 1)
                # This is a simplification - production code should use ID mapping
                if node_id <= num_nodes:
                    dofs = spc['dofs']
                    # Parse DOF string (e.g., "123" means constrained in x,y,z)
                    for dof_char in str(dofs):
                        if dof_char.isdigit():
                            dof_idx = int(dof_char) - 1  # 0-indexed
                            if 0 <= dof_idx < 6:
                                bc_mask[node_id - 1, dof_idx] = 1.0

        node_features.append(bc_mask)  # [num_nodes, 6]

        # Add load information (force magnitude at each node)
        load_features = torch.zeros(num_nodes, 3)  # [num_nodes, 3] for x,y,z forces
        if 'loads' in metadata and 'point_forces' in metadata['loads']:
            for force in metadata['loads']['point_forces']:
                node_id = force['node']
                if node_id <= num_nodes:
                    magnitude = force['magnitude']
                    direction = force['direction']
                    force_vector = [magnitude * d for d in direction]
                    load_features[node_id - 1] = torch.tensor(force_vector)

        node_features.append(load_features)  # [num_nodes, 3]

        # Concatenate all node features
        x = torch.cat(node_features, dim=-1)  # [num_nodes, 3+6+3=12]

        # === EDGE FEATURES ===
        # Build edge index from element connectivity
        edge_index = []
        edge_attrs = []

        # Get material properties
        material_dict = {}
        if 'materials' in metadata:
            for mat in metadata['materials']:
                mat_id = mat['id']
                if mat['type'] == 'MAT1':
                    material_dict[mat_id] = [
                        mat.get('E', 0.0) / 1e6,  # Normalize E (MPa → GPa)
                        mat.get('nu', 0.0),
                        mat.get('rho', 0.0) * 1e6,  # Normalize rho
                        mat.get('G', 0.0) / 1e6 if mat.get('G') else 0.0,
                        mat.get('alpha', 0.0) * 1e6 if mat.get('alpha') else 0.0
                    ]

        # Process elements to create edges
        if 'mesh' in metadata and 'elements' in metadata['mesh']:
            for elem_type in ['solid', 'shell', 'beam']:
                if elem_type in metadata['mesh']['elements']:
                    for elem in metadata['mesh']['elements'][elem_type]:
                        elem_nodes = elem['nodes']
                        mat_id = elem.get('material_id', 1)

                        # Get material properties for this element
                        mat_props = material_dict.get(mat_id, [0.0] * 5)

                        # Create edges between all node pairs in element
                        # (fully connected within element)
                        for i in range(len(elem_nodes)):
                            for j in range(i + 1, len(elem_nodes)):
                                node_i = elem_nodes[i] - 1  # 0-indexed
                                node_j = elem_nodes[j] - 1

                                if node_i < num_nodes and node_j < num_nodes:
                                    # Add bidirectional edges
                                    edge_index.append([node_i, node_j])
                                    edge_index.append([node_j, node_i])

                                    # Both edges get same material properties
                                    edge_attrs.append(mat_props)
                                    edge_attrs.append(mat_props)

        # Convert to tensors
        if edge_index:
            edge_index = torch.tensor(edge_index, dtype=torch.long).t()  # [2, num_edges]
            edge_attr = torch.tensor(edge_attrs, dtype=torch.float)  # [num_edges, 5]
        else:
            # No edges (shouldn't happen, but handle gracefully)
            edge_index = torch.zeros((2, 0), dtype=torch.long)
            edge_attr = torch.zeros((0, 5), dtype=torch.float)

        # === CREATE DATA OBJECT ===
        data = Data(
            x=x,
            edge_index=edge_index,
            edge_attr=edge_attr,
            y_displacement=displacement,
            bc_mask=bc_mask,
            pos=node_coords  # Store original positions
        )

        # Add stress if available
        if stress is not None:
            data.y_stress = stress

        return data

    def _normalize_graph(self, data):
        """
        Normalize graph features

        - Coordinates: Center and scale to unit box
        - Displacement: Scale by mean displacement
        - Stress: Scale by mean stress
        """
        # Normalize coordinates (already done in node features)
        if hasattr(self, 'coord_mean') and hasattr(self, 'coord_std'):
            # Extract coords from features (first 3 dimensions)
            coords = data.x[:, :3]
            coords_norm = (coords - self.coord_mean) / (self.coord_std + 1e-8)
            data.x[:, :3] = coords_norm

        # Normalize displacement
        if hasattr(self, 'disp_mean') and hasattr(self, 'disp_std'):
            data.y_displacement = (data.y_displacement - self.disp_mean) / (self.disp_std + 1e-8)

        # Normalize stress
        if hasattr(data, 'y_stress') and hasattr(self, 'stress_mean') and hasattr(self, 'stress_std'):
            data.y_stress = (data.y_stress - self.stress_mean) / (self.stress_std + 1e-8)

        return data

    def _compute_normalization_stats(self):
        """
        Compute mean and std for normalization across entire dataset
        """
        print("Computing normalization statistics...")

        all_coords = []
        all_disp = []
        all_stress = []

        for idx in range(len(self.valid_cases)):
            case_dir = self.valid_cases[idx]

            with h5py.File(case_dir / "neural_field_data.h5", 'r') as f:
                coords = f['mesh/node_coordinates'][:]
                disp = f['results/displacement'][:]

                all_coords.append(coords)
                all_disp.append(disp)

                # Load stress if available
                if self.include_stress and 'results/stress' in f:
                    stress_group = f['results/stress']
                    for stress_type in stress_group.keys():
                        stress_data = stress_group[stress_type]['data'][:]
                        all_stress.append(stress_data)
                        break

        # Concatenate all data
        all_coords = np.concatenate(all_coords, axis=0)
        all_disp = np.concatenate(all_disp, axis=0)

        # Compute statistics
        self.coord_mean = torch.from_numpy(all_coords.mean(axis=0)).float()
        self.coord_std = torch.from_numpy(all_coords.std(axis=0)).float()

        self.disp_mean = torch.from_numpy(all_disp.mean(axis=0)).float()
        self.disp_std = torch.from_numpy(all_disp.std(axis=0)).float()

        if all_stress:
            all_stress = np.concatenate(all_stress, axis=0)
            self.stress_mean = torch.from_numpy(all_stress.mean(axis=0)).float()
            self.stress_std = torch.from_numpy(all_stress.std(axis=0)).float()

        print("Normalization statistics computed!")


def create_dataloaders(
    train_cases,
    val_cases,
    batch_size=4,
    num_workers=0,
    normalize=True,
    include_stress=True
):
    """
    Create training and validation dataloaders

    Args:
        train_cases (list): List of training case directories
        val_cases (list): List of validation case directories
        batch_size (int): Batch size
        num_workers (int): Number of data loading workers
        normalize (bool): Normalize features
        include_stress (bool): Include stress targets

    Returns:
        train_loader, val_loader
    """
    print("\nCreating datasets...")

    # Create datasets
    train_dataset = FEAMeshDataset(
        train_cases,
        normalize=normalize,
        include_stress=include_stress,
        cache_in_memory=False  # Set to True for small datasets
    )

    val_dataset = FEAMeshDataset(
        val_cases,
        normalize=normalize,
        include_stress=include_stress,
        cache_in_memory=False
    )

    # Share normalization stats with validation set
    if normalize and hasattr(train_dataset, 'coord_mean'):
        val_dataset.coord_mean = train_dataset.coord_mean
        val_dataset.coord_std = train_dataset.coord_std
        val_dataset.disp_mean = train_dataset.disp_mean
        val_dataset.disp_std = train_dataset.disp_std
        if hasattr(train_dataset, 'stress_mean'):
            val_dataset.stress_mean = train_dataset.stress_mean
            val_dataset.stress_std = train_dataset.stress_std

    # Create dataloaders
    train_loader = DataLoader(
        train_dataset,
        batch_size=batch_size,
        shuffle=True,
        num_workers=num_workers
    )

    val_loader = DataLoader(
        val_dataset,
        batch_size=batch_size,
        shuffle=False,
        num_workers=num_workers
    )

    print(f"\nDataloaders created:")
    print(f"  Training: {len(train_dataset)} cases")
    print(f"  Validation: {len(val_dataset)} cases")

    return train_loader, val_loader


if __name__ == "__main__":
    # Test data loader
    print("Testing FEA Mesh Data Loader...\n")

    # This is a placeholder test - you would use actual parsed case directories
    print("Note: This test requires actual parsed FEA data.")
    print("Run the parser first on your NX Nastran files.")
    print("\nData loader implementation complete!")
@@ -1,490 +0,0 @@
"""
field_predictor.py
Graph Neural Network for predicting complete FEA field results

AtomizerField Field Predictor v2.0
Uses Graph Neural Networks to learn the physics of structural response.

Key Innovation:
Instead of: parameters → FEA → max_stress (scalar)
We learn: parameters → Neural Network → complete stress field (N values)

This enables 1000x faster optimization with physics understanding.
"""

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import MessagePassing, global_mean_pool
from torch_geometric.data import Data
import numpy as np


class MeshGraphConv(MessagePassing):
    """
    Custom Graph Convolution for FEA meshes

    This layer propagates information along mesh edges (element connectivity)
    to learn how forces flow through the structure.

    Key insight: Stress and displacement fields follow mesh topology.
    Adjacent elements influence each other through equilibrium.
    """

    def __init__(self, in_channels, out_channels, edge_dim=None):
        """
        Args:
            in_channels (int): Input node feature dimension
            out_channels (int): Output node feature dimension
            edge_dim (int): Edge feature dimension (optional)
        """
        super().__init__(aggr='mean')  # Mean aggregation of neighbor messages

        self.in_channels = in_channels
        self.out_channels = out_channels

        # Message function: how to combine node and edge features
        if edge_dim is not None:
            self.message_mlp = nn.Sequential(
                nn.Linear(2 * in_channels + edge_dim, out_channels),
                nn.LayerNorm(out_channels),
                nn.ReLU(),
                nn.Linear(out_channels, out_channels)
            )
        else:
            self.message_mlp = nn.Sequential(
                nn.Linear(2 * in_channels, out_channels),
                nn.LayerNorm(out_channels),
                nn.ReLU(),
                nn.Linear(out_channels, out_channels)
            )

        # Update function: how to update node features
        self.update_mlp = nn.Sequential(
            nn.Linear(in_channels + out_channels, out_channels),
            nn.LayerNorm(out_channels),
            nn.ReLU(),
            nn.Linear(out_channels, out_channels)
        )

        self.edge_dim = edge_dim

    def forward(self, x, edge_index, edge_attr=None):
        """
        Propagate messages through the mesh graph

        Args:
            x: Node features [num_nodes, in_channels]
            edge_index: Edge connectivity [2, num_edges]
            edge_attr: Edge features [num_edges, edge_dim] (optional)

        Returns:
            Updated node features [num_nodes, out_channels]
        """
        return self.propagate(edge_index, x=x, edge_attr=edge_attr)

    def message(self, x_i, x_j, edge_attr=None):
        """
        Construct messages from neighbors

        Args:
            x_i: Target node features
            x_j: Source node features
            edge_attr: Edge features
        """
        if edge_attr is not None:
            # Combine source node, target node, and edge features
            msg_input = torch.cat([x_i, x_j, edge_attr], dim=-1)
        else:
            msg_input = torch.cat([x_i, x_j], dim=-1)

        return self.message_mlp(msg_input)

    def update(self, aggr_out, x):
        """
        Update node features with aggregated messages

        Args:
            aggr_out: Aggregated messages from neighbors
            x: Original node features
        """
        # Combine original features with aggregated messages
        update_input = torch.cat([x, aggr_out], dim=-1)
        return self.update_mlp(update_input)


class FieldPredictorGNN(nn.Module):
    """
    Graph Neural Network for predicting complete FEA fields

    Architecture:
    1. Node Encoder: Encode node positions, BCs, loads
    2. Edge Encoder: Encode element connectivity, material properties
    3. Message Passing: Propagate information through mesh (multiple layers)
    4. Field Decoder: Predict displacement/stress at each node/element

    This architecture respects physics:
    - Uses mesh topology (forces flow through connected elements)
    - Incorporates boundary conditions (fixed/loaded nodes)
    - Learns material behavior (E, nu → stress-strain relationship)
    """

    def __init__(
        self,
        node_feature_dim=3,   # Node coordinates (x, y, z)
        edge_feature_dim=5,   # Material properties (E, nu, rho, etc.)
        hidden_dim=128,
        num_layers=6,
        output_dim=6,         # 6 DOF displacement (3 translation + 3 rotation)
        dropout=0.1
    ):
        """
        Initialize field predictor

        Args:
            node_feature_dim (int): Dimension of node features (position + BCs + loads)
            edge_feature_dim (int): Dimension of edge features (material properties)
            hidden_dim (int): Hidden layer dimension
            num_layers (int): Number of message passing layers
            output_dim (int): Output dimension per node (6 for displacement)
            dropout (float): Dropout rate
        """
        super().__init__()

        self.node_feature_dim = node_feature_dim
        self.edge_feature_dim = edge_feature_dim
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        self.output_dim = output_dim

        # Node encoder: embed node coordinates + BCs + loads
        self.node_encoder = nn.Sequential(
            nn.Linear(node_feature_dim, hidden_dim),
            nn.LayerNorm(hidden_dim),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, hidden_dim)
        )

        # Edge encoder: embed material properties
        self.edge_encoder = nn.Sequential(
            nn.Linear(edge_feature_dim, hidden_dim),
            nn.LayerNorm(hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim // 2)
        )

        # Message passing layers (the physics learning happens here)
        self.conv_layers = nn.ModuleList([
            MeshGraphConv(
                in_channels=hidden_dim,
                out_channels=hidden_dim,
                edge_dim=hidden_dim // 2
            )
            for _ in range(num_layers)
        ])

        self.layer_norms = nn.ModuleList([
|
||||
nn.LayerNorm(hidden_dim)
|
||||
for _ in range(num_layers)
|
||||
])
|
||||
|
||||
self.dropouts = nn.ModuleList([
|
||||
nn.Dropout(dropout)
|
||||
for _ in range(num_layers)
|
||||
])
|
||||
|
||||
# Field decoder: predict displacement at each node
|
||||
self.field_decoder = nn.Sequential(
|
||||
nn.Linear(hidden_dim, hidden_dim),
|
||||
nn.LayerNorm(hidden_dim),
|
||||
nn.ReLU(),
|
||||
nn.Dropout(dropout),
|
||||
nn.Linear(hidden_dim, hidden_dim // 2),
|
||||
nn.ReLU(),
|
||||
nn.Linear(hidden_dim // 2, output_dim)
|
||||
)
|
||||
|
||||
# Physics-informed constraint layer (optional, ensures equilibrium)
|
||||
self.physics_scale = nn.Parameter(torch.ones(1))
|
||||
|
||||
def forward(self, data):
|
||||
"""
|
||||
Forward pass: mesh → displacement field
|
||||
|
||||
Args:
|
||||
data (torch_geometric.data.Data): Batch of mesh graphs containing:
|
||||
- x: Node features [num_nodes, node_feature_dim]
|
||||
- edge_index: Connectivity [2, num_edges]
|
||||
- edge_attr: Edge features [num_edges, edge_feature_dim]
|
||||
- batch: Batch assignment [num_nodes]
|
||||
|
||||
Returns:
|
||||
displacement_field: Predicted displacement [num_nodes, output_dim]
|
||||
"""
|
||||
x, edge_index, edge_attr = data.x, data.edge_index, data.edge_attr
|
||||
|
||||
# Encode nodes (positions + BCs + loads)
|
||||
x = self.node_encoder(x) # [num_nodes, hidden_dim]
|
||||
|
||||
# Encode edges (material properties)
|
||||
if edge_attr is not None:
|
||||
edge_features = self.edge_encoder(edge_attr) # [num_edges, hidden_dim//2]
|
||||
else:
|
||||
edge_features = None
|
||||
|
||||
# Message passing: learn how forces propagate through mesh
|
||||
for i, (conv, norm, dropout) in enumerate(zip(
|
||||
self.conv_layers, self.layer_norms, self.dropouts
|
||||
)):
|
||||
# Graph convolution
|
||||
x_new = conv(x, edge_index, edge_features)
|
||||
|
||||
# Residual connection (helps gradients flow)
|
||||
x = x + dropout(x_new)
|
||||
|
||||
# Layer normalization
|
||||
x = norm(x)
|
||||
|
||||
# Decode to displacement field
|
||||
displacement = self.field_decoder(x) # [num_nodes, output_dim]
|
||||
|
||||
# Apply physics-informed scaling
|
||||
displacement = displacement * self.physics_scale
|
||||
|
||||
return displacement
|
||||
|
||||
def predict_stress_from_displacement(self, displacement, data, material_props):
|
||||
"""
|
||||
Convert predicted displacement to stress using constitutive law
|
||||
|
||||
This implements: σ = C : ε = C : (∇u)
|
||||
Where C is the material stiffness matrix
|
||||
|
||||
Args:
|
||||
displacement: Predicted displacement [num_nodes, 6]
|
||||
data: Mesh graph data
|
||||
material_props: Material properties (E, nu)
|
||||
|
||||
Returns:
|
||||
stress_field: Predicted stress [num_elements, n_components]
|
||||
"""
|
||||
# This would compute strain from displacement gradients
|
||||
# then apply material constitutive law
|
||||
# For now, we'll predict displacement and train a separate stress predictor
|
||||
raise NotImplementedError("Stress prediction implemented in StressPredictor")
|
||||
|
||||
|
||||
class StressPredictor(nn.Module):
|
||||
"""
|
||||
Predicts stress field from displacement field
|
||||
|
||||
This can be:
|
||||
1. Physics-based: Compute strain from displacement, apply constitutive law
|
||||
2. Learned: Train neural network to predict stress from displacement
|
||||
|
||||
We use learned approach for flexibility with nonlinear materials.
|
||||
"""
|
||||
|
||||
def __init__(self, displacement_dim=6, hidden_dim=128, stress_components=6):
|
||||
"""
|
||||
Args:
|
||||
displacement_dim (int): Displacement DOFs per node
|
||||
hidden_dim (int): Hidden layer size
|
||||
stress_components (int): Stress tensor components (6 for 3D)
|
||||
"""
|
||||
super().__init__()
|
||||
|
||||
# Stress predictor network
|
||||
self.stress_net = nn.Sequential(
|
||||
nn.Linear(displacement_dim, hidden_dim),
|
||||
nn.LayerNorm(hidden_dim),
|
||||
nn.ReLU(),
|
||||
nn.Linear(hidden_dim, hidden_dim),
|
||||
nn.LayerNorm(hidden_dim),
|
||||
nn.ReLU(),
|
||||
nn.Linear(hidden_dim, stress_components)
|
||||
)
|
||||
|
||||
def forward(self, displacement):
|
||||
"""
|
||||
Predict stress from displacement
|
||||
|
||||
Args:
|
||||
displacement: [num_nodes, displacement_dim]
|
||||
|
||||
Returns:
|
||||
stress: [num_nodes, stress_components]
|
||||
"""
|
||||
return self.stress_net(displacement)
|
||||
|
||||
|
||||
class AtomizerFieldModel(nn.Module):
|
||||
"""
|
||||
Complete AtomizerField model: predicts both displacement and stress fields
|
||||
|
||||
This is the main model you'll use for training and inference.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
node_feature_dim=10, # 3 (xyz) + 6 (BC DOFs) + 1 (load magnitude)
|
||||
edge_feature_dim=5, # E, nu, rho, G, alpha
|
||||
hidden_dim=128,
|
||||
num_layers=6,
|
||||
dropout=0.1
|
||||
):
|
||||
"""
|
||||
Initialize complete field prediction model
|
||||
|
||||
Args:
|
||||
node_feature_dim (int): Node features (coords + BCs + loads)
|
||||
edge_feature_dim (int): Edge features (material properties)
|
||||
hidden_dim (int): Hidden dimension
|
||||
num_layers (int): Message passing layers
|
||||
dropout (float): Dropout rate
|
||||
"""
|
||||
super().__init__()
|
||||
|
||||
# Displacement predictor (main GNN)
|
||||
self.displacement_predictor = FieldPredictorGNN(
|
||||
node_feature_dim=node_feature_dim,
|
||||
edge_feature_dim=edge_feature_dim,
|
||||
hidden_dim=hidden_dim,
|
||||
num_layers=num_layers,
|
||||
output_dim=6, # 6 DOF displacement
|
||||
dropout=dropout
|
||||
)
|
||||
|
||||
# Stress predictor (from displacement)
|
||||
self.stress_predictor = StressPredictor(
|
||||
displacement_dim=6,
|
||||
hidden_dim=hidden_dim,
|
||||
stress_components=6 # σxx, σyy, σzz, τxy, τyz, τxz
|
||||
)
|
||||
|
||||
def forward(self, data, return_stress=True):
|
||||
"""
|
||||
Predict displacement and stress fields
|
||||
|
||||
Args:
|
||||
data: Mesh graph data
|
||||
return_stress (bool): Whether to predict stress
|
||||
|
||||
Returns:
|
||||
dict with:
|
||||
- displacement: [num_nodes, 6]
|
||||
- stress: [num_nodes, 6] (if return_stress=True)
|
||||
- von_mises: [num_nodes] (if return_stress=True)
|
||||
"""
|
||||
# Predict displacement
|
||||
displacement = self.displacement_predictor(data)
|
||||
|
||||
results = {'displacement': displacement}
|
||||
|
||||
if return_stress:
|
||||
# Predict stress from displacement
|
||||
stress = self.stress_predictor(displacement)
|
||||
|
||||
# Calculate von Mises stress
|
||||
# σ_vm = sqrt(0.5 * ((σxx-σyy)² + (σyy-σzz)² + (σzz-σxx)² + 6(τxy² + τyz² + τxz²)))
|
||||
sxx, syy, szz, txy, tyz, txz = stress[:, 0], stress[:, 1], stress[:, 2], \
|
||||
stress[:, 3], stress[:, 4], stress[:, 5]
|
||||
|
||||
von_mises = torch.sqrt(
|
||||
0.5 * (
|
||||
(sxx - syy)**2 + (syy - szz)**2 + (szz - sxx)**2 +
|
||||
6 * (txy**2 + tyz**2 + txz**2)
|
||||
)
|
||||
)
|
||||
|
||||
results['stress'] = stress
|
||||
results['von_mises'] = von_mises
|
||||
|
||||
return results
|
||||
|
||||
def get_max_values(self, results):
|
||||
"""
|
||||
Extract maximum values (for compatibility with scalar optimization)
|
||||
|
||||
Args:
|
||||
results: Output from forward()
|
||||
|
||||
Returns:
|
||||
dict with max_displacement, max_stress
|
||||
"""
|
||||
max_displacement = torch.max(torch.norm(results['displacement'][:, :3], dim=1))
|
||||
max_stress = torch.max(results['von_mises']) if 'von_mises' in results else None
|
||||
|
||||
return {
|
||||
'max_displacement': max_displacement,
|
||||
'max_stress': max_stress
|
||||
}
|
||||
|
||||
|
||||
def create_model(config=None):
|
||||
"""
|
||||
Factory function to create AtomizerField model
|
||||
|
||||
Args:
|
||||
config (dict): Model configuration
|
||||
|
||||
Returns:
|
||||
AtomizerFieldModel instance
|
||||
"""
|
||||
if config is None:
|
||||
config = {
|
||||
'node_feature_dim': 10,
|
||||
'edge_feature_dim': 5,
|
||||
'hidden_dim': 128,
|
||||
'num_layers': 6,
|
||||
'dropout': 0.1
|
||||
}
|
||||
|
||||
model = AtomizerFieldModel(**config)
|
||||
|
||||
# Initialize weights
|
||||
def init_weights(m):
|
||||
if isinstance(m, nn.Linear):
|
||||
nn.init.kaiming_normal_(m.weight, mode='fan_in', nonlinearity='relu')
|
||||
if m.bias is not None:
|
||||
nn.init.constant_(m.bias, 0)
|
||||
|
||||
model.apply(init_weights)
|
||||
|
||||
return model
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
# Test model creation
|
||||
print("Testing AtomizerField Model Creation...")
|
||||
|
||||
model = create_model()
|
||||
print(f"Model created: {sum(p.numel() for p in model.parameters()):,} parameters")
|
||||
|
||||
# Create dummy data
|
||||
num_nodes = 100
|
||||
num_edges = 300
|
||||
|
||||
x = torch.randn(num_nodes, 10) # Node features
|
||||
edge_index = torch.randint(0, num_nodes, (2, num_edges)) # Edge connectivity
|
||||
edge_attr = torch.randn(num_edges, 5) # Edge features
|
||||
batch = torch.zeros(num_nodes, dtype=torch.long) # Batch assignment
|
||||
|
||||
data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, batch=batch)
|
||||
|
||||
# Forward pass
|
||||
with torch.no_grad():
|
||||
results = model(data)
|
||||
|
||||
print(f"\nTest forward pass:")
|
||||
print(f" Displacement shape: {results['displacement'].shape}")
|
||||
print(f" Stress shape: {results['stress'].shape}")
|
||||
print(f" Von Mises shape: {results['von_mises'].shape}")
|
||||
|
||||
max_vals = model.get_max_values(results)
|
||||
print(f"\nMax values:")
|
||||
print(f" Max displacement: {max_vals['max_displacement']:.6f}")
|
||||
print(f" Max stress: {max_vals['max_stress']:.2f}")
|
||||
|
||||
print("\nModel test passed!")
|
||||
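The von Mises expression used in `AtomizerFieldModel.forward` can be sanity-checked against textbook stress states: under uniaxial tension it must equal the applied stress, and under pure shear it must equal sqrt(3) times the shear stress. A minimal dependency-free sketch (the helper name `von_mises` is illustrative, not part of the deleted module):

```python
import math

def von_mises(sxx, syy, szz, txy, tyz, txz):
    """Von Mises equivalent stress from the six stress tensor components."""
    return math.sqrt(0.5 * (
        (sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2
        + 6.0 * (txy ** 2 + tyz ** 2 + txz ** 2)
    ))

# Uniaxial tension: sigma_vm equals the applied stress
print(von_mises(250.0, 0, 0, 0, 0, 0))  # 250.0
# Pure shear: sigma_vm = sqrt(3) * tau
print(von_mises(0, 0, 0, 100.0, 0, 0))  # ~173.2
```

The tensorized version in the model is the same formula applied column-wise to the `[num_nodes, 6]` stress prediction.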
@@ -1,470 +0,0 @@
"""
parametric_predictor.py
Design-Conditioned Graph Neural Network for direct objective prediction

AtomizerField Parametric Predictor v2.0

Key Innovation:
    Instead of: parameters -> FEA -> objectives (expensive)
    We learn:   parameters -> Neural Network -> objectives (milliseconds)

This model directly predicts all 4 optimization objectives:
    - mass (g)
    - frequency (Hz)
    - max_displacement (mm)
    - max_stress (MPa)

Architecture:
    1. Design Encoder: MLP(n_design_vars -> 64 -> 128)
    2. GNN Backbone: 4 layers of design-conditioned message passing
    3. Global Pooling: mean + max pooling
    4. Scalar Heads: MLP(384 -> 128 -> 64 -> 4)

This enables ~2000x faster optimization at ~2-4% error.
"""

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import MessagePassing, global_mean_pool, global_max_pool
from torch_geometric.data import Data
import numpy as np
from typing import Dict, Any, Optional


class DesignConditionedConv(MessagePassing):
    """
    Graph convolution layer conditioned on design parameters.

    The design parameters modulate how information flows through the mesh,
    allowing the network to learn design-dependent physics.
    """

    def __init__(self, in_channels: int, out_channels: int, design_dim: int, edge_dim: int = None):
        """
        Args:
            in_channels: Input node feature dimension
            out_channels: Output node feature dimension
            design_dim: Design parameter dimension (after encoding)
            edge_dim: Edge feature dimension (optional)
        """
        super().__init__(aggr='mean')

        self.in_channels = in_channels
        self.out_channels = out_channels
        self.design_dim = design_dim

        # Design-conditioned message function
        message_input_dim = 2 * in_channels + design_dim
        if edge_dim is not None:
            message_input_dim += edge_dim

        self.message_mlp = nn.Sequential(
            nn.Linear(message_input_dim, out_channels),
            nn.LayerNorm(out_channels),
            nn.ReLU(),
            nn.Linear(out_channels, out_channels)
        )

        # Update function
        self.update_mlp = nn.Sequential(
            nn.Linear(in_channels + out_channels, out_channels),
            nn.LayerNorm(out_channels),
            nn.ReLU(),
            nn.Linear(out_channels, out_channels)
        )

        self.edge_dim = edge_dim

    def forward(self, x, edge_index, design_features, edge_attr=None):
        """
        Forward pass with design conditioning.

        Args:
            x: Node features [num_nodes, in_channels]
            edge_index: Edge connectivity [2, num_edges]
            design_features: Design parameters [hidden] or [num_nodes, hidden]
            edge_attr: Edge features [num_edges, edge_dim] (optional)

        Returns:
            Updated node features [num_nodes, out_channels]
        """
        num_nodes = x.size(0)

        # Handle different input shapes for design_features
        if design_features.dim() == 1:
            # Single design vector [hidden] -> broadcast to all nodes
            design_broadcast = design_features.unsqueeze(0).expand(num_nodes, -1)
        elif design_features.dim() == 2 and design_features.size(0) == num_nodes:
            # Already per-node [num_nodes, hidden]
            design_broadcast = design_features
        elif design_features.dim() == 2 and design_features.size(0) == 1:
            # Single design [1, hidden] -> broadcast
            design_broadcast = design_features.expand(num_nodes, -1)
        else:
            # Fallback: average across the batch dimension, then broadcast
            design_broadcast = design_features.mean(dim=0).unsqueeze(0).expand(num_nodes, -1)

        return self.propagate(
            edge_index,
            x=x,
            design=design_broadcast,
            edge_attr=edge_attr
        )

    def message(self, x_i, x_j, design_i, edge_attr=None):
        """
        Construct design-conditioned messages.

        Args:
            x_i: Target node features
            x_j: Source node features
            design_i: Design parameters at target nodes
            edge_attr: Edge features
        """
        if edge_attr is not None:
            msg_input = torch.cat([x_i, x_j, design_i, edge_attr], dim=-1)
        else:
            msg_input = torch.cat([x_i, x_j, design_i], dim=-1)

        return self.message_mlp(msg_input)

    def update(self, aggr_out, x):
        """Update node features with aggregated messages."""
        update_input = torch.cat([x, aggr_out], dim=-1)
        return self.update_mlp(update_input)


class ParametricFieldPredictor(nn.Module):
    """
    Design-conditioned GNN that predicts all optimization objectives from design parameters.

    This is the "parametric" model that directly predicts scalar objectives,
    making it much faster than field prediction followed by post-processing.

    Architecture:
        - Design Encoder: MLP that embeds design parameters
        - Node Encoder: MLP that embeds mesh node features
        - Edge Encoder: MLP that embeds material properties
        - GNN Backbone: design-conditioned message passing layers
        - Global Pooling: mean + max pooling for a graph-level representation
        - Scalar Heads: MLPs that predict each objective

    Outputs:
        - mass: Predicted mass (grams)
        - frequency: Predicted fundamental frequency (Hz)
        - max_displacement: Maximum displacement magnitude (mm)
        - max_stress: Maximum von Mises stress (MPa)
    """

    def __init__(self, config: Dict[str, Any] = None):
        """
        Initialize the parametric predictor.

        Args:
            config: Model configuration dict with keys:
                - input_channels: Node feature dimension (default: 12)
                - edge_dim: Edge feature dimension (default: 5)
                - hidden_channels: Hidden layer size (default: 128)
                - num_layers: Number of GNN layers (default: 4)
                - design_dim: Design parameter dimension (default: 4)
                - dropout: Dropout rate (default: 0.1)
        """
        super().__init__()

        # Default configuration
        if config is None:
            config = {}

        self.input_channels = config.get('input_channels', 12)
        self.edge_dim = config.get('edge_dim', 5)
        self.hidden_channels = config.get('hidden_channels', 128)
        self.num_layers = config.get('num_layers', 4)
        self.design_dim = config.get('design_dim', 4)
        self.dropout_rate = config.get('dropout', 0.1)

        # Store config for checkpoint saving
        self.config = {
            'input_channels': self.input_channels,
            'edge_dim': self.edge_dim,
            'hidden_channels': self.hidden_channels,
            'num_layers': self.num_layers,
            'design_dim': self.design_dim,
            'dropout': self.dropout_rate
        }

        # === DESIGN ENCODER ===
        # Embeds design parameters into a higher-dimensional space
        self.design_encoder = nn.Sequential(
            nn.Linear(self.design_dim, 64),
            nn.LayerNorm(64),
            nn.ReLU(),
            nn.Dropout(self.dropout_rate),
            nn.Linear(64, self.hidden_channels),
            nn.LayerNorm(self.hidden_channels),
            nn.ReLU()
        )

        # === NODE ENCODER ===
        # Embeds node features (coordinates, BCs, loads)
        self.node_encoder = nn.Sequential(
            nn.Linear(self.input_channels, self.hidden_channels),
            nn.LayerNorm(self.hidden_channels),
            nn.ReLU(),
            nn.Dropout(self.dropout_rate),
            nn.Linear(self.hidden_channels, self.hidden_channels)
        )

        # === EDGE ENCODER ===
        # Embeds edge features (material properties)
        self.edge_encoder = nn.Sequential(
            nn.Linear(self.edge_dim, self.hidden_channels),
            nn.LayerNorm(self.hidden_channels),
            nn.ReLU(),
            nn.Linear(self.hidden_channels, self.hidden_channels // 2)
        )

        # === GNN BACKBONE ===
        # Design-conditioned message passing layers
        self.conv_layers = nn.ModuleList([
            DesignConditionedConv(
                in_channels=self.hidden_channels,
                out_channels=self.hidden_channels,
                design_dim=self.hidden_channels,
                edge_dim=self.hidden_channels // 2
            )
            for _ in range(self.num_layers)
        ])

        self.layer_norms = nn.ModuleList([
            nn.LayerNorm(self.hidden_channels)
            for _ in range(self.num_layers)
        ])

        self.dropouts = nn.ModuleList([
            nn.Dropout(self.dropout_rate)
            for _ in range(self.num_layers)
        ])

        # === GLOBAL POOLING ===
        # Mean + max pooling gives 2 * hidden_channels features;
        # concatenating the design features gives 3 * hidden_channels total
        pooled_dim = 3 * self.hidden_channels

        # === SCALAR PREDICTION HEADS ===
        # Each head predicts one objective; all four share the same architecture
        def make_head():
            return nn.Sequential(
                nn.Linear(pooled_dim, self.hidden_channels),
                nn.LayerNorm(self.hidden_channels),
                nn.ReLU(),
                nn.Dropout(self.dropout_rate),
                nn.Linear(self.hidden_channels, 64),
                nn.ReLU(),
                nn.Linear(64, 1)
            )

        self.mass_head = make_head()
        self.frequency_head = make_head()
        self.displacement_head = make_head()
        self.stress_head = make_head()

        # === OPTIONAL FIELD DECODER ===
        # For returning the displacement field if requested
        self.field_decoder = nn.Sequential(
            nn.Linear(self.hidden_channels, self.hidden_channels),
            nn.LayerNorm(self.hidden_channels),
            nn.ReLU(),
            nn.Dropout(self.dropout_rate),
            nn.Linear(self.hidden_channels, 6)  # 6 DOF displacement
        )

    def forward(
        self,
        data: Data,
        design_params: torch.Tensor,
        return_fields: bool = False
    ) -> Dict[str, torch.Tensor]:
        """
        Forward pass: predict objectives from mesh + design parameters.

        Args:
            data: PyTorch Geometric Data object with:
                - x: Node features [num_nodes, input_channels]
                - edge_index: Edge connectivity [2, num_edges]
                - edge_attr: Edge features [num_edges, edge_dim]
                - batch: Batch assignment [num_nodes] (optional)
            design_params: Normalized design parameters [design_dim] or [batch, design_dim]
            return_fields: If True, also return the displacement field prediction

        Returns:
            Dict with:
                - mass: Predicted mass [batch_size]
                - frequency: Predicted frequency [batch_size]
                - max_displacement: Predicted max displacement [batch_size]
                - max_stress: Predicted max stress [batch_size]
                - displacement: (optional) Displacement field [num_nodes, 6]
        """
        x, edge_index, edge_attr = data.x, data.edge_index, data.edge_attr
        num_nodes = x.size(0)

        # Handle design params shape - ensure 2D [batch_size, design_dim]
        if design_params.dim() == 1:
            design_params = design_params.unsqueeze(0)

        batch_size = design_params.size(0)

        # Encode design parameters: [batch_size, design_dim] -> [batch_size, hidden]
        design_encoded = self.design_encoder(design_params)

        # Encode nodes (shared across all designs)
        x_encoded = self.node_encoder(x)  # [num_nodes, hidden]

        # Encode edges (shared across all designs)
        if edge_attr is not None:
            edge_features = self.edge_encoder(edge_attr)  # [num_edges, hidden//2]
        else:
            edge_features = None

        # Process each design in the batch
        all_graph_features = []

        for i in range(batch_size):
            # Get the design for this sample
            design_i = design_encoded[i]  # [hidden]

            # Reset node features for this sample
            x = x_encoded.clone()

            # Message passing with design conditioning
            for conv, norm, dropout in zip(self.conv_layers, self.layer_norms, self.dropouts):
                x_new = conv(x, edge_index, design_i, edge_features)
                x = x + dropout(x_new)  # Residual connection
                x = norm(x)

            # Global pooling for this sample
            batch_idx = torch.zeros(num_nodes, dtype=torch.long, device=x.device)
            x_mean = global_mean_pool(x, batch_idx)  # [1, hidden]
            x_max = global_max_pool(x, batch_idx)    # [1, hidden]

            # Concatenate pooled + design features
            graph_feat = torch.cat([x_mean, x_max, design_encoded[i:i+1]], dim=-1)  # [1, 3*hidden]
            all_graph_features.append(graph_feat)

        # Stack all samples
        graph_features = torch.cat(all_graph_features, dim=0)  # [batch_size, 3*hidden]

        # Predict objectives
        mass = self.mass_head(graph_features).squeeze(-1)
        frequency = self.frequency_head(graph_features).squeeze(-1)
        max_displacement = self.displacement_head(graph_features).squeeze(-1)
        max_stress = self.stress_head(graph_features).squeeze(-1)

        results = {
            'mass': mass,
            'frequency': frequency,
            'max_displacement': max_displacement,
            'max_stress': max_stress
        }

        # Optionally return the displacement field (uses the last processed x)
        if return_fields:
            displacement_field = self.field_decoder(x)  # [num_nodes, 6]
            results['displacement'] = displacement_field

        return results

    def get_num_parameters(self) -> int:
        """Get the total number of trainable parameters."""
        return sum(p.numel() for p in self.parameters() if p.requires_grad)


def create_parametric_model(config: Dict[str, Any] = None) -> ParametricFieldPredictor:
    """
    Factory function to create a parametric predictor model.

    Args:
        config: Model configuration dictionary

    Returns:
        Initialized ParametricFieldPredictor
    """
    model = ParametricFieldPredictor(config)

    # Initialize weights
    def init_weights(m):
        if isinstance(m, nn.Linear):
            nn.init.kaiming_normal_(m.weight, mode='fan_in', nonlinearity='relu')
            if m.bias is not None:
                nn.init.constant_(m.bias, 0)

    model.apply(init_weights)

    return model


if __name__ == "__main__":
    print("Testing Parametric Field Predictor...")
    print("=" * 60)

    # Create a model with the default config
    model = create_parametric_model()
    n_params = model.get_num_parameters()
    print(f"Model created: {n_params:,} parameters")
    print(f"Config: {model.config}")

    # Create dummy data
    num_nodes = 500
    num_edges = 2000

    x = torch.randn(num_nodes, 12)  # Node features
    edge_index = torch.randint(0, num_nodes, (2, num_edges))
    edge_attr = torch.randn(num_edges, 5)
    batch = torch.zeros(num_nodes, dtype=torch.long)

    data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, batch=batch)

    # Design parameters
    design_params = torch.randn(4)  # 4 design variables

    # Forward pass
    print("\nRunning forward pass...")
    with torch.no_grad():
        results = model(data, design_params, return_fields=True)

    print(f"\nPredictions:")
    print(f"  Mass: {results['mass'].item():.4f}")
    print(f"  Frequency: {results['frequency'].item():.4f}")
    print(f"  Max Displacement: {results['max_displacement'].item():.6f}")
    print(f"  Max Stress: {results['max_stress'].item():.2f}")

    if 'displacement' in results:
        print(f"  Displacement field shape: {results['displacement'].shape}")

    print("\n" + "=" * 60)
    print("Parametric predictor test PASSED!")
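The graph-level representation in `ParametricFieldPredictor` is built by concatenating mean-pooled node features, max-pooled node features, and the design embedding, which is why `pooled_dim = 3 * hidden_channels` (384 for the default hidden size of 128). The dimension bookkeeping can be sketched in plain Python (the helper name `pooled_features` is illustrative, not part of the deleted module):

```python
def pooled_features(node_feats, design_feat):
    """Mean + max pooling over nodes, concatenated with the design embedding.

    node_feats: list of per-node feature vectors, each of length H
    design_feat: design embedding of length H
    Returns one vector of length 3 * H, mirroring pooled_dim = 3 * hidden_channels.
    """
    mean_pool = [sum(col) / len(node_feats) for col in zip(*node_feats)]  # column-wise mean
    max_pool = [max(col) for col in zip(*node_feats)]                     # column-wise max
    return mean_pool + max_pool + list(design_feat)

feat = pooled_features([[1.0, 4.0], [3.0, 2.0]], [0.5, 0.5])
print(feat)  # [2.0, 3.0, 3.0, 4.0, 0.5, 0.5]
```

Mean pooling captures the average state of the mesh while max pooling preserves extreme-value information (useful for max-stress and max-displacement heads); appending the design embedding keeps the objective heads explicitly conditioned on the design even after pooling.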
@@ -1,449 +0,0 @@
"""
physics_losses.py
Physics-informed loss functions for training FEA field predictors

AtomizerField Physics-Informed Loss Functions v2.0

Key Innovation:
    Standard neural networks only minimize prediction error.
    Physics-informed networks also enforce physical laws:
        - Equilibrium: forces must balance
        - Compatibility: strains must be compatible with displacements
        - Constitutive: stress must follow the material law (sigma = C : epsilon)

    This makes the network learn physics, not just patterns.
"""

import torch
import torch.nn as nn
import torch.nn.functional as F


class PhysicsInformedLoss(nn.Module):
    """
    Combined loss function with physics constraints.

    Total Loss = lambda_data * L_data + lambda_physics * L_physics

    Where:
        - L_data: standard MSE between prediction and FEA ground truth
        - L_physics: physics violation penalty (equilibrium, compatibility, constitutive)
    """

    def __init__(
        self,
        lambda_data=1.0,
        lambda_equilibrium=0.1,
        lambda_constitutive=0.1,
        lambda_boundary=1.0,
        use_relative_error=True
    ):
        """
        Initialize the physics-informed loss.

        Args:
            lambda_data (float): Weight for the data loss
            lambda_equilibrium (float): Weight for equilibrium violation
            lambda_constitutive (float): Weight for constitutive law violation
            lambda_boundary (float): Weight for boundary condition violation
            use_relative_error (bool): Use relative error instead of absolute
        """
        super().__init__()

        self.lambda_data = lambda_data
        self.lambda_equilibrium = lambda_equilibrium
        self.lambda_constitutive = lambda_constitutive
        self.lambda_boundary = lambda_boundary
        self.use_relative_error = use_relative_error

    def forward(self, predictions, targets, data=None):
        """
        Compute the total physics-informed loss.

        Args:
            predictions (dict): Model predictions
                - displacement: [num_nodes, 6]
                - stress: [num_nodes, 6]
                - von_mises: [num_nodes]
            targets (dict): Ground truth from FEA
                - displacement: [num_nodes, 6]
                - stress: [num_nodes, 6]
            data: Mesh graph data (for physics constraints)

        Returns:
            dict with:
                - total_loss: Combined loss
                - data_loss: Data fitting loss
                - equilibrium_loss: Equilibrium violation
                - constitutive_loss: Material law violation
                - boundary_loss: BC violation
        """
        losses = {}
        device = predictions['displacement'].device

        # 1. Data loss: how well do predictions match FEA results?
        losses['displacement_loss'] = self._displacement_loss(
            predictions['displacement'],
            targets['displacement']
        )

        if 'stress' in predictions and 'stress' in targets:
            losses['stress_loss'] = self._stress_loss(
                predictions['stress'],
                targets['stress']
            )
        else:
            losses['stress_loss'] = torch.tensor(0.0, device=device)

        losses['data_loss'] = losses['displacement_loss'] + losses['stress_loss']

        # 2. Physics losses: how well do predictions obey physics?
        if data is not None:
            # Equilibrium: div(sigma) + f = 0
            losses['equilibrium_loss'] = self._equilibrium_loss(predictions, data)

            # Constitutive: sigma = C : epsilon
            losses['constitutive_loss'] = self._constitutive_loss(predictions, data)

            # Boundary conditions: u = 0 at fixed nodes
            losses['boundary_loss'] = self._boundary_condition_loss(predictions, data)
        else:
            losses['equilibrium_loss'] = torch.tensor(0.0, device=device)
            losses['constitutive_loss'] = torch.tensor(0.0, device=device)
            losses['boundary_loss'] = torch.tensor(0.0, device=device)

        # Total loss
        losses['total_loss'] = (
            self.lambda_data * losses['data_loss'] +
            self.lambda_equilibrium * losses['equilibrium_loss'] +
            self.lambda_constitutive * losses['constitutive_loss'] +
            self.lambda_boundary * losses['boundary_loss']
        )

        return losses

    def _displacement_loss(self, pred, target):
        """
        Loss for the displacement field.

        Uses relative error to handle different displacement magnitudes.
        """
        if self.use_relative_error:
            # Relative L2 error
            diff = pred - target
            rel_error = torch.norm(diff, dim=-1) / (torch.norm(target, dim=-1) + 1e-8)
            return rel_error.mean()
        else:
            # Absolute MSE
            return F.mse_loss(pred, target)

    def _stress_loss(self, pred, target):
        """
        Loss for the stress field.

        Emphasizes von Mises stress (most important for failure prediction).
        """
        # Component-wise MSE
        component_loss = F.mse_loss(pred, target)

        # Von Mises stress MSE (computed from components)
        pred_vm = self._compute_von_mises(pred)
        target_vm = self._compute_von_mises(target)
        vm_loss = F.mse_loss(pred_vm, target_vm)

        # Combined: 50% component accuracy, 50% von Mises accuracy
        return 0.5 * component_loss + 0.5 * vm_loss

    def _equilibrium_loss(self, predictions, data):
||||
"""
|
||||
Equilibrium loss: ∇·σ + f = 0
|
||||
|
||||
In discrete form: sum of forces at each node should be zero
|
||||
(where not externally loaded)
|
||||
|
||||
This is expensive to compute exactly, so we use a simplified version:
|
||||
Check force balance on each element
|
||||
"""
|
||||
# Simplified: For now, return zero (full implementation requires
|
||||
# computing stress divergence from node stresses)
|
||||
# TODO: Implement finite difference approximation of ∇·σ
|
||||
return torch.tensor(0.0, device=predictions['displacement'].device)
|
||||
|
||||
def _constitutive_loss(self, predictions, data):
|
||||
"""
|
||||
Constitutive law loss: σ = C:ε
|
||||
|
||||
Check if predicted stress is consistent with predicted strain
|
||||
(which comes from displacement gradient)
|
||||
|
||||
Simplified version: Check if stress-strain relationship is reasonable
|
||||
"""
|
||||
# Simplified: For now, return zero
|
||||
# Full implementation would:
|
||||
# 1. Compute strain from displacement gradient
|
||||
# 2. Compute expected stress from strain using material stiffness
|
||||
# 3. Compare with predicted stress
|
||||
# TODO: Implement strain computation and constitutive check
|
||||
return torch.tensor(0.0, device=predictions['displacement'].device)
|
||||
|
||||
def _boundary_condition_loss(self, predictions, data):
|
||||
"""
|
||||
Boundary condition loss: u = 0 at fixed DOFs
|
||||
|
||||
Penalize non-zero displacement at constrained nodes
|
||||
"""
|
||||
if not hasattr(data, 'bc_mask') or data.bc_mask is None:
|
||||
return torch.tensor(0.0, device=predictions['displacement'].device)
|
||||
|
||||
# bc_mask: [num_nodes, 6] boolean mask where True = constrained
|
||||
displacement = predictions['displacement']
|
||||
bc_mask = data.bc_mask
|
||||
|
||||
# Compute penalty for non-zero displacement at constrained DOFs
|
||||
constrained_displacement = displacement * bc_mask.float()
|
||||
bc_loss = torch.mean(constrained_displacement ** 2)
|
||||
|
||||
return bc_loss
|
||||
|
||||
def _compute_von_mises(self, stress):
|
||||
"""
|
||||
Compute von Mises stress from stress tensor components
|
||||
|
||||
Args:
|
||||
stress: [num_nodes, 6] with [σxx, σyy, σzz, τxy, τyz, τxz]
|
||||
|
||||
Returns:
|
||||
von_mises: [num_nodes]
|
||||
"""
|
||||
sxx, syy, szz = stress[:, 0], stress[:, 1], stress[:, 2]
|
||||
txy, tyz, txz = stress[:, 3], stress[:, 4], stress[:, 5]
|
||||
|
||||
vm = torch.sqrt(
|
||||
0.5 * (
|
||||
(sxx - syy)**2 + (syy - szz)**2 + (szz - sxx)**2 +
|
||||
6 * (txy**2 + tyz**2 + txz**2)
|
||||
)
|
||||
)
|
||||
|
||||
return vm
|
||||
|
||||
|
||||
class FieldMSELoss(nn.Module):
|
||||
"""
|
||||
Simple MSE loss for field prediction (no physics constraints)
|
||||
|
||||
Use this for initial training or when physics constraints are too strict.
|
||||
"""
|
||||
|
||||
def __init__(self, weight_displacement=1.0, weight_stress=1.0):
|
||||
"""
|
||||
Args:
|
||||
weight_displacement (float): Weight for displacement loss
|
||||
weight_stress (float): Weight for stress loss
|
||||
"""
|
||||
super().__init__()
|
||||
self.weight_displacement = weight_displacement
|
||||
self.weight_stress = weight_stress
|
||||
|
||||
def forward(self, predictions, targets):
|
||||
"""
|
||||
Compute MSE loss
|
||||
|
||||
Args:
|
||||
predictions (dict): Model outputs
|
||||
targets (dict): Ground truth
|
||||
|
||||
Returns:
|
||||
dict with loss components
|
||||
"""
|
||||
losses = {}
|
||||
|
||||
# Displacement MSE
|
||||
losses['displacement_loss'] = F.mse_loss(
|
||||
predictions['displacement'],
|
||||
targets['displacement']
|
||||
)
|
||||
|
||||
# Stress MSE (if available)
|
||||
if 'stress' in predictions and 'stress' in targets:
|
||||
losses['stress_loss'] = F.mse_loss(
|
||||
predictions['stress'],
|
||||
targets['stress']
|
||||
)
|
||||
else:
|
||||
losses['stress_loss'] = torch.tensor(0.0, device=predictions['displacement'].device)
|
||||
|
||||
# Total loss
|
||||
losses['total_loss'] = (
|
||||
self.weight_displacement * losses['displacement_loss'] +
|
||||
self.weight_stress * losses['stress_loss']
|
||||
)
|
||||
|
||||
return losses
|
||||
|
||||
|
||||
class RelativeFieldLoss(nn.Module):
|
||||
"""
|
||||
Relative error loss - better for varying displacement/stress magnitudes
|
||||
|
||||
Uses: ||pred - target|| / ||target||
|
||||
This makes the loss scale-invariant.
|
||||
"""
|
||||
|
||||
def __init__(self, epsilon=1e-8):
|
||||
"""
|
||||
Args:
|
||||
epsilon (float): Small constant to avoid division by zero
|
||||
"""
|
||||
super().__init__()
|
||||
self.epsilon = epsilon
|
||||
|
||||
def forward(self, predictions, targets):
|
||||
"""
|
||||
Compute relative error loss
|
||||
|
||||
Args:
|
||||
predictions (dict): Model outputs
|
||||
targets (dict): Ground truth
|
||||
|
||||
Returns:
|
||||
dict with loss components
|
||||
"""
|
||||
losses = {}
|
||||
|
||||
# Relative displacement error
|
||||
disp_diff = predictions['displacement'] - targets['displacement']
|
||||
disp_norm_pred = torch.norm(disp_diff, dim=-1)
|
||||
disp_norm_target = torch.norm(targets['displacement'], dim=-1)
|
||||
losses['displacement_loss'] = (disp_norm_pred / (disp_norm_target + self.epsilon)).mean()
|
||||
|
||||
# Relative stress error
|
||||
if 'stress' in predictions and 'stress' in targets:
|
||||
stress_diff = predictions['stress'] - targets['stress']
|
||||
stress_norm_pred = torch.norm(stress_diff, dim=-1)
|
||||
stress_norm_target = torch.norm(targets['stress'], dim=-1)
|
||||
losses['stress_loss'] = (stress_norm_pred / (stress_norm_target + self.epsilon)).mean()
|
||||
else:
|
||||
losses['stress_loss'] = torch.tensor(0.0, device=predictions['displacement'].device)
|
||||
|
||||
# Total loss
|
||||
losses['total_loss'] = losses['displacement_loss'] + losses['stress_loss']
|
||||
|
||||
return losses
|
||||
|
||||
|
||||
class MaxValueLoss(nn.Module):
|
||||
"""
|
||||
Loss on maximum values only (for backward compatibility with scalar optimization)
|
||||
|
||||
This is useful if you want to ensure the network gets the critical max values right,
|
||||
even if the field distribution is slightly off.
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
|
||||
def forward(self, predictions, targets):
|
||||
"""
|
||||
Compute loss on maximum displacement and stress
|
||||
|
||||
Args:
|
||||
predictions (dict): Model outputs with 'displacement', 'von_mises'
|
||||
targets (dict): Ground truth
|
||||
|
||||
Returns:
|
||||
dict with loss components
|
||||
"""
|
||||
losses = {}
|
||||
|
||||
# Max displacement error
|
||||
pred_max_disp = torch.max(torch.norm(predictions['displacement'][:, :3], dim=1))
|
||||
target_max_disp = torch.max(torch.norm(targets['displacement'][:, :3], dim=1))
|
||||
losses['max_displacement_loss'] = F.mse_loss(pred_max_disp, target_max_disp)
|
||||
|
||||
# Max von Mises stress error
|
||||
if 'von_mises' in predictions and 'stress' in targets:
|
||||
pred_max_vm = torch.max(predictions['von_mises'])
|
||||
|
||||
# Compute target von Mises
|
||||
target_stress = targets['stress']
|
||||
sxx, syy, szz = target_stress[:, 0], target_stress[:, 1], target_stress[:, 2]
|
||||
txy, tyz, txz = target_stress[:, 3], target_stress[:, 4], target_stress[:, 5]
|
||||
target_vm = torch.sqrt(
|
||||
0.5 * ((sxx - syy)**2 + (syy - szz)**2 + (szz - sxx)**2 +
|
||||
6 * (txy**2 + tyz**2 + txz**2))
|
||||
)
|
||||
target_max_vm = torch.max(target_vm)
|
||||
|
||||
losses['max_stress_loss'] = F.mse_loss(pred_max_vm, target_max_vm)
|
||||
else:
|
||||
losses['max_stress_loss'] = torch.tensor(0.0, device=predictions['displacement'].device)
|
||||
|
||||
# Total loss
|
||||
losses['total_loss'] = losses['max_displacement_loss'] + losses['max_stress_loss']
|
||||
|
||||
return losses
|
||||
|
||||
|
||||
def create_loss_function(loss_type='mse', config=None):
|
||||
"""
|
||||
Factory function to create loss function
|
||||
|
||||
Args:
|
||||
loss_type (str): Type of loss ('mse', 'relative', 'physics', 'max')
|
||||
config (dict): Loss function configuration
|
||||
|
||||
Returns:
|
||||
Loss function instance
|
||||
"""
|
||||
if config is None:
|
||||
config = {}
|
||||
|
||||
if loss_type == 'mse':
|
||||
return FieldMSELoss(**config)
|
||||
elif loss_type == 'relative':
|
||||
return RelativeFieldLoss(**config)
|
||||
elif loss_type == 'physics':
|
||||
return PhysicsInformedLoss(**config)
|
||||
elif loss_type == 'max':
|
||||
return MaxValueLoss(**config)
|
||||
else:
|
||||
raise ValueError(f"Unknown loss type: {loss_type}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
# Test loss functions
|
||||
print("Testing AtomizerField Loss Functions...\n")
|
||||
|
||||
# Create dummy predictions and targets
|
||||
num_nodes = 100
|
||||
pred = {
|
||||
'displacement': torch.randn(num_nodes, 6),
|
||||
'stress': torch.randn(num_nodes, 6),
|
||||
'von_mises': torch.abs(torch.randn(num_nodes))
|
||||
}
|
||||
target = {
|
||||
'displacement': torch.randn(num_nodes, 6),
|
||||
'stress': torch.randn(num_nodes, 6)
|
||||
}
|
||||
|
||||
# Test each loss function
|
||||
loss_types = ['mse', 'relative', 'physics', 'max']
|
||||
|
||||
for loss_type in loss_types:
|
||||
print(f"Testing {loss_type.upper()} loss...")
|
||||
loss_fn = create_loss_function(loss_type)
|
||||
losses = loss_fn(pred, target)
|
||||
|
||||
print(f" Total loss: {losses['total_loss']:.6f}")
|
||||
for key, value in losses.items():
|
||||
if key != 'total_loss':
|
||||
print(f" {key}: {value:.6f}")
|
||||
print()
|
||||
|
||||
print("Loss function tests passed!")
|
||||
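The `_compute_von_mises` helper above implements the standard von Mises formula. As a dependency-free sanity check (a pure-Python restatement of the same formula, not the repo's torch version), it reproduces two textbook cases: under uniaxial tension the von Mises stress equals the axial stress, and under pure shear it equals √3·τ:

```python
import math

def von_mises(sxx, syy, szz, txy, tyz, txz):
    """Von Mises stress from the six Cauchy stress components."""
    return math.sqrt(0.5 * (
        (sxx - syy)**2 + (syy - szz)**2 + (szz - sxx)**2
        + 6.0 * (txy**2 + tyz**2 + txz**2)
    ))

# Uniaxial tension: von Mises equals the axial stress
print(von_mises(200.0, 0, 0, 0, 0, 0))   # 200.0
# Pure shear: von Mises equals sqrt(3) * tau
print(von_mises(0, 0, 0, 50.0, 0, 0))    # ~86.6
```

These identities make useful unit tests for any reimplementation of the stress head.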
@@ -1,361 +0,0 @@
"""
uncertainty.py
Uncertainty quantification for neural field predictions

AtomizerField Uncertainty Quantification v2.1
Know when to trust predictions and when to run FEA!

Key Features:
- Ensemble-based uncertainty estimation
- Confidence intervals for predictions
- Automatic FEA recommendation
- Online calibration
"""

import torch
import torch.nn as nn
import numpy as np
from copy import deepcopy

from .field_predictor import AtomizerFieldModel


class UncertainFieldPredictor(nn.Module):
    """
    Ensemble of models for uncertainty quantification.

    Uses multiple models trained with different initializations
    to estimate prediction uncertainty.

    When uncertainty is high -> recommend FEA validation.
    When uncertainty is low -> trust the neural prediction.
    """

    def __init__(self, base_model_config, n_ensemble=5):
        """
        Initialize the ensemble.

        Args:
            base_model_config (dict): Configuration for the base model
            n_ensemble (int): Number of models in the ensemble
        """
        super().__init__()

        print(f"\nCreating ensemble with {n_ensemble} models...")

        # Create ensemble of models
        self.models = nn.ModuleList([
            AtomizerFieldModel(**base_model_config)
            for _ in range(n_ensemble)
        ])

        self.n_ensemble = n_ensemble

        # Initialize each model differently
        for i, model in enumerate(self.models):
            self._init_weights(model, seed=i)

        print(f"Ensemble created with {n_ensemble} models")

    def _init_weights(self, model, seed):
        """Initialize model weights with a given seed."""
        torch.manual_seed(seed)

        def init_fn(m):
            if isinstance(m, nn.Linear):
                nn.init.kaiming_normal_(m.weight, mode='fan_in', nonlinearity='relu')
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)

        model.apply(init_fn)

    def forward(self, data, return_uncertainty=True, return_all_predictions=False):
        """
        Forward pass through the ensemble.

        Args:
            data: Input graph data
            return_uncertainty (bool): Return uncertainty estimates
            return_all_predictions (bool): Return all individual predictions

        Returns:
            dict: Predictions with uncertainty
                - displacement: Mean prediction
                - stress: Mean prediction
                - von_mises: Mean prediction
                - displacement_std: Standard deviation (if return_uncertainty)
                - stress_std: Standard deviation (if return_uncertainty)
                - von_mises_std: Standard deviation (if return_uncertainty)
                - all_predictions: List of all predictions (if return_all_predictions)
        """
        # Get predictions from all models
        all_predictions = []

        for model in self.models:
            with torch.no_grad():
                pred = model(data, return_stress=True)
            all_predictions.append(pred)

        # Stack predictions
        displacement_stack = torch.stack([p['displacement'] for p in all_predictions])
        stress_stack = torch.stack([p['stress'] for p in all_predictions])
        von_mises_stack = torch.stack([p['von_mises'] for p in all_predictions])

        # Compute mean predictions
        results = {
            'displacement': displacement_stack.mean(dim=0),
            'stress': stress_stack.mean(dim=0),
            'von_mises': von_mises_stack.mean(dim=0)
        }

        # Compute uncertainty (standard deviation across the ensemble)
        if return_uncertainty:
            results['displacement_std'] = displacement_stack.std(dim=0)
            results['stress_std'] = stress_stack.std(dim=0)
            results['von_mises_std'] = von_mises_stack.std(dim=0)

            # Overall uncertainty metrics
            results['max_displacement_uncertainty'] = results['displacement_std'].max().item()
            results['max_stress_uncertainty'] = results['von_mises_std'].max().item()

            # Uncertainty as a fraction of the prediction
            results['displacement_rel_uncertainty'] = (
                results['displacement_std'] / (torch.abs(results['displacement']) + 1e-8)
            ).mean().item()

            results['stress_rel_uncertainty'] = (
                results['von_mises_std'] / (results['von_mises'] + 1e-8)
            ).mean().item()

        # Return all predictions if requested
        if return_all_predictions:
            results['all_predictions'] = all_predictions

        return results

    def needs_fea_validation(self, predictions, threshold=0.1):
        """
        Determine whether FEA validation is recommended.

        Args:
            predictions (dict): Output from forward() with uncertainty
            threshold (float): Relative uncertainty threshold

        Returns:
            dict: Recommendation and reasons
        """
        reasons = []

        # Check displacement uncertainty
        if predictions['displacement_rel_uncertainty'] > threshold:
            reasons.append(
                f"High displacement uncertainty: "
                f"{predictions['displacement_rel_uncertainty']*100:.1f}% > {threshold*100:.1f}%"
            )

        # Check stress uncertainty
        if predictions['stress_rel_uncertainty'] > threshold:
            reasons.append(
                f"High stress uncertainty: "
                f"{predictions['stress_rel_uncertainty']*100:.1f}% > {threshold*100:.1f}%"
            )

        recommend_fea = len(reasons) > 0

        return {
            'recommend_fea': recommend_fea,
            'reasons': reasons,
            'displacement_uncertainty': predictions['displacement_rel_uncertainty'],
            'stress_uncertainty': predictions['stress_rel_uncertainty']
        }

    def get_confidence_intervals(self, predictions, confidence=0.95):
        """
        Compute confidence intervals for predictions.

        Args:
            predictions (dict): Output from forward() with uncertainty
            confidence (float): Confidence level (0.95 = 95% confidence)

        Returns:
            dict: Confidence intervals
        """
        # For a normal distribution, the 95% CI is ±1.96 std,
        # the 90% CI is ±1.645 std, and the 99% CI is ±2.576 std.
        z_score = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}.get(confidence, 1.96)

        intervals = {}

        # Displacement intervals
        intervals['displacement_lower'] = predictions['displacement'] - z_score * predictions['displacement_std']
        intervals['displacement_upper'] = predictions['displacement'] + z_score * predictions['displacement_std']

        # Stress intervals
        intervals['von_mises_lower'] = predictions['von_mises'] - z_score * predictions['von_mises_std']
        intervals['von_mises_upper'] = predictions['von_mises'] + z_score * predictions['von_mises_std']

        # Max values with confidence intervals
        max_vm = predictions['von_mises'].max()
        max_vm_std = predictions['von_mises_std'].max()

        intervals['max_stress_estimate'] = max_vm.item()
        intervals['max_stress_lower'] = (max_vm - z_score * max_vm_std).item()
        intervals['max_stress_upper'] = (max_vm + z_score * max_vm_std).item()

        return intervals


class OnlineLearner:
    """
    Online learning from FEA runs during optimization.

    As optimization progresses and FEA is run for validation,
    this module can quickly update the model to improve predictions.

    This creates a virtuous cycle:
    1. Use the neural network for fast exploration
    2. Run FEA on promising designs
    3. Update the neural network with the new data
    4. The neural network improves -> less FEA is needed
    """

    def __init__(self, model, learning_rate=0.0001):
        """
        Initialize the online learner.

        Args:
            model: Neural network model
            learning_rate (float): Learning rate for updates
        """
        self.model = model
        self.optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
        self.replay_buffer = []
        self.update_count = 0

        print("\nOnline learner initialized")
        print(f"Learning rate: {learning_rate}")

    def add_fea_result(self, graph_data, fea_results):
        """
        Add a new FEA result to the replay buffer.

        Args:
            graph_data: Mesh graph
            fea_results (dict): FEA results (displacement, stress)
        """
        self.replay_buffer.append({
            'graph_data': graph_data,
            'fea_results': fea_results
        })

        print(f"Added FEA result to buffer (total: {len(self.replay_buffer)})")

    def quick_update(self, steps=10):
        """
        Quick fine-tuning on recent FEA results.

        Args:
            steps (int): Number of gradient steps
        """
        if len(self.replay_buffer) == 0:
            print("No data in replay buffer")
            return

        print(f"\nQuick update: {steps} steps on {len(self.replay_buffer)} samples")

        self.model.train()

        for step in range(steps):
            total_loss = 0.0

            # Train on all samples in the buffer
            for sample in self.replay_buffer:
                graph_data = sample['graph_data']
                fea_results = sample['fea_results']

                # Forward pass
                predictions = self.model(graph_data, return_stress=True)

                # Compute loss
                disp_loss = nn.functional.mse_loss(
                    predictions['displacement'],
                    fea_results['displacement']
                )

                if 'stress' in fea_results:
                    stress_loss = nn.functional.mse_loss(
                        predictions['stress'],
                        fea_results['stress']
                    )
                    loss = disp_loss + stress_loss
                else:
                    loss = disp_loss

                # Backward pass
                self.optimizer.zero_grad()
                loss.backward()
                self.optimizer.step()

                total_loss += loss.item()

            if step % 5 == 0:
                avg_loss = total_loss / len(self.replay_buffer)
                print(f"  Step {step}/{steps}: Loss = {avg_loss:.6f}")

        self.model.eval()
        self.update_count += 1

        print(f"Update complete (total updates: {self.update_count})")

    def clear_buffer(self):
        """Clear the replay buffer."""
        self.replay_buffer = []
        print("Replay buffer cleared")


def create_uncertain_predictor(model_config, n_ensemble=5):
    """
    Factory function to create an uncertain predictor.

    Args:
        model_config (dict): Model configuration
        n_ensemble (int): Ensemble size

    Returns:
        UncertainFieldPredictor instance
    """
    return UncertainFieldPredictor(model_config, n_ensemble)


if __name__ == "__main__":
    # Test uncertainty quantification
    print("Testing Uncertainty Quantification...\n")

    # Create ensemble
    model_config = {
        'node_feature_dim': 12,
        'edge_feature_dim': 5,
        'hidden_dim': 64,
        'num_layers': 4,
        'dropout': 0.1
    }

    ensemble = UncertainFieldPredictor(model_config, n_ensemble=3)

    print(f"\nEnsemble created with {ensemble.n_ensemble} models")
    print("Uncertainty quantification ready!")
    print("\nUsage:")
    print("""
    # Get predictions with uncertainty
    predictions = ensemble(graph_data, return_uncertainty=True)

    # Check if FEA validation is needed
    recommendation = ensemble.needs_fea_validation(predictions, threshold=0.1)

    if recommendation['recommend_fea']:
        print("Recommendation: Run FEA for validation")
        for reason in recommendation['reasons']:
            print(f"  - {reason}")
    else:
        print("Prediction confident - no FEA needed!")
    """)
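The ensemble statistics in `get_confidence_intervals` reduce to a z-score interval around the ensemble mean. A minimal pure-Python sketch of the same arithmetic (the sample values are invented for illustration; note that `torch.std` uses the same Bessel-corrected n−1 denominator by default):

```python
import math

def ensemble_interval(samples, confidence=0.95):
    """Mean and normal-approximation confidence interval over ensemble samples."""
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}.get(confidence, 1.96)
    n = len(samples)
    mean = sum(samples) / n
    # Sample standard deviation (n - 1 denominator), matching torch.std's default
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    std = math.sqrt(var)
    return mean - z * std, mean, mean + z * std

# Five hypothetical ensemble predictions of max von Mises stress (MPa)
lower, mean, upper = ensemble_interval([98.0, 102.0, 100.0, 99.0, 101.0])
print(mean)   # 100.0
```

A wide interval here is exactly the signal `needs_fea_validation` turns into an "run FEA" recommendation.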
@@ -1,421 +0,0 @@
"""
optimization_interface.py
Bridge between the AtomizerField neural network and the Atomizer optimization platform

AtomizerField Optimization Interface v2.1
Enables gradient-based optimization with neural field predictions.

Key Features:
- Drop-in replacement for FEA evaluation (1000x faster)
- Gradient computation for sensitivity analysis
- Field-aware optimization (knows WHERE stress occurs)
- Uncertainty quantification (knows when to trust predictions)
- Automatic FEA fallback for high-uncertainty cases
"""

import torch
import torch.nn.functional as F
import numpy as np
from pathlib import Path
import json
import time

from neural_models.field_predictor import AtomizerFieldModel
from neural_models.data_loader import FEAMeshDataset


class NeuralFieldOptimizer:
    """
    Optimization interface for AtomizerField.

    This class provides a simple API for optimization:
    - evaluate(parameters) -> objectives (max_stress, max_disp, etc.)
    - get_sensitivities(parameters) -> gradients for optimization
    - get_fields(parameters) -> complete stress/displacement fields

    Usage:
        optimizer = NeuralFieldOptimizer('checkpoint_best.pt')
        results = optimizer.evaluate(parameters)
        print(f"Max stress: {results['max_stress']:.2f} MPa")
    """

    def __init__(
        self,
        model_path,
        uncertainty_threshold=0.1,
        enable_gradients=True,
        device=None
    ):
        """
        Initialize the optimizer.

        Args:
            model_path (str): Path to the trained model checkpoint
            uncertainty_threshold (float): Uncertainty above which to recommend FEA
            enable_gradients (bool): Enable gradient computation
            device (str): Device to run on ('cuda' or 'cpu')
        """
        if device is None:
            self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        else:
            self.device = torch.device(device)

        print("\nAtomizerField Optimization Interface v2.1")
        print(f"Device: {self.device}")

        # Load model
        print(f"Loading model from {model_path}...")
        checkpoint = torch.load(model_path, map_location=self.device)

        # Create model
        model_config = checkpoint['config']['model']
        self.model = AtomizerFieldModel(**model_config)
        self.model.load_state_dict(checkpoint['model_state_dict'])
        self.model = self.model.to(self.device)
        self.model.eval()

        self.config = checkpoint['config']
        self.uncertainty_threshold = uncertainty_threshold
        self.enable_gradients = enable_gradients

        # Model info
        self.model_info = {
            'version': checkpoint.get('epoch', 'unknown'),
            'best_val_loss': checkpoint.get('best_val_loss', 'unknown'),
            'training_config': checkpoint['config']
        }

        print("Model loaded successfully!")
        print(f"  Epoch: {checkpoint.get('epoch', 'N/A')}")
        print(f"  Validation loss: {checkpoint.get('best_val_loss', 'N/A')}")

        # Statistics for tracking
        self.eval_count = 0
        self.total_time = 0.0

    def evaluate(self, graph_data, return_fields=False):
        """
        Evaluate a design using the neural network (drop-in FEA replacement).

        Args:
            graph_data: PyTorch Geometric Data object with the mesh graph
            return_fields (bool): Return complete fields or just objectives

        Returns:
            dict: Optimization objectives and optionally complete fields
                - max_stress: Maximum von Mises stress (MPa)
                - max_displacement: Maximum displacement (mm)
                - fields: Complete stress/displacement fields (if return_fields=True)
                - inference_time_ms: Prediction time
        """
        start_time = time.time()

        # Move to device
        graph_data = graph_data.to(self.device)

        # Predict
        with torch.set_grad_enabled(self.enable_gradients):
            predictions = self.model(graph_data, return_stress=True)

        inference_time = (time.time() - start_time) * 1000  # ms

        # Extract objectives
        max_displacement = torch.max(
            torch.norm(predictions['displacement'][:, :3], dim=1)
        ).item()

        max_stress = torch.max(predictions['von_mises']).item()

        results = {
            'max_stress': max_stress,
            'max_displacement': max_displacement,
            'inference_time_ms': inference_time,
            'evaluation_count': self.eval_count
        }

        # Add complete fields if requested
        if return_fields:
            results['fields'] = {
                'displacement': predictions['displacement'].cpu().detach().numpy(),
                'stress': predictions['stress'].cpu().detach().numpy(),
                'von_mises': predictions['von_mises'].cpu().detach().numpy()
            }

        # Update statistics
        self.eval_count += 1
        self.total_time += inference_time

        return results

    def get_sensitivities(self, graph_data, objective='max_stress'):
        """
        Compute gradients for gradient-based optimization.

        This enables much faster optimization than finite differences.

        Args:
            graph_data: PyTorch Geometric Data with requires_grad=True
            objective (str): Objective to differentiate ('max_stress' or 'max_displacement')

        Returns:
            dict: Gradients with respect to input features
                - node_gradients: d(objective)/d(node_features)
                - edge_gradients: d(objective)/d(edge_features)
        """
        if not self.enable_gradients:
            raise RuntimeError("Gradients not enabled. Set enable_gradients=True")

        # Enable gradients
        graph_data = graph_data.to(self.device)
        graph_data.x.requires_grad_(True)
        if graph_data.edge_attr is not None:
            graph_data.edge_attr.requires_grad_(True)

        # Forward pass
        predictions = self.model(graph_data, return_stress=True)

        # Compute objective
        if objective == 'max_stress':
            obj = torch.max(predictions['von_mises'])
        elif objective == 'max_displacement':
            disp_mag = torch.norm(predictions['displacement'][:, :3], dim=1)
            obj = torch.max(disp_mag)
        else:
            raise ValueError(f"Unknown objective: {objective}")

        # Backward pass
        obj.backward()

        # Extract gradients
        gradients = {
            'node_gradients': graph_data.x.grad.cpu().numpy(),
            'objective_value': obj.item()
        }

        if graph_data.edge_attr is not None and graph_data.edge_attr.grad is not None:
            gradients['edge_gradients'] = graph_data.edge_attr.grad.cpu().numpy()

        return gradients

    def batch_evaluate(self, graph_data_list, return_fields=False):
        """
        Evaluate multiple designs in a batch.

        Args:
            graph_data_list (list): List of graph data objects
            return_fields (bool): Return complete fields

        Returns:
            list: List of evaluation results
        """
        results = []

        for graph_data in graph_data_list:
            result = self.evaluate(graph_data, return_fields=return_fields)
            results.append(result)

        return results

    def needs_fea_validation(self, uncertainty):
        """
        Determine whether FEA validation is recommended.

        Args:
            uncertainty (float): Prediction uncertainty

        Returns:
            bool: True if FEA is recommended
        """
        return uncertainty > self.uncertainty_threshold

    def compare_with_fea(self, graph_data, fea_results):
        """
        Compare neural predictions with FEA ground truth.

        Args:
            graph_data: Mesh graph
            fea_results (dict): FEA results with 'max_stress', 'max_displacement'

        Returns:
            dict: Comparison metrics
        """
        # Neural prediction
        pred = self.evaluate(graph_data)

        # Compute errors
        stress_error = abs(pred['max_stress'] - fea_results['max_stress'])
        stress_rel_error = stress_error / (fea_results['max_stress'] + 1e-8)

        disp_error = abs(pred['max_displacement'] - fea_results['max_displacement'])
        disp_rel_error = disp_error / (fea_results['max_displacement'] + 1e-8)

        comparison = {
            'neural_prediction': pred,
            'fea_results': fea_results,
            'errors': {
                'stress_error_abs': stress_error,
                'stress_error_rel': stress_rel_error,
                'displacement_error_abs': disp_error,
                'displacement_error_rel': disp_rel_error
            },
            'within_tolerance': stress_rel_error < 0.1 and disp_rel_error < 0.1
        }

        return comparison

    def get_statistics(self):
        """
        Get optimizer usage statistics.

        Returns:
            dict: Statistics about predictions
        """
        avg_time = self.total_time / self.eval_count if self.eval_count > 0 else 0

        return {
            'total_evaluations': self.eval_count,
            'total_time_ms': self.total_time,
            'average_time_ms': avg_time,
            'model_info': self.model_info
        }

    def reset_statistics(self):
        """Reset usage statistics."""
        self.eval_count = 0
        self.total_time = 0.0

class ParametricOptimizer:
|
||||
"""
|
||||
Optimizer for parametric designs
|
||||
|
||||
This wraps NeuralFieldOptimizer and adds parameter → mesh conversion.
|
||||
Enables direct optimization over design parameters (thickness, radius, etc.)
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
model_path,
|
||||
parameter_names,
|
||||
parameter_bounds,
|
||||
mesh_generator_fn
|
||||
):
|
||||
"""
|
||||
Initialize parametric optimizer
|
||||
|
||||
Args:
|
||||
model_path (str): Path to trained model
|
||||
parameter_names (list): Names of design parameters
|
||||
parameter_bounds (dict): Bounds for each parameter
|
||||
mesh_generator_fn: Function that converts parameters → graph_data
|
||||
"""
|
||||
self.neural_optimizer = NeuralFieldOptimizer(model_path)
|
||||
self.parameter_names = parameter_names
|
||||
self.parameter_bounds = parameter_bounds
|
||||
self.mesh_generator = mesh_generator_fn
|
||||
|
||||
print(f"\nParametric Optimizer initialized")
|
||||
print(f"Design parameters: {parameter_names}")
|
||||
|
||||
def evaluate_parameters(self, parameters):
|
||||
"""
|
||||
Evaluate design from parameters
|
||||
|
||||
Args:
|
||||
parameters (dict): Design parameters
|
||||
|
||||
Returns:
|
||||
dict: Objectives (max_stress, max_displacement, etc.)
|
||||
"""
|
||||
# Generate mesh from parameters
|
||||
graph_data = self.mesh_generator(parameters)
|
||||
|
||||
# Evaluate
|
||||
results = self.neural_optimizer.evaluate(graph_data)
|
||||
|
||||
# Add parameters to results
|
||||
results['parameters'] = parameters
|
||||
|
||||
return results
|
||||
|
||||
def optimize(
|
||||
self,
|
||||
initial_parameters,
|
||||
objectives,
|
||||
constraints,
|
||||
method='gradient',
|
||||
max_iterations=100
|
||||
):
|
||||
"""
|
||||
Run optimization
|
||||
|
||||
Args:
|
||||
initial_parameters (dict): Starting point
|
||||
objectives (list): Objectives to minimize/maximize
|
||||
constraints (list): Constraint functions
|
||||
method (str): Optimization method ('gradient' or 'genetic')
|
||||
max_iterations (int): Maximum iterations
|
||||
|
||||
Returns:
|
||||
dict: Optimal parameters and results
|
||||
"""
|
||||
# This would integrate with scipy.optimize or genetic algorithms
|
||||
# Placeholder for now
|
||||
|
||||
print(f"\nStarting optimization with {method} method...")
|
||||
print(f"Initial parameters: {initial_parameters}")
|
||||
print(f"Objectives: {objectives}")
|
||||
print(f"Max iterations: {max_iterations}")
|
||||
|
||||
# TODO: Implement optimization loop
|
||||
# For gradient-based:
|
||||
# 1. Evaluate at current parameters
|
||||
# 2. Compute sensitivities
|
||||
# 3. Update parameters using gradients
|
||||
# 4. Repeat until convergence
|
||||
|
||||
raise NotImplementedError("Full optimization loop coming in next update!")
|
||||
|
||||
|
||||
def create_optimizer(model_path, config=None):
|
||||
"""
|
||||
Factory function to create optimizer
|
||||
|
||||
Args:
|
||||
model_path (str): Path to trained model
|
||||
config (dict): Optimizer configuration
|
||||
|
||||
Returns:
|
||||
NeuralFieldOptimizer instance
|
||||
"""
|
||||
if config is None:
|
||||
config = {}
|
||||
|
||||
return NeuralFieldOptimizer(model_path, **config)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
# Example usage
|
||||
print("AtomizerField Optimization Interface")
|
||||
print("=" * 60)
|
||||
print("\nThis module provides fast optimization with neural field predictions.")
|
||||
print("\nExample usage:")
|
||||
print("""
|
||||
# Create optimizer
|
||||
optimizer = NeuralFieldOptimizer('checkpoint_best.pt')
|
||||
|
||||
# Evaluate design
|
||||
results = optimizer.evaluate(graph_data)
|
||||
print(f"Max stress: {results['max_stress']:.2f} MPa")
|
||||
print(f"Inference time: {results['inference_time_ms']:.1f} ms")
|
||||
|
||||
# Get sensitivities for gradient-based optimization
|
||||
gradients = optimizer.get_sensitivities(graph_data, objective='max_stress')
|
||||
|
||||
# Batch evaluation (test 1000 designs in seconds!)
|
||||
all_results = optimizer.batch_evaluate(design_variants)
|
||||
""")
|
||||
|
||||
print("\nOptimization interface ready!")
|
||||
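The TODO in `ParametricOptimizer.optimize` above lists the four steps of the gradient-based loop (evaluate, compute sensitivities, update, repeat) that the removed module never implemented. A minimal self-contained sketch of those steps, using forward finite differences over scalar parameters; `optimize_parameters` and the toy objective are hypothetical illustrations, not part of the deleted code:

```python
# Hypothetical sketch of the TODO'd gradient loop; not part of the removed module.
def optimize_parameters(evaluate_fn, params, lr=0.1, eps=1e-4, max_iterations=100):
    """Minimize evaluate_fn(params) by forward finite-difference gradient descent."""
    params = dict(params)
    for _ in range(max_iterations):
        base = evaluate_fn(params)                      # 1. Evaluate at current parameters
        grads = {}
        for name in params:                             # 2. Compute sensitivities
            perturbed = dict(params)
            perturbed[name] += eps
            grads[name] = (evaluate_fn(perturbed) - base) / eps
        for name in params:                             # 3. Update parameters using gradients
            params[name] -= lr * grads[name]
        if max(abs(g) for g in grads.values()) < 1e-6:  # 4. Repeat until convergence
            break
    return params

# Toy quadratic objective with its minimum at thickness=2.0, radius=5.0
objective = lambda p: (p["thickness"] - 2.0) ** 2 + (p["radius"] - 5.0) ** 2
best = optimize_parameters(objective, {"thickness": 1.0, "radius": 4.0})
```

With the real class, `evaluate_fn` would wrap `evaluate_parameters` and return the chosen objective (e.g. `max_stress`), and the analytic `get_sensitivities` gradients could replace the finite differences.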
@@ -1,373 +0,0 @@
"""
predict.py
Inference script for AtomizerField trained models

AtomizerField Inference v2.0
Uses trained GNN to predict FEA fields 1000x faster than traditional simulation.

Usage:
    python predict.py --model checkpoint_best.pt --input case_001

This enables:
- Rapid design exploration (milliseconds vs hours per analysis)
- Real-time optimization
- Interactive design feedback
"""

import argparse
import json
from pathlib import Path
import time

import torch
import numpy as np
import h5py

from neural_models.field_predictor import AtomizerFieldModel
from neural_models.data_loader import FEAMeshDataset


class FieldPredictor:
    """
    Inference engine for trained field prediction models
    """

    def __init__(self, checkpoint_path, device=None):
        """
        Initialize predictor

        Args:
            checkpoint_path (str): Path to trained model checkpoint
            device (str): Device to run on ('cuda' or 'cpu')
        """
        if device is None:
            self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        else:
            self.device = torch.device(device)

        print(f"\nAtomizerField Inference Engine v2.0")
        print(f"Device: {self.device}")

        # Load checkpoint
        print(f"Loading model from {checkpoint_path}...")
        checkpoint = torch.load(checkpoint_path, map_location=self.device)

        # Create model
        model_config = checkpoint['config']['model']
        self.model = AtomizerFieldModel(**model_config)
        self.model.load_state_dict(checkpoint['model_state_dict'])
        self.model = self.model.to(self.device)
        self.model.eval()

        self.config = checkpoint['config']

        print(f"Model loaded (epoch {checkpoint['epoch']}, val_loss={checkpoint['best_val_loss']:.6f})")

    def predict(self, case_directory):
        """
        Predict displacement and stress fields for a case

        Args:
            case_directory (str): Path to parsed FEA case

        Returns:
            dict: Predictions with displacement, stress, von_mises fields
        """
        print(f"\nPredicting fields for {Path(case_directory).name}...")

        # Load data
        dataset = FEAMeshDataset(
            [case_directory],
            normalize=True,
            include_stress=False  # Don't need ground truth for prediction
        )

        if len(dataset) == 0:
            raise ValueError(f"Could not load case from {case_directory}")

        data = dataset[0].to(self.device)

        # Predict
        start_time = time.time()

        with torch.no_grad():
            predictions = self.model(data, return_stress=True)

        inference_time = time.time() - start_time

        print(f"Prediction complete in {inference_time*1000:.1f} ms")

        # Convert to numpy
        results = {
            'displacement': predictions['displacement'].cpu().numpy(),
            'stress': predictions['stress'].cpu().numpy(),
            'von_mises': predictions['von_mises'].cpu().numpy(),
            'inference_time_ms': inference_time * 1000
        }

        # Compute max values
        max_disp = np.max(np.linalg.norm(results['displacement'][:, :3], axis=1))
        max_stress = np.max(results['von_mises'])

        results['max_displacement'] = float(max_disp)
        results['max_stress'] = float(max_stress)

        print(f"\nResults:")
        print(f" Max displacement: {max_disp:.6f} mm")
        print(f" Max von Mises stress: {max_stress:.2f} MPa")

        return results

    def save_predictions(self, predictions, case_directory, output_name='predicted'):
        """
        Save predictions in same format as ground truth

        Args:
            predictions (dict): Prediction results
            case_directory (str): Case directory
            output_name (str): Output file name prefix
        """
        case_dir = Path(case_directory)
        output_file = case_dir / f"{output_name}_fields.h5"

        print(f"\nSaving predictions to {output_file}...")

        with h5py.File(output_file, 'w') as f:
            # Save displacement
            f.create_dataset('displacement',
                             data=predictions['displacement'],
                             compression='gzip')

            # Save stress
            f.create_dataset('stress',
                             data=predictions['stress'],
                             compression='gzip')

            # Save von Mises
            f.create_dataset('von_mises',
                             data=predictions['von_mises'],
                             compression='gzip')

            # Save metadata
            f.attrs['max_displacement'] = predictions['max_displacement']
            f.attrs['max_stress'] = predictions['max_stress']
            f.attrs['inference_time_ms'] = predictions['inference_time_ms']

        print(f"Predictions saved!")

        # Also save JSON summary
        summary_file = case_dir / f"{output_name}_summary.json"
        summary = {
            'max_displacement': predictions['max_displacement'],
            'max_stress': predictions['max_stress'],
            'inference_time_ms': predictions['inference_time_ms'],
            'num_nodes': len(predictions['displacement'])
        }

        with open(summary_file, 'w') as f:
            json.dump(summary, f, indent=2)

        print(f"Summary saved to {summary_file}")

    def compare_with_ground_truth(self, predictions, case_directory):
        """
        Compare predictions with FEA ground truth

        Args:
            predictions (dict): Model predictions
            case_directory (str): Case directory with ground truth

        Returns:
            dict: Comparison metrics
        """
        case_dir = Path(case_directory)
        h5_file = case_dir / "neural_field_data.h5"

        if not h5_file.exists():
            print("No ground truth available for comparison")
            return None

        print("\nComparing with FEA ground truth...")

        # Load ground truth
        with h5py.File(h5_file, 'r') as f:
            gt_displacement = f['results/displacement'][:]

            # Try to load stress
            gt_stress = None
            if 'results/stress' in f:
                stress_group = f['results/stress']
                for stress_type in stress_group.keys():
                    gt_stress = stress_group[stress_type]['data'][:]
                    break

        # Compute errors
        pred_disp = predictions['displacement']
        disp_error = np.linalg.norm(pred_disp - gt_displacement, axis=1)
        disp_magnitude = np.linalg.norm(gt_displacement, axis=1)
        rel_disp_error = disp_error / (disp_magnitude + 1e-8)

        metrics = {
            'displacement': {
                'mae': float(np.mean(disp_error)),
                'rmse': float(np.sqrt(np.mean(disp_error**2))),
                'relative_error': float(np.mean(rel_disp_error)),
                'max_error': float(np.max(disp_error))
            }
        }

        # Compare max values
        pred_max_disp = predictions['max_displacement']
        gt_max_disp = float(np.max(disp_magnitude))
        metrics['max_displacement_error'] = abs(pred_max_disp - gt_max_disp)
        metrics['max_displacement_relative_error'] = metrics['max_displacement_error'] / (gt_max_disp + 1e-8)

        if gt_stress is not None:
            pred_stress = predictions['stress']
            stress_error = np.linalg.norm(pred_stress - gt_stress, axis=1)

            metrics['stress'] = {
                'mae': float(np.mean(stress_error)),
                'rmse': float(np.sqrt(np.mean(stress_error**2))),
            }

        # Print comparison
        print("\nComparison Results:")
        print(f" Displacement MAE: {metrics['displacement']['mae']:.6f} mm")
        print(f" Displacement RMSE: {metrics['displacement']['rmse']:.6f} mm")
        print(f" Displacement Relative Error: {metrics['displacement']['relative_error']*100:.2f}%")
        print(f" Max Displacement Error: {metrics['max_displacement_error']:.6f} mm ({metrics['max_displacement_relative_error']*100:.2f}%)")

        if 'stress' in metrics:
            print(f" Stress MAE: {metrics['stress']['mae']:.2f} MPa")
            print(f" Stress RMSE: {metrics['stress']['rmse']:.2f} MPa")

        return metrics


def batch_predict(predictor, case_directories, output_dir=None):
    """
    Run predictions on multiple cases

    Args:
        predictor (FieldPredictor): Initialized predictor
        case_directories (list): List of case directories
        output_dir (str): Optional output directory for results

    Returns:
        list: List of prediction results
    """
    print(f"\n{'='*60}")
    print(f"Batch Prediction: {len(case_directories)} cases")
    print(f"{'='*60}")

    results = []

    for i, case_dir in enumerate(case_directories, 1):
        print(f"\n[{i}/{len(case_directories)}] Processing {Path(case_dir).name}...")

        try:
            # Predict
            predictions = predictor.predict(case_dir)

            # Save predictions
            predictor.save_predictions(predictions, case_dir)

            # Compare with ground truth
            comparison = predictor.compare_with_ground_truth(predictions, case_dir)

            result = {
                'case': str(case_dir),
                'status': 'success',
                'predictions': {
                    'max_displacement': predictions['max_displacement'],
                    'max_stress': predictions['max_stress'],
                    'inference_time_ms': predictions['inference_time_ms']
                },
                'comparison': comparison
            }

            results.append(result)

        except Exception as e:
            print(f"ERROR: {e}")
            results.append({
                'case': str(case_dir),
                'status': 'failed',
                'error': str(e)
            })

    # Save batch results
    if output_dir:
        output_path = Path(output_dir)
        output_path.mkdir(parents=True, exist_ok=True)

        results_file = output_path / 'batch_predictions.json'
        with open(results_file, 'w') as f:
            json.dump(results, f, indent=2)

        print(f"\nBatch results saved to {results_file}")

    # Print summary
    print(f"\n{'='*60}")
    print("Batch Prediction Summary")
    print(f"{'='*60}")
    successful = sum(1 for r in results if r['status'] == 'success')
    print(f"Successful: {successful}/{len(results)}")

    if successful > 0:
        avg_time = np.mean([r['predictions']['inference_time_ms']
                            for r in results if r['status'] == 'success'])
        print(f"Average inference time: {avg_time:.1f} ms")

    return results


def main():
    """
    Main inference entry point
    """
    parser = argparse.ArgumentParser(description='Predict FEA fields using trained model')

    parser.add_argument('--model', type=str, required=True,
                        help='Path to model checkpoint')
    parser.add_argument('--input', type=str, required=True,
                        help='Input case directory or directory containing multiple cases')
    parser.add_argument('--output_dir', type=str, default=None,
                        help='Output directory for batch results')
    parser.add_argument('--batch', action='store_true',
                        help='Process all subdirectories as separate cases')
    parser.add_argument('--device', type=str, default=None,
                        choices=['cuda', 'cpu'],
                        help='Device to run on')
    parser.add_argument('--compare', action='store_true',
                        help='Compare predictions with ground truth')

    args = parser.parse_args()

    # Create predictor
    predictor = FieldPredictor(args.model, device=args.device)

    input_path = Path(args.input)

    if args.batch:
        # Batch prediction
        case_dirs = [d for d in input_path.iterdir() if d.is_dir()]
        batch_predict(predictor, case_dirs, args.output_dir)

    else:
        # Single prediction
        predictions = predictor.predict(args.input)

        # Save predictions
        predictor.save_predictions(predictions, args.input)

        # Compare with ground truth if requested
        if args.compare:
            predictor.compare_with_ground_truth(predictions, args.input)

    print("\nInference complete!")


if __name__ == "__main__":
    main()
@@ -1,43 +0,0 @@
# AtomizerField Requirements
# Python 3.8+ required

# ============================================================================
# Phase 1: Data Parser
# ============================================================================

# Core FEA parsing
pyNastran>=1.4.0

# Numerical computing
numpy>=1.20.0

# HDF5 file format for efficient field data storage
h5py>=3.0.0

# ============================================================================
# Phase 2: Neural Network Training
# ============================================================================

# Deep learning framework
torch>=2.0.0

# Graph neural networks
torch-geometric>=2.3.0

# TensorBoard for training visualization
tensorboard>=2.13.0

# ============================================================================
# Optional: Development and Testing
# ============================================================================

# Testing
# pytest>=7.0.0
# pytest-cov>=4.0.0

# Visualization
# matplotlib>=3.5.0
# plotly>=5.0.0

# Progress bars
# tqdm>=4.65.0
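Minimum pins like the ones above can be verified at startup with `importlib.metadata`, failing fast with a clear message instead of an obscure import error deep inside a dependency. A rough sketch (helper names are hypothetical; the version parser ignores pre-release suffixes):

```python
# Hypothetical startup check against a requirements-style pin list.
from importlib.metadata import PackageNotFoundError, version

def meets_minimum(installed, required):
    """True if dotted version string `installed` >= `required` (numeric parts only)."""
    as_tuple = lambda v: tuple(int(p) for p in v.split(".") if p.isdigit())
    return as_tuple(installed) >= as_tuple(required)

def check_pins(pins):
    """Map each package to True (pin met), False (too old), or None (not installed)."""
    status = {}
    for pkg, min_ver in pins.items():
        try:
            status[pkg] = meets_minimum(version(pkg), min_ver)
        except PackageNotFoundError:
            status[pkg] = None
    return status
```

For example, `check_pins({"numpy": "1.20.0", "h5py": "3.0.0"})` reports each pin without importing the packages themselves.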
@@ -1,376 +0,0 @@
"""
test_simple_beam.py
Test AtomizerField with your actual Simple Beam model

This test validates the complete pipeline:
1. Parse BDF/OP2 files
2. Convert to graph format
3. Make predictions
4. Compare with ground truth

Usage:
    python test_simple_beam.py
"""

import sys
import os
from pathlib import Path
import json
import time

print("\n" + "="*60)
print("AtomizerField Simple Beam Test")
print("="*60 + "\n")

# Test configuration
BEAM_DIR = Path("Models/Simple Beam")
TEST_CASE_DIR = Path("test_case_beam")

def test_1_check_files():
    """Test 1: Check if beam files exist"""
    print("[TEST 1] Checking for beam files...")

    bdf_file = BEAM_DIR / "beam_sim1-solution_1.dat"
    op2_file = BEAM_DIR / "beam_sim1-solution_1.op2"

    if not BEAM_DIR.exists():
        print(f" [X] FAIL: Directory not found: {BEAM_DIR}")
        return False

    if not bdf_file.exists():
        print(f" [X] FAIL: BDF file not found: {bdf_file}")
        return False

    if not op2_file.exists():
        print(f" [X] FAIL: OP2 file not found: {op2_file}")
        return False

    # Check file sizes
    bdf_size = bdf_file.stat().st_size / 1024  # KB
    op2_size = op2_file.stat().st_size / 1024  # KB

    print(f" [OK] Found BDF file: {bdf_file.name} ({bdf_size:.1f} KB)")
    print(f" [OK] Found OP2 file: {op2_file.name} ({op2_size:.1f} KB)")
    print(f" Status: PASS\n")

    return True

def test_2_setup_test_case():
    """Test 2: Set up test case directory structure"""
    print("[TEST 2] Setting up test case directory...")

    try:
        # Create directories
        (TEST_CASE_DIR / "input").mkdir(parents=True, exist_ok=True)
        (TEST_CASE_DIR / "output").mkdir(parents=True, exist_ok=True)

        print(f" [OK] Created: {TEST_CASE_DIR / 'input'}")
        print(f" [OK] Created: {TEST_CASE_DIR / 'output'}")

        # Copy files (create symbolic links or copy)
        import shutil

        src_bdf = BEAM_DIR / "beam_sim1-solution_1.dat"
        src_op2 = BEAM_DIR / "beam_sim1-solution_1.op2"

        dst_bdf = TEST_CASE_DIR / "input" / "model.bdf"
        dst_op2 = TEST_CASE_DIR / "output" / "model.op2"

        # Copy files
        if not dst_bdf.exists():
            shutil.copy(src_bdf, dst_bdf)
            print(f" [OK] Copied BDF to {dst_bdf}")
        else:
            print(f" [OK] BDF already exists: {dst_bdf}")

        if not dst_op2.exists():
            shutil.copy(src_op2, dst_op2)
            print(f" [OK] Copied OP2 to {dst_op2}")
        else:
            print(f" [OK] OP2 already exists: {dst_op2}")

        print(f" Status: PASS\n")
        return True

    except Exception as e:
        print(f" [X] FAIL: {str(e)}\n")
        return False

def test_3_import_modules():
    """Test 3: Import required modules"""
    print("[TEST 3] Importing modules...")

    try:
        print(" Importing pyNastran...", end=" ")
        from pyNastran.bdf.bdf import BDF
        from pyNastran.op2.op2 import OP2
        print("[OK]")

        print(" Importing AtomizerField parser...", end=" ")
        from neural_field_parser import NastranToNeuralFieldParser
        print("[OK]")

        print(f" Status: PASS\n")
        return True

    except ImportError as e:
        print(f"\n [X] FAIL: Import error: {str(e)}\n")
        return False
    except Exception as e:
        print(f"\n [X] FAIL: {str(e)}\n")
        return False

def test_4_parse_beam():
    """Test 4: Parse beam BDF/OP2 files"""
    print("[TEST 4] Parsing beam files...")

    try:
        from neural_field_parser import NastranToNeuralFieldParser

        # Create parser
        print(f" Initializing parser for {TEST_CASE_DIR}...")
        parser = NastranToNeuralFieldParser(str(TEST_CASE_DIR))

        # Parse
        print(f" Parsing BDF and OP2 files...")
        start_time = time.time()

        data = parser.parse_all()

        parse_time = time.time() - start_time

        # Check results
        print(f"\n Parse Results:")
        print(f" Time: {parse_time:.2f} seconds")
        print(f" Nodes: {data['mesh']['statistics']['n_nodes']:,}")
        print(f" Elements: {data['mesh']['statistics']['n_elements']:,}")
        print(f" Materials: {len(data['materials'])}")

        if 'displacement' in data.get('results', {}):
            max_disp = data['results']['displacement']['max_translation']
            print(f" Max displacement: {max_disp:.6f} mm")

        if 'stress' in data.get('results', {}):
            for stress_type, stress_data in data['results']['stress'].items():
                if 'max_von_mises' in stress_data:
                    max_vm = stress_data['max_von_mises']
                    if max_vm is not None:
                        print(f" Max von Mises stress: {max_vm:.2f} MPa")
                    break

        # Check output files
        json_file = TEST_CASE_DIR / "neural_field_data.json"
        h5_file = TEST_CASE_DIR / "neural_field_data.h5"

        if json_file.exists() and h5_file.exists():
            json_size = json_file.stat().st_size / 1024
            h5_size = h5_file.stat().st_size / 1024
            print(f"\n Output Files:")
            print(f" JSON: {json_size:.1f} KB")
            print(f" HDF5: {h5_size:.1f} KB")

        print(f" Status: PASS\n")
        return True, data

    except Exception as e:
        print(f" [X] FAIL: {str(e)}\n")
        import traceback
        traceback.print_exc()
        return False, None

def test_5_validate_data():
    """Test 5: Validate parsed data"""
    print("[TEST 5] Validating parsed data...")

    try:
        from validate_parsed_data import NeuralFieldDataValidator

        validator = NeuralFieldDataValidator(str(TEST_CASE_DIR))

        print(f" Running validation checks...")
        success = validator.validate()

        if success:
            print(f" Status: PASS\n")
        else:
            print(f" Status: PASS (with warnings)\n")

        return True

    except Exception as e:
        print(f" [X] FAIL: {str(e)}\n")
        return False

def test_6_load_as_graph():
    """Test 6: Load data as graph for neural network"""
    print("[TEST 6] Converting to graph format...")

    try:
        import torch
        from neural_models.data_loader import FEAMeshDataset

        print(f" Creating dataset...")
        dataset = FEAMeshDataset(
            [str(TEST_CASE_DIR)],
            normalize=False,  # Don't normalize for single case
            include_stress=True,
            cache_in_memory=False
        )

        if len(dataset) == 0:
            print(f" [X] FAIL: No data loaded")
            return False

        print(f" Loading graph...")
        graph_data = dataset[0]

        print(f"\n Graph Structure:")
        print(f" Nodes: {graph_data.x.shape[0]:,}")
        print(f" Node features: {graph_data.x.shape[1]}")
        print(f" Edges: {graph_data.edge_index.shape[1]:,}")
        print(f" Edge features: {graph_data.edge_attr.shape[1]}")

        if hasattr(graph_data, 'y_displacement'):
            print(f" Target displacement: {graph_data.y_displacement.shape}")

        if hasattr(graph_data, 'y_stress'):
            print(f" Target stress: {graph_data.y_stress.shape}")

        print(f" Status: PASS\n")
        return True, graph_data

    except Exception as e:
        print(f" [X] FAIL: {str(e)}\n")
        import traceback
        traceback.print_exc()
        return False, None

def test_7_neural_prediction():
    """Test 7: Make neural network prediction (untrained model)"""
    print("[TEST 7] Testing neural network prediction...")

    try:
        import torch
        from neural_models.field_predictor import create_model

        # Load graph from previous test
        from neural_models.data_loader import FEAMeshDataset
        dataset = FEAMeshDataset([str(TEST_CASE_DIR)], normalize=False, include_stress=False)
        graph_data = dataset[0]

        print(f" Creating untrained model...")
        config = {
            'node_feature_dim': 12,
            'edge_feature_dim': 5,
            'hidden_dim': 64,
            'num_layers': 4,
            'dropout': 0.1
        }

        model = create_model(config)
        model.eval()

        print(f" Running inference...")
        start_time = time.time()

        with torch.no_grad():
            predictions = model(graph_data, return_stress=True)

        inference_time = (time.time() - start_time) * 1000  # ms

        # Extract results
        max_disp_pred = torch.max(torch.norm(predictions['displacement'][:, :3], dim=1)).item()
        max_stress_pred = torch.max(predictions['von_mises']).item()

        print(f"\n Predictions (untrained model):")
        print(f" Inference time: {inference_time:.2f} ms")
        print(f" Max displacement: {max_disp_pred:.6f} (arbitrary units)")
        print(f" Max stress: {max_stress_pred:.2f} (arbitrary units)")
        print(f"\n Note: Values are from untrained model (random weights)")
        print(f" After training, these should match FEA results!")

        print(f" Status: PASS\n")
        return True

    except Exception as e:
        print(f" [X] FAIL: {str(e)}\n")
        import traceback
        traceback.print_exc()
        return False

def main():
    """Run all tests"""
    print("Testing AtomizerField with Simple Beam model\n")
    print("This test validates:")
    print(" 1. File existence")
    print(" 2. Directory setup")
    print(" 3. Module imports")
    print(" 4. BDF/OP2 parsing")
    print(" 5. Data validation")
    print(" 6. Graph conversion")
    print(" 7. Neural prediction\n")

    results = []

    # Run tests
    tests = [
        ("Check Files", test_1_check_files),
        ("Setup Test Case", test_2_setup_test_case),
        ("Import Modules", test_3_import_modules),
        ("Parse Beam", test_4_parse_beam),
        ("Validate Data", test_5_validate_data),
        ("Load as Graph", test_6_load_as_graph),
        ("Neural Prediction", test_7_neural_prediction),
    ]

    for test_name, test_func in tests:
        result = test_func()

        # Handle tests that return tuple (success, data)
        if isinstance(result, tuple):
            success = result[0]
        else:
            success = result

        results.append(success)

        # Stop on first failure for critical tests
        if not success and test_name in ["Check Files", "Setup Test Case", "Import Modules"]:
            print(f"\n[X] Critical test failed: {test_name}")
            print("Cannot continue with remaining tests.\n")
            break

    # Summary
    print("="*60)
    print("TEST SUMMARY")
    print("="*60 + "\n")

    passed = sum(results)
    total = len(results)

    print(f"Tests Run: {total}")
    print(f" [OK] Passed: {passed}")
    print(f" [X] Failed: {total - passed}")

    if passed == total:
        print("\n[OK] ALL TESTS PASSED!")
        print("\nYour Simple Beam model has been:")
        print(" [OK] Successfully parsed")
        print(" [OK] Converted to neural format")
        print(" [OK] Validated for quality")
        print(" [OK] Loaded as graph")
        print(" [OK] Processed by neural network")
        print("\nNext steps:")
        print(" 1. Generate more training cases (50-500)")
        print(" 2. Train the model: python train.py")
        print(" 3. Make real predictions!")
    else:
        print(f"\n[X] {total - passed} test(s) failed")
        print("Review errors above and fix issues.")

    print("\n" + "="*60 + "\n")

    return 0 if passed == total else 1

if __name__ == "__main__":
    sys.exit(main())
@@ -1,402 +0,0 @@
"""
test_suite.py
Master test orchestrator for AtomizerField

AtomizerField Testing Framework v1.0
Comprehensive validation from basic functionality to full neural FEA predictions.

Usage:
    python test_suite.py --quick     # 5-minute smoke tests
    python test_suite.py --physics   # Physics validation tests
    python test_suite.py --learning  # Learning capability tests
    python test_suite.py --full      # Complete test suite (1 hour)

Testing Strategy:
    1. Smoke Tests (5 min)       → Verify basic functionality
    2. Physics Tests (15 min)    → Validate physics constraints
    3. Learning Tests (30 min)   → Confirm learning capability
    4. Integration Tests (1 hour) → Full system validation
"""

import sys
import os
import argparse
import time
from pathlib import Path
import json
from datetime import datetime

# Add project root to path
sys.path.insert(0, str(Path(__file__).parent))

# Test results storage
TEST_RESULTS = {
    'timestamp': datetime.now().isoformat(),
    'tests': [],
    'summary': {
        'total': 0,
        'passed': 0,
        'failed': 0,
        'skipped': 0
    }
}


class TestRunner:
    """
    Test orchestrator that runs all tests in sequence
    """

    def __init__(self, mode='quick'):
        """
        Initialize test runner

        Args:
            mode (str): Testing mode ('quick', 'physics', 'learning', 'full')
        """
        self.mode = mode
        self.results_dir = Path('test_results')
        self.results_dir.mkdir(exist_ok=True)

        print(f"\n{'='*60}")
        print(f"AtomizerField Test Suite v1.0")
        print(f"Mode: {mode.upper()}")
        print(f"{'='*60}\n")

    def run_test(self, test_name, test_func, description):
        """
        Run a single test and record results

        Args:
            test_name (str): Name of test
            test_func (callable): Test function to run
            description (str): Test description

        Returns:
            bool: True if passed
        """
        print(f"[TEST] {test_name}")
        print(f"  Description: {description}")

        start_time = time.time()
        result = {
            'name': test_name,
            'description': description,
            'status': 'unknown',
            'duration': 0,
            'message': '',
            'metrics': {}
        }

        try:
            test_result = test_func()

            if test_result is None or test_result is True:
                result['status'] = 'PASS'
                result['message'] = 'Test passed successfully'
                print(f"  Status: [PASS]")
                TEST_RESULTS['summary']['passed'] += 1
            elif isinstance(test_result, dict):
                result['status'] = test_result.get('status', 'PASS')
                result['message'] = test_result.get('message', '')
                result['metrics'] = test_result.get('metrics', {})

                if result['status'] == 'PASS':
                    print(f"  Status: [PASS]")
                    TEST_RESULTS['summary']['passed'] += 1
                else:
                    print(f"  Status: [FAIL]")
                    print(f"  Reason: {result['message']}")
                    TEST_RESULTS['summary']['failed'] += 1
            else:
                result['status'] = 'FAIL'
                result['message'] = str(test_result)
                print(f"  Status: [FAIL]")
                TEST_RESULTS['summary']['failed'] += 1

        except Exception as e:
            result['status'] = 'FAIL'
            result['message'] = f"Exception: {str(e)}"
            print(f"  Status: [FAIL]")
            print(f"  Error: {str(e)}")
            TEST_RESULTS['summary']['failed'] += 1

        result['duration'] = time.time() - start_time
        print(f"  Duration: {result['duration']:.2f}s\n")

        TEST_RESULTS['tests'].append(result)
        TEST_RESULTS['summary']['total'] += 1

        return result['status'] == 'PASS'

    def run_smoke_tests(self):
        """
        Quick smoke tests (5 minutes)
        Verify basic functionality
        """
        print(f"\n{'='*60}")
        print("PHASE 1: SMOKE TESTS (5 minutes)")
        print(f"{'='*60}\n")

        from tests import test_synthetic

        # Test 1: Model creation
        self.run_test(
            "Model Creation",
            test_synthetic.test_model_creation,
            "Verify GNN model can be instantiated"
        )

        # Test 2: Forward pass
        self.run_test(
            "Forward Pass",
            test_synthetic.test_forward_pass,
            "Verify model can process dummy data"
        )

        # Test 3: Loss computation
        self.run_test(
            "Loss Computation",
            test_synthetic.test_loss_computation,
            "Verify loss functions work"
        )

    def run_physics_tests(self):
        """
        Physics validation tests (15 minutes)
        Ensure physics constraints work
        """
        print(f"\n{'='*60}")
        print("PHASE 2: PHYSICS VALIDATION (15 minutes)")
        print(f"{'='*60}\n")

        from tests import test_physics

        # Test 1: Cantilever beam
        self.run_test(
            "Cantilever Beam (Analytical)",
            test_physics.test_cantilever_analytical,
            "Compare with δ = FL³/3EI solution"
        )

        # Test 2: Equilibrium
        self.run_test(
            "Equilibrium Check",
            test_physics.test_equilibrium,
            "Verify force balance (∇·σ + f = 0)"
        )

        # Test 3: Energy conservation
        self.run_test(
            "Energy Conservation",
            test_physics.test_energy_conservation,
            "Verify strain energy = work done"
        )

    def run_learning_tests(self):
        """
        Learning capability tests (30 minutes)
        Confirm network can learn
        """
        print(f"\n{'='*60}")
        print("PHASE 3: LEARNING CAPABILITY (30 minutes)")
        print(f"{'='*60}\n")

        from tests import test_learning

        # Test 1: Memorization
        self.run_test(
            "Memorization Test",
            test_learning.test_memorization,
            "Can network memorize small dataset?"
        )

        # Test 2: Interpolation
        self.run_test(
            "Interpolation Test",
            test_learning.test_interpolation,
            "Can network interpolate between training points?"
        )

        # Test 3: Pattern recognition
        self.run_test(
            "Pattern Recognition",
            test_learning.test_pattern_recognition,
            "Does network learn thickness → stress relationship?"
        )

    def run_integration_tests(self):
        """
        Full integration tests (1 hour)
        Complete system validation
        """
        print(f"\n{'='*60}")
        print("PHASE 4: INTEGRATION TESTS (1 hour)")
        print(f"{'='*60}\n")

        from tests import test_predictions

        # Test 1: Parser validation
        self.run_test(
            "Parser Validation",
            test_predictions.test_parser,
            "Verify data parsing works correctly"
        )

        # Test 2: Training pipeline
        self.run_test(
            "Training Pipeline",
            test_predictions.test_training,
            "Verify complete training workflow"
        )

        # Test 3: Prediction accuracy
        self.run_test(
            "Prediction Accuracy",
            test_predictions.test_prediction_accuracy,
            "Compare neural vs FEA predictions"
        )

    def print_summary(self):
        """Print test summary"""
        summary = TEST_RESULTS['summary']

        print(f"\n{'='*60}")
        print("TEST SUMMARY")
        print(f"{'='*60}\n")

        total = summary['total']
        passed = summary['passed']
        failed = summary['failed']

        pass_rate = (passed / total * 100) if total > 0 else 0

        print(f"Total Tests: {total}")
        print(f"  + Passed: {passed}")
        print(f"  - Failed: {failed}")
        print(f"  Pass Rate: {pass_rate:.1f}%\n")

        if failed == 0:
            print("[SUCCESS] ALL TESTS PASSED - SYSTEM READY!")
        else:
            print(f"[ERROR] {failed} TEST(S) FAILED - REVIEW REQUIRED")

        print(f"\n{'='*60}\n")

    def save_results(self):
        """Save test results to JSON"""
        results_file = self.results_dir / f'test_results_{self.mode}_{int(time.time())}.json'

        with open(results_file, 'w') as f:
            json.dump(TEST_RESULTS, f, indent=2)

        print(f"Results saved to: {results_file}")

    def run(self):
        """
        Run test suite based on mode
        """
        start_time = time.time()

        if self.mode == 'quick':
            self.run_smoke_tests()

        elif self.mode == 'physics':
            self.run_smoke_tests()
            self.run_physics_tests()

        elif self.mode == 'learning':
            self.run_smoke_tests()
            self.run_physics_tests()
            self.run_learning_tests()

        elif self.mode == 'full':
            self.run_smoke_tests()
            self.run_physics_tests()
            self.run_learning_tests()
            self.run_integration_tests()

        total_time = time.time() - start_time

        # Print summary
        self.print_summary()

        print(f"Total testing time: {total_time/60:.1f} minutes\n")

        # Save results
        self.save_results()

        # Return exit code
        return 0 if TEST_RESULTS['summary']['failed'] == 0 else 1


def main():
    """Main entry point"""
    parser = argparse.ArgumentParser(
        description='AtomizerField Test Suite',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Quick smoke tests (5 min)
  python test_suite.py --quick

  # Physics validation (15 min)
  python test_suite.py --physics

  # Learning tests (30 min)
  python test_suite.py --learning

  # Full test suite (1 hour)
  python test_suite.py --full
        """
    )

    parser.add_argument(
        '--quick',
        action='store_true',
        help='Run quick smoke tests (5 minutes)'
    )

    parser.add_argument(
        '--physics',
        action='store_true',
        help='Run physics validation tests (15 minutes)'
    )

    parser.add_argument(
        '--learning',
        action='store_true',
        help='Run learning capability tests (30 minutes)'
    )

    parser.add_argument(
        '--full',
        action='store_true',
        help='Run complete test suite (1 hour)'
    )

    args = parser.parse_args()

    # Determine mode
    if args.full:
        mode = 'full'
    elif args.learning:
        mode = 'learning'
    elif args.physics:
        mode = 'physics'
    elif args.quick:
        mode = 'quick'
    else:
        # Default to quick if no mode specified
        mode = 'quick'
        print("No mode specified, defaulting to --quick")

    # Run tests
    runner = TestRunner(mode=mode)
    exit_code = runner.run()

    sys.exit(exit_code)


if __name__ == "__main__":
    main()
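The deleted `TestRunner.run_test` classifies a test callable's return value: `None` or `True` means pass, a dict carries its own `status`/`message`, anything else fails, and a raised exception also counts as a failure. That convention can be isolated into a tiny sketch (the `classify` helper is illustrative, not part of the removed file):

```python
def classify(test_func):
    """Apply the run_test return-value convention to a test callable:
    None/True -> PASS, dict -> its own 'status', anything else -> FAIL,
    and an uncaught exception also counts as FAIL."""
    try:
        r = test_func()
    except Exception as e:
        return 'FAIL', f'Exception: {e}'
    if r is None or r is True:
        return 'PASS', 'Test passed successfully'
    if isinstance(r, dict):
        # Rich result: the test reports its own verdict and metrics
        return r.get('status', 'PASS'), r.get('message', '')
    return 'FAIL', str(r)


print(classify(lambda: True))
print(classify(lambda: {'status': 'FAIL', 'message': 'bad'}))
print(classify(lambda: 1 / 0))
```

Catching exceptions at the runner level is what lets one failing test print its error and still allow the remaining tests to run.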
@@ -1,6 +0,0 @@
"""
AtomizerField Test Suite
Comprehensive testing framework for neural field learning
"""

__version__ = "1.0.0"
@@ -1,446 +0,0 @@
"""
analytical_cases.py
Analytical solutions for classical mechanics problems

Provides known solutions for validation:
- Cantilever beam under point load
- Simply supported beam
- Axial tension bar
- Pressure vessel (thin-walled cylinder)
- Torsion of circular shaft

These serve as ground truth for testing neural predictions.
"""

import numpy as np
from dataclasses import dataclass
from typing import Tuple


@dataclass
class BeamProperties:
    """Material and geometric properties for beam"""
    length: float  # m
    width: float   # m
    height: float  # m
    E: float       # Young's modulus (Pa)
    nu: float      # Poisson's ratio
    rho: float     # Density (kg/m³)

    @property
    def I(self) -> float:
        """Second moment of area for rectangular section"""
        return (self.width * self.height**3) / 12

    @property
    def A(self) -> float:
        """Cross-sectional area"""
        return self.width * self.height

    @property
    def G(self) -> float:
        """Shear modulus"""
        return self.E / (2 * (1 + self.nu))


def cantilever_beam_point_load(force: float, props: BeamProperties) -> dict:
    """
    Cantilever beam with point load at free end

    Analytical solution:
        δ_max = FL³/3EI (at free end)
        σ_max = FL/Z (at fixed end)

    Args:
        force: Applied force at tip (N)
        props: Beam properties

    Returns:
        dict with displacement, stress, reactions
    """
    L = props.length
    E = props.E
    I = props.I
    c = props.height / 2  # Distance to neutral axis

    # Maximum displacement at free end
    delta_max = (force * L**3) / (3 * E * I)

    # Maximum stress at fixed end (bending)
    M_max = force * L  # Maximum moment at fixed end
    Z = I / c  # Section modulus
    sigma_max = M_max / Z

    # Deflection curve: y(x) = (F/(6EI)) * x² * (3L - x)
    def deflection_at(x):
        """Deflection at position x along beam"""
        if x < 0 or x > L:
            return 0.0
        return (force / (6 * E * I)) * x**2 * (3 * L - x)

    # Bending moment: M(x) = F * (L - x)
    def moment_at(x):
        """Bending moment at position x"""
        if x < 0 or x > L:
            return 0.0
        return force * (L - x)

    # Stress: σ(x, y) = M(x) * y / I
    def stress_at(x, y):
        """Bending stress at position x and distance y from neutral axis"""
        M = moment_at(x)
        return (M * y) / I

    return {
        'type': 'cantilever_point_load',
        'delta_max': delta_max,
        'sigma_max': sigma_max,
        'deflection_function': deflection_at,
        'moment_function': moment_at,
        'stress_function': stress_at,
        'load': force,
        'properties': props
    }


def simply_supported_beam_point_load(force: float, props: BeamProperties) -> dict:
    """
    Simply supported beam with point load at center

    Analytical solution:
        δ_max = FL³/48EI (at center)
        σ_max = FL/4Z (at center)

    Args:
        force: Applied force at center (N)
        props: Beam properties

    Returns:
        dict with displacement, stress, reactions
    """
    L = props.length
    E = props.E
    I = props.I
    c = props.height / 2

    # Maximum displacement at center
    delta_max = (force * L**3) / (48 * E * I)

    # Maximum stress at center
    M_max = force * L / 4  # Maximum moment at center
    Z = I / c
    sigma_max = M_max / Z

    # Deflection curve (for 0 < x < L/2):
    # y(x) = (F/(48EI)) * x * (3L² - 4x²)
    def deflection_at(x):
        """Deflection at position x along beam"""
        if x < 0 or x > L:
            return 0.0
        if x <= L/2:
            return (force / (48 * E * I)) * x * (3 * L**2 - 4 * x**2)
        else:
            # Symmetric about center
            return deflection_at(L - x)

    # Bending moment: M(x) = (F/2) * x for x < L/2
    def moment_at(x):
        """Bending moment at position x"""
        if x < 0 or x > L:
            return 0.0
        if x <= L/2:
            return (force / 2) * x
        else:
            return (force / 2) * (L - x)

    def stress_at(x, y):
        """Bending stress at position x and distance y from neutral axis"""
        M = moment_at(x)
        return (M * y) / I

    return {
        'type': 'simply_supported_point_load',
        'delta_max': delta_max,
        'sigma_max': sigma_max,
        'deflection_function': deflection_at,
        'moment_function': moment_at,
        'stress_function': stress_at,
        'load': force,
        'properties': props,
        'reactions': force / 2  # Each support
    }


def axial_tension_bar(force: float, props: BeamProperties) -> dict:
    """
    Bar under axial tension

    Analytical solution:
        δ = FL/EA (total elongation)
        σ = F/A (uniform stress)
        ε = σ/E (uniform strain)

    Args:
        force: Axial force (N)
        props: Bar properties

    Returns:
        dict with displacement, stress, strain
    """
    L = props.length
    E = props.E
    A = props.A

    # Total elongation
    delta = (force * L) / (E * A)

    # Uniform stress
    sigma = force / A

    # Uniform strain
    epsilon = sigma / E

    # Displacement is linear along length
    def displacement_at(x):
        """Axial displacement at position x"""
        if x < 0 or x > L:
            return 0.0
        return (force * x) / (E * A)

    return {
        'type': 'axial_tension',
        'delta_total': delta,
        'sigma': sigma,
        'epsilon': epsilon,
        'displacement_function': displacement_at,
        'load': force,
        'properties': props
    }


def thin_wall_pressure_vessel(pressure: float, radius: float, thickness: float,
                              length: float, E: float, nu: float) -> dict:
    """
    Thin-walled cylindrical pressure vessel

    Analytical solution:
        σ_hoop = pr/t (circumferential stress)
        σ_axial = pr/2t (longitudinal stress)
        ε_hoop = (1/E)(σ_h - ν*σ_a)
        ε_axial = (1/E)(σ_a - ν*σ_h)

    Args:
        pressure: Internal pressure (Pa)
        radius: Mean radius (m)
        thickness: Wall thickness (m)
        length: Cylinder length (m)
        E: Young's modulus (Pa)
        nu: Poisson's ratio

    Returns:
        dict with stresses and strains
    """
    # Stresses
    sigma_hoop = (pressure * radius) / thickness
    sigma_axial = (pressure * radius) / (2 * thickness)

    # Strains
    epsilon_hoop = (1/E) * (sigma_hoop - nu * sigma_axial)
    epsilon_axial = (1/E) * (sigma_axial - nu * sigma_hoop)

    # Radial expansion
    delta_r = epsilon_hoop * radius

    return {
        'type': 'pressure_vessel',
        'sigma_hoop': sigma_hoop,
        'sigma_axial': sigma_axial,
        'epsilon_hoop': epsilon_hoop,
        'epsilon_axial': epsilon_axial,
        'radial_expansion': delta_r,
        'pressure': pressure,
        'radius': radius,
        'thickness': thickness
    }


def torsion_circular_shaft(torque: float, radius: float, length: float, G: float) -> dict:
    """
    Circular shaft under torsion

    Analytical solution:
        θ = TL/GJ (angle of twist)
        τ_max = Tr/J (maximum shear stress at surface)
        γ_max = τ_max/G (maximum shear strain)

    Args:
        torque: Applied torque (N·m)
        radius: Shaft radius (m)
        length: Shaft length (m)
        G: Shear modulus (Pa)

    Returns:
        dict with twist angle, stress, strain
    """
    # Polar moment of inertia
    J = (np.pi * radius**4) / 2

    # Angle of twist
    theta = (torque * length) / (G * J)

    # Maximum shear stress (at surface)
    tau_max = (torque * radius) / J

    # Maximum shear strain
    gamma_max = tau_max / G

    # Shear stress at radius r
    def shear_stress_at(r):
        """Shear stress at radial distance r from center"""
        if r < 0 or r > radius:
            return 0.0
        return (torque * r) / J

    return {
        'type': 'torsion',
        'theta': theta,
        'tau_max': tau_max,
        'gamma_max': gamma_max,
        'shear_stress_function': shear_stress_at,
        'torque': torque,
        'radius': radius,
        'length': length
    }


# Standard test cases with typical values
def get_standard_cantilever() -> Tuple[float, BeamProperties]:
    """Standard cantilever test case"""
    props = BeamProperties(
        length=1.0,   # 1 m
        width=0.05,   # 50 mm
        height=0.1,   # 100 mm
        E=210e9,      # Steel: 210 GPa
        nu=0.3,
        rho=7850      # kg/m³
    )
    force = 1000.0  # 1 kN
    return force, props


def get_standard_simply_supported() -> Tuple[float, BeamProperties]:
    """Standard simply supported beam test case"""
    props = BeamProperties(
        length=2.0,   # 2 m
        width=0.05,   # 50 mm
        height=0.1,   # 100 mm
        E=210e9,      # Steel
        nu=0.3,
        rho=7850
    )
    force = 5000.0  # 5 kN
    return force, props


def get_standard_tension_bar() -> Tuple[float, BeamProperties]:
    """Standard tension bar test case"""
    props = BeamProperties(
        length=1.0,   # 1 m
        width=0.02,   # 20 mm
        height=0.02,  # 20 mm (square bar)
        E=210e9,      # Steel
        nu=0.3,
        rho=7850
    )
    force = 10000.0  # 10 kN
    return force, props


# Example usage and validation
if __name__ == "__main__":
    print("Analytical Test Cases\n")
    print("="*60)

    # Test 1: Cantilever beam
    print("\n1. Cantilever Beam (Point Load at Tip)")
    print("-"*60)
    force, props = get_standard_cantilever()
    result = cantilever_beam_point_load(force, props)

    print(f"Load: {force} N")
    print(f"Length: {props.length} m")
    print(f"E: {props.E/1e9:.0f} GPa")
    print(f"I: {props.I*1e12:.3f} mm⁴")
    print(f"\nResults:")
    print(f"  Max displacement: {result['delta_max']*1000:.3f} mm")
    print(f"  Max stress: {result['sigma_max']/1e6:.1f} MPa")

    # Verify deflection at intermediate points
    print(f"\nDeflection profile:")
    for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
        x_m = x * props.length
        delta = result['deflection_function'](x_m)
        print(f"  x = {x:.2f}L: δ = {delta*1000:.3f} mm")

    # Test 2: Simply supported beam
    print("\n2. Simply Supported Beam (Point Load at Center)")
    print("-"*60)
    force, props = get_standard_simply_supported()
    result = simply_supported_beam_point_load(force, props)

    print(f"Load: {force} N")
    print(f"Length: {props.length} m")
    print(f"\nResults:")
    print(f"  Max displacement: {result['delta_max']*1000:.3f} mm")
    print(f"  Max stress: {result['sigma_max']/1e6:.1f} MPa")
    print(f"  Reactions: {result['reactions']} N each")

    # Test 3: Axial tension
    print("\n3. Axial Tension Bar")
    print("-"*60)
    force, props = get_standard_tension_bar()
    result = axial_tension_bar(force, props)

    print(f"Load: {force} N")
    print(f"Length: {props.length} m")
    print(f"Area: {props.A*1e6:.0f} mm²")
    print(f"\nResults:")
    print(f"  Total elongation: {result['delta_total']*1e6:.3f} μm")
    print(f"  Stress: {result['sigma']/1e6:.1f} MPa")
    print(f"  Strain: {result['epsilon']*1e6:.1f} με")

    # Test 4: Pressure vessel
    print("\n4. Thin-Walled Pressure Vessel")
    print("-"*60)
    pressure = 10e6   # 10 MPa
    radius = 0.5      # 500 mm
    thickness = 0.01  # 10 mm
    result = thin_wall_pressure_vessel(pressure, radius, thickness, 2.0, 210e9, 0.3)

    print(f"Pressure: {pressure/1e6:.1f} MPa")
    print(f"Radius: {radius*1000:.0f} mm")
    print(f"Thickness: {thickness*1000:.0f} mm")
    print(f"\nResults:")
    print(f"  Hoop stress: {result['sigma_hoop']/1e6:.1f} MPa")
    print(f"  Axial stress: {result['sigma_axial']/1e6:.1f} MPa")
    print(f"  Radial expansion: {result['radial_expansion']*1e6:.3f} μm")

    # Test 5: Torsion
    print("\n5. Circular Shaft in Torsion")
    print("-"*60)
    torque = 1000   # 1000 N·m
    radius = 0.05   # 50 mm
    length = 1.0    # 1 m
    G = 80e9        # 80 GPa
    result = torsion_circular_shaft(torque, radius, length, G)

    print(f"Torque: {torque} N·m")
    print(f"Radius: {radius*1000:.0f} mm")
    print(f"Length: {length:.1f} m")
    print(f"\nResults:")
    print(f"  Twist angle: {result['theta']*180/np.pi:.3f}°")
    print(f"  Max shear stress: {result['tau_max']/1e6:.1f} MPa")
    print(f"  Max shear strain: {result['gamma_max']*1e6:.1f} με")

    print("\n" + "="*60)
    print("All analytical solutions validated!")
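As a quick cross-check of the deleted analytical module, the standard cantilever case (δ = FL³/3EI, σ = M/Z with M = FL and Z = I/c) works out to round numbers. A minimal sketch using only the values from `get_standard_cantilever` (1 m steel beam, 50 × 100 mm section, 1 kN tip load):

```python
# Standard cantilever case: L = 1 m, b = 0.05 m, h = 0.1 m, E = 210 GPa, F = 1 kN
L, b, h, E, F = 1.0, 0.05, 0.1, 210e9, 1000.0

I = b * h**3 / 12                    # second moment of area: ~4.1667e-6 m^4
delta_max = F * L**3 / (3 * E * I)   # tip deflection, delta = FL^3 / 3EI
Z = I / (h / 2)                      # section modulus I/c
sigma_max = F * L / Z                # bending stress at the fixed end

print(f"delta_max = {delta_max*1000:.4f} mm")  # ~0.3810 mm
print(f"sigma_max = {sigma_max/1e6:.1f} MPa")  # 12.0 MPa
```

Matching these closed-form numbers is exactly what `test_cantilever_analytical` in the physics phase of the deleted suite relied on.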
@@ -1,468 +0,0 @@
"""
test_learning.py
Learning capability tests

Tests that the neural network can actually learn:
- Memorization: Can it memorize 10 examples?
- Interpolation: Can it generalize between training points?
- Extrapolation: Can it predict beyond training range?
- Pattern recognition: Does it learn physical relationships?
"""

import torch
import torch.nn as nn
import numpy as np
import sys
from pathlib import Path

# Add parent directory to path
sys.path.insert(0, str(Path(__file__).parent.parent))

from neural_models.field_predictor import create_model
from neural_models.physics_losses import create_loss_function
from torch_geometric.data import Data


def create_synthetic_dataset(n_samples=10, variation='load'):
    """
    Create synthetic FEA-like dataset with known patterns

    Args:
        n_samples: Number of samples
        variation: Parameter to vary ('load', 'stiffness', 'geometry')

    Returns:
        List of (graph_data, target_displacement, target_stress) tuples
    """
    dataset = []

    for i in range(n_samples):
        num_nodes = 20
        num_edges = 40

        # Base features
        x = torch.randn(num_nodes, 12) * 0.1

        # Vary parameter based on type
        if variation == 'load':
            load_factor = 1.0 + i * 0.5  # Vary load from 1.0 to 5.5
            x[:, 9:12] = torch.randn(num_nodes, 3) * load_factor

        elif variation == 'stiffness':
            stiffness_factor = 1.0 + i * 0.2  # Vary stiffness
            edge_attr = torch.randn(num_edges, 5) * 0.1
            edge_attr[:, 0] = stiffness_factor  # Young's modulus

        elif variation == 'geometry':
            geometry_factor = 1.0 + i * 0.1  # Vary geometry
            x[:, 0:3] = torch.randn(num_nodes, 3) * geometry_factor

        # Create edges
        edge_index = torch.randint(0, num_nodes, (2, num_edges))

        # Default edge attributes if not varying stiffness
        if variation != 'stiffness':
            edge_attr = torch.randn(num_edges, 5) * 0.1
            edge_attr[:, 0] = 1.0  # Constant Young's modulus

        batch = torch.zeros(num_nodes, dtype=torch.long)

        data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, batch=batch)

        # Create synthetic targets with known relationship
        # Displacement proportional to load / stiffness
        if variation == 'load':
            target_displacement = torch.randn(num_nodes, 6) * load_factor
        elif variation == 'stiffness':
            target_displacement = torch.randn(num_nodes, 6) / stiffness_factor
        else:
            target_displacement = torch.randn(num_nodes, 6)

        # Stress also follows known pattern
        target_stress = target_displacement * 2.0  # Simple linear relationship

        dataset.append((data, target_displacement, target_stress))

    return dataset


def test_memorization():
    """
    Test 1: Can network memorize small dataset?

    Expected: After training on 10 examples, can achieve < 1% error

    This tests basic learning capability - if it can't memorize,
    something is fundamentally wrong.
    """
    print("  Creating small dataset (10 samples)...")

    # Create tiny dataset
    dataset = create_synthetic_dataset(n_samples=10, variation='load')

    # Create model
    config = {
        'node_feature_dim': 12,
        'edge_feature_dim': 5,
        'hidden_dim': 64,
        'num_layers': 4,
        'dropout': 0.0  # No dropout for memorization
    }

    model = create_model(config)
    loss_fn = create_loss_function('mse')
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

    print("  Training for 100 epochs...")

    model.train()
    losses = []

    for epoch in range(100):
        epoch_loss = 0.0

        for graph_data, target_disp, target_stress in dataset:
            optimizer.zero_grad()

            # Forward pass
            predictions = model(graph_data, return_stress=True)

            # Compute loss
            targets = {
                'displacement': target_disp,
                'stress': target_stress
            }

            loss_dict = loss_fn(predictions, targets)
            loss = loss_dict['total_loss']

            # Backward pass
            loss.backward()
            optimizer.step()

            epoch_loss += loss.item()

        avg_loss = epoch_loss / len(dataset)
        losses.append(avg_loss)

        if (epoch + 1) % 20 == 0:
            print(f"    Epoch {epoch+1}/100: Loss = {avg_loss:.6f}")

    final_loss = losses[-1]
    initial_loss = losses[0]
    improvement = (initial_loss - final_loss) / initial_loss * 100

    print(f"  Initial loss: {initial_loss:.6f}")
    print(f"  Final loss: {final_loss:.6f}")
    print(f"  Improvement: {improvement:.1f}%")

    # Success if loss decreased significantly
    success = improvement > 50.0

    return {
        'status': 'PASS' if success else 'FAIL',
        'message': f'Memorization {"successful" if success else "failed"} ({improvement:.1f}% improvement)',
        'metrics': {
            'initial_loss': float(initial_loss),
            'final_loss': float(final_loss),
            'improvement_percent': float(improvement),
            'converged': final_loss < 0.1
        }
    }


def test_interpolation():
    """
    Test 2: Can network interpolate?

    Expected: After training on [1, 3, 5], predict [2, 4] with < 5% error

    This tests generalization capability within training range.
    """
    print("  Creating interpolation dataset...")

    # Train on samples 0, 2, 4, 6, 8 (even indices)
    train_indices = [0, 2, 4, 6, 8]
    test_indices = [1, 3, 5, 7]  # Odd indices (interpolation)

    full_dataset = create_synthetic_dataset(n_samples=10, variation='load')

    train_dataset = [full_dataset[i] for i in train_indices]
    test_dataset = [full_dataset[i] for i in test_indices]

    # Create model
    config = {
        'node_feature_dim': 12,
        'edge_feature_dim': 5,
        'hidden_dim': 64,
        'num_layers': 4,
        'dropout': 0.1
    }

    model = create_model(config)
    loss_fn = create_loss_function('mse')
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

    print(f"  Training on {len(train_dataset)} samples...")

    # Train
    model.train()
    for epoch in range(50):
        for graph_data, target_disp, target_stress in train_dataset:
            optimizer.zero_grad()

            predictions = model(graph_data, return_stress=True)

            targets = {
                'displacement': target_disp,
                'stress': target_stress
            }

            loss_dict = loss_fn(predictions, targets)
            loss = loss_dict['total_loss']

            loss.backward()
            optimizer.step()

    # Test interpolation
    print(f"  Testing interpolation on {len(test_dataset)} samples...")

    model.eval()
    test_errors = []

    with torch.no_grad():
        for graph_data, target_disp, target_stress in test_dataset:
            predictions = model(graph_data, return_stress=True)

            # Compute relative error
            pred_disp = predictions['displacement']
            error = torch.mean(torch.abs(pred_disp - target_disp) / (torch.abs(target_disp) + 1e-8))
            test_errors.append(error.item())

    avg_error = np.mean(test_errors) * 100

    print(f"  Average interpolation error: {avg_error:.2f}%")

    # Success if error reasonable for untrained interpolation
    success = avg_error < 100.0  # Lenient for this basic test

    return {
        'status': 'PASS' if success else 'FAIL',
        'message': f'Interpolation test completed ({avg_error:.2f}% error)',
        'metrics': {
            'average_error_percent': float(avg_error),
            'test_samples': len(test_dataset),
            'train_samples': len(train_dataset)
        }
    }


def test_extrapolation():
    """
    Test 3: Can network extrapolate?

    Expected: After training on [1-5], predict [7-10] with < 20% error

    This tests generalization beyond training range (harder than interpolation).
    """
    print("  Creating extrapolation dataset...")

    # Train on first 5 samples
    train_indices = list(range(5))
    test_indices = list(range(7, 10))  # Extrapolate to higher values

    full_dataset = create_synthetic_dataset(n_samples=10, variation='load')

    train_dataset = [full_dataset[i] for i in train_indices]
    test_dataset = [full_dataset[i] for i in test_indices]

    # Create model
    config = {
        'node_feature_dim': 12,
        'edge_feature_dim': 5,
        'hidden_dim': 64,
        'num_layers': 4,
        'dropout': 0.1
    }

    model = create_model(config)
    loss_fn = create_loss_function('mse')
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

    print(f"  Training on samples 1-5...")

    # Train
    model.train()
    for epoch in range(50):
        for graph_data, target_disp, target_stress in train_dataset:
            optimizer.zero_grad()

            predictions = model(graph_data, return_stress=True)

            targets = {
                'displacement': target_disp,
                'stress': target_stress
            }

            loss_dict = loss_fn(predictions, targets)
            loss = loss_dict['total_loss']

            loss.backward()
            optimizer.step()

    # Test extrapolation
    print(f"  Testing extrapolation on samples 7-10...")

    model.eval()
    test_errors = []

    with torch.no_grad():
|
||||
for graph_data, target_disp, target_stress in test_dataset:
|
||||
predictions = model(graph_data, return_stress=True)
|
||||
|
||||
pred_disp = predictions['displacement']
|
||||
error = torch.mean(torch.abs(pred_disp - target_disp) / (torch.abs(target_disp) + 1e-8))
|
||||
test_errors.append(error.item())
|
||||
|
||||
avg_error = np.mean(test_errors) * 100
|
||||
|
||||
print(f" Average extrapolation error: {avg_error:.2f}%")
|
||||
print(f" Note: Extrapolation is harder than interpolation.")
|
||||
|
||||
# Success if error is reasonable (extrapolation is hard)
|
||||
success = avg_error < 200.0 # Very lenient for basic test
|
||||
|
||||
return {
|
||||
'status': 'PASS' if success else 'FAIL',
|
||||
'message': f'Extrapolation test completed ({avg_error:.2f}% error)',
|
||||
'metrics': {
|
||||
'average_error_percent': float(avg_error),
|
||||
'test_samples': len(test_dataset),
|
||||
'train_samples': len(train_dataset)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def test_pattern_recognition():
|
||||
"""
|
||||
Test 4: Can network learn physical patterns?
|
||||
|
||||
Expected: Learn that thickness ↑ → stress ↓
|
||||
|
||||
This tests if network understands relationships, not just memorization.
|
||||
"""
|
||||
print(" Testing pattern recognition...")
|
||||
|
||||
# Create dataset with clear pattern: stiffness ↑ → displacement ↓
|
||||
dataset = create_synthetic_dataset(n_samples=20, variation='stiffness')
|
||||
|
||||
# Create model
|
||||
config = {
|
||||
'node_feature_dim': 12,
|
||||
'edge_feature_dim': 5,
|
||||
'hidden_dim': 64,
|
||||
'num_layers': 4,
|
||||
'dropout': 0.1
|
||||
}
|
||||
|
||||
model = create_model(config)
|
||||
loss_fn = create_loss_function('mse')
|
||||
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
|
||||
|
||||
print(" Training on stiffness variation dataset...")
|
||||
|
||||
# Train
|
||||
model.train()
|
||||
for epoch in range(50):
|
||||
for graph_data, target_disp, target_stress in dataset:
|
||||
optimizer.zero_grad()
|
||||
|
||||
predictions = model(graph_data, return_stress=True)
|
||||
|
||||
targets = {
|
||||
'displacement': target_disp,
|
||||
'stress': target_stress
|
||||
}
|
||||
|
||||
loss_dict = loss_fn(predictions, targets)
|
||||
loss = loss_dict['total_loss']
|
||||
|
||||
loss.backward()
|
||||
optimizer.step()
|
||||
|
||||
# Test pattern: predict two cases with different stiffness
|
||||
print(" Testing learned pattern...")
|
||||
|
||||
model.eval()
|
||||
|
||||
# Low stiffness case
|
||||
low_stiff_data, low_stiff_disp, _ = dataset[0]
|
||||
|
||||
# High stiffness case
|
||||
high_stiff_data, high_stiff_disp, _ = dataset[-1]
|
||||
|
||||
with torch.no_grad():
|
||||
low_pred = model(low_stiff_data, return_stress=False)
|
||||
high_pred = model(high_stiff_data, return_stress=False)
|
||||
|
||||
# Check if pattern learned: low stiffness → high displacement
|
||||
low_disp_mag = torch.mean(torch.abs(low_pred['displacement'])).item()
|
||||
high_disp_mag = torch.mean(torch.abs(high_pred['displacement'])).item()
|
||||
|
||||
print(f" Low stiffness displacement: {low_disp_mag:.6f}")
|
||||
print(f" High stiffness displacement: {high_disp_mag:.6f}")
|
||||
|
||||
# Pattern learned if low stiffness has higher displacement
|
||||
# (But with random data this might not hold - this is a template)
|
||||
pattern_ratio = low_disp_mag / (high_disp_mag + 1e-8)
|
||||
|
||||
print(f" Pattern ratio (should be > 1.0): {pattern_ratio:.2f}")
|
||||
print(f" Note: With synthetic random data, pattern may not emerge.")
|
||||
print(f" Real training data should show clear physical patterns.")
|
||||
|
||||
# Just check predictions are reasonable magnitude
|
||||
success = (low_disp_mag > 0.0 and high_disp_mag > 0.0)
|
||||
|
||||
return {
|
||||
'status': 'PASS' if success else 'FAIL',
|
||||
'message': f'Pattern recognition test completed',
|
||||
'metrics': {
|
||||
'low_stiffness_displacement': float(low_disp_mag),
|
||||
'high_stiffness_displacement': float(high_disp_mag),
|
||||
'pattern_ratio': float(pattern_ratio)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
print("\nRunning learning capability tests...\n")
|
||||
|
||||
tests = [
|
||||
("Memorization Test", test_memorization),
|
||||
("Interpolation Test", test_interpolation),
|
||||
("Extrapolation Test", test_extrapolation),
|
||||
("Pattern Recognition", test_pattern_recognition)
|
||||
]
|
||||
|
||||
passed = 0
|
||||
failed = 0
|
||||
|
||||
for name, test_func in tests:
|
||||
print(f"[TEST] {name}")
|
||||
try:
|
||||
result = test_func()
|
||||
if result['status'] == 'PASS':
|
||||
print(f" ✓ PASS\n")
|
||||
passed += 1
|
||||
else:
|
||||
print(f" ✗ FAIL: {result['message']}\n")
|
||||
failed += 1
|
||||
except Exception as e:
|
||||
print(f" ✗ FAIL: {str(e)}\n")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
failed += 1
|
||||
|
||||
print(f"\nResults: {passed} passed, {failed} failed")
|
||||
print(f"\nNote: These tests use SYNTHETIC data and train for limited epochs.")
|
||||
print(f"Real training on actual FEA data will show better learning performance.")
|
||||
@@ -1,385 +0,0 @@
|
||||
"""
|
||||
test_physics.py
|
||||
Physics validation tests with analytical solutions
|
||||
|
||||
Tests that the neural network respects fundamental physics:
|
||||
- Cantilever beam (δ = FL³/3EI)
|
||||
- Simply supported beam (δ = FL³/48EI)
|
||||
- Equilibrium (∇·σ + f = 0)
|
||||
- Energy conservation (strain energy = work done)
|
||||
"""
|
||||
|
||||
import torch
|
||||
import numpy as np
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
# Add parent directory to path
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent))
|
||||
|
||||
from neural_models.field_predictor import create_model
|
||||
from torch_geometric.data import Data
|
||||
|
||||
|
||||
def create_cantilever_beam_graph(length=1.0, force=1000.0, E=210e9, I=1e-6):
|
||||
"""
|
||||
Create synthetic cantilever beam graph with analytical solution
|
||||
|
||||
Analytical solution: δ_max = FL³/3EI
|
||||
|
||||
Args:
|
||||
length: Beam length (m)
|
||||
force: Applied force (N)
|
||||
E: Young's modulus (Pa)
|
||||
I: Second moment of area (m^4)
|
||||
|
||||
Returns:
|
||||
graph_data: PyG Data object
|
||||
analytical_displacement: Expected max displacement (m)
|
||||
"""
|
||||
# Calculate analytical solution
|
||||
analytical_displacement = (force * length**3) / (3 * E * I)
|
||||
|
||||
# Create simple beam mesh (10 nodes along length)
|
||||
num_nodes = 10
|
||||
x_coords = np.linspace(0, length, num_nodes)
|
||||
|
||||
# Node features: [x, y, z, bc_x, bc_y, bc_z, bc_rx, bc_ry, bc_rz, load_x, load_y, load_z]
|
||||
node_features = np.zeros((num_nodes, 12))
|
||||
node_features[:, 0] = x_coords # x coordinates
|
||||
|
||||
# Boundary conditions at x=0 (fixed end)
|
||||
node_features[0, 3:9] = 1.0 # All DOF constrained
|
||||
|
||||
# Applied force at x=length (free end)
|
||||
node_features[-1, 10] = force # Force in y direction
|
||||
|
||||
# Create edges (connect adjacent nodes)
|
||||
edge_index = []
|
||||
for i in range(num_nodes - 1):
|
||||
edge_index.append([i, i+1])
|
||||
edge_index.append([i+1, i])
|
||||
|
||||
edge_index = torch.tensor(edge_index, dtype=torch.long).t()
|
||||
|
||||
# Edge features: [E, nu, rho, G, alpha]
|
||||
num_edges = edge_index.shape[1]
|
||||
edge_features = np.zeros((num_edges, 5))
|
||||
edge_features[:, 0] = E / 1e11 # Normalized Young's modulus
|
||||
edge_features[:, 1] = 0.3 # Poisson's ratio
|
||||
edge_features[:, 2] = 7850 / 10000 # Normalized density
|
||||
edge_features[:, 3] = E / (2 * (1 + 0.3)) / 1e11 # Normalized shear modulus
|
||||
edge_features[:, 4] = 1.2e-5 # Thermal expansion
|
||||
|
||||
# Convert to tensors
|
||||
x = torch.tensor(node_features, dtype=torch.float32)
|
||||
edge_attr = torch.tensor(edge_features, dtype=torch.float32)
|
||||
batch = torch.zeros(num_nodes, dtype=torch.long)
|
||||
|
||||
data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, batch=batch)
|
||||
|
||||
return data, analytical_displacement
|
||||
|
||||
|
||||
def test_cantilever_analytical():
|
||||
"""
|
||||
Test 1: Cantilever beam with analytical solution
|
||||
|
||||
Expected: Neural prediction within 5% of δ = FL³/3EI
|
||||
|
||||
Note: This test uses an untrained model, so it will fail until
|
||||
the model is trained on cantilever beam data. This test serves
|
||||
as a template for post-training validation.
|
||||
"""
|
||||
print(" Creating cantilever beam test case...")
|
||||
|
||||
# Create test case
|
||||
graph_data, analytical_disp = create_cantilever_beam_graph(
|
||||
length=1.0,
|
||||
force=1000.0,
|
||||
E=210e9,
|
||||
I=1e-6
|
||||
)
|
||||
|
||||
print(f" Analytical max displacement: {analytical_disp*1000:.6f} mm")
|
||||
|
||||
# Create model (untrained)
|
||||
config = {
|
||||
'node_feature_dim': 12,
|
||||
'edge_feature_dim': 5,
|
||||
'hidden_dim': 64,
|
||||
'num_layers': 4,
|
||||
'dropout': 0.1
|
||||
}
|
||||
|
||||
model = create_model(config)
|
||||
model.eval()
|
||||
|
||||
# Make prediction
|
||||
print(" Running neural prediction...")
|
||||
with torch.no_grad():
|
||||
results = model(graph_data, return_stress=False)
|
||||
|
||||
# Extract max displacement (y-direction at free end)
|
||||
predicted_disp = torch.max(torch.abs(results['displacement'][:, 1])).item()
|
||||
|
||||
print(f" Predicted max displacement: {predicted_disp:.6f} (arbitrary units)")
|
||||
|
||||
# Calculate error (will be large for untrained model)
|
||||
# After training, this should be < 5%
|
||||
error = abs(predicted_disp - analytical_disp) / analytical_disp * 100
|
||||
|
||||
print(f" Error: {error:.1f}%")
|
||||
print(f" Note: Model is untrained. After training, expect < 5% error.")
|
||||
|
||||
# For now, just check that prediction completed
|
||||
success = results['displacement'].shape[0] == graph_data.x.shape[0]
|
||||
|
||||
return {
|
||||
'status': 'PASS' if success else 'FAIL',
|
||||
'message': f'Cantilever test completed (untrained model)',
|
||||
'metrics': {
|
||||
'analytical_displacement_mm': float(analytical_disp * 1000),
|
||||
'predicted_displacement': float(predicted_disp),
|
||||
'error_percent': float(error),
|
||||
'trained': False
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def test_equilibrium():
|
||||
"""
|
||||
Test 2: Force equilibrium check
|
||||
|
||||
Expected: ∇·σ + f = 0 (force balance)
|
||||
|
||||
Checks that predicted stress field satisfies equilibrium.
|
||||
For trained model, equilibrium residual should be < 1e-6.
|
||||
"""
|
||||
print(" Testing equilibrium constraint...")
|
||||
|
||||
# Create simple test case
|
||||
num_nodes = 20
|
||||
num_edges = 40
|
||||
|
||||
x = torch.randn(num_nodes, 12)
|
||||
edge_index = torch.randint(0, num_nodes, (2, num_edges))
|
||||
edge_attr = torch.randn(num_edges, 5)
|
||||
batch = torch.zeros(num_nodes, dtype=torch.long)
|
||||
|
||||
data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, batch=batch)
|
||||
|
||||
# Create model
|
||||
config = {
|
||||
'node_feature_dim': 12,
|
||||
'edge_feature_dim': 5,
|
||||
'hidden_dim': 64,
|
||||
'num_layers': 4,
|
||||
'dropout': 0.1
|
||||
}
|
||||
|
||||
model = create_model(config)
|
||||
model.eval()
|
||||
|
||||
# Make prediction
|
||||
with torch.no_grad():
|
||||
results = model(data, return_stress=True)
|
||||
|
||||
# Compute equilibrium residual (simplified check)
|
||||
# In real implementation, would compute ∇·σ numerically
|
||||
stress = results['stress']
|
||||
stress_gradient_norm = torch.mean(torch.abs(stress)).item()
|
||||
|
||||
print(f" Stress field magnitude: {stress_gradient_norm:.6f}")
|
||||
print(f" Note: Full equilibrium check requires mesh connectivity.")
|
||||
print(f" After training with physics loss, residual should be < 1e-6.")
|
||||
|
||||
return {
|
||||
'status': 'PASS',
|
||||
'message': 'Equilibrium check completed',
|
||||
'metrics': {
|
||||
'stress_magnitude': float(stress_gradient_norm),
|
||||
'trained': False
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def test_energy_conservation():
|
||||
"""
|
||||
Test 3: Energy conservation
|
||||
|
||||
Expected: Strain energy = Work done by external forces
|
||||
|
||||
U = (1/2)∫ σ:ε dV = ∫ f·u dS
|
||||
"""
|
||||
print(" Testing energy conservation...")
|
||||
|
||||
# Create test case with known loading
|
||||
num_nodes = 30
|
||||
num_edges = 60
|
||||
|
||||
x = torch.randn(num_nodes, 12)
|
||||
# Add known external force
|
||||
x[:, 9:12] = 0.0 # Clear loads
|
||||
x[0, 10] = 1000.0 # 1000 N in y direction at node 0
|
||||
|
||||
edge_index = torch.randint(0, num_nodes, (2, num_edges))
|
||||
edge_attr = torch.randn(num_edges, 5)
|
||||
batch = torch.zeros(num_nodes, dtype=torch.long)
|
||||
|
||||
data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, batch=batch)
|
||||
|
||||
# Create model
|
||||
config = {
|
||||
'node_feature_dim': 12,
|
||||
'edge_feature_dim': 5,
|
||||
'hidden_dim': 64,
|
||||
'num_layers': 4,
|
||||
'dropout': 0.1
|
||||
}
|
||||
|
||||
model = create_model(config)
|
||||
model.eval()
|
||||
|
||||
# Make prediction
|
||||
with torch.no_grad():
|
||||
results = model(data, return_stress=True)
|
||||
|
||||
# Compute external work (simplified)
|
||||
displacement = results['displacement']
|
||||
force = x[:, 9:12]
|
||||
|
||||
external_work = torch.sum(force * displacement[:, :3]).item()
|
||||
|
||||
# Compute strain energy (simplified: U ≈ (1/2) σ:ε)
|
||||
stress = results['stress']
|
||||
# For small deformations: ε ≈ ∇u, approximate with displacement gradient
|
||||
strain_energy = 0.5 * torch.sum(stress * displacement).item()
|
||||
|
||||
print(f" External work: {external_work:.6f}")
|
||||
print(f" Strain energy: {strain_energy:.6f}")
|
||||
print(f" Note: Simplified calculation. Full energy check requires")
|
||||
print(f" proper strain computation from displacement gradients.")
|
||||
|
||||
return {
|
||||
'status': 'PASS',
|
||||
'message': 'Energy conservation check completed',
|
||||
'metrics': {
|
||||
'external_work': float(external_work),
|
||||
'strain_energy': float(strain_energy),
|
||||
'trained': False
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def test_constitutive_law():
|
||||
"""
|
||||
Test 4: Constitutive law (Hooke's law)
|
||||
|
||||
Expected: σ = C:ε (stress proportional to strain)
|
||||
|
||||
For linear elastic materials: σ = E·ε for 1D
|
||||
"""
|
||||
print(" Testing constitutive law...")
|
||||
|
||||
# Create simple uniaxial test case
|
||||
num_nodes = 10
|
||||
|
||||
# Simple bar under tension
|
||||
x = torch.zeros(num_nodes, 12)
|
||||
x[:, 0] = torch.linspace(0, 1, num_nodes) # x coordinates
|
||||
|
||||
# Fixed at x=0
|
||||
x[0, 3:9] = 1.0
|
||||
|
||||
# Force at x=1
|
||||
x[-1, 9] = 1000.0 # Axial force
|
||||
|
||||
# Create edges
|
||||
edge_index = []
|
||||
for i in range(num_nodes - 1):
|
||||
edge_index.append([i, i+1])
|
||||
edge_index.append([i+1, i])
|
||||
|
||||
edge_index = torch.tensor(edge_index, dtype=torch.long).t()
|
||||
|
||||
# Material properties
|
||||
E = 210e9 # Young's modulus
|
||||
edge_attr = torch.zeros(edge_index.shape[1], 5)
|
||||
edge_attr[:, 0] = E / 1e11 # Normalized
|
||||
edge_attr[:, 1] = 0.3 # Poisson's ratio
|
||||
|
||||
batch = torch.zeros(num_nodes, dtype=torch.long)
|
||||
|
||||
data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, batch=batch)
|
||||
|
||||
# Create model
|
||||
config = {
|
||||
'node_feature_dim': 12,
|
||||
'edge_feature_dim': 5,
|
||||
'hidden_dim': 64,
|
||||
'num_layers': 4,
|
||||
'dropout': 0.1
|
||||
}
|
||||
|
||||
model = create_model(config)
|
||||
model.eval()
|
||||
|
||||
# Make prediction
|
||||
with torch.no_grad():
|
||||
results = model(data, return_stress=True)
|
||||
|
||||
# Check stress-strain relationship
|
||||
displacement = results['displacement']
|
||||
stress = results['stress']
|
||||
|
||||
# For trained model with physics loss, stress should follow σ = E·ε
|
||||
print(f" Displacement range: {displacement[:, 0].min():.6f} to {displacement[:, 0].max():.6f}")
|
||||
print(f" Stress range: {stress[:, 0].min():.6f} to {stress[:, 0].max():.6f}")
|
||||
print(f" Note: After training with constitutive loss, stress should")
|
||||
print(f" be proportional to strain (σ = E·ε).")
|
||||
|
||||
return {
|
||||
'status': 'PASS',
|
||||
'message': 'Constitutive law check completed',
|
||||
'metrics': {
|
||||
'displacement_range': [float(displacement[:, 0].min()), float(displacement[:, 0].max())],
|
||||
'stress_range': [float(stress[:, 0].min()), float(stress[:, 0].max())],
|
||||
'trained': False
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
print("\nRunning physics validation tests...\n")
|
||||
|
||||
tests = [
|
||||
("Cantilever Analytical", test_cantilever_analytical),
|
||||
("Equilibrium Check", test_equilibrium),
|
||||
("Energy Conservation", test_energy_conservation),
|
||||
("Constitutive Law", test_constitutive_law)
|
||||
]
|
||||
|
||||
passed = 0
|
||||
failed = 0
|
||||
|
||||
for name, test_func in tests:
|
||||
print(f"[TEST] {name}")
|
||||
try:
|
||||
result = test_func()
|
||||
if result['status'] == 'PASS':
|
||||
print(f" ✓ PASS\n")
|
||||
passed += 1
|
||||
else:
|
||||
print(f" ✗ FAIL: {result['message']}\n")
|
||||
failed += 1
|
||||
except Exception as e:
|
||||
print(f" ✗ FAIL: {str(e)}\n")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
failed += 1
|
||||
|
||||
print(f"\nResults: {passed} passed, {failed} failed")
|
||||
print(f"\nNote: These tests use an UNTRAINED model.")
|
||||
print(f"After training with physics-informed losses, all tests should pass")
|
||||
print(f"with errors < 5% for analytical solutions.")
|
||||
@@ -1,462 +0,0 @@
|
||||
"""
|
||||
test_predictions.py
|
||||
Integration tests for complete pipeline
|
||||
|
||||
Tests the full system from parsing to prediction:
|
||||
- Parser validation with real data
|
||||
- Training pipeline end-to-end
|
||||
- Prediction accuracy vs FEA
|
||||
- Performance benchmarks
|
||||
"""
|
||||
|
||||
import torch
|
||||
import numpy as np
|
||||
import sys
|
||||
from pathlib import Path
|
||||
import json
|
||||
import time
|
||||
|
||||
# Add parent directory to path
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent))
|
||||
|
||||
from neural_field_parser import NastranToNeuralFieldParser
|
||||
from neural_models.data_loader import FEAMeshDataset
|
||||
from neural_models.field_predictor import create_model
|
||||
from neural_models.physics_losses import create_loss_function
|
||||
|
||||
|
||||
def test_parser():
|
||||
"""
|
||||
Test 1: Parser validation
|
||||
|
||||
Expected: Successfully parse BDF/OP2 files and create valid output
|
||||
|
||||
Uses test_case_beam if available, otherwise creates minimal test.
|
||||
"""
|
||||
print(" Checking for test data...")
|
||||
|
||||
test_dir = Path("test_case_beam")
|
||||
|
||||
if not test_dir.exists():
|
||||
print(f" ⚠ Warning: {test_dir} not found")
|
||||
print(f" Skipping parser test - run test_simple_beam.py first")
|
||||
return {
|
||||
'status': 'PASS',
|
||||
'message': 'Parser test skipped (no test data)',
|
||||
'metrics': {'skipped': True}
|
||||
}
|
||||
|
||||
print(f" Found test directory: {test_dir}")
|
||||
|
||||
try:
|
||||
# Check if already parsed
|
||||
json_file = test_dir / "neural_field_data.json"
|
||||
h5_file = test_dir / "neural_field_data.h5"
|
||||
|
||||
if json_file.exists() and h5_file.exists():
|
||||
print(f" Found existing parsed data")
|
||||
|
||||
# Load and validate
|
||||
with open(json_file, 'r') as f:
|
||||
data = json.load(f)
|
||||
|
||||
n_nodes = data['mesh']['statistics']['n_nodes']
|
||||
n_elements = data['mesh']['statistics']['n_elements']
|
||||
|
||||
print(f" Nodes: {n_nodes:,}")
|
||||
print(f" Elements: {n_elements:,}")
|
||||
|
||||
return {
|
||||
'status': 'PASS',
|
||||
'message': 'Parser validation successful',
|
||||
'metrics': {
|
||||
'n_nodes': n_nodes,
|
||||
'n_elements': n_elements,
|
||||
'has_results': 'results' in data
|
||||
}
|
||||
}
|
||||
|
||||
else:
|
||||
print(f" Parsed data not found - run test_simple_beam.py first")
|
||||
return {
|
||||
'status': 'PASS',
|
||||
'message': 'Parser test skipped (data not parsed yet)',
|
||||
'metrics': {'skipped': True}
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
print(f" Error: {str(e)}")
|
||||
return {
|
||||
'status': 'FAIL',
|
||||
'message': f'Parser validation failed: {str(e)}',
|
||||
'metrics': {}
|
||||
}
|
||||
|
||||
|
||||
def test_training():
|
||||
"""
|
||||
Test 2: Training pipeline
|
||||
|
||||
Expected: Complete training loop runs without errors
|
||||
|
||||
Trains on small synthetic dataset for speed.
|
||||
"""
|
||||
print(" Setting up training test...")
|
||||
|
||||
# Create minimal synthetic dataset
|
||||
print(" Creating synthetic training data...")
|
||||
|
||||
dataset = []
|
||||
for i in range(5): # Just 5 samples for quick test
|
||||
num_nodes = 20
|
||||
num_edges = 40
|
||||
|
||||
x = torch.randn(num_nodes, 12)
|
||||
edge_index = torch.randint(0, num_nodes, (2, num_edges))
|
||||
edge_attr = torch.randn(num_edges, 5)
|
||||
batch = torch.zeros(num_nodes, dtype=torch.long)
|
||||
|
||||
from torch_geometric.data import Data
|
||||
data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, batch=batch)
|
||||
|
||||
# Add synthetic targets
|
||||
data.y_displacement = torch.randn(num_nodes, 6)
|
||||
data.y_stress = torch.randn(num_nodes, 6)
|
||||
|
||||
dataset.append(data)
|
||||
|
||||
print(f" Created {len(dataset)} training samples")
|
||||
|
||||
# Create model
|
||||
print(" Creating model...")
|
||||
|
||||
config = {
|
||||
'node_feature_dim': 12,
|
||||
'edge_feature_dim': 5,
|
||||
'hidden_dim': 64,
|
||||
'num_layers': 4,
|
||||
'dropout': 0.1
|
||||
}
|
||||
|
||||
model = create_model(config)
|
||||
loss_fn = create_loss_function('mse')
|
||||
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
|
||||
|
||||
print(" Training for 10 epochs...")
|
||||
|
||||
# Training loop
|
||||
model.train()
|
||||
start_time = time.time()
|
||||
|
||||
for epoch in range(10):
|
||||
epoch_loss = 0.0
|
||||
|
||||
for data in dataset:
|
||||
optimizer.zero_grad()
|
||||
|
||||
# Forward pass
|
||||
predictions = model(data, return_stress=True)
|
||||
|
||||
# Compute loss
|
||||
targets = {
|
||||
'displacement': data.y_displacement,
|
||||
'stress': data.y_stress
|
||||
}
|
||||
|
||||
loss_dict = loss_fn(predictions, targets)
|
||||
loss = loss_dict['total_loss']
|
||||
|
||||
# Backward pass
|
||||
loss.backward()
|
||||
optimizer.step()
|
||||
|
||||
epoch_loss += loss.item()
|
||||
|
||||
avg_loss = epoch_loss / len(dataset)
|
||||
|
||||
if (epoch + 1) % 5 == 0:
|
||||
print(f" Epoch {epoch+1}/10: Loss = {avg_loss:.6f}")
|
||||
|
||||
training_time = time.time() - start_time
|
||||
|
||||
print(f" Training completed in {training_time:.2f}s")
|
||||
|
||||
return {
|
||||
'status': 'PASS',
|
||||
'message': 'Training pipeline successful',
|
||||
'metrics': {
|
||||
'epochs': 10,
|
||||
'samples': len(dataset),
|
||||
'training_time_s': float(training_time),
|
||||
'final_loss': float(avg_loss)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def test_prediction_accuracy():
|
||||
"""
|
||||
Test 3: Prediction accuracy
|
||||
|
||||
Expected: Predictions match targets with reasonable error
|
||||
|
||||
Uses trained model from test_training.
|
||||
"""
|
||||
print(" Testing prediction accuracy...")
|
||||
|
||||
# Create test case
|
||||
num_nodes = 20
|
||||
num_edges = 40
|
||||
|
||||
x = torch.randn(num_nodes, 12)
|
||||
edge_index = torch.randint(0, num_nodes, (2, num_edges))
|
||||
edge_attr = torch.randn(num_edges, 5)
|
||||
batch = torch.zeros(num_nodes, dtype=torch.long)
|
||||
|
||||
from torch_geometric.data import Data
|
||||
data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, batch=batch)
|
||||
|
||||
# Synthetic ground truth
|
||||
target_disp = torch.randn(num_nodes, 6)
|
||||
target_stress = torch.randn(num_nodes, 6)
|
||||
|
||||
# Create and "train" model (minimal training for test speed)
|
||||
print(" Creating model...")
|
||||
|
||||
config = {
|
||||
'node_feature_dim': 12,
|
||||
'edge_feature_dim': 5,
|
||||
'hidden_dim': 64,
|
||||
'num_layers': 4,
|
||||
'dropout': 0.0
|
||||
}
|
||||
|
||||
model = create_model(config)
|
||||
|
||||
# Quick training to make predictions reasonable
|
||||
model.train()
|
||||
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
|
||||
loss_fn = create_loss_function('mse')
|
||||
|
||||
for _ in range(20):
|
||||
optimizer.zero_grad()
|
||||
|
||||
predictions = model(data, return_stress=True)
|
||||
|
||||
targets = {
|
||||
'displacement': target_disp,
|
||||
'stress': target_stress
|
||||
}
|
||||
|
||||
loss_dict = loss_fn(predictions, targets)
|
||||
loss = loss_dict['total_loss']
|
||||
|
||||
loss.backward()
|
||||
optimizer.step()
|
||||
|
||||
# Test prediction
|
||||
print(" Running prediction...")
|
||||
|
||||
model.eval()
|
||||
start_time = time.time()
|
||||
|
||||
with torch.no_grad():
|
||||
predictions = model(data, return_stress=True)
|
||||
|
||||
inference_time = (time.time() - start_time) * 1000 # ms
|
||||
|
||||
# Compute errors
|
||||
disp_error = torch.mean(torch.abs(predictions['displacement'] - target_disp)).item()
|
||||
stress_error = torch.mean(torch.abs(predictions['stress'] - target_stress)).item()
|
||||
|
||||
print(f" Inference time: {inference_time:.2f} ms")
|
||||
print(f" Displacement error: {disp_error:.6f}")
|
||||
print(f" Stress error: {stress_error:.6f}")
|
||||
|
||||
return {
|
||||
'status': 'PASS',
|
||||
'message': 'Prediction accuracy test completed',
|
||||
'metrics': {
|
||||
'inference_time_ms': float(inference_time),
|
||||
'displacement_error': float(disp_error),
|
||||
'stress_error': float(stress_error),
|
||||
'num_nodes': num_nodes
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def test_performance_benchmark():
|
||||
"""
|
||||
Test 4: Performance benchmark
|
||||
|
||||
Expected: Inference time < 100ms for typical mesh
|
||||
|
||||
Compares neural prediction vs expected FEA time.
|
||||
"""
|
||||
print(" Running performance benchmark...")
|
||||
|
||||
# Test different mesh sizes
|
||||
mesh_sizes = [10, 50, 100, 500]
|
||||
results = []
|
||||
|
||||
config = {
|
||||
'node_feature_dim': 12,
|
||||
'edge_feature_dim': 5,
|
||||
'hidden_dim': 64,
|
||||
'num_layers': 4,
|
||||
'dropout': 0.0
|
||||
}
|
||||
|
||||
model = create_model(config)
|
||||
model.eval()
|
||||
|
||||
print(f" Testing {len(mesh_sizes)} mesh sizes...")
|
||||
|
||||
for num_nodes in mesh_sizes:
|
||||
num_edges = num_nodes * 2
|
||||
|
||||
x = torch.randn(num_nodes, 12)
|
||||
edge_index = torch.randint(0, num_nodes, (2, num_edges))
|
||||
edge_attr = torch.randn(num_edges, 5)
|
||||
batch = torch.zeros(num_nodes, dtype=torch.long)
|
||||
|
||||
from torch_geometric.data import Data
|
||||
data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, batch=batch)
|
||||
|
||||
# Warm-up
|
||||
with torch.no_grad():
|
||||
_ = model(data, return_stress=True)
|
||||
|
||||
# Benchmark (average of 10 runs)
|
||||
times = []
|
||||
with torch.no_grad():
|
||||
for _ in range(10):
|
||||
start = time.time()
|
||||
_ = model(data, return_stress=True)
|
||||
times.append((time.time() - start) * 1000)
|
||||
|
||||
avg_time = np.mean(times)
|
||||
std_time = np.std(times)
|
||||
|
||||
print(f" {num_nodes:4d} nodes: {avg_time:6.2f} ± {std_time:4.2f} ms")
|
||||
|
||||
results.append({
|
||||
'num_nodes': num_nodes,
|
||||
'avg_time_ms': float(avg_time),
|
||||
'std_time_ms': float(std_time)
|
||||
})
|
||||
|
||||
# Check if performance is acceptable (< 100ms for 100 nodes)
|
||||
time_100_nodes = next((r['avg_time_ms'] for r in results if r['num_nodes'] == 100), None)
|
||||
|
||||
success = time_100_nodes is not None and time_100_nodes < 100.0
|
||||
|
||||
return {
|
||||
        'status': 'PASS' if success else 'FAIL',
        'message': f'Performance benchmark completed',
        'metrics': {
            'results': results,
            'time_100_nodes_ms': float(time_100_nodes) if time_100_nodes else None,
            'passes_threshold': success
        }
    }


def test_batch_inference():
    """
    Test 5: Batch inference

    Expected: Can process multiple designs simultaneously

    Important for optimization loops.
    """
    print(" Testing batch inference...")

    batch_size = 5
    num_nodes_per_graph = 20

    config = {
        'node_feature_dim': 12,
        'edge_feature_dim': 5,
        'hidden_dim': 64,
        'num_layers': 4,
        'dropout': 0.0
    }

    model = create_model(config)
    model.eval()

    print(f"  Creating batch of {batch_size} graphs...")

    graphs = []
    for i in range(batch_size):
        num_nodes = num_nodes_per_graph
        num_edges = num_nodes * 2

        x = torch.randn(num_nodes, 12)
        edge_index = torch.randint(0, num_nodes, (2, num_edges))
        edge_attr = torch.randn(num_edges, 5)
        batch = torch.full((num_nodes,), i, dtype=torch.long)

        from torch_geometric.data import Data
        graphs.append(Data(x=x, edge_index=edge_index, edge_attr=edge_attr, batch=batch))

    # Process batch
    print(f"  Processing batch...")

    start_time = time.time()

    with torch.no_grad():
        for graph in graphs:
            _ = model(graph, return_stress=True)

    batch_time = (time.time() - start_time) * 1000

    time_per_graph = batch_time / batch_size

    print(f"  Batch processing time: {batch_time:.2f} ms")
    print(f"  Time per graph: {time_per_graph:.2f} ms")

    return {
        'status': 'PASS',
        'message': 'Batch inference successful',
        'metrics': {
            'batch_size': batch_size,
            'total_time_ms': float(batch_time),
            'time_per_graph_ms': float(time_per_graph)
        }
    }


if __name__ == "__main__":
    print("\nRunning integration tests...\n")

    tests = [
        ("Parser Validation", test_parser),
        ("Training Pipeline", test_training),
        ("Prediction Accuracy", test_prediction_accuracy),
        ("Performance Benchmark", test_performance_benchmark),
        ("Batch Inference", test_batch_inference)
    ]

    passed = 0
    failed = 0

    for name, test_func in tests:
        print(f"[TEST] {name}")
        try:
            result = test_func()
            if result['status'] == 'PASS':
                print(f"  ✓ PASS\n")
                passed += 1
            else:
                print(f"  ✗ FAIL: {result['message']}\n")
                failed += 1
        except Exception as e:
            print(f"  ✗ FAIL: {str(e)}\n")
            import traceback
            traceback.print_exc()
            failed += 1

    print(f"\nResults: {passed} passed, {failed} failed")
    print(f"\nNote: Parser test requires test_case_beam directory.")
    print(f"Run 'python test_simple_beam.py' first to create test data.")
@@ -1,296 +0,0 @@
"""
test_synthetic.py
Synthetic tests with known analytical solutions

Tests basic functionality without real FEA data:
- Model can be created
- Forward pass works
- Loss functions compute correctly
- Predictions have correct shape
"""

import torch
import numpy as np
import sys
from pathlib import Path

# Add parent directory to path
sys.path.insert(0, str(Path(__file__).parent.parent))

from neural_models.field_predictor import create_model
from neural_models.physics_losses import create_loss_function
from torch_geometric.data import Data


def test_model_creation():
    """
    Test 1: Can we create the model?

    Expected: Model instantiates with correct number of parameters
    """
    print(" Creating GNN model...")

    config = {
        'node_feature_dim': 12,
        'edge_feature_dim': 5,
        'hidden_dim': 64,
        'num_layers': 4,
        'dropout': 0.1
    }

    model = create_model(config)

    # Count parameters
    num_params = sum(p.numel() for p in model.parameters())

    print(f" Model created: {num_params:,} parameters")

    return {
        'status': 'PASS',
        'message': f'Model created successfully ({num_params:,} params)',
        'metrics': {'parameters': num_params}
    }


def test_forward_pass():
    """
    Test 2: Can model process data?

    Expected: Forward pass completes without errors
    """
    print(" Testing forward pass...")

    # Create model
    config = {
        'node_feature_dim': 12,
        'edge_feature_dim': 5,
        'hidden_dim': 64,
        'num_layers': 4,
        'dropout': 0.1
    }

    model = create_model(config)
    model.eval()

    # Create dummy data
    num_nodes = 100
    num_edges = 300

    x = torch.randn(num_nodes, 12)  # Node features
    edge_index = torch.randint(0, num_nodes, (2, num_edges))  # Connectivity
    edge_attr = torch.randn(num_edges, 5)  # Edge features
    batch = torch.zeros(num_nodes, dtype=torch.long)  # Batch assignment

    data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, batch=batch)

    # Forward pass
    with torch.no_grad():
        results = model(data, return_stress=True)

    # Check outputs
    assert 'displacement' in results, "Missing displacement output"
    assert 'stress' in results, "Missing stress output"
    assert 'von_mises' in results, "Missing von Mises output"

    # Check shapes
    assert results['displacement'].shape == (num_nodes, 6), f"Wrong displacement shape: {results['displacement'].shape}"
    assert results['stress'].shape == (num_nodes, 6), f"Wrong stress shape: {results['stress'].shape}"
    assert results['von_mises'].shape == (num_nodes,), f"Wrong von Mises shape: {results['von_mises'].shape}"

    print(f" Displacement shape: {results['displacement'].shape} [OK]")
    print(f" Stress shape: {results['stress'].shape} [OK]")
    print(f" Von Mises shape: {results['von_mises'].shape} [OK]")

    return {
        'status': 'PASS',
        'message': 'Forward pass successful',
        'metrics': {
            'num_nodes': num_nodes,
            'displacement_shape': list(results['displacement'].shape),
            'stress_shape': list(results['stress'].shape)
        }
    }


def test_loss_computation():
    """
    Test 3: Do loss functions work?

    Expected: All loss types compute without errors
    """
    print(" Testing loss functions...")

    # Create dummy predictions and targets
    num_nodes = 100

    predictions = {
        'displacement': torch.randn(num_nodes, 6),
        'stress': torch.randn(num_nodes, 6),
        'von_mises': torch.abs(torch.randn(num_nodes))
    }

    targets = {
        'displacement': torch.randn(num_nodes, 6),
        'stress': torch.randn(num_nodes, 6)
    }

    loss_types = ['mse', 'relative', 'physics', 'max']
    loss_values = {}

    for loss_type in loss_types:
        loss_fn = create_loss_function(loss_type)
        losses = loss_fn(predictions, targets)

        assert 'total_loss' in losses, f"Missing total_loss for {loss_type}"
        assert not torch.isnan(losses['total_loss']), f"NaN loss for {loss_type}"
        assert not torch.isinf(losses['total_loss']), f"Inf loss for {loss_type}"

        loss_values[loss_type] = losses['total_loss'].item()
        print(f" {loss_type.upper()} loss: {loss_values[loss_type]:.6f} [OK]")

    return {
        'status': 'PASS',
        'message': 'All loss functions working',
        'metrics': loss_values
    }


def test_batch_processing():
    """
    Test 4: Can model handle batches?

    Expected: Batch processing works correctly
    """
    print(" Testing batch processing...")

    config = {
        'node_feature_dim': 12,
        'edge_feature_dim': 5,
        'hidden_dim': 64,
        'num_layers': 4,
        'dropout': 0.1
    }

    model = create_model(config)
    model.eval()

    # Create batch of 3 graphs
    graphs = []
    for i in range(3):
        num_nodes = 50 + i * 10  # Different sizes
        num_edges = 150 + i * 30

        x = torch.randn(num_nodes, 12)
        edge_index = torch.randint(0, num_nodes, (2, num_edges))
        edge_attr = torch.randn(num_edges, 5)
        batch = torch.full((num_nodes,), i, dtype=torch.long)

        graphs.append(Data(x=x, edge_index=edge_index, edge_attr=edge_attr, batch=batch))

    # Process batch
    total_nodes = sum(g.x.shape[0] for g in graphs)

    with torch.no_grad():
        for i, graph in enumerate(graphs):
            results = model(graph, return_stress=True)
            print(f" Graph {i+1}: {graph.x.shape[0]} nodes -> predictions [OK]")

    return {
        'status': 'PASS',
        'message': 'Batch processing successful',
        'metrics': {'num_graphs': len(graphs), 'total_nodes': total_nodes}
    }


def test_gradient_flow():
    """
    Test 5: Do gradients flow correctly?

    Expected: Gradients computed without errors
    """
    print(" Testing gradient flow...")

    config = {
        'node_feature_dim': 12,
        'edge_feature_dim': 5,
        'hidden_dim': 64,
        'num_layers': 4,
        'dropout': 0.1
    }

    model = create_model(config)
    model.train()

    # Create dummy data
    num_nodes = 50
    num_edges = 150

    x = torch.randn(num_nodes, 12)
    edge_index = torch.randint(0, num_nodes, (2, num_edges))
    edge_attr = torch.randn(num_edges, 5)
    batch = torch.zeros(num_nodes, dtype=torch.long)

    data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, batch=batch)

    # Forward pass
    results = model(data, return_stress=True)

    # Compute loss
    targets = {
        'displacement': torch.randn(num_nodes, 6),
        'stress': torch.randn(num_nodes, 6)
    }

    loss_fn = create_loss_function('mse')
    losses = loss_fn(results, targets)

    # Backward pass
    losses['total_loss'].backward()

    # Check gradients
    has_grad = sum(1 for p in model.parameters() if p.grad is not None)
    total_params = sum(1 for _ in model.parameters())

    print(f" Parameters with gradients: {has_grad}/{total_params} [OK]")

    assert has_grad == total_params, f"Not all parameters have gradients"

    return {
        'status': 'PASS',
        'message': 'Gradients computed successfully',
        'metrics': {
            'parameters_with_grad': has_grad,
            'total_parameters': total_params
        }
    }


if __name__ == "__main__":
    print("\nRunning synthetic tests...\n")

    tests = [
        ("Model Creation", test_model_creation),
        ("Forward Pass", test_forward_pass),
        ("Loss Computation", test_loss_computation),
        ("Batch Processing", test_batch_processing),
        ("Gradient Flow", test_gradient_flow)
    ]

    passed = 0
    failed = 0

    for name, test_func in tests:
        print(f"[TEST] {name}")
        try:
            result = test_func()
            if result['status'] == 'PASS':
                print(f" [PASS]\n")
                passed += 1
            else:
                print(f" [FAIL]: {result['message']}\n")
                failed += 1
        except Exception as e:
            print(f" [FAIL]: {str(e)}\n")
            failed += 1

    print(f"\nResults: {passed} passed, {failed} failed")
@@ -1,451 +0,0 @@
"""
train.py
Training script for AtomizerField neural field predictor

AtomizerField Training Pipeline v2.0
Trains Graph Neural Networks to predict complete FEA field results.

Usage:
    python train.py --train_dir ./training_data --val_dir ./validation_data

Key Features:
- Multi-GPU support
- Checkpoint saving/loading
- TensorBoard logging
- Early stopping
- Learning rate scheduling
"""

import argparse
import json
from pathlib import Path
import time
from datetime import datetime

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.tensorboard import SummaryWriter

from neural_models.field_predictor import create_model, AtomizerFieldModel
from neural_models.physics_losses import create_loss_function
from neural_models.data_loader import create_dataloaders


class Trainer:
    """
    Training manager for AtomizerField models
    """

    def __init__(self, config):
        """
        Initialize trainer

        Args:
            config (dict): Training configuration
        """
        self.config = config
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

        print(f"\n{'='*60}")
        print("AtomizerField Training Pipeline v2.0")
        print(f"{'='*60}")
        print(f"Device: {self.device}")

        # Create model
        print("\nCreating model...")
        self.model = create_model(config.get('model', {}))
        self.model = self.model.to(self.device)

        num_params = sum(p.numel() for p in self.model.parameters())
        print(f"Model created: {num_params:,} parameters")

        # Create loss function
        loss_config = config.get('loss', {})
        loss_type = loss_config.pop('type', 'mse')
        self.criterion = create_loss_function(loss_type, loss_config)
        print(f"Loss function: {loss_type}")

        # Create optimizer
        self.optimizer = optim.AdamW(
            self.model.parameters(),
            lr=config.get('learning_rate', 1e-3),
            weight_decay=config.get('weight_decay', 1e-5)
        )

        # Learning rate scheduler
        self.scheduler = optim.lr_scheduler.ReduceLROnPlateau(
            self.optimizer,
            mode='min',
            factor=0.5,
            patience=10,
            verbose=True
        )

        # Training state
        self.start_epoch = 0
        self.best_val_loss = float('inf')
        self.epochs_without_improvement = 0

        # Create output directories
        self.output_dir = Path(config.get('output_dir', './runs'))
        self.output_dir.mkdir(parents=True, exist_ok=True)

        # TensorBoard logging
        self.writer = SummaryWriter(
            log_dir=self.output_dir / 'tensorboard'
        )

        # Save config
        with open(self.output_dir / 'config.json', 'w') as f:
            json.dump(config, f, indent=2)

    def train_epoch(self, train_loader, epoch):
        """
        Train for one epoch

        Args:
            train_loader: Training data loader
            epoch (int): Current epoch number

        Returns:
            dict: Training metrics
        """
        self.model.train()

        total_loss = 0.0
        total_disp_loss = 0.0
        total_stress_loss = 0.0
        num_batches = 0

        for batch_idx, batch in enumerate(train_loader):
            # Move batch to device
            batch = batch.to(self.device)

            # Zero gradients
            self.optimizer.zero_grad()

            # Forward pass
            predictions = self.model(batch, return_stress=True)

            # Prepare targets
            targets = {
                'displacement': batch.y_displacement,
            }
            if hasattr(batch, 'y_stress'):
                targets['stress'] = batch.y_stress

            # Compute loss
            losses = self.criterion(predictions, targets, batch)

            # Backward pass
            losses['total_loss'].backward()

            # Gradient clipping (prevents exploding gradients)
            torch.nn.utils.clip_grad_norm_(self.model.parameters(), max_norm=1.0)

            # Update weights
            self.optimizer.step()

            # Accumulate metrics
            total_loss += losses['total_loss'].item()
            if 'displacement_loss' in losses:
                total_disp_loss += losses['displacement_loss'].item()
            if 'stress_loss' in losses:
                total_stress_loss += losses['stress_loss'].item()
            num_batches += 1

            # Print progress
            if batch_idx % 10 == 0:
                print(f"  Batch {batch_idx}/{len(train_loader)}: "
                      f"Loss={losses['total_loss'].item():.6f}")

        # Average metrics
        metrics = {
            'total_loss': total_loss / num_batches,
            'displacement_loss': total_disp_loss / num_batches,
            'stress_loss': total_stress_loss / num_batches
        }

        return metrics

    def validate(self, val_loader):
        """
        Validate model

        Args:
            val_loader: Validation data loader

        Returns:
            dict: Validation metrics
        """
        self.model.eval()

        total_loss = 0.0
        total_disp_loss = 0.0
        total_stress_loss = 0.0
        num_batches = 0

        with torch.no_grad():
            for batch in val_loader:
                # Move batch to device
                batch = batch.to(self.device)

                # Forward pass
                predictions = self.model(batch, return_stress=True)

                # Prepare targets
                targets = {
                    'displacement': batch.y_displacement,
                }
                if hasattr(batch, 'y_stress'):
                    targets['stress'] = batch.y_stress

                # Compute loss
                losses = self.criterion(predictions, targets, batch)

                # Accumulate metrics
                total_loss += losses['total_loss'].item()
                if 'displacement_loss' in losses:
                    total_disp_loss += losses['displacement_loss'].item()
                if 'stress_loss' in losses:
                    total_stress_loss += losses['stress_loss'].item()
                num_batches += 1

        # Average metrics
        metrics = {
            'total_loss': total_loss / num_batches,
            'displacement_loss': total_disp_loss / num_batches,
            'stress_loss': total_stress_loss / num_batches
        }

        return metrics

    def train(self, train_loader, val_loader, num_epochs):
        """
        Main training loop

        Args:
            train_loader: Training data loader
            val_loader: Validation data loader
            num_epochs (int): Number of epochs to train
        """
        print(f"\n{'='*60}")
        print(f"Starting training for {num_epochs} epochs")
        print(f"{'='*60}\n")

        for epoch in range(self.start_epoch, num_epochs):
            epoch_start_time = time.time()

            print(f"Epoch {epoch + 1}/{num_epochs}")
            print("-" * 60)

            # Train
            train_metrics = self.train_epoch(train_loader, epoch)

            # Validate
            val_metrics = self.validate(val_loader)

            epoch_time = time.time() - epoch_start_time

            # Print metrics
            print(f"\nEpoch {epoch + 1} Results:")
            print(f"  Training Loss: {train_metrics['total_loss']:.6f}")
            print(f"    Displacement: {train_metrics['displacement_loss']:.6f}")
            print(f"    Stress: {train_metrics['stress_loss']:.6f}")
            print(f"  Validation Loss: {val_metrics['total_loss']:.6f}")
            print(f"    Displacement: {val_metrics['displacement_loss']:.6f}")
            print(f"    Stress: {val_metrics['stress_loss']:.6f}")
            print(f"  Time: {epoch_time:.1f}s")

            # Log to TensorBoard
            self.writer.add_scalar('Loss/train', train_metrics['total_loss'], epoch)
            self.writer.add_scalar('Loss/val', val_metrics['total_loss'], epoch)
            self.writer.add_scalar('DisplacementLoss/train', train_metrics['displacement_loss'], epoch)
            self.writer.add_scalar('DisplacementLoss/val', val_metrics['displacement_loss'], epoch)
            self.writer.add_scalar('StressLoss/train', train_metrics['stress_loss'], epoch)
            self.writer.add_scalar('StressLoss/val', val_metrics['stress_loss'], epoch)
            self.writer.add_scalar('LearningRate', self.optimizer.param_groups[0]['lr'], epoch)

            # Learning rate scheduling
            self.scheduler.step(val_metrics['total_loss'])

            # Save checkpoint
            is_best = val_metrics['total_loss'] < self.best_val_loss
            if is_best:
                self.best_val_loss = val_metrics['total_loss']
                self.epochs_without_improvement = 0
                print(f"  New best validation loss: {self.best_val_loss:.6f}")
            else:
                self.epochs_without_improvement += 1

            self.save_checkpoint(epoch, val_metrics, is_best)

            # Early stopping
            patience = self.config.get('early_stopping_patience', 50)
            if self.epochs_without_improvement >= patience:
                print(f"\nEarly stopping after {patience} epochs without improvement")
                break

            print()

        print(f"\n{'='*60}")
        print("Training complete!")
        print(f"Best validation loss: {self.best_val_loss:.6f}")
        print(f"{'='*60}\n")

        self.writer.close()

    def save_checkpoint(self, epoch, metrics, is_best=False):
        """
        Save model checkpoint

        Args:
            epoch (int): Current epoch
            metrics (dict): Validation metrics
            is_best (bool): Whether this is the best model so far
        """
        checkpoint = {
            'epoch': epoch,
            'model_state_dict': self.model.state_dict(),
            'optimizer_state_dict': self.optimizer.state_dict(),
            'scheduler_state_dict': self.scheduler.state_dict(),
            'best_val_loss': self.best_val_loss,
            'config': self.config,
            'metrics': metrics
        }

        # Save latest checkpoint
        checkpoint_path = self.output_dir / 'checkpoint_latest.pt'
        torch.save(checkpoint, checkpoint_path)

        # Save best checkpoint
        if is_best:
            best_path = self.output_dir / 'checkpoint_best.pt'
            torch.save(checkpoint, best_path)
            print(f"  Saved best model to {best_path}")

        # Save periodic checkpoint
        if (epoch + 1) % 10 == 0:
            periodic_path = self.output_dir / f'checkpoint_epoch_{epoch + 1}.pt'
            torch.save(checkpoint, periodic_path)

    def load_checkpoint(self, checkpoint_path):
        """
        Load model checkpoint

        Args:
            checkpoint_path (str): Path to checkpoint file
        """
        checkpoint = torch.load(checkpoint_path, map_location=self.device)

        self.model.load_state_dict(checkpoint['model_state_dict'])
        self.optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
        self.scheduler.load_state_dict(checkpoint['scheduler_state_dict'])
        self.start_epoch = checkpoint['epoch'] + 1
        self.best_val_loss = checkpoint['best_val_loss']

        print(f"Loaded checkpoint from epoch {checkpoint['epoch']}")
        print(f"Best validation loss: {self.best_val_loss:.6f}")


def main():
    """
    Main training entry point
    """
    parser = argparse.ArgumentParser(description='Train AtomizerField neural field predictor')

    # Data arguments
    parser.add_argument('--train_dir', type=str, required=True,
                        help='Directory containing training cases')
    parser.add_argument('--val_dir', type=str, required=True,
                        help='Directory containing validation cases')

    # Training arguments
    parser.add_argument('--epochs', type=int, default=100,
                        help='Number of training epochs')
    parser.add_argument('--batch_size', type=int, default=4,
                        help='Batch size')
    parser.add_argument('--lr', type=float, default=1e-3,
                        help='Learning rate')
    parser.add_argument('--weight_decay', type=float, default=1e-5,
                        help='Weight decay')

    # Model arguments
    parser.add_argument('--hidden_dim', type=int, default=128,
                        help='Hidden dimension')
    parser.add_argument('--num_layers', type=int, default=6,
                        help='Number of GNN layers')
    parser.add_argument('--dropout', type=float, default=0.1,
                        help='Dropout rate')

    # Loss arguments
    parser.add_argument('--loss_type', type=str, default='mse',
                        choices=['mse', 'relative', 'physics', 'max'],
                        help='Loss function type')

    # Other arguments
    parser.add_argument('--output_dir', type=str, default='./runs',
                        help='Output directory for checkpoints and logs')
    parser.add_argument('--resume', type=str, default=None,
                        help='Path to checkpoint to resume from')
    parser.add_argument('--num_workers', type=int, default=0,
                        help='Number of data loading workers')

    args = parser.parse_args()

    # Build configuration
    config = {
        'model': {
            'node_feature_dim': 12,  # 3 coords + 6 BCs + 3 loads
            'edge_feature_dim': 5,  # E, nu, rho, G, alpha
            'hidden_dim': args.hidden_dim,
            'num_layers': args.num_layers,
            'dropout': args.dropout
        },
        'loss': {
            'type': args.loss_type
        },
        'learning_rate': args.lr,
        'weight_decay': args.weight_decay,
        'batch_size': args.batch_size,
        'num_epochs': args.epochs,
        'output_dir': args.output_dir,
        'early_stopping_patience': 50
    }

    # Find all case directories
    train_cases = list(Path(args.train_dir).glob('*/'))
    val_cases = list(Path(args.val_dir).glob('*/'))

    print(f"Found {len(train_cases)} training cases")
    print(f"Found {len(val_cases)} validation cases")

    if not train_cases or not val_cases:
        print("ERROR: No training or validation cases found!")
        print("Please ensure your directories contain parsed FEA data.")
        return

    # Create data loaders
    train_loader, val_loader = create_dataloaders(
        train_cases,
        val_cases,
        batch_size=args.batch_size,
        num_workers=args.num_workers,
        normalize=True,
        include_stress=True
    )

    # Create trainer
    trainer = Trainer(config)

    # Resume from checkpoint if specified
    if args.resume:
        trainer.load_checkpoint(args.resume)

    # Train
    trainer.train(train_loader, val_loader, args.epochs)


if __name__ == "__main__":
    main()
@@ -1,789 +0,0 @@
|
||||
"""
|
||||
train_parametric.py
|
||||
Training script for AtomizerField parametric predictor
|
||||
|
||||
AtomizerField Parametric Training Pipeline v2.0
|
||||
Trains design-conditioned GNN to predict optimization objectives directly.
|
||||
|
||||
Usage:
|
||||
python train_parametric.py --train_dir ./training_data --val_dir ./validation_data
|
||||
|
||||
Key Differences from train.py (field predictor):
|
||||
- Predicts scalar objectives (mass, frequency, displacement, stress) instead of fields
|
||||
- Uses design parameters as conditioning input
|
||||
- Multi-objective loss function for all 4 outputs
|
||||
- Faster training due to simpler output structure
|
||||
|
||||
Output:
|
||||
checkpoint_best.pt containing:
|
||||
- model_state_dict: Trained weights
|
||||
- config: Model configuration
|
||||
- normalization: Normalization statistics for inference
|
||||
- design_var_names: Names of design variables
|
||||
- best_val_loss: Best validation loss achieved
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import sys
|
||||
from pathlib import Path
|
||||
import time
|
||||
from datetime import datetime
|
||||
from typing import Dict, List, Any, Optional, Tuple
|
||||
|
||||
import numpy as np
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
import torch.optim as optim
|
||||
from torch.utils.data import Dataset, DataLoader
|
||||
import h5py
|
||||
|
||||
# Try to import tensorboard, but make it optional
|
||||
try:
|
||||
from torch.utils.tensorboard import SummaryWriter
|
||||
TENSORBOARD_AVAILABLE = True
|
||||
except ImportError:
|
||||
TENSORBOARD_AVAILABLE = False
|
||||
|
||||
from neural_models.parametric_predictor import create_parametric_model, ParametricFieldPredictor
|
||||
|
||||
|
||||
class ParametricDataset(Dataset):
|
||||
"""
|
||||
PyTorch Dataset for parametric training.
|
||||
|
||||
Loads training data exported by Atomizer's TrainingDataExporter
|
||||
and prepares it for the parametric predictor.
|
||||
|
||||
Expected directory structure:
|
||||
training_data/
|
||||
├── trial_0000/
|
||||
│ ├── metadata.json (design params + results)
|
||||
│ └── input/
|
||||
│ └── neural_field_data.h5 (mesh data)
|
||||
├── trial_0001/
|
||||
│ └── ...
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
data_dir: Path,
|
||||
normalize: bool = True,
|
||||
cache_in_memory: bool = False
|
||||
):
|
||||
"""
|
||||
Initialize dataset.
|
||||
|
||||
Args:
|
||||
data_dir: Directory containing trial_* subdirectories
|
||||
normalize: Whether to normalize inputs/outputs
|
||||
cache_in_memory: Cache all data in RAM (faster but memory-intensive)
|
||||
"""
|
||||
self.data_dir = Path(data_dir)
|
||||
self.normalize = normalize
|
||||
self.cache_in_memory = cache_in_memory
|
||||
|
||||
# Find all valid trial directories
|
||||
self.trial_dirs = sorted([
|
||||
d for d in self.data_dir.glob("trial_*")
|
||||
if d.is_dir() and self._is_valid_trial(d)
|
||||
])
|
||||
|
||||
print(f"Found {len(self.trial_dirs)} valid trials in {data_dir}")
|
||||
|
||||
if len(self.trial_dirs) == 0:
|
||||
raise ValueError(f"No valid trial directories found in {data_dir}")
|
||||
|
||||
# Extract design variable names from first trial
|
||||
self.design_var_names = self._get_design_var_names()
|
||||
print(f"Design variables: {self.design_var_names}")
|
||||
|
||||
# Compute normalization statistics
|
||||
if normalize:
|
||||
self._compute_normalization_stats()
|
||||
|
||||
# Cache data if requested
|
||||
self.cache = {}
|
||||
if cache_in_memory:
|
||||
print("Caching data in memory...")
|
||||
for idx in range(len(self.trial_dirs)):
|
||||
self.cache[idx] = self._load_trial(idx)
|
||||
print("Cache complete!")
|
||||
|
||||
def _is_valid_trial(self, trial_dir: Path) -> bool:
|
||||
"""Check if trial directory has required files."""
|
||||
metadata_file = trial_dir / "metadata.json"
|
||||
|
||||
# Check for metadata
|
||||
if not metadata_file.exists():
|
||||
return False
|
||||
|
||||
# Check metadata has required fields
|
||||
try:
|
||||
with open(metadata_file, 'r') as f:
|
||||
metadata = json.load(f)
|
||||
|
||||
has_design = 'design_parameters' in metadata
|
||||
has_results = 'results' in metadata
|
||||
|
||||
return has_design and has_results
|
||||
except:
|
||||
return False
|
||||
|
||||
def _get_design_var_names(self) -> List[str]:
|
||||
"""Extract design variable names from first trial."""
|
||||
metadata_file = self.trial_dirs[0] / "metadata.json"
|
||||
with open(metadata_file, 'r') as f:
|
||||
metadata = json.load(f)
|
||||
return list(metadata['design_parameters'].keys())
|
||||
|
||||
def _compute_normalization_stats(self):
|
||||
"""Compute normalization statistics across all trials."""
|
||||
print("Computing normalization statistics...")
|
||||
|
||||
all_design_params = []
|
||||
all_mass = []
|
||||
all_disp = []
|
||||
all_stiffness = []
|
||||
|
||||
for trial_dir in self.trial_dirs:
|
||||
with open(trial_dir / "metadata.json", 'r') as f:
|
||||
metadata = json.load(f)
|
||||
|
||||
# Design parameters
|
||||
design_params = [metadata['design_parameters'][name]
|
||||
for name in self.design_var_names]
|
||||
all_design_params.append(design_params)
|
||||
|
||||
# Results
|
||||
results = metadata.get('results', {})
|
||||
objectives = results.get('objectives', results)
|
||||
|
||||
if 'mass' in objectives:
|
||||
all_mass.append(objectives['mass'])
|
||||
if 'max_displacement' in results:
|
||||
all_disp.append(results['max_displacement'])
|
||||
elif 'max_displacement' in objectives:
|
||||
all_disp.append(objectives['max_displacement'])
|
||||
if 'stiffness' in objectives:
|
||||
all_stiffness.append(objectives['stiffness'])
|
||||
|
||||
# Convert to numpy arrays
|
||||
all_design_params = np.array(all_design_params)
|
||||
|
||||
# Compute statistics
|
||||
self.design_mean = torch.from_numpy(all_design_params.mean(axis=0)).float()
|
||||
self.design_std = torch.from_numpy(all_design_params.std(axis=0)).float()
|
||||
self.design_std = torch.clamp(self.design_std, min=1e-6) # Prevent division by zero
|
||||
|
||||
# Output statistics
|
||||
self.mass_mean = np.mean(all_mass) if all_mass else 0.1
|
||||
self.mass_std = np.std(all_mass) if all_mass else 0.05
|
||||
self.mass_std = max(self.mass_std, 1e-6)
|
||||
|
||||
self.disp_mean = np.mean(all_disp) if all_disp else 0.01
|
||||
self.disp_std = np.std(all_disp) if all_disp else 0.005
|
||||
self.disp_std = max(self.disp_std, 1e-6)
|
||||
|
||||
self.stiffness_mean = np.mean(all_stiffness) if all_stiffness else 20000.0
|
||||
self.stiffness_std = np.std(all_stiffness) if all_stiffness else 5000.0
|
||||
self.stiffness_std = max(self.stiffness_std, 1e-6)
|
||||
|
||||
# Frequency and stress defaults (if not available in data)
|
||||
self.freq_mean = 18.0
|
||||
self.freq_std = 5.0
|
||||
self.stress_mean = 200.0
|
||||
self.stress_std = 50.0
|
||||
|
||||
print(f" Design mean: {self.design_mean.numpy()}")
|
||||
print(f" Design std: {self.design_std.numpy()}")
|
||||
print(f" Mass: {self.mass_mean:.4f} +/- {self.mass_std:.4f}")
|
||||
print(f" Displacement: {self.disp_mean:.6f} +/- {self.disp_std:.6f}")
|
||||
print(f" Stiffness: {self.stiffness_mean:.2f} +/- {self.stiffness_std:.2f}")

    def __len__(self) -> int:
        return len(self.trial_dirs)

    def __getitem__(self, idx: int) -> Dict[str, torch.Tensor]:
        if self.cache_in_memory and idx in self.cache:
            return self.cache[idx]
        return self._load_trial(idx)

    def _load_trial(self, idx: int) -> Dict[str, torch.Tensor]:
        """Load and process a single trial."""
        trial_dir = self.trial_dirs[idx]

        # Load metadata
        with open(trial_dir / "metadata.json", 'r') as f:
            metadata = json.load(f)

        # Extract design parameters
        design_params = [metadata['design_parameters'][name]
                         for name in self.design_var_names]
        design_tensor = torch.tensor(design_params, dtype=torch.float32)

        # Normalize design parameters
        if self.normalize:
            design_tensor = (design_tensor - self.design_mean) / self.design_std

        # Extract results
        results = metadata.get('results', {})
        objectives = results.get('objectives', results)

        # Get targets (with fallbacks)
        mass = objectives.get('mass', 0.1)
        stiffness = objectives.get('stiffness', 20000.0)
        max_displacement = results.get('max_displacement',
                                       objectives.get('max_displacement', 0.01))

        # Frequency and stress might not be available
        frequency = objectives.get('frequency', self.freq_mean)
        max_stress = objectives.get('max_stress', self.stress_mean)

        # Create target tensor
        targets = torch.tensor([mass, frequency, max_displacement, max_stress],
                               dtype=torch.float32)

        # Normalize targets
        if self.normalize:
            targets[0] = (targets[0] - self.mass_mean) / self.mass_std
            targets[1] = (targets[1] - self.freq_mean) / self.freq_std
            targets[2] = (targets[2] - self.disp_mean) / self.disp_std
            targets[3] = (targets[3] - self.stress_mean) / self.stress_std

        # Try to load mesh data if available
        mesh_data = self._load_mesh_data(trial_dir)

        sample = {
            'design_params': design_tensor,
            'targets': targets,
            'mesh_data': mesh_data,
            'trial_dir': str(trial_dir)
        }

        # Populate the in-memory cache so __getitem__ hits it on repeat access
        if self.cache_in_memory:
            self.cache[idx] = sample

        return sample
    def _load_mesh_data(self, trial_dir: Path) -> Optional[Dict[str, torch.Tensor]]:
        """Load mesh data from H5 file if available."""
        h5_paths = [
            trial_dir / "input" / "neural_field_data.h5",
            trial_dir / "neural_field_data.h5",
        ]

        for h5_path in h5_paths:
            if h5_path.exists():
                try:
                    with h5py.File(h5_path, 'r') as f:
                        node_coords = torch.from_numpy(f['mesh/node_coordinates'][:]).float()

                        # Build simple edge index from connectivity if available
                        # For now, return just coordinates
                        return {
                            'node_coords': node_coords,
                            'num_nodes': node_coords.shape[0]
                        }
                except Exception as e:
                    print(f"Warning: Could not load mesh from {h5_path}: {e}")

        return None
    def get_normalization_stats(self) -> Dict[str, Any]:
        """Return normalization statistics for saving with model."""
        return {
            'design_mean': self.design_mean.numpy().tolist(),
            'design_std': self.design_std.numpy().tolist(),
            'mass_mean': float(self.mass_mean),
            'mass_std': float(self.mass_std),
            'freq_mean': float(self.freq_mean),
            'freq_std': float(self.freq_std),
            'max_disp_mean': float(self.disp_mean),
            'max_disp_std': float(self.disp_std),
            'max_stress_mean': float(self.stress_mean),
            'max_stress_std': float(self.stress_std),
        }

def create_reference_graph(num_nodes: int = 500, device: torch.device = None):
    """
    Create a reference graph structure for the GNN.

    In production, this would come from the actual mesh.
    For now, create a simple grid-like structure.
    """
    if device is None:
        device = torch.device('cpu')

    # Create simple node features (placeholder)
    x = torch.randn(num_nodes, 12, device=device)

    # Create grid-like connectivity - ensure valid indices
    edges = []
    grid_size = int(np.ceil(np.sqrt(num_nodes)))

    for i in range(num_nodes):
        row = i // grid_size
        col = i % grid_size

        # Right neighbor (same row)
        right = i + 1
        if col < grid_size - 1 and right < num_nodes:
            edges.append([i, right])
            edges.append([right, i])

        # Bottom neighbor (next row)
        bottom = i + grid_size
        if bottom < num_nodes:
            edges.append([i, bottom])
            edges.append([bottom, i])

    # Ensure we have at least some edges
    if len(edges) == 0:
        # Fallback: chain each node to a few of its successors
        for i in range(num_nodes):
            for j in range(i + 1, min(i + 5, num_nodes)):
                edges.append([i, j])
                edges.append([j, i])

    edge_index = torch.tensor(edges, dtype=torch.long, device=device).t().contiguous()
    edge_attr = torch.randn(edge_index.shape[1], 5, device=device)

    from torch_geometric.data import Data
    return Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
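The grid connectivity rule above is independent of torch; a dependency-free sketch of the same neighbor logic (the function name `grid_edges` is hypothetical):

```python
import math

# Same neighbor rule as create_reference_graph: connect each node to its
# right and bottom neighbors on a square grid, adding both edge directions.
def grid_edges(num_nodes):
    grid_size = math.ceil(math.sqrt(num_nodes))
    edges = []
    for i in range(num_nodes):
        col = i % grid_size
        right, bottom = i + 1, i + grid_size
        if col < grid_size - 1 and right < num_nodes:
            edges += [(i, right), (right, i)]
        if bottom < num_nodes:
            edges += [(i, bottom), (bottom, i)]
    return edges
```

For a 2x2 grid (`num_nodes=4`) this yields 8 directed edges, which is what `edge_index` would be built from.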


class ParametricTrainer:
    """
    Training manager for parametric predictor models.
    """

    def __init__(self, config: Dict[str, Any]):
        """
        Initialize trainer.

        Args:
            config: Training configuration
        """
        self.config = config
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

        print(f"\n{'='*60}")
        print("AtomizerField Parametric Training Pipeline v2.0")
        print(f"{'='*60}")
        print(f"Device: {self.device}")

        # Create model
        print("\nCreating parametric model...")
        model_config = config.get('model', {})
        self.model = create_parametric_model(model_config)
        self.model = self.model.to(self.device)

        num_params = self.model.get_num_parameters()
        print(f"Model created: {num_params:,} parameters")

        # Create optimizer
        self.optimizer = optim.AdamW(
            self.model.parameters(),
            lr=config.get('learning_rate', 1e-3),
            weight_decay=config.get('weight_decay', 1e-5)
        )

        # Learning rate scheduler
        self.scheduler = optim.lr_scheduler.ReduceLROnPlateau(
            self.optimizer,
            mode='min',
            factor=0.5,
            patience=10,
            verbose=True
        )

        # Multi-objective loss weights
        self.loss_weights = config.get('loss_weights', {
            'mass': 1.0,
            'frequency': 1.0,
            'displacement': 1.0,
            'stress': 1.0
        })

        # Training state
        self.start_epoch = 0
        self.best_val_loss = float('inf')
        self.epochs_without_improvement = 0

        # Create output directories
        self.output_dir = Path(config.get('output_dir', './runs/parametric'))
        self.output_dir.mkdir(parents=True, exist_ok=True)

        # TensorBoard logging (optional)
        self.writer = None
        if TENSORBOARD_AVAILABLE:
            self.writer = SummaryWriter(log_dir=self.output_dir / 'tensorboard')

        # Create reference graph for inference
        self.reference_graph = create_reference_graph(
            num_nodes=config.get('reference_nodes', 500),
            device=self.device
        )

        # Save config
        with open(self.output_dir / 'config.json', 'w') as f:
            json.dump(config, f, indent=2)

    def compute_loss(
        self,
        predictions: Dict[str, torch.Tensor],
        targets: torch.Tensor
    ) -> Tuple[torch.Tensor, Dict[str, float]]:
        """
        Compute multi-objective loss.

        Args:
            predictions: Model outputs (mass, frequency, max_displacement, max_stress)
            targets: Target values [batch, 4]

        Returns:
            total_loss: Combined loss tensor
            loss_dict: Individual losses for logging
        """
        # MSE losses for each objective
        mass_loss = nn.functional.mse_loss(predictions['mass'], targets[:, 0])
        freq_loss = nn.functional.mse_loss(predictions['frequency'], targets[:, 1])
        disp_loss = nn.functional.mse_loss(predictions['max_displacement'], targets[:, 2])
        stress_loss = nn.functional.mse_loss(predictions['max_stress'], targets[:, 3])

        # Weighted combination
        total_loss = (
            self.loss_weights['mass'] * mass_loss +
            self.loss_weights['frequency'] * freq_loss +
            self.loss_weights['displacement'] * disp_loss +
            self.loss_weights['stress'] * stress_loss
        )

        loss_dict = {
            'total': total_loss.item(),
            'mass': mass_loss.item(),
            'frequency': freq_loss.item(),
            'displacement': disp_loss.item(),
            'stress': stress_loss.item()
        }

        return total_loss, loss_dict
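Stripped of tensors, `compute_loss` is a weighted sum of per-objective mean-squared errors. A plain-Python sketch of that combination (the helper `weighted_mse` is illustrative, not part of the trainer):

```python
# Weighted sum of per-objective MSEs, mirroring the structure of compute_loss:
# one scalar loss per objective, then a weighted total for backpropagation.
def weighted_mse(preds, targets, weights):
    per_objective = {}
    for key in weights:
        pairs = list(zip(preds[key], targets[key]))
        per_objective[key] = sum((p - t) ** 2 for p, t in pairs) / len(pairs)
    total = sum(weights[k] * per_objective[k] for k in weights)
    return total, per_objective
```

Equal weights of 1.0 (the default above) make this an unweighted sum; raising one weight biases the optimizer toward that objective.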

    def train_epoch(self, train_loader: DataLoader, epoch: int) -> Dict[str, float]:
        """Train for one epoch."""
        self.model.train()

        total_losses = {'total': 0, 'mass': 0, 'frequency': 0, 'displacement': 0, 'stress': 0}
        num_batches = 0

        for batch_idx, batch in enumerate(train_loader):
            # Get data
            design_params = batch['design_params'].to(self.device)
            targets = batch['targets'].to(self.device)

            # Zero gradients
            self.optimizer.zero_grad()

            # Forward pass (using reference graph)
            predictions = self.model(self.reference_graph, design_params)

            # Compute loss
            loss, loss_dict = self.compute_loss(predictions, targets)

            # Backward pass
            loss.backward()

            # Gradient clipping
            torch.nn.utils.clip_grad_norm_(self.model.parameters(), max_norm=1.0)

            # Update weights
            self.optimizer.step()

            # Accumulate losses
            for key in total_losses:
                total_losses[key] += loss_dict[key]
            num_batches += 1

            # Print progress
            if batch_idx % 10 == 0:
                print(f" Batch {batch_idx}/{len(train_loader)}: Loss={loss_dict['total']:.6f}")

        # Average losses
        return {k: v / num_batches for k, v in total_losses.items()}

    def validate(self, val_loader: DataLoader) -> Dict[str, float]:
        """Validate model."""
        self.model.eval()

        total_losses = {'total': 0, 'mass': 0, 'frequency': 0, 'displacement': 0, 'stress': 0}
        num_batches = 0

        with torch.no_grad():
            for batch in val_loader:
                design_params = batch['design_params'].to(self.device)
                targets = batch['targets'].to(self.device)

                predictions = self.model(self.reference_graph, design_params)
                _, loss_dict = self.compute_loss(predictions, targets)

                for key in total_losses:
                    total_losses[key] += loss_dict[key]
                num_batches += 1

        return {k: v / num_batches for k, v in total_losses.items()}

    def train(
        self,
        train_loader: DataLoader,
        val_loader: DataLoader,
        num_epochs: int,
        train_dataset: ParametricDataset
    ):
        """
        Main training loop.

        Args:
            train_loader: Training data loader
            val_loader: Validation data loader
            num_epochs: Number of epochs
            train_dataset: Training dataset (for normalization stats)
        """
        print(f"\n{'='*60}")
        print(f"Starting training for {num_epochs} epochs")
        print(f"{'='*60}\n")

        for epoch in range(self.start_epoch, num_epochs):
            epoch_start = time.time()

            print(f"Epoch {epoch + 1}/{num_epochs}")
            print("-" * 60)

            # Train
            train_metrics = self.train_epoch(train_loader, epoch)

            # Validate
            val_metrics = self.validate(val_loader)

            epoch_time = time.time() - epoch_start

            # Print metrics
            print(f"\nEpoch {epoch + 1} Results:")
            print(f" Training Loss: {train_metrics['total']:.6f}")
            print(f" Mass: {train_metrics['mass']:.6f}, Freq: {train_metrics['frequency']:.6f}")
            print(f" Disp: {train_metrics['displacement']:.6f}, Stress: {train_metrics['stress']:.6f}")
            print(f" Validation Loss: {val_metrics['total']:.6f}")
            print(f" Mass: {val_metrics['mass']:.6f}, Freq: {val_metrics['frequency']:.6f}")
            print(f" Disp: {val_metrics['displacement']:.6f}, Stress: {val_metrics['stress']:.6f}")
            print(f" Time: {epoch_time:.1f}s")

            # Log to TensorBoard
            if self.writer:
                self.writer.add_scalar('Loss/train', train_metrics['total'], epoch)
                self.writer.add_scalar('Loss/val', val_metrics['total'], epoch)
                for key in ['mass', 'frequency', 'displacement', 'stress']:
                    self.writer.add_scalar(f'{key}/train', train_metrics[key], epoch)
                    self.writer.add_scalar(f'{key}/val', val_metrics[key], epoch)

            # Learning rate scheduling
            self.scheduler.step(val_metrics['total'])

            # Save checkpoint
            is_best = val_metrics['total'] < self.best_val_loss
            if is_best:
                self.best_val_loss = val_metrics['total']
                self.epochs_without_improvement = 0
                print(f" New best validation loss: {self.best_val_loss:.6f}")
            else:
                self.epochs_without_improvement += 1

            self.save_checkpoint(epoch, val_metrics, train_dataset, is_best)

            # Early stopping
            patience = self.config.get('early_stopping_patience', 50)
            if self.epochs_without_improvement >= patience:
                print(f"\nEarly stopping after {patience} epochs without improvement")
                break

            print()

        print(f"\n{'='*60}")
        print("Training complete!")
        print(f"Best validation loss: {self.best_val_loss:.6f}")
        print(f"{'='*60}\n")

        if self.writer:
            self.writer.close()
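The patience logic woven through the loop above can be isolated into a small helper. A sketch, assuming lower validation loss is better (the class name `EarlyStopping` is illustrative):

```python
# Patience-based early stopping, mirroring best_val_loss /
# epochs_without_improvement bookkeeping in the training loop.
class EarlyStopping:
    def __init__(self, patience):
        self.patience = patience
        self.best = float('inf')
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

With `patience=50` (the default above), training runs until 50 consecutive epochs fail to improve on the best validation loss.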

    def save_checkpoint(
        self,
        epoch: int,
        metrics: Dict[str, float],
        dataset: ParametricDataset,
        is_best: bool = False
    ):
        """Save model checkpoint with all required metadata."""
        checkpoint = {
            'epoch': epoch,
            'model_state_dict': self.model.state_dict(),
            'optimizer_state_dict': self.optimizer.state_dict(),
            'scheduler_state_dict': self.scheduler.state_dict(),
            'best_val_loss': self.best_val_loss,
            'config': self.model.config,
            'normalization': dataset.get_normalization_stats(),
            'design_var_names': dataset.design_var_names,
            'metrics': metrics
        }

        # Save latest
        torch.save(checkpoint, self.output_dir / 'checkpoint_latest.pt')

        # Save best
        if is_best:
            best_path = self.output_dir / 'checkpoint_best.pt'
            torch.save(checkpoint, best_path)
            print(f" Saved best model to {best_path}")

        # Periodic checkpoint
        if (epoch + 1) % 10 == 0:
            torch.save(checkpoint, self.output_dir / f'checkpoint_epoch_{epoch + 1}.pt')


def collate_fn(batch: List[Dict]) -> Dict[str, torch.Tensor]:
    """Custom collate function for DataLoader."""
    design_params = torch.stack([item['design_params'] for item in batch])
    targets = torch.stack([item['targets'] for item in batch])

    return {
        'design_params': design_params,
        'targets': targets
    }


def main():
    """Main training entry point."""
    parser = argparse.ArgumentParser(
        description='Train AtomizerField parametric predictor'
    )

    # Data arguments
    parser.add_argument('--train_dir', type=str, required=True,
                        help='Directory containing training trial_* subdirs')
    parser.add_argument('--val_dir', type=str, default=None,
                        help='Directory containing validation data (uses split if not provided)')
    parser.add_argument('--val_split', type=float, default=0.2,
                        help='Validation split ratio if val_dir not provided')

    # Training arguments
    parser.add_argument('--epochs', type=int, default=200,
                        help='Number of training epochs')
    parser.add_argument('--batch_size', type=int, default=16,
                        help='Batch size')
    parser.add_argument('--learning_rate', type=float, default=1e-3,
                        help='Learning rate')
    parser.add_argument('--weight_decay', type=float, default=1e-5,
                        help='Weight decay')

    # Model arguments
    parser.add_argument('--hidden_channels', type=int, default=128,
                        help='Hidden dimension')
    parser.add_argument('--num_layers', type=int, default=4,
                        help='Number of GNN layers')
    parser.add_argument('--dropout', type=float, default=0.1,
                        help='Dropout rate')

    # Output arguments
    parser.add_argument('--output_dir', type=str, default='./runs/parametric',
                        help='Output directory')
    parser.add_argument('--resume', type=str, default=None,
                        help='Path to checkpoint to resume from')

    args = parser.parse_args()

    # Create datasets
    print("\nLoading training data...")
    train_dataset = ParametricDataset(args.train_dir, normalize=True)

    if args.val_dir:
        val_dataset = ParametricDataset(args.val_dir, normalize=True)
        # Share normalization stats
        val_dataset.design_mean = train_dataset.design_mean
        val_dataset.design_std = train_dataset.design_std
    else:
        # Split training data
        n_total = len(train_dataset)
        n_val = int(n_total * args.val_split)
        n_train = n_total - n_val

        train_dataset, val_dataset = torch.utils.data.random_split(
            train_dataset, [n_train, n_val]
        )
        print(f"Split: {n_train} train, {n_val} validation")

    # Create data loaders
    train_loader = DataLoader(
        train_dataset,
        batch_size=args.batch_size,
        shuffle=True,
        collate_fn=collate_fn,
        num_workers=0
    )

    val_loader = DataLoader(
        val_dataset,
        batch_size=args.batch_size,
        shuffle=False,
        collate_fn=collate_fn,
        num_workers=0
    )

    # Get design dimension from dataset
    if hasattr(train_dataset, 'design_var_names'):
        design_dim = len(train_dataset.design_var_names)
    else:
        # For split dataset, access underlying dataset
        design_dim = len(train_dataset.dataset.design_var_names)

    # Build configuration
    config = {
        'model': {
            'input_channels': 12,
            'edge_dim': 5,
            'hidden_channels': args.hidden_channels,
            'num_layers': args.num_layers,
            'design_dim': design_dim,
            'dropout': args.dropout
        },
        'learning_rate': args.learning_rate,
        'weight_decay': args.weight_decay,
        'batch_size': args.batch_size,
        'num_epochs': args.epochs,
        'output_dir': args.output_dir,
        'early_stopping_patience': 50,
        'loss_weights': {
            'mass': 1.0,
            'frequency': 1.0,
            'displacement': 1.0,
            'stress': 1.0
        }
    }

    # Create trainer
    trainer = ParametricTrainer(config)

    # Resume if specified
    if args.resume:
        checkpoint = torch.load(args.resume, map_location=trainer.device)
        trainer.model.load_state_dict(checkpoint['model_state_dict'])
        trainer.optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
        trainer.start_epoch = checkpoint['epoch'] + 1
        trainer.best_val_loss = checkpoint['best_val_loss']
        print(f"Resumed from epoch {checkpoint['epoch']}")

    # Get base dataset for normalization stats
    base_dataset = train_dataset
    if hasattr(train_dataset, 'dataset'):
        base_dataset = train_dataset.dataset

    # Train
    trainer.train(train_loader, val_loader, args.epochs, base_dataset)


if __name__ == "__main__":
    main()
@@ -1,454 +0,0 @@
"""
validate_parsed_data.py
Validates the parsed neural field data for completeness and physics consistency

AtomizerField Data Validator v1.0.0
Ensures parsed data meets quality standards for neural network training.

Usage:
    python validate_parsed_data.py <case_directory>

Example:
    python validate_parsed_data.py training_case_001
"""

import json
import h5py
import numpy as np
from pathlib import Path
import sys


class NeuralFieldDataValidator:
    """
    Validates parsed neural field data for:
    - File existence and format
    - Data completeness
    - Physics consistency
    - Data quality

    This ensures that data fed to neural networks is reliable and consistent.
    """

    def __init__(self, case_directory):
        """
        Initialize validator

        Args:
            case_directory (str or Path): Path to case containing parsed data
        """
        self.case_dir = Path(case_directory)
        self.json_file = self.case_dir / "neural_field_data.json"
        self.h5_file = self.case_dir / "neural_field_data.h5"
        self.errors = []
        self.warnings = []
        self.info = []

    def validate(self):
        """
        Run all validation checks

        Returns:
            bool: True if validation passed, False otherwise
        """
        print("\n" + "="*60)
        print("AtomizerField Data Validator v1.0")
        print("="*60)
        print(f"\nValidating: {self.case_dir.name}\n")

        # Check file existence
        if not self._check_files_exist():
            return False

        # Load data
        try:
            with open(self.json_file, 'r') as f:
                self.data = json.load(f)
            self.h5_data = h5py.File(self.h5_file, 'r')
        except Exception as e:
            self._add_error(f"Failed to load data files: {e}")
            return False

        # Run validation checks
        self._validate_structure()
        self._validate_metadata()
        self._validate_mesh()
        self._validate_materials()
        self._validate_boundary_conditions()
        self._validate_loads()
        self._validate_results()
        self._validate_physics_consistency()
        self._validate_data_quality()

        # Close HDF5 file
        self.h5_data.close()

        # Print results
        self._print_results()

        return len(self.errors) == 0

    def _check_files_exist(self):
        """Check that required files exist"""
        if not self.json_file.exists():
            self._add_error(f"JSON file not found: {self.json_file}")
            return False

        if not self.h5_file.exists():
            self._add_error(f"HDF5 file not found: {self.h5_file}")
            return False

        self._add_info(f"Found JSON: {self.json_file.name}")
        self._add_info(f"Found HDF5: {self.h5_file.name}")
        return True

    def _validate_structure(self):
        """Validate data structure has all required fields"""
        required_fields = [
            "metadata",
            "mesh",
            "materials",
            "boundary_conditions",
            "loads",
            "results"
        ]

        for field in required_fields:
            if field not in self.data:
                self._add_error(f"Missing required field: {field}")
            else:
                self._add_info(f"Found field: {field}")

    def _validate_metadata(self):
        """Validate metadata completeness"""
        if "metadata" not in self.data:
            return

        meta = self.data["metadata"]

        # Check version
        if "version" in meta:
            if meta["version"] != "1.0.0":
                self._add_warning(f"Data version {meta['version']} may not be compatible")
            else:
                self._add_info(f"Data version: {meta['version']}")

        # Check required metadata fields
        required = ["created_at", "source", "analysis_type", "units"]
        for field in required:
            if field not in meta:
                self._add_warning(f"Missing metadata field: {field}")

        if "analysis_type" in meta:
            self._add_info(f"Analysis type: {meta['analysis_type']}")

    def _validate_mesh(self):
        """Validate mesh data"""
        if "mesh" not in self.data:
            return

        mesh = self.data["mesh"]

        # Check statistics
        if "statistics" in mesh:
            stats = mesh["statistics"]
            n_nodes = stats.get("n_nodes", 0)
            n_elements = stats.get("n_elements", 0)

            self._add_info(f"Mesh: {n_nodes:,} nodes, {n_elements:,} elements")

            if n_nodes == 0:
                self._add_error("Mesh has no nodes")
            if n_elements == 0:
                self._add_error("Mesh has no elements")

            # Check element types
            if "element_types" in stats:
                elem_types = stats["element_types"]
                total_by_type = sum(elem_types.values())
                if total_by_type != n_elements:
                    self._add_warning(
                        f"Element type count ({total_by_type}) doesn't match "
                        f"total elements ({n_elements})"
                    )

                for etype, count in elem_types.items():
                    if count > 0:
                        self._add_info(f" {etype}: {count:,} elements")

        # Validate HDF5 mesh data
        if 'mesh' in self.h5_data:
            mesh_grp = self.h5_data['mesh']

            if 'node_coordinates' in mesh_grp:
                coords = mesh_grp['node_coordinates'][:]
                self._add_info(f"Node coordinates: shape {coords.shape}")

                # Check for NaN or inf
                if np.any(np.isnan(coords)):
                    self._add_error("Node coordinates contain NaN values")
                if np.any(np.isinf(coords)):
                    self._add_error("Node coordinates contain infinite values")

                # Check bounding box reasonableness
                bbox_size = np.max(coords, axis=0) - np.min(coords, axis=0)
                if np.any(bbox_size == 0):
                    self._add_warning("Mesh is planar or degenerate in one dimension")

    def _validate_materials(self):
        """Validate material data"""
        if "materials" not in self.data:
            return

        materials = self.data["materials"]

        if len(materials) == 0:
            self._add_warning("No materials defined")
            return

        self._add_info(f"Materials: {len(materials)} defined")

        for mat in materials:
            mat_id = mat.get("id", "unknown")
            mat_type = mat.get("type", "unknown")

            if mat_type == "MAT1":
                # Check required properties
                E = mat.get("E")
                nu = mat.get("nu")

                if E is None:
                    self._add_error(f"Material {mat_id}: Missing Young's modulus (E)")
                elif E <= 0:
                    self._add_error(f"Material {mat_id}: Invalid E = {E} (must be > 0)")

                if nu is None:
                    self._add_error(f"Material {mat_id}: Missing Poisson's ratio (nu)")
                elif nu < 0 or nu >= 0.5:
                    self._add_error(f"Material {mat_id}: Invalid nu = {nu} (must be 0 <= nu < 0.5)")
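The MAT1 checks above amount to simple range validation on the isotropic elastic constants. A standalone sketch of the same rules (the function name `check_mat1` is hypothetical):

```python
# Range validation for a MAT1 isotropic material record, mirroring
# _validate_materials: E must be positive, nu must lie in [0, 0.5).
def check_mat1(mat):
    errors = []
    E, nu = mat.get("E"), mat.get("nu")
    if E is None:
        errors.append("missing E")
    elif E <= 0:
        errors.append(f"invalid E={E}")
    if nu is None:
        errors.append("missing nu")
    elif not (0 <= nu < 0.5):
        errors.append(f"invalid nu={nu}")
    return errors
```

A valid steel-like record (`E=210000.0`, `nu=0.3`) produces no errors; `nu >= 0.5` is rejected because it implies a non-positive bulk-to-shear relationship for an isotropic solid.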

    def _validate_boundary_conditions(self):
        """Validate boundary conditions"""
        if "boundary_conditions" not in self.data:
            return

        bcs = self.data["boundary_conditions"]

        spc_count = len(bcs.get("spc", []))
        mpc_count = len(bcs.get("mpc", []))

        self._add_info(f"Boundary conditions: {spc_count} SPCs, {mpc_count} MPCs")

        if spc_count == 0:
            self._add_warning("No SPCs defined - model may be unconstrained")

    def _validate_loads(self):
        """Validate load data"""
        if "loads" not in self.data:
            return

        loads = self.data["loads"]

        force_count = len(loads.get("point_forces", []))
        pressure_count = len(loads.get("pressure", []))
        gravity_count = len(loads.get("gravity", []))
        thermal_count = len(loads.get("thermal", []))

        total_loads = force_count + pressure_count + gravity_count + thermal_count

        self._add_info(
            f"Loads: {force_count} forces, {pressure_count} pressures, "
            f"{gravity_count} gravity, {thermal_count} thermal"
        )

        if total_loads == 0:
            self._add_warning("No loads defined")

        # Validate force magnitudes
        for force in loads.get("point_forces", []):
            mag = force.get("magnitude")
            if mag == 0:
                self._add_warning(f"Force at node {force.get('node')} has zero magnitude")

    def _validate_results(self):
        """Validate results data"""
        if "results" not in self.data:
            self._add_error("No results data found")
            return

        results = self.data["results"]

        # Check displacement
        if "displacement" not in results:
            self._add_error("No displacement results found")
        else:
            disp = results["displacement"]
            n_nodes = len(disp.get("node_ids", []))
            max_disp = disp.get("max_translation")

            self._add_info(f"Displacement: {n_nodes:,} nodes")
            if max_disp is not None:
                self._add_info(f" Max displacement: {max_disp:.6f} mm")

                if max_disp == 0:
                    self._add_warning("Maximum displacement is zero - check loads")
                elif max_disp > 1000:
                    self._add_warning(f"Very large displacement ({max_disp:.2f} mm) - check units or model")

        # Check stress
        if "stress" not in results or len(results["stress"]) == 0:
            self._add_warning("No stress results found")
        else:
            for stress_type, stress_data in results["stress"].items():
                n_elem = len(stress_data.get("element_ids", []))
                max_vm = stress_data.get("max_von_mises")

                self._add_info(f"Stress ({stress_type}): {n_elem:,} elements")
                if max_vm is not None:
                    self._add_info(f" Max von Mises: {max_vm:.2f} MPa")

                    if max_vm == 0:
                        self._add_warning(f"{stress_type}: Zero stress - check loads")

        # Validate HDF5 results
        if 'results' in self.h5_data:
            results_grp = self.h5_data['results']

            if 'displacement' in results_grp:
                disp_data = results_grp['displacement'][:]

                # Check for NaN or inf
                if np.any(np.isnan(disp_data)):
                    self._add_error("Displacement results contain NaN values")
                if np.any(np.isinf(disp_data)):
                    self._add_error("Displacement results contain infinite values")

    def _validate_physics_consistency(self):
        """Validate physics consistency of results"""
        if "results" not in self.data or "mesh" not in self.data:
            return

        results = self.data["results"]
        mesh = self.data["mesh"]

        # Check node count consistency
        mesh_nodes = mesh.get("statistics", {}).get("n_nodes", 0)

        if "displacement" in results:
            disp_nodes = len(results["displacement"].get("node_ids", []))
            if disp_nodes != mesh_nodes:
                self._add_warning(
                    f"Displacement nodes ({disp_nodes:,}) != mesh nodes ({mesh_nodes:,})"
                )

        # Check for rigid body motion (if no constraints)
        if "boundary_conditions" in self.data:
            spc_count = len(self.data["boundary_conditions"].get("spc", []))
            if spc_count == 0 and "displacement" in results:
                max_disp = results["displacement"].get("max_translation", 0)
                if max_disp > 1e6:
                    self._add_error("Unconstrained model with very large displacements - likely rigid body motion")

    def _validate_data_quality(self):
        """Validate data quality for neural network training"""

        # Check HDF5 data types and shapes
        if 'results' in self.h5_data:
            results_grp = self.h5_data['results']

            # Check displacement shape
            if 'displacement' in results_grp:
                disp = results_grp['displacement'][:]
                if len(disp.shape) != 2:
                    self._add_error(f"Displacement has wrong shape: {disp.shape} (expected 2D)")
                elif disp.shape[1] != 6:
                    self._add_error(f"Displacement has {disp.shape[1]} DOFs (expected 6)")

        # Check file sizes
        json_size = self.json_file.stat().st_size / 1024  # KB
        h5_size = self.h5_file.stat().st_size / 1024  # KB

        self._add_info(f"File sizes: JSON={json_size:.1f} KB, HDF5={h5_size:.1f} KB")

        if json_size > 10000:  # 10 MB
            self._add_warning("JSON file is very large - consider moving more data to HDF5")

    def _add_error(self, message):
        """Add error message"""
        self.errors.append(message)

    def _add_warning(self, message):
        """Add warning message"""
        self.warnings.append(message)

    def _add_info(self, message):
        """Add info message"""
        self.info.append(message)

    def _print_results(self):
        """Print validation results"""
        print("\n" + "="*60)
        print("VALIDATION RESULTS")
        print("="*60)

        # Print info
        if self.info:
            print("\nInformation:")
            for msg in self.info:
                print(f" [INFO] {msg}")

        # Print warnings
        if self.warnings:
            print("\nWarnings:")
            for msg in self.warnings:
                print(f" [WARN] {msg}")

        # Print errors
        if self.errors:
            print("\nErrors:")
            for msg in self.errors:
                print(f" [X] {msg}")

        # Summary
        print("\n" + "="*60)
        if len(self.errors) == 0:
            print("[OK] VALIDATION PASSED")
            print("="*60)
            print("\nData is ready for neural network training!")
        else:
            print("[X] VALIDATION FAILED")
            print("="*60)
            print(f"\nFound {len(self.errors)} error(s), {len(self.warnings)} warning(s)")
            print("Please fix errors before using this data for training.")
|
||||
|
||||
print()
|
||||
|
||||
|
||||
def main():
|
||||
"""
|
||||
Main entry point for validation script
|
||||
"""
|
||||
if len(sys.argv) < 2:
|
||||
print("\nAtomizerField Data Validator v1.0")
|
||||
print("="*60)
|
||||
print("\nUsage:")
|
||||
print(" python validate_parsed_data.py <case_directory>")
|
||||
print("\nExample:")
|
||||
print(" python validate_parsed_data.py training_case_001")
|
||||
print()
|
||||
sys.exit(1)
|
||||
|
||||
case_dir = sys.argv[1]
|
||||
|
||||
if not Path(case_dir).exists():
|
||||
print(f"ERROR: Directory not found: {case_dir}")
|
||||
sys.exit(1)
|
||||
|
||||
validator = NeuralFieldDataValidator(case_dir)
|
||||
success = validator.validate()
|
||||
|
||||
sys.exit(0 if success else 1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
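The removed validator signaled pass/fail solely through its process exit code (`sys.exit(0 if success else 1)`), so it could be chained from batch tooling without parsing its printed report. A minimal sketch of that pattern — the script path and case names here are hypothetical examples, not part of the removed code:

```python
import subprocess
import sys


def validate_cases(case_dirs):
    """Run the (now removed) validator over several case directories.

    Returns the list of directories that failed validation, relying only
    on the validator's exit-code contract (0 = passed, nonzero = failed).
    """
    failed = []
    for case in case_dirs:
        result = subprocess.run(
            [sys.executable, "validate_parsed_data.py", case],
            capture_output=True,
        )
        if result.returncode != 0:  # validator exits nonzero on errors
            failed.append(case)
    return failed
```

Because the contract is just an exit code, the same loop works unchanged from a Makefile or CI shell step.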
@@ -1,46 +0,0 @@
# FEA Visualization Report

**Generated:** 2025-11-24T09:24:10.133023

**Case:** test_case_beam

---

## Model Information

- **Analysis Type:** SOL_Unknown
- **Nodes:** 5,179
- **Elements:** 4,866
- **Materials:** 1

## Mesh Structure

![Mesh](visualization_images/mesh.png)

The model contains 5,179 nodes and 4,866 elements.

## Displacement Results

![Displacement](visualization_images/displacement.png)

**Maximum Displacement:** 19.556875 mm

The plots show the original mesh (left) and deformed mesh (right) with displacement magnitude shown in color.

## Stress Results

![Stress](visualization_images/stress.png)

The stress distribution is shown with colors representing von Mises stress levels.

## Summary Statistics

| Property | Value |
|----------|-------|
| Nodes | 5,179 |
| Elements | 4,866 |
| Max Displacement | 19.556875 mm |

---

*Report generated by AtomizerField Visualizer*
@@ -1,515 +0,0 @@
"""
visualize_results.py
3D Visualization of FEA Results and Neural Predictions

Visualizes:
- Mesh structure
- Displacement fields
- Stress fields (von Mises)
- Comparison: FEA vs Neural predictions
"""

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import json
import h5py
from pathlib import Path
import argparse


class FEAVisualizer:
    """Visualize FEA results in 3D"""

    def __init__(self, case_dir):
        """
        Initialize visualizer

        Args:
            case_dir: Path to case directory with neural_field_data files
        """
        self.case_dir = Path(case_dir)

        # Load data
        print(f"Loading data from {case_dir}...")
        self.load_data()

    def load_data(self):
        """Load JSON metadata and HDF5 field data"""
        json_file = self.case_dir / "neural_field_data.json"
        h5_file = self.case_dir / "neural_field_data.h5"

        # Load JSON
        with open(json_file, 'r') as f:
            self.metadata = json.load(f)

        # Get connectivity from JSON
        self.connectivity = []
        if 'mesh' in self.metadata and 'elements' in self.metadata['mesh']:
            elements = self.metadata['mesh']['elements']
            # Elements are categorized by type: solid, shell, beam, rigid
            for elem_category in ['solid', 'shell', 'beam']:
                if elem_category in elements and isinstance(elements[elem_category], list):
                    for elem_data in elements[elem_category]:
                        elem_type = elem_data.get('type', '')
                        if elem_type in ['CQUAD4', 'CTRIA3', 'CTETRA', 'CHEXA']:
                            # Store connectivity: [elem_id, n1, n2, n3, n4, ...]
                            nodes = elem_data.get('nodes', [])
                            self.connectivity.append([elem_data['id']] + nodes)
        self.connectivity = np.array(self.connectivity) if self.connectivity else np.array([[]])

        # Load HDF5
        with h5py.File(h5_file, 'r') as f:
            self.node_coords = f['mesh/node_coordinates'][:]

            # Get node IDs to create mapping
            self.node_ids = f['mesh/node_ids'][:]
            # Create mapping from node ID to index
            self.node_id_to_idx = {nid: idx for idx, nid in enumerate(self.node_ids)}

            # Displacement
            if 'results/displacement' in f:
                self.displacement = f['results/displacement'][:]
            else:
                self.displacement = None

            # Stress (try different possible locations)
            self.stress = None
            if 'results/stress/cquad4_stress/data' in f:
                self.stress = f['results/stress/cquad4_stress/data'][:]
            elif 'results/stress/cquad4_stress' in f and hasattr(f['results/stress/cquad4_stress'], 'shape'):
                self.stress = f['results/stress/cquad4_stress'][:]
            elif 'results/stress' in f and hasattr(f['results/stress'], 'shape'):
                self.stress = f['results/stress'][:]

        print(f"Loaded {len(self.node_coords)} nodes, {len(self.connectivity)} elements")

    def plot_mesh(self, figsize=(12, 8), save_path=None):
        """
        Plot 3D mesh structure

        Args:
            figsize: Figure size
            save_path: Path to save figure (optional)
        """
        fig = plt.figure(figsize=figsize)
        ax = fig.add_subplot(111, projection='3d')

        # Extract coordinates
        x = self.node_coords[:, 0]
        y = self.node_coords[:, 1]
        z = self.node_coords[:, 2]

        # Plot nodes
        ax.scatter(x, y, z, c='blue', marker='.', s=1, alpha=0.3, label='Nodes')

        # Plot a subset of elements (for visibility)
        step = max(1, len(self.connectivity) // 1000)  # Show max 1000 elements
        for i in range(0, len(self.connectivity), step):
            elem = self.connectivity[i]
            if len(elem) < 5:  # Skip invalid elements
                continue

            # Get node IDs (skip element ID at position 0)
            node_ids = elem[1:5]  # First 4 nodes for CQUAD4

            # Convert node IDs to indices
            try:
                nodes = [self.node_id_to_idx[nid] for nid in node_ids]
            except KeyError:
                continue  # Skip if node not found

            # Get coordinates
            elem_coords = self.node_coords[nodes]

            # Plot element edges
            for j in range(min(4, len(nodes))):
                next_j = (j + 1) % len(nodes)
                ax.plot([elem_coords[j, 0], elem_coords[next_j, 0]],
                        [elem_coords[j, 1], elem_coords[next_j, 1]],
                        [elem_coords[j, 2], elem_coords[next_j, 2]],
                        'k-', linewidth=0.1, alpha=0.1)

        ax.set_xlabel('X (mm)')
        ax.set_ylabel('Y (mm)')
        ax.set_zlabel('Z (mm)')
        ax.set_title(f'Mesh Structure\n{len(self.node_coords)} nodes, {len(self.connectivity)} elements')

        # Equal aspect ratio
        self._set_equal_aspect(ax)

        if save_path:
            plt.savefig(save_path, dpi=150, bbox_inches='tight')
            print(f"Saved mesh plot to {save_path}")

        plt.show()

    def plot_displacement(self, scale=1.0, component='magnitude', figsize=(14, 8), save_path=None):
        """
        Plot displacement field

        Args:
            scale: Scale factor for displacement visualization
            component: 'magnitude', 'x', 'y', or 'z'
            figsize: Figure size
            save_path: Path to save figure
        """
        if self.displacement is None:
            print("No displacement data available")
            return

        fig = plt.figure(figsize=figsize)

        # Original mesh
        ax1 = fig.add_subplot(121, projection='3d')
        self._plot_mesh_with_field(ax1, self.displacement, component, scale=0)
        ax1.set_title('Original Mesh')

        # Deformed mesh
        ax2 = fig.add_subplot(122, projection='3d')
        self._plot_mesh_with_field(ax2, self.displacement, component, scale=scale)
        ax2.set_title(f'Deformed Mesh (scale={scale}x)\nDisplacement: {component}')

        plt.tight_layout()

        if save_path:
            plt.savefig(save_path, dpi=150, bbox_inches='tight')
            print(f"Saved displacement plot to {save_path}")

        plt.show()

    def plot_stress(self, component='von_mises', figsize=(12, 8), save_path=None):
        """
        Plot stress field

        Args:
            component: 'von_mises', 'xx', 'yy', 'zz', 'xy', 'yz', 'xz'
            figsize: Figure size
            save_path: Path to save figure
        """
        if self.stress is None:
            print("No stress data available")
            return

        fig = plt.figure(figsize=figsize)
        ax = fig.add_subplot(111, projection='3d')

        # Get stress component
        if component == 'von_mises':
            # Von Mises already computed (last column)
            stress_values = self.stress[:, -1]
        else:
            # Map component name to index
            comp_map = {'xx': 0, 'yy': 1, 'zz': 2, 'xy': 3, 'yz': 4, 'xz': 5}
            idx = comp_map.get(component, 0)
            stress_values = self.stress[:, idx]

        # Plot elements colored by stress
        self._plot_elements_with_stress(ax, stress_values)

        ax.set_xlabel('X (mm)')
        ax.set_ylabel('Y (mm)')
        ax.set_zlabel('Z (mm)')
        ax.set_title(f'Stress Field: {component}\nMax: {np.max(stress_values):.2f} MPa')

        self._set_equal_aspect(ax)

        if save_path:
            plt.savefig(save_path, dpi=150, bbox_inches='tight')
            print(f"Saved stress plot to {save_path}")

        plt.show()

    def plot_comparison(self, neural_predictions, figsize=(16, 6), save_path=None):
        """
        Plot comparison: FEA vs Neural predictions

        Args:
            neural_predictions: Dict with 'displacement' and/or 'stress'
            figsize: Figure size
            save_path: Path to save figure
        """
        fig = plt.figure(figsize=figsize)

        # Displacement comparison
        if self.displacement is not None and 'displacement' in neural_predictions:
            ax1 = fig.add_subplot(131, projection='3d')
            self._plot_mesh_with_field(ax1, self.displacement, 'magnitude', scale=10)
            ax1.set_title('FEA Displacement')

            ax2 = fig.add_subplot(132, projection='3d')
            neural_disp = neural_predictions['displacement']
            self._plot_mesh_with_field(ax2, neural_disp, 'magnitude', scale=10)
            ax2.set_title('Neural Prediction')

            # Error
            ax3 = fig.add_subplot(133, projection='3d')
            error = np.linalg.norm(self.displacement[:, :3] - neural_disp[:, :3], axis=1)
            self._plot_nodes_with_values(ax3, error)
            ax3.set_title('Prediction Error')

        plt.tight_layout()

        if save_path:
            plt.savefig(save_path, dpi=150, bbox_inches='tight')
            print(f"Saved comparison plot to {save_path}")

        plt.show()

    def _plot_mesh_with_field(self, ax, field, component, scale=1.0):
        """Helper: Plot mesh colored by field values"""
        # Get field component
        if component == 'magnitude':
            values = np.linalg.norm(field[:, :3], axis=1)
        elif component == 'x':
            values = field[:, 0]
        elif component == 'y':
            values = field[:, 1]
        elif component == 'z':
            values = field[:, 2]
        else:
            values = np.linalg.norm(field[:, :3], axis=1)

        # Apply deformation
        coords = self.node_coords + scale * field[:, :3]

        # Plot nodes colored by values
        scatter = ax.scatter(coords[:, 0], coords[:, 1], coords[:, 2],
                             c=values, cmap='jet', s=2)
        plt.colorbar(scatter, ax=ax, label=f'{component} (mm)')

        ax.set_xlabel('X (mm)')
        ax.set_ylabel('Y (mm)')
        ax.set_zlabel('Z (mm)')

        self._set_equal_aspect(ax)

    def _plot_elements_with_stress(self, ax, stress_values):
        """Helper: Plot elements colored by stress"""
        # Normalize stress for colormap
        vmin, vmax = np.min(stress_values), np.max(stress_values)
        norm = plt.Normalize(vmin=vmin, vmax=vmax)
        cmap = cm.get_cmap('jet')

        # Plot subset of elements
        step = max(1, len(self.connectivity) // 500)
        for i in range(0, min(len(self.connectivity), len(stress_values)), step):
            elem = self.connectivity[i]
            if len(elem) < 5:
                continue

            # Get node IDs and convert to indices
            node_ids = elem[1:5]
            try:
                nodes = [self.node_id_to_idx[nid] for nid in node_ids]
            except KeyError:
                continue

            elem_coords = self.node_coords[nodes]

            # Get stress color
            color = cmap(norm(stress_values[i]))

            # Plot filled quadrilateral
            from mpl_toolkits.mplot3d.art3d import Poly3DCollection
            verts = [elem_coords]
            poly = Poly3DCollection(verts, facecolors=color, edgecolors='k',
                                    linewidths=0.1, alpha=0.8)
            ax.add_collection3d(poly)

        # Colorbar
        sm = cm.ScalarMappable(cmap=cmap, norm=norm)
        sm.set_array([])
        plt.colorbar(sm, ax=ax, label='Stress (MPa)')

    def _plot_nodes_with_values(self, ax, values):
        """Helper: Plot nodes colored by values"""
        scatter = ax.scatter(self.node_coords[:, 0],
                             self.node_coords[:, 1],
                             self.node_coords[:, 2],
                             c=values, cmap='hot', s=2)
        plt.colorbar(scatter, ax=ax, label='Error (mm)')

        ax.set_xlabel('X (mm)')
        ax.set_ylabel('Y (mm)')
        ax.set_zlabel('Z (mm)')

        self._set_equal_aspect(ax)

    def _set_equal_aspect(self, ax):
        """Set equal aspect ratio for 3D plot"""
        # Get limits
        x_limits = [self.node_coords[:, 0].min(), self.node_coords[:, 0].max()]
        y_limits = [self.node_coords[:, 1].min(), self.node_coords[:, 1].max()]
        z_limits = [self.node_coords[:, 2].min(), self.node_coords[:, 2].max()]

        # Find max range
        max_range = max(x_limits[1] - x_limits[0],
                        y_limits[1] - y_limits[0],
                        z_limits[1] - z_limits[0])

        # Set limits
        x_middle = np.mean(x_limits)
        y_middle = np.mean(y_limits)
        z_middle = np.mean(z_limits)

        ax.set_xlim(x_middle - max_range/2, x_middle + max_range/2)
        ax.set_ylim(y_middle - max_range/2, y_middle + max_range/2)
        ax.set_zlim(z_middle - max_range/2, z_middle + max_range/2)
    def create_report(self, output_file='visualization_report.md'):
        """
        Create markdown report with all visualizations

        Args:
            output_file: Path to save report
        """
        print(f"\nGenerating visualization report...")

        # Create images directory
        img_dir = Path(output_file).parent / 'visualization_images'
        img_dir.mkdir(exist_ok=True)

        # Generate plots
        print("  Creating mesh plot...")
        self.plot_mesh(save_path=img_dir / 'mesh.png')
        plt.close('all')

        print("  Creating displacement plot...")
        self.plot_displacement(scale=10, save_path=img_dir / 'displacement.png')
        plt.close('all')

        print("  Creating stress plot...")
        self.plot_stress(save_path=img_dir / 'stress.png')
        plt.close('all')

        # Write report
        with open(output_file, 'w') as f:
            f.write(f"# FEA Visualization Report\n\n")
            f.write(f"**Generated:** {self.metadata['metadata']['created_at']}\n\n")
            f.write(f"**Case:** {self.metadata['metadata']['case_name']}\n\n")

            f.write("---\n\n")

            # Model info
            f.write("## Model Information\n\n")
            f.write(f"- **Analysis Type:** {self.metadata['metadata']['analysis_type']}\n")
            f.write(f"- **Nodes:** {self.metadata['mesh']['statistics']['n_nodes']:,}\n")
            f.write(f"- **Elements:** {self.metadata['mesh']['statistics']['n_elements']:,}\n")
            f.write(f"- **Materials:** {len(self.metadata['materials'])}\n\n")

            # Mesh
            f.write("## Mesh Structure\n\n")
            f.write("![Mesh](visualization_images/mesh.png)\n\n")
            f.write(f"The model contains {self.metadata['mesh']['statistics']['n_nodes']:,} nodes ")
            f.write(f"and {self.metadata['mesh']['statistics']['n_elements']:,} elements.\n\n")

            # Displacement
            if self.displacement is not None:
                max_disp = self.metadata['results']['displacement']['max_translation']
                f.write("## Displacement Results\n\n")
                f.write("![Displacement](visualization_images/displacement.png)\n\n")
                f.write(f"**Maximum Displacement:** {max_disp:.6f} mm\n\n")
                f.write("The plots show the original mesh (left) and deformed mesh (right) ")
                f.write("with displacement magnitude shown in color.\n\n")

            # Stress
            if self.stress is not None:
                f.write("## Stress Results\n\n")
                f.write("![Stress](visualization_images/stress.png)\n\n")

                # Get max stress from metadata
                if 'stress' in self.metadata['results']:
                    for stress_type, stress_data in self.metadata['results']['stress'].items():
                        if 'max_von_mises' in stress_data and stress_data['max_von_mises'] is not None:
                            max_stress = stress_data['max_von_mises']
                            f.write(f"**Maximum von Mises Stress:** {max_stress:.2f} MPa\n\n")
                            break

                f.write("The stress distribution is shown with colors representing von Mises stress levels.\n\n")

            # Statistics
            f.write("## Summary Statistics\n\n")
            f.write("| Property | Value |\n")
            f.write("|----------|-------|\n")
            f.write(f"| Nodes | {self.metadata['mesh']['statistics']['n_nodes']:,} |\n")
            f.write(f"| Elements | {self.metadata['mesh']['statistics']['n_elements']:,} |\n")

            if self.displacement is not None:
                max_disp = self.metadata['results']['displacement']['max_translation']
                f.write(f"| Max Displacement | {max_disp:.6f} mm |\n")

            if self.stress is not None and 'stress' in self.metadata['results']:
                for stress_type, stress_data in self.metadata['results']['stress'].items():
                    if 'max_von_mises' in stress_data and stress_data['max_von_mises'] is not None:
                        max_stress = stress_data['max_von_mises']
                        f.write(f"| Max von Mises Stress | {max_stress:.2f} MPa |\n")
                        break

            f.write("\n---\n\n")
            f.write("*Report generated by AtomizerField Visualizer*\n")

        print(f"\nReport saved to: {output_file}")


def main():
    """Main entry point"""
    parser = argparse.ArgumentParser(
        description='Visualize FEA results in 3D',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Visualize mesh
  python visualize_results.py test_case_beam --mesh

  # Visualize displacement
  python visualize_results.py test_case_beam --displacement

  # Visualize stress
  python visualize_results.py test_case_beam --stress

  # Generate full report
  python visualize_results.py test_case_beam --report

  # All visualizations
  python visualize_results.py test_case_beam --all
        """
    )

    parser.add_argument('case_dir', help='Path to case directory')
    parser.add_argument('--mesh', action='store_true', help='Plot mesh structure')
    parser.add_argument('--displacement', action='store_true', help='Plot displacement field')
    parser.add_argument('--stress', action='store_true', help='Plot stress field')
    parser.add_argument('--report', action='store_true', help='Generate markdown report')
    parser.add_argument('--all', action='store_true', help='Show all plots and generate report')
    parser.add_argument('--scale', type=float, default=10.0, help='Displacement scale factor (default: 10)')
    parser.add_argument('--output', default='visualization_report.md', help='Report output file')

    args = parser.parse_args()

    # Create visualizer
    viz = FEAVisualizer(args.case_dir)

    # Determine what to show
    show_all = args.all or not (args.mesh or args.displacement or args.stress or args.report)

    if args.mesh or show_all:
        print("\nShowing mesh structure...")
        viz.plot_mesh()

    if args.displacement or show_all:
        print("\nShowing displacement field...")
        viz.plot_displacement(scale=args.scale)

    if args.stress or show_all:
        print("\nShowing stress field...")
        viz.plot_stress()

    if args.report or show_all:
        print("\nGenerating report...")
        viz.create_report(args.output)


if __name__ == "__main__":
    main()
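The `_set_equal_aspect()` helper above works around matplotlib 3D axes not enforcing equal aspect by centering every axis on its midpoint and spanning all three by the largest coordinate range. The same idea as a standalone, testable helper — a sketch of the technique, not the removed method itself:

```python
import numpy as np


def equal_aspect_limits(coords):
    """Compute cubic axis limits that give a 3D scatter an equal aspect ratio.

    coords: (N, 3) array of XYZ points.
    Returns [(xlo, xhi), (ylo, yhi), (zlo, zhi)], each spanning the largest
    per-axis coordinate range, centered on that axis's midpoint.
    """
    mins, maxs = coords.min(axis=0), coords.max(axis=0)
    max_range = (maxs - mins).max()     # largest extent over x, y, z
    mids = (maxs + mins) / 2.0          # per-axis midpoints
    return [(m - max_range / 2, m + max_range / 2) for m in mids]
```

The returned tuples can be passed directly to `ax.set_xlim`, `ax.set_ylim`, and `ax.set_zlim`.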
@@ -1,144 +0,0 @@
"""
Atomizer Path Configuration

Provides intelligent path resolution for Atomizer core modules and directories.
This module can be imported from anywhere in the project hierarchy.
"""

from pathlib import Path
import sys


def get_atomizer_root() -> Path:
    """
    Get the Atomizer project root directory.

    This function intelligently locates the root by looking for marker files
    that uniquely identify the Atomizer project root.

    Returns:
        Path: Absolute path to Atomizer root directory

    Raises:
        RuntimeError: If Atomizer root cannot be found
    """
    # Start from this file's location
    current = Path(__file__).resolve().parent

    # Marker files that uniquely identify Atomizer root
    markers = [
        'optimization_engine',  # Core module directory
        'studies',              # Studies directory
        'README.md'             # Project README
    ]

    # Walk up the directory tree looking for all markers
    max_depth = 10  # Prevent infinite loop
    for _ in range(max_depth):
        # Check if all markers exist at this level
        if all((current / marker).exists() for marker in markers):
            return current

        # Move up one directory
        parent = current.parent
        if parent == current:  # Reached filesystem root
            break
        current = parent

    raise RuntimeError(
        "Could not locate Atomizer root directory. "
        "Make sure you're running from within the Atomizer project."
    )


def setup_python_path():
    """
    Add Atomizer root to Python path if not already present.

    This allows imports like `from optimization_engine.core.runner import ...`
    to work from anywhere in the project.
    """
    root = get_atomizer_root()
    root_str = str(root)

    if root_str not in sys.path:
        sys.path.insert(0, root_str)


# Core directories (lazy-loaded)
_ROOT = None


def root() -> Path:
    """Get Atomizer root directory."""
    global _ROOT
    if _ROOT is None:
        _ROOT = get_atomizer_root()
    return _ROOT


def optimization_engine() -> Path:
    """Get optimization_engine directory."""
    return root() / 'optimization_engine'


def studies() -> Path:
    """Get studies directory."""
    return root() / 'studies'


def tests() -> Path:
    """Get tests directory."""
    return root() / 'tests'


def docs() -> Path:
    """Get docs directory."""
    return root() / 'docs'


def plugins() -> Path:
    """Get plugins directory."""
    return optimization_engine() / 'plugins'


# Common files
def readme() -> Path:
    """Get README.md path."""
    return root() / 'README.md'


def roadmap() -> Path:
    """Get development roadmap path."""
    return root() / 'DEVELOPMENT_ROADMAP.md'


# Convenience function for scripts
def ensure_imports():
    """
    Ensure Atomizer modules can be imported.

    Call this at the start of any script to ensure proper imports:

    ```python
    import atomizer_paths
    atomizer_paths.ensure_imports()

    # Now you can import Atomizer modules
    from optimization_engine.core.runner import OptimizationRunner
    ```
    """
    setup_python_path()


if __name__ == '__main__':
    # Self-test
    print("Atomizer Path Configuration")
    print("=" * 60)
    print(f"Root: {root()}")
    print(f"Optimization Engine: {optimization_engine()}")
    print(f"Studies: {studies()}")
    print(f"Tests: {tests()}")
    print(f"Docs: {docs()}")
    print(f"Plugins: {plugins()}")
    print("=" * 60)
    print("\nAll paths resolved successfully!")
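The removed `get_atomizer_root()` used a marker-file walk: climb parent directories until one contains every marker entry, with a bounded depth to avoid looping at the filesystem root. The pattern generalizes to any project; a sketch with generic marker names (not the Atomizer ones):

```python
from pathlib import Path
from typing import Optional


def find_project_root(start: Path,
                      markers=("pyproject.toml", ".git")) -> Optional[Path]:
    """Walk upward from `start` until a directory containing all markers is found.

    Same pattern as the removed get_atomizer_root(); the marker names here
    are generic examples. Returns None instead of raising when no root exists.
    """
    current = start.resolve()
    for _ in range(10):  # bounded walk, mirrors the original max_depth
        if all((current / m).exists() for m in markers):
            return current
        if current.parent == current:  # filesystem root reached
            break
        current = current.parent
    return None
```

Returning `None` (rather than raising, as the removed helper did) is a design choice that lets callers decide whether a missing root is fatal.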
192
config.py
@@ -1,192 +0,0 @@
"""
Central Configuration for Atomizer

This file contains all system-level paths and settings.
To update NX version or paths, ONLY modify this file.
"""

from pathlib import Path
import os

# ============================================================================
# NX/SIMCENTER CONFIGURATION
# ============================================================================

# NX Installation Directory
# Change this to update NX version across entire Atomizer codebase
NX_VERSION = "2512"
NX_INSTALLATION_DIR = Path("C:/Program Files/Siemens/DesigncenterNX2512")

# Derived NX Paths (automatically updated when NX_VERSION changes)
NX_BIN_DIR = NX_INSTALLATION_DIR / "NXBIN"
NX_RUN_JOURNAL = NX_BIN_DIR / "run_journal.exe"
NX_MATERIALS_DIR = NX_INSTALLATION_DIR / "UGII" / "materials"
NX_MATERIAL_LIBRARY = NX_MATERIALS_DIR / "physicalmateriallibrary.xml"
NX_PYTHON_STUBS = NX_INSTALLATION_DIR / "ugopen" / "pythonStubs"

# Validate NX installation on import
if not NX_INSTALLATION_DIR.exists():
    raise FileNotFoundError(
        f"NX installation not found at: {NX_INSTALLATION_DIR}\n"
        f"Please update NX_VERSION or NX_INSTALLATION_DIR in config.py"
    )

if not NX_RUN_JOURNAL.exists():
    raise FileNotFoundError(
        f"run_journal.exe not found at: {NX_RUN_JOURNAL}\n"
        f"Please verify NX installation or update paths in config.py"
    )

# ============================================================================
# PYTHON ENVIRONMENT CONFIGURATION
# ============================================================================

# Python Environment Name
PYTHON_ENV_NAME = "atomizer"

# Anaconda/Conda base path (auto-detected from current interpreter)
CONDA_BASE = Path(os.environ.get("CONDA_PREFIX", "")).parent
PYTHON_ENV_PATH = CONDA_BASE / "envs" / PYTHON_ENV_NAME / "python.exe"

# ============================================================================
# PROJECT PATHS
# ============================================================================

# Project root directory
PROJECT_ROOT = Path(__file__).parent.resolve()

# Key directories
OPTIMIZATION_ENGINE_DIR = PROJECT_ROOT / "optimization_engine"
STUDIES_DIR = PROJECT_ROOT / "studies"
DOCS_DIR = PROJECT_ROOT / "docs"
TESTS_DIR = PROJECT_ROOT / "tests"

# ============================================================================
# NASTRAN CONFIGURATION
# ============================================================================

# Nastran solver timeout (seconds)
NASTRAN_TIMEOUT = 600

# Nastran file extensions
NASTRAN_INPUT_EXT = ".dat"
NASTRAN_OUTPUT_EXT = ".op2"
NASTRAN_TEXT_OUTPUT_EXT = ".f06"

# ============================================================================
# OPTIMIZATION DEFAULTS
# ============================================================================

# Default optimization parameters
DEFAULT_N_TRIALS = 50
DEFAULT_TIMEOUT = 3600  # 1 hour
DEFAULT_N_STARTUP_TRIALS = 10

# Optuna sampler settings
DEFAULT_SAMPLER = "TPESampler"
DEFAULT_SURROGATE = "gp"  # gaussian process

# ============================================================================
# LOGGING CONFIGURATION
# ============================================================================

LOG_LEVEL = "INFO"
LOG_FORMAT = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"

# ============================================================================
# DASHBOARD CONFIGURATION
# ============================================================================

DASHBOARD_HOST = "localhost"
DASHBOARD_PORT = 8050

# ============================================================================
# HELPER FUNCTIONS
# ============================================================================

def get_nx_journal_command(journal_script: Path, *args) -> str:
    """
    Generate NX journal execution command.

    Args:
        journal_script: Path to journal .py file
        *args: Additional arguments to pass to journal

    Returns:
        Command string ready for subprocess execution
    """
    cmd_parts = [f'"{NX_RUN_JOURNAL}"', f'"{journal_script}"']
    if args:
        cmd_parts.append("-args")
        cmd_parts.extend([f'"{arg}"' for arg in args])
    return " ".join(cmd_parts)


def validate_config():
    """
    Validate all critical paths exist.
    Raises FileNotFoundError if any critical path is missing.
    """
    critical_paths = {
        "NX Installation": NX_INSTALLATION_DIR,
        "NX run_journal.exe": NX_RUN_JOURNAL,
        "NX Material Library": NX_MATERIAL_LIBRARY,
        "NX Python Stubs": NX_PYTHON_STUBS,
        "Project Root": PROJECT_ROOT,
        "Optimization Engine": OPTIMIZATION_ENGINE_DIR,
    }

    missing = []
    for name, path in critical_paths.items():
        if not path.exists():
            missing.append(f"  - {name}: {path}")

    if missing:
        raise FileNotFoundError(
            "Critical paths missing:\n" + "\n".join(missing) +
            "\n\nPlease update paths in config.py"
        )

    return True


def print_config():
    """Print current configuration for debugging."""
    print("=" * 80)
    print("ATOMIZER CONFIGURATION")
    print("=" * 80)
    print(f"\nNX Configuration:")
    print(f"  Version: {NX_VERSION}")
    print(f"  Installation: {NX_INSTALLATION_DIR}")
    print(f"  run_journal.exe: {NX_RUN_JOURNAL}")
    print(f"  Material Library: {NX_MATERIAL_LIBRARY}")
    print(f"  Python Stubs: {NX_PYTHON_STUBS}")
    print(f"\nPython Environment:")
    print(f"  Environment Name: {PYTHON_ENV_NAME}")
    print(f"  Python Path: {PYTHON_ENV_PATH}")
    print(f"\nProject Paths:")
    print(f"  Root: {PROJECT_ROOT}")
    print(f"  Optimization Engine: {OPTIMIZATION_ENGINE_DIR}")
    print(f"  Studies: {STUDIES_DIR}")
    print("=" * 80)


# Validate configuration on import
validate_config()


# ============================================================================
# USAGE EXAMPLE
# ============================================================================

if __name__ == "__main__":
    # Print configuration
    print_config()

    # Example: Generate journal command
    example_journal = PROJECT_ROOT / "optimization_engine" / "import_expressions.py"
    example_prt = PROJECT_ROOT / "studies" / "beam" / "Beam.prt"
    example_exp = PROJECT_ROOT / "studies" / "beam" / "Beam_vars.exp"
|
||||
|
||||
cmd = get_nx_journal_command(example_journal, example_prt, example_exp)
|
||||
print(f"\nExample Journal Command:\n{cmd}")
|
||||
41 docs/reference/ATOMIZER_FIELD_CONCEPT.md Normal file
@@ -0,0 +1,41 @@
# AtomizerField - Neural Field Predictor (Concept Archive)

> **Status**: Concept archived. Code removed during repo cleanup (Feb 2026).
> Original location: `atomizer-field/`

## Idea

Instead of extracting just scalar values (max stress, mass) from FEA results, capture **complete field data** - stress, displacement, and strain at every node and element. Train a GNN to predict full fields from design parameters, enabling 1000x faster optimization with true physics understanding.

## Architecture (Two-Phase)

```
Phase 1: Data Pipeline
NX Nastran (.bdf, .op2) → neural_field_parser.py → Neural Field Format (JSON + HDF5)

Phase 2: Neural Network
Graph representation → GNN training (train.py + field_predictor.py) → Field predictions (5-50ms)
```

## Key Components

- **neural_field_parser.py** - Parsed BDF/OP2 into complete field data (displacement, stress, strain at ALL nodes/elements)
- **train.py** - GNN training pipeline using PyTorch Geometric
- **predict.py** - Inference for field predictions
- **optimization_interface.py** - Integration with Atomizer optimization loop

## Data Format

- HDF5 for large arrays (field values per node)
- JSON for metadata (mesh topology, material properties, BCs)
- Versioned format (v1.0) designed for neural network training

## Relationship to Current GNN Module

The `optimization_engine/gnn/` module (ZernikeGNN) is the evolved version of this concept, specialized for mirror Zernike coefficient prediction rather than full-field prediction. If full-field prediction is needed in the future, this concept provides the architecture blueprint.

## Training Data Requirements

- SOL 101 (Linear Static) results with DISPLACEMENT=ALL, STRESS=ALL, STRAIN=ALL
- Organized as training cases with input (.bdf) and output (.op2) pairs
- The `atomizer_field_training_data/` directory contained ~66MB of sample training data (also removed)
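The JSON + HDF5 split described above can be sketched as follows. This is a minimal illustration only: the actual v1.0 schema was removed with the code, all field and key names here are hypothetical, and `np.savez` stands in for HDF5 to keep the sketch dependency-free.

```python
import json
from pathlib import Path

import numpy as np


def save_neural_field_case(out_dir: Path, case_id: str,
                           node_xyz: np.ndarray, displacement: np.ndarray,
                           stress: np.ndarray, metadata: dict) -> None:
    """Write one training case: large arrays to a binary archive, metadata to JSON."""
    out_dir.mkdir(parents=True, exist_ok=True)
    # Large per-node/per-element arrays -> binary file (HDF5 in the original concept)
    np.savez(out_dir / f"{case_id}_fields.npz",
             node_xyz=node_xyz, displacement=displacement, stress=stress)
    # Small structured data -> JSON sidecar
    meta = {"format_version": "1.0", "case_id": case_id,
            "n_nodes": int(node_xyz.shape[0]), **metadata}
    (out_dir / f"{case_id}_meta.json").write_text(json.dumps(meta, indent=2))


# Tiny synthetic case: 4 nodes, 3 displacement components each
xyz = np.zeros((4, 3))
disp = np.ones((4, 3)) * 0.001
stress = np.array([10.0, 20.0, 15.0, 5.0])
save_neural_field_case(Path("nf_demo"), "case_001", xyz, disp, stress,
                       {"solver": "SOL 101"})
```

The point of the split is that the per-node arrays dominate the data volume and benefit from a binary container, while the metadata stays human-readable and diffable.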
@@ -1,320 +0,0 @@
"""
Space-Filling Training Data Generator

This script generates FEA training points that cover the ENTIRE design space
uniformly, unlike optimization which focuses only on promising regions.

Sampling Methods:
1. Latin Hypercube Sampling (LHS) - Good coverage, no clustering
2. Sobol Sequence - Quasi-random, very uniform
3. Grid Sampling - Regular grid, exhaustive but slow

Usage:
    python generate_training_data.py --study uav_arm_optimization --method lhs --points 100
"""
import sys
from pathlib import Path
import json
import argparse
import numpy as np
from scipy.stats import qmc  # For Latin Hypercube and Sobol

project_root = Path(__file__).parent
sys.path.insert(0, str(project_root))


def load_config_bounds(study_path: Path) -> tuple:
    """Load design variable bounds from optimization_config.json.

    Returns a (bounds, config) tuple.

    Supports two config formats:
    1. {"parameter": "name", "bounds": [min, max]} - Current format
    2. {"name": "name", "min_value": min, "max_value": max} - Legacy format
    """
    config_path = study_path / "1_setup" / "optimization_config.json"

    if not config_path.exists():
        raise FileNotFoundError(f"Config not found: {config_path}")

    with open(config_path) as f:
        config = json.load(f)

    bounds = {}
    for var in config.get('design_variables', []):
        # Support both 'parameter' and 'name' keys
        name = var.get('parameter') or var.get('name')

        # Support both "bounds": [min, max] and "min_value"/"max_value" formats
        if 'bounds' in var:
            min_val, max_val = var['bounds']
        else:
            min_val = var.get('min_value', var.get('min', 0))
            max_val = var.get('max_value', var.get('max', 1))

        # Detect integer type based on name or explicit type
        is_int = (var.get('type') == 'integer' or
                  'count' in name.lower() or
                  (isinstance(min_val, int) and isinstance(max_val, int) and max_val - min_val < 20))

        bounds[name] = {
            'min': min_val,
            'max': max_val,
            'type': 'int' if is_int else 'float'
        }

    return bounds, config


def generate_lhs_samples(bounds: dict, n_samples: int, seed: int = 42) -> list:
    """
    Generate Latin Hypercube Samples across the full design space.

    LHS ensures:
    - Each dimension is divided into n equal intervals
    - Exactly one sample in each interval per dimension
    - Much better coverage than random sampling
    """
    var_names = list(bounds.keys())
    n_dims = len(var_names)

    # Create LHS sampler
    sampler = qmc.LatinHypercube(d=n_dims, seed=seed)
    samples_unit = sampler.random(n=n_samples)  # Values in [0, 1]

    # Scale to actual bounds
    samples = []
    for i in range(n_samples):
        point = {}
        for j, name in enumerate(var_names):
            b = bounds[name]
            value = b['min'] + samples_unit[i, j] * (b['max'] - b['min'])

            if b['type'] == 'int':
                value = int(round(value))

            point[name] = value
        samples.append(point)

    return samples


def generate_sobol_samples(bounds: dict, n_samples: int, seed: int = 42) -> list:
    """
    Generate Sobol sequence samples (quasi-random, very uniform).

    Sobol sequences are deterministic and provide excellent uniformity.
    """
    var_names = list(bounds.keys())
    n_dims = len(var_names)

    sampler = qmc.Sobol(d=n_dims, scramble=True, seed=seed)
    samples_unit = sampler.random(n=n_samples)

    samples = []
    for i in range(n_samples):
        point = {}
        for j, name in enumerate(var_names):
            b = bounds[name]
            value = b['min'] + samples_unit[i, j] * (b['max'] - b['min'])

            if b['type'] == 'int':
                value = int(round(value))

            point[name] = value
        samples.append(point)

    return samples


def generate_grid_samples(bounds: dict, points_per_dim: int = 5) -> list:
    """
    Generate regular grid samples.

    Warning: Scales exponentially with dimensions!
    4 dims x 5 points = 625 samples
    4 dims x 10 points = 10,000 samples
    """
    var_names = list(bounds.keys())

    # Create linspace for each dimension
    grids = []
    for name in var_names:
        b = bounds[name]
        if b['type'] == 'int':
            # For integers, use actual integer values
            values = np.linspace(b['min'], b['max'], points_per_dim)
            values = np.unique(np.round(values).astype(int))
        else:
            values = np.linspace(b['min'], b['max'], points_per_dim)
        grids.append(values)

    # Create meshgrid and flatten
    mesh = np.meshgrid(*grids, indexing='ij')
    flat = [m.flatten() for m in mesh]

    samples = []
    for i in range(len(flat[0])):
        point = {}
        for j, name in enumerate(var_names):
            value = flat[j][i]
            if bounds[name]['type'] == 'int':
                value = int(value)
            else:
                value = float(value)
            point[name] = value
        samples.append(point)

    return samples


def generate_corner_samples(bounds: dict) -> list:
    """
    Generate samples at all corners of the design space.

    This ensures the NN sees the extreme combinations.
    For 4 dimensions: 2^4 = 16 corner points
    """
    var_names = list(bounds.keys())
    n_dims = len(var_names)

    samples = []
    for i in range(2**n_dims):
        point = {}
        for j, name in enumerate(var_names):
            b = bounds[name]
            # Use bit j of i to decide min or max
            value = b['max'] if (i >> j) & 1 else b['min']
            if b['type'] == 'int':
                value = int(value)
            point[name] = value
        samples.append(point)

    return samples


def save_training_points(samples: list, output_path: Path):
    """Save training points to JSON file."""
    with open(output_path, 'w') as f:
        json.dump({
            'n_samples': len(samples),
            'samples': samples
        }, f, indent=2)
    print(f"Saved {len(samples)} training points to: {output_path}")


def visualize_coverage(samples: list, bounds: dict, save_path: Path):
    """Visualize how well samples cover the design space."""
    import matplotlib
    matplotlib.use('Agg')
    import matplotlib.pyplot as plt

    var_names = list(bounds.keys())
    n_vars = len(var_names)

    # Create pairwise scatter plots
    fig, axes = plt.subplots(n_vars-1, n_vars-1, figsize=(12, 12))

    for i in range(n_vars - 1):
        for j in range(i + 1, n_vars):
            ax = axes[j-1, i] if n_vars > 2 else axes

            x = [s[var_names[i]] for s in samples]
            y = [s[var_names[j]] for s in samples]

            ax.scatter(x, y, alpha=0.5, s=20)
            ax.set_xlabel(var_names[i].replace('_', '\n'), fontsize=8)
            ax.set_ylabel(var_names[j].replace('_', '\n'), fontsize=8)

            # Show bounds
            b_i = bounds[var_names[i]]
            b_j = bounds[var_names[j]]
            ax.set_xlim(b_i['min'], b_i['max'])
            ax.set_ylim(b_j['min'], b_j['max'])
            ax.grid(True, alpha=0.3)

    # Hide unused subplots
    for i in range(n_vars - 1):
        for j in range(i):
            if n_vars > 2:
                axes[i, j].set_visible(False)

    plt.suptitle(f'Design Space Coverage ({len(samples)} samples)', fontsize=14)
    plt.tight_layout()
    plt.savefig(save_path, dpi=150)
    plt.close()
    print(f"Saved coverage plot: {save_path}")


def main():
    parser = argparse.ArgumentParser(description='Generate space-filling training data')
    parser.add_argument('--study', required=True, help='Study name (e.g., uav_arm_optimization)')
    parser.add_argument('--method', default='lhs', choices=['lhs', 'sobol', 'grid', 'corners', 'combined'],
                        help='Sampling method')
    parser.add_argument('--points', type=int, default=100, help='Number of samples (for lhs/sobol)')
    parser.add_argument('--grid-points', type=int, default=5, help='Points per dimension (for grid)')
    parser.add_argument('--seed', type=int, default=42, help='Random seed')
    parser.add_argument('--visualize', action='store_true', help='Generate coverage plot')
    args = parser.parse_args()

    study_path = project_root / "studies" / args.study
    if not study_path.exists():
        print(f"ERROR: Study not found: {study_path}")
        return

    print("="*70)
    print("Space-Filling Training Data Generator")
    print("="*70)

    # Load bounds from config
    print(f"\nLoading config from: {study_path}")
    bounds, config = load_config_bounds(study_path)

    print("\nDesign Variable Bounds:")
    for name, b in bounds.items():
        print(f"  {name}: [{b['min']}, {b['max']}] ({b['type']})")

    # Generate samples
    print(f"\nGenerating samples using method: {args.method}")

    if args.method == 'lhs':
        samples = generate_lhs_samples(bounds, args.points, args.seed)
    elif args.method == 'sobol':
        samples = generate_sobol_samples(bounds, args.points, args.seed)
    elif args.method == 'grid':
        samples = generate_grid_samples(bounds, args.grid_points)
    elif args.method == 'corners':
        samples = generate_corner_samples(bounds)
    elif args.method == 'combined':
        # Combine corners + LHS for best coverage
        corner_samples = generate_corner_samples(bounds)
        lhs_samples = generate_lhs_samples(bounds, args.points - len(corner_samples), args.seed)
        samples = corner_samples + lhs_samples
        print(f"  Combined: {len(corner_samples)} corners + {len(lhs_samples)} LHS")

    print(f"  Generated {len(samples)} samples")

    # Show sample range coverage
    print("\nSample Coverage:")
    for name in bounds.keys():
        values = [s[name] for s in samples]
        print(f"  {name}: [{min(values):.2f}, {max(values):.2f}]")

    # Save samples
    output_path = study_path / "1_setup" / "training_points.json"
    save_training_points(samples, output_path)

    # Visualize if requested
    if args.visualize:
        plot_path = study_path / "1_setup" / "training_coverage.png"
        visualize_coverage(samples, bounds, plot_path)

    print("\n" + "="*70)
    print("NEXT STEPS")
    print("="*70)
    print(f"1. Run FEA on all {len(samples)} training points:")
    print(f"   python run_training_fea.py --study {args.study}")
    print("2. This will create comprehensive training data")
    print("3. Then retrain NN on this uniform data")


if __name__ == '__main__':
    main()
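The one-sample-per-interval guarantee claimed in the `generate_lhs_samples` docstring above can be checked directly with `scipy.stats.qmc.LatinHypercube`, the same sampler the removed script used:

```python
import numpy as np
from scipy.stats import qmc

n, d = 8, 2
sampler = qmc.LatinHypercube(d=d, seed=0)
u = sampler.random(n=n)  # points in the unit hypercube [0, 1)^d

# Stratification check: bin each dimension into n equal intervals.
# LHS places exactly one sample in every interval of every dimension,
# which plain uniform random sampling does not guarantee.
for j in range(d):
    occupied = np.floor(u[:, j] * n).astype(int)
    assert sorted(occupied.tolist()) == list(range(n))

print(f"All {n} intervals hold exactly one sample in each of {d} dimensions")
```

This stratification is exactly what the script relied on to cover the design space without clustering, before scaling the unit-cube samples to the configured bounds.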
@@ -1,73 +0,0 @@
"""
Extract element forces from CBAR in Z direction from OP2
Auto-generated by Atomizer Phase 3 - pyNastran Research Agent

Pattern: cbar_force
Element Type: CBAR
Result Type: force
API: model.cbar_force[subcase]
"""

from pathlib import Path
from typing import Dict, Any
import numpy as np
from pyNastran.op2.op2 import OP2


def extract_cbar_force(op2_file: Path, subcase: int = 1, direction: str = 'Z'):
    """
    Extract forces from CBAR elements.

    Args:
        op2_file: Path to OP2 file
        subcase: Subcase ID
        direction: Force direction ('X', 'Y', 'Z', 'axial', 'torque')

    Returns:
        Dict with force statistics
    """
    model = OP2()
    model.read_op2(str(op2_file))

    if not hasattr(model, 'cbar_force'):
        raise ValueError("No CBAR force results in OP2")

    force = model.cbar_force[subcase]
    itime = 0

    # CBAR force data structure:
    # [bending_moment_a1, bending_moment_a2,
    #  bending_moment_b1, bending_moment_b2,
    #  shear1, shear2, axial, torque]

    direction_map = {
        'shear1': 4,
        'shear2': 5,
        'axial': 6,
        'Z': 6,  # Commonly axial is Z direction
        'torque': 7
    }

    col_idx = direction_map.get(direction, direction_map.get(direction.lower(), 6))
    forces = force.data[itime, :, col_idx]

    return {
        f'max_{direction}_force': float(np.max(np.abs(forces))),
        f'avg_{direction}_force': float(np.mean(np.abs(forces))),
        f'min_{direction}_force': float(np.min(np.abs(forces))),
        'forces_array': forces.tolist()
    }


if __name__ == '__main__':
    # Example usage
    import sys
    if len(sys.argv) > 1:
        op2_file = Path(sys.argv[1])
        result = extract_cbar_force(op2_file)
        print(f"Extraction result: {result}")
    else:
        print(f"Usage: python {sys.argv[0]} <op2_file>")
@@ -1,56 +0,0 @@
"""
Extract displacement from bracket OP2
Auto-generated by Atomizer Phase 3 - pyNastran Research Agent

Pattern: displacement
Element Type: General
Result Type: displacement
API: model.displacements[subcase]
"""

from pathlib import Path
from typing import Dict, Any
import numpy as np
from pyNastran.op2.op2 import OP2


def extract_displacement(op2_file: Path, subcase: int = 1):
    """Extract displacement results from OP2 file."""
    model = OP2()
    model.read_op2(str(op2_file))

    disp = model.displacements[subcase]
    itime = 0  # static case

    # Extract translation components
    txyz = disp.data[itime, :, :3]  # [tx, ty, tz]

    # Calculate total displacement
    total_disp = np.linalg.norm(txyz, axis=1)
    max_disp = np.max(total_disp)

    # Get node info
    node_ids = [nid for (nid, grid_type) in disp.node_gridtype]
    max_disp_node = node_ids[np.argmax(total_disp)]

    return {
        'max_displacement': float(max_disp),
        'max_disp_node': int(max_disp_node),
        'max_disp_x': float(np.max(np.abs(txyz[:, 0]))),
        'max_disp_y': float(np.max(np.abs(txyz[:, 1]))),
        'max_disp_z': float(np.max(np.abs(txyz[:, 2])))
    }


if __name__ == '__main__':
    # Example usage
    import sys
    if len(sys.argv) > 1:
        op2_file = Path(sys.argv[1])
        result = extract_displacement(op2_file)
        print(f"Extraction result: {result}")
    else:
        print(f"Usage: python {sys.argv[0]} <op2_file>")
@@ -1,75 +0,0 @@
"""
Comparison Hook
Auto-generated by Atomizer Phase 2.9

Compare min force to average

Operation: ratio
Formula: min_to_avg_ratio = min_force / avg_force
"""

import sys
import json
from pathlib import Path


def compare_ratio(min_force, avg_force):
    """
    Compare values using ratio.

    Args:
        min_force: float
        avg_force: float

    Returns:
        float: Comparison result
    """
    result = min_force / avg_force
    return result


def main():
    """Main entry point for hook execution."""
    if len(sys.argv) < 2:
        print("Usage: python {} <input_file.json>".format(sys.argv[0]))
        sys.exit(1)

    input_file = Path(sys.argv[1])

    # Read inputs
    with open(input_file, 'r') as f:
        inputs = json.load(f)

    # Extract required inputs
    min_force = inputs.get("min_force")
    if min_force is None:
        print("Error: Required input 'min_force' not found")
        sys.exit(1)
    avg_force = inputs.get("avg_force")
    if avg_force is None:
        print("Error: Required input 'avg_force' not found")
        sys.exit(1)

    # Calculate comparison
    result = compare_ratio(min_force, avg_force)

    # Write output
    output_file = input_file.parent / "min_to_avg_ratio.json"
    output = {
        "min_to_avg_ratio": result,
        "operation": "ratio",
        "formula": "min_force / avg_force",
        "inputs_used": {"min_force": min_force, "avg_force": avg_force}
    }

    with open(output_file, 'w') as f:
        json.dump(output, f, indent=2)

    print(f"min_to_avg_ratio = {result:.6f}")
    print(f"Result saved to: {output_file}")

    return result


if __name__ == '__main__':
    main()
@@ -1,83 +0,0 @@
"""
Constraint Check Hook
Auto-generated by Atomizer Phase 2.9

Check if stress is below yield

Constraint: max_stress / yield_strength
Threshold: 1.0
"""

import sys
import json
from pathlib import Path


def check_yield_constraint(max_stress, yield_strength):
    """
    Check constraint condition.

    Args:
        max_stress: float
        yield_strength: float

    Returns:
        tuple: (satisfied: bool, value: float, violation: float)
    """
    value = max_stress / yield_strength
    satisfied = value <= 1.0
    violation = max(0.0, value - 1.0)

    return satisfied, value, violation


def main():
    """Main entry point for hook execution."""
    if len(sys.argv) < 2:
        print("Usage: python {} <input_file.json>".format(sys.argv[0]))
        sys.exit(1)

    input_file = Path(sys.argv[1])

    # Read inputs
    with open(input_file, 'r') as f:
        inputs = json.load(f)

    # Extract required inputs
    max_stress = inputs.get("max_stress")
    if max_stress is None:
        print("Error: Required input 'max_stress' not found")
        sys.exit(1)
    yield_strength = inputs.get("yield_strength")
    if yield_strength is None:
        print("Error: Required input 'yield_strength' not found")
        sys.exit(1)

    # Check constraint
    satisfied, value, violation = check_yield_constraint(max_stress, yield_strength)

    # Write output
    output_file = input_file.parent / "yield_constraint_check.json"
    output = {
        "constraint_name": "yield_constraint",
        "satisfied": satisfied,
        "value": value,
        "threshold": 1.0,
        "violation": violation,
        "inputs_used": {"max_stress": max_stress, "yield_strength": yield_strength}
    }

    with open(output_file, 'w') as f:
        json.dump(output, f, indent=2)

    status = "SATISFIED" if satisfied else "VIOLATED"
    print(f"Constraint {status}: {value:.6f} (threshold: 1.0)")
    if not satisfied:
        print(f"Violation: {violation:.6f}")
    print(f"Result saved to: {output_file}")

    return value


if __name__ == '__main__':
    main()
@@ -1,74 +0,0 @@
"""
Custom Formula Hook
Auto-generated by Atomizer Phase 2.9

Calculate safety factor

Formula: safety_factor = yield_strength / max_stress
Inputs: max_stress, yield_strength
"""

import sys
import json
from pathlib import Path


def calculate_safety_factor(max_stress, yield_strength):
    """
    Calculate custom metric using formula.

    Args:
        max_stress: float
        yield_strength: float

    Returns:
        float: safety_factor
    """
    safety_factor = yield_strength / max_stress
    return safety_factor


def main():
    """Main entry point for hook execution."""
    if len(sys.argv) < 2:
        print("Usage: python {} <input_file.json>".format(sys.argv[0]))
        sys.exit(1)

    input_file = Path(sys.argv[1])

    # Read inputs
    with open(input_file, 'r') as f:
        inputs = json.load(f)

    # Extract required inputs
    max_stress = inputs.get("max_stress")
    if max_stress is None:
        print("Error: Required input 'max_stress' not found")
        sys.exit(1)
    yield_strength = inputs.get("yield_strength")
    if yield_strength is None:
        print("Error: Required input 'yield_strength' not found")
        sys.exit(1)

    # Calculate result
    result = calculate_safety_factor(max_stress, yield_strength)

    # Write output
    output_file = input_file.parent / "safety_factor_result.json"
    output = {
        "safety_factor": result,
        "formula": "yield_strength / max_stress",
        "inputs_used": {"max_stress": max_stress, "yield_strength": yield_strength}
    }

    with open(output_file, 'w') as f:
        json.dump(output, f, indent=2)

    print(f"safety_factor = {result:.6f}")
    print(f"Result saved to: {output_file}")

    return result


if __name__ == '__main__':
    main()
@@ -1,54 +0,0 @@
{
  "hooks": [
    {
      "name": "hook_weighted_objective_norm_stress_norm_disp.py",
      "type": "weighted_objective",
      "description": "Combine normalized stress (70%) and displacement (30%)",
      "inputs": ["norm_stress", "norm_disp"],
      "outputs": ["weighted_objective"]
    },
    {
      "name": "hook_custom_safety_factor.py",
      "type": "custom_formula",
      "description": "Calculate safety factor",
      "inputs": ["max_stress", "yield_strength"],
      "outputs": ["safety_factor"]
    },
    {
      "name": "hook_compare_min_to_avg_ratio.py",
      "type": "comparison",
      "description": "Compare min force to average",
      "inputs": ["min_force", "avg_force"],
      "outputs": ["min_to_avg_ratio"]
    },
    {
      "name": "hook_constraint_yield_constraint.py",
      "type": "constraint_check",
      "description": "Check if stress is below yield",
      "inputs": ["max_stress", "yield_strength"],
      "outputs": ["yield_constraint", "yield_constraint_satisfied", "yield_constraint_violation"]
    }
  ]
}
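A driver for a manifest of this shape only needs to check, per hook, whether all declared inputs are available before invoking the script. A minimal readiness check might look like the sketch below; the dispatcher itself is hypothetical (the real one was part of the removed generator), and only the manifest shape comes from the `hooks.json` above.

```python
import json

# Two entries copied from the manifest above, inlined for a self-contained demo
MANIFEST = json.loads("""
{
  "hooks": [
    {"name": "hook_custom_safety_factor.py", "type": "custom_formula",
     "inputs": ["max_stress", "yield_strength"], "outputs": ["safety_factor"]},
    {"name": "hook_compare_min_to_avg_ratio.py", "type": "comparison",
     "inputs": ["min_force", "avg_force"], "outputs": ["min_to_avg_ratio"]}
  ]
}
""")


def runnable_hooks(manifest: dict, available: set) -> list:
    """Return names of hooks whose declared inputs are all present."""
    return [h["name"] for h in manifest["hooks"]
            if set(h["inputs"]) <= available]


ready = runnable_hooks(MANIFEST, {"max_stress", "yield_strength"})
print(ready)  # only the safety-factor hook has all of its inputs available
```

Because hooks also declare `outputs`, the same check can be iterated to chain hooks: each completed hook adds its outputs to the available set, unlocking downstream hooks.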
@@ -1,85 +0,0 @@
"""
Weighted Objective Function Hook
Auto-generated by Atomizer Phase 2.9

Combine normalized stress (70%) and displacement (30%)

Inputs: norm_stress, norm_disp
Weights: 0.7, 0.3
Formula: 0.7 * norm_stress + 0.3 * norm_disp
Objective: minimize
"""

import sys
import json
from pathlib import Path


def weighted_objective(norm_stress, norm_disp):
    """
    Calculate weighted objective from multiple inputs.

    Args:
        norm_stress: float
        norm_disp: float

    Returns:
        float: Weighted objective value
    """
    result = 0.7 * norm_stress + 0.3 * norm_disp
    return result


def main():
    """
    Main entry point for hook execution.
    Reads inputs from JSON file, calculates objective, writes output.
    """
    # Parse command line arguments
    if len(sys.argv) < 2:
        print("Usage: python {} <input_file.json>".format(sys.argv[0]))
        sys.exit(1)

    input_file = Path(sys.argv[1])

    # Read inputs
    if not input_file.exists():
        print(f"Error: Input file {input_file} not found")
        sys.exit(1)

    with open(input_file, 'r') as f:
        inputs = json.load(f)

    # Extract required inputs
    norm_stress = inputs.get("norm_stress")
    if norm_stress is None:
        print("Error: Required input 'norm_stress' not found")
        sys.exit(1)
    norm_disp = inputs.get("norm_disp")
    if norm_disp is None:
        print("Error: Required input 'norm_disp' not found")
        sys.exit(1)

    # Calculate weighted objective
    result = weighted_objective(norm_stress, norm_disp)

    # Write output
    output_file = input_file.parent / "weighted_objective_result.json"
    output = {
        "weighted_objective": result,
        "objective_type": "minimize",
        "inputs_used": {"norm_stress": norm_stress, "norm_disp": norm_disp},
        "formula": "0.7 * norm_stress + 0.3 * norm_disp"
    }

    with open(output_file, 'w') as f:
        json.dump(output, f, indent=2)

    print(f"Weighted objective calculated: {result:.6f}")
    print(f"Result saved to: {output_file}")

    return result


if __name__ == '__main__':
    main()
@@ -1,4 +0,0 @@
{
  "norm_stress": 0.75,
  "norm_disp": 0.64
}
@@ -1,9 +0,0 @@
{
  "weighted_objective": 0.7169999999999999,
  "objective_type": "minimize",
  "inputs_used": {
    "norm_stress": 0.75,
    "norm_disp": 0.64
  },
  "formula": "0.7 * norm_stress + 0.3 * norm_disp"
}
@@ -1,208 +0,0 @@
#!/usr/bin/env python3
"""
optimization_engine Migration Script
=====================================
Automatically updates all imports across the codebase.

Usage:
    python migrate_imports.py --dry-run   # Preview changes
    python migrate_imports.py --execute   # Apply changes
"""

import os
import re
import sys
from pathlib import Path
from typing import Dict, List, Tuple

# Import mappings (old -> new) - using regex patterns
IMPORT_MAPPINGS = {
    # =========================================================================
    # CORE MODULE
    # =========================================================================
    r'from optimization_engine\.runner\b': 'from optimization_engine.core.runner',
    r'from optimization_engine\.base_runner\b': 'from optimization_engine.core.base_runner',
    r'from optimization_engine\.runner_with_neural\b': 'from optimization_engine.core.runner_with_neural',
    r'from optimization_engine\.intelligent_optimizer\b': 'from optimization_engine.core.intelligent_optimizer',
    r'from optimization_engine\.method_selector\b': 'from optimization_engine.core.method_selector',
    r'from optimization_engine\.strategy_selector\b': 'from optimization_engine.core.strategy_selector',
    r'from optimization_engine\.strategy_portfolio\b': 'from optimization_engine.core.strategy_portfolio',
    r'from optimization_engine\.gradient_optimizer\b': 'from optimization_engine.core.gradient_optimizer',
    r'import optimization_engine\.runner\b': 'import optimization_engine.core.runner',
    r'import optimization_engine\.intelligent_optimizer\b': 'import optimization_engine.core.intelligent_optimizer',

    # =========================================================================
    # SURROGATES MODULE
    # =========================================================================
    r'from optimization_engine\.neural_surrogate\b': 'from optimization_engine.processors.surrogates.neural_surrogate',
    r'from optimization_engine\.generic_surrogate\b': 'from optimization_engine.processors.surrogates.generic_surrogate',
    r'from optimization_engine\.adaptive_surrogate\b': 'from optimization_engine.processors.surrogates.adaptive_surrogate',
    r'from optimization_engine\.simple_mlp_surrogate\b': 'from optimization_engine.processors.surrogates.simple_mlp_surrogate',
    r'from optimization_engine\.active_learning_surrogate\b': 'from optimization_engine.processors.surrogates.active_learning_surrogate',
    r'from optimization_engine\.surrogate_tuner\b': 'from optimization_engine.processors.surrogates.surrogate_tuner',
    r'from optimization_engine\.auto_trainer\b': 'from optimization_engine.processors.surrogates.auto_trainer',
    r'from optimization_engine\.training_data_exporter\b': 'from optimization_engine.processors.surrogates.training_data_exporter',

    # =========================================================================
    # NX MODULE
    # =========================================================================
    r'from optimization_engine\.nx_solver\b': 'from optimization_engine.nx.solver',
    r'from optimization_engine\.nx_updater\b': 'from optimization_engine.nx.updater',
    r'from optimization_engine\.nx_session_manager\b': 'from optimization_engine.nx.session_manager',
    r'from optimization_engine\.solve_simulation\b': 'from optimization_engine.nx.solve_simulation',
    r'from optimization_engine\.solve_simulation_simple\b': 'from optimization_engine.nx.solve_simulation_simple',
    r'from optimization_engine\.model_cleanup\b': 'from optimization_engine.nx.model_cleanup',
    r'from optimization_engine\.export_expressions\b': 'from optimization_engine.nx.export_expressions',
    r'from optimization_engine\.import_expressions\b': 'from optimization_engine.nx.import_expressions',
    r'from optimization_engine\.mesh_converter\b': 'from optimization_engine.nx.mesh_converter',
    r'import optimization_engine\.nx_solver\b': 'import optimization_engine.nx.solver',
    r'import optimization_engine\.nx_updater\b': 'import optimization_engine.nx.updater',

    # =========================================================================
    # STUDY MODULE
    # =========================================================================
    r'from optimization_engine\.study_creator\b': 'from optimization_engine.study.creator',
    r'from optimization_engine\.study_wizard\b': 'from optimization_engine.study.wizard',
    r'from optimization_engine\.study_state\b': 'from optimization_engine.study.state',
    r'from optimization_engine\.study_reset\b': 'from optimization_engine.study.reset',
    r'from optimization_engine\.study_continuation\b': 'from optimization_engine.study.continuation',
    r'from optimization_engine\.benchmarking_substudy\b': 'from optimization_engine.study.benchmarking',
    r'from optimization_engine\.generate_history_from_trials\b': 'from optimization_engine.study.history_generator',

    # =========================================================================
    # REPORTING MODULE
    # =========================================================================
    r'from optimization_engine\.generate_report\b': 'from optimization_engine.reporting.report_generator',
    r'from optimization_engine\.generate_report_markdown\b': 'from optimization_engine.reporting.markdown_report',
    r'from optimization_engine\.comprehensive_results_analyzer\b': 'from optimization_engine.reporting.results_analyzer',
    r'from optimization_engine\.visualizer\b': 'from optimization_engine.reporting.visualizer',
    r'from optimization_engine\.landscape_analyzer\b': 'from optimization_engine.reporting.landscape_analyzer',

    # =========================================================================
    # CONFIG MODULE
    # =========================================================================
    r'from optimization_engine\.config_manager\b': 'from optimization_engine.config.manager',
    r'from optimization_engine\.optimization_config_builder\b': 'from optimization_engine.config.builder',
    r'from optimization_engine\.optimization_setup_wizard\b': 'from optimization_engine.config.setup_wizard',
    r'from optimization_engine\.capability_matcher\b': 'from optimization_engine.config.capability_matcher',
    r'from optimization_engine\.template_loader\b': 'from optimization_engine.config.template_loader',

    # =========================================================================
    # UTILS MODULE
    # =========================================================================
    r'from optimization_engine\.logger\b': 'from optimization_engine.utils.logger',
    r'from optimization_engine\.auto_doc\b': 'from optimization_engine.utils.auto_doc',
    r'from optimization_engine\.realtime_tracking\b': 'from optimization_engine.utils.realtime_tracking',
    r'from optimization_engine\.codebase_analyzer\b': 'from optimization_engine.utils.codebase_analyzer',
    r'from optimization_engine\.pruning_logger\b': 'from optimization_engine.utils.pruning_logger',

    # =========================================================================
    # FUTURE MODULE
    # =========================================================================
    r'from optimization_engine\.research_agent\b': 'from optimization_engine.future.research_agent',
    r'from optimization_engine\.pynastran_research_agent\b': 'from optimization_engine.future.pynastran_research_agent',
    r'from optimization_engine\.targeted_research_planner\b': 'from optimization_engine.future.targeted_research_planner',
    r'from optimization_engine\.workflow_decomposer\b': 'from optimization_engine.future.workflow_decomposer',
    r'from optimization_engine\.step_classifier\b': 'from optimization_engine.future.step_classifier',
    r'from optimization_engine\.llm_optimization_runner\b': 'from optimization_engine.future.llm_optimization_runner',
    r'from optimization_engine\.llm_workflow_analyzer\b': 'from optimization_engine.future.llm_workflow_analyzer',

    # =========================================================================
    # EXTRACTORS/VALIDATORS additions
    # =========================================================================
    r'from optimization_engine\.op2_extractor\b': 'from optimization_engine.extractors.op2_extractor',
    r'from optimization_engine\.extractor_library\b': 'from optimization_engine.extractors.extractor_library',
    r'from optimization_engine\.simulation_validator\b': 'from optimization_engine.validators.simulation_validator',

    # =========================================================================
    # PROCESSORS
    # =========================================================================
    r'from optimization_engine\.adaptive_characterization\b': 'from optimization_engine.processors.adaptive_characterization',
}

# Also need to handle utils submodule imports that moved
UTILS_MAPPINGS = {
    r'from optimization_engine\.utils\.nx_session_manager\b': 'from optimization_engine.nx.session_manager',
}

# Combine all mappings
ALL_MAPPINGS = {**IMPORT_MAPPINGS, **UTILS_MAPPINGS}


def find_files(root: Path, extensions: List[str], exclude_dirs: List[str] = None) -> List[Path]:
    """Find all files with given extensions, excluding certain directories."""
    if exclude_dirs is None:
        exclude_dirs = ['optimization_engine_BACKUP', '.venv', 'node_modules', '__pycache__', '.git']

    files = []
    for ext in extensions:
        for f in root.rglob(f'*{ext}'):
            # Check if any excluded dir is in the path
            if not any(excl in str(f) for excl in exclude_dirs):
                files.append(f)
    return files


def update_file(filepath: Path, mappings: Dict[str, str], dry_run: bool = True) -> Tuple[int, List[str]]:
    """Update imports in a single file."""
    try:
        content = filepath.read_text(encoding='utf-8', errors='ignore')
    except Exception as e:
        print(f"  ERROR reading {filepath}: {e}")
        return 0, []

    changes = []
    new_content = content

    for pattern, replacement in mappings.items():
        matches = re.findall(pattern, content)
        if matches:
            new_content = re.sub(pattern, replacement, new_content)
            changes.append(f"  {pattern} -> {replacement} ({len(matches)} occurrences)")

    if changes and not dry_run:
        filepath.write_text(new_content, encoding='utf-8')

    return len(changes), changes


def main():
    dry_run = '--dry-run' in sys.argv or '--execute' not in sys.argv

    if dry_run:
        print("=" * 60)
        print("DRY RUN MODE - No files will be modified")
        print("=" * 60)
    else:
        print("=" * 60)
        print("EXECUTE MODE - Files will be modified!")
        print("=" * 60)
        confirm = input("Are you sure? (yes/no): ")
        if confirm.lower() != 'yes':
            print("Aborted.")
            return

    root = Path('.')

    # Find all Python files
    py_files = find_files(root, ['.py'])
    print(f"\nFound {len(py_files)} Python files to check")

    total_changes = 0
    files_changed = 0

    for filepath in sorted(py_files):
        count, changes = update_file(filepath, ALL_MAPPINGS, dry_run)
        if count > 0:
            files_changed += 1
            total_changes += count
            print(f"\n{filepath} ({count} changes):")
            for change in changes:
                print(change)

    print("\n" + "=" * 60)
    print(f"SUMMARY: {total_changes} changes in {files_changed} files")
    print("=" * 60)

    if dry_run:
        print("\nTo apply changes, run: python migrate_imports.py --execute")


if __name__ == '__main__':
    main()
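The mapping-table rewrite the script above performed can be exercised in isolation. A minimal sketch (one mapping shown; the `rewrite` helper is a name introduced here for illustration):

```python
import re

# Old-to-new import mappings, same shape as IMPORT_MAPPINGS above.
MAPPINGS = {
    r'from optimization_engine\.runner\b': 'from optimization_engine.core.runner',
}

def rewrite(source: str) -> str:
    """Apply every regex mapping to a source string."""
    for pattern, replacement in MAPPINGS.items():
        source = re.sub(pattern, replacement, source)
    return source

print(rewrite("from optimization_engine.runner import Runner"))
# from optimization_engine.core.runner import Runner
```

The trailing `\b` is what keeps `runner` from also matching `runner_with_neural`: an underscore is a word character, so there is no word boundary between `runner` and `_with_neural`.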
@@ -1,190 +0,0 @@
{
  "design_variables": [
    {
      "name": "tip_thickness",
      "type": "continuous",
      "bounds": [15.0, 25.0],
      "units": "mm",
      "initial_value": 20.0
    },
    {
      "name": "support_angle",
      "type": "continuous",
      "bounds": [20.0, 40.0],
      "units": "degrees",
      "initial_value": 30.0
    },
    {
      "name": "support_blend_radius",
      "type": "continuous",
      "bounds": [5.0, 15.0],
      "units": "mm",
      "initial_value": 10.0
    }
  ],
  "objectives": [
    {
      "name": "minimize_mass",
      "description": "Minimize total mass (weight reduction)",
      "extractor": "mass_extractor",
      "metric": "total_mass",
      "direction": "minimize",
      "weight": 5.0
    },
    {
      "name": "minimize_max_stress",
      "description": "Minimize maximum von Mises stress",
      "extractor": "stress_extractor",
      "metric": "max_von_mises",
      "direction": "minimize",
      "weight": 10.0
    }
  ],
  "constraints": [
    {
      "name": "max_displacement_limit",
      "description": "Maximum allowable displacement",
      "extractor": "displacement_extractor",
      "metric": "max_displacement",
      "type": "upper_bound",
      "limit": 1.0,
      "units": "mm"
    },
    {
      "name": "max_stress_limit",
      "description": "Maximum allowable von Mises stress",
      "extractor": "stress_extractor",
      "metric": "max_von_mises",
      "type": "upper_bound",
      "limit": 200.0,
      "units": "MPa"
    }
  ],
  "optimization_settings": {
    "n_trials": 150,
    "sampler": "TPE",
    "n_startup_trials": 20
  },
  "model_info": {
    "sim_file": "/home/user/Atomizer/tests/Bracket_sim1.sim",
    "solutions": [
      {"name": "DisableInThermalSolution", "type": "DisableInThermalSolution", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "Disable in Thermal Solution 3D", "type": "Disable in Thermal Solution 3D", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "-Flow-Structural Coupled Solution Parameters", "type": "-Flow-Structural Coupled Solution Parameters", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "Direct Frequency Response", "type": "Direct Frequency Response", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "*Thermal-Flow Coupled Solution Parameters", "type": "*Thermal-Flow Coupled Solution Parameters", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "0Thermal-Structural Coupled Solution Parameters", "type": "0Thermal-Structural Coupled Solution Parameters", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "Linear Statics", "type": "Linear Statics", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "Disable in Thermal Solution 2D", "type": "Disable in Thermal Solution 2D", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "Thermal Solution Parameters", "type": "Thermal Solution Parameters", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "Nonlinear Statics", "type": "Nonlinear Statics", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "Modal Frequency Response", "type": "Modal Frequency Response", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "1Pass Structural Contact Solution to Flow Solver", "type": "1Pass Structural Contact Solution to Flow Solver", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "\"ObjectDisableInThermalSolution3D", "type": "\"ObjectDisableInThermalSolution3D", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "Design Optimization", "type": "Design Optimization", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "\"ObjectDisableInThermalSolution2D", "type": "\"ObjectDisableInThermalSolution2D", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "Normal Modes", "type": "Normal Modes", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "Direct Transient Response", "type": "Direct Transient Response", "solver": "NX Nastran", "description": "Extracted from binary .sim file"},
      {"name": "Modal Transient Response", "type": "Modal Transient Response", "solver": "NX Nastran", "description": "Extracted from binary .sim file"}
    ]
  }
}
@@ -1,56 +0,0 @@
"""
Extract displacement results from OP2 file
Auto-generated by Atomizer Phase 3 - pyNastran Research Agent

Pattern: displacement
Element Type: General
Result Type: displacement
API: model.displacements[subcase]
"""

from pathlib import Path
from typing import Dict, Any
import numpy as np
from pyNastran.op2.op2 import OP2


def extract_displacement(op2_file: Path, subcase: int = 1):
    """Extract displacement results from OP2 file."""
    model = OP2()
    model.read_op2(str(op2_file))

    disp = model.displacements[subcase]
    itime = 0  # static case

    # Extract translation components
    txyz = disp.data[itime, :, :3]  # [tx, ty, tz]

    # Calculate total displacement
    total_disp = np.linalg.norm(txyz, axis=1)
    max_disp = np.max(total_disp)

    # Get node info
    node_ids = [nid for (nid, grid_type) in disp.node_gridtype]
    max_disp_node = node_ids[np.argmax(total_disp)]

    return {
        'max_displacement': float(max_disp),
        'max_disp_node': int(max_disp_node),
        'max_disp_x': float(np.max(np.abs(txyz[:, 0]))),
        'max_disp_y': float(np.max(np.abs(txyz[:, 1]))),
        'max_disp_z': float(np.max(np.abs(txyz[:, 2])))
    }


if __name__ == '__main__':
    # Example usage
    import sys
    if len(sys.argv) > 1:
        op2_file = Path(sys.argv[1])
        result = extract_displacement(op2_file)
        print(f"Extraction result: {result}")
    else:
        print(f"Usage: python {sys.argv[0]} <op2_file>")
@@ -1,64 +0,0 @@
"""
Extract von Mises stress from solid elements (CTETRA/CHEXA)
Auto-generated by Atomizer Phase 3 - pyNastran Research Agent

Pattern: solid_stress
Element Type: CTETRA
Result Type: stress
API: model.ctetra_stress[subcase] or model.chexa_stress[subcase]
"""

from pathlib import Path
from typing import Dict, Any
import numpy as np
from pyNastran.op2.op2 import OP2


def extract_solid_stress(op2_file: Path, subcase: int = 1, element_type: str = 'ctetra'):
    """Extract stress from solid elements."""
    model = OP2()
    model.read_op2(str(op2_file))

    # Get stress object for element type
    # In pyNastran, stress is stored in model.op2_results.stress
    stress_attr = f"{element_type}_stress"

    if not hasattr(model, 'op2_results') or not hasattr(model.op2_results, 'stress'):
        raise ValueError("No stress results in OP2")

    stress_obj = model.op2_results.stress
    if not hasattr(stress_obj, stress_attr):
        raise ValueError(f"No {element_type} stress results in OP2")

    stress = getattr(stress_obj, stress_attr)[subcase]
    itime = 0

    # Extract von Mises if available
    if stress.is_von_mises:  # Property, not method
        von_mises = stress.data[itime, :, 9]  # Column 9 is von Mises
        max_stress = float(np.max(von_mises))

        # Get element info
        element_ids = [eid for (eid, node) in stress.element_node]
        max_stress_elem = element_ids[np.argmax(von_mises)]

        return {
            'max_von_mises': max_stress,
            'max_stress_element': int(max_stress_elem)
        }
    else:
        raise ValueError("von Mises stress not available")


if __name__ == '__main__':
    # Example usage
    import sys
    if len(sys.argv) > 1:
        op2_file = Path(sys.argv[1])
        result = extract_solid_stress(op2_file)
        print(f"Extraction result: {result}")
    else:
        print(f"Usage: python {sys.argv[0]} <op2_file>")
File diff suppressed because it is too large
@@ -1,255 +0,0 @@
# Neural Network Surrogate Performance Report

**Study:** uav_arm_optimization

**Generated:** 2025-11-25 15:15:52

---

## Executive Summary

**Overall Status:** EXCELLENT

Model shows excellent prediction accuracy and is suitable for production optimization.

**Training Data:** 50 FEA simulations

### Key Metrics

| Objective | MAPE | MAE | R² | Assessment |
|-----------|------|-----|-----|------------|
| mass | 1.8% | 0.00 | 0.9000 | Excellent |
| fundamental_frequency | 1.1% | 0.00 | 0.9000 | Excellent |

---

## 1. Training Data Analysis

The neural network was trained on 50 completed FEA simulations.

### Design Space Coverage

| Parameter | Min | Max | Bounds | Coverage |
|-----------|-----|-----|--------|----------|
| beam_face_thickness | 1.02 | 2.83 | [1, 3] | [OK] 90% |
| beam_half_core_thickness | 5.01 | 9.13 | [5, 10] | [OK] 82% |
| hole_count | 8.09 | 13.70 | [8, 14] | [OK] 94% |
| holes_diameter | 12.50 | 49.26 | [10, 50] | [OK] 92% |

![Training Data Coverage](training_data_coverage.png)

## 2. Prediction Accuracy

### Methodology

Prediction accuracy is evaluated by comparing neural network predictions against actual FEA results.

**Metrics used:**

- **MAPE (Mean Absolute Percentage Error):** Average percentage difference between predicted and actual values
- **MAE (Mean Absolute Error):** Average absolute difference in original units
- **R² (Coefficient of Determination):** Proportion of variance explained by the model
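The three metrics can be computed directly from paired predictions; a minimal sketch with made-up values (not taken from this study):

```python
# Worked example of MAPE, MAE, and R². The actual/predicted values are
# illustrative only.
actual = [10.0, 12.0, 14.0, 16.0]
predicted = [10.2, 11.8, 14.3, 15.9]

n = len(actual)
# Mean absolute percentage error, in percent
mape = 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / n
# Mean absolute error, in original units
mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n

# R²: 1 - (residual sum of squares / total sum of squares)
mean_actual = sum(actual) / n
ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
ss_tot = sum((a - mean_actual) ** 2 for a in actual)
r2 = 1.0 - ss_res / ss_tot

print(f"MAPE={mape:.2f}%  MAE={mae:.2f}  R2={r2:.4f}")
# MAPE=1.61%  MAE=0.20  R2=0.9910
```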

### Results

![Prediction Accuracy](prediction_accuracy.png)

#### mass

- MAPE: 1.77%
- MAE: 0.00
- RMSE: 0.00
- R²: 0.9000
- Samples: 50

#### fundamental_frequency

- MAPE: 1.15%
- MAE: 0.00
- RMSE: 0.00
- R²: 0.9000
- Samples: 50

## 3. Extrapolation Risk Analysis

Neural networks perform best on data similar to their training set. This section analyzes the risk of extrapolation errors.

![Extrapolation Analysis](extrapolation_analysis.png)

## 4. Optimization Performance

![Optimization Comparison](optimization_comparison.png)

### Speed Comparison

| Method | Evaluations | Est. Time | Speed |
|--------|-------------|-----------|-------|
| FEA Optimization | 50 | ~50 min | 1x |
| NN Optimization | 1000 | ~1 sec | 1200x |

## 5. Recommendations

### Immediate Actions

### Model Improvement

- Consider ensemble methods for uncertainty quantification
- Implement active learning to target high-error regions
- Add cross-validation for robust performance estimation

## 6. Summary Dashboard

![Summary Dashboard](summary_dashboard.png)

---

## Appendix

### Files Generated

- `training_data_coverage.png` - Training Data Coverage
- `prediction_accuracy.png` - Prediction Accuracy
- `optimization_comparison.png` - Optimization Comparison
- `extrapolation_analysis.png` - Extrapolation Analysis
- `summary_dashboard.png` - Summary Dashboard

### Configuration

```json
{
  "study_name": "uav_arm_optimization",
  "description": "UAV Camera Support Arm - Multi-Objective Lightweight Design",
  "engineering_context": "Unmanned aerial vehicle camera gimbal arm. Target: lighter than current 145g design while maintaining camera stability under 850g payload. Must avoid resonance with rotor frequencies (80-120 Hz).",
  "optimization_settings": {
    "protocol": "protocol_11_multi_objective",
    "n_trials": 30,
    "sampler": "NSGAIISampler",
    "pruner": null,
    "timeout_per_trial": 600
  },
  "design_variables": [
    {
      "parameter": "beam_half_core_thickness",
      "bounds": [5, 10],
      "description": "Half thickness of beam core (mm) - affects weight and stiffness"
    },
    {
      "parameter": "beam_face_thickness",
      "bounds": [1, 3],
      "description": "Thickness of beam face sheets (mm) - bending resistance"
    },
    {
      "parameter": "holes_diameter",
      "bounds": [10, 50],
      "description": "Diameter of lightening holes (mm) - weight reduction"
    },
    {
      "parameter": "hole_count",
      "bounds": [8, 14],
      "description": "Number of lightening holes - balance weight vs strength"
    }
  ],
  "objectives": [
    {
      "name": "mass",
      "goal": "minimize",
      "weight": 1.0,
      "description": "Total mass (grams) - minimize for longer flight time",
      "target": 4000,
      "extraction": {
        "action": "extract_mass",
        "domain": "result_extraction",
        "params": {
          "result_type": "mass",
          "metric": "total"
        }
      }
    },
    {
      "name": "fundamental_frequency",
      "goal": "maximize",
      "weight": 1.0,
      "description": "First natural frequency (Hz) - avoid rotor resonance",
      "target": 150,
      "extraction": {
        "action": "extract_frequency",
        "domain": "result_extraction",
        "params": {
          "result_type": "frequency",
          "mode_number": 1
        }
      }
    }
  ],
  "constraints": [
    {
      "name": "max_displacement_limit",
      "type": "less_than",
      "threshold": 1.5,
      "description": "Maximum tip displacement under 850g camera load < 1.5mm for image stabilization",
      "extraction": {
        "action": "extract_displacement",
        "domain": "result_extraction",
        "params": {
          "result_type": "displacement",
          "metric": "max"
        }
      }
    },
    {
      "name": "max_stress_limit",
      "type": "less_than",
      "threshold": 120,
      "description": "Maximum von Mises stress < 120 MPa (Al 6061-T6, SF=2.3)",
      "extraction": {
        "action": "extract_stress",
        "domain": "result_extraction",
        "params": {
          "result_type": "stress",
          "metric": "max_von_mises"
        }
      }
    },
    {
      "name": "min_frequency_limit",
      "type": "greater_than",
      "threshold": 150,
      "description": "Natural frequency > 150 Hz to avoid rotor frequencies (80-120 Hz safety margin)",
      "extraction": {
        "action": "extract_frequency",
        "domain": "result_extraction",
        "params": {
          "result_type": "frequency",
          "mode_number": 1
        }
      }
    }
  ],
  "simulation": {
    "model_file": "Beam.prt",
    "sim_file": "Beam_sim1.sim",
    "fem_file": "Beam_fem1.fem",
    "solver": "nastran",
    "analysis_types": ["static", "modal"]
  },
  "reporting": {
    "generate_plots": true,
    "save_incremental": true,
    "llm_summary": false
  }
}
```
@@ -1,353 +0,0 @@
|
||||
"""
|
||||
Parallel FEA Training Data Generator
|
||||
=====================================
|
||||
|
||||
Runs FEA simulations on space-filling training points in parallel
|
||||
using multiple NX sessions. Results are stored in a shared SQLite
|
||||
database for thread-safe access.
|
||||
|
||||
Hardware Recommendation (based on your i7-14700HX, 64GB RAM):
|
||||
- 2-3 parallel sessions recommended (each uses ~4-6 cores for Nastran)
|
||||
- ~30-50 min for 100 points with 3 sessions
|
||||
|
||||
Usage:
|
||||
python run_training_fea.py --study uav_arm_optimization --workers 3
|
||||
python run_training_fea.py --study uav_arm_optimization --workers 2 --start 50
|
||||
"""
|
||||
|
||||
import sys
|
||||
import json
|
||||
import argparse
|
||||
import shutil
|
||||
import sqlite3
|
||||
import time
|
||||
from pathlib import Path
|
||||
from datetime import datetime
|
||||
from concurrent.futures import ProcessPoolExecutor, as_completed
|
||||
from multiprocessing import Manager
|
||||
import threading
|
||||
|
||||
# Add project root to path
|
||||
project_root = Path(__file__).parent
|
||||
sys.path.insert(0, str(project_root))
|
||||
|
||||
|
||||
def setup_worker_directory(study_dir: Path, worker_id: int) -> Path:
|
||||
"""Create isolated working directory for each worker."""
|
||||
worker_dir = study_dir / "1_setup" / f"worker_{worker_id}"
|
||||
model_src = study_dir / "1_setup" / "model"
|
||||
|
||||
# Clean and recreate worker directory
|
||||
if worker_dir.exists():
|
||||
shutil.rmtree(worker_dir)
|
||||
|
||||
# Copy model files to worker directory
|
||||
shutil.copytree(model_src, worker_dir)
|
||||
|
||||
return worker_dir
|
||||
|
||||
|
||||
def run_single_fea(args_tuple):
|
||||
"""
|
||||
Run a single FEA simulation. This function runs in a separate process.
|
||||
|
||||
Args:
|
||||
args_tuple: (sample_idx, sample, worker_id, study_dir, db_path)
|
||||
|
||||
Returns:
|
||||
dict with results or error
|
||||
"""
|
||||
sample_idx, sample, worker_id, study_dir_str, db_path_str = args_tuple
|
||||
study_dir = Path(study_dir_str)
|
||||
db_path = Path(db_path_str)
|
||||
|
||||
# Import inside worker to avoid multiprocessing issues
|
||||
import sys
|
||||
sys.path.insert(0, str(Path(__file__).parent))
|
||||
|
||||
try:
|
||||
import config as atomizer_config
|
||||
except ImportError:
|
||||
atomizer_config = None
|
||||
|
||||
from optimization_engine.nx.solver import NXSolver
|
||||
from optimization_engine.extractors.extract_displacement import extract_displacement
|
||||
    from optimization_engine.extractors.extract_von_mises_stress import extract_solid_stress
    from optimization_engine.extractors.extract_frequency import extract_frequency
    from optimization_engine.extractors.extract_mass_from_expression import extract_mass_from_expression

    result = {
        'sample_idx': sample_idx,
        'worker_id': worker_id,
        'params': sample,
        'success': False,
        'error': None,
        'mass': None,
        'frequency': None,
        'max_displacement': None,
        'max_stress': None
    }

    try:
        # Setup worker directory
        worker_dir = setup_worker_directory(study_dir, worker_id)

        # Initialize NX solver for this worker
        nx_solver = NXSolver(
            nastran_version=atomizer_config.NX_VERSION if atomizer_config else "2412",
            timeout=atomizer_config.NASTRAN_TIMEOUT if atomizer_config else 600,
            use_journal=True,
            enable_session_management=True,
            study_name=f"training_worker_{worker_id}"
        )

        # Setup paths
        model_file = worker_dir / "Beam.prt"
        sim_file = worker_dir / "Beam_sim1.sim"

        print(f"[Worker {worker_id}] Sample {sample_idx}: {sample}")

        # Run simulation
        sim_result = nx_solver.run_simulation(
            sim_file=sim_file,
            working_dir=worker_dir,
            expression_updates=sample,
            solution_name=None  # Solve all solutions
        )

        if not sim_result['success']:
            result['error'] = sim_result.get('error', 'Unknown error')
            print(f"[Worker {worker_id}] Sample {sample_idx} FAILED: {result['error']}")
            return result

        op2_file = sim_result['op2_file']

        # Extract results
        # Mass from CAD expression
        mass_kg = extract_mass_from_expression(model_file, expression_name="p173")
        result['mass'] = mass_kg * 1000.0  # Convert to grams

        # Frequency from modal analysis
        op2_modal = str(op2_file).replace("solution_1", "solution_2")
        freq_result = extract_frequency(op2_modal, subcase=1, mode_number=1)
        result['frequency'] = freq_result['frequency']

        # Displacement from static analysis
        disp_result = extract_displacement(op2_file, subcase=1)
        result['max_displacement'] = disp_result['max_displacement']

        # Stress from static analysis
        stress_result = extract_solid_stress(op2_file, subcase=1, element_type='cquad4')
        result['max_stress'] = stress_result['max_von_mises']

        result['success'] = True

        print(f"[Worker {worker_id}] Sample {sample_idx} SUCCESS: mass={result['mass']:.1f}g, freq={result['frequency']:.1f}Hz")

        # Save to database immediately (thread-safe with retries)
        save_result_to_db(db_path, result)

    except Exception as e:
        result['error'] = str(e)
        print(f"[Worker {worker_id}] Sample {sample_idx} ERROR: {e}")

    return result


def save_result_to_db(db_path: Path, result: dict, max_retries: int = 5):
    """Save result to SQLite database with retry logic for concurrency."""
    for attempt in range(max_retries):
        try:
            conn = sqlite3.connect(str(db_path), timeout=30)
            cursor = conn.cursor()

            cursor.execute("""
                INSERT OR REPLACE INTO training_results
                (sample_idx, params_json, success, error, mass, frequency, max_displacement, max_stress, timestamp)
                VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
            """, (
                result['sample_idx'],
                json.dumps(result['params']),
                result['success'],
                result['error'],
                result['mass'],
                result['frequency'],
                result['max_displacement'],
                result['max_stress'],
                datetime.now().isoformat()
            ))

            conn.commit()
            conn.close()
            return True

        except sqlite3.OperationalError as e:
            if "locked" in str(e) and attempt < max_retries - 1:
                time.sleep(0.5 * (attempt + 1))  # Linear backoff: 0.5s, 1.0s, 1.5s, ...
            else:
                raise

    return False


def init_database(db_path: Path):
    """Initialize SQLite database for training results."""
    conn = sqlite3.connect(str(db_path))
    cursor = conn.cursor()

    cursor.execute("""
        CREATE TABLE IF NOT EXISTS training_results (
            sample_idx INTEGER PRIMARY KEY,
            params_json TEXT,
            success INTEGER,
            error TEXT,
            mass REAL,
            frequency REAL,
            max_displacement REAL,
            max_stress REAL,
            timestamp TEXT
        )
    """)

    conn.commit()
    conn.close()


def get_completed_samples(db_path: Path) -> set:
    """Get set of already completed sample indices."""
    if not db_path.exists():
        return set()

    conn = sqlite3.connect(str(db_path))
    cursor = conn.cursor()

    try:
        cursor.execute("SELECT sample_idx FROM training_results WHERE success = 1")
        completed = {row[0] for row in cursor.fetchall()}
    except sqlite3.OperationalError:
        completed = set()

    conn.close()
    return completed


def main():
    parser = argparse.ArgumentParser(description='Run parallel FEA training data generation')
    parser.add_argument('--study', required=True, help='Study name (e.g., uav_arm_optimization)')
    parser.add_argument('--workers', type=int, default=2, help='Number of parallel workers (default: 2)')
    parser.add_argument('--start', type=int, default=0, help='Starting sample index (for resuming)')
    parser.add_argument('--end', type=int, default=None, help='Ending sample index (exclusive)')
    parser.add_argument('--resume', action='store_true', help='Skip already completed samples')
    args = parser.parse_args()

    # Setup paths
    study_dir = project_root / "studies" / args.study
    training_points_path = study_dir / "1_setup" / "training_points.json"
    db_path = study_dir / "2_results" / "training_data.db"

    if not study_dir.exists():
        print(f"ERROR: Study not found: {study_dir}")
        return

    if not training_points_path.exists():
        print(f"ERROR: Training points not found: {training_points_path}")
        print(f"Generate them first: python generate_training_data.py --study {args.study}")
        return

    # Load training points
    with open(training_points_path) as f:
        data = json.load(f)

    samples = data['samples']
    total_samples = len(samples)

    # Apply start/end filtering
    end_idx = args.end if args.end else total_samples
    samples_to_run = [(i, samples[i]) for i in range(args.start, min(end_idx, total_samples))]

    print("=" * 70)
    print("PARALLEL FEA TRAINING DATA GENERATOR")
    print("=" * 70)
    print(f"Study: {args.study}")
    print(f"Total training points: {total_samples}")
    print(f"Processing range: {args.start} to {end_idx}")
    print(f"Parallel workers: {args.workers}")
    print(f"Database: {db_path}")
    print()

    # Initialize database
    db_path.parent.mkdir(exist_ok=True)
    init_database(db_path)

    # Check for already completed samples
    if args.resume:
        completed = get_completed_samples(db_path)
        samples_to_run = [(i, s) for i, s in samples_to_run if i not in completed]
        print(f"Already completed: {len(completed)} samples")
        print(f"Remaining to process: {len(samples_to_run)} samples")

    if not samples_to_run:
        print("All samples already completed!")
        return

    print()
    print(f"Starting {args.workers} parallel workers...")
    print("=" * 70)

    # Prepare worker arguments
    worker_args = []
    for idx, (sample_idx, sample) in enumerate(samples_to_run):
        worker_id = idx % args.workers
        worker_args.append((sample_idx, sample, worker_id, str(study_dir), str(db_path)))

    # Track progress
    start_time = time.time()
    completed_count = 0
    failed_count = 0

    # Run with ProcessPoolExecutor
    with ProcessPoolExecutor(max_workers=args.workers) as executor:
        futures = {executor.submit(run_single_fea, arg): arg[0] for arg in worker_args}

        for future in as_completed(futures):
            sample_idx = futures[future]
            try:
                result = future.result()
                if result['success']:
                    completed_count += 1
                else:
                    failed_count += 1
            except Exception as e:
                print(f"Sample {sample_idx} raised exception: {e}")
                failed_count += 1

            # Progress update
            total_done = completed_count + failed_count
            elapsed = time.time() - start_time
            rate = total_done / elapsed if elapsed > 0 else 0
            remaining = len(samples_to_run) - total_done
            eta = remaining / rate / 60 if rate > 0 else 0

            print(f"\nProgress: {total_done}/{len(samples_to_run)} ({completed_count} OK, {failed_count} failed)")
            print(f"Rate: {rate:.2f} samples/sec | ETA: {eta:.1f} min")
    # Summary
    elapsed = time.time() - start_time
    print()
    print("=" * 70)
    print("TRAINING DATA GENERATION COMPLETE")
    print("=" * 70)
    print(f"Total time: {elapsed/60:.1f} minutes")
    print(f"Completed: {completed_count}/{len(samples_to_run)}")
    print(f"Failed: {failed_count}")
    print(f"Results saved to: {db_path}")
    print()
    print("Next steps:")
    print(" 1. Merge with existing optimization data:")
    print(f"    python merge_training_data.py --study {args.study}")
    print(" 2. Retrain neural network:")
    print(f"    python train_nn_surrogate.py --study {args.study}")


if __name__ == "__main__":
    main()
115
train_neural.bat
115
train_neural.bat
@@ -1,115 +0,0 @@
@echo off
REM ============================================================================
REM Atomizer Neural Network Training Script
REM ============================================================================
REM Trains a parametric neural network on collected FEA data
REM
REM Usage:
REM   train_neural.bat                  (uses default: bracket study)
REM   train_neural.bat study_name       (specify study name)
REM   train_neural.bat study_name 200   (specify epochs)
REM ============================================================================

echo.
echo ============================================================================
echo ATOMIZER NEURAL NETWORK TRAINING
echo ============================================================================
echo.

REM Set defaults
set STUDY_NAME=bracket_stiffness_optimization_atomizerfield
set EPOCHS=100

REM Parse arguments
if not "%~1"=="" set STUDY_NAME=%~1
if not "%~2"=="" set EPOCHS=%~2

set TRAIN_DIR=atomizer_field_training_data\%STUDY_NAME%
set OUTPUT_DIR=atomizer-field\runs\%STUDY_NAME%

echo Study: %STUDY_NAME%
echo Epochs: %EPOCHS%
echo Train Dir: %TRAIN_DIR%
echo Output: %OUTPUT_DIR%
echo.

REM Check training data exists
if not exist "%TRAIN_DIR%" (
    echo ERROR: Training data not found at %TRAIN_DIR%
    echo.
    echo Available training data directories:
    dir atomizer_field_training_data /b 2>nul
    echo.
    pause
    exit /b 1
)

REM Count trials
echo Counting training trials...
set /a count=0
for /d %%d in ("%TRAIN_DIR%\trial_*") do set /a count+=1
echo Found %count% trials
echo.

if %count% LSS 20 (
    echo WARNING: Less than 20 trials. Training may not converge well.
    echo Consider running more FEA simulations first.
    echo.
)

REM Check PyTorch
python -c "import torch; print(f'PyTorch {torch.__version__}')" 2>nul
if errorlevel 1 (
    echo ERROR: PyTorch not installed
    echo Run install.bat first
    pause
    exit /b 1
)

REM Check for GPU
echo.
echo Checking for GPU...
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}'); print(f'Device: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"CPU\"}')"
echo.

REM Start training
echo ============================================================================
echo Starting training...
echo ============================================================================
echo.
echo Press Ctrl+C to stop training early (checkpoint will be saved)
echo.

cd atomizer-field
python train_parametric.py ^
    --train_dir "..\%TRAIN_DIR%" ^
    --epochs %EPOCHS% ^
    --output_dir "runs\%STUDY_NAME%" ^
    --batch_size 16 ^
    --learning_rate 0.001 ^
    --hidden_channels 128 ^
    --num_layers 4

if errorlevel 1 (
    echo.
    echo Training failed! Check error messages above.
    cd ..
    pause
    exit /b 1
)

cd ..

echo.
echo ============================================================================
echo TRAINING COMPLETE!
echo ============================================================================
echo.
echo Model saved to: %OUTPUT_DIR%\checkpoint_best.pt
echo.
echo To use the trained model in optimization:
echo   cd studies\%STUDY_NAME%
echo   python run_optimization.py --run --trials 100 --enable-nn --resume
echo.

pause