# Development Guide

## Project Setup Complete! ✅
Your Atomizer project has been initialized with the following structure:
```
C:\Users\antoi\Documents\Atomaste\Atomizer\
├── .git/                      # Git repository
├── .gitignore                 # Ignore patterns
├── LICENSE                    # Proprietary license
├── README.md                  # Main documentation
├── GITHUB_SETUP.md            # GitHub push instructions
├── DEVELOPMENT.md             # This file
├── pyproject.toml             # Python package configuration
├── requirements.txt           # Pip dependencies
│
├── config/                    # Configuration templates
│   ├── nx_config.json.template
│   └── optimization_config_template.json
│
├── mcp_server/                # MCP Server (Phase 1)
│   ├── __init__.py
│   ├── tools/                 # MCP tool implementations
│   │   └── __init__.py
│   ├── schemas/               # JSON schemas for validation
│   └── prompts/               # LLM system prompts
│       └── examples/          # Few-shot examples
│
├── optimization_engine/       # Core optimization (Phase 4)
│   ├── __init__.py
│   └── result_extractors/     # Pluggable metric extractors
│       └── __init__.py        # Base classes + registry
│
├── nx_journals/               # NXOpen scripts (Phase 3)
│   ├── __init__.py
│   └── utils/                 # Helper functions
│
├── dashboard/                 # Web UI (Phase 2)
│   ├── frontend/              # React app
│   └── backend/               # FastAPI server
│
├── tests/                     # Unit tests
├── docs/                      # Documentation
└── examples/                  # Example projects
```
## Current Status: Phase 0 - Foundation ✅
- Project structure created
- Git repository initialized
- Python package configuration
- License and documentation
- Initial commit ready
## Next Development Phases

### 🎯 Immediate Next Steps (Choose One)

#### Option A: Start with MCP Server (Recommended)

**Goal:** Get conversational FEA model discovery working
1. Implement the `discover_fea_model` tool:

   ```bash
   # Create the tool
   touch mcp_server/tools/model_discovery.py
   ```

   - Parse .sim files to extract solutions, expressions, and FEM info
   - Use existing Atomizer patterns from your P04 project
   - Return structured JSON for LLM consumption

2. Set up the MCP server skeleton:

   ```bash
   # Install MCP SDK
   pip install mcp

   # Create server entry point
   touch mcp_server/server.py
   ```

3. Test with a real .sim file:

   - Point it to one of your existing models
   - Verify it extracts expressions correctly
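As a rough sketch of the structured JSON the discovery tool might return — note the field names and the line format are illustrative assumptions, since real .sim parsing will likely go through NXOpen or an exported expressions file rather than a plain-text scan:

```python
import json
import re

# Hypothetical pattern for lines in an NX expressions export, e.g.:
#   nozzle_diameter = 2.5 // mm
EXPR_PATTERN = re.compile(r"^\s*(\w+)\s*=\s*([-+0-9.eE]+)\s*(?://\s*(\S+))?")

def discover_fea_model(expressions_text: str, model_name: str) -> dict:
    """Scan an expressions dump and return LLM-friendly structured JSON."""
    expressions = []
    for line in expressions_text.splitlines():
        match = EXPR_PATTERN.match(line)
        if match:
            name, value, unit = match.groups()
            expressions.append(
                {"name": name, "value": float(value), "unit": unit or ""}
            )
    return {
        "model": model_name,
        "expression_count": len(expressions),
        "expressions": expressions,
    }

# Example with a fake two-line export
dump = "nozzle_diameter = 2.5 // mm\nswirl_angle = 30 // deg\n"
print(json.dumps(discover_fea_model(dump, "atomizer.sim"), indent=2))
```

Whatever the parsing backend ends up being, keeping the output shape flat and explicitly typed like this makes it easy for the LLM to reason over.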
#### Option B: Port Atomizer Optimization Engine

**Goal:** Get core optimization working independently

1. Copy Atomizer modules:

   ```bash
   # From your P04/Atomizer project, copy:
   cp ../Projects/P04/Atomizer/code/multi_optimizer.py optimization_engine/
   cp ../Projects/P04/Atomizer/code/config_loader.py optimization_engine/
   cp ../Projects/P04/Atomizer/code/surrogate_optimizer.py optimization_engine/
   ```

2. Adapt for general use:

   - Remove Zernike-specific code
   - Generalize result extraction to use the plugin system
   - Update import paths

3. Create a simple test:

   ```python
   # tests/test_optimizer.py
   def test_basic_optimization():
       config = load_config("config/optimization_config_template.json")
       optimizer = MultiParameterOptimizer(config)
       # ...
   ```
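One possible shape for the pluggable result-extractor system mentioned above — the class and registry names here are assumptions for illustration, not existing Atomizer code:

```python
from abc import ABC, abstractmethod

# Simple name -> class registry, populated by the @register decorator below
EXTRACTOR_REGISTRY: dict[str, type] = {}

def register(name: str):
    """Class decorator that makes an extractor discoverable by name."""
    def wrap(cls):
        EXTRACTOR_REGISTRY[name] = cls
        return cls
    return wrap

class ResultExtractor(ABC):
    """Base class: turn a solver output file into a scalar objective value."""

    @abstractmethod
    def extract(self, result_path: str) -> float:
        ...

@register("max_displacement")
class MaxDisplacementExtractor(ResultExtractor):
    """Toy example: read one float per line, return the largest magnitude."""

    def extract(self, result_path: str) -> float:
        with open(result_path) as f:
            return max(abs(float(line)) for line in f if line.strip())

def get_extractor(name: str) -> ResultExtractor:
    """Instantiate an extractor by its registered name."""
    return EXTRACTOR_REGISTRY[name]()
```

With this shape, the optimizer only needs an extractor name from the config file, e.g. `get_extractor("max_displacement").extract(path)`, and the Zernike post-processing script becomes just one more registered subclass.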
#### Option C: Build Dashboard First

**Goal:** Get a real-time monitoring UI working

1. Set up the React frontend:

   ```bash
   cd dashboard/frontend
   npx create-react-app . --template typescript
   npm install plotly.js recharts
   ```

2. Set up the FastAPI backend:

   ```bash
   cd dashboard/backend
   touch server.py
   # Implement WebSocket endpoint for live updates
   ```

3. Create a mock data endpoint:

   - Serve fake optimization history
   - Test plots and visualizations
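The fake history for step 3 could be a plain generator like the following — the record fields are assumptions chosen to match what a convergence plot needs, and the noisy-exponential shape is just a plausible stand-in for real optimizer behavior:

```python
import math
import random

def mock_history(n_iters: int = 50, seed: int = 42) -> list[dict]:
    """Fake optimization history: noisy exponential decay toward zero,
    plus the running best, which is what convergence plots usually show."""
    rng = random.Random(seed)  # fixed seed so the UI sees stable data
    history = []
    best = float("inf")
    for i in range(n_iters):
        objective = math.exp(-i / 15) + rng.uniform(0, 0.05)
        best = min(best, objective)
        history.append({"iteration": i, "objective": objective, "best": best})
    return history
```

The FastAPI backend can then serve this from a plain GET endpoint (e.g. a hypothetical `@app.get("/api/history")` returning `mock_history()`) while the real engine is still being ported, so frontend and backend development can proceed in parallel.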
### Recommended Workflow: Iterative Development

**Week 1: MCP + Model Discovery**

- Implement the `discover_fea_model` tool
- Test with real .sim files
- Get LLM integration working
**Week 2: Optimization Engine Port**
- Copy and adapt Atomizer core modules
- Create pluggable result extractors
- Test with simple optimization
**Week 3: NXOpen Bridge**
- Build file-based communication
- Create generic journal dispatcher
- Test expression updates
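The file-based communication in Week 3 could follow a simple handshake — file names and JSON fields here are assumptions for illustration: the optimizer writes a parameter file plus a `.ready` flag (so the journal never reads a half-written file), and the NX journal replies with a result file and its own flag once the solve finishes.

```python
import json
import time
from pathlib import Path

def send_parameters(workdir: Path, params: dict) -> None:
    """Optimizer side: write params, then touch a flag file so the NX
    journal only picks up a fully written file."""
    (workdir / "params.json").write_text(json.dumps(params))
    (workdir / "params.ready").touch()

def wait_for_results(workdir: Path, timeout_s: float = 600.0,
                     poll_s: float = 0.1) -> dict:
    """Optimizer side: poll until the journal writes results.json
    and signals completion with results.ready."""
    flag = workdir / "results.ready"
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if flag.exists():
            return json.loads((workdir / "results.json").read_text())
        time.sleep(poll_s)
    raise TimeoutError("NX journal did not produce results in time")
```

The journal side mirrors this: wait for `params.ready`, update expressions, solve, write `results.json`, touch `results.ready`. Polling is crude but robust across the process boundary between Python and the NX session.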
**Week 4: Dashboard MVP**
- React frontend skeleton
- FastAPI backend with WebSocket
- Real-time iteration monitoring
## Development Commands

### Python Environment

```bash
# Create conda environment
conda create -n atomizer python=3.10
conda activate atomizer

# Install in development mode
pip install -e .

# Install dev dependencies
pip install -e ".[dev]"
```
### Testing

```bash
# Run all tests
pytest

# With coverage
pytest --cov=mcp_server --cov=optimization_engine

# Run a specific test
pytest tests/test_model_discovery.py -v
```
### Code Quality

```bash
# Format code
black .

# Lint
ruff check .

# Type checking
mypy mcp_server optimization_engine
```
### Git Workflow

```bash
# Create a feature branch
git checkout -b feature/model-discovery

# Make changes, test, commit
git add .
git commit -m "feat: implement FEA model discovery tool"

# Push to GitHub
git push -u origin feature/model-discovery

# Merge to main
git checkout main
git merge feature/model-discovery
git push origin main
```
## Integration with Existing Atomizer

Your existing Atomizer project is at:

```
C:\Users\antoi\Documents\Atomaste\Projects\P04\Atomizer\
```
You can reference and copy modules from there as needed. Key files to adapt:
| Atomizer File | New Location | Adaptation Needed |
|---|---|---|
| `code/multi_optimizer.py` | `optimization_engine/multi_optimizer.py` | Minimal - works as-is |
| `code/config_loader.py` | `optimization_engine/config_loader.py` | Extend schema for extractors |
| `code/zernike_Post_Script_NX.py` | `optimization_engine/result_extractors/zernike.py` | Convert to plugin class |
| `code/journal_NX_Update_and_Solve.py` | `nx_journals/update_and_solve.py` | Generalize for any .sim |
| `code/nx_post_each_iter.py` | `nx_journals/post_process.py` | Use extractor registry |
## Useful Resources
- Optuna Docs: https://optuna.readthedocs.io/
- NXOpen API: https://docs.sw.siemens.com/en-US/doc/209349590/
- MCP Protocol: https://modelcontextprotocol.io/
- FastAPI: https://fastapi.tiangolo.com/
- React + TypeScript: https://react-typescript-cheatsheet.netlify.app/
## Questions to Consider

Before starting development, decide on:

- **Which phase to tackle first?** (MCP, Engine, Dashboard, or NXOpen)
- **Target NX version?** (NX 2412 only, or multi-version support later?)
- **Deployment strategy?** (Local only, or client-server architecture?)
- **Testing approach?** (Unit tests only, or integration tests with real NX?)
- **Documentation format?** (Markdown, Sphinx, MkDocs)
## Getting Help
When you're ready to start coding:
- Choose a phase from the options above
- Tell me which component you want to build first
- I'll create the detailed implementation with working code
- We'll test it with your existing .sim files
- Iterate and expand
You're all set! The foundation is ready. Choose your starting point and let's build! 🚀