# NXOpen Documentation Integration Strategy

## Overview

This document outlines the strategy for integrating NXOpen Python documentation into Atomizer's AI-powered code generation system.

**Target Documentation**: https://docs.sw.siemens.com/en-US/doc/209349590/PL20190529153447339.nxopen_python_ref

**Goal**: Enable Atomizer to automatically research NXOpen APIs and generate correct code without manual documentation lookup.
## Current State (Phase 2.7 Complete)

✅ **Intelligent Workflow Analysis**: LLM detects engineering features needing research
✅ **Capability Matching**: System knows what's already implemented
✅ **Gap Identification**: Identifies missing FEA/CAE operations

❌ **Auto-Research**: No automated documentation lookup
❌ **Code Generation**: Manual implementation still required
## Documentation Access Challenges

### Challenge 1: Authentication Required
- Siemens documentation requires login
- Not accessible via direct WebFetch
- Cannot be scraped programmatically

### Challenge 2: Dynamic Content
- Documentation is JavaScript-rendered
- Not available as static HTML
- Requires browser automation or API access
## Integration Strategies

### Strategy 1: MCP Server (RECOMMENDED FOR PHASE 3) 🚀

**Concept**: Build a Model Context Protocol (MCP) server for NXOpen documentation

**How it Works**:
```
Atomizer (Phase 2.5-2.7)
↓
Detects: "Need to modify PCOMP ply thickness"
↓
MCP Server Query: "How to modify PCOMP in NXOpen?"
↓
MCP Server → Local Documentation Cache or Live Lookup
↓
Returns: Code examples + API reference
↓
Phase 2.8-2.9: Auto-generate code
```
**Implementation**:
1. **Local Documentation Cache**
   - Download key NXOpen docs pages locally (one-time setup)
   - Store as markdown/JSON in `knowledge_base/nxopen/`
   - Index by module/class/method

2. **MCP Server**
   - Runs locally on `localhost:3000`
   - Provides a search/query API
   - Returns relevant code snippets + documentation

3. **Integration with Atomizer**
   - `research_agent.py` calls the MCP server
   - Gets documentation for missing capabilities
   - Generates code based on examples

**Advantages**:
- ✅ No API consumption costs (runs locally)
- ✅ Fast lookups (local cache)
- ✅ Works offline after initial setup
- ✅ Can be extended to pyNastran docs later

**Disadvantages**:
- ❌ Requires a one-time manual documentation download
- ❌ Needs periodic updates for new NX versions
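A sketch of what the local-cache half of such an MCP server could look like, before any protocol plumbing. The JSON page schema (`symbol`, `text` fields) and the keyword-frequency ranking are illustrative assumptions, not the real server design:

```python
import json
from pathlib import Path

def search_doc_cache(query: str, cache_dir: str = "knowledge_base/nxopen") -> list:
    """Rank locally cached NXOpen doc pages by naive keyword frequency.

    Each cached page is assumed to be a JSON file with 'symbol' and
    'text' keys (hypothetical schema for this sketch).
    """
    terms = query.lower().split()
    hits = []
    for page in Path(cache_dir).glob("**/*.json"):
        record = json.loads(page.read_text(encoding="utf-8"))
        text = record.get("text", "").lower()
        # Score = total occurrences of all query terms in the page text
        score = sum(text.count(term) for term in terms)
        if score:
            hits.append({"symbol": record.get("symbol"), "score": score, "path": str(page)})
    return sorted(hits, key=lambda hit: -hit["score"])
```

The MCP server would wrap a function like this behind a tool endpoint; Atomizer's query from the flow above ("How to modify PCOMP in NXOpen?") becomes the `query` argument.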
### Strategy 2: NX Journal Recording (USER-DRIVEN LEARNING) 🎯 **RECOMMENDED FIRST!**

**Concept**: The user records NX journals while performing operations, and the system learns from the recorded Python code

**How it Works**:
1. User needs to learn how to "merge FEM nodes"
2. User starts journal recording in NX (Tools → Journal → Record)
3. User performs the operation manually in the NX GUI
4. NX automatically generates a Python journal showing the exact API calls
5. User shares the journal file with Atomizer
6. Atomizer extracts the pattern and stores it in the knowledge base

**Example Workflow**:
```
User Action: Merge duplicate FEM nodes in NX
↓
NX Records: journal_merge_nodes.py
↓
Contains: session.FemPart().MergeNodes(tolerance=0.001, ...)
↓
Atomizer learns: "To merge nodes, use FemPart().MergeNodes()"
↓
Pattern saved to: knowledge_base/nxopen_patterns/fem/merge_nodes.md
↓
Future requests: Auto-generate code using this pattern!
```
**Real Recorded Journal Example**:
```python
# User records: "Renumber elements starting from 1000"
import NXOpen

def main():
    session = NXOpen.Session.GetSession()
    fem_part = session.Parts.Work.BasePart.FemPart

    # NX generates this automatically!
    fem_part.RenumberElements(
        startingNumber=1000,
        increment=1,
        applyToAll=True
    )

if __name__ == '__main__':
    main()
```
**Advantages**:
- ✅ **User-driven**: Learn exactly what you need, when you need it
- ✅ **Accurate**: Code comes directly from NX (can't be wrong!)
- ✅ **Comprehensive**: Captures the full API signature and parameters
- ✅ **No documentation hunting**: NX generates the code for you
- ✅ **Builds knowledge base organically**: Grows with actual usage
- ✅ **Handles edge cases**: Records exactly how you solved the problem

**Use Cases Perfect for Journal Recording**:
- FEM node merging and renumbering
- Element renumbering
- Mesh quality checks
- Geometry modifications
- Property assignments
- Solver setup configurations
- Any complex operation that is hard to find in the docs
**Integration with Atomizer**:
```python
# User provides recorded journal
atomizer.learn_from_journal("journal_merge_nodes.py")

# System analyzes:
# - Identifies API calls (FemPart().MergeNodes)
# - Extracts parameters (tolerance, node_ids, etc.)
# - Creates a reusable pattern
# - Stores it in knowledge_base with a description

# Future requests automatically use this pattern!
```
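The first analysis step of such a `learn_from_journal()` is mechanical and can be sketched with the standard library: parse the journal and collect the dotted call names. The function below is a simplified stand-in, not Atomizer's actual implementation; a real learner would also capture keyword arguments and surrounding context:

```python
import ast

def extract_api_calls(journal_source: str) -> list:
    """Collect dotted call names (e.g. 'fem_part.RenumberElements')
    from recorded NX journal source code."""
    def dotted_name(node):
        # Rebuild 'a.b.c' from nested Attribute/Name nodes
        if isinstance(node, ast.Attribute):
            base = dotted_name(node.value)
            return f"{base}.{node.attr}" if base else None
        if isinstance(node, ast.Name):
            return node.id
        return None

    calls = []
    for node in ast.walk(ast.parse(journal_source)):
        if isinstance(node, ast.Call):
            name = dotted_name(node.func)
            if name:
                calls.append(name)
    return calls
```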
### Strategy 3: Python Introspection

**Concept**: Use Python's introspection to explore NXOpen modules at runtime

**How it Works**:
```python
import inspect

import NXOpen

# Discover all classes
for name in dir(NXOpen):
    cls = getattr(NXOpen, name)
    if inspect.isclass(cls):
        print(f"{name}: {cls.__doc__}")

# Discover methods on a specific class
for method in dir(NXOpen.Part):
    print(f"{method}: {getattr(NXOpen.Part, method).__doc__}")
```
**Advantages**:
- ✅ No external dependencies
- ✅ Always up-to-date with the installed NX version
- ✅ Includes method signatures automatically

**Disadvantages**:
- ❌ Limited documentation (docstrings are often minimal)
- ❌ No usage examples
- ❌ Requires NX to be running
### Strategy 4: Hybrid Approach (BEST COMBINATION) 🏆

**Combine all strategies for maximum effectiveness**:

**Phase 1 (Immediate)**: Journal Recording + pyNastran
1. **For NXOpen**:
   - User records journals for needed operations
   - Atomizer learns from the recorded code
   - Builds the knowledge base organically

2. **For Result Extraction**:
   - Use the pyNastran docs (publicly accessible!)
   - WebFetch documentation as needed
   - Auto-generate OP2 extraction code

**Phase 2 (Short Term)**: Pattern Library + Introspection
1. **Knowledge Base Growth**:
   - Store learned patterns from journals
   - Categorize by domain (FEM, geometry, properties, etc.)
   - Add examples and parameter descriptions

2. **Python Introspection**:
   - Supplement journal learning with introspection
   - Discover available methods automatically
   - Validate generated code against signatures

**Phase 3 (Future)**: MCP Server + Full Automation
1. **MCP Integration**:
   - Build an MCP server for documentation lookup
   - Index the knowledge base for fast retrieval
   - Integrate the NXOpen TSE resources

2. **Full Automation**:
   - Auto-generate code for any request
   - Self-learn from successful executions
   - Continuous improvement through usage

**This is the winning strategy!**
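The resolution order the hybrid approach implies can be pinned down in a few lines. `kb` (a dict of learned patterns) and `introspector` below are hypothetical stand-ins for the components described above, not existing Atomizer objects:

```python
def resolve_capability(feature: str, kb: dict, introspector) -> str:
    """Hybrid resolution order: knowledge base first, then introspection,
    then fall back to asking the user to record a journal."""
    pattern = kb.get(feature)
    if pattern:
        return pattern  # Phase 2: reuse a learned journal pattern
    methods = introspector.find_methods_for_task(feature)
    if methods:
        return f"# skeleton from introspection: {methods[0]}"
    # Phase 1 fallback: grow the knowledge base organically
    return f"# TODO: record an NX journal for '{feature}' and run learn_from_journal()"
```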
## Recommended Immediate Implementation

### Step 1: Python Introspection Module

Create `optimization_engine/nxopen_introspector.py`:
```python
from typing import Any, Dict, List

class NXOpenIntrospector:
    def get_module_docs(self, module_path: str) -> Dict[str, Any]:
        """Get all classes/methods from an NXOpen module"""

    def find_methods_for_task(self, task_description: str) -> List[str]:
        """Use the LLM to match a task to NXOpen methods"""

    def generate_code_skeleton(self, method_name: str) -> str:
        """Generate a code template from a method signature"""
```
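As a deterministic fallback for `find_methods_for_task()`, introspected method names can be ranked by keyword overlap with the task description before involving the LLM. This is a sketch under that assumption; the real module would delegate the final matching to the LLM:

```python
import re

def find_methods_for_task(task_description: str, available_methods: list) -> list:
    """Rank method names by keyword overlap with the task description."""
    words = set(re.findall(r"[a-z]+", task_description.lower()))

    def score(method: str) -> int:
        # Split CamelCase method names into lowercase tokens
        tokens = set(t.lower() for t in re.findall(r"[A-Z][a-z]*|[a-z]+", method))
        return len(words & tokens)

    ranked = sorted(available_methods, key=score, reverse=True)
    return [m for m in ranked if score(m) > 0]
```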
### Step 2: Knowledge Base Structure

```
knowledge_base/
├── nxopen_patterns/
│   ├── geometry/
│   │   ├── create_part.md
│   │   ├── modify_expression.md
│   │   └── update_parameter.md
│   ├── fea_properties/
│   │   ├── modify_pcomp.md
│   │   ├── modify_cbar.md
│   │   └── modify_cbush.md
│   ├── materials/
│   │   └── create_material.md
│   └── simulation/
│       ├── run_solve.md
│       └── check_solution.md
└── pynastran_patterns/
    ├── op2_extraction/
    │   ├── stress_extraction.md
    │   ├── displacement_extraction.md
    │   └── element_forces.md
    └── bdf_modification/
        └── property_updates.md
```
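Loading this tree into memory is straightforward. A sketch that indexes every pattern file by `<category>/<domain>/<name>`, assuming only the directory layout above:

```python
from pathlib import Path

def load_patterns(kb_root: str = "knowledge_base") -> dict:
    """Index every pattern markdown file by its path relative to the
    knowledge base root, without the .md suffix."""
    patterns = {}
    root = Path(kb_root)
    for md in root.glob("**/*.md"):
        key = md.relative_to(root).with_suffix("").as_posix()
        patterns[key] = md.read_text(encoding="utf-8")
    return patterns
```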
### Step 3: Integration with Research Agent

Update `research_agent.py`:
```python
def research_engineering_feature(self, feature_name: str, domain: str):
    # 1. Check the knowledge base first
    kb_result = self.search_knowledge_base(feature_name)
    if kb_result:
        return kb_result

    # 2. If not found, use introspection to find candidate methods
    candidate_methods = self.introspector.find_methods_for_task(feature_name)

    # 3. Generate a code skeleton from the best candidate
    code = self.introspector.generate_code_skeleton(candidate_methods[0])

    # 4. Use the LLM to complete the implementation
    full_implementation = self.llm_generate_implementation(code, feature_name)

    # 5. Save to the knowledge base for future use
    self.save_to_knowledge_base(feature_name, full_implementation)
    return full_implementation
```
## Implementation Phases

### Phase 2.8: Inline Code Generator (CURRENT PRIORITY)
**Timeline**: Next 1-2 sessions
**Scope**: Auto-generate simple math operations

**What to Build**:
- `optimization_engine/inline_code_generator.py`
- Takes inline_calculations from the Phase 2.7 LLM output
- Generates Python code directly
- No documentation needed (it's just math!)

**Example**:
```
Input: {
    "action": "normalize_stress",
    "params": {"input": "max_stress", "divisor": 200.0}
}

Output:
norm_stress = max_stress / 200.0
```
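Assuming each inline_calculations entry carries an `action` plus `params` as in the example above, the generator can be little more than a template table. The template names and field names here are illustrative, not the actual Phase 2.7 schema:

```python
def generate_inline_code(calc: dict) -> str:
    """Turn one inline-calculation entry into a Python statement."""
    # Hypothetical action -> statement templates; the real generator
    # would cover the full set of Phase 2.7 actions.
    templates = {
        "normalize_stress": "norm_stress = {input} / {divisor}",
        "normalize_disp": "norm_disp = {input} / {divisor}",
    }
    template = templates.get(calc["action"])
    if template is None:
        raise ValueError(f"No template for action {calc['action']!r}")
    return template.format(**calc["params"])
```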
### Phase 2.9: Post-Processing Hook Generator
**Timeline**: Following Phase 2.8
**Scope**: Generate middleware scripts

**What to Build**:
- `optimization_engine/hook_generator.py`
- Takes post_processing_hooks from the Phase 2.7 LLM output
- Generates standalone Python scripts
- Handles I/O between FEA steps

**Example**:
```
Input: {
    "action": "weighted_objective",
    "params": {
        "inputs": ["norm_stress", "norm_disp"],
        "weights": [0.7, 0.3],
        "formula": "0.7 * norm_stress + 0.3 * norm_disp"
    }
}

Output: a hook script that reads the inputs, calculates the formula, and writes the output
```
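The arithmetic such a generated hook performs can be sketched independently of the file I/O. This shows only the core calculation, under the assumed `inputs`/`weights` structure from the example above:

```python
def weighted_objective(values: dict, weights: dict) -> float:
    """Compute a weighted sum of named responses, e.g.
    0.7 * norm_stress + 0.3 * norm_disp. In the generated hook this
    would be wrapped with reads/writes between FEA steps."""
    return sum(weights[name] * values[name] for name in weights)
```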
### Phase 3: MCP Integration for Documentation
**Timeline**: After Phase 2.9
**Scope**: Automated NXOpen/pyNastran research

**What to Build**:
1. Local documentation cache system
2. MCP server for doc lookup
3. Integration with `research_agent.py`
4. Automated code generation from docs
## Alternative: Community Resources & pyNastran (RECOMMENDED STARTING POINT)

### pyNastran Documentation (START HERE!) 🚀

**URL**: https://pynastran-git.readthedocs.io/en/latest/index.html

**Why Start with pyNastran**:
- ✅ Fully open and publicly accessible
- ✅ Comprehensive API documentation
- ✅ Code examples for every operation
- ✅ Already used extensively in Atomizer
- ✅ Can be WebFetched directly, no authentication needed
- ✅ Covers 80% of FEA result extraction needs

**What pyNastran Handles**:
- OP2 file reading (displacement, stress, strain, element forces)
- F06 file parsing
- BDF/Nastran deck modification
- Result post-processing
- Nodal/element data extraction

**Strategy**: Use pyNastran as the primary documentation source for result extraction, and NXOpen only when modifying geometry/properties in NX.
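As a taste of the extraction helpers this enables: once pyNastran has loaded an OP2 (via its documented `read_op2()` and `op2.displacements[subcase].data` interface), reducing the displacement table to a scalar objective is plain Python. The helper name and record layout below are assumptions for illustration:

```python
import math

def max_translation_magnitude(disp_rows: list) -> float:
    """Largest nodal translation magnitude from per-node records of
    (tx, ty, tz, rx, ry, rz), the column layout of a pyNastran
    displacement table at one time step."""
    return max(math.sqrt(tx * tx + ty * ty + tz * tz)
               for tx, ty, tz, *_ in disp_rows)
```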
### NXOpen Community Resources

1. **NXOpen TSE** (The Scripting Engineer)
   - https://nxopentsedocumentation.thescriptingengineer.com/
   - Extensive examples and tutorials
   - Openly published, so it can be cached locally

2. **GitHub NXOpen Examples**
   - Search GitHub for "NXOpen" plus the specific functionality
   - Real-world code examples
   - Community-vetted patterns
## Next Steps

### Immediate (This Session):
1. ✅ Create this strategy document
2. ✅ Implement Phase 2.8: Inline Code Generator
3. ✅ Test inline code generation (all tests passing!)
4. ⏳ Implement Phase 2.9: Post-Processing Hook Generator
5. ⏳ Integrate pyNastran documentation lookup via WebFetch

### Short Term (Next 2-3 Sessions):
1. Complete Phase 2.9: Hook Generator
2. Build the NXOpenIntrospector module
3. Start curating `knowledge_base/nxopen_patterns/`
4. Test with real optimization scenarios

### Medium Term (Phase 3):
1. Build the local documentation cache
2. Implement the MCP server
3. Integrate automated research
4. Full end-to-end code generation
## Success Metrics

**Phase 2.8 Success**:
- ✅ Auto-generates 100% of inline calculations
- ✅ Correct Python syntax every time
- ✅ Properly handles variable naming

**Phase 2.9 Success**:
- ✅ Auto-generates functional hook scripts
- ✅ Correct I/O handling
- ✅ Integrates with the optimization loop

**Phase 3 Success**:
- ✅ Automatically finds the correct NXOpen methods
- ✅ Generates working code 80%+ of the time
- ✅ Self-learns from successful patterns
## Conclusion

**Recommended Path Forward**:
1. Focus on Phases 2.8-2.9 first (inline calculations + hooks)
2. Build the knowledge base organically as we encounter patterns
3. Use Python introspection for discovery
4. Build the MCP server once we have a critical mass of patterns

This approach:
- ✅ Delivers value incrementally
- ✅ Requires no external dependencies initially
- ✅ Builds towards full automation
- ✅ Leverages both LLM intelligence and structured knowledge

**The documentation will come to us through usage, not upfront scraping!**