feat: Add Studio UI, intake system, and extractor improvements

Dashboard:
- Add Studio page with drag-drop model upload and Claude chat
- Add intake system for study creation workflow
- Improve session manager and context builder
- Add intake API routes and frontend components

Optimization Engine:
- Add CLI module for command-line operations
- Add intake module for study preprocessing
- Add validation module with gate checks
- Improve Zernike extractor documentation
- Update spec models with better validation
- Enhance solve_simulation robustness

Documentation:
- Add ATOMIZER_STUDIO.md planning doc
- Add ATOMIZER_UX_SYSTEM.md for UX patterns
- Update extractor library docs
- Add study-readme-generator skill

Tools:
- Add test scripts for extraction validation
- Add Zernike recentering test

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 12:02:30 -05:00
parent 3193831340
commit a26914bbe8
56 changed files with 14173 additions and 646 deletions


@@ -1,7 +1,7 @@
---
skill_id: SKILL_001
-version: 2.4
-last_updated: 2025-12-31
+version: 2.5
+last_updated: 2026-01-22
type: reference
code_dependencies:
- optimization_engine/extractors/__init__.py
@@ -14,8 +14,8 @@ requires_skills:
# Atomizer Quick Reference Cheatsheet
-**Version**: 2.4
-**Updated**: 2025-12-31
+**Version**: 2.5
+**Updated**: 2026-01-22
**Purpose**: Rapid lookup for common operations. "I want X → Use Y"
---
@@ -37,6 +37,8 @@ requires_skills:
| **Use SAT (Self-Aware Turbo)** | **SYS_16** | SAT v3 for high-efficiency neural-accelerated optimization |
| Generate physics insight | SYS_17 | `python -m optimization_engine.insights generate <study>` |
| **Manage knowledge/playbook** | **SYS_18** | `from optimization_engine.context import AtomizerPlaybook` |
+| **Automate dev tasks** | **DevLoop** | `python tools/devloop_cli.py start "task"` |
+| **Test dashboard UI** | **DevLoop** | `python tools/devloop_cli.py browser --level full` |
---
@@ -678,6 +680,67 @@ feedback.process_trial_result(
---
## DevLoop Quick Reference
Closed-loop development system using AI agents + Playwright testing.
### CLI Commands
| Task | Command |
|------|---------|
| Full dev cycle | `python tools/devloop_cli.py start "Create new study"` |
| Plan only | `python tools/devloop_cli.py plan "Fix validation"` |
| Implement plan | `python tools/devloop_cli.py implement` |
| Test study files | `python tools/devloop_cli.py test --study support_arm` |
| Analyze failures | `python tools/devloop_cli.py analyze` |
| Browser smoke test | `python tools/devloop_cli.py browser` |
| Browser full tests | `python tools/devloop_cli.py browser --level full` |
| Check status | `python tools/devloop_cli.py status` |
| Quick test | `python tools/devloop_cli.py quick` |
### Browser Test Levels
| Level | Description | Tests |
|-------|-------------|-------|
| `quick` | Smoke test (page loads) | 1 |
| `home` | Home page verification | 2 |
| `full` | All UI + study tests | 5+ |
| `study` | Canvas/dashboard for specific study | 3 |
### State Files (`.devloop/`)
| File | Purpose |
|------|---------|
| `current_plan.json` | Current implementation plan |
| `test_results.json` | Filesystem/API test results |
| `browser_test_results.json` | Playwright test results |
| `analysis.json` | Failure analysis |
### Prerequisites
```bash
# Start backend
cd atomizer-dashboard/backend && python -m uvicorn api.main:app --reload --port 8000
# Start frontend
cd atomizer-dashboard/frontend && npm run dev
# Install Playwright (once)
cd atomizer-dashboard/frontend && npx playwright install chromium
```
### Standalone Playwright Tests
```bash
cd atomizer-dashboard/frontend
npm run test:e2e # Run all E2E tests
npm run test:e2e:ui # Playwright UI mode
```
**Full documentation**: `docs/guides/DEVLOOP.md`
---
## Report Generation Quick Reference (OP_08)
Generate comprehensive study reports from optimization data.


@@ -0,0 +1,206 @@
# Study README Generator Skill
**Skill ID**: STUDY_README_GENERATOR
**Version**: 1.0
**Purpose**: Generate intelligent, context-aware README.md files for optimization studies
## When to Use
This skill is invoked during the study intake workflow when any of the following occurs:
1. A study moves from `introspected` to `configured` status
2. User explicitly requests README generation
3. Finalizing a study from the inbox
## Input Context
The README generator receives:
```json
{
"study_name": "bracket_mass_opt_v1",
"topic": "Brackets",
"description": "User's description from intake form",
"spec": { /* Full AtomizerSpec v2.0 */ },
"introspection": {
"expressions": [...],
"mass_kg": 1.234,
"solver_type": "NX_Nastran"
},
"context_files": {
"goals.md": "User's goals markdown content",
"notes.txt": "Any additional notes"
}
}
```
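A consumer of this payload might sanity-check the required top-level keys before generation. A hypothetical helper (key names taken from the example above):

```python
REQUIRED_KEYS = ("study_name", "topic", "description", "spec", "introspection")

def missing_intake_keys(context: dict) -> list:
    """Return the required top-level keys absent from an intake context."""
    return [key for key in REQUIRED_KEYS if key not in context]
```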
## Output Format
Generate a README.md with these sections:
### 1. Title & Overview
```markdown
# {Study Name}
**Topic**: {Topic}
**Created**: {Date}
**Status**: {Status}
{One paragraph executive summary of the optimization goal}
```
### 2. Engineering Problem
```markdown
## Engineering Problem
{Describe the physical problem being solved}
### Model Description
- **Geometry**: {Describe the part/assembly}
- **Material**: {If known from introspection}
- **Baseline Mass**: {mass_kg} kg
### Loading Conditions
{Describe loads and boundary conditions if available}
```
### 3. Optimization Formulation
```markdown
## Optimization Formulation
### Design Variables ({count})
| Variable | Expression | Range | Units |
|----------|------------|-------|-------|
| {name} | {expr_name} | [{min}, {max}] | {units} |
### Objectives ({count})
| Objective | Direction | Weight | Source |
|-----------|-----------|--------|--------|
| {name} | {direction} | {weight} | {extractor} |
### Constraints ({count})
| Constraint | Condition | Threshold | Type |
|------------|-----------|-----------|------|
| {name} | {operator} | {threshold} | {type} |
```
### 4. Methodology
```markdown
## Methodology
### Algorithm
- **Primary**: {algorithm_type}
- **Max Trials**: {max_trials}
- **Surrogate**: {if enabled}
### Physics Extraction
{Describe extractors used}
### Convergence Criteria
{Describe stopping conditions}
```
### 5. Expected Outcomes
```markdown
## Expected Outcomes
Based on the optimization setup:
- Expected improvement: {estimate if baseline available}
- Key trade-offs: {identify from objectives/constraints}
- Risk factors: {any warnings from validation}
```
## Generation Guidelines
1. **Be Specific**: Use actual values from the spec, not placeholders
2. **Be Concise**: Engineers don't want to read novels
3. **Be Accurate**: Only state facts that can be verified from input
4. **Be Helpful**: Include insights that aid understanding
5. **No Fluff**: Avoid marketing language or excessive praise
## Claude Prompt Template
```
You are generating a README.md for an FEA optimization study.
CONTEXT:
{json_context}
RULES:
1. Use the actual data provided - never use placeholder values
2. Write in technical engineering language appropriate for structural engineers
3. Keep each section concise but complete
4. If information is missing, note it as "TBD" or skip the section
5. Include physical units wherever applicable
6. Format tables properly with alignment
Generate the README.md content:
```
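Filling the template amounts to serializing the context into the `{json_context}` slot. A minimal sketch (the real service builds a richer prompt; the template text here is abbreviated):

```python
import json

TEMPLATE = (
    "You are generating a README.md for an FEA optimization study.\n"
    "CONTEXT:\n{json_context}\n"
    "Generate the README.md content:"
)

def build_readme_prompt(context: dict) -> str:
    """Substitute the study context JSON into the skill's prompt template."""
    return TEMPLATE.format(json_context=json.dumps(context, indent=2, default=str))
```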
## Example Output
```markdown
# Bracket Mass Optimization V1
**Topic**: Simple_Bracket
**Created**: 2026-01-22
**Status**: Configured
Optimize the mass of a structural L-bracket while maintaining stress below yield and displacement within tolerance.
## Engineering Problem
### Model Description
- **Geometry**: L-shaped mounting bracket with web and flange
- **Material**: Steel (assumed based on typical applications)
- **Baseline Mass**: 0.847 kg
### Loading Conditions
Static loading with force applied at mounting holes. Fixed constraints at base.
## Optimization Formulation
### Design Variables (3)
| Variable | Expression | Range | Units |
|----------|------------|-------|-------|
| Web Thickness | web_thickness | [2.0, 10.0] | mm |
| Flange Width | flange_width | [15.0, 40.0] | mm |
| Fillet Radius | fillet_radius | [2.0, 8.0] | mm |
### Objectives (1)
| Objective | Direction | Weight | Source |
|-----------|-----------|--------|--------|
| Total Mass | minimize | 1.0 | mass_extractor |
### Constraints (1)
| Constraint | Condition | Threshold | Type |
|------------|-----------|-----------|------|
| Max Stress | <= | 250 MPa | hard |
## Methodology
### Algorithm
- **Primary**: TPE (Tree-structured Parzen Estimator)
- **Max Trials**: 100
- **Surrogate**: Disabled
### Physics Extraction
- Mass: Extracted from NX expression `total_mass`
- Stress: Von Mises stress from SOL101 static analysis
### Convergence Criteria
- Max trials: 100
- Early stopping: 20 trials without improvement
## Expected Outcomes
Based on the optimization setup:
- Expected improvement: 15-30% mass reduction (typical for thickness optimization)
- Key trade-offs: Mass vs. stress margin
- Risk factors: None identified
```
## Integration Points
- **Backend**: `api/services/claude_readme.py` invokes the Claude Code CLI with this prompt
- **Endpoint**: `POST /api/intake/{study_name}/readme`
- **Trigger**: Automatic on status transition to `configured`
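For reference, triggering generation over HTTP is a single POST to the intake route. A hypothetical client-side sketch (the base URL assumes the local dev server on port 8000):

```python
def readme_endpoint(study_name: str, base_url: str = "http://localhost:8000") -> str:
    """Build the URL of the README generation endpoint for a study."""
    return f"{base_url}/api/intake/{study_name}/readme"

# e.g. requests.post(readme_endpoint("bracket_mass_opt_v1"))
```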


@@ -7,6 +7,10 @@
"ATOMIZER_MODE": "user",
"ATOMIZER_ROOT": "C:/Users/antoi/Atomizer"
}
},
"nxopen-docs": {
"command": "C:/Users/antoi/CADtomaste/Atomaste-NXOpen-MCP/.venv/Scripts/python.exe",
"args": ["-m", "nxopen_mcp.server", "--data-dir", "C:/Users/antoi/CADtomaste/Atomaste-NXOpen-MCP/data"]
}
}
}


@@ -13,7 +13,19 @@ import sys
# Add parent directory to path to import optimization_engine
sys.path.append(str(Path(__file__).parent.parent.parent.parent))
-from api.routes import optimization, claude, terminal, insights, context, files, nx, claude_code, spec
+from api.routes import (
+optimization,
+claude,
+terminal,
+insights,
+context,
+files,
+nx,
+claude_code,
+spec,
+devloop,
+intake,
+)
from api.websocket import optimization_stream
@@ -23,6 +35,7 @@ async def lifespan(app: FastAPI):
"""Manage application lifespan - start/stop session manager"""
# Startup
from api.routes.claude import get_session_manager
manager = get_session_manager()
await manager.start()
print("Session manager started")
@@ -63,6 +76,9 @@ app.include_router(nx.router, prefix="/api/nx", tags=["nx"])
app.include_router(claude_code.router, prefix="/api", tags=["claude-code"])
app.include_router(spec.router, prefix="/api", tags=["spec"])
app.include_router(spec.validate_router, prefix="/api", tags=["spec"])
app.include_router(devloop.router, prefix="/api", tags=["devloop"])
app.include_router(intake.router, prefix="/api", tags=["intake"])
@app.get("/")
async def root():
@@ -70,11 +86,13 @@ async def root():
dashboard_path = Path(__file__).parent.parent.parent / "dashboard-enhanced.html"
return FileResponse(dashboard_path)
@app.get("/health")
async def health_check():
"""Health check endpoint with database status"""
try:
from api.services.conversation_store import ConversationStore
store = ConversationStore()
# Test database by creating/getting a health check session
store.get_session("health_check")
@@ -87,12 +105,8 @@ async def health_check():
"database": db_status,
}
if __name__ == "__main__":
import uvicorn
-uvicorn.run(
-"main:app",
-host="0.0.0.0",
-port=8000,
-reload=True,
-log_level="info"
-)
+uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True, log_level="info")

File diff suppressed because it is too large


@@ -245,17 +245,45 @@ def _get_study_error_info(study_dir: Path, results_dir: Path) -> dict:
def _load_study_info(study_dir: Path, topic: Optional[str] = None) -> Optional[dict]:
"""Load study info from a study directory. Returns None if not a valid study."""
# Look for optimization config (check multiple locations)
-config_file = study_dir / "optimization_config.json"
-if not config_file.exists():
-config_file = study_dir / "1_setup" / "optimization_config.json"
-if not config_file.exists():
# Look for config file - prefer atomizer_spec.json (v2.0), fall back to legacy optimization_config.json
config_file = None
is_atomizer_spec = False
# Check for AtomizerSpec v2.0 first
for spec_path in [
study_dir / "atomizer_spec.json",
study_dir / "1_setup" / "atomizer_spec.json",
]:
if spec_path.exists():
config_file = spec_path
is_atomizer_spec = True
break
# Fall back to legacy optimization_config.json
if config_file is None:
for legacy_path in [
study_dir / "optimization_config.json",
study_dir / "1_setup" / "optimization_config.json",
]:
if legacy_path.exists():
config_file = legacy_path
break
if config_file is None:
return None
# Load config
with open(config_file) as f:
config = json.load(f)
# Normalize AtomizerSpec v2.0 to legacy format for compatibility
if is_atomizer_spec and "meta" in config:
# Extract study_name and description from meta
meta = config.get("meta", {})
config["study_name"] = meta.get("study_name", study_dir.name)
config["description"] = meta.get("description", "")
config["version"] = meta.get("version", "2.0")
# Check if results directory exists (support both 2_results and 3_results)
results_dir = study_dir / "2_results"
if not results_dir.exists():
@@ -311,12 +339,21 @@ def _load_study_info(study_dir: Path, topic: Optional[str] = None) -> Optional[d
best_trial = min(history, key=lambda x: x["objective"])
best_value = best_trial["objective"]
-# Get total trials from config (supports both formats)
-total_trials = (
-config.get("optimization_settings", {}).get("n_trials")
-or config.get("optimization", {}).get("n_trials")
-or config.get("trials", {}).get("n_trials", 50)
-)
# Get total trials from config (supports AtomizerSpec v2.0 and legacy formats)
total_trials = None
# AtomizerSpec v2.0: optimization.budget.max_trials
if is_atomizer_spec:
total_trials = config.get("optimization", {}).get("budget", {}).get("max_trials")
# Legacy formats
if total_trials is None:
total_trials = (
config.get("optimization_settings", {}).get("n_trials")
or config.get("optimization", {}).get("n_trials")
or config.get("optimization", {}).get("max_trials")
or config.get("trials", {}).get("n_trials", 100)
)
# Get accurate status using process detection
status = get_accurate_study_status(study_dir.name, trial_count, total_trials, has_db)
@@ -380,7 +417,12 @@ async def list_studies():
continue
# Check if this is a study (flat structure) or a topic folder (nested structure)
-is_study = (item / "1_setup").exists() or (item / "optimization_config.json").exists()
+# Support both AtomizerSpec v2.0 (atomizer_spec.json) and legacy (optimization_config.json)
+is_study = (
+(item / "1_setup").exists()
+or (item / "atomizer_spec.json").exists()
+or (item / "optimization_config.json").exists()
+)
if is_study:
# Flat structure: study directly in studies/
@@ -396,10 +438,12 @@ async def list_studies():
if sub_item.name.startswith("."):
continue
-# Check if this subdirectory is a study
-sub_is_study = (sub_item / "1_setup").exists() or (
-sub_item / "optimization_config.json"
-).exists()
+# Check if this subdirectory is a study (AtomizerSpec v2.0 or legacy)
+sub_is_study = (
+(sub_item / "1_setup").exists()
+or (sub_item / "atomizer_spec.json").exists()
+or (sub_item / "optimization_config.json").exists()
+)
if sub_is_study:
study_info = _load_study_info(sub_item, topic=item.name)
if study_info:
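The lookup order implemented above — `atomizer_spec.json` first (top level, then `1_setup/`), legacy `optimization_config.json` as a fallback — can be sketched as a standalone function (simplified, not the route code itself):

```python
from pathlib import Path

def find_study_config(study_dir: Path):
    """Return (config_path, is_atomizer_spec), or (None, False) if no config exists."""
    for name, is_spec in (("atomizer_spec.json", True), ("optimization_config.json", False)):
        # Check the study root first, then the 1_setup/ subfolder
        for candidate in (study_dir / name, study_dir / "1_setup" / name):
            if candidate.exists():
                return candidate, is_spec
    return None, False
```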


@@ -0,0 +1,396 @@
"""
Claude README Generator Service
Generates intelligent README.md files for optimization studies
using Claude Code CLI (not API) with study context from AtomizerSpec.
"""
import asyncio
import json
import subprocess
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, Optional
# Base directory
ATOMIZER_ROOT = Path(__file__).parent.parent.parent.parent.parent
# Load skill prompt
SKILL_PATH = ATOMIZER_ROOT / ".claude" / "skills" / "modules" / "study-readme-generator.md"
def load_skill_prompt() -> str:
"""Load the README generator skill prompt."""
if SKILL_PATH.exists():
return SKILL_PATH.read_text(encoding="utf-8")
return ""
class ClaudeReadmeGenerator:
"""Generate README.md files using Claude Code CLI."""
def __init__(self):
self.skill_prompt = load_skill_prompt()
def generate_readme(
self,
study_name: str,
spec: Dict[str, Any],
context_files: Optional[Dict[str, str]] = None,
topic: Optional[str] = None,
) -> str:
"""
Generate a README.md for a study using Claude Code CLI.
Args:
study_name: Name of the study
spec: Full AtomizerSpec v2.0 dict
context_files: Optional dict of {filename: content} for context
topic: Optional topic folder name
Returns:
Generated README.md content
"""
# Build context for Claude
context = self._build_context(study_name, spec, context_files, topic)
# Build the prompt
prompt = self._build_prompt(context)
try:
# Run Claude Code CLI synchronously
result = self._run_claude_cli(prompt)
# Extract markdown content from response
readme_content = self._extract_markdown(result)
if readme_content:
return readme_content
# If no markdown found, return the whole response
return result if result else self._generate_fallback_readme(study_name, spec)
except Exception as e:
print(f"Claude CLI error: {e}")
return self._generate_fallback_readme(study_name, spec)
async def generate_readme_async(
self,
study_name: str,
spec: Dict[str, Any],
context_files: Optional[Dict[str, str]] = None,
topic: Optional[str] = None,
) -> str:
"""Async version of generate_readme."""
# Run in thread pool to not block
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None, lambda: self.generate_readme(study_name, spec, context_files, topic)
)
def _run_claude_cli(self, prompt: str) -> str:
"""Run Claude Code CLI and get response."""
try:
# Use claude CLI with --print flag for non-interactive output
result = subprocess.run(
["claude", "--print", prompt],
capture_output=True,
text=True,
timeout=120, # 2 minute timeout
cwd=str(ATOMIZER_ROOT),
)
if result.returncode != 0:
error_msg = result.stderr or "Unknown error"
raise Exception(f"Claude CLI error: {error_msg}")
return result.stdout.strip()
except subprocess.TimeoutExpired:
raise Exception("Request timed out")
except FileNotFoundError:
raise Exception("Claude CLI not found. Make sure 'claude' is in PATH.")
def _build_context(
self,
study_name: str,
spec: Dict[str, Any],
context_files: Optional[Dict[str, str]],
topic: Optional[str],
) -> Dict[str, Any]:
"""Build the context object for Claude."""
meta = spec.get("meta", {})
model = spec.get("model", {})
introspection = model.get("introspection", {}) or {}
context = {
"study_name": study_name,
"topic": topic or meta.get("topic", "Other"),
"description": meta.get("description", ""),
"created": meta.get("created", datetime.now().isoformat()),
"status": meta.get("status", "draft"),
"design_variables": spec.get("design_variables", []),
"extractors": spec.get("extractors", []),
"objectives": spec.get("objectives", []),
"constraints": spec.get("constraints", []),
"optimization": spec.get("optimization", {}),
"introspection": {
"mass_kg": introspection.get("mass_kg"),
"volume_mm3": introspection.get("volume_mm3"),
"solver_type": introspection.get("solver_type"),
"expressions": introspection.get("expressions", []),
"expressions_count": len(introspection.get("expressions", [])),
},
"model_files": {
"sim": model.get("sim", {}).get("path") if model.get("sim") else None,
"prt": model.get("prt", {}).get("path") if model.get("prt") else None,
"fem": model.get("fem", {}).get("path") if model.get("fem") else None,
},
}
# Add context files if provided
if context_files:
context["context_files"] = context_files
return context
def _build_prompt(self, context: Dict[str, Any]) -> str:
"""Build the prompt for Claude CLI."""
# Build context files section if available
context_files_section = ""
if context.get("context_files"):
context_files_section = "\n\n## User-Provided Context Files\n\nIMPORTANT: Use this information to understand the optimization goals, design variables, objectives, and constraints:\n\n"
for filename, content in context.get("context_files", {}).items():
context_files_section += f"### {filename}\n```\n{content}\n```\n\n"
# Remove context_files from JSON dump to avoid duplication
context_for_json = {k: v for k, v in context.items() if k != "context_files"}
prompt = f"""Generate a README.md for this FEA optimization study.
## Study Technical Data
```json
{json.dumps(context_for_json, indent=2, default=str)}
```
{context_files_section}
## Requirements
1. Use the EXACT values from the technical data - no placeholders
2. If context files are provided, extract:
- Design variable bounds (min/max)
- Optimization objectives (minimize/maximize what)
- Constraints (stress limits, etc.)
- Any specific requirements mentioned
3. Format the README with these sections:
- Title (# Study Name)
- Overview (topic, date, status, description from context)
- Engineering Problem (what we're optimizing and why - from context files)
- Model Information (mass, solver, files)
- Design Variables (if context specifies bounds, include them in a table)
- Optimization Objectives (from context files)
- Constraints (from context files)
- Expressions Found (table of discovered expressions, highlight candidates)
- Next Steps (what needs to be configured)
4. Keep it professional and concise
5. Use proper markdown table formatting
6. Include units where applicable
7. For expressions table, show: name, value, units, is_candidate
Generate ONLY the README.md content in markdown format, no explanations:"""
return prompt
def _extract_markdown(self, response: str) -> Optional[str]:
"""Extract markdown content from Claude response."""
if not response:
return None
# If response starts with #, it's already markdown
if response.strip().startswith("#"):
return response.strip()
# Try to find markdown block
if "```markdown" in response:
start = response.find("```markdown") + len("```markdown")
end = response.find("```", start)
if end > start:
return response[start:end].strip()
if "```md" in response:
start = response.find("```md") + len("```md")
end = response.find("```", start)
if end > start:
return response[start:end].strip()
# Look for first # heading
lines = response.split("\n")
for i, line in enumerate(lines):
if line.strip().startswith("# "):
return "\n".join(lines[i:]).strip()
return None
def _generate_fallback_readme(self, study_name: str, spec: Dict[str, Any]) -> str:
"""Generate a basic README if Claude fails."""
meta = spec.get("meta", {})
model = spec.get("model", {})
introspection = model.get("introspection", {}) or {}
dvs = spec.get("design_variables", [])
objs = spec.get("objectives", [])
cons = spec.get("constraints", [])
opt = spec.get("optimization", {})
expressions = introspection.get("expressions", [])
lines = [
f"# {study_name.replace('_', ' ').title()}",
"",
f"**Topic**: {meta.get('topic', 'Other')}",
f"**Created**: {meta.get('created', 'Unknown')[:10] if meta.get('created') else 'Unknown'}",
f"**Status**: {meta.get('status', 'draft')}",
"",
]
if meta.get("description"):
lines.extend([meta["description"], ""])
# Model Information
lines.extend(
[
"## Model Information",
"",
]
)
if introspection.get("mass_kg"):
lines.append(f"- **Mass**: {introspection['mass_kg']:.2f} kg")
sim_path = model.get("sim", {}).get("path") if model.get("sim") else None
if sim_path:
lines.append(f"- **Simulation**: {sim_path}")
lines.append("")
# Expressions Found
if expressions:
lines.extend(
[
"## Expressions Found",
"",
"| Name | Value | Units | Candidate |",
"|------|-------|-------|-----------|",
]
)
for expr in expressions:
is_candidate = "Yes" if expr.get("is_candidate") else "No"
value = f"{expr.get('value', '-')}"
units = expr.get("units", "-")
lines.append(f"| {expr.get('name', '-')} | {value} | {units} | {is_candidate} |")
lines.append("")
# Design Variables (if configured)
if dvs:
lines.extend(
[
"## Design Variables",
"",
"| Variable | Expression | Range | Units |",
"|----------|------------|-------|-------|",
]
)
for dv in dvs:
bounds = dv.get("bounds", {})
units = dv.get("units", "-")
lines.append(
f"| {dv.get('name', 'Unknown')} | "
f"{dv.get('expression_name', '-')} | "
f"[{bounds.get('min', '-')}, {bounds.get('max', '-')}] | "
f"{units} |"
)
lines.append("")
# Objectives
if objs:
lines.extend(
[
"## Objectives",
"",
"| Objective | Direction | Weight |",
"|-----------|-----------|--------|",
]
)
for obj in objs:
lines.append(
f"| {obj.get('name', 'Unknown')} | "
f"{obj.get('direction', 'minimize')} | "
f"{obj.get('weight', 1.0)} |"
)
lines.append("")
# Constraints
if cons:
lines.extend(
[
"## Constraints",
"",
"| Constraint | Condition | Threshold |",
"|------------|-----------|-----------|",
]
)
for con in cons:
lines.append(
f"| {con.get('name', 'Unknown')} | "
f"{con.get('operator', '<=')} | "
f"{con.get('threshold', '-')} |"
)
lines.append("")
# Algorithm
algo = opt.get("algorithm", {})
budget = opt.get("budget", {})
lines.extend(
[
"## Methodology",
"",
f"- **Algorithm**: {algo.get('type', 'TPE')}",
f"- **Max Trials**: {budget.get('max_trials', 100)}",
"",
]
)
# Next Steps
lines.extend(
[
"## Next Steps",
"",
]
)
if not dvs:
lines.append("- [ ] Configure design variables from discovered expressions")
if not objs:
lines.append("- [ ] Define optimization objectives")
if not dvs and not objs:
lines.append("- [ ] Open in Canvas Builder to complete configuration")
else:
lines.append("- [ ] Run baseline solve to validate setup")
lines.append("- [ ] Finalize study to move to studies folder")
lines.append("")
return "\n".join(lines)
# Singleton instance
_generator: Optional[ClaudeReadmeGenerator] = None
def get_readme_generator() -> ClaudeReadmeGenerator:
"""Get the singleton README generator instance."""
global _generator
if _generator is None:
_generator = ClaudeReadmeGenerator()
return _generator


@@ -26,6 +26,7 @@ class ContextBuilder:
study_id: Optional[str] = None,
conversation_history: Optional[List[Dict[str, Any]]] = None,
canvas_state: Optional[Dict[str, Any]] = None,
spec_path: Optional[str] = None,
) -> str:
"""
Build full system prompt with context.
@@ -35,6 +36,7 @@ class ContextBuilder:
study_id: Optional study name to provide context for
conversation_history: Optional recent messages for continuity
canvas_state: Optional canvas state (nodes, edges) from the UI
spec_path: Optional path to the atomizer_spec.json file
Returns:
Complete system prompt string
@@ -45,7 +47,7 @@ class ContextBuilder:
if canvas_state:
node_count = len(canvas_state.get("nodes", []))
print(f"[ContextBuilder] Including canvas context with {node_count} nodes")
-parts.append(self._canvas_context(canvas_state))
+parts.append(self._canvas_context(canvas_state, spec_path))
else:
print("[ContextBuilder] No canvas state provided")
@@ -57,7 +59,7 @@ class ContextBuilder:
if conversation_history:
parts.append(self._conversation_context(conversation_history))
-parts.append(self._mode_instructions(mode))
+parts.append(self._mode_instructions(mode, spec_path))
return "\n\n---\n\n".join(parts)
@@ -298,7 +300,7 @@ Important guidelines:
return context
-def _canvas_context(self, canvas_state: Dict[str, Any]) -> str:
+def _canvas_context(self, canvas_state: Dict[str, Any], spec_path: Optional[str] = None) -> str:
"""
Build context from canvas state (nodes and edges).
@@ -317,6 +319,8 @@ Important guidelines:
context += f"**Study Name**: {study_name}\n"
if study_path:
context += f"**Study Path**: {study_path}\n"
if spec_path:
context += f"**Spec File**: `{spec_path}`\n"
context += "\n"
# Group nodes by type
@@ -438,61 +442,100 @@ Important guidelines:
context += f"Total edges: {len(edges)}\n"
context += "Flow: Design Variables → Model → Solver → Extractors → Objectives/Constraints → Algorithm\n\n"
# Canvas modification instructions
context += """## Canvas Modification Tools
**For AtomizerSpec v2.0 studies (preferred):**
Use spec tools when working with v2.0 studies (check if study uses `atomizer_spec.json`):
- `spec_modify` - Modify spec values using JSONPath (e.g., "design_variables[0].bounds.min")
- `spec_add_node` - Add design variables, extractors, objectives, or constraints
- `spec_remove_node` - Remove nodes from the spec
- `spec_add_custom_extractor` - Add a Python-based custom extractor function
**For Legacy Canvas (optimization_config.json):**
- `canvas_add_node` - Add a new node (designVar, extractor, objective, constraint)
- `canvas_update_node` - Update node properties (bounds, weights, names)
- `canvas_remove_node` - Remove a node from the canvas
- `canvas_connect_nodes` - Create an edge between nodes
**Example user requests you can handle:**
- "Add a design variable called hole_diameter with range 5-15 mm" → Use spec_add_node or canvas_add_node
- "Change the weight of wfe_40_20 to 8" → Use spec_modify or canvas_update_node
- "Remove the constraint node" → Use spec_remove_node or canvas_remove_node
- "Add a custom extractor that computes stress ratio" → Use spec_add_custom_extractor
Always respond with confirmation of changes made to the canvas/spec.
"""
# Instructions will be in _mode_instructions based on spec_path
return context
-def _mode_instructions(self, mode: str) -> str:
+def _mode_instructions(self, mode: str, spec_path: Optional[str] = None) -> str:
"""Mode-specific instructions"""
if mode == "power":
-return """# Power Mode Instructions
+instructions = """# Power Mode Instructions
You have **FULL ACCESS** to modify Atomizer studies. **DO NOT ASK FOR PERMISSION** - just do it.
## Direct Actions (no confirmation needed):
- **Add design variables**: Use `canvas_add_node` or `spec_add_node` with node_type="designVar"
- **Add extractors**: Use `canvas_add_node` with node_type="extractor"
- **Add objectives**: Use `canvas_add_node` with node_type="objective"
- **Add constraints**: Use `canvas_add_node` with node_type="constraint"
- **Update node properties**: Use `canvas_update_node` or `spec_modify`
- **Remove nodes**: Use `canvas_remove_node`
- **Edit atomizer_spec.json directly**: Use the Edit tool
## CRITICAL: How to Modify the Spec
## For custom extractors with Python code:
Use `spec_add_custom_extractor` to add a custom function.
## IMPORTANT:
- You have --dangerously-skip-permissions enabled
- The user has explicitly granted you power mode access
- **ACT IMMEDIATELY** when asked to add/modify/remove things
- Explain what you did AFTER doing it, not before
- Do NOT say "I need permission" - you already have it
Example: If user says "add a volume extractor", immediately use canvas_add_node to add it.
"""
if spec_path:
instructions += f"""**The spec file is at**: `{spec_path}`
When asked to add/modify/remove design variables, extractors, objectives, or constraints:
1. **Read the spec file first** using the Read tool
2. **Edit the spec file** using the Edit tool to make precise changes
3. **Confirm what you changed** in your response
### AtomizerSpec v2.0 Structure
The spec has these main arrays you can modify:
- `design_variables` - Parameters to optimize
- `extractors` - Physics extraction functions
- `objectives` - What to minimize/maximize
- `constraints` - Limits that must be satisfied
### Example: Add a Design Variable
To add a design variable called "thickness" with bounds [1, 10]:
1. Read the spec: `Read({spec_path})`
2. Find the `"design_variables": [...]` array
3. Add a new entry like:
```json
{{
"id": "dv_thickness",
"name": "thickness",
"expression_name": "thickness",
"type": "continuous",
"bounds": {{"min": 1, "max": 10}},
"baseline": 5,
"units": "mm",
"enabled": true
}}
```
4. Use Edit tool to insert this into the array
### Example: Add an Objective
To add a "minimize mass" objective:
```json
{{
"id": "obj_mass",
"name": "mass",
"direction": "minimize",
"weight": 1.0,
"source": {{
"extractor_id": "ext_mass",
"output_name": "mass"
}}
}}
```
### Example: Add an Extractor
To add a mass extractor:
```json
{{
"id": "ext_mass",
"name": "mass",
"type": "mass",
"builtin": true,
"outputs": [{{"name": "mass", "units": "kg"}}]
}}
```
"""
else:
instructions += """No spec file is currently set. Ask the user which study they want to work with.
"""
instructions += """## IMPORTANT Rules:
- You have --dangerously-skip-permissions enabled
- **ACT IMMEDIATELY** when asked to add/modify/remove things
- Use the **Edit** tool to modify the spec file directly
- Generate unique IDs like `dv_<name>`, `ext_<name>`, `obj_<name>`, `con_<name>`
- Explain what you changed AFTER doing it, not before
- Do NOT say "I need permission" - you already have it
"""
return instructions
else:
return """# User Mode Instructions
@@ -503,29 +546,11 @@ You can help with optimization workflows:
- Generate reports
- Explain FEA concepts
**For code modifications**, suggest switching to Power Mode.
**For modifying studies**, the user needs to switch to Power Mode.
Available tools:
- `list_studies`, `get_study_status`, `create_study`
- `run_optimization`, `stop_optimization`, `get_optimization_status`
- `get_trial_data`, `analyze_convergence`, `compare_trials`, `get_best_design`
- `generate_report`, `export_data`
- `explain_physics`, `recommend_method`, `query_extractors`
**AtomizerSpec v2.0 Tools (preferred for new studies):**
- `spec_get` - Get the full AtomizerSpec for a study
- `spec_modify` - Modify spec values using JSONPath (e.g., "design_variables[0].bounds.min")
- `spec_add_node` - Add design variables, extractors, objectives, or constraints
- `spec_remove_node` - Remove nodes from the spec
- `spec_validate` - Validate spec against JSON Schema
- `spec_add_custom_extractor` - Add a Python-based custom extractor function
- `spec_create_from_description` - Create a new study from natural language description
**Canvas Tools (for visual workflow builder):**
- `validate_canvas_intent` - Validate a canvas-generated optimization intent
- `execute_canvas_intent` - Create a study from a canvas intent
- `interpret_canvas_intent` - Analyze intent and provide recommendations
When you receive a message containing "INTENT:" followed by JSON, this is from the Canvas UI.
Parse the intent and use the appropriate canvas tool to process it.
In user mode you can:
- Read and explain study configurations
- Analyze optimization results
- Provide recommendations
- Answer questions about FEA and optimization
"""


@@ -1,11 +1,15 @@
"""
Session Manager
Manages persistent Claude Code sessions with MCP integration.
Manages persistent Claude Code sessions with direct file editing.
Fixed for Windows compatibility - uses subprocess.Popen with ThreadPoolExecutor.
Strategy: Claude edits atomizer_spec.json directly using Edit/Write tools
(no MCP dependency for reliability).
"""
import asyncio
import hashlib
import json
import os
import subprocess
@@ -26,6 +30,10 @@ MCP_SERVER_PATH = ATOMIZER_ROOT / "mcp-server" / "atomizer-tools"
# Thread pool for subprocess operations (Windows compatible)
_executor = ThreadPoolExecutor(max_workers=4)
import logging
logger = logging.getLogger(__name__)
@dataclass
class ClaudeSession:
@@ -130,6 +138,7 @@ class SessionManager:
Send a message to a session and stream the response.
Uses synchronous subprocess.Popen via ThreadPoolExecutor for Windows compatibility.
Claude edits atomizer_spec.json directly using Edit/Write tools (no MCP).
Args:
session_id: The session ID
@@ -147,45 +156,48 @@ class SessionManager:
# Store user message
self.store.add_message(session_id, "user", message)
# Get spec path and hash BEFORE Claude runs (to detect changes)
spec_path = self._get_spec_path(session.study_id) if session.study_id else None
spec_hash_before = self._get_file_hash(spec_path) if spec_path else None
# Build context with conversation history AND canvas state
history = self.store.get_history(session_id, limit=10)
full_prompt = self.context_builder.build(
mode=session.mode,
study_id=session.study_id,
conversation_history=history[:-1],
canvas_state=canvas_state, # Pass canvas state for context
canvas_state=canvas_state,
spec_path=str(spec_path) if spec_path else None, # Tell Claude where the spec is
)
full_prompt += f"\n\nUser: {message}\n\nRespond helpfully and concisely:"
# Build CLI arguments
# Build CLI arguments - NO MCP for reliability
cli_args = ["claude", "--print"]
# Ensure MCP config exists
mcp_config_path = ATOMIZER_ROOT / f".claude-mcp-{session_id}.json"
if not mcp_config_path.exists():
mcp_config = self._build_mcp_config(session.mode)
with open(mcp_config_path, "w") as f:
json.dump(mcp_config, f)
cli_args.extend(["--mcp-config", str(mcp_config_path)])
if session.mode == "user":
cli_args.extend([
"--allowedTools",
"Read Write(**/STUDY_REPORT.md) Write(**/3_results/*.md) Bash(python:*) mcp__atomizer-tools__*"
])
# User mode: limited tools
cli_args.extend(
[
"--allowedTools",
"Read Bash(python:*)",
]
)
else:
# Power mode: full access to edit files
cli_args.append("--dangerously-skip-permissions")
cli_args.append("-") # Read from stdin
full_response = ""
tool_calls: List[Dict] = []
process: Optional[subprocess.Popen] = None
try:
loop = asyncio.get_event_loop()
# Run subprocess in thread pool (Windows compatible)
def run_claude():
nonlocal process
try:
process = subprocess.Popen(
cli_args,
@@ -194,8 +206,8 @@ class SessionManager:
stderr=subprocess.PIPE,
cwd=str(ATOMIZER_ROOT),
text=True,
encoding='utf-8',
errors='replace',
encoding="utf-8",
errors="replace",
)
stdout, stderr = process.communicate(input=full_prompt, timeout=300)
return {
@@ -204,10 +216,13 @@ class SessionManager:
"returncode": process.returncode,
}
except subprocess.TimeoutExpired:
process.kill()
if process:
process.kill()
return {"error": "Response timeout (5 minutes)"}
except FileNotFoundError:
return {"error": "Claude CLI not found in PATH. Install with: npm install -g @anthropic-ai/claude-code"}
return {
"error": "Claude CLI not found in PATH. Install with: npm install -g @anthropic-ai/claude-code"
}
except Exception as e:
return {"error": str(e)}
@@ -219,24 +234,14 @@ class SessionManager:
full_response = result["stdout"] or ""
if full_response:
# Check if response contains canvas modifications (from MCP tools)
import logging
logger = logging.getLogger(__name__)
modifications = self._extract_canvas_modifications(full_response)
logger.info(f"[SEND_MSG] Found {len(modifications)} canvas modifications to send")
for mod in modifications:
logger.info(f"[SEND_MSG] Sending canvas_modification: {mod.get('action')} {mod.get('nodeType')}")
yield {"type": "canvas_modification", "modification": mod}
# Always send the text response
# Always send the text response first
yield {"type": "text", "content": full_response}
if result["returncode"] != 0 and result["stderr"]:
yield {"type": "error", "message": f"CLI error: {result['stderr']}"}
logger.warning(f"[SEND_MSG] CLI stderr: {result['stderr']}")
except Exception as e:
logger.error(f"[SEND_MSG] Exception: {e}")
yield {"type": "error", "message": str(e)}
# Store assistant response
@@ -248,8 +253,46 @@ class SessionManager:
tool_calls=tool_calls if tool_calls else None,
)
# Check if spec was modified by comparing hashes
if spec_path and session.mode == "power" and session.study_id:
spec_hash_after = self._get_file_hash(spec_path)
if spec_hash_before != spec_hash_after:
logger.info("[SEND_MSG] Spec file was modified! Sending update.")
spec_update = await self._check_spec_updated(session.study_id)
if spec_update:
yield {
"type": "spec_updated",
"spec": spec_update,
"tool": "direct_edit",
"reason": "Claude modified spec file directly",
}
yield {"type": "done", "tool_calls": tool_calls}
def _get_spec_path(self, study_id: str) -> Optional[Path]:
"""Get the atomizer_spec.json path for a study."""
if not study_id:
return None
if study_id.startswith("draft_"):
spec_path = ATOMIZER_ROOT / "studies" / "_inbox" / study_id / "atomizer_spec.json"
else:
spec_path = ATOMIZER_ROOT / "studies" / study_id / "atomizer_spec.json"
if not spec_path.exists():
spec_path = ATOMIZER_ROOT / "studies" / study_id / "1_setup" / "atomizer_spec.json"
return spec_path if spec_path.exists() else None
def _get_file_hash(self, path: Optional[Path]) -> Optional[str]:
"""Get MD5 hash of a file for change detection."""
if not path or not path.exists():
return None
try:
with open(path, "rb") as f:
return hashlib.md5(f.read()).hexdigest()
except Exception:
return None
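The hash-before/hash-after flow above can be exercised standalone. This is a minimal sketch of the same idea; `file_md5` is an illustrative helper name, not part of the session manager:

```python
import hashlib
import tempfile
from pathlib import Path

def file_md5(path: Path):
    """Return the MD5 digest of a file, or None if it is missing."""
    if not path.exists():
        return None
    return hashlib.md5(path.read_bytes()).hexdigest()

# Simulate the change-detection flow: hash before the external editor
# (Claude) runs, hash after, and compare to decide whether to broadcast.
with tempfile.TemporaryDirectory() as d:
    spec = Path(d) / "atomizer_spec.json"
    spec.write_text('{"meta": {"status": "draft"}}')
    before = file_md5(spec)
    spec.write_text('{"meta": {"status": "configured"}}')  # simulated edit
    after = file_md5(spec)
    changed = before != after  # True -> emit a spec_updated event
```

MD5 is fine here because the hash only detects change, not tampering; any fast digest would do.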
async def switch_mode(
self,
session_id: str,
@@ -313,6 +356,7 @@ class SessionManager:
"""
import re
import logging
logger = logging.getLogger(__name__)
modifications = []
@@ -327,14 +371,16 @@ class SessionManager:
try:
# Method 1: Look for JSON in code fences
code_block_pattern = r'```(?:json)?\s*([\s\S]*?)```'
code_block_pattern = r"```(?:json)?\s*([\s\S]*?)```"
for match in re.finditer(code_block_pattern, response):
block_content = match.group(1).strip()
try:
obj = json.loads(block_content)
if isinstance(obj, dict) and 'modification' in obj:
logger.info(f"[CANVAS_MOD] Found modification in code fence: {obj['modification']}")
modifications.append(obj['modification'])
if isinstance(obj, dict) and "modification" in obj:
logger.info(
f"[CANVAS_MOD] Found modification in code fence: {obj['modification']}"
)
modifications.append(obj["modification"])
except json.JSONDecodeError:
continue
@@ -342,7 +388,7 @@ class SessionManager:
# This handles nested objects correctly
i = 0
while i < len(response):
if response[i] == '{':
if response[i] == "{":
# Found a potential JSON start, find matching close
brace_count = 1
j = i + 1
@@ -354,14 +400,14 @@ class SessionManager:
if escape_next:
escape_next = False
elif char == '\\':
elif char == "\\":
escape_next = True
elif char == '"' and not escape_next:
in_string = not in_string
elif not in_string:
if char == '{':
if char == "{":
brace_count += 1
elif char == '}':
elif char == "}":
brace_count -= 1
j += 1
@@ -369,11 +415,13 @@ class SessionManager:
potential_json = response[i:j]
try:
obj = json.loads(potential_json)
if isinstance(obj, dict) and 'modification' in obj:
mod = obj['modification']
if isinstance(obj, dict) and "modification" in obj:
mod = obj["modification"]
# Avoid duplicates
if mod not in modifications:
logger.info(f"[CANVAS_MOD] Found inline modification: action={mod.get('action')}, nodeType={mod.get('nodeType')}")
logger.info(
f"[CANVAS_MOD] Found inline modification: action={mod.get('action')}, nodeType={mod.get('nodeType')}"
)
modifications.append(mod)
except json.JSONDecodeError as e:
# Not valid JSON, skip
@@ -388,6 +436,43 @@ class SessionManager:
logger.info(f"[CANVAS_MOD] Extracted {len(modifications)} modification(s)")
return modifications
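The brace-balanced scan above can be shown in isolation. This sketch implements the same idea, tracking string and escape state so braces inside JSON string values do not unbalance the scan; `extract_json_objects` is an illustrative name, not the actual method:

```python
import json

def extract_json_objects(text: str):
    """Yield every balanced top-level {...} span in text that parses as JSON."""
    i = 0
    while i < len(text):
        if text[i] == "{":
            depth, j, in_string, escape = 1, i + 1, False, False
            while j < len(text) and depth > 0:
                ch = text[j]
                if escape:
                    escape = False
                elif ch == "\\":
                    escape = True
                elif ch == '"':
                    in_string = not in_string
                elif not in_string:
                    if ch == "{":
                        depth += 1
                    elif ch == "}":
                        depth -= 1
                j += 1
            if depth == 0:
                try:
                    yield json.loads(text[i:j])
                except json.JSONDecodeError:
                    pass  # balanced braces but not valid JSON; skip
            i = j
        else:
            i += 1

reply = 'Done. {"modification": {"action": "add", "nodeType": "extractor"}} Anything else?'
mods = [o["modification"] for o in extract_json_objects(reply) if "modification" in o]
```

A plain regex cannot do this reliably because nested objects and braces inside strings defeat any fixed pattern; counting depth with string-awareness handles both.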
async def _check_spec_updated(self, study_id: str) -> Optional[Dict]:
"""
Check if the atomizer_spec.json was modified and return the updated spec.
For drafts in _inbox/, we check the spec file directly.
"""
import logging
logger = logging.getLogger(__name__)
try:
# Determine spec path based on study_id
if study_id.startswith("draft_"):
spec_path = ATOMIZER_ROOT / "studies" / "_inbox" / study_id / "atomizer_spec.json"
else:
# Regular study path
spec_path = ATOMIZER_ROOT / "studies" / study_id / "atomizer_spec.json"
if not spec_path.exists():
spec_path = (
ATOMIZER_ROOT / "studies" / study_id / "1_setup" / "atomizer_spec.json"
)
if not spec_path.exists():
logger.debug(f"[SPEC_CHECK] Spec not found at {spec_path}")
return None
# Read and return the spec
with open(spec_path, "r", encoding="utf-8") as f:
spec = json.load(f)
logger.info(f"[SPEC_CHECK] Loaded spec from {spec_path}")
return spec
except Exception as e:
logger.error(f"[SPEC_CHECK] Error checking spec: {e}")
return None
def _build_mcp_config(self, mode: Literal["user", "power"]) -> dict:
"""Build MCP configuration for Claude"""
return {


@@ -47,11 +47,13 @@ from optimization_engine.config.spec_validator import (
class SpecManagerError(Exception):
"""Base error for SpecManager operations."""
pass
class SpecNotFoundError(SpecManagerError):
"""Raised when spec file doesn't exist."""
pass
@@ -118,7 +120,7 @@ class SpecManager:
if not self.spec_path.exists():
raise SpecNotFoundError(f"Spec not found: {self.spec_path}")
with open(self.spec_path, 'r', encoding='utf-8') as f:
with open(self.spec_path, "r", encoding="utf-8") as f:
data = json.load(f)
if validate:
@@ -141,14 +143,15 @@ class SpecManager:
if not self.spec_path.exists():
raise SpecNotFoundError(f"Spec not found: {self.spec_path}")
with open(self.spec_path, 'r', encoding='utf-8') as f:
with open(self.spec_path, "r", encoding="utf-8") as f:
return json.load(f)
def save(
self,
spec: Union[AtomizerSpec, Dict[str, Any]],
modified_by: str = "api",
expected_hash: Optional[str] = None
expected_hash: Optional[str] = None,
skip_validation: bool = False,
) -> str:
"""
Save spec with validation and broadcast.
@@ -157,6 +160,7 @@ class SpecManager:
spec: Spec to save (AtomizerSpec or dict)
modified_by: Who/what is making the change
expected_hash: If provided, verify current file hash matches
skip_validation: If True, skip strict validation (for draft specs)
Returns:
New spec hash
@@ -167,7 +171,7 @@ class SpecManager:
"""
# Convert to dict if needed
if isinstance(spec, AtomizerSpec):
data = spec.model_dump(mode='json')
data = spec.model_dump(mode="json")
else:
data = spec
@@ -176,24 +180,30 @@ class SpecManager:
current_hash = self.get_hash()
if current_hash != expected_hash:
raise SpecConflictError(
"Spec was modified by another client",
current_hash=current_hash
"Spec was modified by another client", current_hash=current_hash
)
# Update metadata
now = datetime.now(timezone.utc).isoformat().replace('+00:00', 'Z')
now = datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")
data["meta"]["modified"] = now
data["meta"]["modified_by"] = modified_by
# Validate
self.validator.validate(data, strict=True)
# Validate (skip for draft specs or when explicitly requested)
status = data.get("meta", {}).get("status", "draft")
is_draft = status in ("draft", "introspected", "configured")
if not skip_validation and not is_draft:
self.validator.validate(data, strict=True)
elif not skip_validation:
# For draft specs, just validate non-strictly (collect warnings only)
self.validator.validate(data, strict=False)
# Compute new hash
new_hash = self._compute_hash(data)
# Atomic write (write to temp, then rename)
temp_path = self.spec_path.with_suffix('.tmp')
with open(temp_path, 'w', encoding='utf-8') as f:
temp_path = self.spec_path.with_suffix(".tmp")
with open(temp_path, "w", encoding="utf-8") as f:
json.dump(data, f, indent=2, ensure_ascii=False)
temp_path.replace(self.spec_path)
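The write-to-temp-then-rename pattern above can be sketched on its own; `atomic_write_json` is an illustrative helper name:

```python
import json
import tempfile
from pathlib import Path

def atomic_write_json(path: Path, data: dict) -> None:
    """Write JSON so readers never observe a half-written file.

    The full payload goes to a sibling .tmp file first; the rename
    then swaps it in. Path.replace is atomic on POSIX (and for
    same-volume moves on Windows), so concurrent readers see either
    the old spec or the new one, never a truncated mix.
    """
    tmp = path.with_suffix(".tmp")
    with open(tmp, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=2, ensure_ascii=False)
    tmp.replace(path)

with tempfile.TemporaryDirectory() as d:
    target = Path(d) / "atomizer_spec.json"
    atomic_write_json(target, {"meta": {"status": "draft"}})
    loaded = json.loads(target.read_text(encoding="utf-8"))
```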
@@ -202,12 +212,9 @@ class SpecManager:
self._last_hash = new_hash
# Broadcast to subscribers
self._broadcast({
"type": "spec_updated",
"hash": new_hash,
"modified_by": modified_by,
"timestamp": now
})
self._broadcast(
{"type": "spec_updated", "hash": new_hash, "modified_by": modified_by, "timestamp": now}
)
return new_hash
@@ -219,7 +226,7 @@ class SpecManager:
"""Get current spec hash."""
if not self.spec_path.exists():
return ""
with open(self.spec_path, 'r', encoding='utf-8') as f:
with open(self.spec_path, "r", encoding="utf-8") as f:
data = json.load(f)
return self._compute_hash(data)
@@ -240,12 +247,7 @@ class SpecManager:
# Patch Operations
# =========================================================================
def patch(
self,
path: str,
value: Any,
modified_by: str = "api"
) -> AtomizerSpec:
def patch(self, path: str, value: Any, modified_by: str = "api") -> AtomizerSpec:
"""
Apply a JSONPath-style modification.
@@ -306,7 +308,7 @@ class SpecManager:
"""Parse JSONPath into parts."""
# Handle both dot notation and bracket notation
parts = []
for part in re.split(r'\.|\[|\]', path):
for part in re.split(r"\.|\[|\]", path):
if part:
parts.append(part)
return parts
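The split-and-filter parse above behaves like this standalone sketch (`parse_path` mirrors the private `_parse_path` helper). Adjacent delimiters such as `].` produce empty strings from `re.split`, which is why the filter step is needed:

```python
import re

def parse_path(path: str) -> list:
    """Split a JSONPath-style string into parts, accepting both
    dot notation and bracket indices."""
    return [p for p in re.split(r"\.|\[|\]", path) if p]

parts = parse_path("design_variables[0].bounds.min")
# -> ["design_variables", "0", "bounds", "min"]
```

Note that array indices come back as strings; the caller decides whether a part indexes a list or a dict.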
@@ -316,10 +318,7 @@ class SpecManager:
# =========================================================================
def add_node(
self,
node_type: str,
node_data: Dict[str, Any],
modified_by: str = "canvas"
self, node_type: str, node_data: Dict[str, Any], modified_by: str = "canvas"
) -> str:
"""
Add a new node (design var, extractor, objective, constraint).
@@ -353,20 +352,19 @@ class SpecManager:
self.save(data, modified_by)
# Broadcast node addition
self._broadcast({
"type": "node_added",
"node_type": node_type,
"node_id": node_id,
"modified_by": modified_by
})
self._broadcast(
{
"type": "node_added",
"node_type": node_type,
"node_id": node_id,
"modified_by": modified_by,
}
)
return node_id
def update_node(
self,
node_id: str,
updates: Dict[str, Any],
modified_by: str = "canvas"
self, node_id: str, updates: Dict[str, Any], modified_by: str = "canvas"
) -> None:
"""
Update an existing node.
@@ -396,11 +394,7 @@ class SpecManager:
self.save(data, modified_by)
def remove_node(
self,
node_id: str,
modified_by: str = "canvas"
) -> None:
def remove_node(self, node_id: str, modified_by: str = "canvas") -> None:
"""
Remove a node and all edges referencing it.
@@ -427,24 +421,18 @@ class SpecManager:
# Remove edges referencing this node
if "canvas" in data and data["canvas"] and "edges" in data["canvas"]:
data["canvas"]["edges"] = [
e for e in data["canvas"]["edges"]
e
for e in data["canvas"]["edges"]
if e.get("source") != node_id and e.get("target") != node_id
]
self.save(data, modified_by)
# Broadcast node removal
self._broadcast({
"type": "node_removed",
"node_id": node_id,
"modified_by": modified_by
})
self._broadcast({"type": "node_removed", "node_id": node_id, "modified_by": modified_by})
def update_node_position(
self,
node_id: str,
position: Dict[str, float],
modified_by: str = "canvas"
self, node_id: str, position: Dict[str, float], modified_by: str = "canvas"
) -> None:
"""
Update a node's canvas position.
@@ -456,12 +444,7 @@ class SpecManager:
"""
self.update_node(node_id, {"canvas_position": position}, modified_by)
def add_edge(
self,
source: str,
target: str,
modified_by: str = "canvas"
) -> None:
def add_edge(self, source: str, target: str, modified_by: str = "canvas") -> None:
"""
Add a canvas edge between nodes.
@@ -483,19 +466,11 @@ class SpecManager:
if edge.get("source") == source and edge.get("target") == target:
return # Already exists
data["canvas"]["edges"].append({
"source": source,
"target": target
})
data["canvas"]["edges"].append({"source": source, "target": target})
self.save(data, modified_by)
def remove_edge(
self,
source: str,
target: str,
modified_by: str = "canvas"
) -> None:
def remove_edge(self, source: str, target: str, modified_by: str = "canvas") -> None:
"""
Remove a canvas edge.
@@ -508,7 +483,8 @@ class SpecManager:
if "canvas" in data and data["canvas"] and "edges" in data["canvas"]:
data["canvas"]["edges"] = [
e for e in data["canvas"]["edges"]
e
for e in data["canvas"]["edges"]
if not (e.get("source") == source and e.get("target") == target)
]
@@ -524,7 +500,7 @@ class SpecManager:
code: str,
outputs: List[str],
description: Optional[str] = None,
modified_by: str = "claude"
modified_by: str = "claude",
) -> str:
"""
Add a custom extractor function.
@@ -546,9 +522,7 @@ class SpecManager:
try:
compile(code, f"<custom:{name}>", "exec")
except SyntaxError as e:
raise SpecValidationError(
f"Invalid Python syntax: {e.msg} at line {e.lineno}"
)
raise SpecValidationError(f"Invalid Python syntax: {e.msg} at line {e.lineno}")
data = self.load_raw()
@@ -561,13 +535,9 @@ class SpecManager:
"name": description or f"Custom: {name}",
"type": "custom_function",
"builtin": False,
"function": {
"name": name,
"module": "custom_extractors.dynamic",
"source_code": code
},
"function": {"name": name, "module": "custom_extractors.dynamic", "source_code": code},
"outputs": [{"name": o, "metric": "custom"} for o in outputs],
"canvas_position": self._auto_position("extractor", data)
"canvas_position": self._auto_position("extractor", data),
}
data["extractors"].append(extractor)
@@ -580,7 +550,7 @@ class SpecManager:
extractor_id: str,
code: Optional[str] = None,
outputs: Optional[List[str]] = None,
modified_by: str = "claude"
modified_by: str = "claude",
) -> None:
"""
Update an existing custom function.
@@ -611,9 +581,7 @@ class SpecManager:
try:
compile(code, f"<custom:{extractor_id}>", "exec")
except SyntaxError as e:
raise SpecValidationError(
f"Invalid Python syntax: {e.msg} at line {e.lineno}"
)
raise SpecValidationError(f"Invalid Python syntax: {e.msg} at line {e.lineno}")
if "function" not in extractor:
extractor["function"] = {}
extractor["function"]["source_code"] = code
@@ -672,7 +640,7 @@ class SpecManager:
"design_variable": "dv",
"extractor": "ext",
"objective": "obj",
"constraint": "con"
"constraint": "con",
}
prefix = prefix_map.get(node_type, node_type[:3])
@@ -697,7 +665,7 @@ class SpecManager:
"design_variable": "design_variables",
"extractor": "extractors",
"objective": "objectives",
"constraint": "constraints"
"constraint": "constraints",
}
return section_map.get(node_type, node_type + "s")
@@ -709,7 +677,7 @@ class SpecManager:
"design_variable": 50,
"extractor": 740,
"objective": 1020,
"constraint": 1020
"constraint": 1020,
}
x = x_positions.get(node_type, 400)
@@ -729,11 +697,123 @@ class SpecManager:
return {"x": x, "y": y}
# =========================================================================
# Intake Workflow Methods
# =========================================================================
def update_status(self, status: str, modified_by: str = "api") -> None:
"""
Update the spec status field.
Args:
status: New status (draft, introspected, configured, validated, ready, running, completed, failed)
modified_by: Who/what is making the change
"""
data = self.load_raw()
data["meta"]["status"] = status
self.save(data, modified_by)
def get_status(self) -> str:
"""
Get the current spec status.
Returns:
Current status string
"""
if not self.exists():
return "unknown"
data = self.load_raw()
return data.get("meta", {}).get("status", "draft")
def add_introspection(
self, introspection_data: Dict[str, Any], modified_by: str = "introspection"
) -> None:
"""
Add introspection data to the spec's model section.
Args:
introspection_data: Dict with timestamp, expressions, mass_kg, etc.
modified_by: Who/what is making the change
"""
data = self.load_raw()
if "model" not in data:
data["model"] = {}
data["model"]["introspection"] = introspection_data
data["meta"]["status"] = "introspected"
self.save(data, modified_by)
def add_baseline(
self, baseline_data: Dict[str, Any], modified_by: str = "baseline_solve"
) -> None:
"""
Add baseline solve results to introspection data.
Args:
baseline_data: Dict with timestamp, solve_time_seconds, mass_kg, etc.
modified_by: Who/what is making the change
"""
data = self.load_raw()
if "model" not in data:
data["model"] = {}
if "introspection" not in data["model"] or data["model"]["introspection"] is None:
data["model"]["introspection"] = {}
data["model"]["introspection"]["baseline"] = baseline_data
# Update status based on baseline success
if baseline_data.get("success", False):
data["meta"]["status"] = "validated"
self.save(data, modified_by)
def set_topic(self, topic: str, modified_by: str = "api") -> None:
"""
Set the spec's topic field.
Args:
topic: Topic folder name
modified_by: Who/what is making the change
"""
data = self.load_raw()
data["meta"]["topic"] = topic
self.save(data, modified_by)
def get_introspection(self) -> Optional[Dict[str, Any]]:
"""
Get introspection data from spec.
Returns:
Introspection dict or None if not present
"""
if not self.exists():
return None
data = self.load_raw()
return data.get("model", {}).get("introspection")
def get_design_candidates(self) -> List[Dict[str, Any]]:
"""
Get expressions marked as design variable candidates.
Returns:
List of expression dicts where is_candidate=True
"""
introspection = self.get_introspection()
if not introspection:
return []
expressions = introspection.get("expressions", [])
return [e for e in expressions if e.get("is_candidate", False)]
# =========================================================================
# Factory Function
# =========================================================================
def get_spec_manager(study_path: Union[str, Path]) -> SpecManager:
"""
Get a SpecManager instance for a study.


@@ -9,6 +9,7 @@ import Analysis from './pages/Analysis';
import Insights from './pages/Insights';
import Results from './pages/Results';
import CanvasView from './pages/CanvasView';
import Studio from './pages/Studio';
const queryClient = new QueryClient({
defaultOptions: {
@@ -32,6 +33,10 @@ function App() {
<Route path="canvas" element={<CanvasView />} />
<Route path="canvas/*" element={<CanvasView />} />
{/* Studio - unified study creation environment */}
<Route path="studio" element={<Studio />} />
<Route path="studio/:draftId" element={<Studio />} />
{/* Study pages - with sidebar layout */}
<Route element={<MainLayout />}>
<Route path="setup" element={<Setup />} />


@@ -0,0 +1,411 @@
/**
* Intake API Client
*
* API client methods for the study intake workflow.
*/
import {
CreateInboxRequest,
CreateInboxResponse,
IntrospectRequest,
IntrospectResponse,
ListInboxResponse,
ListTopicsResponse,
InboxStudyDetail,
GenerateReadmeResponse,
FinalizeRequest,
FinalizeResponse,
UploadFilesResponse,
} from '../types/intake';
const API_BASE = '/api';
/**
* Intake API client for study creation workflow.
*/
export const intakeApi = {
/**
* Create a new inbox study folder with initial spec.
*/
async createInbox(request: CreateInboxRequest): Promise<CreateInboxResponse> {
const response = await fetch(`${API_BASE}/intake/create`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(request),
});
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || 'Failed to create inbox study');
}
return response.json();
},
/**
* Run NX introspection on an inbox study.
*/
async introspect(request: IntrospectRequest): Promise<IntrospectResponse> {
const response = await fetch(`${API_BASE}/intake/introspect`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(request),
});
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || 'Introspection failed');
}
return response.json();
},
/**
* List all studies in the inbox.
*/
async listInbox(): Promise<ListInboxResponse> {
const response = await fetch(`${API_BASE}/intake/list`);
if (!response.ok) {
throw new Error('Failed to fetch inbox studies');
}
return response.json();
},
/**
* List existing topic folders.
*/
async listTopics(): Promise<ListTopicsResponse> {
const response = await fetch(`${API_BASE}/intake/topics`);
if (!response.ok) {
throw new Error('Failed to fetch topics');
}
return response.json();
},
/**
* Get detailed information about an inbox study.
*/
async getInboxStudy(studyName: string): Promise<InboxStudyDetail> {
const response = await fetch(`${API_BASE}/intake/${encodeURIComponent(studyName)}`);
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || 'Failed to fetch inbox study');
}
return response.json();
},
/**
* Delete an inbox study.
*/
async deleteInboxStudy(studyName: string): Promise<{ success: boolean; deleted: string }> {
const response = await fetch(`${API_BASE}/intake/${encodeURIComponent(studyName)}`, {
method: 'DELETE',
});
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || 'Failed to delete inbox study');
}
return response.json();
},
/**
* Generate README for an inbox study using Claude AI.
*/
async generateReadme(studyName: string): Promise<GenerateReadmeResponse> {
const response = await fetch(
`${API_BASE}/intake/${encodeURIComponent(studyName)}/readme`,
{ method: 'POST' }
);
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || 'README generation failed');
}
return response.json();
},
/**
* Finalize an inbox study and move to studies directory.
*/
async finalize(studyName: string, request: FinalizeRequest): Promise<FinalizeResponse> {
const response = await fetch(
`${API_BASE}/intake/${encodeURIComponent(studyName)}/finalize`,
{
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(request),
}
);
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || 'Finalization failed');
}
return response.json();
},
/**
* Upload model files to an inbox study.
*/
async uploadFiles(studyName: string, files: File[]): Promise<UploadFilesResponse> {
const formData = new FormData();
files.forEach((file) => {
formData.append('files', file);
});
const response = await fetch(
`${API_BASE}/intake/${encodeURIComponent(studyName)}/upload`,
{
method: 'POST',
body: formData,
}
);
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || 'File upload failed');
}
return response.json();
},
/**
* Upload context files to an inbox study.
* Context files help Claude understand optimization goals.
*/
async uploadContextFiles(studyName: string, files: File[]): Promise<UploadFilesResponse> {
const formData = new FormData();
files.forEach((file) => {
formData.append('files', file);
});
const response = await fetch(
`${API_BASE}/intake/${encodeURIComponent(studyName)}/context`,
{
method: 'POST',
body: formData,
}
);
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || 'Context file upload failed');
}
return response.json();
},
/**
* List context files for an inbox study.
*/
async listContextFiles(studyName: string): Promise<{
study_name: string;
context_files: Array<{ name: string; path: string; size: number; extension: string }>;
total: number;
}> {
const response = await fetch(
`${API_BASE}/intake/${encodeURIComponent(studyName)}/context`
);
if (!response.ok) {
throw new Error('Failed to list context files');
}
return response.json();
},
/**
* Delete a context file from an inbox study.
*/
async deleteContextFile(studyName: string, filename: string): Promise<{ success: boolean; deleted: string }> {
const response = await fetch(
`${API_BASE}/intake/${encodeURIComponent(studyName)}/context/${encodeURIComponent(filename)}`,
{ method: 'DELETE' }
);
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || 'Failed to delete context file');
}
return response.json();
},
/**
* Create design variables from selected expressions.
*/
async createDesignVariables(
studyName: string,
expressionNames: string[],
options?: { autoBounds?: boolean; boundFactor?: number }
): Promise<{
success: boolean;
study_name: string;
created: Array<{
id: string;
name: string;
expression_name: string;
bounds_min: number;
bounds_max: number;
baseline: number;
units: string | null;
}>;
total_created: number;
}> {
const response = await fetch(
`${API_BASE}/intake/${encodeURIComponent(studyName)}/design-variables`,
{
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
expression_names: expressionNames,
auto_bounds: options?.autoBounds ?? true,
bound_factor: options?.boundFactor ?? 0.5,
}),
}
);
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || 'Failed to create design variables');
}
return response.json();
},
// ===========================================================================
// Studio Endpoints (Atomizer Studio - Unified Creation Environment)
// ===========================================================================
/**
* Create an anonymous draft study for Studio workflow.
* Returns a temporary draft_id that can be renamed during finalization.
*/
async createDraft(): Promise<{
success: boolean;
draft_id: string;
inbox_path: string;
spec_path: string;
status: string;
}> {
const response = await fetch(`${API_BASE}/intake/draft`, {
method: 'POST',
});
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || 'Failed to create draft');
}
return response.json();
},
/**
* Get extracted text content from context files.
* Used for AI context injection.
*/
async getContextContent(studyName: string): Promise<{
success: boolean;
study_name: string;
content: string;
files_read: Array<{
name: string;
extension: string;
size: number;
status: string;
characters?: number;
error?: string;
}>;
total_characters: number;
}> {
const response = await fetch(
`${API_BASE}/intake/${encodeURIComponent(studyName)}/context/content`
);
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || 'Failed to get context content');
}
return response.json();
},
/**
* Finalize a Studio draft with rename support.
* Enhanced version that supports renaming draft_xxx to proper names.
*/
async finalizeStudio(
studyName: string,
request: {
topic: string;
newName?: string;
runBaseline?: boolean;
}
): Promise<{
success: boolean;
original_name: string;
final_name: string;
final_path: string;
status: string;
baseline_success: boolean | null;
readme_generated: boolean;
}> {
const response = await fetch(
`${API_BASE}/intake/${encodeURIComponent(studyName)}/finalize/studio`,
{
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
topic: request.topic,
new_name: request.newName,
run_baseline: request.runBaseline ?? false,
}),
}
);
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || 'Studio finalization failed');
}
return response.json();
},
/**
* Get complete draft information for Studio UI.
* Convenience endpoint that returns everything the Studio needs.
*/
async getStudioDraft(studyName: string): Promise<{
success: boolean;
draft_id: string;
spec: Record<string, unknown>;
model_files: string[];
context_files: string[];
introspection_available: boolean;
design_variable_count: number;
objective_count: number;
}> {
const response = await fetch(
`${API_BASE}/intake/${encodeURIComponent(studyName)}/studio`
);
if (!response.ok) {
const error = await response.json();
throw new Error(error.detail || 'Failed to get studio draft');
}
return response.json();
},
};
export default intakeApi;
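The Studio endpoints above compose into a create → upload → finalize flow. A minimal sketch of that sequence, assuming the same response shapes; the `StudioApi` interface and `createAndFinalize` helper are illustrative stand-ins for the real `intakeApi` client, not part of it:

```typescript
// Only the fields used below are assumed from the real client.
interface StudioApi {
  createDraft(): Promise<{ success: boolean; draft_id: string }>;
  finalizeStudio(
    studyName: string,
    request: { topic: string; newName?: string; runBaseline?: boolean }
  ): Promise<{ success: boolean; final_name: string }>;
}

async function createAndFinalize(
  api: StudioApi,
  topic: string,
  finalName: string
): Promise<string> {
  // 1. Create an anonymous draft (temporary draft_xxx id).
  const draft = await api.createDraft();
  // 2. Upload model/context files against draft.draft_id here.
  // 3. Finalize with a rename from the draft id to the real study name.
  const result = await api.finalizeStudio(draft.draft_id, {
    topic,
    newName: finalName,
    runBaseline: false,
  });
  return result.final_name;
}
```

Injecting the client as a parameter keeps the flow testable without a running backend.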


@@ -777,6 +777,8 @@ function SpecRendererInner({
onConnect={onConnect}
onInit={(instance) => {
reactFlowInstance.current = instance;
// Auto-fit view on init with padding
setTimeout(() => instance.fitView({ padding: 0.2, duration: 300 }), 100);
}}
onDragOver={onDragOver}
onDrop={onDrop}
@@ -785,6 +787,7 @@ function SpecRendererInner({
onPaneClick={onPaneClick}
nodeTypes={nodeTypes}
fitView
fitViewOptions={{ padding: 0.2, includeHiddenNodes: false }}
deleteKeyCode={null} // We handle delete ourselves
nodesDraggable={editable}
nodesConnectable={editable}


@@ -0,0 +1,292 @@
/**
* ContextFileUpload - Upload context files for study configuration
*
 * Allows uploading markdown, text, PDF, image, JSON, and CSV files that
 * help Claude understand optimization goals and generate better documentation.
*/
import React, { useState, useEffect, useRef, useCallback } from 'react';
import { Upload, FileText, X, Loader2, AlertCircle, CheckCircle, Trash2, BookOpen } from 'lucide-react';
import { intakeApi } from '../../api/intake';
interface ContextFileUploadProps {
studyName: string;
onUploadComplete: () => void;
}
interface ContextFile {
name: string;
path: string;
size: number;
extension: string;
}
interface FileStatus {
file: File;
status: 'pending' | 'uploading' | 'success' | 'error';
message?: string;
}
const VALID_EXTENSIONS = ['.md', '.txt', '.pdf', '.png', '.jpg', '.jpeg', '.json', '.csv'];
export const ContextFileUpload: React.FC<ContextFileUploadProps> = ({
studyName,
onUploadComplete,
}) => {
const [contextFiles, setContextFiles] = useState<ContextFile[]>([]);
const [pendingFiles, setPendingFiles] = useState<FileStatus[]>([]);
const [isUploading, setIsUploading] = useState(false);
const [error, setError] = useState<string | null>(null);
const fileInputRef = useRef<HTMLInputElement>(null);
// Load existing context files
const loadContextFiles = useCallback(async () => {
try {
const response = await intakeApi.listContextFiles(studyName);
setContextFiles(response.context_files);
} catch (err) {
console.error('Failed to load context files:', err);
}
}, [studyName]);
useEffect(() => {
loadContextFiles();
}, [loadContextFiles]);
const validateFile = (file: File): { valid: boolean; reason?: string } => {
const ext = '.' + file.name.split('.').pop()?.toLowerCase();
if (!VALID_EXTENSIONS.includes(ext)) {
return { valid: false, reason: `Invalid type: ${ext}` };
}
// Max 10MB per file
if (file.size > 10 * 1024 * 1024) {
return { valid: false, reason: 'File too large (max 10MB)' };
}
return { valid: true };
};
const addFiles = useCallback((newFiles: File[]) => {
const validFiles: FileStatus[] = [];
for (const file of newFiles) {
// Skip duplicates
if (pendingFiles.some(f => f.file.name === file.name)) {
continue;
}
if (contextFiles.some(f => f.name === file.name)) {
continue;
}
const validation = validateFile(file);
if (validation.valid) {
validFiles.push({ file, status: 'pending' });
} else {
validFiles.push({ file, status: 'error', message: validation.reason });
}
}
setPendingFiles(prev => [...prev, ...validFiles]);
}, [pendingFiles, contextFiles]);
const handleFileSelect = useCallback((e: React.ChangeEvent<HTMLInputElement>) => {
const selectedFiles = Array.from(e.target.files || []);
addFiles(selectedFiles);
e.target.value = '';
}, [addFiles]);
const removeFile = (index: number) => {
setPendingFiles(prev => prev.filter((_, i) => i !== index));
};
const handleUpload = async () => {
const filesToUpload = pendingFiles.filter(f => f.status === 'pending');
if (filesToUpload.length === 0) return;
setIsUploading(true);
setError(null);
try {
const response = await intakeApi.uploadContextFiles(
studyName,
filesToUpload.map(f => f.file)
);
// Update pending file statuses
const uploadResults = new Map(
response.uploaded_files.map(f => [f.name, f.status === 'uploaded'])
);
setPendingFiles(prev => prev.map(f => {
if (f.status !== 'pending') return f;
const success = uploadResults.get(f.file.name);
return {
...f,
status: success ? 'success' : 'error',
message: success ? undefined : 'Upload failed',
};
}));
// Refresh and clear after a moment
setTimeout(() => {
setPendingFiles(prev => prev.filter(f => f.status !== 'success'));
loadContextFiles();
onUploadComplete();
}, 1500);
} catch (err) {
setError(err instanceof Error ? err.message : 'Upload failed');
} finally {
setIsUploading(false);
}
};
const handleDeleteFile = async (filename: string) => {
try {
await intakeApi.deleteContextFile(studyName, filename);
loadContextFiles();
} catch (err) {
setError(err instanceof Error ? err.message : 'Delete failed');
}
};
const pendingCount = pendingFiles.filter(f => f.status === 'pending').length;
const formatSize = (bytes: number) => {
if (bytes < 1024) return `${bytes} B`;
if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
return `${(bytes / 1024 / 1024).toFixed(1)} MB`;
};
return (
<div className="space-y-3">
<div className="flex items-center justify-between">
<h5 className="text-sm font-medium text-dark-300 flex items-center gap-2">
<BookOpen className="w-4 h-4 text-purple-400" />
Context Files
</h5>
<button
onClick={() => fileInputRef.current?.click()}
className="flex items-center gap-1.5 px-2 py-1 rounded text-xs font-medium
bg-purple-500/10 text-purple-400 hover:bg-purple-500/20
transition-colors"
>
<Upload className="w-3 h-3" />
Add Context
</button>
</div>
      <p className="text-xs text-dark-500">
        Add .md, .txt, .pdf, or other context files (images, .json, .csv) describing your optimization goals. Claude will use these to generate documentation.
      </p>
{/* Error Display */}
{error && (
<div className="p-2 rounded-lg bg-red-500/10 border border-red-500/30 text-red-400 text-xs flex items-center gap-2">
<AlertCircle className="w-3 h-3 flex-shrink-0" />
{error}
<button onClick={() => setError(null)} className="ml-auto hover:text-white">
<X className="w-3 h-3" />
</button>
</div>
)}
{/* Existing Context Files */}
{contextFiles.length > 0 && (
<div className="space-y-1">
{contextFiles.map((file) => (
<div
key={file.name}
className="flex items-center justify-between p-2 rounded-lg bg-purple-500/5 border border-purple-500/20"
>
<div className="flex items-center gap-2">
<FileText className="w-4 h-4 text-purple-400" />
<span className="text-sm text-white">{file.name}</span>
<span className="text-xs text-dark-500">{formatSize(file.size)}</span>
</div>
<button
onClick={() => handleDeleteFile(file.name)}
className="p-1 hover:bg-white/10 rounded text-dark-400 hover:text-red-400"
title="Delete file"
>
<Trash2 className="w-3 h-3" />
</button>
</div>
))}
</div>
)}
{/* Pending Files */}
{pendingFiles.length > 0 && (
<div className="space-y-1">
{pendingFiles.map((f, i) => (
<div
key={i}
className={`flex items-center justify-between p-2 rounded-lg
${f.status === 'error' ? 'bg-red-500/10' :
f.status === 'success' ? 'bg-green-500/10' :
'bg-dark-700'}`}
>
<div className="flex items-center gap-2">
{f.status === 'pending' && <FileText className="w-4 h-4 text-dark-400" />}
{f.status === 'uploading' && <Loader2 className="w-4 h-4 text-purple-400 animate-spin" />}
{f.status === 'success' && <CheckCircle className="w-4 h-4 text-green-400" />}
{f.status === 'error' && <AlertCircle className="w-4 h-4 text-red-400" />}
<span className={`text-sm ${f.status === 'error' ? 'text-red-400' :
f.status === 'success' ? 'text-green-400' :
'text-white'}`}>
{f.file.name}
</span>
{f.message && (
<span className="text-xs text-red-400">({f.message})</span>
)}
</div>
{f.status === 'pending' && (
<button
onClick={() => removeFile(i)}
className="p-1 hover:bg-white/10 rounded text-dark-400 hover:text-white"
>
<X className="w-3 h-3" />
</button>
)}
</div>
))}
</div>
)}
{/* Upload Button */}
{pendingCount > 0 && (
<button
onClick={handleUpload}
disabled={isUploading}
className="w-full flex items-center justify-center gap-2 px-3 py-2 rounded-lg
bg-purple-500 text-white text-sm font-medium
hover:bg-purple-400 disabled:opacity-50 disabled:cursor-not-allowed
transition-colors"
>
{isUploading ? (
<>
<Loader2 className="w-4 h-4 animate-spin" />
Uploading...
</>
) : (
<>
<Upload className="w-4 h-4" />
Upload {pendingCount} {pendingCount === 1 ? 'File' : 'Files'}
</>
)}
</button>
)}
<input
ref={fileInputRef}
type="file"
multiple
accept={VALID_EXTENSIONS.join(',')}
onChange={handleFileSelect}
className="hidden"
/>
</div>
);
};
export default ContextFileUpload;
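The extension and size checks in `validateFile` can be factored into a plain helper, which makes the rule unit-testable outside React. A sketch under the same limits (10 MB cap, the `VALID_EXTENSIONS` list above); `validateContextFile` is a hypothetical name, and `lastIndexOf` is used so that names with no extension are rejected cleanly:

```typescript
const CONTEXT_EXTENSIONS = ['.md', '.txt', '.pdf', '.png', '.jpg', '.jpeg', '.json', '.csv'];
const MAX_CONTEXT_BYTES = 10 * 1024 * 1024;

// Pure validation: same rules as the component, but takes name/size
// directly so it can run without a DOM File object.
function validateContextFile(
  name: string,
  sizeBytes: number
): { valid: boolean; reason?: string } {
  const dot = name.lastIndexOf('.');
  const ext = dot >= 0 ? name.slice(dot).toLowerCase() : '';
  if (!CONTEXT_EXTENSIONS.includes(ext)) {
    return { valid: false, reason: `Invalid type: ${ext || '(none)'}` };
  }
  if (sizeBytes > MAX_CONTEXT_BYTES) {
    return { valid: false, reason: 'File too large (max 10MB)' };
  }
  return { valid: true };
}
```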


@@ -0,0 +1,227 @@
/**
* CreateStudyCard - Card for initiating new study creation
*
* Displays a prominent card on the Home page that allows users to
* create a new study through the intake workflow.
*/
import React, { useState } from 'react';
import { Plus, Loader2 } from 'lucide-react';
import { intakeApi } from '../../api/intake';
import { TopicInfo } from '../../types/intake';
interface CreateStudyCardProps {
topics: TopicInfo[];
onStudyCreated: (studyName: string) => void;
}
export const CreateStudyCard: React.FC<CreateStudyCardProps> = ({
topics,
onStudyCreated,
}) => {
const [isExpanded, setIsExpanded] = useState(false);
const [studyName, setStudyName] = useState('');
const [description, setDescription] = useState('');
const [selectedTopic, setSelectedTopic] = useState('');
const [newTopic, setNewTopic] = useState('');
const [isCreating, setIsCreating] = useState(false);
const [error, setError] = useState<string | null>(null);
const handleCreate = async () => {
if (!studyName.trim()) {
setError('Study name is required');
return;
}
// Validate study name format
const nameRegex = /^[a-z0-9_]+$/;
if (!nameRegex.test(studyName)) {
setError('Study name must be lowercase with underscores only (e.g., my_study_name)');
return;
}
setIsCreating(true);
setError(null);
try {
const topic = newTopic.trim() || selectedTopic || undefined;
await intakeApi.createInbox({
study_name: studyName.trim(),
description: description.trim() || undefined,
topic,
});
// Reset form
setStudyName('');
setDescription('');
setSelectedTopic('');
setNewTopic('');
setIsExpanded(false);
onStudyCreated(studyName.trim());
} catch (err) {
setError(err instanceof Error ? err.message : 'Failed to create study');
} finally {
setIsCreating(false);
}
};
if (!isExpanded) {
return (
<button
onClick={() => setIsExpanded(true)}
className="w-full glass rounded-xl p-6 border border-dashed border-primary-400/30
hover:border-primary-400/60 hover:bg-primary-400/5 transition-all
flex items-center justify-center gap-3 group"
>
<div className="w-12 h-12 rounded-xl bg-primary-400/10 flex items-center justify-center
group-hover:bg-primary-400/20 transition-colors">
<Plus className="w-6 h-6 text-primary-400" />
</div>
<div className="text-left">
<h3 className="text-lg font-semibold text-white">Create New Study</h3>
<p className="text-sm text-dark-400">Set up a new optimization study</p>
</div>
</button>
);
}
return (
<div className="glass-strong rounded-xl border border-primary-400/20 overflow-hidden">
{/* Header */}
<div className="px-6 py-4 border-b border-primary-400/10 flex items-center justify-between">
<div className="flex items-center gap-3">
<div className="w-10 h-10 rounded-lg bg-primary-400/10 flex items-center justify-center">
<Plus className="w-5 h-5 text-primary-400" />
</div>
<h3 className="text-lg font-semibold text-white">Create New Study</h3>
</div>
<button
onClick={() => setIsExpanded(false)}
className="text-dark-400 hover:text-white transition-colors text-sm"
>
Cancel
</button>
</div>
{/* Form */}
<div className="p-6 space-y-4">
{/* Study Name */}
<div>
<label className="block text-sm font-medium text-dark-300 mb-2">
Study Name <span className="text-red-400">*</span>
</label>
<input
type="text"
value={studyName}
onChange={(e) => setStudyName(e.target.value.toLowerCase().replace(/[^a-z0-9_]/g, '_'))}
placeholder="my_optimization_study"
className="w-full px-4 py-2.5 rounded-lg bg-dark-800 border border-dark-600
text-white placeholder-dark-500 focus:border-primary-400
focus:outline-none focus:ring-1 focus:ring-primary-400/50"
/>
<p className="mt-1 text-xs text-dark-500">
Lowercase letters, numbers, and underscores only
</p>
</div>
{/* Description */}
<div>
<label className="block text-sm font-medium text-dark-300 mb-2">
Description
</label>
<textarea
value={description}
onChange={(e) => setDescription(e.target.value)}
placeholder="Brief description of the optimization goal..."
rows={2}
className="w-full px-4 py-2.5 rounded-lg bg-dark-800 border border-dark-600
text-white placeholder-dark-500 focus:border-primary-400
focus:outline-none focus:ring-1 focus:ring-primary-400/50 resize-none"
/>
</div>
{/* Topic Selection */}
<div>
<label className="block text-sm font-medium text-dark-300 mb-2">
Topic Folder
</label>
<div className="flex gap-2">
<select
value={selectedTopic}
onChange={(e) => {
setSelectedTopic(e.target.value);
setNewTopic('');
}}
className="flex-1 px-4 py-2.5 rounded-lg bg-dark-800 border border-dark-600
text-white focus:border-primary-400 focus:outline-none
focus:ring-1 focus:ring-primary-400/50"
>
<option value="">Select existing topic...</option>
{topics.map((topic) => (
<option key={topic.name} value={topic.name}>
{topic.name} ({topic.study_count} studies)
</option>
))}
</select>
<span className="text-dark-500 self-center">or</span>
<input
type="text"
value={newTopic}
onChange={(e) => {
setNewTopic(e.target.value.replace(/[^A-Za-z0-9_]/g, '_'));
setSelectedTopic('');
}}
placeholder="New_Topic"
className="flex-1 px-4 py-2.5 rounded-lg bg-dark-800 border border-dark-600
text-white placeholder-dark-500 focus:border-primary-400
focus:outline-none focus:ring-1 focus:ring-primary-400/50"
/>
</div>
</div>
{/* Error Message */}
{error && (
<div className="p-3 rounded-lg bg-red-500/10 border border-red-500/30 text-red-400 text-sm">
{error}
</div>
)}
{/* Actions */}
<div className="flex justify-end gap-3 pt-2">
<button
onClick={() => setIsExpanded(false)}
className="px-4 py-2 rounded-lg border border-dark-600 text-dark-300
hover:border-dark-500 hover:text-white transition-colors"
>
Cancel
</button>
<button
onClick={handleCreate}
disabled={isCreating || !studyName.trim()}
className="px-6 py-2 rounded-lg font-medium transition-all disabled:opacity-50
flex items-center gap-2"
style={{
background: 'linear-gradient(135deg, #00d4e6 0%, #0891b2 100%)',
color: '#000',
}}
>
{isCreating ? (
<>
<Loader2 className="w-4 h-4 animate-spin" />
Creating...
</>
) : (
<>
<Plus className="w-4 h-4" />
Create Study
</>
)}
</button>
</div>
</div>
</div>
);
};
export default CreateStudyCard;
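The card guards study names twice: the `onChange` handler sanitizes as the user types, and `handleCreate` re-validates the final value with the same character class. A sketch of both rules as pure functions (the function names are illustrative):

```typescript
// Mirrors the input handler: lowercase, then map any disallowed
// character to an underscore.
function sanitizeStudyName(raw: string): string {
  return raw.toLowerCase().replace(/[^a-z0-9_]/g, '_');
}

// Mirrors the submit-time check: non-empty, lowercase letters,
// digits, and underscores only.
function isValidStudyName(name: string): boolean {
  return /^[a-z0-9_]+$/.test(name);
}
```

Keeping the submit-time check even though input is sanitized protects against values set programmatically or pasted before the handler runs.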


@@ -0,0 +1,270 @@
/**
* ExpressionList - Display discovered expressions with selection capability
*
* Shows expressions from NX introspection, allowing users to:
* - View all discovered expressions
* - See which are design variable candidates (auto-detected)
* - Select/deselect expressions to use as design variables
* - View expression values and units
*/
import React, { useState } from 'react';
import {
Check,
Search,
AlertTriangle,
Sparkles,
Info,
Variable,
} from 'lucide-react';
import { ExpressionInfo } from '../../types/intake';
interface ExpressionListProps {
/** Expression data from introspection */
expressions: ExpressionInfo[];
/** Mass from introspection (kg) */
massKg?: number | null;
/** Currently selected expressions (to become DVs) */
selectedExpressions: string[];
/** Callback when selection changes */
onSelectionChange: (selected: string[]) => void;
/** Whether in read-only mode */
readOnly?: boolean;
/** Compact display mode */
compact?: boolean;
}
export const ExpressionList: React.FC<ExpressionListProps> = ({
expressions,
massKg,
selectedExpressions,
onSelectionChange,
readOnly = false,
compact = false,
}) => {
const [filter, setFilter] = useState('');
const [showCandidatesOnly, setShowCandidatesOnly] = useState(true);
// Filter expressions based on search and candidate toggle
const filteredExpressions = expressions.filter((expr) => {
const matchesSearch = filter === '' ||
expr.name.toLowerCase().includes(filter.toLowerCase());
const matchesCandidate = !showCandidatesOnly || expr.is_candidate;
return matchesSearch && matchesCandidate;
});
// Sort: candidates first, then by confidence, then alphabetically
const sortedExpressions = [...filteredExpressions].sort((a, b) => {
if (a.is_candidate !== b.is_candidate) {
return a.is_candidate ? -1 : 1;
}
if (a.confidence !== b.confidence) {
return b.confidence - a.confidence;
}
return a.name.localeCompare(b.name);
});
const toggleExpression = (name: string) => {
if (readOnly) return;
if (selectedExpressions.includes(name)) {
onSelectionChange(selectedExpressions.filter(n => n !== name));
} else {
onSelectionChange([...selectedExpressions, name]);
}
};
const selectAllCandidates = () => {
const candidateNames = expressions
.filter(e => e.is_candidate)
.map(e => e.name);
onSelectionChange(candidateNames);
};
const clearSelection = () => {
onSelectionChange([]);
};
const candidateCount = expressions.filter(e => e.is_candidate).length;
if (expressions.length === 0) {
return (
<div className="p-4 rounded-lg bg-dark-700/50 border border-dark-600">
<div className="flex items-center gap-2 text-dark-400">
<AlertTriangle className="w-4 h-4" />
<span>No expressions found. Run introspection to discover model parameters.</span>
</div>
</div>
);
}
return (
<div className="space-y-3">
{/* Header with stats */}
<div className="flex items-center justify-between">
<div className="flex items-center gap-3">
<h5 className="text-sm font-medium text-dark-300 flex items-center gap-2">
<Variable className="w-4 h-4" />
Discovered Expressions
</h5>
<span className="text-xs text-dark-500">
{expressions.length} total, {candidateCount} candidates
</span>
        {massKg != null && (
<span className="text-xs text-primary-400">
Mass: {massKg.toFixed(3)} kg
</span>
)}
</div>
{!readOnly && selectedExpressions.length > 0 && (
<span className="text-xs text-green-400">
{selectedExpressions.length} selected
</span>
)}
</div>
{/* Controls */}
{!compact && (
<div className="flex items-center gap-3">
{/* Search */}
<div className="relative flex-1 max-w-xs">
<Search className="absolute left-2.5 top-1/2 -translate-y-1/2 w-4 h-4 text-dark-500" />
<input
type="text"
placeholder="Search expressions..."
value={filter}
onChange={(e) => setFilter(e.target.value)}
className="w-full pl-8 pr-3 py-1.5 text-sm rounded-lg bg-dark-700 border border-dark-600
text-white placeholder-dark-500 focus:border-primary-500/50 focus:outline-none"
/>
</div>
{/* Show candidates only toggle */}
<label className="flex items-center gap-2 text-xs text-dark-400 cursor-pointer">
<input
type="checkbox"
checked={showCandidatesOnly}
onChange={(e) => setShowCandidatesOnly(e.target.checked)}
className="w-4 h-4 rounded border-dark-500 bg-dark-700 text-primary-500
focus:ring-primary-500/30"
/>
Candidates only
</label>
{/* Quick actions */}
{!readOnly && (
<div className="flex items-center gap-2">
<button
onClick={selectAllCandidates}
className="px-2 py-1 text-xs rounded bg-primary-500/10 text-primary-400
hover:bg-primary-500/20 transition-colors"
>
Select all candidates
</button>
<button
onClick={clearSelection}
className="px-2 py-1 text-xs rounded bg-dark-600 text-dark-400
hover:bg-dark-500 transition-colors"
>
Clear
</button>
</div>
)}
</div>
)}
{/* Expression list */}
<div className={`rounded-lg border border-dark-600 overflow-hidden ${
compact ? 'max-h-48' : 'max-h-72'
} overflow-y-auto`}>
<table className="w-full text-sm">
<thead className="bg-dark-700 sticky top-0">
<tr>
{!readOnly && (
<th className="w-8 px-2 py-2"></th>
)}
<th className="px-3 py-2 text-left text-dark-400 font-medium">Name</th>
<th className="px-3 py-2 text-right text-dark-400 font-medium w-24">Value</th>
<th className="px-3 py-2 text-left text-dark-400 font-medium w-16">Units</th>
<th className="px-3 py-2 text-center text-dark-400 font-medium w-20">Candidate</th>
</tr>
</thead>
<tbody className="divide-y divide-dark-700">
{sortedExpressions.map((expr) => {
const isSelected = selectedExpressions.includes(expr.name);
return (
<tr
key={expr.name}
onClick={() => toggleExpression(expr.name)}
className={`
${readOnly ? '' : 'cursor-pointer hover:bg-dark-700/50'}
${isSelected ? 'bg-primary-500/10' : ''}
transition-colors
`}
>
{!readOnly && (
<td className="px-2 py-2">
<div className={`w-5 h-5 rounded border flex items-center justify-center
${isSelected
? 'bg-primary-500 border-primary-500'
: 'border-dark-500 bg-dark-700'
}`}
>
{isSelected && <Check className="w-3 h-3 text-white" />}
</div>
</td>
)}
<td className="px-3 py-2">
<div className="flex items-center gap-2">
<code className={`text-xs ${isSelected ? 'text-primary-300' : 'text-white'}`}>
{expr.name}
</code>
{expr.formula && (
<span className="text-xs text-dark-500" title={expr.formula}>
<Info className="w-3 h-3" />
</span>
)}
</div>
</td>
<td className="px-3 py-2 text-right font-mono text-xs text-dark-300">
{expr.value !== null ? expr.value.toFixed(3) : '-'}
</td>
<td className="px-3 py-2 text-xs text-dark-400">
{expr.units || '-'}
</td>
<td className="px-3 py-2 text-center">
{expr.is_candidate ? (
<span className="inline-flex items-center gap-1 px-1.5 py-0.5 rounded text-xs
bg-green-500/10 text-green-400">
<Sparkles className="w-3 h-3" />
{Math.round(expr.confidence * 100)}%
</span>
) : (
<span className="text-xs text-dark-500">-</span>
)}
</td>
</tr>
);
})}
</tbody>
</table>
{sortedExpressions.length === 0 && (
<div className="px-4 py-8 text-center text-dark-500">
No expressions match your filter
</div>
)}
</div>
{/* Help text */}
{!readOnly && !compact && (
<p className="text-xs text-dark-500">
          Select expressions to use as design variables. Candidates (marked with a confidence
          badge) are automatically identified based on naming patterns and units.
</p>
)}
</div>
);
};
export default ExpressionList;
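The three-level ordering above (candidates first, then descending confidence, then name) can be extracted into a standalone comparator. A sketch assuming only the three fields the sort reads; `SortableExpr` and `compareExpressions` are illustrative names, not part of `ExpressionInfo`:

```typescript
// Minimal shape of an expression as seen by the sort.
interface SortableExpr {
  name: string;
  is_candidate: boolean;
  confidence: number;
}

// Same ordering as the component: candidates first, higher
// confidence first, then alphabetical as a deterministic tie-break.
function compareExpressions(a: SortableExpr, b: SortableExpr): number {
  if (a.is_candidate !== b.is_candidate) return a.is_candidate ? -1 : 1;
  if (a.confidence !== b.confidence) return b.confidence - a.confidence;
  return a.name.localeCompare(b.name);
}
```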


@@ -0,0 +1,348 @@
/**
* FileDropzone - Drag and drop file upload component
*
* Supports drag-and-drop or click-to-browse for model files.
* Accepts .prt, .sim, .fem, .afem files.
*/
import React, { useState, useCallback, useRef } from 'react';
import { Upload, FileText, X, Loader2, AlertCircle, CheckCircle } from 'lucide-react';
import { intakeApi } from '../../api/intake';
interface FileDropzoneProps {
studyName: string;
onUploadComplete: () => void;
compact?: boolean;
}
interface FileStatus {
file: File;
status: 'pending' | 'uploading' | 'success' | 'error';
message?: string;
}
const VALID_EXTENSIONS = ['.prt', '.sim', '.fem', '.afem'];
export const FileDropzone: React.FC<FileDropzoneProps> = ({
studyName,
onUploadComplete,
compact = false,
}) => {
const [isDragging, setIsDragging] = useState(false);
const [files, setFiles] = useState<FileStatus[]>([]);
const [isUploading, setIsUploading] = useState(false);
const [error, setError] = useState<string | null>(null);
const fileInputRef = useRef<HTMLInputElement>(null);
const validateFile = (file: File): { valid: boolean; reason?: string } => {
const ext = '.' + file.name.split('.').pop()?.toLowerCase();
if (!VALID_EXTENSIONS.includes(ext)) {
return { valid: false, reason: `Invalid type: ${ext}` };
}
// Max 500MB per file
if (file.size > 500 * 1024 * 1024) {
return { valid: false, reason: 'File too large (max 500MB)' };
}
return { valid: true };
};
const handleDragEnter = useCallback((e: React.DragEvent) => {
e.preventDefault();
e.stopPropagation();
setIsDragging(true);
}, []);
const handleDragLeave = useCallback((e: React.DragEvent) => {
e.preventDefault();
e.stopPropagation();
setIsDragging(false);
}, []);
const handleDragOver = useCallback((e: React.DragEvent) => {
e.preventDefault();
e.stopPropagation();
}, []);
const addFiles = useCallback((newFiles: File[]) => {
const validFiles: FileStatus[] = [];
for (const file of newFiles) {
// Skip duplicates
if (files.some(f => f.file.name === file.name)) {
continue;
}
const validation = validateFile(file);
if (validation.valid) {
validFiles.push({ file, status: 'pending' });
} else {
validFiles.push({ file, status: 'error', message: validation.reason });
}
}
setFiles(prev => [...prev, ...validFiles]);
}, [files]);
const handleDrop = useCallback((e: React.DragEvent) => {
e.preventDefault();
e.stopPropagation();
setIsDragging(false);
const droppedFiles = Array.from(e.dataTransfer.files);
addFiles(droppedFiles);
}, [addFiles]);
const handleFileSelect = useCallback((e: React.ChangeEvent<HTMLInputElement>) => {
const selectedFiles = Array.from(e.target.files || []);
addFiles(selectedFiles);
// Reset input so the same file can be selected again
e.target.value = '';
}, [addFiles]);
const removeFile = (index: number) => {
setFiles(prev => prev.filter((_, i) => i !== index));
};
const handleUpload = async () => {
const pendingFiles = files.filter(f => f.status === 'pending');
if (pendingFiles.length === 0) return;
setIsUploading(true);
setError(null);
try {
// Upload files
const response = await intakeApi.uploadFiles(
studyName,
pendingFiles.map(f => f.file)
);
// Update file statuses based on response
const uploadResults = new Map(
response.uploaded_files.map(f => [f.name, f.status === 'uploaded'])
);
setFiles(prev => prev.map(f => {
if (f.status !== 'pending') return f;
const success = uploadResults.get(f.file.name);
return {
...f,
status: success ? 'success' : 'error',
message: success ? undefined : 'Upload failed',
};
}));
// Clear successful uploads after a moment and refresh
setTimeout(() => {
setFiles(prev => prev.filter(f => f.status !== 'success'));
onUploadComplete();
}, 1500);
} catch (err) {
setError(err instanceof Error ? err.message : 'Upload failed');
setFiles(prev => prev.map(f =>
f.status === 'pending'
? { ...f, status: 'error', message: 'Upload failed' }
: f
));
} finally {
setIsUploading(false);
}
};
const pendingCount = files.filter(f => f.status === 'pending').length;
if (compact) {
// Compact inline version
return (
<div className="space-y-2">
<div className="flex items-center gap-2">
<button
onClick={() => fileInputRef.current?.click()}
className="flex items-center gap-2 px-3 py-1.5 rounded-lg text-sm font-medium
bg-dark-700 text-dark-300 hover:bg-dark-600 hover:text-white
transition-colors"
>
<Upload className="w-4 h-4" />
Add Files
</button>
{pendingCount > 0 && (
<button
onClick={handleUpload}
disabled={isUploading}
className="flex items-center gap-2 px-3 py-1.5 rounded-lg text-sm font-medium
bg-primary-500/10 text-primary-400 hover:bg-primary-500/20
disabled:opacity-50 transition-colors"
>
{isUploading ? (
<Loader2 className="w-4 h-4 animate-spin" />
) : (
<Upload className="w-4 h-4" />
)}
Upload {pendingCount} {pendingCount === 1 ? 'File' : 'Files'}
</button>
)}
</div>
{/* File list */}
{files.length > 0 && (
<div className="flex flex-wrap gap-2">
{files.map((f, i) => (
<span
key={i}
className={`inline-flex items-center gap-1.5 px-2 py-1 rounded text-xs
${f.status === 'error' ? 'bg-red-500/10 text-red-400' :
f.status === 'success' ? 'bg-green-500/10 text-green-400' :
'bg-dark-700 text-dark-300'}`}
>
{f.status === 'uploading' && <Loader2 className="w-3 h-3 animate-spin" />}
{f.status === 'success' && <CheckCircle className="w-3 h-3" />}
{f.status === 'error' && <AlertCircle className="w-3 h-3" />}
{f.file.name}
{f.status === 'pending' && (
<button onClick={() => removeFile(i)} className="hover:text-white">
<X className="w-3 h-3" />
</button>
)}
</span>
))}
</div>
)}
<input
ref={fileInputRef}
type="file"
multiple
accept={VALID_EXTENSIONS.join(',')}
onChange={handleFileSelect}
className="hidden"
/>
</div>
);
}
// Full dropzone version
return (
<div className="space-y-4">
{/* Dropzone */}
<div
onDragEnter={handleDragEnter}
onDragLeave={handleDragLeave}
onDragOver={handleDragOver}
onDrop={handleDrop}
onClick={() => fileInputRef.current?.click()}
className={`
relative border-2 border-dashed rounded-xl p-6 cursor-pointer
transition-all duration-200
${isDragging
? 'border-primary-400 bg-primary-400/5'
: 'border-dark-600 hover:border-primary-400/50 hover:bg-white/5'
}
`}
>
<div className="flex flex-col items-center text-center">
<div className={`w-12 h-12 rounded-full flex items-center justify-center mb-3
${isDragging ? 'bg-primary-400/20 text-primary-400' : 'bg-dark-700 text-dark-400'}`}>
<Upload className="w-6 h-6" />
</div>
<p className="text-white font-medium mb-1">
{isDragging ? 'Drop files here' : 'Drop model files here'}
</p>
<p className="text-sm text-dark-400">
or <span className="text-primary-400">click to browse</span>
</p>
<p className="text-xs text-dark-500 mt-2">
Accepts: {VALID_EXTENSIONS.join(', ')}
</p>
</div>
</div>
{/* Error */}
{error && (
<div className="p-3 rounded-lg bg-red-500/10 border border-red-500/30 text-red-400 text-sm flex items-center gap-2">
<AlertCircle className="w-4 h-4 flex-shrink-0" />
{error}
</div>
)}
{/* File List */}
{files.length > 0 && (
<div className="space-y-2">
<h5 className="text-sm font-medium text-dark-300">Files to Upload</h5>
<div className="space-y-1">
{files.map((f, i) => (
<div
key={i}
className={`flex items-center justify-between p-2 rounded-lg
${f.status === 'error' ? 'bg-red-500/10' :
f.status === 'success' ? 'bg-green-500/10' :
'bg-dark-700'}`}
>
<div className="flex items-center gap-2">
{f.status === 'pending' && <FileText className="w-4 h-4 text-dark-400" />}
{f.status === 'uploading' && <Loader2 className="w-4 h-4 text-primary-400 animate-spin" />}
{f.status === 'success' && <CheckCircle className="w-4 h-4 text-green-400" />}
{f.status === 'error' && <AlertCircle className="w-4 h-4 text-red-400" />}
<span className={`text-sm ${f.status === 'error' ? 'text-red-400' :
f.status === 'success' ? 'text-green-400' :
'text-white'}`}>
{f.file.name}
</span>
{f.message && (
<span className="text-xs text-red-400">({f.message})</span>
)}
</div>
{f.status === 'pending' && (
<button
onClick={(e) => {
e.stopPropagation();
removeFile(i);
}}
className="p-1 hover:bg-white/10 rounded text-dark-400 hover:text-white"
>
<X className="w-4 h-4" />
</button>
)}
</div>
))}
</div>
{/* Upload Button */}
{pendingCount > 0 && (
<button
onClick={handleUpload}
disabled={isUploading}
className="w-full flex items-center justify-center gap-2 px-4 py-2 rounded-lg
bg-primary-500 text-white font-medium
hover:bg-primary-400 disabled:opacity-50 disabled:cursor-not-allowed
transition-colors"
>
{isUploading ? (
<>
<Loader2 className="w-4 h-4 animate-spin" />
Uploading...
</>
) : (
<>
<Upload className="w-4 h-4" />
Upload {pendingCount} {pendingCount === 1 ? 'File' : 'Files'}
</>
)}
</button>
)}
</div>
)}
<input
ref={fileInputRef}
type="file"
multiple
accept={VALID_EXTENSIONS.join(',')}
onChange={handleFileSelect}
className="hidden"
/>
</div>
);
};
export default FileDropzone;
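Both upload components perform the same merge after a batch upload: build a name → success map from the server response, then flip pending entries to success or error while leaving everything else untouched. A pure sketch of that merge (the `PendingEntry` shape and `applyUploadResults` name are assumptions for illustration):

```typescript
interface PendingEntry {
  name: string;
  status: 'pending' | 'uploading' | 'success' | 'error';
  message?: string;
}

// Entries in non-pending states (e.g. already rejected by validation)
// are passed through unchanged.
function applyUploadResults(
  entries: PendingEntry[],
  uploaded: Array<{ name: string; status: string }>
): PendingEntry[] {
  const ok = new Map(uploaded.map((f) => [f.name, f.status === 'uploaded']));
  return entries.map((e) => {
    if (e.status !== 'pending') return e;
    const success = ok.get(e.name) === true;
    return {
      ...e,
      status: success ? 'success' : 'error',
      message: success ? undefined : 'Upload failed',
    };
  });
}
```

Keying by filename assumes names are unique within a batch, which both components already enforce by skipping duplicates before upload.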


@@ -0,0 +1,272 @@
/**
* FinalizeModal - Modal for finalizing an inbox study
*
* Allows user to:
* - Select/create topic folder
* - Choose whether to run baseline FEA
* - See progress during finalization
*/
import React, { useState, useEffect } from 'react';
import {
X,
Folder,
CheckCircle,
Loader2,
AlertCircle,
} from 'lucide-react';
import { intakeApi } from '../../api/intake';
import { TopicInfo, InboxStudyDetail } from '../../types/intake';
interface FinalizeModalProps {
studyName: string;
topics: TopicInfo[];
onClose: () => void;
onFinalized: (finalPath: string) => void;
}
export const FinalizeModal: React.FC<FinalizeModalProps> = ({
studyName,
topics,
onClose,
onFinalized,
}) => {
const [studyDetail, setStudyDetail] = useState<InboxStudyDetail | null>(null);
const [selectedTopic, setSelectedTopic] = useState('');
const [newTopic, setNewTopic] = useState('');
const [runBaseline, setRunBaseline] = useState(true);
const [isLoading, setIsLoading] = useState(true);
const [isFinalizing, setIsFinalizing] = useState(false);
const [progress, setProgress] = useState<string>('');
const [error, setError] = useState<string | null>(null);
// Load study detail
useEffect(() => {
const loadStudy = async () => {
try {
const detail = await intakeApi.getInboxStudy(studyName);
setStudyDetail(detail);
// Pre-select topic if set in spec
if (detail.spec.meta.topic) {
setSelectedTopic(detail.spec.meta.topic);
}
} catch (err) {
setError(err instanceof Error ? err.message : 'Failed to load study');
} finally {
setIsLoading(false);
}
};
loadStudy();
}, [studyName]);
const handleFinalize = async () => {
const topic = newTopic.trim() || selectedTopic;
if (!topic) {
setError('Please select or create a topic folder');
return;
}
setIsFinalizing(true);
setError(null);
setProgress('Starting finalization...');
try {
setProgress('Validating study configuration...');
await new Promise((r) => setTimeout(r, 500)); // Visual feedback
if (runBaseline) {
setProgress('Running baseline FEA solve...');
}
const result = await intakeApi.finalize(studyName, {
topic,
run_baseline: runBaseline,
});
setProgress('Finalization complete!');
await new Promise((r) => setTimeout(r, 500));
onFinalized(result.final_path);
} catch (err) {
setError(err instanceof Error ? err.message : 'Finalization failed');
setIsFinalizing(false);
}
};
return (
<div className="fixed inset-0 z-50 flex items-center justify-center bg-dark-900/80 backdrop-blur-sm">
<div className="w-full max-w-lg glass-strong rounded-xl border border-primary-400/20 overflow-hidden">
{/* Header */}
<div className="px-6 py-4 border-b border-primary-400/10 flex items-center justify-between">
<div className="flex items-center gap-3">
<div className="w-10 h-10 rounded-lg bg-primary-400/10 flex items-center justify-center">
<Folder className="w-5 h-5 text-primary-400" />
</div>
<div>
<h3 className="text-lg font-semibold text-white">Finalize Study</h3>
<p className="text-sm text-dark-400">{studyName}</p>
</div>
</div>
{!isFinalizing && (
<button
onClick={onClose}
className="p-2 hover:bg-white/5 rounded-lg transition-colors text-dark-400 hover:text-white"
>
<X className="w-5 h-5" />
</button>
)}
</div>
{/* Content */}
<div className="p-6 space-y-6">
{isLoading ? (
<div className="flex items-center justify-center py-8">
<Loader2 className="w-6 h-6 animate-spin text-primary-400" />
</div>
) : isFinalizing ? (
/* Progress View */
<div className="text-center py-8 space-y-4">
<Loader2 className="w-12 h-12 animate-spin text-primary-400 mx-auto" />
<p className="text-white font-medium">{progress}</p>
<p className="text-sm text-dark-400">
Please wait while your study is being finalized...
</p>
</div>
) : (
<>
{/* Error Display */}
{error && (
<div className="p-3 rounded-lg bg-red-500/10 border border-red-500/30 text-red-400 text-sm flex items-center gap-2">
<AlertCircle className="w-4 h-4 flex-shrink-0" />
{error}
</div>
)}
{/* Study Summary */}
{studyDetail && (
<div className="p-4 rounded-lg bg-dark-800 space-y-2">
<h4 className="text-sm font-medium text-dark-300">Study Summary</h4>
<div className="grid grid-cols-2 gap-4 text-sm">
<div>
<span className="text-dark-500">Status:</span>
<span className="ml-2 text-white capitalize">
{studyDetail.spec.meta.status}
</span>
</div>
<div>
<span className="text-dark-500">Model Files:</span>
<span className="ml-2 text-white">
{studyDetail.files.sim.length + studyDetail.files.prt.length + studyDetail.files.fem.length}
</span>
</div>
<div>
<span className="text-dark-500">Design Variables:</span>
<span className="ml-2 text-white">
{studyDetail.spec.design_variables?.length || 0}
</span>
</div>
<div>
<span className="text-dark-500">Objectives:</span>
<span className="ml-2 text-white">
{studyDetail.spec.objectives?.length || 0}
</span>
</div>
</div>
</div>
)}
{/* Topic Selection */}
<div>
<label className="block text-sm font-medium text-dark-300 mb-2">
Topic Folder <span className="text-red-400">*</span>
</label>
<div className="flex gap-2">
<select
value={selectedTopic}
onChange={(e) => {
setSelectedTopic(e.target.value);
setNewTopic('');
}}
className="flex-1 px-4 py-2.5 rounded-lg bg-dark-800 border border-dark-600
text-white focus:border-primary-400 focus:outline-none
focus:ring-1 focus:ring-primary-400/50"
>
<option value="">Select existing topic...</option>
{topics.map((topic) => (
<option key={topic.name} value={topic.name}>
{topic.name} ({topic.study_count} studies)
</option>
))}
</select>
<span className="text-dark-500 self-center">or</span>
<input
type="text"
value={newTopic}
onChange={(e) => {
setNewTopic(e.target.value.replace(/[^A-Za-z0-9_]/g, '_'));
setSelectedTopic('');
}}
placeholder="New_Topic"
className="flex-1 px-4 py-2.5 rounded-lg bg-dark-800 border border-dark-600
text-white placeholder-dark-500 focus:border-primary-400
focus:outline-none focus:ring-1 focus:ring-primary-400/50"
/>
</div>
<p className="mt-1 text-xs text-dark-500">
Study will be created at: studies/{newTopic || selectedTopic || '<topic>'}/{studyName}/
</p>
</div>
{/* Baseline Option */}
<div>
<label className="flex items-center gap-3 cursor-pointer">
<input
type="checkbox"
checked={runBaseline}
onChange={(e) => setRunBaseline(e.target.checked)}
className="w-4 h-4 rounded border-dark-600 bg-dark-800 text-primary-400
focus:ring-primary-400/50"
/>
<div>
<span className="text-white font-medium">Run baseline FEA solve</span>
<p className="text-xs text-dark-500">
Validates the model and captures baseline performance metrics
</p>
</div>
</label>
</div>
</>
)}
</div>
{/* Footer */}
{!isLoading && !isFinalizing && (
<div className="px-6 py-4 border-t border-primary-400/10 flex justify-end gap-3">
<button
onClick={onClose}
className="px-4 py-2 rounded-lg border border-dark-600 text-dark-300
hover:border-dark-500 hover:text-white transition-colors"
>
Cancel
</button>
<button
onClick={handleFinalize}
disabled={!selectedTopic && !newTopic.trim()}
className="px-6 py-2 rounded-lg font-medium transition-all disabled:opacity-50
flex items-center gap-2"
style={{
background: 'linear-gradient(135deg, #00d4e6 0%, #0891b2 100%)',
color: '#000',
}}
>
<CheckCircle className="w-4 h-4" />
Finalize Study
</button>
</div>
)}
</div>
</div>
);
};
export default FinalizeModal;
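Both FinalizeModal and StudioBuildDialog funnel free-text topic names through the same character-replacement pattern so the result is a safe folder name. Extracted as a pure helper; the function name `sanitizeTopic` is an assumption for illustration:

```typescript
// Sketch of the new-topic sanitization used by the intake dialogs:
// any character outside [A-Za-z0-9_] is replaced with an underscore.
function sanitizeTopic(raw: string): string {
  return raw.replace(/[^A-Za-z0-9_]/g, '_');
}
```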


@@ -0,0 +1,147 @@
/**
* InboxSection - Section displaying inbox studies on Home page
*
* Shows the "Create New Study" card and lists all inbox studies
* with their current status and available actions.
*/
import React, { useState, useEffect, useCallback } from 'react';
import { Inbox, RefreshCw, ChevronDown, ChevronRight } from 'lucide-react';
import { intakeApi } from '../../api/intake';
import { InboxStudy, TopicInfo } from '../../types/intake';
import { CreateStudyCard } from './CreateStudyCard';
import { InboxStudyCard } from './InboxStudyCard';
import { FinalizeModal } from './FinalizeModal';
interface InboxSectionProps {
onStudyFinalized?: () => void;
}
export const InboxSection: React.FC<InboxSectionProps> = ({ onStudyFinalized }) => {
const [inboxStudies, setInboxStudies] = useState<InboxStudy[]>([]);
const [topics, setTopics] = useState<TopicInfo[]>([]);
const [isLoading, setIsLoading] = useState(true);
const [isExpanded, setIsExpanded] = useState(true);
const [selectedStudyForFinalize, setSelectedStudyForFinalize] = useState<string | null>(null);
const loadData = useCallback(async () => {
setIsLoading(true);
try {
const [inboxResponse, topicsResponse] = await Promise.all([
intakeApi.listInbox(),
intakeApi.listTopics(),
]);
setInboxStudies(inboxResponse.studies);
setTopics(topicsResponse.topics);
} catch (err) {
console.error('Failed to load inbox data:', err);
} finally {
setIsLoading(false);
}
}, []);
useEffect(() => {
loadData();
}, [loadData]);
const handleStudyCreated = (_studyName: string) => {
loadData();
};
const handleStudyFinalized = (_finalPath: string) => {
setSelectedStudyForFinalize(null);
loadData();
onStudyFinalized?.();
};
const pendingStudies = inboxStudies.filter(
(s) => !['ready', 'running', 'completed'].includes(s.status)
);
return (
<div className="space-y-4">
{/* Section Header */}
<button
onClick={() => setIsExpanded(!isExpanded)}
className="w-full flex items-center justify-between px-2 py-1 hover:bg-white/5 rounded-lg transition-colors"
>
<div className="flex items-center gap-3">
<div className="w-8 h-8 rounded-lg bg-primary-400/10 flex items-center justify-center">
<Inbox className="w-4 h-4 text-primary-400" />
</div>
<div className="text-left">
<h2 className="text-lg font-semibold text-white">Study Inbox</h2>
<p className="text-sm text-dark-400">
{pendingStudies.length} pending {pendingStudies.length === 1 ? 'study' : 'studies'}
</p>
</div>
</div>
<div className="flex items-center gap-2">
<button
onClick={(e) => {
e.stopPropagation();
loadData();
}}
className="p-2 hover:bg-white/5 rounded-lg transition-colors text-dark-400 hover:text-primary-400"
title="Refresh"
>
<RefreshCw className={`w-4 h-4 ${isLoading ? 'animate-spin' : ''}`} />
</button>
{isExpanded ? (
<ChevronDown className="w-5 h-5 text-dark-400" />
) : (
<ChevronRight className="w-5 h-5 text-dark-400" />
)}
</div>
</button>
{/* Content */}
{isExpanded && (
<div className="space-y-4">
{/* Create Study Card */}
<CreateStudyCard topics={topics} onStudyCreated={handleStudyCreated} />
{/* Inbox Studies List */}
{inboxStudies.length > 0 && (
<div className="space-y-3">
<h3 className="text-sm font-medium text-dark-400 px-2">
Inbox Studies ({inboxStudies.length})
</h3>
{inboxStudies.map((study) => (
<InboxStudyCard
key={study.study_name}
study={study}
onRefresh={loadData}
onSelect={setSelectedStudyForFinalize}
/>
))}
</div>
)}
{/* Empty State */}
{!isLoading && inboxStudies.length === 0 && (
<div className="text-center py-8 text-dark-400">
<Inbox className="w-12 h-12 mx-auto mb-3 opacity-30" />
<p>No studies in inbox</p>
<p className="text-sm text-dark-500">
Create a new study to get started
</p>
</div>
)}
</div>
)}
{/* Finalize Modal */}
{selectedStudyForFinalize && (
<FinalizeModal
studyName={selectedStudyForFinalize}
topics={topics}
onClose={() => setSelectedStudyForFinalize(null)}
onFinalized={handleStudyFinalized}
/>
)}
</div>
);
};
export default InboxSection;
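The pending count shown in the section header excludes studies that have already reached a ready, running, or completed state. The same filter as a pure predicate; the `isPending` name and `TERMINAL` constant are illustrative:

```typescript
// Statuses mirror the SpecStatus union used by the intake components.
type SpecStatus =
  | 'draft' | 'introspected' | 'configured' | 'validated'
  | 'ready' | 'running' | 'completed' | 'failed';

// Studies in these states no longer need user attention in the inbox header.
const TERMINAL: SpecStatus[] = ['ready', 'running', 'completed'];

function isPending(status: SpecStatus): boolean {
  return !TERMINAL.includes(status);
}
```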


@@ -0,0 +1,455 @@
/**
* InboxStudyCard - Card displaying an inbox study with actions
*
* Shows study status, files, and provides actions for:
* - Running introspection
* - Generating README
* - Finalizing the study
*/
import React, { useState, useEffect } from 'react';
import {
FileText,
Folder,
Trash2,
Play,
CheckCircle,
Clock,
AlertCircle,
Loader2,
ChevronDown,
ChevronRight,
Sparkles,
ArrowRight,
Eye,
Save,
} from 'lucide-react';
import { InboxStudy, SpecStatus, ExpressionInfo, InboxStudyDetail } from '../../types/intake';
import { intakeApi } from '../../api/intake';
import { FileDropzone } from './FileDropzone';
import { ContextFileUpload } from './ContextFileUpload';
import { ExpressionList } from './ExpressionList';
interface InboxStudyCardProps {
study: InboxStudy;
onRefresh: () => void;
onSelect: (studyName: string) => void;
}
const statusConfig: Record<SpecStatus, { icon: React.ReactNode; color: string; label: string }> = {
draft: {
icon: <Clock className="w-4 h-4" />,
color: 'text-dark-400 bg-dark-600',
label: 'Draft',
},
introspected: {
icon: <CheckCircle className="w-4 h-4" />,
color: 'text-blue-400 bg-blue-500/10',
label: 'Introspected',
},
configured: {
icon: <CheckCircle className="w-4 h-4" />,
color: 'text-green-400 bg-green-500/10',
label: 'Configured',
},
validated: {
icon: <CheckCircle className="w-4 h-4" />,
color: 'text-green-400 bg-green-500/10',
label: 'Validated',
},
ready: {
icon: <CheckCircle className="w-4 h-4" />,
color: 'text-primary-400 bg-primary-500/10',
label: 'Ready',
},
running: {
icon: <Play className="w-4 h-4" />,
color: 'text-yellow-400 bg-yellow-500/10',
label: 'Running',
},
completed: {
icon: <CheckCircle className="w-4 h-4" />,
color: 'text-green-400 bg-green-500/10',
label: 'Completed',
},
failed: {
icon: <AlertCircle className="w-4 h-4" />,
color: 'text-red-400 bg-red-500/10',
label: 'Failed',
},
};
export const InboxStudyCard: React.FC<InboxStudyCardProps> = ({
study,
onRefresh,
onSelect,
}) => {
const [isExpanded, setIsExpanded] = useState(false);
const [isIntrospecting, setIsIntrospecting] = useState(false);
const [isGeneratingReadme, setIsGeneratingReadme] = useState(false);
const [isDeleting, setIsDeleting] = useState(false);
const [error, setError] = useState<string | null>(null);
// Introspection data (fetched when expanded)
const [studyDetail, setStudyDetail] = useState<InboxStudyDetail | null>(null);
const [isLoadingDetail, setIsLoadingDetail] = useState(false);
const [selectedExpressions, setSelectedExpressions] = useState<string[]>([]);
const [showReadme, setShowReadme] = useState(false);
const [readmeContent, setReadmeContent] = useState<string | null>(null);
const [isSavingDVs, setIsSavingDVs] = useState(false);
const [dvSaveMessage, setDvSaveMessage] = useState<string | null>(null);
const status = statusConfig[study.status] || statusConfig.draft;
// Fetch study details when expanded for the first time
useEffect(() => {
if (isExpanded && !studyDetail && !isLoadingDetail) {
loadStudyDetail();
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [isExpanded]);
const loadStudyDetail = async () => {
setIsLoadingDetail(true);
try {
const detail = await intakeApi.getInboxStudy(study.study_name);
setStudyDetail(detail);
// Auto-select candidate expressions
const introspection = detail.spec?.model?.introspection;
if (introspection?.expressions) {
const candidates = introspection.expressions
.filter((e: ExpressionInfo) => e.is_candidate)
.map((e: ExpressionInfo) => e.name);
setSelectedExpressions(candidates);
}
} catch (err) {
console.error('Failed to load study detail:', err);
} finally {
setIsLoadingDetail(false);
}
};
const handleIntrospect = async () => {
setIsIntrospecting(true);
setError(null);
try {
await intakeApi.introspect({ study_name: study.study_name });
// Reload study detail to get new introspection data
await loadStudyDetail();
onRefresh();
} catch (err) {
setError(err instanceof Error ? err.message : 'Introspection failed');
} finally {
setIsIntrospecting(false);
}
};
const handleGenerateReadme = async () => {
setIsGeneratingReadme(true);
setError(null);
try {
const response = await intakeApi.generateReadme(study.study_name);
setReadmeContent(response.content);
setShowReadme(true);
onRefresh();
} catch (err) {
setError(err instanceof Error ? err.message : 'README generation failed');
} finally {
setIsGeneratingReadme(false);
}
};
const handleDelete = async () => {
if (!window.confirm(`Delete inbox study "${study.study_name}"? This cannot be undone.`)) {
return;
}
setIsDeleting(true);
try {
await intakeApi.deleteInboxStudy(study.study_name);
onRefresh();
} catch (err) {
setError(err instanceof Error ? err.message : 'Delete failed');
setIsDeleting(false);
}
};
const handleSaveDesignVariables = async () => {
if (selectedExpressions.length === 0) {
setError('Please select at least one expression to use as a design variable');
return;
}
setIsSavingDVs(true);
setError(null);
setDvSaveMessage(null);
try {
const result = await intakeApi.createDesignVariables(study.study_name, selectedExpressions);
setDvSaveMessage(`Created ${result.total_created} design variable(s)`);
// Reload study detail to see updated spec
await loadStudyDetail();
onRefresh();
} catch (err) {
setError(err instanceof Error ? err.message : 'Failed to save design variables');
} finally {
setIsSavingDVs(false);
}
};
const canIntrospect = study.status === 'draft' && study.model_files.length > 0;
const canGenerateReadme = study.status === 'introspected';
const canFinalize = ['introspected', 'configured'].includes(study.status);
const canSaveDVs = study.status === 'introspected' && selectedExpressions.length > 0;
return (
<div className="glass rounded-xl border border-primary-400/10 overflow-hidden">
{/* Header - Always visible */}
<button
onClick={() => setIsExpanded(!isExpanded)}
className="w-full px-4 py-3 flex items-center justify-between hover:bg-white/5 transition-colors"
>
<div className="flex items-center gap-3">
<div className="w-10 h-10 rounded-lg bg-dark-700 flex items-center justify-center">
<Folder className="w-5 h-5 text-primary-400" />
</div>
<div className="text-left">
<h4 className="text-white font-medium">{study.study_name}</h4>
{study.description && (
<p className="text-sm text-dark-400 truncate max-w-[300px]">
{study.description}
</p>
)}
</div>
</div>
<div className="flex items-center gap-3">
{/* Status Badge */}
<span className={`inline-flex items-center gap-1.5 px-2.5 py-1 rounded-full text-xs font-medium ${status.color}`}>
{status.icon}
{status.label}
</span>
{/* File Count */}
<span className="text-dark-500 text-sm">
{study.model_files.length} {study.model_files.length === 1 ? 'file' : 'files'}
</span>
{/* Expand Icon */}
{isExpanded ? (
<ChevronDown className="w-4 h-4 text-dark-400" />
) : (
<ChevronRight className="w-4 h-4 text-dark-400" />
)}
</div>
</button>
{/* Expanded Content */}
{isExpanded && (
<div className="px-4 pb-4 space-y-4 border-t border-primary-400/10 pt-4">
{/* Error Display */}
{error && (
<div className="p-3 rounded-lg bg-red-500/10 border border-red-500/30 text-red-400 text-sm flex items-center gap-2">
<AlertCircle className="w-4 h-4 flex-shrink-0" />
{error}
</div>
)}
{/* Success Message */}
{dvSaveMessage && (
<div className="p-3 rounded-lg bg-green-500/10 border border-green-500/30 text-green-400 text-sm flex items-center gap-2">
<CheckCircle className="w-4 h-4 flex-shrink-0" />
{dvSaveMessage}
</div>
)}
{/* Files Section */}
{study.model_files.length > 0 && (
<div>
<h5 className="text-sm font-medium text-dark-300 mb-2">Model Files</h5>
<div className="flex flex-wrap gap-2">
{study.model_files.map((file) => (
<span
key={file}
className="inline-flex items-center gap-1.5 px-2 py-1 rounded bg-dark-700 text-dark-300 text-xs"
>
<FileText className="w-3 h-3" />
{file}
</span>
))}
</div>
</div>
)}
{/* Model File Upload Section */}
<div>
<h5 className="text-sm font-medium text-dark-300 mb-2">Upload Model Files</h5>
<FileDropzone
studyName={study.study_name}
onUploadComplete={onRefresh}
compact={true}
/>
</div>
{/* Context File Upload Section */}
<ContextFileUpload
studyName={study.study_name}
onUploadComplete={onRefresh}
/>
{/* Introspection Results - Expressions */}
{isLoadingDetail && (
<div className="flex items-center gap-2 text-dark-400 text-sm py-4">
<Loader2 className="w-4 h-4 animate-spin" />
Loading introspection data...
</div>
)}
{studyDetail?.spec?.model?.introspection?.expressions &&
studyDetail.spec.model.introspection.expressions.length > 0 && (
<ExpressionList
expressions={studyDetail.spec.model.introspection.expressions}
massKg={studyDetail.spec.model.introspection.mass_kg}
selectedExpressions={selectedExpressions}
onSelectionChange={setSelectedExpressions}
readOnly={study.status === 'configured'}
compact={true}
/>
)}
{/* README Preview Section */}
{(readmeContent || study.status === 'configured') && (
<div className="space-y-2">
<div className="flex items-center justify-between">
<h5 className="text-sm font-medium text-dark-300 flex items-center gap-2">
<FileText className="w-4 h-4" />
README.md
</h5>
<button
onClick={() => setShowReadme(!showReadme)}
className="flex items-center gap-1 px-2 py-1 text-xs rounded bg-dark-600
text-dark-300 hover:bg-dark-500 transition-colors"
>
<Eye className="w-3 h-3" />
{showReadme ? 'Hide' : 'Preview'}
</button>
</div>
{showReadme && readmeContent && (
<div className="max-h-64 overflow-y-auto rounded-lg border border-dark-600
bg-dark-800 p-4">
<pre className="text-xs text-dark-300 whitespace-pre-wrap font-mono">
{readmeContent}
</pre>
</div>
)}
</div>
)}
{/* No Files Warning */}
{study.model_files.length === 0 && (
<div className="p-3 rounded-lg bg-yellow-500/10 border border-yellow-500/30 text-yellow-400 text-sm flex items-center gap-2">
<AlertCircle className="w-4 h-4 flex-shrink-0" />
No model files found. Upload .prt, .sim, or .fem files to continue.
</div>
)}
{/* Actions */}
<div className="flex flex-wrap gap-2">
{/* Introspect */}
{canIntrospect && (
<button
onClick={handleIntrospect}
disabled={isIntrospecting}
className="flex items-center gap-2 px-3 py-1.5 rounded-lg text-sm font-medium
bg-blue-500/10 text-blue-400 hover:bg-blue-500/20
disabled:opacity-50 transition-colors"
>
{isIntrospecting ? (
<Loader2 className="w-4 h-4 animate-spin" />
) : (
<Play className="w-4 h-4" />
)}
Introspect Model
</button>
)}
{/* Save Design Variables */}
{canSaveDVs && (
<button
onClick={handleSaveDesignVariables}
disabled={isSavingDVs}
className="flex items-center gap-2 px-3 py-1.5 rounded-lg text-sm font-medium
bg-green-500/10 text-green-400 hover:bg-green-500/20
disabled:opacity-50 transition-colors"
>
{isSavingDVs ? (
<Loader2 className="w-4 h-4 animate-spin" />
) : (
<Save className="w-4 h-4" />
)}
Save as DVs ({selectedExpressions.length})
</button>
)}
{/* Generate README */}
{canGenerateReadme && (
<button
onClick={handleGenerateReadme}
disabled={isGeneratingReadme}
className="flex items-center gap-2 px-3 py-1.5 rounded-lg text-sm font-medium
bg-purple-500/10 text-purple-400 hover:bg-purple-500/20
disabled:opacity-50 transition-colors"
>
{isGeneratingReadme ? (
<Loader2 className="w-4 h-4 animate-spin" />
) : (
<Sparkles className="w-4 h-4" />
)}
Generate README
</button>
)}
{/* Finalize */}
{canFinalize && (
<button
onClick={() => onSelect(study.study_name)}
className="flex items-center gap-2 px-3 py-1.5 rounded-lg text-sm font-medium
bg-primary-500/10 text-primary-400 hover:bg-primary-500/20
transition-colors"
>
<ArrowRight className="w-4 h-4" />
Finalize Study
</button>
)}
{/* Delete */}
<button
onClick={handleDelete}
disabled={isDeleting}
className="flex items-center gap-2 px-3 py-1.5 rounded-lg text-sm font-medium
bg-red-500/10 text-red-400 hover:bg-red-500/20
disabled:opacity-50 transition-colors ml-auto"
>
{isDeleting ? (
<Loader2 className="w-4 h-4 animate-spin" />
) : (
<Trash2 className="w-4 h-4" />
)}
Delete
</button>
</div>
{/* Workflow Hint */}
{study.status === 'draft' && study.model_files.length > 0 && (
<p className="text-xs text-dark-500">
Next step: Run introspection to discover expressions and model properties.
</p>
)}
{study.status === 'introspected' && (
<p className="text-xs text-dark-500">
Next step: Generate README with Claude AI, then finalize to create the study.
</p>
)}
</div>
)}
</div>
);
};
export default InboxStudyCard;
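The four `can*` flags above gate which action buttons render on the card. The combined gating logic can be expressed as one pure function; the `actionGates` name and input shape are assumptions for illustration:

```typescript
interface GateInput {
  status: string;          // current SpecStatus of the study
  modelFileCount: number;  // uploaded .prt/.sim/.fem files
  selectedCount: number;   // expressions selected as design-variable candidates
}

// Statuses from which a study may be finalized into a topic folder.
const FINALIZABLE: string[] = ['introspected', 'configured'];

function actionGates({ status, modelFileCount, selectedCount }: GateInput) {
  return {
    canIntrospect: status === 'draft' && modelFileCount > 0,
    canGenerateReadme: status === 'introspected',
    canFinalize: FINALIZABLE.includes(status),
    canSaveDVs: status === 'introspected' && selectedCount > 0,
  };
}
```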


@@ -0,0 +1,13 @@
/**
* Intake Components Index
*
* Export all intake workflow components.
*/
export { CreateStudyCard } from './CreateStudyCard';
export { InboxStudyCard } from './InboxStudyCard';
export { FinalizeModal } from './FinalizeModal';
export { InboxSection } from './InboxSection';
export { FileDropzone } from './FileDropzone';
export { ContextFileUpload } from './ContextFileUpload';
export { ExpressionList } from './ExpressionList';


@@ -0,0 +1,254 @@
/**
* StudioBuildDialog - Final dialog to name and build the study
*/
import React, { useState, useEffect } from 'react';
import { X, Loader2, FolderOpen, AlertCircle, CheckCircle, Sparkles, Play } from 'lucide-react';
import { intakeApi } from '../../api/intake';
interface StudioBuildDialogProps {
draftId: string;
onClose: () => void;
onBuildComplete: (finalPath: string, finalName: string) => void;
}
interface Topic {
name: string;
study_count: number;
}
export const StudioBuildDialog: React.FC<StudioBuildDialogProps> = ({
draftId,
onClose,
onBuildComplete,
}) => {
const [studyName, setStudyName] = useState('');
const [topic, setTopic] = useState('');
const [newTopic, setNewTopic] = useState('');
const [useNewTopic, setUseNewTopic] = useState(false);
const [topics, setTopics] = useState<Topic[]>([]);
const [isBuilding, setIsBuilding] = useState(false);
const [error, setError] = useState<string | null>(null);
const [validationErrors, setValidationErrors] = useState<string[]>([]);
// Load topics
useEffect(() => {
loadTopics();
}, []);
const loadTopics = async () => {
try {
const response = await intakeApi.listTopics();
setTopics(response.topics);
if (response.topics.length > 0) {
setTopic(response.topics[0].name);
}
} catch (err) {
console.error('Failed to load topics:', err);
}
};
// Validate study name
useEffect(() => {
const errors: string[] = [];
if (studyName.length > 0) {
if (studyName.length < 3) {
errors.push('Name must be at least 3 characters');
}
if (!/^[a-z0-9_]+$/.test(studyName)) {
errors.push('Use only lowercase letters, numbers, and underscores');
}
if (studyName.startsWith('draft_')) {
errors.push('Name cannot start with "draft_"');
}
}
setValidationErrors(errors);
}, [studyName]);
const handleBuild = async () => {
const finalTopic = useNewTopic ? newTopic : topic;
if (!studyName || !finalTopic) {
setError('Please provide both a study name and topic');
return;
}
if (validationErrors.length > 0) {
setError('Please fix validation errors');
return;
}
setIsBuilding(true);
setError(null);
try {
const response = await intakeApi.finalizeStudio(draftId, {
topic: finalTopic,
newName: studyName,
runBaseline: false,
});
onBuildComplete(response.final_path, response.final_name);
} catch (err) {
setError(err instanceof Error ? err.message : 'Build failed');
} finally {
setIsBuilding(false);
}
};
const isValid = studyName.length >= 3 &&
validationErrors.length === 0 &&
(topic || (useNewTopic && newTopic));
return (
<div className="fixed inset-0 bg-black/50 flex items-center justify-center z-50">
<div className="bg-dark-850 border border-dark-700 rounded-xl shadow-xl w-full max-w-lg mx-4">
{/* Header */}
<div className="flex items-center justify-between p-4 border-b border-dark-700">
<div className="flex items-center gap-2">
<Sparkles className="w-5 h-5 text-primary-400" />
<h2 className="text-lg font-semibold text-white">Build Study</h2>
</div>
<button
onClick={onClose}
className="p-1 hover:bg-dark-700 rounded text-dark-400 hover:text-white transition-colors"
>
<X className="w-5 h-5" />
</button>
</div>
{/* Content */}
<div className="p-6 space-y-6">
{/* Study Name */}
<div>
<label className="block text-sm font-medium text-dark-300 mb-2">
Study Name
</label>
<input
type="text"
value={studyName}
onChange={(e) => setStudyName(e.target.value.toLowerCase().replace(/[^a-z0-9_]/g, '_'))}
placeholder="my_optimization_study"
className="w-full bg-dark-700 border border-dark-600 rounded-lg px-3 py-2 text-white placeholder-dark-500 focus:outline-none focus:border-primary-400"
/>
{validationErrors.length > 0 && (
<div className="mt-2 space-y-1">
{validationErrors.map((err, i) => (
<p key={i} className="text-xs text-red-400 flex items-center gap-1">
<AlertCircle className="w-3 h-3" />
{err}
</p>
))}
</div>
)}
{studyName.length >= 3 && validationErrors.length === 0 && (
<p className="mt-2 text-xs text-green-400 flex items-center gap-1">
<CheckCircle className="w-3 h-3" />
Name is valid
</p>
)}
</div>
{/* Topic Selection */}
<div>
<label className="block text-sm font-medium text-dark-300 mb-2">
Topic Folder
</label>
{!useNewTopic && topics.length > 0 && (
<div className="space-y-2">
<select
value={topic}
onChange={(e) => setTopic(e.target.value)}
className="w-full bg-dark-700 border border-dark-600 rounded-lg px-3 py-2 text-white focus:outline-none focus:border-primary-400"
>
{topics.map((t) => (
<option key={t.name} value={t.name}>
{t.name} ({t.study_count} studies)
</option>
))}
</select>
<button
onClick={() => setUseNewTopic(true)}
className="text-sm text-primary-400 hover:text-primary-300"
>
+ Create new topic
</button>
</div>
)}
{(useNewTopic || topics.length === 0) && (
<div className="space-y-2">
<input
type="text"
value={newTopic}
onChange={(e) => setNewTopic(e.target.value.replace(/[^A-Za-z0-9_]/g, '_'))}
placeholder="NewTopic"
className="w-full bg-dark-700 border border-dark-600 rounded-lg px-3 py-2 text-white placeholder-dark-500 focus:outline-none focus:border-primary-400"
/>
{topics.length > 0 && (
<button
onClick={() => setUseNewTopic(false)}
className="text-sm text-dark-400 hover:text-white"
>
Use existing topic
</button>
)}
</div>
)}
</div>
{/* Preview */}
<div className="p-3 bg-dark-700/50 rounded-lg">
<p className="text-xs text-dark-400 mb-1">Study will be created at:</p>
<p className="text-sm text-white font-mono flex items-center gap-2">
<FolderOpen className="w-4 h-4 text-primary-400" />
studies/{useNewTopic ? newTopic || '...' : topic}/{studyName || '...'}
</p>
</div>
{/* Error */}
{error && (
<div className="p-3 rounded-lg bg-red-500/10 border border-red-500/30 text-red-400 text-sm flex items-center gap-2">
<AlertCircle className="w-4 h-4 flex-shrink-0" />
{error}
</div>
)}
</div>
{/* Footer */}
<div className="flex items-center justify-end gap-3 p-4 border-t border-dark-700">
<button
onClick={onClose}
disabled={isBuilding}
className="px-4 py-2 text-sm text-dark-300 hover:text-white hover:bg-dark-700 rounded-lg transition-colors"
>
Cancel
</button>
<button
onClick={handleBuild}
disabled={!isValid || isBuilding}
className="flex items-center gap-2 px-4 py-2 text-sm font-medium bg-primary-500 text-white rounded-lg hover:bg-primary-400 disabled:opacity-50 disabled:cursor-not-allowed transition-colors"
>
{isBuilding ? (
<>
<Loader2 className="w-4 h-4 animate-spin" />
Building...
</>
) : (
<>
<Play className="w-4 h-4" />
Build Study
</>
)}
</button>
</div>
</div>
</div>
);
};
export default StudioBuildDialog;
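The study-name rules enforced by the validation effect above (minimum length, lowercase snake_case, no `draft_` prefix) can be collected into one pure helper. The function is a sketch of the same checks, not part of the component:

```typescript
// Returns the list of validation messages for a proposed study name;
// an empty input produces no errors so the field starts clean.
function validateStudyName(name: string): string[] {
  const errors: string[] = [];
  if (name.length === 0) return errors;
  if (name.length < 3) {
    errors.push('Name must be at least 3 characters');
  }
  if (!/^[a-z0-9_]+$/.test(name)) {
    errors.push('Use only lowercase letters, numbers, and underscores');
  }
  if (name.startsWith('draft_')) {
    errors.push('Name cannot start with "draft_"');
  }
  return errors;
}
```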


@@ -0,0 +1,375 @@
/**
* StudioChat - Context-aware AI chat for Studio
*
* Uses the existing useChat hook to communicate with Claude via WebSocket.
* Injects model files and context documents into the conversation.
*/
import React, { useRef, useEffect, useState, useMemo } from 'react';
import { Send, Loader2, Sparkles, FileText, Wifi, WifiOff, Bot, User, File, AlertCircle } from 'lucide-react';
import { useChat } from '../../hooks/useChat';
import { useSpecStore, useSpec } from '../../hooks/useSpecStore';
import { MarkdownRenderer } from '../MarkdownRenderer';
import { ToolCallCard } from '../chat/ToolCallCard';
interface StudioChatProps {
draftId: string;
contextFiles: string[];
contextContent: string;
modelFiles: string[];
onSpecUpdated: () => void;
}
export const StudioChat: React.FC<StudioChatProps> = ({
draftId,
contextFiles,
contextContent,
modelFiles,
onSpecUpdated,
}) => {
const messagesEndRef = useRef<HTMLDivElement>(null);
const inputRef = useRef<HTMLTextAreaElement>(null);
const [input, setInput] = useState('');
const [hasInjectedContext, setHasInjectedContext] = useState(false);
// Get spec store for canvas updates
const spec = useSpec();
const { reloadSpec, setSpecFromWebSocket } = useSpecStore();
// Build canvas state with full context for Claude
const canvasState = useMemo(() => ({
nodes: [],
edges: [],
studyName: draftId,
studyPath: `_inbox/${draftId}`,
// Include file info for Claude context
modelFiles,
contextFiles,
contextContent: contextContent.substring(0, 50000), // Limit context size
}), [draftId, modelFiles, contextFiles, contextContent]);
// Use the chat hook with WebSocket
// Power mode gives Claude write permissions to modify the spec
const {
messages,
isThinking,
error,
isConnected,
sendMessage,
updateCanvasState,
} = useChat({
studyId: draftId,
mode: 'power', // Power mode runs Claude with --dangerously-skip-permissions, so it can write files
useWebSocket: true,
canvasState,
onError: (err) => console.error('[StudioChat] Error:', err),
onSpecUpdated: (newSpec) => {
// Claude modified the spec - update the store directly
console.log('[StudioChat] Spec updated by Claude');
setSpecFromWebSocket(newSpec, draftId);
onSpecUpdated();
},
onCanvasModification: (modification) => {
// Claude wants to modify canvas - reload the spec
console.log('[StudioChat] Canvas modification:', modification);
reloadSpec();
onSpecUpdated();
},
});
// Update canvas state when context changes
useEffect(() => {
updateCanvasState(canvasState);
}, [canvasState, updateCanvasState]);
// Scroll to bottom when messages change
useEffect(() => {
messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
}, [messages]);
// Auto-focus input
useEffect(() => {
inputRef.current?.focus();
}, []);
// Build context summary for display
const contextSummary = useMemo(() => {
const parts: string[] = [];
if (modelFiles.length > 0) {
parts.push(`${modelFiles.length} model file${modelFiles.length > 1 ? 's' : ''}`);
}
if (contextFiles.length > 0) {
parts.push(`${contextFiles.length} context doc${contextFiles.length > 1 ? 's' : ''}`);
}
if (contextContent) {
parts.push(`${contextContent.length.toLocaleString()} chars context`);
}
return parts.join(', ');
}, [modelFiles, contextFiles, contextContent]);
const handleSend = () => {
if (!input.trim() || isThinking) return;
let messageToSend = input.trim();
// On first message, inject full context so Claude has everything it needs
if (!hasInjectedContext && (modelFiles.length > 0 || contextContent)) {
const contextParts: string[] = [];
// Add model files info
if (modelFiles.length > 0) {
contextParts.push(`**Model Files Uploaded:**\n${modelFiles.map(f => `- ${f}`).join('\n')}`);
}
// Add context document content (full text)
if (contextContent) {
contextParts.push(`**Context Documents Content:**\n\`\`\`\n${contextContent.substring(0, 30000)}\n\`\`\``);
}
// Add current spec state
if (spec) {
const dvCount = spec.design_variables?.length || 0;
const objCount = spec.objectives?.length || 0;
const extCount = spec.extractors?.length || 0;
if (dvCount > 0 || objCount > 0 || extCount > 0) {
contextParts.push(`**Current Configuration:** ${dvCount} design variables, ${objCount} objectives, ${extCount} extractors`);
}
}
if (contextParts.length > 0) {
messageToSend = `${contextParts.join('\n\n')}\n\n---\n\n**User Request:** ${messageToSend}`;
}
setHasInjectedContext(true);
}
sendMessage(messageToSend);
setInput('');
};
const handleKeyDown = (e: React.KeyboardEvent) => {
if (e.key === 'Enter' && !e.shiftKey) {
e.preventDefault();
handleSend();
}
};
// Welcome message for empty state
const showWelcome = messages.length === 0;
// Check if we have any context
const hasContext = modelFiles.length > 0 || contextContent.length > 0;
return (
<div className="h-full flex flex-col">
{/* Header */}
<div className="p-3 border-b border-dark-700 flex-shrink-0">
<div className="flex items-center justify-between mb-2">
<div className="flex items-center gap-2">
<Sparkles className="w-5 h-5 text-primary-400" />
<span className="font-medium text-white">Studio Assistant</span>
</div>
<span className={`flex items-center gap-1 text-xs px-2 py-0.5 rounded ${
isConnected
? 'text-green-400 bg-green-400/10'
: 'text-red-400 bg-red-400/10'
}`}>
{isConnected ? <Wifi className="w-3 h-3" /> : <WifiOff className="w-3 h-3" />}
{isConnected ? 'Connected' : 'Disconnected'}
</span>
</div>
{/* Context indicator */}
{contextSummary && (
<div className="flex items-center gap-2 text-xs">
<div className="flex items-center gap-1 text-amber-400 bg-amber-400/10 px-2 py-1 rounded">
<FileText className="w-3 h-3" />
<span>{contextSummary}</span>
</div>
{hasContext && !hasInjectedContext && (
<span className="text-dark-500">Will be sent with first message</span>
)}
{hasInjectedContext && (
<span className="text-green-500">Context sent</span>
)}
</div>
)}
</div>
{/* Messages */}
<div className="flex-1 overflow-y-auto p-3 space-y-4">
{/* Welcome message with context awareness */}
{showWelcome && (
<div className="flex gap-3">
<div className="flex-shrink-0 w-8 h-8 rounded-lg flex items-center justify-center bg-primary-500/20 text-primary-400">
<Bot className="w-4 h-4" />
</div>
<div className="flex-1 bg-dark-700 rounded-lg px-4 py-3 text-sm text-dark-100">
<MarkdownRenderer content={hasContext
? `I can see you've uploaded files. Here's what I have access to:
${modelFiles.length > 0 ? `**Model Files:** ${modelFiles.join(', ')}` : ''}
${contextContent ? `\n**Context Document:** ${contextContent.substring(0, 200)}...` : ''}
Tell me what you want to optimize and I'll help you configure the study!`
: `Welcome to Atomizer Studio! I'm here to help you configure your optimization study.
**What I can do:**
- Read your uploaded context documents
- Help set up design variables, objectives, and constraints
- Create extractors for physics outputs
- Suggest optimization strategies
Upload your model files and any requirements documents, then tell me what you want to optimize!`} />
</div>
</div>
)}
{/* File context display (only if we have files but no messages yet) */}
{showWelcome && modelFiles.length > 0 && (
<div className="bg-dark-800/50 rounded-lg p-3 border border-dark-700">
<p className="text-xs text-dark-400 mb-2 font-medium">Loaded Files:</p>
<div className="flex flex-wrap gap-2">
{modelFiles.map((file, idx) => (
<span key={idx} className="flex items-center gap-1 text-xs bg-blue-500/10 text-blue-400 px-2 py-1 rounded">
<File className="w-3 h-3" />
{file}
</span>
))}
{contextFiles.map((file, idx) => (
<span key={idx} className="flex items-center gap-1 text-xs bg-amber-500/10 text-amber-400 px-2 py-1 rounded">
<FileText className="w-3 h-3" />
{file}
</span>
))}
</div>
</div>
)}
{/* Chat messages */}
{messages.map((msg) => {
const isAssistant = msg.role === 'assistant';
const isSystem = msg.role === 'system';
// System messages
if (isSystem) {
return (
<div key={msg.id} className="flex justify-center my-2">
<div className="px-3 py-1 bg-dark-700/50 rounded-full text-xs text-dark-400 border border-dark-600">
{msg.content}
</div>
</div>
);
}
return (
<div
key={msg.id}
className={`flex gap-3 ${isAssistant ? '' : 'flex-row-reverse'}`}
>
{/* Avatar */}
<div
className={`flex-shrink-0 w-8 h-8 rounded-lg flex items-center justify-center ${
isAssistant
? 'bg-primary-500/20 text-primary-400'
: 'bg-dark-600 text-dark-300'
}`}
>
{isAssistant ? <Bot className="w-4 h-4" /> : <User className="w-4 h-4" />}
</div>
{/* Message content */}
<div
className={`flex-1 max-w-[85%] rounded-lg px-4 py-3 text-sm ${
isAssistant
? 'bg-dark-700 text-dark-100'
: 'bg-primary-500 text-white ml-auto'
}`}
>
{isAssistant ? (
<>
{msg.content && <MarkdownRenderer content={msg.content} />}
{msg.isStreaming && !msg.content && (
<span className="text-dark-400">Thinking...</span>
)}
{/* Tool calls */}
{msg.toolCalls && msg.toolCalls.length > 0 && (
<div className="mt-3 space-y-2">
{msg.toolCalls.map((tool, idx) => (
<ToolCallCard key={idx} toolCall={tool} />
))}
</div>
)}
</>
) : (
<span className="whitespace-pre-wrap">{msg.content}</span>
)}
</div>
</div>
);
})}
{/* Thinking indicator */}
{isThinking && messages.length > 0 && !messages[messages.length - 1]?.isStreaming && (
<div className="flex gap-3">
<div className="flex-shrink-0 w-8 h-8 rounded-lg flex items-center justify-center bg-primary-500/20 text-primary-400">
<Bot className="w-4 h-4" />
</div>
<div className="bg-dark-700 rounded-lg px-4 py-3 flex items-center gap-2">
<Loader2 className="w-4 h-4 text-primary-400 animate-spin" />
<span className="text-sm text-dark-300">Thinking...</span>
</div>
</div>
)}
{/* Error display */}
{error && (
<div className="flex gap-3">
<div className="flex-shrink-0 w-8 h-8 rounded-lg flex items-center justify-center bg-red-500/20 text-red-400">
<AlertCircle className="w-4 h-4" />
</div>
<div className="flex-1 px-4 py-3 bg-red-500/10 rounded-lg text-sm text-red-400 border border-red-500/30">
{error}
</div>
</div>
)}
<div ref={messagesEndRef} />
</div>
{/* Input */}
<div className="p-3 border-t border-dark-700 flex-shrink-0">
<div className="flex gap-2">
<textarea
ref={inputRef}
value={input}
onChange={(e) => setInput(e.target.value)}
onKeyDown={handleKeyDown}
placeholder={isConnected ? "Ask about your optimization..." : "Connecting..."}
disabled={!isConnected}
rows={1}
className="flex-1 bg-dark-700 border border-dark-600 rounded-lg px-3 py-2 text-sm text-white placeholder-dark-400 resize-none focus:outline-none focus:border-primary-400 disabled:opacity-50"
/>
<button
onClick={handleSend}
disabled={!input.trim() || isThinking || !isConnected}
className="p-2 bg-primary-500 text-white rounded-lg hover:bg-primary-400 disabled:opacity-50 disabled:cursor-not-allowed transition-colors"
>
{isThinking ? (
<Loader2 className="w-5 h-5 animate-spin" />
) : (
<Send className="w-5 h-5" />
)}
</button>
</div>
{!isConnected && (
<p className="text-xs text-dark-500 mt-1">
Waiting for connection to Claude...
</p>
)}
</div>
</div>
);
};
export default StudioChat;
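The context-injection step in `handleSend` above prepends model-file and configuration summaries to the user's first message. A standalone sketch of the assembled payload (file names, counts, and the request text are illustrative, not taken from a real session; the real code also wraps context-document text in a fenced block, omitted here):

```typescript
// Illustrative sketch of the first-message format StudioChat builds in
// handleSend. Everything below is hypothetical sample data.
const contextParts: string[] = [
  '**Model Files Uploaded:**\n- bracket.prt\n- bracket.sim',
  '**Current Configuration:** 3 design variables, 2 objectives, 1 extractors',
];
const userRequest = 'Minimize mass while keeping max stress under 250 MPa';
const firstMessage = `${contextParts.join('\n\n')}\n\n---\n\n**User Request:** ${userRequest}`;
```

On subsequent messages `hasInjectedContext` is true and the user's text is sent as-is.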


@@ -0,0 +1,117 @@
/**
* StudioContextFiles - Context document upload and display
*/
import React, { useState, useRef } from 'react';
import { FileText, Upload, Trash2, Loader2 } from 'lucide-react';
import { intakeApi } from '../../api/intake';
interface StudioContextFilesProps {
draftId: string;
files: string[];
onUploadComplete: () => void;
}
export const StudioContextFiles: React.FC<StudioContextFilesProps> = ({
draftId,
files,
onUploadComplete,
}) => {
const [isUploading, setIsUploading] = useState(false);
const [deleting, setDeleting] = useState<string | null>(null);
const fileInputRef = useRef<HTMLInputElement>(null);
const VALID_EXTENSIONS = ['.md', '.txt', '.pdf', '.json', '.csv', '.docx'];
const handleFileSelect = async (e: React.ChangeEvent<HTMLInputElement>) => {
const selectedFiles = Array.from(e.target.files || []);
if (selectedFiles.length === 0) return;
e.target.value = '';
setIsUploading(true);
try {
await intakeApi.uploadContextFiles(draftId, selectedFiles);
onUploadComplete();
} catch (err) {
console.error('Failed to upload context files:', err);
} finally {
setIsUploading(false);
}
};
const deleteFile = async (filename: string) => {
setDeleting(filename);
try {
await intakeApi.deleteContextFile(draftId, filename);
onUploadComplete();
} catch (err) {
console.error('Failed to delete context file:', err);
} finally {
setDeleting(null);
}
};
const getFileIcon = (_filename: string) => {
return <FileText className="w-3.5 h-3.5 text-amber-400" />;
};
return (
<div className="space-y-2">
{/* File List */}
{files.length > 0 && (
<div className="space-y-1">
{files.map((name) => (
<div
key={name}
className="flex items-center gap-2 px-2 py-1.5 rounded bg-dark-700/50 text-sm group"
>
{getFileIcon(name)}
<span className="text-dark-200 truncate flex-1">{name}</span>
<button
onClick={() => deleteFile(name)}
disabled={deleting === name}
className="p-1 opacity-0 group-hover:opacity-100 hover:bg-red-500/20 rounded text-red-400 transition-all"
>
{deleting === name ? (
<Loader2 className="w-3 h-3 animate-spin" />
) : (
<Trash2 className="w-3 h-3" />
)}
</button>
</div>
))}
</div>
)}
{/* Upload Button */}
<button
onClick={() => fileInputRef.current?.click()}
disabled={isUploading}
className="w-full flex items-center justify-center gap-2 px-3 py-2 rounded-lg
border border-dashed border-dark-600 text-dark-400 text-sm
hover:border-primary-400/50 hover:text-primary-400 hover:bg-primary-400/5
disabled:opacity-50 transition-colors"
>
{isUploading ? (
<Loader2 className="w-4 h-4 animate-spin" />
) : (
<Upload className="w-4 h-4" />
)}
{isUploading ? 'Uploading...' : 'Add context files'}
</button>
<input
ref={fileInputRef}
type="file"
multiple
accept={VALID_EXTENSIONS.join(',')}
onChange={handleFileSelect}
className="hidden"
/>
</div>
);
};
export default StudioContextFiles;


@@ -0,0 +1,242 @@
/**
* StudioDropZone - Smart file drop zone for Studio
*
* Handles both model files (.sim, .prt, .fem) and context files (.pdf, .md, .txt)
*/
import React, { useState, useCallback, useRef } from 'react';
import { Upload, X, Loader2, AlertCircle, CheckCircle, File } from 'lucide-react';
import { intakeApi } from '../../api/intake';
interface StudioDropZoneProps {
draftId: string;
type: 'model' | 'context';
files: string[];
onUploadComplete: () => void;
}
interface FileStatus {
file: File;
status: 'pending' | 'uploading' | 'success' | 'error';
message?: string;
}
const MODEL_EXTENSIONS = ['.prt', '.sim', '.fem', '.afem'];
const CONTEXT_EXTENSIONS = ['.md', '.txt', '.pdf', '.json', '.csv', '.docx'];
export const StudioDropZone: React.FC<StudioDropZoneProps> = ({
draftId,
type,
files,
onUploadComplete,
}) => {
const [isDragging, setIsDragging] = useState(false);
const [pendingFiles, setPendingFiles] = useState<FileStatus[]>([]);
const [isUploading, setIsUploading] = useState(false);
const fileInputRef = useRef<HTMLInputElement>(null);
const validExtensions = type === 'model' ? MODEL_EXTENSIONS : CONTEXT_EXTENSIONS;
const validateFile = (file: File): { valid: boolean; reason?: string } => {
const ext = '.' + file.name.split('.').pop()?.toLowerCase();
if (!validExtensions.includes(ext)) {
return { valid: false, reason: `Invalid type: ${ext}` };
}
if (file.size > 500 * 1024 * 1024) {
return { valid: false, reason: 'File too large (max 500MB)' };
}
return { valid: true };
};
const handleDragEnter = useCallback((e: React.DragEvent) => {
e.preventDefault();
e.stopPropagation();
setIsDragging(true);
}, []);
const handleDragLeave = useCallback((e: React.DragEvent) => {
e.preventDefault();
e.stopPropagation();
setIsDragging(false);
}, []);
const handleDragOver = useCallback((e: React.DragEvent) => {
e.preventDefault();
e.stopPropagation();
}, []);
const addFiles = useCallback((newFiles: File[]) => {
const validFiles: FileStatus[] = [];
for (const file of newFiles) {
if (pendingFiles.some(f => f.file.name === file.name)) {
continue;
}
const validation = validateFile(file);
validFiles.push({
file,
status: validation.valid ? 'pending' : 'error',
message: validation.reason,
});
}
setPendingFiles(prev => [...prev, ...validFiles]);
}, [pendingFiles, validExtensions]);
const handleDrop = useCallback((e: React.DragEvent) => {
e.preventDefault();
e.stopPropagation();
setIsDragging(false);
addFiles(Array.from(e.dataTransfer.files));
}, [addFiles]);
const handleFileSelect = useCallback((e: React.ChangeEvent<HTMLInputElement>) => {
addFiles(Array.from(e.target.files || []));
e.target.value = '';
}, [addFiles]);
const removeFile = (index: number) => {
setPendingFiles(prev => prev.filter((_, i) => i !== index));
};
const uploadFiles = async () => {
const toUpload = pendingFiles.filter(f => f.status === 'pending');
if (toUpload.length === 0) return;
setIsUploading(true);
try {
const uploadFn = type === 'model'
? intakeApi.uploadFiles
: intakeApi.uploadContextFiles;
const response = await uploadFn(draftId, toUpload.map(f => f.file));
const results = new Map(
response.uploaded_files.map(f => [f.name, f.status === 'uploaded'])
);
setPendingFiles(prev => prev.map(f => {
if (f.status !== 'pending') return f;
const success = results.get(f.file.name);
return {
...f,
status: success ? 'success' : 'error',
message: success ? undefined : 'Upload failed',
};
}));
setTimeout(() => {
setPendingFiles(prev => prev.filter(f => f.status !== 'success'));
onUploadComplete();
}, 1000);
} catch (err) {
setPendingFiles(prev => prev.map(f =>
f.status === 'pending'
? { ...f, status: 'error', message: 'Upload failed' }
: f
));
} finally {
setIsUploading(false);
}
};
// Auto-upload when files are added
React.useEffect(() => {
const pending = pendingFiles.filter(f => f.status === 'pending');
if (pending.length > 0 && !isUploading) {
uploadFiles();
}
}, [pendingFiles, isUploading]);
return (
<div className="space-y-2">
{/* Drop Zone */}
<div
onDragEnter={handleDragEnter}
onDragLeave={handleDragLeave}
onDragOver={handleDragOver}
onDrop={handleDrop}
onClick={() => fileInputRef.current?.click()}
className={`
relative border-2 border-dashed rounded-lg p-4 cursor-pointer
transition-all duration-200 text-center
${isDragging
? 'border-primary-400 bg-primary-400/5'
: 'border-dark-600 hover:border-primary-400/50 hover:bg-white/5'
}
`}
>
<div className={`w-8 h-8 rounded-full flex items-center justify-center mx-auto mb-2
${isDragging ? 'bg-primary-400/20 text-primary-400' : 'bg-dark-700 text-dark-400'}`}>
<Upload className="w-4 h-4" />
</div>
<p className="text-sm text-dark-300">
{isDragging ? 'Drop files here' : 'Drop or click to add'}
</p>
<p className="text-xs text-dark-500 mt-1">
{validExtensions.join(', ')}
</p>
</div>
{/* Existing Files */}
{files.length > 0 && (
<div className="space-y-1">
{files.map((name, i) => (
<div
key={i}
className="flex items-center gap-2 px-2 py-1.5 rounded bg-dark-700/50 text-sm"
>
<File className="w-3.5 h-3.5 text-dark-400" />
<span className="text-dark-200 truncate flex-1">{name}</span>
<CheckCircle className="w-3.5 h-3.5 text-green-400" />
</div>
))}
</div>
)}
{/* Pending Files */}
{pendingFiles.length > 0 && (
<div className="space-y-1">
{pendingFiles.map((f, i) => (
<div
key={i}
className={`flex items-center gap-2 px-2 py-1.5 rounded text-sm
${f.status === 'error' ? 'bg-red-500/10' :
f.status === 'success' ? 'bg-green-500/10' : 'bg-dark-700'}`}
>
{f.status === 'pending' && <Loader2 className="w-3.5 h-3.5 text-primary-400 animate-spin" />}
{f.status === 'uploading' && <Loader2 className="w-3.5 h-3.5 text-primary-400 animate-spin" />}
{f.status === 'success' && <CheckCircle className="w-3.5 h-3.5 text-green-400" />}
{f.status === 'error' && <AlertCircle className="w-3.5 h-3.5 text-red-400" />}
<span className={`truncate flex-1 ${f.status === 'error' ? 'text-red-400' : 'text-dark-200'}`}>
{f.file.name}
</span>
{f.message && (
<span className="text-xs text-red-400">({f.message})</span>
)}
{f.status === 'pending' && (
<button onClick={(e) => { e.stopPropagation(); removeFile(i); }} className="p-0.5 hover:bg-white/10 rounded">
<X className="w-3 h-3 text-dark-400" />
</button>
)}
</div>
))}
</div>
)}
<input
ref={fileInputRef}
type="file"
multiple
accept={validExtensions.join(',')}
onChange={handleFileSelect}
className="hidden"
/>
</div>
);
};
export default StudioDropZone;
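The `validateFile` helper above gates uploads by extension and a 500 MB size cap. A framework-free sketch of the same checks, runnable outside React (extension list copied from `MODEL_EXTENSIONS`; the function name here is illustrative):

```typescript
// Standalone version of StudioDropZone's validateFile checks.
const MODEL_EXTENSIONS = ['.prt', '.sim', '.fem', '.afem'];
const MAX_BYTES = 500 * 1024 * 1024; // 500 MB cap, as in the component

function validateUpload(name: string, size: number): { valid: boolean; reason?: string } {
  const ext = '.' + name.split('.').pop()?.toLowerCase();
  if (!MODEL_EXTENSIONS.includes(ext)) {
    return { valid: false, reason: `Invalid type: ${ext}` };
  }
  if (size > MAX_BYTES) {
    return { valid: false, reason: 'File too large (max 500MB)' };
  }
  return { valid: true };
}
```

Note that a filename with no dot still produces a leading-dot extension (e.g. `README` → `.readme`), which simply fails the extension check.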


@@ -0,0 +1,172 @@
/**
* StudioParameterList - Display and add discovered parameters as design variables
*/
import React, { useState, useEffect } from 'react';
import { Plus, Check, SlidersHorizontal, Loader2 } from 'lucide-react';
import { intakeApi } from '../../api/intake';
interface Expression {
name: string;
value: number | null;
units: string | null;
is_candidate: boolean;
confidence: number;
}
interface StudioParameterListProps {
draftId: string;
onParameterAdded: () => void;
}
export const StudioParameterList: React.FC<StudioParameterListProps> = ({
draftId,
onParameterAdded,
}) => {
const [expressions, setExpressions] = useState<Expression[]>([]);
const [addedParams, setAddedParams] = useState<Set<string>>(new Set());
const [adding, setAdding] = useState<string | null>(null);
const [loading, setLoading] = useState(true);
// Load expressions from spec introspection
useEffect(() => {
loadExpressions();
}, [draftId]);
const loadExpressions = async () => {
setLoading(true);
try {
const data = await intakeApi.getStudioDraft(draftId);
const introspection = (data.spec as any)?.model?.introspection;
if (introspection?.expressions) {
setExpressions(introspection.expressions);
// Check which are already added as DVs
const existingDVs = new Set<string>(
((data.spec as any)?.design_variables || []).map((dv: any) => dv.expression_name as string)
);
setAddedParams(existingDVs);
}
} catch (err) {
console.error('Failed to load expressions:', err);
} finally {
setLoading(false);
}
};
const addAsDesignVariable = async (expressionName: string) => {
setAdding(expressionName);
try {
await intakeApi.createDesignVariables(draftId, [expressionName]);
setAddedParams(prev => new Set([...prev, expressionName]));
onParameterAdded();
} catch (err) {
console.error('Failed to add design variable:', err);
} finally {
setAdding(null);
}
};
// Sort: candidates first, then by confidence
const sortedExpressions = [...expressions].sort((a, b) => {
if (a.is_candidate !== b.is_candidate) {
return b.is_candidate ? 1 : -1;
}
return (b.confidence || 0) - (a.confidence || 0);
});
// Show only candidates by default, with option to show all
const [showAll, setShowAll] = useState(false);
const displayExpressions = showAll
? sortedExpressions
: sortedExpressions.filter(e => e.is_candidate);
if (loading) {
return (
<div className="flex items-center justify-center py-4">
<Loader2 className="w-5 h-5 text-primary-400 animate-spin" />
</div>
);
}
if (expressions.length === 0) {
return (
<p className="text-xs text-dark-500 italic py-2">
No expressions found. Try running introspection.
</p>
);
}
const candidateCount = expressions.filter(e => e.is_candidate).length;
return (
<div className="space-y-2">
{/* Header with toggle */}
<div className="flex items-center justify-between text-xs text-dark-400">
<span>{candidateCount} candidates</span>
<button
onClick={() => setShowAll(!showAll)}
className="hover:text-primary-400 transition-colors"
>
{showAll ? 'Show candidates only' : `Show all (${expressions.length})`}
</button>
</div>
{/* Parameter List */}
<div className="space-y-1 max-h-48 overflow-y-auto">
{displayExpressions.map((expr) => {
const isAdded = addedParams.has(expr.name);
const isAdding = adding === expr.name;
return (
<div
key={expr.name}
className={`flex items-center gap-2 px-2 py-1.5 rounded text-sm
${isAdded ? 'bg-green-500/10' : 'bg-dark-700/50 hover:bg-dark-700'}
transition-colors`}
>
<SlidersHorizontal className="w-3.5 h-3.5 text-dark-400 flex-shrink-0" />
<div className="flex-1 min-w-0">
<span className={`block truncate ${isAdded ? 'text-green-400' : 'text-dark-200'}`}>
{expr.name}
</span>
{expr.value !== null && (
<span className="text-xs text-dark-500">
= {expr.value}{expr.units ? ` ${expr.units}` : ''}
</span>
)}
</div>
{isAdded ? (
<Check className="w-4 h-4 text-green-400 flex-shrink-0" />
) : (
<button
onClick={() => addAsDesignVariable(expr.name)}
disabled={isAdding}
className="p-1 hover:bg-primary-400/20 rounded text-primary-400 transition-colors disabled:opacity-50"
title="Add as design variable"
>
{isAdding ? (
<Loader2 className="w-3.5 h-3.5 animate-spin" />
) : (
<Plus className="w-3.5 h-3.5" />
)}
</button>
)}
</div>
);
})}
</div>
{displayExpressions.length === 0 && (
<p className="text-xs text-dark-500 italic py-2">
No candidate parameters found. Click "Show all" to see all expressions.
</p>
)}
</div>
);
};
export default StudioParameterList;
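The candidate-first ordering above (candidates before non-candidates, then descending confidence) can be sketched in isolation:

```typescript
// Mirror of StudioParameterList's sort: candidates first, then by confidence.
interface Expr {
  name: string;
  is_candidate: boolean;
  confidence: number;
}

function sortExpressions(xs: Expr[]): Expr[] {
  return [...xs].sort((a, b) => {
    if (a.is_candidate !== b.is_candidate) return b.is_candidate ? 1 : -1;
    return (b.confidence || 0) - (a.confidence || 0);
  });
}
```

Sorting a copy (`[...xs]`) keeps the state array untouched, which matters because the component derives `displayExpressions` from it on every render.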


@@ -0,0 +1,11 @@
/**
* Studio Components Index
*
* Export all Studio-related components.
*/
export { StudioDropZone } from './StudioDropZone';
export { StudioParameterList } from './StudioParameterList';
export { StudioContextFiles } from './StudioContextFiles';
export { StudioChat } from './StudioChat';
export { StudioBuildDialog } from './StudioBuildDialog';


@@ -18,12 +18,15 @@ import {
FolderOpen,
Maximize2,
X,
Layers
Layers,
Sparkles,
Settings2
} from 'lucide-react';
import { useStudy } from '../context/StudyContext';
import { Study } from '../types';
import { apiClient } from '../api/client';
import { MarkdownRenderer } from '../components/MarkdownRenderer';
import { InboxSection } from '../components/intake';
const Home: React.FC = () => {
const { studies, setSelectedStudy, refreshStudies, isLoading } = useStudy();
@@ -174,6 +177,18 @@ const Home: React.FC = () => {
/>
</div>
<div className="flex items-center gap-3">
<button
onClick={() => navigate('/studio')}
className="flex items-center gap-2 px-4 py-2 rounded-lg transition-all font-medium hover:-translate-y-0.5"
style={{
background: 'linear-gradient(135deg, #f59e0b 0%, #d97706 100%)',
color: '#000',
boxShadow: '0 4px 15px rgba(245, 158, 11, 0.3)'
}}
>
<Sparkles className="w-4 h-4" />
New Study
</button>
<button
onClick={() => navigate('/canvas')}
className="flex items-center gap-2 px-4 py-2 rounded-lg transition-all font-medium hover:-translate-y-0.5"
@@ -250,6 +265,11 @@ const Home: React.FC = () => {
</div>
</div>
{/* Inbox Section - Study Creation Workflow */}
<div className="mb-8">
<InboxSection onStudyFinalized={refreshStudies} />
</div>
{/* Two-column layout: Table + Preview */}
<div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
{/* Study Table */}
@@ -407,6 +427,19 @@ const Home: React.FC = () => {
<Layers className="w-4 h-4" />
Canvas
</button>
<button
onClick={() => navigate(`/studio/${selectedPreview.id}`)}
className="flex items-center gap-2 px-4 py-2.5 rounded-lg transition-all font-medium whitespace-nowrap hover:-translate-y-0.5"
style={{
background: 'rgba(8, 15, 26, 0.85)',
border: '1px solid rgba(245, 158, 11, 0.3)',
color: '#f59e0b'
}}
title="Edit study configuration with AI assistant"
>
<Settings2 className="w-4 h-4" />
Studio
</button>
<button
onClick={() => handleSelectStudy(selectedPreview)}
className="flex items-center gap-2 px-5 py-2.5 rounded-lg transition-all font-semibold whitespace-nowrap hover:-translate-y-0.5"


@@ -0,0 +1,672 @@
/**
* Atomizer Studio - Unified Study Creation Environment
*
* A drag-and-drop workspace for creating optimization studies with:
* - File upload (models + context documents)
* - Visual canvas configuration
* - AI-powered assistance
* - One-click build to final study
*/
import { useState, useEffect, useCallback, useRef } from 'react';
import { useNavigate, useParams } from 'react-router-dom';
import {
Home,
ChevronRight,
Upload,
FileText,
Settings,
Sparkles,
Save,
RefreshCw,
Trash2,
MessageSquare,
Layers,
CheckCircle,
AlertCircle,
Loader2,
X,
ChevronLeft,
ChevronRight as ChevronRightIcon,
GripVertical,
} from 'lucide-react';
import { intakeApi } from '../api/intake';
import { SpecRenderer } from '../components/canvas/SpecRenderer';
import { NodePalette } from '../components/canvas/palette/NodePalette';
import { NodeConfigPanelV2 } from '../components/canvas/panels/NodeConfigPanelV2';
import { useSpecStore, useSpec, useSpecLoading } from '../hooks/useSpecStore';
import { StudioDropZone } from '../components/studio/StudioDropZone';
import { StudioParameterList } from '../components/studio/StudioParameterList';
import { StudioContextFiles } from '../components/studio/StudioContextFiles';
import { StudioChat } from '../components/studio/StudioChat';
import { StudioBuildDialog } from '../components/studio/StudioBuildDialog';
interface DraftState {
draftId: string | null;
status: 'idle' | 'creating' | 'ready' | 'error';
error: string | null;
modelFiles: string[];
contextFiles: string[];
contextContent: string;
introspectionAvailable: boolean;
designVariableCount: number;
objectiveCount: number;
}
export default function Studio() {
const navigate = useNavigate();
const { draftId: urlDraftId } = useParams<{ draftId: string }>();
// Draft state
const [draft, setDraft] = useState<DraftState>({
draftId: null,
status: 'idle',
error: null,
modelFiles: [],
contextFiles: [],
contextContent: '',
introspectionAvailable: false,
designVariableCount: 0,
objectiveCount: 0,
});
// UI state
const [leftPanelWidth, setLeftPanelWidth] = useState(320);
const [rightPanelCollapsed, setRightPanelCollapsed] = useState(false);
const [showBuildDialog, setShowBuildDialog] = useState(false);
const [isIntrospecting, setIsIntrospecting] = useState(false);
const [notification, setNotification] = useState<{ type: 'success' | 'error' | 'info'; message: string } | null>(null);
// Resize state
const isResizing = useRef(false);
const minPanelWidth = 280;
const maxPanelWidth = 500;
// Spec store for canvas
const spec = useSpec();
const specLoading = useSpecLoading();
const { loadSpec, clearSpec } = useSpecStore();
// Handle panel resize
const handleMouseDown = useCallback((e: React.MouseEvent) => {
e.preventDefault();
isResizing.current = true;
document.body.style.cursor = 'col-resize';
document.body.style.userSelect = 'none';
}, []);
useEffect(() => {
const handleMouseMove = (e: MouseEvent) => {
if (!isResizing.current) return;
const newWidth = Math.min(maxPanelWidth, Math.max(minPanelWidth, e.clientX));
setLeftPanelWidth(newWidth);
};
const handleMouseUp = () => {
isResizing.current = false;
document.body.style.cursor = '';
document.body.style.userSelect = '';
};
document.addEventListener('mousemove', handleMouseMove);
document.addEventListener('mouseup', handleMouseUp);
return () => {
document.removeEventListener('mousemove', handleMouseMove);
document.removeEventListener('mouseup', handleMouseUp);
};
}, []);
// Initialize or load draft on mount
useEffect(() => {
if (urlDraftId) {
loadDraft(urlDraftId);
} else {
createNewDraft();
}
return () => {
// Cleanup: clear spec when leaving Studio
clearSpec();
};
}, [urlDraftId]);
// Create a new draft
const createNewDraft = async () => {
setDraft(prev => ({ ...prev, status: 'creating', error: null }));
try {
const response = await intakeApi.createDraft();
setDraft({
draftId: response.draft_id,
status: 'ready',
error: null,
modelFiles: [],
contextFiles: [],
contextContent: '',
introspectionAvailable: false,
designVariableCount: 0,
objectiveCount: 0,
});
// Update URL without navigation
window.history.replaceState(null, '', `/studio/${response.draft_id}`);
// Load the empty spec for this draft
await loadSpec(response.draft_id);
showNotification('info', 'New studio session started. Drop your files to begin.');
} catch (err) {
setDraft(prev => ({
...prev,
status: 'error',
error: err instanceof Error ? err.message : 'Failed to create draft',
}));
}
};
// Load existing draft or study
const loadDraft = async (studyId: string) => {
setDraft(prev => ({ ...prev, status: 'creating', error: null }));
// Check if this is a draft (in _inbox) or an existing study
const isDraft = studyId.startsWith('draft_');
if (isDraft) {
// Load from intake API
try {
const response = await intakeApi.getStudioDraft(studyId);
// Also load context content if there are context files
let contextContent = '';
if (response.context_files.length > 0) {
try {
const contextResponse = await intakeApi.getContextContent(studyId);
contextContent = contextResponse.content;
} catch {
// Ignore context loading errors
}
}
setDraft({
draftId: response.draft_id,
status: 'ready',
error: null,
modelFiles: response.model_files,
contextFiles: response.context_files,
contextContent,
introspectionAvailable: response.introspection_available,
designVariableCount: response.design_variable_count,
objectiveCount: response.objective_count,
});
// Load the spec
await loadSpec(studyId);
showNotification('info', `Resuming draft: ${studyId}`);
} catch (err) {
// Draft doesn't exist, create new one
createNewDraft();
}
} else {
// Load existing study directly via spec store
try {
await loadSpec(studyId);
// Get counts from loaded spec
const loadedSpec = useSpecStore.getState().spec;
setDraft({
draftId: studyId,
status: 'ready',
error: null,
modelFiles: [], // Existing studies don't track files separately
contextFiles: [],
contextContent: '',
introspectionAvailable: true, // Assume introspection was done
designVariableCount: loadedSpec?.design_variables?.length || 0,
objectiveCount: loadedSpec?.objectives?.length || 0,
});
showNotification('info', `Editing study: ${studyId}`);
} catch (err) {
setDraft(prev => ({
...prev,
status: 'error',
error: err instanceof Error ? err.message : 'Failed to load study',
}));
}
}
};
// Refresh draft data
const refreshDraft = async () => {
if (!draft.draftId) return;
const isDraft = draft.draftId.startsWith('draft_');
if (isDraft) {
try {
const response = await intakeApi.getStudioDraft(draft.draftId);
// Also refresh context content
let contextContent = draft.contextContent;
if (response.context_files.length > 0) {
try {
const contextResponse = await intakeApi.getContextContent(draft.draftId);
contextContent = contextResponse.content;
} catch {
// Keep existing content
}
}
setDraft(prev => ({
...prev,
modelFiles: response.model_files,
contextFiles: response.context_files,
contextContent,
introspectionAvailable: response.introspection_available,
designVariableCount: response.design_variable_count,
objectiveCount: response.objective_count,
}));
// Reload spec
await loadSpec(draft.draftId);
} catch (err) {
showNotification('error', 'Failed to refresh draft');
}
} else {
// For existing studies, just reload the spec
try {
await loadSpec(draft.draftId);
const loadedSpec = useSpecStore.getState().spec;
setDraft(prev => ({
...prev,
designVariableCount: loadedSpec?.design_variables?.length || 0,
objectiveCount: loadedSpec?.objectives?.length || 0,
}));
} catch (err) {
showNotification('error', 'Failed to refresh study');
}
}
};
// Run introspection
const runIntrospection = async () => {
if (!draft.draftId || draft.modelFiles.length === 0) {
showNotification('error', 'Please upload model files first');
return;
}
setIsIntrospecting(true);
try {
const response = await intakeApi.introspect({ study_name: draft.draftId });
showNotification('success', `Found ${response.expressions_count} expressions (${response.candidates_count} candidates)`);
// Refresh draft state
await refreshDraft();
} catch (err) {
showNotification('error', err instanceof Error ? err.message : 'Introspection failed');
} finally {
setIsIntrospecting(false);
}
};
// Handle file upload complete
const handleUploadComplete = useCallback(() => {
refreshDraft();
showNotification('success', 'Files uploaded successfully');
}, [draft.draftId]);
// Handle build complete
const handleBuildComplete = (finalPath: string, finalName: string) => {
setShowBuildDialog(false);
showNotification('success', `Study "${finalName}" created successfully!`);
// Navigate to the new study
setTimeout(() => {
navigate(`/canvas/${finalPath.replace('studies/', '')}`);
}, 1500);
};
// Reset draft
const resetDraft = async () => {
if (!draft.draftId) return;
if (!confirm('Are you sure you want to reset? This will delete all uploaded files and configurations.')) {
return;
}
try {
await intakeApi.deleteInboxStudy(draft.draftId);
await createNewDraft();
} catch (err) {
showNotification('error', 'Failed to reset draft');
}
};
// Show notification
const showNotification = (type: 'success' | 'error' | 'info', message: string) => {
setNotification({ type, message });
setTimeout(() => setNotification(null), 4000);
};
// Can always save/build - even empty studies can be saved for later
const canBuild = draft.draftId !== null;
// Loading state
if (draft.status === 'creating') {
return (
<div className="min-h-screen bg-dark-900 flex items-center justify-center">
<div className="text-center">
<Loader2 className="w-12 h-12 text-primary-400 animate-spin mx-auto mb-4" />
<p className="text-dark-300">Initializing Studio...</p>
</div>
</div>
);
}
// Error state
if (draft.status === 'error') {
return (
<div className="min-h-screen bg-dark-900 flex items-center justify-center">
<div className="text-center max-w-md">
<AlertCircle className="w-12 h-12 text-red-400 mx-auto mb-4" />
<h2 className="text-xl font-semibold text-white mb-2">Failed to Initialize</h2>
<p className="text-dark-400 mb-4">{draft.error}</p>
<button
onClick={createNewDraft}
className="px-4 py-2 bg-primary-500 text-white rounded-lg hover:bg-primary-400 transition-colors"
>
Try Again
</button>
</div>
</div>
);
}
return (
<div className="min-h-screen bg-dark-900 flex flex-col">
{/* Header */}
<header className="h-14 bg-dark-850 border-b border-dark-700 flex items-center justify-between px-4 flex-shrink-0">
{/* Left: Navigation */}
<div className="flex items-center gap-3">
<button
onClick={() => navigate('/')}
className="p-2 hover:bg-dark-700 rounded-lg text-dark-400 hover:text-white transition-colors"
>
<Home className="w-5 h-5" />
</button>
<ChevronRight className="w-4 h-4 text-dark-600" />
<div className="flex items-center gap-2">
<Sparkles className="w-5 h-5 text-primary-400" />
<span className="text-white font-medium">Atomizer Studio</span>
</div>
{draft.draftId && (
<>
<ChevronRight className="w-4 h-4 text-dark-600" />
<span className="text-dark-400 text-sm font-mono">{draft.draftId}</span>
</>
)}
</div>
{/* Right: Actions */}
<div className="flex items-center gap-2">
<button
onClick={resetDraft}
className="flex items-center gap-2 px-3 py-1.5 text-sm text-dark-400 hover:text-white hover:bg-dark-700 rounded-lg transition-colors"
>
<Trash2 className="w-4 h-4" />
Reset
</button>
<button
onClick={() => setShowBuildDialog(true)}
disabled={!canBuild}
className="flex items-center gap-2 px-4 py-1.5 text-sm font-medium bg-primary-500 text-white rounded-lg hover:bg-primary-400 disabled:opacity-50 disabled:cursor-not-allowed transition-colors"
>
<Save className="w-4 h-4" />
Save & Name Study
</button>
</div>
</header>
{/* Main Content */}
<div className="flex-1 flex overflow-hidden">
{/* Left Panel: Resources (Resizable) */}
<div
className="bg-dark-850 border-r border-dark-700 flex flex-col flex-shrink-0 relative"
style={{ width: leftPanelWidth }}
>
<div className="flex-1 overflow-y-auto p-4 space-y-6">
{/* Drop Zone */}
<section>
<h3 className="text-sm font-medium text-dark-300 mb-3 flex items-center gap-2">
<Upload className="w-4 h-4" />
Model Files
</h3>
{draft.draftId && (
<StudioDropZone
draftId={draft.draftId}
type="model"
files={draft.modelFiles}
onUploadComplete={handleUploadComplete}
/>
)}
</section>
{/* Introspection */}
{draft.modelFiles.length > 0 && (
<section>
<div className="flex items-center justify-between mb-3">
<h3 className="text-sm font-medium text-dark-300 flex items-center gap-2">
<Settings className="w-4 h-4" />
Parameters
</h3>
<button
onClick={runIntrospection}
disabled={isIntrospecting}
className="flex items-center gap-1 px-2 py-1 text-xs text-primary-400 hover:bg-primary-400/10 rounded transition-colors disabled:opacity-50"
>
{isIntrospecting ? (
<Loader2 className="w-3 h-3 animate-spin" />
) : (
<RefreshCw className="w-3 h-3" />
)}
{isIntrospecting ? 'Scanning...' : 'Scan'}
</button>
</div>
{draft.draftId && draft.introspectionAvailable && (
<StudioParameterList
draftId={draft.draftId}
onParameterAdded={refreshDraft}
/>
)}
{!draft.introspectionAvailable && (
<p className="text-xs text-dark-500 italic">
Click "Scan" to discover parameters from your model.
</p>
)}
</section>
)}
{/* Context Files */}
<section>
<h3 className="text-sm font-medium text-dark-300 mb-3 flex items-center gap-2">
<FileText className="w-4 h-4" />
Context Documents
</h3>
{draft.draftId && (
<StudioContextFiles
draftId={draft.draftId}
files={draft.contextFiles}
onUploadComplete={handleUploadComplete}
/>
)}
<p className="text-xs text-dark-500 mt-2">
Upload requirements, goals, or specs. The AI will read these.
</p>
{/* Show context preview if loaded */}
{draft.contextContent && (
<div className="mt-3 p-2 bg-dark-700/50 rounded-lg border border-dark-600">
<p className="text-xs text-amber-400 mb-1 font-medium">Context Loaded:</p>
<p className="text-xs text-dark-400 line-clamp-3">
{draft.contextContent.substring(0, 200)}...
</p>
</div>
)}
</section>
{/* Node Palette - EXPANDED, not collapsed */}
<section>
<h3 className="text-sm font-medium text-dark-300 mb-3 flex items-center gap-2">
<Layers className="w-4 h-4" />
Components
</h3>
<NodePalette
collapsed={false}
showToggle={false}
className="!w-full !border-0 !bg-transparent"
/>
</section>
</div>
{/* Resize Handle */}
<div
className="absolute right-0 top-0 bottom-0 w-1 cursor-col-resize hover:bg-primary-500/50 transition-colors group"
onMouseDown={handleMouseDown}
>
<div className="absolute right-0 top-1/2 -translate-y-1/2 w-4 h-8 flex items-center justify-center opacity-0 group-hover:opacity-100 transition-opacity">
<GripVertical className="w-3 h-3 text-dark-400" />
</div>
</div>
</div>
{/* Center: Canvas */}
<div className="flex-1 relative bg-dark-900">
{draft.draftId && (
<SpecRenderer
studyId={draft.draftId}
editable={true}
showLoadingOverlay={false}
/>
)}
{/* Empty state */}
{!specLoading && (!spec || Object.keys(spec).length === 0) && (
<div className="absolute inset-0 flex items-center justify-center pointer-events-none">
<div className="text-center max-w-md p-8">
<div className="w-20 h-20 rounded-full bg-dark-800 flex items-center justify-center mx-auto mb-6">
<Sparkles className="w-10 h-10 text-primary-400" />
</div>
<h2 className="text-2xl font-semibold text-white mb-3">
Welcome to Atomizer Studio
</h2>
<p className="text-dark-400 mb-6">
Drop your model files on the left, or drag components from the palette to start building your optimization study.
</p>
<div className="flex flex-col gap-2 text-sm text-dark-500">
<div className="flex items-center gap-2">
<CheckCircle className="w-4 h-4 text-green-400" />
<span>Upload .sim, .prt, .fem files</span>
</div>
<div className="flex items-center gap-2">
<CheckCircle className="w-4 h-4 text-green-400" />
<span>Add context documents (PDF, MD, TXT)</span>
</div>
<div className="flex items-center gap-2">
<CheckCircle className="w-4 h-4 text-green-400" />
<span>Configure with AI assistance</span>
</div>
</div>
</div>
</div>
)}
</div>
{/* Right Panel: Assistant + Config - wider for better chat UX */}
<div
className={`bg-dark-850 border-l border-dark-700 flex flex-col transition-all duration-300 flex-shrink-0 ${
rightPanelCollapsed ? 'w-12' : 'w-[480px]'
}`}
>
{/* Collapse toggle */}
<button
onClick={() => setRightPanelCollapsed(!rightPanelCollapsed)}
className="absolute right-0 top-1/2 -translate-y-1/2 z-10 p-1 bg-dark-700 border border-dark-600 rounded-l-lg hover:bg-dark-600 transition-colors"
style={{ marginRight: rightPanelCollapsed ? '48px' : '480px' }}
>
{rightPanelCollapsed ? (
<ChevronLeft className="w-4 h-4 text-dark-400" />
) : (
<ChevronRightIcon className="w-4 h-4 text-dark-400" />
)}
</button>
{!rightPanelCollapsed && (
<div className="flex-1 flex flex-col overflow-hidden">
{/* Chat */}
<div className="flex-1 overflow-hidden">
{draft.draftId && (
<StudioChat
draftId={draft.draftId}
contextFiles={draft.contextFiles}
contextContent={draft.contextContent}
modelFiles={draft.modelFiles}
onSpecUpdated={refreshDraft}
/>
)}
</div>
{/* Config Panel (when node selected) */}
<NodeConfigPanelV2 />
</div>
)}
{rightPanelCollapsed && (
<div className="flex flex-col items-center py-4 gap-4">
<MessageSquare className="w-5 h-5 text-dark-400" />
</div>
)}
</div>
</div>
{/* Notification Toast */}
{notification && (
<div
className={`fixed bottom-4 right-4 flex items-center gap-3 px-4 py-3 rounded-lg shadow-lg z-50 animate-slide-up ${
notification.type === 'success'
? 'bg-green-500/10 border border-green-500/30 text-green-400'
: notification.type === 'error'
? 'bg-red-500/10 border border-red-500/30 text-red-400'
: 'bg-primary-500/10 border border-primary-500/30 text-primary-400'
}`}
>
{notification.type === 'success' && <CheckCircle className="w-5 h-5" />}
{notification.type === 'error' && <AlertCircle className="w-5 h-5" />}
{notification.type === 'info' && <Sparkles className="w-5 h-5" />}
<span>{notification.message}</span>
<button
onClick={() => setNotification(null)}
className="p-1 hover:bg-white/10 rounded"
>
<X className="w-4 h-4" />
</button>
</div>
)}
{/* Build Dialog */}
{showBuildDialog && draft.draftId && (
<StudioBuildDialog
draftId={draft.draftId}
onClose={() => setShowBuildDialog(false)}
onBuildComplete={handleBuildComplete}
/>
)}
</div>
);
}

View File

@@ -0,0 +1,201 @@
/**
* Intake Workflow TypeScript Types
*
* Types for the study intake/creation workflow.
*/
// ============================================================================
// Status Types
// ============================================================================
export type SpecStatus =
| 'draft'
| 'introspected'
| 'configured'
| 'validated'
| 'ready'
| 'running'
| 'completed'
| 'failed';
// ============================================================================
// Expression/Introspection Types
// ============================================================================
export interface ExpressionInfo {
/** Expression name in NX */
name: string;
/** Current value */
value: number | null;
/** Physical units */
units: string | null;
/** Expression formula if any */
formula: string | null;
/** Whether this is a design variable candidate */
is_candidate: boolean;
/** Confidence that this is a DV (0-1) */
confidence: number;
}
export interface BaselineData {
/** When baseline was run */
timestamp: string;
/** How long the solve took */
solve_time_seconds: number;
/** Computed mass from BDF/FEM */
mass_kg: number | null;
/** Max displacement result */
max_displacement_mm: number | null;
/** Max von Mises stress */
max_stress_mpa: number | null;
/** Whether baseline solve succeeded */
success: boolean;
/** Error message if failed */
error: string | null;
}
export interface IntrospectionData {
/** When introspection was run */
timestamp: string;
/** Detected solver type */
solver_type: string | null;
/** Mass from expressions or properties */
mass_kg: number | null;
/** Volume from mass properties */
volume_mm3: number | null;
/** Discovered expressions */
expressions: ExpressionInfo[];
/** Baseline solve results */
baseline: BaselineData | null;
/** Warnings from introspection */
warnings: string[];
}
// ============================================================================
// Request/Response Types
// ============================================================================
export interface CreateInboxRequest {
study_name: string;
description?: string;
topic?: string;
}
export interface CreateInboxResponse {
success: boolean;
study_name: string;
inbox_path: string;
spec_path: string;
status: SpecStatus;
}
export interface IntrospectRequest {
study_name: string;
model_file?: string;
}
export interface IntrospectResponse {
success: boolean;
study_name: string;
status: SpecStatus;
expressions_count: number;
candidates_count: number;
mass_kg: number | null;
warnings: string[];
}
export interface InboxStudy {
study_name: string;
status: SpecStatus;
description: string | null;
topic: string | null;
created: string | null;
modified: string | null;
model_files: string[];
has_context: boolean;
}
export interface ListInboxResponse {
studies: InboxStudy[];
total: number;
}
export interface TopicInfo {
name: string;
study_count: number;
path: string;
}
export interface ListTopicsResponse {
topics: TopicInfo[];
total: number;
}
export interface InboxStudyDetail {
study_name: string;
inbox_path: string;
spec: import('./atomizer-spec').AtomizerSpec;
files: {
sim: string[];
prt: string[];
fem: string[];
};
context_files: string[];
}
// ============================================================================
// Finalize Types
// ============================================================================
export interface FinalizeRequest {
topic: string;
run_baseline?: boolean;
}
export interface FinalizeProgress {
step: string;
progress: number;
message: string;
completed: boolean;
error?: string;
}
export interface FinalizeResponse {
success: boolean;
study_name: string;
final_path: string;
status: SpecStatus;
baseline?: BaselineData;
readme_generated: boolean;
}
// ============================================================================
// README Generation Types
// ============================================================================
export interface GenerateReadmeRequest {
study_name: string;
}
export interface GenerateReadmeResponse {
success: boolean;
content: string;
path: string;
}
// ============================================================================
// Upload Types
// ============================================================================
export interface UploadFilesResponse {
success: boolean;
study_name: string;
uploaded_files: Array<{
name: string;
status: 'uploaded' | 'rejected' | 'skipped';
path?: string;
size?: number;
reason?: string;
}>;
total_uploaded: number;
}
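These response types map directly onto the Studio notifications above: after introspection, the page shows a toast built from `expressions_count` and `candidates_count`. A small illustrative helper showing that mapping — the helper name and the local interface are sketch-only, not part of the committed API (the response shape is duplicated here so the example is self-contained):

```typescript
// Sketch only: mirrors the toast text built in runIntrospection.
interface IntrospectResponseLike {
  success: boolean;
  expressions_count: number;
  candidates_count: number;
  warnings: string[];
}

function introspectionToast(r: IntrospectResponseLike): string {
  if (!r.success) return "Introspection failed";
  // Append a warning count only when introspection produced warnings.
  const warn = r.warnings.length > 0 ? ` (${r.warnings.length} warnings)` : "";
  return `Found ${r.expressions_count} expressions (${r.candidates_count} candidates)${warn}`;
}
```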

View File

@@ -34,26 +34,42 @@ from typing import Optional
PROJECT_ROOT = Path(__file__).parent
sys.path.insert(0, str(PROJECT_ROOT))
from optimization_engine.processors.surrogates.auto_trainer import (
AutoTrainer,
check_training_status,
)
from optimization_engine.config.template_loader import (
create_study_from_template,
list_templates,
get_template,
)
from optimization_engine.validators.study_validator import validate_study, list_studies, quick_check
# New UX System imports (lazy loaded to avoid import errors)
def get_intake_processor():
from optimization_engine.intake import IntakeProcessor
return IntakeProcessor
def get_validation_gate():
from optimization_engine.validation import ValidationGate
return ValidationGate
def get_report_generator():
from optimization_engine.reporting.html_report import HTMLReportGenerator
return HTMLReportGenerator
def setup_logging(verbose: bool = False) -> None:
"""Configure logging."""
level = logging.DEBUG if verbose else logging.INFO
logging.basicConfig(
level=level, format="%(asctime)s [%(levelname)s] %(message)s", datefmt="%H:%M:%S"
)
@@ -95,7 +111,7 @@ def cmd_neural_optimize(args) -> int:
study_name=args.study,
min_points=args.min_points,
epochs=args.epochs,
retrain_threshold=args.retrain_every,
)
status = trainer.get_status()
@@ -103,8 +119,8 @@ def cmd_neural_optimize(args) -> int:
print(f" Model version: v{status['model_version']}")
# Determine workflow phase
has_trained_model = status["model_version"] > 0
current_points = status["total_points"]
if has_trained_model and current_points >= args.min_points:
print("\n[3/5] Neural model available - starting neural-accelerated optimization...")
@@ -138,11 +154,7 @@ def _run_exploration_phase(args, trainer: AutoTrainer) -> int:
# Run FEA optimization
import subprocess
cmd = [sys.executable, str(run_script), "--trials", str(fea_trials)]
if args.resume:
cmd.append("--resume")
@@ -155,7 +167,7 @@ def _run_exploration_phase(args, trainer: AutoTrainer) -> int:
elapsed = time.time() - start_time
print("-" * 60)
print(f"FEA optimization completed in {elapsed / 60:.1f} minutes")
# Check if we can now train
print("\n[5/5] Checking training data...")
@@ -169,7 +181,7 @@ def _run_exploration_phase(args, trainer: AutoTrainer) -> int:
print(" Training failed - check logs")
else:
status = trainer.get_status()
remaining = args.min_points - status["total_points"]
print(f" {status['total_points']} points collected")
print(f" Need {remaining} more for neural training")
@@ -188,12 +200,7 @@ def _run_neural_phase(args, trainer: AutoTrainer) -> int:
# Run with neural acceleration
import subprocess
cmd = [sys.executable, str(run_script), "--trials", str(args.trials), "--enable-nn"]
if args.resume:
cmd.append("--resume")
@@ -206,7 +213,7 @@ def _run_neural_phase(args, trainer: AutoTrainer) -> int:
elapsed = time.time() - start_time
print("-" * 60)
print(f"Neural optimization completed in {elapsed / 60:.1f} minutes")
# Check for retraining
print("\n[5/5] Checking if retraining needed...")
@@ -228,10 +235,7 @@ def cmd_create_study(args) -> int:
print(f"Creating study '{args.name}' from template '{args.template}'...")
try:
study_path = create_study_from_template(template_name=args.template, study_name=args.name)
print(f"\nSuccess! Study created at: {study_path}")
return 0
except FileNotFoundError as e:
@@ -290,7 +294,7 @@ def cmd_status(args) -> int:
print(f" Model version: v{status['model_version']}")
print(f" Should train: {status['should_train']}")
if status["latest_model"]:
print(f" Latest model: {status['latest_model']}")
else:
@@ -305,8 +309,8 @@ def cmd_status(args) -> int:
for study in studies:
icon = "[OK]" if study["is_ready"] else "[!]"
trials_info = f"{study['trials']} trials" if study["trials"] > 0 else "no trials"
pareto_info = f", {study['pareto']} Pareto" if study["pareto"] > 0 else ""
print(f" {icon} {study['name']}")
print(f" Status: {study['status']} ({trials_info}{pareto_info})")
@@ -317,11 +321,7 @@ def cmd_train(args) -> int:
"""Trigger neural network training."""
print(f"Training neural model for study: {args.study}")
trainer = AutoTrainer(study_name=args.study, min_points=args.min_points, epochs=args.epochs)
status = trainer.get_status()
print(f"\nCurrent status:")
@@ -329,8 +329,10 @@ def cmd_train(args) -> int:
print(f" Min threshold: {args.min_points}")
if args.force or trainer.should_train():
if args.force and status["total_points"] < args.min_points:
print(
f"\nWarning: Force training with {status['total_points']} points (< {args.min_points})"
)
print("\nStarting training...")
model_path = trainer.train()
@@ -342,7 +344,7 @@ def cmd_train(args) -> int:
print("\nTraining failed - check logs")
return 1
else:
needed = args.min_points - status["total_points"]
print(f"\nNot enough data for training. Need {needed} more points.")
print("Use --force to train anyway.")
return 1
@@ -355,6 +357,269 @@ def cmd_validate(args) -> int:
return 0 if validation.is_ready_to_run else 1
# ============================================================================
# NEW UX SYSTEM COMMANDS
# ============================================================================
def cmd_intake(args) -> int:
"""Process an intake folder into a study."""
IntakeProcessor = get_intake_processor()
# Determine inbox folder
inbox_path = Path(args.folder)
if not inbox_path.is_absolute():
inbox_dir = PROJECT_ROOT / "studies" / "_inbox"
if (inbox_dir / args.folder).exists():
inbox_path = inbox_dir / args.folder
elif (PROJECT_ROOT / "studies" / args.folder).exists():
inbox_path = PROJECT_ROOT / "studies" / args.folder
if not inbox_path.exists():
print(f"Error: Folder not found: {inbox_path}")
return 1
print(f"Processing intake: {inbox_path}")
print("=" * 60)
def progress(message: str, percent: float):
bar_width = 30
filled = int(bar_width * percent)
bar = "=" * filled + "-" * (bar_width - filled)
print(f"\r[{bar}] {percent * 100:5.1f}% {message}", end="", flush=True)
if percent >= 1.0:
print()
try:
processor = IntakeProcessor(inbox_path, progress_callback=progress)
context = processor.process(
run_baseline=not args.skip_baseline,
copy_files=True,
run_introspection=True,
)
print("\n" + "=" * 60)
print("INTAKE COMPLETE")
print("=" * 60)
summary = context.get_context_summary()
print(f"\nStudy: {context.study_name}")
print(f"Location: {processor.study_dir}")
print(f"\nContext loaded:")
print(f" Model: {'Yes' if summary['has_model'] else 'No'}")
print(f" Introspection: {'Yes' if summary['has_introspection'] else 'No'}")
print(f" Baseline: {'Yes' if summary['has_baseline'] else 'No'}")
print(
f" Expressions: {summary['num_expressions']} ({summary['num_dv_candidates']} candidates)"
)
if context.has_baseline:
print(f"\nBaseline: {context.get_baseline_summary()}")
if summary["warnings"]:
print(f"\nWarnings:")
for w in summary["warnings"]:
print(f" - {w}")
print(f"\nNext: atomizer gate {context.study_name}")
return 0
except Exception as e:
print(f"\nError: {e}")
if args.verbose:
import traceback
traceback.print_exc()
return 1
def cmd_gate(args) -> int:
"""Run validation gate before optimization."""
ValidationGate = get_validation_gate()
study_path = Path(args.study)
if not study_path.is_absolute():
study_path = PROJECT_ROOT / "studies" / args.study
if not study_path.exists():
print(f"Error: Study not found: {study_path}")
return 1
print(f"Validation Gate: {study_path.name}")
print("=" * 60)
def progress(message: str, percent: float):
bar_width = 30
filled = int(bar_width * percent)
bar = "=" * filled + "-" * (bar_width - filled)
print(f"\r[{bar}] {percent * 100:5.1f}% {message}", end="", flush=True)
if percent >= 1.0:
print()
try:
gate = ValidationGate(study_path, progress_callback=progress)
result = gate.validate(
run_test_trials=not args.skip_trials,
n_test_trials=args.trials,
)
print("\n" + "=" * 60)
if result.passed:
print("VALIDATION PASSED")
else:
print("VALIDATION FAILED")
print("=" * 60)
# Show test trials
if result.test_trials:
print(
f"\nTest Trials: {len([t for t in result.test_trials if t.success])}/{len(result.test_trials)} passed"
)
if result.results_vary:
print("Results vary: Yes (mesh updating correctly)")
else:
print("Results vary: NO - MESH MAY NOT BE UPDATING!")
# Results table
print(f"\n{'Trial':<8} {'Status':<8} {'Time':<8}", end="")
if result.test_trials and result.test_trials[0].objectives:
for obj in list(result.test_trials[0].objectives.keys())[:3]:
print(f" {obj[:10]:<12}", end="")
print()
for trial in result.test_trials:
status = "OK" if trial.success else "FAIL"
print(
f"{trial.trial_number:<8} {status:<8} {trial.solve_time_seconds:<8.1f}", end=""
)
for val in list(trial.objectives.values())[:3]:
print(f" {val:<12.4f}", end="")
print()
# Runtime estimate
if result.avg_solve_time:
print(f"\nRuntime Estimate:")
print(f" Avg solve: {result.avg_solve_time:.1f}s")
if result.estimated_total_runtime:
print(f" Total: {result.estimated_total_runtime / 3600:.1f}h")
# Errors
if result.errors:
print(f"\nErrors:")
for err in result.errors:
print(f" - {err}")
if result.passed and args.approve:
gate.approve()
print(f"\nStudy approved for optimization!")
elif result.passed:
print(f"\nTo approve: atomizer gate {args.study} --approve")
gate.save_result(result)
return 0 if result.passed else 1
except Exception as e:
print(f"\nError: {e}")
if args.verbose:
import traceback
traceback.print_exc()
return 1
def cmd_finalize(args) -> int:
"""Generate final report for a study."""
HTMLReportGenerator = get_report_generator()
study_path = Path(args.study)
if not study_path.is_absolute():
study_path = PROJECT_ROOT / "studies" / args.study
if not study_path.exists():
print(f"Error: Study not found: {study_path}")
return 1
print(f"Generating report for: {study_path.name}")
print("=" * 60)
try:
generator = HTMLReportGenerator(study_path)
report_path = generator.generate(include_pdf=getattr(args, "pdf", False))
print(f"\nReport generated successfully!")
print(f" HTML: {report_path}")
print(f" Data: {report_path.parent / 'data'}")
if getattr(args, "open", False):
import webbrowser
webbrowser.open(str(report_path))
else:
print(f"\nOpen in browser: file://{report_path}")
return 0
except Exception as e:
print(f"\nError: {e}")
if args.verbose:
import traceback
traceback.print_exc()
return 1
def cmd_list_studies(args) -> int:
"""List all studies and inbox items."""
studies_dir = PROJECT_ROOT / "studies"
print("Atomizer Studies")
print("=" * 60)
# Inbox items
inbox_dir = studies_dir / "_inbox"
if inbox_dir.exists():
inbox_items = [d for d in inbox_dir.iterdir() if d.is_dir() and not d.name.startswith(".")]
if inbox_items:
print("\nPending Intake (_inbox/):")
for item in sorted(inbox_items):
has_config = (item / "intake.yaml").exists()
has_model = bool(list(item.glob("**/*.sim")))
status = []
if has_config:
status.append("yaml")
if has_model:
status.append("model")
print(f" {item.name:<30} [{', '.join(status) or 'empty'}]")
# Active studies
print("\nStudies:")
for study_dir in sorted(studies_dir.iterdir()):
if (
study_dir.is_dir()
and not study_dir.name.startswith("_")
and not study_dir.name.startswith(".")
):
has_spec = (study_dir / "atomizer_spec.json").exists() or (
study_dir / "optimization_config.json"
).exists()
has_db = any(study_dir.rglob("study.db"))
has_approval = (study_dir / ".validation_approved").exists()
status = []
if has_spec:
status.append("configured")
if has_approval:
status.append("approved")
if has_db:
status.append("has_data")
print(f" {study_dir.name:<30} [{', '.join(status) or 'new'}]")
return 0
def main():
parser = argparse.ArgumentParser(
description="Atomizer - Neural-Accelerated Structural Optimization",
@@ -372,7 +637,7 @@ Examples:
# Manual training
python atomizer.py train --study my_study --epochs 100
""",
)
parser.add_argument("-v", "--verbose", action="store_true", help="Verbose output")
@@ -381,13 +646,14 @@ Examples:
# neural-optimize command
neural_parser = subparsers.add_parser(
"neural-optimize", help="Run neural-accelerated optimization (main workflow)"
)
neural_parser.add_argument("--study", "-s", required=True, help="Study name")
neural_parser.add_argument("--trials", "-n", type=int, default=500, help="Total trials")
neural_parser.add_argument("--min-points", type=int, default=50, help="Min points for training")
neural_parser.add_argument(
"--retrain-every", type=int, default=50, help="Retrain after N new points"
)
neural_parser.add_argument("--epochs", type=int, default=100, help="Training epochs")
neural_parser.add_argument("--resume", action="store_true", help="Resume existing study")
@@ -414,6 +680,31 @@ Examples:
validate_parser = subparsers.add_parser("validate", help="Validate study setup")
validate_parser.add_argument("--study", "-s", required=True, help="Study name")
# ========================================================================
# NEW UX SYSTEM COMMANDS
# ========================================================================
# intake command
intake_parser = subparsers.add_parser("intake", help="Process an intake folder into a study")
intake_parser.add_argument("folder", help="Path to intake folder")
intake_parser.add_argument("--skip-baseline", action="store_true", help="Skip baseline solve")
# gate command (validation gate)
gate_parser = subparsers.add_parser("gate", help="Run validation gate with test trials")
gate_parser.add_argument("study", help="Study name or path")
gate_parser.add_argument("--skip-trials", action="store_true", help="Skip test trials")
gate_parser.add_argument("--trials", type=int, default=3, help="Number of test trials")
gate_parser.add_argument("--approve", action="store_true", help="Approve if validation passes")
# list command
list_studies_parser = subparsers.add_parser("list", help="List all studies and inbox items")
# finalize command
finalize_parser = subparsers.add_parser("finalize", help="Generate final HTML report")
finalize_parser.add_argument("study", help="Study name or path")
finalize_parser.add_argument("--pdf", action="store_true", help="Also generate PDF")
finalize_parser.add_argument("--open", action="store_true", help="Open report in browser")
args = parser.parse_args()
if not args.command:
@@ -429,7 +720,12 @@ Examples:
"list-templates": cmd_list_templates,
"status": cmd_status,
"train": cmd_train,
"validate": cmd_validate,
# New UX commands
"intake": cmd_intake,
"gate": cmd_gate,
"list": cmd_list_studies,
"finalize": cmd_finalize,
}
handler = commands.get(args.command)
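The `intake` and `gate` commands above define an identical inline `progress` callback. Extracted as a standalone sketch for clarity — the function name and the return value are added here purely so the renderer can be exercised in isolation; the committed code only prints:

```python
def render_progress(message: str, percent: float, bar_width: int = 30) -> str:
    """Render the one-line progress bar used by cmd_intake and cmd_gate."""
    filled = int(bar_width * percent)
    bar = "=" * filled + "-" * (bar_width - filled)
    line = f"[{bar}] {percent * 100:5.1f}% {message}"
    # The CLI prints with \r and end="" so the bar redraws in place,
    # emitting the final newline only once percent reaches 1.0.
    print(f"\r{line}", end="\n" if percent >= 1.0 else "", flush=True)
    return line

render_progress("Running baseline solve", 0.5)
render_progress("Done", 1.0)
```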

View File

@@ -0,0 +1,144 @@
# Atomizer Studio - Technical Implementation Plan
**Version**: 1.0
**Date**: January 24, 2026
**Status**: In Progress
**Author**: Atomizer Team
---
## 1. Executive Summary
**Atomizer Studio** is a unified, drag-and-drop study creation environment that consolidates file management, visual configuration (Canvas), and AI assistance into a single real-time workspace. It replaces the legacy wizard-based approach with a modern "Studio" experience.
### Core Principles
| Principle | Implementation |
|-----------|----------------|
| **Drag & Drop First** | No wizards, no forms. Drop files, see results. |
| **AI-Native** | Claude sees everything: files, parameters, goals. It proposes, you approve. |
| **Zero Commitment** | Work in "Draft Mode" until ready. Nothing is permanent until "Build". |
---
## 2. Architecture
### 2.1 The Draft Workflow
```
User Opens /studio
POST /intake/draft ──► Creates studies/_inbox/draft_{id}/
User Drops Files ──► Auto-Introspection ──► Parameters Discovered
AI Reads Context ──► Proposes Configuration ──► Canvas Updates
User Clicks BUILD ──► Finalize ──► studies/{topic}/{name}/
```
### 2.2 Interface Layout
```
┌───────────────────┬───────────────────────────────────────────────┬───────────────────┐
│ RESOURCES (Left) │ CANVAS (Center) │ ASSISTANT (Right) │
├───────────────────┼───────────────────────────────────────────────┼───────────────────┤
│ ▼ DROP ZONE │ │ │
│ [Drag files] │ ┌───────┐ ┌────────┐ ┌─────────┐ │ "I see you want │
│ │ │ Model ├─────►│ Solver ├─────►│ Extract │ │ to minimize │
│ ▼ MODEL FILES │ └───────┘ └────────┘ └────┬────┘ │ mass. Adding │
│ • bracket.sim │ │ │ objective..." │
│ • bracket.prt │ ┌────▼────┐ │ │
│ │ │ Objectiv│ │ [Apply Changes] │
│ ▼ PARAMETERS │ └─────────┘ │ │
│ • thickness │ │ │
│ • rib_count │ AtomizerSpec v2.0 │ │
│ │ (Draft Mode) │ │
│ ▼ CONTEXT │ │ │
│ • goals.pdf │ │ [ Chat Input... ] │
├───────────────────┼───────────────────────────────────────────────┼───────────────────┤
│ [ Reset Draft ] │ [ Validate ] [ BUILD STUDY ] │ │
└───────────────────┴───────────────────────────────────────────────┴───────────────────┘
```
---
## 3. Implementation Phases
### Phase 1: Backend API Enhancements
- `POST /intake/draft` - Create anonymous draft
- `GET /intake/{id}/context/content` - Extract text from uploaded files
- Enhanced `POST /intake/{id}/finalize` with rename support
### Phase 2: Frontend Studio Shell
- `/studio` route with 3-column layout
- DropZone component with file categorization
- Integrated Canvas in draft mode
- Parameter discovery panel
### Phase 3: AI Integration
- Context-aware chat system
- Spec modification via Claude
- Real-time canvas updates
### Phase 4: Polish & Testing
- Full DevLoop testing
- Edge case handling
- UX polish and animations
---
## 4. File Structure
```
atomizer-dashboard/
├── frontend/src/
│ ├── pages/
│ │ └── Studio.tsx
│ ├── components/
│ │ └── studio/
│ │ ├── StudioLayout.tsx
│ │ ├── ResourcePanel.tsx
│ │ ├── StudioCanvas.tsx
│ │ ├── StudioChat.tsx
│ │ ├── DropZone.tsx
│ │ ├── ParameterList.tsx
│ │ ├── ContextFileList.tsx
│ │ └── BuildDialog.tsx
│ └── hooks/
│ └── useDraft.ts
├── backend/api/
│ └── routes/
│ └── intake.py (enhanced)
```
---
## 5. API Endpoints
### New Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/intake/draft` | POST | Create anonymous draft study |
| `/intake/{id}/context/content` | GET | Extract text from context files |
### Enhanced Endpoints
| Endpoint | Change |
|----------|--------|
| `/intake/{id}/finalize` | Added `new_name` parameter for rename |
---
## 6. Status
- [x] Plan documented
- [ ] Phase 1: Backend
- [ ] Phase 2: Frontend Shell
- [ ] Phase 3: AI Integration
- [ ] Phase 4: Testing & Polish

File diff suppressed because it is too large

View File

@@ -0,0 +1,637 @@
# Dashboard Intake & AtomizerSpec Integration Plan
**Version**: 1.0
**Author**: Atomizer Team
**Date**: January 22, 2026
**Status**: APPROVED FOR IMPLEMENTATION
**Dependencies**: ATOMIZER_UX_SYSTEM.md, UNIFIED_CONFIGURATION_ARCHITECTURE.md
---
## Executive Summary
This plan implements visual study creation in the Atomizer Dashboard, with `atomizer_spec.json` as the **single source of truth** for all study configuration. Engineers can:
1. **Drop files** into the dashboard to create a new study
2. **See introspection results** inline (expressions, mass, solver type)
3. **Open Canvas** for detailed configuration (one-click from create card)
4. **Generate README with Claude** (intelligent, not template-based)
5. **Run baseline solve** with real-time progress via WebSocket
6. **Finalize** to move study from inbox to studies folder
**Key Principle**: Every operation reads from or writes to `atomizer_spec.json`. Nothing bypasses the spec.
---
## 1. Architecture Overview
### 1.1 AtomizerSpec as Central Data Hub
```
┌─────────────────────────────────────────────────────────────────────────────────┐
│ ATOMIZER_SPEC.JSON - CENTRAL DATA HUB │
├─────────────────────────────────────────────────────────────────────────────────┤
│ │
│ INPUTS (write to spec) SPEC OUTPUTS (read spec) │
│ ┌──────────────────┐ ┌──────────┐ ┌──────────────────┐ │
│ │ File Upload │ │ │ │ Canvas Builder │ │
│ │ Introspection │ ────────→ │ atomizer │ ────────→ │ Dashboard Views │ │
│ │ Claude Interview │ │ _spec │ │ Optimization Run │ │
│ │ Canvas Edits │ │ .json │ │ README Generator │ │
│ │ Manual Edit │ │ │ │ Report Generator │ │
│ └──────────────────┘ └──────────┘ └──────────────────┘ │
│ │ │
│ │ validates against │
│ ↓ │
│ ┌──────────────────┐ │
│ │ atomizer_spec │ │
│ │ _v2.json │ │
│ │ (JSON Schema) │ │
│ └──────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────────┘
```
### 1.2 Study Creation Flow
```
┌─────────────────────────────────────────────────────────────────────────────────┐
│ STUDY CREATION FLOW │
├─────────────────────────────────────────────────────────────────────────────────┤
│ │
│ 1. DROP FILES 2. INTROSPECT 3. CLAUDE README 4. FINALIZE │
│ ┌────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────┐ │
│ │ .sim .prt │ → │ Expressions │ → │ Analyzes │ → │ Baseline │ │
│ │ .fem _i.prt│ │ Mass props │ │ context+model│ │ solve │ │
│ │ goals.md │ │ Solver type │ │ Writes full │ │ Update │ │
│ └────────────┘ └──────────────┘ │ README.md │ │ README │ │
│ │ │ └──────────────┘ │ Archive │ │
│ ↓ ↓ │ │ inbox │ │
│ Creates initial Updates spec Claude skill └──────────┘ │
│ atomizer_spec.json with introspection for study docs │ │
│ status="draft" status="introspected" ↓ │
│ studies/ │
│ {topic}/ │
│ {name}/ │
└─────────────────────────────────────────────────────────────────────────────────┘
```
### 1.3 Spec Status State Machine
```
draft → introspected → configured → validated → ready → running → completed
│ │ │ │ │ │ │
│ │ │ │ │ │ └─ optimization done
│ │ │ │ │ └─ optimization started
│ │ │ │ └─ can start optimization
│ │ │ └─ baseline solve done
│ │ └─ DVs, objectives, constraints set (Claude or Canvas)
│ └─ introspection done
└─ files uploaded, minimal spec created
```
---
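The state machine above can be enforced with a small transition table. A minimal sketch, assuming exactly the transitions shown in the diagram (the `ALLOWED` map and `advance` helper are illustrative, not engine code; here `failed` is only reachable from `running`, though other states could plausibly fail too):

```python
# Hypothetical sketch: enforce the spec lifecycle from the diagram above.
# The ALLOWED map is an assumption drawn from the diagram, not engine code.
ALLOWED = {
    "draft": {"introspected"},
    "introspected": {"configured"},
    "configured": {"validated"},
    "validated": {"ready"},
    "ready": {"running"},
    "running": {"completed", "failed"},
}

def advance(current: str, target: str) -> str:
    """Return target if the transition is legal, else raise ValueError."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {target}")
    return target
```

This keeps every status change funnelled through one checkpoint, so a spec can never jump from `draft` straight to `running`.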
## 2. AtomizerSpec Schema Extensions
### 2.1 New Fields in SpecMeta
Add to `optimization_engine/config/spec_models.py`:
```python
class SpecStatus(str, Enum):
"""Study lifecycle status."""
DRAFT = "draft"
INTROSPECTED = "introspected"
CONFIGURED = "configured"
VALIDATED = "validated"
READY = "ready"
RUNNING = "running"
COMPLETED = "completed"
FAILED = "failed"
class SpecCreatedBy(str, Enum):
"""Who/what created the spec."""
CANVAS = "canvas"
CLAUDE = "claude"
API = "api"
MIGRATION = "migration"
MANUAL = "manual"
DASHBOARD_INTAKE = "dashboard_intake" # NEW
class SpecMeta(BaseModel):
"""Metadata about the spec."""
version: str = Field(..., pattern=r"^2\.\d+$")
study_name: str
created: Optional[datetime] = None
modified: Optional[datetime] = None
created_by: Optional[SpecCreatedBy] = None
modified_by: Optional[str] = None
status: SpecStatus = SpecStatus.DRAFT # NEW
topic: Optional[str] = None # NEW - folder grouping
```
### 2.2 IntrospectionData Model
New model for storing introspection results in the spec:
```python
class ExpressionInfo(BaseModel):
"""Information about an NX expression."""
name: str
value: Optional[float] = None
units: Optional[str] = None
formula: Optional[str] = None
is_candidate: bool = False
confidence: float = 0.0 # 0.0 to 1.0
class BaselineData(BaseModel):
"""Results from baseline FEA solve."""
timestamp: datetime
solve_time_seconds: float
mass_kg: Optional[float] = None
max_displacement_mm: Optional[float] = None
max_stress_mpa: Optional[float] = None
success: bool = True
error: Optional[str] = None
class IntrospectionData(BaseModel):
"""Model introspection results."""
timestamp: datetime
solver_type: Optional[str] = None
mass_kg: Optional[float] = None
volume_mm3: Optional[float] = None
expressions: List[ExpressionInfo] = []
baseline: Optional[BaselineData] = None
warnings: List[str] = []
```
### 2.3 Extended ModelConfig
```python
class ModelConfig(BaseModel):
"""Model file configuration."""
sim: Optional[SimFile] = None
fem: Optional[str] = None
prt: Optional[str] = None
idealized_prt: Optional[str] = None # NEW - critical for mesh updating
introspection: Optional[IntrospectionData] = None # NEW
```
### 2.4 JSON Schema Updates
Add to `optimization_engine/schemas/atomizer_spec_v2.json`:
```json
{
"definitions": {
"SpecMeta": {
"properties": {
"status": {
"type": "string",
"enum": ["draft", "introspected", "configured", "validated", "ready", "running", "completed", "failed"],
"default": "draft"
},
"topic": {
"type": "string",
"pattern": "^[A-Za-z0-9_]+$",
"description": "Topic folder for grouping related studies"
}
}
},
"IntrospectionData": {
"type": "object",
"properties": {
"timestamp": { "type": "string", "format": "date-time" },
"solver_type": { "type": "string" },
"mass_kg": { "type": "number" },
"volume_mm3": { "type": "number" },
"expressions": {
"type": "array",
"items": { "$ref": "#/definitions/ExpressionInfo" }
},
"baseline": { "$ref": "#/definitions/BaselineData" },
"warnings": { "type": "array", "items": { "type": "string" } }
}
},
"ExpressionInfo": {
"type": "object",
"properties": {
"name": { "type": "string" },
"value": { "type": "number" },
"units": { "type": "string" },
"formula": { "type": "string" },
"is_candidate": { "type": "boolean", "default": false },
"confidence": { "type": "number", "minimum": 0, "maximum": 1 }
},
"required": ["name"]
},
"BaselineData": {
"type": "object",
"properties": {
"timestamp": { "type": "string", "format": "date-time" },
"solve_time_seconds": { "type": "number" },
"mass_kg": { "type": "number" },
"max_displacement_mm": { "type": "number" },
"max_stress_mpa": { "type": "number" },
"success": { "type": "boolean", "default": true },
"error": { "type": "string" }
}
}
}
}
```
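The two `SpecMeta` constraints above (the status enum and the topic pattern) can be sanity-checked without pulling in a full JSON Schema validator; a stdlib-only sketch (the `check_meta` helper is hypothetical):

```python
import re

# Values mirror the "status" enum and "topic" pattern in the schema above.
STATUSES = {"draft", "introspected", "configured", "validated",
            "ready", "running", "completed", "failed"}
TOPIC_RE = re.compile(r"^[A-Za-z0-9_]+$")

def check_meta(meta: dict) -> list:
    """Return a list of constraint violations (empty list = valid)."""
    errors = []
    if meta.get("status", "draft") not in STATUSES:
        errors.append(f"invalid status: {meta.get('status')}")
    topic = meta.get("topic")
    if topic is not None and not TOPIC_RE.match(topic):
        errors.append(f"invalid topic: {topic}")
    return errors
```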
---
## 3. Backend Implementation
### 3.1 New File: `backend/api/routes/intake.py`
Core intake API endpoints:
| Endpoint | Method | Purpose | Status After |
|----------|--------|---------|--------------|
| `/api/intake/create` | POST | Create inbox folder with initial spec | draft |
| `/api/intake/introspect` | POST | Run NX introspection, update spec | introspected |
| `/api/intake/readme/generate` | POST | Claude generates README + config suggestions | configured |
| `/api/intake/finalize` | POST | Baseline solve, move to studies folder | validated/ready |
| `/api/intake/list` | GET | List inbox folders with status | - |
| `/api/intake/topics` | GET | List existing topic folders | - |
**Key Implementation Details:**
1. **Create** - Creates folder structure + minimal `atomizer_spec.json`
2. **Introspect** - Runs NX introspection, updates spec with expressions, mass, solver type
3. **Generate README** - Calls Claude with spec + goals.md, returns README + suggested config
4. **Finalize** - Full workflow: copy files, baseline solve (optional), move to studies, archive inbox
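The finalize workflow in item 4 can be framed as a step runner with progress reporting; a hedged sketch (the step names follow item 4, but the runner itself is illustrative, not the actual endpoint implementation):

```python
from typing import Callable, List

# Step names mirror item 4 above; the runner is an illustrative sketch.
STEPS = ["copy files", "baseline solve", "move to studies", "archive inbox"]

def finalize(run_step: Callable[[str], None],
             progress: Callable[[str, float], None],
             skip_baseline: bool = False) -> List[str]:
    """Run each finalize step, reporting fractional progress after each."""
    steps = [s for s in STEPS if not (skip_baseline and s == "baseline solve")]
    done = []
    for i, step in enumerate(steps, start=1):
        run_step(step)
        progress(step, i / len(steps))
        done.append(step)
    return done
```

Passing the step executor and progress callback in makes the same runner usable from both the CLI progress bar and the WebSocket updates described in section 3.4.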
### 3.2 New File: `backend/api/services/spec_manager.py`
Centralized spec operations:
```python
class SpecManager:
"""Single source of truth for spec operations."""
def load(self) -> dict
def save(self, spec: dict) -> None
def update_status(self, status: str, modified_by: str) -> dict
def add_introspection(self, data: dict) -> dict
def get_status(self) -> str
def exists(self) -> bool
```
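A minimal file-based implementation behind these signatures might look like the following sketch (the path layout and `meta` field names are assumptions consistent with section 2; `add_introspection` is omitted for brevity):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

class SpecManager:
    """Sketch: load/save atomizer_spec.json for one study folder."""

    def __init__(self, study_dir):
        self.spec_path = Path(study_dir) / "atomizer_spec.json"

    def exists(self) -> bool:
        return self.spec_path.exists()

    def load(self) -> dict:
        return json.loads(self.spec_path.read_text())

    def save(self, spec: dict) -> None:
        self.spec_path.write_text(json.dumps(spec, indent=2))

    def update_status(self, status: str, modified_by: str) -> dict:
        spec = self.load()
        meta = spec.setdefault("meta", {})
        meta["status"] = status
        meta["modified_by"] = modified_by
        meta["modified"] = datetime.now(timezone.utc).isoformat()
        self.save(spec)
        return spec

    def get_status(self) -> str:
        return self.load().get("meta", {}).get("status", "draft")
```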
### 3.3 New File: `backend/api/services/claude_readme.py`
Claude-powered README generation:
- Loads skill from `.claude/skills/modules/study-readme-generator.md`
- Builds prompt with spec + goals
- Returns README content + suggested DVs/objectives/constraints
- Uses Claude API (claude-sonnet-4-20250514)
### 3.4 WebSocket for Finalization Progress
The `/api/intake/finalize` endpoint will support WebSocket for real-time progress:
```typescript
// Progress steps
const steps = [
'Creating study folder',
'Copying model files',
'Running introspection',
'Running baseline solve',
'Extracting baseline results',
'Generating README with Claude',
'Moving to studies folder',
'Archiving inbox'
];
```
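On the backend, each of these steps can be serialized as a JSON progress event for the socket; a hedged Python sketch (the payload field names are assumptions, not the actual wire format):

```python
import json

# Same eight steps as the frontend list above.
STEPS = [
    "Creating study folder",
    "Copying model files",
    "Running introspection",
    "Running baseline solve",
    "Extracting baseline results",
    "Generating README with Claude",
    "Moving to studies folder",
    "Archiving inbox",
]

def progress_event(step_index: int, status: str = "running") -> str:
    """Serialize one finalize-progress update as a JSON string."""
    return json.dumps({
        "step": STEPS[step_index],
        "index": step_index,
        "total": len(STEPS),
        "status": status,
        # A step counts toward percent only once it is done.
        "percent": round(100 * (step_index + (status == "done")) / len(STEPS)),
    })
```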
---
## 4. Frontend Implementation
### 4.1 Component Structure
```
frontend/src/components/home/
├── CreateStudyCard.tsx # Main study creation UI
├── IntrospectionResults.tsx # Display introspection data
├── TopicSelector.tsx # Topic dropdown + new topic input
├── StudyFilesPanel.tsx # File display in preview
└── index.ts # Exports
frontend/src/components/common/
└── ProgressModal.tsx # Finalization progress display
```
### 4.2 CreateStudyCard States
```typescript
type CardState =
| 'empty' // No files, just showing dropzone
| 'staged' // Files selected, ready to upload
| 'uploading' // Uploading files to inbox
| 'introspecting' // Running introspection
| 'ready' // Introspection done, can finalize or open canvas
| 'finalizing' // Running finalization
| 'complete'; // Study created, showing success
```
### 4.3 CreateStudyCard UI
```
╔═══════════════════════════════════════════════════════════════╗
║ + Create New Study [Open Canvas] ║
╠═══════════════════════════════════════════════════════════════╣
║ ║
║ Study Name ║
║ ┌─────────────────────────────────────────────────────────┐ ║
║ │ bracket_optimization_v1 │ ║
║ └─────────────────────────────────────────────────────────┘ ║
║ ║
║ Topic ║
║ ┌─────────────────────────────────────────────────────────┐ ║
║ │ M1_Mirror ▼ │ ║
║ └─────────────────────────────────────────────────────────┘ ║
║ ○ Brackets ○ M1_Mirror ○ Support_Arms ● + New Topic ║
║ ║
║ ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ ║
║ │ 📁 Drop model files here │ ║
║ │ .sim .prt .fem _i.prt │ ║
║ │ or click to browse │ ║
║ └ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘ ║
║ ║
║ Files [Clear] ║
║ ┌─────────────────────────────────────────────────────────┐ ║
║ │ ✓ bracket_sim1.sim 1.2 MB │ ║
║ │ ✓ bracket.prt 3.4 MB │ ║
║ │ ✓ bracket_fem1.fem 2.1 MB │ ║
║ │ ✓ bracket_fem1_i.prt 0.8 MB ← Idealized! │ ║
║ └─────────────────────────────────────────────────────────┘ ║
║ ║
║ ▼ Model Information ✓ Ready ║
║ ┌─────────────────────────────────────────────────────────┐ ║
║ │ Solver: NX Nastran │ ║
║ │ Estimated Mass: 2.34 kg │ ║
║ │ │ ║
║ │ Design Variable Candidates (5 found): │ ║
║ │ ★ rib_thickness = 5.0 mm [2.5 - 10.0] │ ║
║ │ ★ web_height = 20.0 mm [10.0 - 40.0] │ ║
║ │ ★ flange_width = 15.0 mm [7.5 - 30.0] │ ║
║ └─────────────────────────────────────────────────────────┘ ║
║ ║
║ ┌─────────────────────────────────────────────────────────┐ ║
║ │ ☑ Run baseline solve (recommended for accurate values) │ ║
║ │ ☑ Generate README with Claude │ ║
║ └─────────────────────────────────────────────────────────┘ ║
║ ║
║ [ Finalize Study ] [ Open Canvas → ] ║
║ ║
╚═══════════════════════════════════════════════════════════════╝
```
### 4.4 ProgressModal UI
```
╔════════════════════════════════════════════════════════════════╗
║ Creating Study [X] ║
╠════════════════════════════════════════════════════════════════╣
║ ║
║ bracket_optimization_v1 ║
║ Topic: M1_Mirror ║
║ ║
║ ┌──────────────────────────────────────────────────────────┐ ║
║ │ │ ║
║ │ ✓ Creating study folder 0.5s │ ║
║ │ ✓ Copying model files 1.2s │ ║
║ │ ✓ Running introspection 3.4s │ ║
║ │ ● Running baseline solve... │ ║
║ │ ├─ Updating parameters │ ║
║ │ ├─ Meshing... │ ║
║ │ └─ Solving (iteration 2/5) │ ║
║ │ ○ Extracting baseline results │ ║
║ │ ○ Generating README with Claude │ ║
║ │ ○ Moving to studies folder │ ║
║ │ ○ Archiving inbox │ ║
║ │ │ ║
║ └──────────────────────────────────────────────────────────┘ ║
║ ║
║ [━━━━━━━━━━━━━━━━░░░░░░░░░░░░░░░░░░░] 42% ║
║ ║
║ Estimated time remaining: ~45 seconds ║
║ ║
╚════════════════════════════════════════════════════════════════╝
```
### 4.5 StudyFilesPanel (in Preview)
```
┌────────────────────────────────────────────────────────────────┐
│ 📁 Model Files (4) [+ Add] [↗] │
├────────────────────────────────────────────────────────────────┤
│ 📦 bracket_sim1.sim 1.2 MB Simulation │
│ 📐 bracket.prt 3.4 MB Geometry │
│ 🔷 bracket_fem1.fem 2.1 MB FEM │
│ 🔶 bracket_fem1_i.prt 0.8 MB Idealized Part │
└────────────────────────────────────────────────────────────────┘
```
---
## 5. Claude Skill for README Generation
### 5.1 Skill Location
`.claude/skills/modules/study-readme-generator.md`
### 5.2 Skill Purpose
Claude analyzes the AtomizerSpec and generates:
1. **Comprehensive README.md** - Not a template, but intelligent documentation
2. **Suggested design variables** - Based on introspection candidates
3. **Suggested objectives** - Based on goals.md or reasonable defaults
4. **Suggested extractors** - Mapped to objectives
5. **Suggested constraints** - If mentioned in goals
### 5.3 README Structure
1. **Title & Overview** - Study name, description, quick stats
2. **Optimization Goals** - Primary objective, constraints summary
3. **Model Information** - Solver, baseline mass, warnings
4. **Design Variables** - Table with baseline, bounds, units
5. **Extractors & Objectives** - Physics extraction mapping
6. **Constraints** - Limits and thresholds
7. **Recommended Approach** - Algorithm, trial budget
8. **Files** - Model file listing
---
## 6. Home Page Integration
### 6.1 Layout
```
┌──────────────────────────────────────────────────────────────────┐
│ [Logo] [Canvas Builder] [Refresh] │
├──────────────────────────────────────────────────────────────────┤
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Studies │ │ Running │ │ Trials │ │ Best │ │
│ │ 15 │ │ 2 │ │ 1,234 │ │ 2.34e-3 │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
├─────────────────────────────┬────────────────────────────────────┤
│ ┌─────────────────────────┐ │ Study Preview │
│ │ + Create New Study │ │ ┌────────────────────────────────┐ │
│ │ │ │ │ 📁 Model Files (4) [+] │ │
│ │ [CreateStudyCard] │ │ │ (StudyFilesPanel) │ │
│ │ │ │ └────────────────────────────────┘ │
│ └─────────────────────────┘ │ │
│ │ README.md │
│ Studies (15) │ (MarkdownRenderer) │
│ ▶ M1_Mirror (5) │ │
│ ▶ Brackets (3) │ │
│ ▼ Other (2) │ │
│ └─ test_study ● Running │ │
└─────────────────────────────┴────────────────────────────────────┘
```
---
## 7. File Changes Summary
| File | Action | Est. Lines |
|------|--------|------------|
| **Backend** | | |
| `backend/api/routes/intake.py` | CREATE | ~350 |
| `backend/api/services/spec_manager.py` | CREATE | ~80 |
| `backend/api/services/claude_readme.py` | CREATE | ~150 |
| `backend/api/main.py` | MODIFY | +5 |
| **Schema/Models** | | |
| `optimization_engine/config/spec_models.py` | MODIFY | +60 |
| `optimization_engine/schemas/atomizer_spec_v2.json` | MODIFY | +50 |
| **Frontend** | | |
| `frontend/src/components/home/CreateStudyCard.tsx` | CREATE | ~400 |
| `frontend/src/components/home/IntrospectionResults.tsx` | CREATE | ~120 |
| `frontend/src/components/home/TopicSelector.tsx` | CREATE | ~80 |
| `frontend/src/components/home/StudyFilesPanel.tsx` | CREATE | ~100 |
| `frontend/src/components/common/ProgressModal.tsx` | CREATE | ~150 |
| `frontend/src/pages/Home.tsx` | MODIFY | +80 |
| `frontend/src/api/client.ts` | MODIFY | +100 |
| `frontend/src/types/atomizer-spec.ts` | MODIFY | +40 |
| **Skills** | | |
| `.claude/skills/modules/study-readme-generator.md` | CREATE | ~120 |
**Total: ~1,885 lines**
---
## 8. Implementation Order
### Phase 1: Backend Foundation (Day 1)
1. Update `spec_models.py` with new fields (status, IntrospectionData)
2. Update JSON schema
3. Create `spec_manager.py` service
4. Create `intake.py` routes (create, introspect, list, topics)
5. Register in `main.py`
6. Test with curl/Postman
### Phase 2: Claude Integration (Day 1-2)
1. Create `study-readme-generator.md` skill
2. Create `claude_readme.py` service
3. Add `/readme/generate` endpoint
4. Test README generation
### Phase 3: Frontend Components (Day 2-3)
1. Add TypeScript types
2. Add API client methods
3. Create `TopicSelector` component
4. Create `IntrospectionResults` component
5. Create `ProgressModal` component
6. Create `CreateStudyCard` component
7. Create `StudyFilesPanel` component
### Phase 4: Home Page Integration (Day 3)
1. Modify `Home.tsx` layout
2. Integrate `CreateStudyCard` above study list
3. Add `StudyFilesPanel` to study preview
4. Test full flow
### Phase 5: Finalization & WebSocket (Day 4)
1. Implement `/finalize` endpoint with baseline solve
2. Add WebSocket progress updates
3. Implement inbox archiving
4. End-to-end testing
5. Documentation updates
---
## 9. Validation Gate Integration
The Validation Gate runs 2-3 test trials before full optimization to catch:
- Mesh not updating (identical results)
- Extractor failures
- Constraint evaluation errors
**Integration point**: After study is finalized, before optimization starts, a "Validate" button appears in the study preview that runs the gate.
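The "identical results" symptom can be detected numerically by comparing objectives across the test trials; a hedged sketch (the input shape and tolerance are assumptions):

```python
def results_vary(trial_objectives, rel_tol: float = 1e-9) -> bool:
    """True if any objective differs across trials.

    Identical values across all test trials suggest the mesh is not
    actually updating between parameter changes.
    """
    if len(trial_objectives) < 2:
        return False
    first = trial_objectives[0]
    for trial in trial_objectives[1:]:
        for name, value in trial.items():
            ref = first.get(name)
            if ref is None:
                return True  # the set of objectives itself differs
            denom = max(abs(ref), abs(value), 1e-30)
            if abs(value - ref) / denom > rel_tol:
                return True
    return False
```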
---
## 10. Success Criteria
- [ ] User can create study by dropping files in dashboard
- [ ] Introspection runs automatically after upload
- [ ] Introspection results show inline with design candidates highlighted
- [ ] "Open Canvas" button works, loading spec into canvas
- [ ] Claude generates comprehensive README from spec + goals
- [ ] Baseline solve runs with WebSocket progress display
- [ ] Study moves to correct topic folder
- [ ] Inbox folder is archived after success
- [ ] `atomizer_spec.json` is the ONLY configuration file used
- [ ] Spec status updates correctly through workflow
- [ ] Canvas can load and edit spec from inbox (pre-finalization)
---
## 11. Error Handling
### Baseline Solve Failure
If baseline solve fails:
- Still create the study
- Set `spec.meta.status = "configured"` (not "validated")
- Store error in `spec.model.introspection.baseline.error`
- README notes baseline was attempted but failed
- User can retry baseline later or proceed without it
### Missing Idealized Part
If `*_i.prt` is not found:
- Add CRITICAL warning to introspection
- Highlight in UI with warning icon
- Still allow study creation (user may add later)
- README includes warning about mesh not updating
### Introspection Failure
If NX introspection fails:
- Store error in spec
- Allow manual configuration via Canvas
- User can retry introspection after fixing issues
---
## 12. Future Enhancements (Out of Scope)
- PDF extraction with Claude Vision
- Image analysis for sketches in context/
- Batch study creation (multiple studies at once)
- Study templates from existing studies
- Auto-retry failed baseline with different parameters
---
*Document created: January 22, 2026*
*Approved for implementation*

View File

@@ -110,31 +110,48 @@ frequency = result['frequency'] # Hz
```python
from optimization_engine.extractors.extract_von_mises_stress import extract_solid_stress
# For shell elements (CQUAD4, CTRIA3)
result = extract_solid_stress(op2_file, subcase=1, element_type='cquad4')
# RECOMMENDED: Check ALL solid element types (returns max across all)
result = extract_solid_stress(op2_file, subcase=1)
# For solid elements (CTETRA, CHEXA)
result = extract_solid_stress(op2_file, subcase=1, element_type='ctetra')
# Or specify single element type
result = extract_solid_stress(op2_file, subcase=1, element_type='chexa')
# Returns: {
# 'max_von_mises': float, # MPa
# 'max_stress_element': int
# 'max_von_mises': float, # MPa (auto-converted from kPa)
# 'max_stress_element': int,
# 'element_type': str, # e.g., 'CHEXA', 'CTETRA'
# 'units': 'MPa'
# }
max_stress = result['max_von_mises'] # MPa
```
**IMPORTANT (Updated 2026-01-22):**
- By default, checks ALL solid types: CTETRA, CHEXA, CPENTA, CPYRAM
- CHEXA elements often carry the highest stress (not CTETRA!)
- Auto-converts from kPa to MPa (NX kg-mm-s unit system outputs kPa)
- Returns Elemental Nodal stress (peak), not Elemental Centroid (averaged)
### E4: BDF Mass Extraction
**Module**: `optimization_engine.extractors.bdf_mass_extractor`
**Module**: `optimization_engine.extractors.extract_mass_from_bdf`
```python
from optimization_engine.extractors.bdf_mass_extractor import extract_mass_from_bdf
from optimization_engine.extractors import extract_mass_from_bdf
mass_kg = extract_mass_from_bdf(str(bdf_file)) # kg
result = extract_mass_from_bdf(bdf_file)
# Returns: {
# 'total_mass': float, # kg (primary key)
# 'mass_kg': float, # kg
# 'mass_g': float, # grams
# 'cg': [x, y, z], # center of gravity
# 'num_elements': int
# }
mass_kg = result['mass_kg'] # kg
```
**Note**: Reads mass directly from BDF/DAT file material and element definitions.
**Note**: Uses `BDFMassExtractor` internally. Mass is computed from element geometry and material density in the BDF/DAT file; with the NX kg-mm-s unit system, the result is directly in kg.
### E5: CAD Expression Mass

View File

@@ -0,0 +1,19 @@
"""
Atomizer CLI
============
Command-line interface for Atomizer operations.
Commands:
- atomizer intake <folder> - Process an intake folder
- atomizer validate <study> - Validate a study before running
- atomizer finalize <study> - Generate final report
Usage:
from optimization_engine.cli import main
main()
"""
from .main import main, app
__all__ = ["main", "app"]

View File

@@ -0,0 +1,383 @@
"""
Atomizer CLI Main Entry Point
=============================
Provides the `atomizer` command with subcommands:
- intake: Process an intake folder
- validate: Validate a study
- finalize: Generate final report
- list: List studies
Usage:
atomizer intake bracket_project
atomizer validate bracket_mass_opt
atomizer finalize bracket_mass_opt --format html
"""
from __future__ import annotations
import sys
from pathlib import Path
from typing import Optional
import argparse
import logging
def setup_logging(verbose: bool = False):
"""Setup logging configuration."""
level = logging.DEBUG if verbose else logging.INFO
logging.basicConfig(
level=level,
format="%(message)s",
)
def find_project_root() -> Path:
"""Find the Atomizer project root."""
current = Path(__file__).parent
while current != current.parent:
if (current / "CLAUDE.md").exists():
return current
current = current.parent
return Path.cwd()
def cmd_intake(args):
"""Process an intake folder."""
from optimization_engine.intake import IntakeProcessor
# Determine inbox folder
inbox_path = Path(args.folder)
if not inbox_path.is_absolute():
# Check if it's in _inbox
project_root = find_project_root()
inbox_dir = project_root / "studies" / "_inbox"
if (inbox_dir / args.folder).exists():
inbox_path = inbox_dir / args.folder
elif (project_root / "studies" / args.folder).exists():
inbox_path = project_root / "studies" / args.folder
if not inbox_path.exists():
print(f"Error: Folder not found: {inbox_path}")
return 1
print(f"Processing intake: {inbox_path}")
print("=" * 60)
# Progress callback
def progress(message: str, percent: float):
bar_width = 30
filled = int(bar_width * percent)
bar = "=" * filled + "-" * (bar_width - filled)
print(f"\r[{bar}] {percent * 100:5.1f}% {message}", end="", flush=True)
if percent >= 1.0:
print() # Newline at end
try:
processor = IntakeProcessor(
inbox_path,
progress_callback=progress if not args.quiet else None,
)
context = processor.process(
run_baseline=not args.skip_baseline,
copy_files=True,
run_introspection=True,
)
print("\n" + "=" * 60)
print("INTAKE COMPLETE")
print("=" * 60)
# Show summary
summary = context.get_context_summary()
print(f"\nStudy: {context.study_name}")
print(f"Location: {processor.study_dir}")
print(f"\nContext loaded:")
print(f" Model: {'Yes' if summary['has_model'] else 'No'}")
print(f" Introspection: {'Yes' if summary['has_introspection'] else 'No'}")
print(f" Baseline: {'Yes' if summary['has_baseline'] else 'No'}")
print(f" Goals: {'Yes' if summary['has_goals'] else 'No'}")
print(f" Pre-config: {'Yes' if summary['has_preconfig'] else 'No'}")
print(
f" Expressions: {summary['num_expressions']} ({summary['num_dv_candidates']} candidates)"
)
if context.has_baseline:
print(f"\nBaseline: {context.get_baseline_summary()}")
if summary["warnings"]:
print(f"\nWarnings:")
for w in summary["warnings"]:
print(f" - {w}")
if args.interview:
print(f"\nTo continue with interview: atomizer interview {context.study_name}")
elif args.canvas:
print(f"\nOpen dashboard to configure in Canvas mode")
else:
print(f"\nNext steps:")
print(f" 1. Review context in {processor.study_dir / '0_intake'}")
print(f" 2. Configure study via interview or canvas")
print(f" 3. Run: atomizer validate {context.study_name}")
return 0
except Exception as e:
print(f"\nError: {e}")
if args.verbose:
import traceback
traceback.print_exc()
return 1
def cmd_validate(args):
"""Validate a study before running."""
from optimization_engine.validation import ValidationGate
# Find study directory
study_path = Path(args.study)
if not study_path.is_absolute():
project_root = find_project_root()
study_path = project_root / "studies" / args.study
if not study_path.exists():
print(f"Error: Study not found: {study_path}")
return 1
print(f"Validating study: {study_path.name}")
print("=" * 60)
# Progress callback
def progress(message: str, percent: float):
bar_width = 30
filled = int(bar_width * percent)
bar = "=" * filled + "-" * (bar_width - filled)
print(f"\r[{bar}] {percent * 100:5.1f}% {message}", end="", flush=True)
if percent >= 1.0:
print()
try:
gate = ValidationGate(
study_path,
progress_callback=progress if not args.quiet else None,
)
result = gate.validate(
run_test_trials=not args.skip_trials,
n_test_trials=args.trials,
)
print("\n" + "=" * 60)
if result.passed:
print("VALIDATION PASSED")
else:
print("VALIDATION FAILED")
print("=" * 60)
# Show spec validation
if result.spec_check:
print(f"\nSpec Validation:")
print(f" Errors: {len(result.spec_check.errors)}")
print(f" Warnings: {len(result.spec_check.warnings)}")
for issue in result.spec_check.errors:
print(f" [ERROR] {issue.message}")
for issue in result.spec_check.warnings[:5]: # Limit warnings shown
print(f" [WARN] {issue.message}")
# Show test trials
if result.test_trials:
print(f"\nTest Trials:")
successful = [t for t in result.test_trials if t.success]
print(f" Completed: {len(successful)}/{len(result.test_trials)}")
if result.results_vary:
print(f" Results vary: Yes (good!)")
else:
print(f" Results vary: NO - MESH MAY NOT BE UPDATING!")
# Show trial results table
print(f"\n {'Trial':<8} {'Status':<10} {'Time (s)':<10}", end="")
if successful and successful[0].objectives:
for obj in list(successful[0].objectives.keys())[:3]:
print(f" {obj:<12}", end="")
print()
print(" " + "-" * 50)
for trial in result.test_trials:
status = "OK" if trial.success else "FAIL"
print(
f" {trial.trial_number:<8} {status:<10} {trial.solve_time_seconds:<10.1f}",
end="",
)
for val in list(trial.objectives.values())[:3]:
print(f" {val:<12.4f}", end="")
print()
# Show estimates
if result.avg_solve_time:
print(f"\nRuntime Estimate:")
print(f" Avg solve time: {result.avg_solve_time:.1f}s")
if result.estimated_total_runtime:
hours = result.estimated_total_runtime / 3600
print(f" Est. total: {hours:.1f} hours")
# Show errors
if result.errors:
print(f"\nErrors:")
for err in result.errors:
print(f" - {err}")
# Approve if passed and requested
if result.passed:
if args.approve:
gate.approve()
print(f"\nStudy approved for optimization.")
else:
print(f"\nTo approve and start: atomizer validate {args.study} --approve")
# Save result
output_path = gate.save_result(result)
print(f"\nResult saved: {output_path}")
return 0 if result.passed else 1
except Exception as e:
print(f"\nError: {e}")
if args.verbose:
import traceback
traceback.print_exc()
return 1
def cmd_list(args):
"""List available studies."""
project_root = find_project_root()
studies_dir = project_root / "studies"
print("Available Studies:")
print("=" * 60)
# List inbox items
inbox_dir = studies_dir / "_inbox"
if inbox_dir.exists():
inbox_items = [d for d in inbox_dir.iterdir() if d.is_dir() and not d.name.startswith(".")]
if inbox_items:
print("\nPending Intake (_inbox/):")
for item in sorted(inbox_items):
has_config = (item / "intake.yaml").exists()
has_model = bool(list(item.glob("**/*.sim")))
status = []
if has_config:
status.append("config")
if has_model:
status.append("model")
print(f" {item.name:<30} [{', '.join(status) or 'empty'}]")
# List active studies
print("\nActive Studies:")
for study_dir in sorted(studies_dir.iterdir()):
if (
study_dir.is_dir()
and not study_dir.name.startswith("_")
and not study_dir.name.startswith(".")
):
# Check status
has_spec = (study_dir / "atomizer_spec.json").exists() or (
study_dir / "optimization_config.json"
).exists()
has_db = (study_dir / "3_results" / "study.db").exists() or (
study_dir / "2_results" / "study.db"
).exists()
has_approval = (study_dir / ".validation_approved").exists()
status = []
if has_spec:
status.append("configured")
if has_approval:
status.append("approved")
if has_db:
status.append("has_results")
print(f" {study_dir.name:<30} [{', '.join(status) or 'new'}]")
return 0
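The status tags printed by `cmd_list` are derived purely from marker files on disk. A minimal stand-alone sketch of that derivation (the helper name `study_status` is hypothetical; it mirrors only the primary marker paths, not the legacy fallbacks):

```python
import tempfile
from pathlib import Path

def study_status(study_dir: Path) -> str:
    # Presence of marker files determines the displayed status tags,
    # matching the flag logic in cmd_list.
    flags = []
    if (study_dir / "atomizer_spec.json").exists():
        flags.append("configured")
    if (study_dir / ".validation_approved").exists():
        flags.append("approved")
    if (study_dir / "3_results" / "study.db").exists():
        flags.append("has_results")
    return ", ".join(flags) or "new"

with tempfile.TemporaryDirectory() as tmp:
    study = Path(tmp) / "bracket_mass_min"
    study.mkdir()
    (study / "atomizer_spec.json").write_text("{}")
    print(study_status(study))  # configured
```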
def cmd_finalize(args):
"""Generate final report for a study."""
print(f"Finalize command not yet implemented for: {args.study}")
print("This will generate the interactive HTML report.")
return 0
def create_parser() -> argparse.ArgumentParser:
"""Create the argument parser."""
parser = argparse.ArgumentParser(
prog="atomizer",
description="Atomizer - FEA Optimization Command Line Interface",
)
parser.add_argument("-v", "--verbose", action="store_true", help="Verbose output")
parser.add_argument("-q", "--quiet", action="store_true", help="Minimal output")
subparsers = parser.add_subparsers(dest="command", help="Available commands")
# intake command
intake_parser = subparsers.add_parser("intake", help="Process an intake folder")
intake_parser.add_argument("folder", help="Path to intake folder")
intake_parser.add_argument("--skip-baseline", action="store_true", help="Skip baseline solve")
intake_parser.add_argument(
"--interview", action="store_true", help="Continue to interview mode"
)
intake_parser.add_argument("--canvas", action="store_true", help="Open in canvas mode")
intake_parser.set_defaults(func=cmd_intake)
# validate command
validate_parser = subparsers.add_parser("validate", help="Validate a study")
validate_parser.add_argument("study", help="Study name or path")
validate_parser.add_argument("--skip-trials", action="store_true", help="Skip test trials")
validate_parser.add_argument("--trials", type=int, default=3, help="Number of test trials")
validate_parser.add_argument(
"--approve", action="store_true", help="Approve if validation passes"
)
validate_parser.set_defaults(func=cmd_validate)
# list command
list_parser = subparsers.add_parser("list", help="List studies")
list_parser.set_defaults(func=cmd_list)
# finalize command
finalize_parser = subparsers.add_parser("finalize", help="Generate final report")
finalize_parser.add_argument("study", help="Study name or path")
finalize_parser.add_argument("--format", choices=["html", "pdf", "all"], default="html")
finalize_parser.set_defaults(func=cmd_finalize)
return parser
def main(args=None):
"""Main entry point."""
parser = create_parser()
parsed_args = parser.parse_args(args)
setup_logging(getattr(parsed_args, "verbose", False))
if parsed_args.command is None:
parser.print_help()
return 0
return parsed_args.func(parsed_args)
# For typer/click compatibility
app = main
if __name__ == "__main__":
sys.exit(main())
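The CLI is a standard argparse subcommand tree, so parsing composes predictably. A trimmed, hypothetical mirror of `create_parser()` (only a subset of the real options is reproduced here):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Sketch of the atomizer parser: global flags on the root parser,
    # per-command options on each subparser.
    parser = argparse.ArgumentParser(prog="atomizer")
    parser.add_argument("-v", "--verbose", action="store_true")
    sub = parser.add_subparsers(dest="command")
    validate = sub.add_parser("validate")
    validate.add_argument("study")
    validate.add_argument("--approve", action="store_true")
    validate.add_argument("--trials", type=int, default=3)
    return parser

args = build_parser().parse_args(["validate", "bracket_study", "--approve"])
print(args.command, args.study, args.approve, args.trials)  # validate bracket_study True 3
```

Note that with `dest="command"` and no subcommand given, `args.command` is `None`, which is what lets `main()` fall through to `parser.print_help()`.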

@@ -7,7 +7,7 @@ They provide validation and type safety for the unified configuration system.
from datetime import datetime
from enum import Enum
from typing import Any, Dict, List, Literal, Optional, Union
from typing import Any, Dict, List, Literal, Optional, Tuple, Union
from pydantic import BaseModel, Field, field_validator, model_validator
import re
@@ -16,17 +16,34 @@ import re
# Enums
# ============================================================================
class SpecCreatedBy(str, Enum):
"""Who/what created the spec."""
CANVAS = "canvas"
CLAUDE = "claude"
API = "api"
MIGRATION = "migration"
MANUAL = "manual"
DASHBOARD_INTAKE = "dashboard_intake"
class SpecStatus(str, Enum):
"""Study lifecycle status."""
DRAFT = "draft"
INTROSPECTED = "introspected"
CONFIGURED = "configured"
VALIDATED = "validated"
READY = "ready"
RUNNING = "running"
COMPLETED = "completed"
FAILED = "failed"
class SolverType(str, Enum):
"""Supported solver types."""
NASTRAN = "nastran"
NX_NASTRAN = "NX_Nastran"
ABAQUS = "abaqus"
@@ -34,6 +51,7 @@ class SolverType(str, Enum):
class SubcaseType(str, Enum):
"""Subcase analysis types."""
STATIC = "static"
MODAL = "modal"
THERMAL = "thermal"
@@ -42,6 +60,7 @@ class SubcaseType(str, Enum):
class DesignVariableType(str, Enum):
"""Design variable types."""
CONTINUOUS = "continuous"
INTEGER = "integer"
CATEGORICAL = "categorical"
@@ -49,6 +68,7 @@ class DesignVariableType(str, Enum):
class ExtractorType(str, Enum):
"""Physics extractor types."""
DISPLACEMENT = "displacement"
FREQUENCY = "frequency"
STRESS = "stress"
@@ -62,18 +82,21 @@ class ExtractorType(str, Enum):
class OptimizationDirection(str, Enum):
"""Optimization direction."""
MINIMIZE = "minimize"
MAXIMIZE = "maximize"
class ConstraintType(str, Enum):
"""Constraint types."""
HARD = "hard"
SOFT = "soft"
class ConstraintOperator(str, Enum):
"""Constraint comparison operators."""
LE = "<="
GE = ">="
LT = "<"
@@ -83,6 +106,7 @@ class ConstraintOperator(str, Enum):
class PenaltyMethod(str, Enum):
"""Penalty methods for constraints."""
LINEAR = "linear"
QUADRATIC = "quadratic"
EXPONENTIAL = "exponential"
@@ -90,6 +114,7 @@ class PenaltyMethod(str, Enum):
class AlgorithmType(str, Enum):
"""Optimization algorithm types."""
TPE = "TPE"
CMA_ES = "CMA-ES"
NSGA_II = "NSGA-II"
@@ -100,6 +125,7 @@ class AlgorithmType(str, Enum):
class SurrogateType(str, Enum):
"""Surrogate model types."""
MLP = "MLP"
GNN = "GNN"
ENSEMBLE = "ensemble"
@@ -109,58 +135,104 @@ class SurrogateType(str, Enum):
# Position Model
# ============================================================================
class CanvasPosition(BaseModel):
"""Canvas position for nodes."""
x: float = 0
y: float = 0
# ============================================================================
# Introspection Models (for intake workflow)
# ============================================================================
class ExpressionInfo(BaseModel):
"""Information about an NX expression from introspection."""
name: str = Field(..., description="Expression name in NX")
value: Optional[float] = Field(default=None, description="Current value")
units: Optional[str] = Field(default=None, description="Physical units")
formula: Optional[str] = Field(default=None, description="Expression formula if any")
is_candidate: bool = Field(
default=False, description="Whether this is a design variable candidate"
)
confidence: float = Field(
default=0.0, ge=0.0, le=1.0, description="Confidence that this is a DV"
)
class BaselineData(BaseModel):
"""Results from baseline FEA solve."""
timestamp: datetime = Field(..., description="When baseline was run")
solve_time_seconds: float = Field(..., description="How long the solve took")
mass_kg: Optional[float] = Field(default=None, description="Computed mass from BDF/FEM")
max_displacement_mm: Optional[float] = Field(
default=None, description="Max displacement result"
)
max_stress_mpa: Optional[float] = Field(default=None, description="Max von Mises stress")
success: bool = Field(default=True, description="Whether baseline solve succeeded")
error: Optional[str] = Field(default=None, description="Error message if failed")
class IntrospectionData(BaseModel):
"""Model introspection results stored in the spec."""
timestamp: datetime = Field(..., description="When introspection was run")
solver_type: Optional[str] = Field(default=None, description="Detected solver type")
mass_kg: Optional[float] = Field(
default=None, description="Mass from expressions or properties"
)
volume_mm3: Optional[float] = Field(default=None, description="Volume from mass properties")
expressions: List[ExpressionInfo] = Field(
default_factory=list, description="Discovered expressions"
)
baseline: Optional[BaselineData] = Field(default=None, description="Baseline solve results")
warnings: List[str] = Field(default_factory=list, description="Warnings from introspection")
def get_design_candidates(self) -> List[ExpressionInfo]:
"""Return expressions marked as design variable candidates."""
return [e for e in self.expressions if e.is_candidate]
# ============================================================================
# Meta Models
# ============================================================================
class SpecMeta(BaseModel):
"""Metadata about the spec."""
version: str = Field(
...,
pattern=r"^2\.\d+$",
description="Schema version (e.g., '2.0')"
)
created: Optional[datetime] = Field(
default=None,
description="When the spec was created"
)
version: str = Field(..., pattern=r"^2\.\d+$", description="Schema version (e.g., '2.0')")
created: Optional[datetime] = Field(default=None, description="When the spec was created")
modified: Optional[datetime] = Field(
default=None,
description="When the spec was last modified"
default=None, description="When the spec was last modified"
)
created_by: Optional[SpecCreatedBy] = Field(
default=None,
description="Who/what created the spec"
)
modified_by: Optional[str] = Field(
default=None,
description="Who/what last modified the spec"
default=None, description="Who/what created the spec"
)
modified_by: Optional[str] = Field(default=None, description="Who/what last modified the spec")
study_name: str = Field(
...,
min_length=3,
max_length=100,
pattern=r"^[a-z0-9_]+$",
description="Unique study identifier (snake_case)"
description="Unique study identifier (snake_case)",
)
description: Optional[str] = Field(
default=None,
max_length=1000,
description="Human-readable description"
)
tags: Optional[List[str]] = Field(
default=None,
description="Tags for categorization"
default=None, max_length=1000, description="Human-readable description"
)
tags: Optional[List[str]] = Field(default=None, description="Tags for categorization")
engineering_context: Optional[str] = Field(
default=None, description="Real-world engineering context"
)
status: SpecStatus = Field(default=SpecStatus.DRAFT, description="Study lifecycle status")
topic: Optional[str] = Field(
default=None,
description="Real-world engineering context"
pattern=r"^[A-Za-z0-9_]+$",
description="Topic folder for grouping related studies",
)
@@ -168,15 +240,20 @@ class SpecMeta(BaseModel):
# Model Configuration Models
# ============================================================================
class NxPartConfig(BaseModel):
"""NX geometry part file configuration."""
path: Optional[str] = Field(default=None, description="Path to .prt file")
hash: Optional[str] = Field(default=None, description="File hash for change detection")
idealized_part: Optional[str] = Field(default=None, description="Idealized part filename (_i.prt)")
idealized_part: Optional[str] = Field(
default=None, description="Idealized part filename (_i.prt)"
)
class FemConfig(BaseModel):
"""FEM mesh file configuration."""
path: Optional[str] = Field(default=None, description="Path to .fem file")
element_count: Optional[int] = Field(default=None, description="Number of elements")
node_count: Optional[int] = Field(default=None, description="Number of nodes")
@@ -184,6 +261,7 @@ class FemConfig(BaseModel):
class Subcase(BaseModel):
"""Simulation subcase definition."""
id: int
name: Optional[str] = None
type: Optional[SubcaseType] = None
@@ -191,18 +269,18 @@ class Subcase(BaseModel):
class SimConfig(BaseModel):
"""Simulation file configuration."""
path: str = Field(..., description="Path to .sim file")
solver: SolverType = Field(..., description="Solver type")
solution_type: Optional[str] = Field(
default=None,
pattern=r"^SOL\d+$",
description="Solution type (e.g., SOL101)"
default=None, pattern=r"^SOL\d+$", description="Solution type (e.g., SOL101)"
)
subcases: Optional[List[Subcase]] = Field(default=None, description="Defined subcases")
class NxSettings(BaseModel):
"""NX runtime settings."""
nx_install_path: Optional[str] = None
simulation_timeout_s: Optional[int] = Field(default=None, ge=60, le=7200)
auto_start_nx: Optional[bool] = None
@@ -210,23 +288,31 @@ class NxSettings(BaseModel):
class ModelConfig(BaseModel):
"""NX model files and configuration."""
nx_part: Optional[NxPartConfig] = None
fem: Optional[FemConfig] = None
sim: SimConfig
sim: Optional[SimConfig] = Field(
default=None, description="Simulation file config (required for optimization)"
)
nx_settings: Optional[NxSettings] = None
introspection: Optional[IntrospectionData] = Field(
default=None, description="Model introspection results from intake"
)
# ============================================================================
# Design Variable Models
# ============================================================================
class DesignVariableBounds(BaseModel):
"""Design variable bounds."""
min: float
max: float
@model_validator(mode='after')
def validate_bounds(self) -> 'DesignVariableBounds':
@model_validator(mode="after")
def validate_bounds(self) -> "DesignVariableBounds":
if self.min >= self.max:
raise ValueError(f"min ({self.min}) must be less than max ({self.max})")
return self
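The bounds check above runs after field parsing via pydantic's `model_validator(mode="after")`. The same invariant can be sketched without a pydantic dependency using a plain dataclass (illustrative only; the real model also gets pydantic's type coercion and error reporting):

```python
from dataclasses import dataclass

@dataclass
class Bounds:
    min: float
    max: float

    def __post_init__(self):
        # Same invariant the model_validator enforces: strict min < max,
        # so equal bounds are rejected too.
        if self.min >= self.max:
            raise ValueError(f"min ({self.min}) must be less than max ({self.max})")

Bounds(1.0, 5.0)        # ok
try:
    Bounds(5.0, 5.0)    # equal bounds rejected
except ValueError as e:
    print(e)
```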
@@ -234,16 +320,13 @@ class DesignVariableBounds(BaseModel):
class DesignVariable(BaseModel):
"""A design variable to optimize."""
id: str = Field(
...,
pattern=r"^dv_\d{3}$",
description="Unique identifier (pattern: dv_XXX)"
)
id: str = Field(..., pattern=r"^dv_\d{3}$", description="Unique identifier (pattern: dv_XXX)")
name: str = Field(..., description="Human-readable name")
expression_name: str = Field(
...,
pattern=r"^[a-zA-Z_][a-zA-Z0-9_]*$",
description="NX expression name (must match model)"
description="NX expression name (must match model)",
)
type: DesignVariableType = Field(..., description="Variable type")
bounds: DesignVariableBounds = Field(..., description="Value bounds")
@@ -259,8 +342,10 @@ class DesignVariable(BaseModel):
# Extractor Models
# ============================================================================
class ExtractorConfig(BaseModel):
"""Type-specific extractor configuration."""
inner_radius_mm: Optional[float] = None
outer_radius_mm: Optional[float] = None
n_modes: Optional[int] = None
@@ -279,6 +364,7 @@ class ExtractorConfig(BaseModel):
class CustomFunction(BaseModel):
"""Custom function definition for custom_function extractors."""
name: Optional[str] = Field(default=None, description="Function name")
module: Optional[str] = Field(default=None, description="Python module path")
signature: Optional[str] = Field(default=None, description="Function signature")
@@ -287,32 +373,33 @@ class CustomFunction(BaseModel):
class ExtractorOutput(BaseModel):
"""Output definition for an extractor."""
name: str = Field(..., description="Output name (used by objectives/constraints)")
metric: Optional[str] = Field(default=None, description="Specific metric (max, total, rms, etc.)")
metric: Optional[str] = Field(
default=None, description="Specific metric (max, total, rms, etc.)"
)
subcase: Optional[int] = Field(default=None, description="Subcase ID for this output")
units: Optional[str] = None
class Extractor(BaseModel):
"""Physics extractor that computes outputs from FEA."""
id: str = Field(
...,
pattern=r"^ext_\d{3}$",
description="Unique identifier (pattern: ext_XXX)"
)
id: str = Field(..., pattern=r"^ext_\d{3}$", description="Unique identifier (pattern: ext_XXX)")
name: str = Field(..., description="Human-readable name")
type: ExtractorType = Field(..., description="Extractor type")
builtin: bool = Field(default=True, description="Whether this is a built-in extractor")
config: Optional[ExtractorConfig] = Field(default=None, description="Type-specific configuration")
config: Optional[ExtractorConfig] = Field(
default=None, description="Type-specific configuration"
)
function: Optional[CustomFunction] = Field(
default=None,
description="Custom function definition (for custom_function type)"
default=None, description="Custom function definition (for custom_function type)"
)
outputs: List[ExtractorOutput] = Field(..., min_length=1, description="Output values")
canvas_position: Optional[CanvasPosition] = None
@model_validator(mode='after')
def validate_custom_function(self) -> 'Extractor':
@model_validator(mode="after")
def validate_custom_function(self) -> "Extractor":
if self.type == ExtractorType.CUSTOM_FUNCTION and self.function is None:
raise ValueError("custom_function extractor requires function definition")
return self
@@ -322,19 +409,18 @@ class Extractor(BaseModel):
# Objective Models
# ============================================================================
class ObjectiveSource(BaseModel):
"""Source reference for objective value."""
extractor_id: str = Field(..., description="Reference to extractor")
output_name: str = Field(..., description="Which output from the extractor")
class Objective(BaseModel):
"""Optimization objective."""
id: str = Field(
...,
pattern=r"^obj_\d{3}$",
description="Unique identifier (pattern: obj_XXX)"
)
id: str = Field(..., pattern=r"^obj_\d{3}$", description="Unique identifier (pattern: obj_XXX)")
name: str = Field(..., description="Human-readable name")
direction: OptimizationDirection = Field(..., description="Optimization direction")
weight: float = Field(default=1.0, ge=0, description="Weight for weighted sum")
@@ -349,14 +435,17 @@ class Objective(BaseModel):
# Constraint Models
# ============================================================================
class ConstraintSource(BaseModel):
"""Source reference for constraint value."""
extractor_id: str
output_name: str
class PenaltyConfig(BaseModel):
"""Penalty method configuration for constraints."""
method: Optional[PenaltyMethod] = None
weight: Optional[float] = None
margin: Optional[float] = Field(default=None, description="Soft margin before penalty kicks in")
@@ -364,11 +453,8 @@ class PenaltyConfig(BaseModel):
class Constraint(BaseModel):
"""Hard or soft constraint."""
id: str = Field(
...,
pattern=r"^con_\d{3}$",
description="Unique identifier (pattern: con_XXX)"
)
id: str = Field(..., pattern=r"^con_\d{3}$", description="Unique identifier (pattern: con_XXX)")
name: str
type: ConstraintType = Field(..., description="Constraint type")
operator: ConstraintOperator = Field(..., description="Comparison operator")
@@ -383,8 +469,10 @@ class Constraint(BaseModel):
# Optimization Models
# ============================================================================
class AlgorithmConfig(BaseModel):
"""Algorithm-specific settings."""
population_size: Optional[int] = None
n_generations: Optional[int] = None
mutation_prob: Optional[float] = None
@@ -399,22 +487,24 @@ class AlgorithmConfig(BaseModel):
class Algorithm(BaseModel):
"""Optimization algorithm configuration."""
type: AlgorithmType
config: Optional[AlgorithmConfig] = None
class OptimizationBudget(BaseModel):
"""Computational budget for optimization."""
max_trials: Optional[int] = Field(default=None, ge=1, le=10000)
max_time_hours: Optional[float] = None
convergence_patience: Optional[int] = Field(
default=None,
description="Stop if no improvement for N trials"
default=None, description="Stop if no improvement for N trials"
)
class SurrogateConfig(BaseModel):
"""Neural surrogate model configuration."""
n_models: Optional[int] = None
architecture: Optional[List[int]] = None
train_every_n_trials: Optional[int] = None
@@ -425,6 +515,7 @@ class SurrogateConfig(BaseModel):
class Surrogate(BaseModel):
"""Surrogate model settings."""
enabled: Optional[bool] = None
type: Optional[SurrogateType] = None
config: Optional[SurrogateConfig] = None
@@ -432,6 +523,7 @@ class Surrogate(BaseModel):
class OptimizationConfig(BaseModel):
"""Optimization algorithm configuration."""
algorithm: Algorithm
budget: OptimizationBudget
surrogate: Optional[Surrogate] = None
@@ -442,8 +534,10 @@ class OptimizationConfig(BaseModel):
# Workflow Models
# ============================================================================
class WorkflowStage(BaseModel):
"""A stage in a multi-stage optimization workflow."""
id: str
name: str
algorithm: Optional[str] = None
@@ -453,6 +547,7 @@ class WorkflowStage(BaseModel):
class WorkflowTransition(BaseModel):
"""Transition between workflow stages."""
from_: str = Field(..., alias="from")
to: str
condition: Optional[str] = None
@@ -463,6 +558,7 @@ class WorkflowTransition(BaseModel):
class Workflow(BaseModel):
"""Multi-stage optimization workflow."""
stages: Optional[List[WorkflowStage]] = None
transitions: Optional[List[WorkflowTransition]] = None
@@ -471,8 +567,10 @@ class Workflow(BaseModel):
# Reporting Models
# ============================================================================
class InsightConfig(BaseModel):
"""Insight-specific configuration."""
include_html: Optional[bool] = None
show_pareto_evolution: Optional[bool] = None
@@ -482,6 +580,7 @@ class InsightConfig(BaseModel):
class Insight(BaseModel):
"""Reporting insight definition."""
type: Optional[str] = None
for_trials: Optional[str] = None
config: Optional[InsightConfig] = None
@@ -489,6 +588,7 @@ class Insight(BaseModel):
class ReportingConfig(BaseModel):
"""Reporting configuration."""
auto_report: Optional[bool] = None
report_triggers: Optional[List[str]] = None
insights: Optional[List[Insight]] = None
@@ -498,8 +598,10 @@ class ReportingConfig(BaseModel):
# Canvas Models
# ============================================================================
class CanvasViewport(BaseModel):
"""Canvas viewport settings."""
x: float = 0
y: float = 0
zoom: float = 1.0
@@ -507,6 +609,7 @@ class CanvasViewport(BaseModel):
class CanvasEdge(BaseModel):
"""Connection between canvas nodes."""
source: str
target: str
sourceHandle: Optional[str] = None
@@ -515,6 +618,7 @@ class CanvasEdge(BaseModel):
class CanvasGroup(BaseModel):
"""Grouping of canvas nodes."""
id: str
name: str
node_ids: List[str]
@@ -522,6 +626,7 @@ class CanvasGroup(BaseModel):
class CanvasConfig(BaseModel):
"""Canvas UI state (persisted for reconstruction)."""
layout_version: Optional[str] = None
viewport: Optional[CanvasViewport] = None
edges: Optional[List[CanvasEdge]] = None
@@ -532,6 +637,7 @@ class CanvasConfig(BaseModel):
# Main AtomizerSpec Model
# ============================================================================
class AtomizerSpec(BaseModel):
"""
AtomizerSpec v2.0 - The unified configuration schema for Atomizer optimization studies.
@@ -542,36 +648,32 @@ class AtomizerSpec(BaseModel):
- Claude Assistant (reading and modifying)
- Optimization Engine (execution)
"""
meta: SpecMeta = Field(..., description="Metadata about the spec")
model: ModelConfig = Field(..., description="NX model files and configuration")
design_variables: List[DesignVariable] = Field(
...,
min_length=1,
default_factory=list,
max_length=50,
description="Design variables to optimize"
description="Design variables to optimize (required for running)",
)
extractors: List[Extractor] = Field(
...,
min_length=1,
description="Physics extractors"
default_factory=list, description="Physics extractors (required for running)"
)
objectives: List[Objective] = Field(
...,
min_length=1,
default_factory=list,
max_length=5,
description="Optimization objectives"
description="Optimization objectives (required for running)",
)
constraints: Optional[List[Constraint]] = Field(
default=None,
description="Hard and soft constraints"
default=None, description="Hard and soft constraints"
)
optimization: OptimizationConfig = Field(..., description="Algorithm configuration")
workflow: Optional[Workflow] = Field(default=None, description="Multi-stage workflow")
reporting: Optional[ReportingConfig] = Field(default=None, description="Reporting config")
canvas: Optional[CanvasConfig] = Field(default=None, description="Canvas UI state")
@model_validator(mode='after')
def validate_references(self) -> 'AtomizerSpec':
@model_validator(mode="after")
def validate_references(self) -> "AtomizerSpec":
"""Validate that all references are valid."""
# Collect valid extractor IDs and their outputs
extractor_outputs: Dict[str, set] = {}
@@ -638,13 +740,44 @@ class AtomizerSpec(BaseModel):
"""Check if this is a multi-objective optimization."""
return len(self.objectives) > 1
def is_ready_for_optimization(self) -> Tuple[bool, List[str]]:
"""
Check if spec is complete enough to run optimization.
Returns:
Tuple of (is_ready, list of missing requirements)
"""
missing = []
# Check required fields for optimization
if not self.model.sim:
missing.append("No simulation file (.sim) configured")
if not self.design_variables:
missing.append("No design variables defined")
if not self.extractors:
missing.append("No extractors defined")
if not self.objectives:
missing.append("No objectives defined")
# Check that enabled DVs have valid bounds
for dv in self.get_enabled_design_variables():
if dv.bounds.min >= dv.bounds.max:
missing.append(f"Design variable '{dv.name}' has invalid bounds")
return len(missing) == 0, missing
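The readiness gate returns a `(bool, reasons)` pair rather than failing fast, so the CLI can show the user every missing requirement at once. A minimal stand-alone sketch of that pattern (hypothetical function and parameter names, no pydantic dependency):

```python
from typing import List, Tuple

def readiness(sim_configured: bool, n_dvs: int,
              n_extractors: int, n_objectives: int) -> Tuple[bool, List[str]]:
    # Accumulate every missing requirement instead of raising on the first.
    missing = []
    if not sim_configured:
        missing.append("No simulation file (.sim) configured")
    if n_dvs == 0:
        missing.append("No design variables defined")
    if n_extractors == 0:
        missing.append("No extractors defined")
    if n_objectives == 0:
        missing.append("No objectives defined")
    return len(missing) == 0, missing

ok, reasons = readiness(True, 2, 1, 1)
print(ok, reasons)  # True []
```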
# ============================================================================
# Validation Response Models
# ============================================================================
class ValidationError(BaseModel):
"""A validation error."""
type: str # 'schema', 'semantic', 'reference'
path: List[str]
message: str
@@ -652,6 +785,7 @@ class ValidationError(BaseModel):
class ValidationWarning(BaseModel):
"""A validation warning."""
type: str
path: List[str]
message: str
@@ -659,6 +793,7 @@ class ValidationWarning(BaseModel):
class ValidationSummary(BaseModel):
"""Summary of spec contents."""
design_variables: int
extractors: int
objectives: int
@@ -668,6 +803,7 @@ class ValidationSummary(BaseModel):
class ValidationReport(BaseModel):
"""Full validation report."""
valid: bool
errors: List[ValidationError]
warnings: List[ValidationWarning]

@@ -65,6 +65,16 @@ from optimization_engine.extractors.extract_zernike_figure import (
extract_zernike_figure_rms,
)
# Displacement extraction
from optimization_engine.extractors.extract_displacement import (
extract_displacement,
)
# Mass extraction from BDF
from optimization_engine.extractors.extract_mass_from_bdf import (
extract_mass_from_bdf,
)
# Part mass and material extractor (from NX .prt files)
from optimization_engine.extractors.extract_part_mass_material import (
extract_part_mass_material,
@@ -145,72 +155,76 @@ from optimization_engine.extractors.spec_extractor_builder import (
)
__all__ = [
# Displacement extraction
"extract_displacement",
# Mass extraction (from BDF)
"extract_mass_from_bdf",
# Part mass & material (from .prt)
'extract_part_mass_material',
'extract_part_mass',
'extract_part_material',
'PartMassExtractor',
"extract_part_mass_material",
"extract_part_mass",
"extract_part_material",
"PartMassExtractor",
# Stress extractors
'extract_solid_stress',
'extract_principal_stress',
'extract_max_principal_stress',
'extract_min_principal_stress',
"extract_solid_stress",
"extract_principal_stress",
"extract_max_principal_stress",
"extract_min_principal_stress",
# Strain energy
'extract_strain_energy',
'extract_total_strain_energy',
'extract_strain_energy_density',
"extract_strain_energy",
"extract_total_strain_energy",
"extract_strain_energy_density",
# SPC forces / reactions
'extract_spc_forces',
'extract_total_reaction_force',
'extract_reaction_component',
'check_force_equilibrium',
"extract_spc_forces",
"extract_total_reaction_force",
"extract_reaction_component",
"check_force_equilibrium",
# Zernike (telescope mirrors) - Standard Z-only method
'ZernikeExtractor',
'extract_zernike_from_op2',
'extract_zernike_filtered_rms',
'extract_zernike_relative_rms',
"ZernikeExtractor",
"extract_zernike_from_op2",
"extract_zernike_filtered_rms",
"extract_zernike_relative_rms",
# Zernike OPD (RECOMMENDED - uses actual geometry, no shape assumption)
# Supports annular apertures via inner_radius parameter
'ZernikeOPDExtractor',
'extract_zernike_opd',
'extract_zernike_opd_filtered_rms',
'compute_zernike_coefficients_annular',
"ZernikeOPDExtractor",
"extract_zernike_opd",
"extract_zernike_opd_filtered_rms",
"compute_zernike_coefficients_annular",
# Zernike Analytic (parabola-based with lateral displacement correction)
'ZernikeAnalyticExtractor',
'extract_zernike_analytic',
'extract_zernike_analytic_filtered_rms',
'compare_zernike_methods',
"ZernikeAnalyticExtractor",
"extract_zernike_analytic",
"extract_zernike_analytic_filtered_rms",
"compare_zernike_methods",
# Backwards compatibility (deprecated)
'ZernikeFigureExtractor',
'extract_zernike_figure',
'extract_zernike_figure_rms',
"ZernikeFigureExtractor",
"extract_zernike_figure",
"extract_zernike_figure_rms",
# Temperature (Phase 3 - thermal)
'extract_temperature',
'extract_temperature_gradient',
'extract_heat_flux',
'get_max_temperature',
"extract_temperature",
"extract_temperature_gradient",
"extract_heat_flux",
"get_max_temperature",
# Modal mass (Phase 3 - dynamics)
'extract_modal_mass',
'extract_frequencies',
'get_first_frequency',
'get_modal_mass_ratio',
"extract_modal_mass",
"extract_frequencies",
"get_first_frequency",
"get_modal_mass_ratio",
# Part introspection (Phase 4)
'introspect_part',
'get_expressions_dict',
'get_expression_value',
'print_introspection_summary',
"introspect_part",
"get_expressions_dict",
"get_expression_value",
"print_introspection_summary",
# Custom extractor loader (Phase 5)
'CustomExtractor',
'CustomExtractorLoader',
'CustomExtractorContext',
'ExtractorSecurityError',
'ExtractorValidationError',
'load_custom_extractors',
'execute_custom_extractor',
'validate_custom_extractor',
"CustomExtractor",
"CustomExtractorLoader",
"CustomExtractorContext",
"ExtractorSecurityError",
"ExtractorValidationError",
"load_custom_extractors",
"execute_custom_extractor",
"validate_custom_extractor",
# Spec extractor builder
'SpecExtractorBuilder',
'build_extractors_from_spec',
'get_extractor_outputs',
'list_available_builtin_extractors',
"SpecExtractorBuilder",
"build_extractors_from_spec",
"get_extractor_outputs",
"list_available_builtin_extractors",
]

@@ -1,26 +1,30 @@
 """
-Extract mass from Nastran BDF/DAT file as fallback when OP2 doesn't have GRDPNT
+Extract mass from Nastran BDF/DAT file.
+
+This module provides a simple wrapper around the BDFMassExtractor class.
 """
 from pathlib import Path
 from typing import Dict, Any
-import re
+
+from optimization_engine.extractors.bdf_mass_extractor import BDFMassExtractor
 
 
 def extract_mass_from_bdf(bdf_file: Path) -> Dict[str, Any]:
     """
-    Extract mass from Nastran BDF file by parsing material and element definitions.
-    This is a fallback when OP2 doesn't have PARAM,GRDPNT output.
+    Extract mass from Nastran BDF file.
 
     Args:
         bdf_file: Path to .dat or .bdf file
 
     Returns:
         dict: {
-            'mass_kg': total mass in kg,
-            'mass_g': total mass in grams,
-            'method': 'bdf_calculation'
+            'total_mass': mass in kg (primary key),
+            'mass_kg': mass in kg,
+            'mass_g': mass in grams,
+            'cg': center of gravity [x, y, z],
+            'num_elements': number of elements,
+            'breakdown': mass by element type
         }
     """
     bdf_file = Path(bdf_file)
@@ -28,35 +32,23 @@ def extract_mass_from_bdf(bdf_file: Path) -> Dict[str, Any]:
     if not bdf_file.exists():
         raise FileNotFoundError(f"BDF file not found: {bdf_file}")
 
-    # Parse using pyNastran BDF reader
-    from pyNastran.bdf.bdf import read_bdf
+    extractor = BDFMassExtractor(str(bdf_file))
+    result = extractor.extract_mass()
 
-    model = read_bdf(str(bdf_file), validate=False, xref=True, punch=False,
-                     encoding='utf-8', log=None, debug=False, mode='msc')
+    # Add 'total_mass' as primary key for compatibility
+    result["total_mass"] = result["mass_kg"]
 
-    # Calculate total mass by summing element masses
-    # model.mass_properties() returns (mass, cg, inertia)
-    mass_properties = model.mass_properties()
-    mass_ton = mass_properties[0]  # Mass in tons (ton-mm-sec)
-
-    # NX Nastran typically uses ton-mm-sec units
-    mass_kg = mass_ton * 1000.0  # Convert tons to kg
-    mass_g = mass_kg * 1000.0  # Convert kg to grams
-
-    return {
-        'mass_kg': mass_kg,
-        'mass_g': mass_g,
-        'mass_ton': mass_ton,
-        'method': 'bdf_calculation',
-        'units': 'ton-mm-sec (converted to kg/g)'
-    }
+    return result
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     import sys
     if len(sys.argv) > 1:
         bdf_file = Path(sys.argv[1])
         result = extract_mass_from_bdf(bdf_file)
         print(f"Mass from BDF: {result['mass_kg']:.6f} kg ({result['mass_g']:.3f} g)")
+        print(f"CG: {result['cg']}")
+        print(f"Elements: {result['num_elements']}")
     else:
         print(f"Usage: python {sys.argv[0]} <bdf_file>")
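The wrapper's compatibility shim (exposing `total_mass` alongside `mass_kg` so older callers keep working) can be sketched in isolation. `normalize_mass_result` is an illustrative helper, not part of the module:

```python
from typing import Any, Dict


def normalize_mass_result(result: Dict[str, Any]) -> Dict[str, Any]:
    """Mirror the wrapper's behavior: expose 'total_mass' as the primary
    key while keeping 'mass_kg'/'mass_g' for existing callers."""
    out = dict(result)  # copy so the extractor's dict is untouched
    out["total_mass"] = out["mass_kg"]
    out.setdefault("mass_g", out["mass_kg"] * 1000.0)  # kg -> g
    return out


print(normalize_mass_result({"mass_kg": 2.5})["total_mass"])  # -> 2.5
```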

View File

@@ -1,74 +1,86 @@
 """
-Extract maximum von Mises stress from structural analysis
-Auto-generated by Atomizer Phase 3 - pyNastran Research Agent
+Extract maximum von Mises stress from structural analysis.
 
-Pattern: solid_stress
-Element Type: CTETRA
-Result Type: stress
-API: model.ctetra_stress[subcase] or model.chexa_stress[subcase]
+Supports all solid element types (CTETRA, CHEXA, CPENTA, CPYRAM) and
+shell elements (CQUAD4, CTRIA3).
+
+Unit Note: NX Nastran in kg-mm-s outputs stress in kPa. This extractor
+converts to MPa (divide by 1000) for engineering use.
 """
 from pathlib import Path
-from typing import Dict, Any
+from typing import Dict, Any, Optional
+
+import numpy as np
+from pyNastran.op2.op2 import OP2
 
 
-def extract_solid_stress(op2_file: Path, subcase: int = 1, element_type: str = 'ctetra'):
-    """Extract stress from solid elements."""
-    from pyNastran.op2.op2 import OP2
-    import numpy as np
-
-    model = OP2()
+def extract_solid_stress(
+    op2_file: Path,
+    subcase: int = 1,
+    element_type: Optional[str] = None,
+    convert_to_mpa: bool = True,
+) -> Dict[str, Any]:
+    """
+    Extract maximum von Mises stress from solid elements.
+
+    Args:
+        op2_file: Path to OP2 results file
+        subcase: Subcase ID (default 1)
+        element_type: Specific element type to check ('ctetra', 'chexa', etc.)
+            If None, checks ALL solid element types and returns max.
+        convert_to_mpa: If True, divide by 1000 to convert kPa to MPa (default True)
+
+    Returns:
+        dict with 'max_von_mises' (in MPa if convert_to_mpa=True),
+        'max_stress_element', and 'element_type'
+    """
+    model = OP2(debug=False, log=None)
     model.read_op2(str(op2_file))
 
-    # Get stress object for element type
-    # Different element types have different stress attributes
-    stress_attr_map = {
-        'ctetra': 'ctetra_stress',
-        'chexa': 'chexa_stress',
-        'cquad4': 'cquad4_stress',
-        'ctria3': 'ctria3_stress'
-    }
+    # All solid element types to check
+    solid_element_types = ["ctetra", "chexa", "cpenta", "cpyram"]
+    shell_element_types = ["cquad4", "ctria3"]
 
-    stress_attr = stress_attr_map.get(element_type.lower())
-    if not stress_attr:
-        raise ValueError(f"Unknown element type: {element_type}")
-
-    # Access stress through op2_results container
-    # pyNastran structure: model.op2_results.stress.cquad4_stress[subcase]
-    stress_dict = None
-    if hasattr(model, 'op2_results') and hasattr(model.op2_results, 'stress'):
-        stress_container = model.op2_results.stress
-        if hasattr(stress_container, stress_attr):
-            stress_dict = getattr(stress_container, stress_attr)
-
-    if stress_dict is None:
-        raise ValueError(f"No {element_type} stress results in OP2. Available attributes: {[a for a in dir(model) if 'stress' in a.lower()]}")
-
-    # stress_dict is a dictionary with subcase IDs as keys
-    available_subcases = list(stress_dict.keys())
-    if not available_subcases:
-        raise ValueError(f"No stress data found in OP2 file")
-
-    # Use the specified subcase or first available
-    if subcase in available_subcases:
-        actual_subcase = subcase
+    # If specific element type requested, only check that one
+    if element_type:
+        element_types_to_check = [element_type.lower()]
     else:
-        actual_subcase = available_subcases[0]
+        # Check all solid types by default
+        element_types_to_check = solid_element_types
 
-    stress = stress_dict[actual_subcase]
+    if not hasattr(model, "op2_results") or not hasattr(model.op2_results, "stress"):
+        raise ValueError("No stress results in OP2 file")
 
-    # Extract von Mises if available
-    if stress.is_von_mises:  # Property, not method
-        # Different element types have von Mises at different column indices
-        # Shell elements (CQUAD4, CTRIA3): 8 columns, von Mises at column 7
-        # Solid elements (CTETRA, CHEXA): 10 columns, von Mises at column 9
+    itime = 0
+    stress_container = model.op2_results.stress
+
+    # Find max stress across all requested element types
+    max_stress = 0.0
+    max_stress_elem = 0
+    max_stress_type = None
+
+    for elem_type in element_types_to_check:
+        stress_attr = f"{elem_type}_stress"
+        if not hasattr(stress_container, stress_attr):
+            continue
+        stress_dict = getattr(stress_container, stress_attr)
+        if not stress_dict:
+            continue
+
+        # Get subcase
+        available_subcases = list(stress_dict.keys())
+        if not available_subcases:
+            continue
+        actual_subcase = subcase if subcase in available_subcases else available_subcases[0]
+        stress = stress_dict[actual_subcase]
+
+        if not stress.is_von_mises:
+            continue
+
+        # Determine von Mises column
+        # Shell elements (CQUAD4, CTRIA3): 8 columns, von Mises at column 7
+        # Solid elements (CTETRA, CHEXA): 10 columns, von Mises at column 9
         ncols = stress.data.shape[2]
         if ncols == 8:
-            # Shell elements - von Mises is last column
             von_mises_col = 7
@@ -76,27 +88,37 @@ def extract_solid_stress(op2_file: Path, subcase: int = 1, element_type: str = '
             # Solid elements - von Mises is column 9
             von_mises_col = 9
         else:
             # Unknown format, try last column
             von_mises_col = ncols - 1
 
-        itime = 0
         von_mises = stress.data[itime, :, von_mises_col]
-        max_stress = float(np.max(von_mises))
+        elem_max = float(np.max(von_mises))
 
-        # Get element info
-        element_ids = [eid for (eid, node) in stress.element_node]
-        max_stress_elem = element_ids[np.argmax(von_mises)]
+        if elem_max > max_stress:
+            max_stress = elem_max
+            element_ids = [eid for (eid, node) in stress.element_node]
+            max_stress_elem = int(element_ids[np.argmax(von_mises)])
+            max_stress_type = elem_type.upper()
 
-        return {
-            'max_von_mises': max_stress,
-            'max_stress_element': int(max_stress_elem)
-        }
-    else:
-        raise ValueError("von Mises stress not available")
+    if max_stress_type is None:
+        raise ValueError(f"No stress results found for element types: {element_types_to_check}")
+
+    # Convert from kPa to MPa (NX kg-mm-s unit system outputs kPa)
+    if convert_to_mpa:
+        max_stress = max_stress / 1000.0
+
+    return {
+        "max_von_mises": max_stress,
+        "max_stress_element": max_stress_elem,
+        "element_type": max_stress_type,
+        "units": "MPa" if convert_to_mpa else "kPa",
+    }
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
    # Example usage
     import sys
     if len(sys.argv) > 1:
         op2_file = Path(sys.argv[1])
         result = extract_solid_stress(op2_file)
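The column convention in the comments above (8-column shell tables vs 10-column solid tables, with an unknown-format fallback) can be exercised on its own. This is a minimal sketch of the same index logic, not the extractor itself:

```python
def von_mises_column(ncols: int) -> int:
    """Return the von Mises column index for a pyNastran stress data array.

    Shell element tables (CQUAD4, CTRIA3) carry 8 columns with von Mises
    last; solid element tables (CTETRA, CHEXA) carry 10 columns with von
    Mises at index 9. Anything else falls back to the last column.
    """
    if ncols == 8:
        return 7   # shell elements
    if ncols == 10:
        return 9   # solid elements
    return ncols - 1  # unknown layout: assume von Mises is last


print(von_mises_column(8), von_mises_column(10), von_mises_column(12))
# -> 7 9 11
```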

View File

@@ -473,23 +473,33 @@ def extract_displacements_by_subcase(
         ngt = darr.node_gridtype.astype(int)
         node_ids = ngt if ngt.ndim == 1 else ngt[:, 0]
 
-        # Try to identify subcase from subtitle or isubcase
+        # Try to identify subcase from subtitle, label, or isubcase
         subtitle = getattr(darr, 'subtitle', None)
+        op2_label = getattr(darr, 'label', None)
         isubcase = getattr(darr, 'isubcase', None)
 
-        # Extract numeric from subtitle
-        label = None
-        if isinstance(subtitle, str):
-            import re
+        # Extract numeric from subtitle first, then label, then isubcase
+        import re
+        subcase_id = None
+
+        # Priority 1: subtitle (e.g., "GRAVITY 20 DEG")
+        if isinstance(subtitle, str) and subtitle.strip():
             m = re.search(r'-?\d+', subtitle)
             if m:
-                label = m.group(0)
+                subcase_id = m.group(0)
 
-        if label is None and isinstance(isubcase, int):
-            label = str(isubcase)
+        # Priority 2: label field (e.g., "90 SUBCASE 1")
+        if subcase_id is None and isinstance(op2_label, str) and op2_label.strip():
+            m = re.search(r'-?\d+', op2_label)
+            if m:
+                subcase_id = m.group(0)
 
-        if label:
-            result[label] = {
+        # Priority 3: isubcase number
+        if subcase_id is None and isinstance(isubcase, int):
+            subcase_id = str(isubcase)
+
+        if subcase_id:
+            result[subcase_id] = {
                 'node_ids': node_ids.astype(int),
                 'disp': dmat.copy()
             }
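The three-tier fallback above (first numeric token in `subtitle`, then in `label`, then the `isubcase` number) can be sketched standalone. `resolve_subcase_id` is an illustrative name, not a function in the module:

```python
import re
from typing import Optional


def resolve_subcase_id(subtitle: Optional[str], label: Optional[str],
                       isubcase: Optional[int]) -> Optional[str]:
    """Resolve a subcase key: first numeric in subtitle, then in label,
    then fall back to the isubcase number."""
    for text in (subtitle, label):
        if isinstance(text, str) and text.strip():
            m = re.search(r'-?\d+', text)
            if m:
                return m.group(0)
    if isinstance(isubcase, int):
        return str(isubcase)
    return None


print(resolve_subcase_id("GRAVITY 20 DEG", None, 1))  # -> "20"
print(resolve_subcase_id(None, "90 SUBCASE 1", 1))    # -> "90"
print(resolve_subcase_id(None, None, 3))              # -> "3"
```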

View File

@@ -0,0 +1,46 @@
"""
Atomizer Intake System
======================
Provides structured intake processing for optimization studies.
Components:
- IntakeConfig: Pydantic schema for intake.yaml
- StudyContext: Complete assembled context for study creation
- IntakeProcessor: File handling and processing
- ContextAssembler: Combines all context sources
Usage:
from optimization_engine.intake import IntakeProcessor, IntakeConfig
processor = IntakeProcessor(inbox_folder)
context = processor.process()
"""
from .config import (
IntakeConfig,
StudyConfig,
ObjectiveConfig,
ConstraintConfig,
DesignVariableConfig,
BudgetConfig,
AlgorithmConfig,
MaterialConfig,
)
from .context import StudyContext, IntrospectionData, BaselineResult
from .processor import IntakeProcessor
__all__ = [
"IntakeConfig",
"StudyConfig",
"ObjectiveConfig",
"ConstraintConfig",
"DesignVariableConfig",
"BudgetConfig",
"AlgorithmConfig",
"MaterialConfig",
"StudyContext",
"IntrospectionData",
"BaselineResult",
"IntakeProcessor",
]

View File

@@ -0,0 +1,371 @@
"""
Intake Configuration Schema
===========================
Pydantic models for intake.yaml configuration files.
These models define the structure of pre-configuration that users can
provide to skip interview questions and speed up study setup.
"""
from __future__ import annotations
from pathlib import Path
from typing import Optional, List, Literal, Union, Any, Dict
from pydantic import BaseModel, Field, field_validator, model_validator
import yaml
class ObjectiveConfig(BaseModel):
    """Configuration for an optimization objective."""

    goal: Literal["minimize", "maximize"]
    target: str = Field(
        description="What to optimize: mass, displacement, stress, frequency, stiffness, or custom name"
    )
    weight: float = Field(default=1.0, ge=0.0, le=10.0)
    extractor: Optional[str] = Field(
        default=None, description="Custom extractor function name (auto-detected if not specified)"
    )

    @field_validator("target")
    @classmethod
    def validate_target(cls, v: str) -> str:
        """Normalize target names."""
        known_targets = {
            "mass",
            "weight",
            "displacement",
            "deflection",
            "stress",
            "frequency",
            "stiffness",
            "strain_energy",
            "volume",
        }
        normalized = v.lower().strip()
        # Map common aliases
        aliases = {
            "weight": "mass",
            "deflection": "displacement",
        }
        return aliases.get(normalized, normalized)
class ConstraintConfig(BaseModel):
    """Configuration for an optimization constraint."""

    type: str = Field(
        description="Constraint type: max_stress, max_displacement, min_frequency, etc."
    )
    threshold: float
    units: str = ""
    description: Optional[str] = None

    @field_validator("type")
    @classmethod
    def normalize_type(cls, v: str) -> str:
        """Normalize constraint type names."""
        return v.lower().strip().replace(" ", "_")
class DesignVariableConfig(BaseModel):
    """Configuration for a design variable."""

    name: str = Field(description="NX expression name")
    bounds: tuple[float, float] = Field(description="(min, max) bounds")
    units: Optional[str] = None
    description: Optional[str] = None
    step: Optional[float] = Field(default=None, description="Step size for discrete variables")

    @field_validator("bounds")
    @classmethod
    def validate_bounds(cls, v: tuple[float, float]) -> tuple[float, float]:
        """Ensure bounds are valid."""
        if len(v) != 2:
            raise ValueError("Bounds must be a tuple of (min, max)")
        if v[0] >= v[1]:
            raise ValueError(f"Lower bound ({v[0]}) must be less than upper bound ({v[1]})")
        return v

    @property
    def range(self) -> float:
        """Get the range of the design variable."""
        return self.bounds[1] - self.bounds[0]

    @property
    def range_ratio(self) -> float:
        """Get the ratio of upper to lower bound."""
        if self.bounds[0] == 0:
            return float("inf")
        return self.bounds[1] / self.bounds[0]
class BudgetConfig(BaseModel):
    """Configuration for optimization budget."""

    max_trials: int = Field(default=100, ge=1, le=10000)
    timeout_per_trial: int = Field(default=300, ge=10, le=7200, description="Seconds per FEA solve")
    target_runtime: Optional[str] = Field(
        default=None, description="Target total runtime (e.g., '2h', '30m')"
    )

    def get_target_runtime_seconds(self) -> Optional[int]:
        """Parse target_runtime string to seconds."""
        if not self.target_runtime:
            return None
        runtime = self.target_runtime.lower().strip()
        if runtime.endswith("h"):
            return int(float(runtime[:-1]) * 3600)
        elif runtime.endswith("m"):
            return int(float(runtime[:-1]) * 60)
        elif runtime.endswith("s"):
            return int(float(runtime[:-1]))
        else:
            # Assume seconds
            return int(float(runtime))
class AlgorithmConfig(BaseModel):
    """Configuration for optimization algorithm."""

    method: Literal["auto", "TPE", "CMA-ES", "NSGA-II", "random"] = "auto"
    neural_acceleration: bool = Field(
        default=False, description="Enable surrogate model for speedup"
    )
    priority: Literal["speed", "accuracy", "balanced"] = "balanced"
    seed: Optional[int] = Field(default=None, description="Random seed for reproducibility")
class MaterialConfig(BaseModel):
    """Configuration for material properties."""

    name: str
    yield_stress: Optional[float] = Field(default=None, ge=0, description="Yield stress in MPa")
    ultimate_stress: Optional[float] = Field(
        default=None, ge=0, description="Ultimate stress in MPa"
    )
    density: Optional[float] = Field(default=None, ge=0, description="Density in kg/m3")
    youngs_modulus: Optional[float] = Field(
        default=None, ge=0, description="Young's modulus in GPa"
    )
    poissons_ratio: Optional[float] = Field(
        default=None, ge=0, le=0.5, description="Poisson's ratio"
    )
class ObjectivesConfig(BaseModel):
    """Configuration for all objectives."""

    primary: ObjectiveConfig
    secondary: Optional[List[ObjectiveConfig]] = None

    @property
    def is_multi_objective(self) -> bool:
        """Check if this is a multi-objective problem."""
        return self.secondary is not None and len(self.secondary) > 0

    @property
    def all_objectives(self) -> List[ObjectiveConfig]:
        """Get all objectives as a flat list."""
        objectives = [self.primary]
        if self.secondary:
            objectives.extend(self.secondary)
        return objectives
class StudyConfig(BaseModel):
    """Configuration for study metadata."""

    name: Optional[str] = Field(
        default=None, description="Study name (auto-generated from folder if omitted)"
    )
    type: Literal["single_objective", "multi_objective"] = "single_objective"
    description: Optional[str] = None
    tags: Optional[List[str]] = None
class IntakeConfig(BaseModel):
    """
    Complete intake.yaml configuration schema.

    All fields are optional - anything not specified will be asked
    in the interview or auto-detected from introspection.
    """

    study: Optional[StudyConfig] = None
    objectives: Optional[ObjectivesConfig] = None
    constraints: Optional[List[ConstraintConfig]] = None
    design_variables: Optional[List[DesignVariableConfig]] = None
    budget: Optional[BudgetConfig] = None
    algorithm: Optional[AlgorithmConfig] = None
    material: Optional[MaterialConfig] = None
    notes: Optional[str] = None

    @classmethod
    def from_yaml(cls, yaml_path: Union[str, Path]) -> "IntakeConfig":
        """Load configuration from a YAML file."""
        yaml_path = Path(yaml_path)
        if not yaml_path.exists():
            raise FileNotFoundError(f"Intake config not found: {yaml_path}")
        with open(yaml_path, "r", encoding="utf-8") as f:
            data = yaml.safe_load(f)
        if data is None:
            return cls()
        return cls.model_validate(data)

    @classmethod
    def from_yaml_safe(cls, yaml_path: Union[str, Path]) -> Optional["IntakeConfig"]:
        """Load configuration from YAML, returning None if file doesn't exist."""
        yaml_path = Path(yaml_path)
        if not yaml_path.exists():
            return None
        try:
            return cls.from_yaml(yaml_path)
        except Exception:
            return None

    def to_yaml(self, yaml_path: Union[str, Path]) -> None:
        """Save configuration to a YAML file."""
        yaml_path = Path(yaml_path)
        data = self.model_dump(exclude_none=True)
        with open(yaml_path, "w", encoding="utf-8") as f:
            yaml.dump(data, f, default_flow_style=False, sort_keys=False)

    def get_value(self, key: str) -> Optional[Any]:
        """
        Get a configuration value by dot-notation key.

        Examples:
            config.get_value("study.name")
            config.get_value("budget.max_trials")
            config.get_value("objectives.primary.goal")
        """
        parts = key.split(".")
        value: Any = self
        for part in parts:
            if value is None:
                return None
            if hasattr(value, part):
                value = getattr(value, part)
            elif isinstance(value, dict):
                value = value.get(part)
            else:
                return None
        return value

    def is_complete(self) -> bool:
        """Check if all required configuration is provided."""
        return (
            self.objectives is not None
            and self.design_variables is not None
            and len(self.design_variables) > 0
        )

    def get_missing_fields(self) -> List[str]:
        """Get list of fields that still need to be configured."""
        missing = []
        if self.objectives is None:
            missing.append("objectives")
        if self.design_variables is None or len(self.design_variables) == 0:
            missing.append("design_variables")
        if self.constraints is None:
            missing.append("constraints (recommended)")
        if self.budget is None:
            missing.append("budget")
        return missing

    @model_validator(mode="after")
    def validate_consistency(self) -> "IntakeConfig":
        """Validate consistency between configuration sections."""
        # Check study type matches objectives
        if self.study and self.objectives:
            is_multi = self.objectives.is_multi_objective
            declared_multi = self.study.type == "multi_objective"
            if is_multi and not declared_multi:
                # Auto-correct study type
                self.study.type = "multi_objective"
        return self
# Common material presets
MATERIAL_PRESETS: Dict[str, MaterialConfig] = {
    "aluminum_6061_t6": MaterialConfig(
        name="Aluminum 6061-T6",
        yield_stress=276,
        ultimate_stress=310,
        density=2700,
        youngs_modulus=68.9,
        poissons_ratio=0.33,
    ),
    "aluminum_7075_t6": MaterialConfig(
        name="Aluminum 7075-T6",
        yield_stress=503,
        ultimate_stress=572,
        density=2810,
        youngs_modulus=71.7,
        poissons_ratio=0.33,
    ),
    "steel_a36": MaterialConfig(
        name="Steel A36",
        yield_stress=250,
        ultimate_stress=400,
        density=7850,
        youngs_modulus=200,
        poissons_ratio=0.26,
    ),
    "stainless_304": MaterialConfig(
        name="Stainless Steel 304",
        yield_stress=215,
        ultimate_stress=505,
        density=8000,
        youngs_modulus=193,
        poissons_ratio=0.29,
    ),
    "titanium_6al4v": MaterialConfig(
        name="Titanium Ti-6Al-4V",
        yield_stress=880,
        ultimate_stress=950,
        density=4430,
        youngs_modulus=113.8,
        poissons_ratio=0.342,
    ),
}
def get_material_preset(name: str) -> Optional[MaterialConfig]:
    """
    Get a material preset by name (fuzzy matching).

    Examples:
        get_material_preset("6061")   # Returns aluminum_6061_t6
        get_material_preset("steel")  # Returns steel_a36
    """
    name_lower = name.lower().replace("-", "_").replace(" ", "_")

    # Direct match
    if name_lower in MATERIAL_PRESETS:
        return MATERIAL_PRESETS[name_lower]

    # Partial match
    for key, material in MATERIAL_PRESETS.items():
        if name_lower in key or name_lower in material.name.lower():
            return material

    return None
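`BudgetConfig.get_target_runtime_seconds` above accepts "2h"/"30m"/"45s" strings, with a bare number treated as seconds. The parsing rule can be sketched standalone; `parse_runtime` is an illustrative free function, not the model's API:

```python
from typing import Optional


def parse_runtime(runtime: Optional[str]) -> Optional[int]:
    """Parse a target runtime like '2h', '30m', or '45s' into seconds.
    A bare number is treated as seconds; empty input yields None."""
    if not runtime:
        return None
    runtime = runtime.lower().strip()
    if not runtime:
        return None
    units = {"h": 3600, "m": 60, "s": 1}
    if runtime[-1] in units:
        return int(float(runtime[:-1]) * units[runtime[-1]])
    return int(float(runtime))


print(parse_runtime("2h"), parse_runtime("30m"), parse_runtime("90"))
# -> 7200 1800 90
```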

View File

@@ -0,0 +1,540 @@
"""
Study Context
=============
Complete assembled context for study creation, combining:
- Model introspection results
- Context files (goals.md, PDFs, images)
- Pre-configuration (intake.yaml)
- LAC memory (similar studies, recommendations)
This context object is used by both Interview Mode and Canvas Mode
to provide intelligent suggestions and pre-filled values.
"""
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path
from typing import Optional, List, Dict, Any
from enum import Enum
import json
class ConfidenceLevel(str, Enum):
    """Confidence level for suggestions."""

    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"
@dataclass
class ExpressionInfo:
    """Information about an NX expression."""

    name: str
    value: Optional[float] = None
    units: Optional[str] = None
    formula: Optional[str] = None
    type: str = "Number"
    is_design_candidate: bool = False
    confidence: ConfidenceLevel = ConfidenceLevel.MEDIUM
    reason: Optional[str] = None

    def to_dict(self) -> Dict[str, Any]:
        return {
            "name": self.name,
            "value": self.value,
            "units": self.units,
            "formula": self.formula,
            "type": self.type,
            "is_design_candidate": self.is_design_candidate,
            "confidence": self.confidence.value,
            "reason": self.reason,
        }
@dataclass
class SolutionInfo:
    """Information about an NX solution."""

    name: str
    type: str  # SOL 101, SOL 103, etc.
    description: Optional[str] = None


@dataclass
class BoundaryConditionInfo:
    """Information about a boundary condition."""

    name: str
    type: str  # Fixed, Pinned, etc.
    location: Optional[str] = None


@dataclass
class LoadInfo:
    """Information about a load."""

    name: str
    type: str  # Force, Pressure, etc.
    magnitude: Optional[float] = None
    units: Optional[str] = None
    location: Optional[str] = None


@dataclass
class MaterialInfo:
    """Information about a material in the model."""

    name: str
    yield_stress: Optional[float] = None
    density: Optional[float] = None
    youngs_modulus: Optional[float] = None


@dataclass
class MeshInfo:
    """Information about the mesh."""

    element_count: int = 0
    node_count: int = 0
    element_types: List[str] = field(default_factory=list)
    quality_metrics: Dict[str, float] = field(default_factory=dict)
@dataclass
class BaselineResult:
    """Results from baseline solve."""

    mass_kg: Optional[float] = None
    max_displacement_mm: Optional[float] = None
    max_stress_mpa: Optional[float] = None
    max_strain: Optional[float] = None
    first_frequency_hz: Optional[float] = None
    strain_energy_j: Optional[float] = None
    solve_time_seconds: Optional[float] = None
    success: bool = False
    error: Optional[str] = None

    def to_dict(self) -> Dict[str, Any]:
        return {
            "mass_kg": self.mass_kg,
            "max_displacement_mm": self.max_displacement_mm,
            "max_stress_mpa": self.max_stress_mpa,
            "max_strain": self.max_strain,
            "first_frequency_hz": self.first_frequency_hz,
            "strain_energy_j": self.strain_energy_j,
            "solve_time_seconds": self.solve_time_seconds,
            "success": self.success,
            "error": self.error,
        }

    def get_summary(self) -> str:
        """Get a human-readable summary of baseline results."""
        if not self.success:
            return f"Baseline solve failed: {self.error or 'Unknown error'}"
        parts = []
        if self.mass_kg is not None:
            parts.append(f"mass={self.mass_kg:.2f}kg")
        if self.max_displacement_mm is not None:
            parts.append(f"disp={self.max_displacement_mm:.3f}mm")
        if self.max_stress_mpa is not None:
            parts.append(f"stress={self.max_stress_mpa:.1f}MPa")
        if self.first_frequency_hz is not None:
            parts.append(f"freq={self.first_frequency_hz:.1f}Hz")
        return ", ".join(parts) if parts else "No results"
@dataclass
class IntrospectionData:
    """Complete introspection results from NX model."""

    success: bool = False
    timestamp: Optional[datetime] = None
    error: Optional[str] = None

    # Part information
    expressions: List[ExpressionInfo] = field(default_factory=list)
    bodies: List[Dict[str, Any]] = field(default_factory=list)

    # Simulation information
    solutions: List[SolutionInfo] = field(default_factory=list)
    boundary_conditions: List[BoundaryConditionInfo] = field(default_factory=list)
    loads: List[LoadInfo] = field(default_factory=list)
    materials: List[MaterialInfo] = field(default_factory=list)
    mesh_info: Optional[MeshInfo] = None

    # Available result types (from OP2)
    available_results: Dict[str, bool] = field(default_factory=dict)
    subcases: List[int] = field(default_factory=list)

    # Baseline solve
    baseline: Optional[BaselineResult] = None

    def get_expression_names(self) -> List[str]:
        """Get list of all expression names."""
        return [e.name for e in self.expressions]

    def get_design_candidates(self) -> List[ExpressionInfo]:
        """Get expressions that look like design variables."""
        return [e for e in self.expressions if e.is_design_candidate]

    def get_expression(self, name: str) -> Optional[ExpressionInfo]:
        """Get expression by name."""
        for expr in self.expressions:
            if expr.name == name:
                return expr
        return None

    def get_solver_type(self) -> Optional[str]:
        """Get the primary solver type (SOL 101, etc.)."""
        if self.solutions:
            return self.solutions[0].type
        return None

    def to_dict(self) -> Dict[str, Any]:
        """Convert to dictionary for JSON serialization."""
        return {
            "success": self.success,
            "timestamp": self.timestamp.isoformat() if self.timestamp else None,
            "error": self.error,
            "expressions": [e.to_dict() for e in self.expressions],
            "solutions": [{"name": s.name, "type": s.type} for s in self.solutions],
            "boundary_conditions": [
                {"name": bc.name, "type": bc.type} for bc in self.boundary_conditions
            ],
            "loads": [
                {"name": l.name, "type": l.type, "magnitude": l.magnitude} for l in self.loads
            ],
            "materials": [{"name": m.name, "yield_stress": m.yield_stress} for m in self.materials],
            "available_results": self.available_results,
            "subcases": self.subcases,
            "baseline": self.baseline.to_dict() if self.baseline else None,
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "IntrospectionData":
        """Create from dictionary."""
        introspection = cls(
            success=data.get("success", False),
            error=data.get("error"),
        )
        if data.get("timestamp"):
            introspection.timestamp = datetime.fromisoformat(data["timestamp"])

        # Parse expressions
        for expr_data in data.get("expressions", []):
            introspection.expressions.append(
                ExpressionInfo(
                    name=expr_data["name"],
                    value=expr_data.get("value"),
                    units=expr_data.get("units"),
                    formula=expr_data.get("formula"),
                    type=expr_data.get("type", "Number"),
                    is_design_candidate=expr_data.get("is_design_candidate", False),
                    confidence=ConfidenceLevel(expr_data.get("confidence", "medium")),
                )
            )

        # Parse solutions
        for sol_data in data.get("solutions", []):
            introspection.solutions.append(
                SolutionInfo(
                    name=sol_data["name"],
                    type=sol_data["type"],
                )
            )

        introspection.available_results = data.get("available_results", {})
        introspection.subcases = data.get("subcases", [])

        # Parse baseline
        if data.get("baseline"):
            baseline_data = data["baseline"]
            introspection.baseline = BaselineResult(
                mass_kg=baseline_data.get("mass_kg"),
                max_displacement_mm=baseline_data.get("max_displacement_mm"),
                max_stress_mpa=baseline_data.get("max_stress_mpa"),
                solve_time_seconds=baseline_data.get("solve_time_seconds"),
                success=baseline_data.get("success", False),
                error=baseline_data.get("error"),
            )
        return introspection
@dataclass
class DVSuggestion:
    """Suggested design variable."""

    name: str
    current_value: Optional[float] = None
    suggested_bounds: Optional[tuple[float, float]] = None
    units: Optional[str] = None
    confidence: ConfidenceLevel = ConfidenceLevel.MEDIUM
    reason: str = ""
    source: str = "introspection"  # introspection, preconfig, lac
    lac_insight: Optional[str] = None

    def to_dict(self) -> Dict[str, Any]:
        return {
            "name": self.name,
            "current_value": self.current_value,
            "suggested_bounds": list(self.suggested_bounds) if self.suggested_bounds else None,
            "units": self.units,
            "confidence": self.confidence.value,
            "reason": self.reason,
            "source": self.source,
            "lac_insight": self.lac_insight,
        }


@dataclass
class ObjectiveSuggestion:
    """Suggested optimization objective."""

    name: str
    goal: str  # minimize, maximize
    extractor: str
    confidence: ConfidenceLevel = ConfidenceLevel.MEDIUM
    reason: str = ""
    source: str = "goals"


@dataclass
class ConstraintSuggestion:
    """Suggested optimization constraint."""

    name: str
    type: str  # less_than, greater_than
    suggested_threshold: Optional[float] = None
    units: Optional[str] = None
    confidence: ConfidenceLevel = ConfidenceLevel.MEDIUM
    reason: str = ""
    source: str = "requirements"


@dataclass
class ImageAnalysis:
    """Analysis result from Claude Vision for an image."""

    image_path: Path
    component_type: Optional[str] = None
    dimensions: List[str] = field(default_factory=list)
    load_conditions: List[str] = field(default_factory=list)
    annotations: List[str] = field(default_factory=list)
    suggestions: List[str] = field(default_factory=list)
    raw_analysis: Optional[str] = None


@dataclass
class LACInsight:
    """Insight from Learning Atomizer Core."""

    study_name: str
    similarity_score: float
    geometry_type: str
    method_used: str
    objectives: List[str]
    trials_to_convergence: Optional[int] = None
    success: bool = True
    lesson: Optional[str] = None
@dataclass
class StudyContext:
"""
Complete context for study creation.
This is the central data structure that combines all information
gathered during intake processing, ready for use by Interview Mode
or Canvas Mode.
"""
# === Identity ===
study_name: str
source_folder: Path
created_at: datetime = field(default_factory=datetime.now)
# === Model Files ===
sim_file: Optional[Path] = None
fem_file: Optional[Path] = None
prt_file: Optional[Path] = None
idealized_prt_file: Optional[Path] = None
# === From Introspection ===
introspection: Optional[IntrospectionData] = None
# === From Context Files ===
goals_text: Optional[str] = None
requirements_text: Optional[str] = None
constraints_text: Optional[str] = None
notes_text: Optional[str] = None
image_analyses: List[ImageAnalysis] = field(default_factory=list)
# === From intake.yaml ===
preconfig: Optional[Any] = None # IntakeConfig, imported dynamically to avoid circular import
# === From LAC ===
similar_studies: List[LACInsight] = field(default_factory=list)
recommended_method: Optional[str] = None
known_issues: List[str] = field(default_factory=list)
user_preferences: Dict[str, Any] = field(default_factory=dict)
# === Derived Suggestions ===
suggested_dvs: List[DVSuggestion] = field(default_factory=list)
suggested_objectives: List[ObjectiveSuggestion] = field(default_factory=list)
suggested_constraints: List[ConstraintSuggestion] = field(default_factory=list)
# === Status ===
warnings: List[str] = field(default_factory=list)
errors: List[str] = field(default_factory=list)
@property
def has_introspection(self) -> bool:
"""Check if introspection data is available."""
return self.introspection is not None and self.introspection.success
@property
def has_baseline(self) -> bool:
"""Check if baseline results are available."""
return (
self.introspection is not None
and self.introspection.baseline is not None
and self.introspection.baseline.success
)
@property
def has_preconfig(self) -> bool:
"""Check if pre-configuration is available."""
return self.preconfig is not None
@property
def ready_for_interview(self) -> bool:
"""Check if context is ready for interview mode."""
return self.has_introspection and len(self.errors) == 0
@property
def ready_for_canvas(self) -> bool:
"""Check if context is ready for canvas mode."""
return self.has_introspection and self.sim_file is not None
def get_baseline_summary(self) -> str:
"""Get human-readable baseline summary."""
if self.introspection is None:
return "No baseline data"
if self.introspection.baseline is None:
return "No baseline data"
return self.introspection.baseline.get_summary()
def get_missing_required(self) -> List[str]:
"""Get list of missing required items."""
missing = []
if self.sim_file is None:
missing.append("Simulation file (.sim)")
if not self.has_introspection:
missing.append("Model introspection")
return missing
def get_context_summary(self) -> Dict[str, Any]:
"""Get a summary of loaded context for display."""
return {
"study_name": self.study_name,
"has_model": self.sim_file is not None,
"has_introspection": self.has_introspection,
"has_baseline": self.has_baseline,
"has_goals": self.goals_text is not None,
"has_requirements": self.requirements_text is not None,
"has_preconfig": self.has_preconfig,
"num_expressions": len(self.introspection.expressions) if self.introspection else 0,
"num_dv_candidates": len(self.introspection.get_design_candidates())
if self.introspection
else 0,
"num_similar_studies": len(self.similar_studies),
"warnings": self.warnings,
"errors": self.errors,
}
def to_interview_context(self) -> Dict[str, Any]:
"""Get context formatted for interview mode."""
return {
"study_name": self.study_name,
"baseline": (
self.introspection.baseline.to_dict()
if self.introspection is not None and self.introspection.baseline is not None
else None
),
"expressions": [e.to_dict() for e in self.introspection.expressions]
if self.introspection
else [],
"design_candidates": [e.to_dict() for e in self.introspection.get_design_candidates()]
if self.introspection
else [],
"solver_type": self.introspection.get_solver_type() if self.introspection else None,
"goals_text": self.goals_text,
"requirements_text": self.requirements_text,
"preconfig": self.preconfig.model_dump() if self.preconfig else None,
"suggested_dvs": [dv.to_dict() for dv in self.suggested_dvs],
"similar_studies": [
{"name": s.study_name, "method": s.method_used, "similarity": s.similarity_score}
for s in self.similar_studies
],
"recommended_method": self.recommended_method,
}
def save(self, output_path: Path) -> None:
"""Save context to JSON file."""
data = {
"study_name": self.study_name,
"source_folder": str(self.source_folder),
"created_at": self.created_at.isoformat(),
"sim_file": str(self.sim_file) if self.sim_file else None,
"fem_file": str(self.fem_file) if self.fem_file else None,
"prt_file": str(self.prt_file) if self.prt_file else None,
"introspection": self.introspection.to_dict() if self.introspection else None,
"goals_text": self.goals_text,
"requirements_text": self.requirements_text,
"suggested_dvs": [dv.to_dict() for dv in self.suggested_dvs],
"warnings": self.warnings,
"errors": self.errors,
}
with open(output_path, "w", encoding="utf-8") as f:
json.dump(data, f, indent=2)
@classmethod
def load(cls, input_path: Path) -> "StudyContext":
"""Load context from JSON file."""
with open(input_path, "r", encoding="utf-8") as f:
data = json.load(f)
context = cls(
study_name=data["study_name"],
source_folder=Path(data["source_folder"]),
created_at=datetime.fromisoformat(data["created_at"]),
)
if data.get("sim_file"):
context.sim_file = Path(data["sim_file"])
if data.get("fem_file"):
context.fem_file = Path(data["fem_file"])
if data.get("prt_file"):
context.prt_file = Path(data["prt_file"])
if data.get("introspection"):
context.introspection = IntrospectionData.from_dict(data["introspection"])
context.goals_text = data.get("goals_text")
context.requirements_text = data.get("requirements_text")
context.warnings = data.get("warnings", [])
context.errors = data.get("errors", [])
return context
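The `save`/`load` pair above boils down to a JSON round-trip that stringifies `Path` and `datetime` fields on the way out and rehydrates them on the way back in. A standalone sketch of that pattern (the helper names `save_context`/`load_context` are illustrative, not part of the real API):

```python
# Standalone sketch of the round-trip pattern StudyContext.save/load uses.
# save_context/load_context are illustrative names, not the actual API.
import json
import tempfile
from datetime import datetime
from pathlib import Path


def save_context(path: Path, study_name: str, source_folder: Path) -> None:
    # Path and datetime are not JSON-serializable; store them as strings.
    data = {
        "study_name": study_name,
        "source_folder": str(source_folder),
        "created_at": datetime.now().isoformat(),
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=2)


def load_context(path: Path) -> dict:
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    # Rehydrate the typed fields on the way back in.
    data["source_folder"] = Path(data["source_folder"])
    data["created_at"] = datetime.fromisoformat(data["created_at"])
    return data


with tempfile.TemporaryDirectory() as tmp:
    ctx_path = Path(tmp) / "study_context.json"
    save_context(ctx_path, "bracket_demo", Path("studies/bracket_demo"))
    ctx = load_context(ctx_path)
```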


@@ -0,0 +1,789 @@
"""
Intake Processor
================
Processes intake folders to create study context:
1. Validates folder structure
2. Copies model files to study directory
3. Parses intake.yaml pre-configuration
4. Extracts text from context files (goals.md, PDFs)
5. Runs model introspection
6. Optionally runs baseline solve
7. Assembles complete StudyContext
Usage:
from optimization_engine.intake import IntakeProcessor
processor = IntakeProcessor(Path("studies/_inbox/my_project"))
context = processor.process(run_baseline=True)
"""
from __future__ import annotations
import logging
import shutil
import re
from datetime import datetime
from pathlib import Path
from typing import Optional, List, Callable, Dict, Any
from .config import IntakeConfig, DesignVariableConfig
from .context import (
StudyContext,
IntrospectionData,
ExpressionInfo,
SolutionInfo,
BaselineResult,
DVSuggestion,
ObjectiveSuggestion,
ConstraintSuggestion,
ConfidenceLevel,
)
logger = logging.getLogger(__name__)
class IntakeError(Exception):
"""Error during intake processing."""
pass
class IntakeProcessor:
"""
Processes an intake folder to create a complete StudyContext.
The processor handles:
- File discovery and validation
- Model file copying
- Configuration parsing
- Context file extraction
- Model introspection (via NX journals)
- Baseline solve (optional)
- Suggestion generation
"""
def __init__(
self,
inbox_folder: Path,
studies_dir: Optional[Path] = None,
progress_callback: Optional[Callable[[str, float], None]] = None,
):
"""
Initialize the intake processor.
Args:
inbox_folder: Path to the intake folder (in _inbox/)
studies_dir: Base studies directory (default: auto-detect)
progress_callback: Optional callback for progress updates (message, percent)
"""
self.inbox_folder = Path(inbox_folder)
self.progress_callback = progress_callback or (lambda m, p: None)
# Validate inbox folder exists
if not self.inbox_folder.exists():
raise IntakeError(f"Inbox folder not found: {self.inbox_folder}")
# Determine study name from folder name
self.study_name = self.inbox_folder.name
if self.study_name.startswith("_"):
# Strip leading underscore (used for examples)
self.study_name = self.study_name[1:]
# Set studies directory
if studies_dir is None:
# Find project root
current = Path(__file__).parent
while current != current.parent:
if (current / "CLAUDE.md").exists():
studies_dir = current / "studies"
break
current = current.parent
else:
studies_dir = Path.cwd() / "studies"
self.studies_dir = Path(studies_dir)
self.study_dir = self.studies_dir / self.study_name
# Initialize context
self.context = StudyContext(
study_name=self.study_name,
source_folder=self.inbox_folder,
)
def process(
self,
run_baseline: bool = True,
copy_files: bool = True,
run_introspection: bool = True,
) -> StudyContext:
"""
Process the intake folder and create StudyContext.
Args:
run_baseline: Run a baseline FEA solve to get actual values
copy_files: Copy model files to study directory
run_introspection: Run NX model introspection
Returns:
Complete StudyContext ready for interview or canvas
"""
logger.info(f"Processing intake: {self.inbox_folder}")
try:
# Step 1: Discover files
self._progress("Discovering files...", 0.0)
self._discover_files()
# Step 2: Parse intake.yaml
self._progress("Parsing configuration...", 0.1)
self._parse_config()
# Step 3: Extract context files
self._progress("Extracting context...", 0.2)
self._extract_context_files()
# Step 4: Copy model files
if copy_files:
self._progress("Copying model files...", 0.3)
self._copy_model_files()
# Step 5: Run introspection
if run_introspection:
self._progress("Introspecting model...", 0.4)
self._run_introspection()
# Step 6: Run baseline solve
if run_baseline and self.context.sim_file:
self._progress("Running baseline solve...", 0.6)
self._run_baseline_solve()
# Step 7: Generate suggestions
self._progress("Generating suggestions...", 0.8)
self._generate_suggestions()
# Step 8: Save context
self._progress("Saving context...", 0.9)
self._save_context()
self._progress("Complete!", 1.0)
except Exception as e:
self.context.errors.append(str(e))
logger.error(f"Intake processing failed: {e}")
raise
return self.context
def _progress(self, message: str, percent: float) -> None:
"""Report progress."""
logger.info(f"[{percent * 100:.0f}%] {message}")
self.progress_callback(message, percent)
def _discover_files(self) -> None:
"""Discover model and context files in the inbox folder."""
# Look for model files
models_dir = self.inbox_folder / "models"
if models_dir.exists():
search_dir = models_dir
else:
# Fall back to root folder
search_dir = self.inbox_folder
# Find simulation file (required)
sim_files = list(search_dir.glob("*.sim"))
if sim_files:
self.context.sim_file = sim_files[0]
logger.info(f"Found sim file: {self.context.sim_file.name}")
else:
self.context.warnings.append("No .sim file found in models/")
# Find FEM file
fem_files = list(search_dir.glob("*.fem"))
if fem_files:
self.context.fem_file = fem_files[0]
logger.info(f"Found fem file: {self.context.fem_file.name}")
# Find part file
prt_files = [f for f in search_dir.glob("*.prt") if "_i.prt" not in f.name.lower()]
if prt_files:
self.context.prt_file = prt_files[0]
logger.info(f"Found prt file: {self.context.prt_file.name}")
# Find idealized part (CRITICAL!)
idealized_files = list(search_dir.glob("*_i.prt")) + list(search_dir.glob("*_I.prt"))
if idealized_files:
self.context.idealized_prt_file = idealized_files[0]
logger.info(f"Found idealized prt: {self.context.idealized_prt_file.name}")
else:
self.context.warnings.append(
"No idealized part (*_i.prt) found - mesh may not update during optimization!"
)
def _parse_config(self) -> None:
"""Parse intake.yaml if present."""
config_path = self.inbox_folder / "intake.yaml"
if config_path.exists():
try:
self.context.preconfig = IntakeConfig.from_yaml(config_path)
logger.info("Loaded intake.yaml configuration")
# Update study name if specified
if self.context.preconfig.study and self.context.preconfig.study.name:
self.context.study_name = self.context.preconfig.study.name
self.study_name = self.context.study_name
self.study_dir = self.studies_dir / self.study_name
except Exception as e:
self.context.warnings.append(f"Failed to parse intake.yaml: {e}")
logger.warning(f"Failed to parse intake.yaml: {e}")
else:
logger.info("No intake.yaml found, will use interview mode")
def _extract_context_files(self) -> None:
"""Extract text from context files."""
context_dir = self.inbox_folder / "context"
# Read goals.md
goals_path = context_dir / "goals.md"
if goals_path.exists():
self.context.goals_text = goals_path.read_text(encoding="utf-8")
logger.info("Loaded goals.md")
# Read constraints.txt
constraints_path = context_dir / "constraints.txt"
if constraints_path.exists():
self.context.constraints_text = constraints_path.read_text(encoding="utf-8")
logger.info("Loaded constraints.txt")
# Read any .txt or .md files in context/
if context_dir.exists():
for txt_file in context_dir.glob("*.txt"):
if txt_file.name != "constraints.txt":
content = txt_file.read_text(encoding="utf-8")
if self.context.notes_text:
self.context.notes_text += f"\n\n--- {txt_file.name} ---\n{content}"
else:
self.context.notes_text = content
# Extract PDF text (basic implementation)
# TODO: Add PyMuPDF and Claude Vision integration
if context_dir.exists():
for pdf_path in context_dir.glob("*.pdf"):
try:
text = self._extract_pdf_text(pdf_path)
if text:
self.context.requirements_text = text
logger.info(f"Extracted text from {pdf_path.name}")
except Exception as e:
self.context.warnings.append(f"Failed to extract PDF {pdf_path.name}: {e}")
def _extract_pdf_text(self, pdf_path: Path) -> Optional[str]:
"""Extract text from PDF using PyMuPDF if available."""
try:
import fitz # PyMuPDF
doc = fitz.open(pdf_path)
text_parts = []
for page in doc:
text_parts.append(page.get_text())
doc.close()
return "\n".join(text_parts)
except ImportError:
logger.warning("PyMuPDF not installed, skipping PDF extraction")
return None
except Exception as e:
logger.warning(f"PDF extraction failed: {e}")
return None
def _copy_model_files(self) -> None:
"""Copy model files to study directory."""
# Create study directory structure
model_dir = self.study_dir / "1_model"
model_dir.mkdir(parents=True, exist_ok=True)
(self.study_dir / "2_iterations").mkdir(exist_ok=True)
(self.study_dir / "3_results").mkdir(exist_ok=True)
# Copy files
files_to_copy = [
self.context.sim_file,
self.context.fem_file,
self.context.prt_file,
self.context.idealized_prt_file,
]
for src in files_to_copy:
if src and src.exists():
dst = model_dir / src.name
if not dst.exists():
shutil.copy2(src, dst)
logger.info(f"Copied: {src.name}")
else:
logger.info(f"Already exists: {src.name}")
# Update paths to point to copied files
if self.context.sim_file:
self.context.sim_file = model_dir / self.context.sim_file.name
if self.context.fem_file:
self.context.fem_file = model_dir / self.context.fem_file.name
if self.context.prt_file:
self.context.prt_file = model_dir / self.context.prt_file.name
if self.context.idealized_prt_file:
self.context.idealized_prt_file = model_dir / self.context.idealized_prt_file.name
def _run_introspection(self) -> None:
"""Run NX model introspection."""
if not self.context.sim_file or not self.context.sim_file.exists():
self.context.warnings.append("Cannot introspect - no sim file")
return
introspection = IntrospectionData(timestamp=datetime.now())
try:
# Try to use existing introspection modules
from optimization_engine.extractors.introspect_part import introspect_part_expressions
# Introspect part for expressions
if self.context.prt_file and self.context.prt_file.exists():
expressions = introspect_part_expressions(str(self.context.prt_file))
for expr in expressions:
is_candidate = self._is_design_candidate(expr["name"], expr.get("value"))
introspection.expressions.append(
ExpressionInfo(
name=expr["name"],
value=expr.get("value"),
units=expr.get("units"),
formula=expr.get("formula"),
type=expr.get("type", "Number"),
is_design_candidate=is_candidate,
confidence=ConfidenceLevel.HIGH
if is_candidate
else ConfidenceLevel.MEDIUM,
)
)
introspection.success = True
logger.info(f"Introspected {len(introspection.expressions)} expressions")
except ImportError:
logger.warning("Introspection module not available, using fallback")
introspection.success = False
introspection.error = "Introspection module not available"
except Exception as e:
logger.error(f"Introspection failed: {e}")
introspection.success = False
introspection.error = str(e)
self.context.introspection = introspection
def _is_design_candidate(self, name: str, value: Optional[float]) -> bool:
"""Check if an expression looks like a design variable candidate."""
# Skip if no value or non-numeric
if value is None:
return False
# Skip system/reference expressions
if name.startswith("p") and name[1:].isdigit():
return False
# Skip mass-related outputs (not inputs)
if "mass" in name.lower() and "input" not in name.lower():
return False
# Look for typical design parameter names
design_keywords = [
"thickness",
"width",
"height",
"length",
"radius",
"diameter",
"angle",
"offset",
"depth",
"size",
"span",
"pitch",
"gap",
"rib",
"flange",
"web",
"wall",
"fillet",
"chamfer",
]
name_lower = name.lower()
return any(kw in name_lower for kw in design_keywords)
def _run_baseline_solve(self) -> None:
"""Run baseline FEA solve to get actual values."""
if not self.context.introspection:
self.context.introspection = IntrospectionData(timestamp=datetime.now())
baseline = BaselineResult()
try:
from optimization_engine.nx.solver import NXSolver
solver = NXSolver()
model_dir = self.context.sim_file.parent
result = solver.run_simulation(
sim_file=self.context.sim_file,
working_dir=model_dir,
expression_updates={}, # No updates for baseline
cleanup=True,
)
if result["success"]:
baseline.success = True
baseline.solve_time_seconds = result.get("solve_time", 0)
# Extract results from OP2
op2_file = result.get("op2_file")
if op2_file and Path(op2_file).exists():
self._extract_baseline_results(baseline, Path(op2_file), model_dir)
logger.info(f"Baseline solve complete: {baseline.get_summary()}")
else:
baseline.success = False
baseline.error = result.get("error", "Unknown error")
logger.warning(f"Baseline solve failed: {baseline.error}")
except ImportError:
logger.warning("NXSolver not available, skipping baseline")
baseline.success = False
baseline.error = "NXSolver not available"
except Exception as e:
logger.error(f"Baseline solve failed: {e}")
baseline.success = False
baseline.error = str(e)
self.context.introspection.baseline = baseline
def _extract_baseline_results(
self, baseline: BaselineResult, op2_file: Path, model_dir: Path
) -> None:
"""Extract results from OP2 file."""
try:
# Try to extract displacement
from optimization_engine.extractors.extract_displacement import extract_displacement
disp_result = extract_displacement(op2_file, subcase=1)
baseline.max_displacement_mm = disp_result.get("max_displacement")
except Exception as e:
logger.debug(f"Displacement extraction failed: {e}")
try:
# Try to extract stress
from optimization_engine.extractors.extract_von_mises_stress import extract_solid_stress
stress_result = extract_solid_stress(op2_file, subcase=1)
baseline.max_stress_mpa = stress_result.get("max_von_mises")
except Exception as e:
logger.debug(f"Stress extraction failed: {e}")
try:
# Try to extract mass from BDF
from optimization_engine.extractors.bdf_mass_extractor import extract_mass_from_bdf
dat_files = list(model_dir.glob("*.dat"))
if dat_files:
baseline.mass_kg = extract_mass_from_bdf(str(dat_files[0]))
except Exception as e:
logger.debug(f"Mass extraction failed: {e}")
def _generate_suggestions(self) -> None:
"""Generate intelligent suggestions based on all context."""
self._generate_dv_suggestions()
self._generate_objective_suggestions()
self._generate_constraint_suggestions()
self._query_lac()
def _generate_dv_suggestions(self) -> None:
"""Generate design variable suggestions."""
suggestions: Dict[str, DVSuggestion] = {}
# From introspection
if self.context.introspection:
for expr in self.context.introspection.get_design_candidates():
if expr.value is not None and isinstance(expr.value, (int, float)):
# Calculate suggested bounds (50% to 150% of current value)
if expr.value > 0:
bounds = (expr.value * 0.5, expr.value * 1.5)
else:
bounds = (expr.value * 1.5, expr.value * 0.5)
suggestions[expr.name] = DVSuggestion(
name=expr.name,
current_value=expr.value,
suggested_bounds=bounds,
units=expr.units,
confidence=expr.confidence,
reason=f"Numeric expression with value {expr.value}",
source="introspection",
)
# Override/add from preconfig
if self.context.preconfig and self.context.preconfig.design_variables:
for dv in self.context.preconfig.design_variables:
if dv.name in suggestions:
# Update existing suggestion
suggestions[dv.name].suggested_bounds = dv.bounds
suggestions[dv.name].units = dv.units or suggestions[dv.name].units
suggestions[dv.name].source = "preconfig"
suggestions[dv.name].confidence = ConfidenceLevel.HIGH
else:
# Add new suggestion
suggestions[dv.name] = DVSuggestion(
name=dv.name,
suggested_bounds=dv.bounds,
units=dv.units,
confidence=ConfidenceLevel.HIGH,
reason="Specified in intake.yaml",
source="preconfig",
)
self.context.suggested_dvs = list(suggestions.values())
logger.info(f"Generated {len(self.context.suggested_dvs)} DV suggestions")
def _generate_objective_suggestions(self) -> None:
"""Generate objective suggestions from context."""
suggestions = []
# From preconfig
if self.context.preconfig and self.context.preconfig.objectives:
obj = self.context.preconfig.objectives.primary
extractor = self._get_extractor_for_target(obj.target)
suggestions.append(
ObjectiveSuggestion(
name=obj.target,
goal=obj.goal,
extractor=extractor,
confidence=ConfidenceLevel.HIGH,
reason="Specified in intake.yaml",
source="preconfig",
)
)
# From goals text (simple keyword matching)
elif self.context.goals_text:
goals_lower = self.context.goals_text.lower()
if "minimize" in goals_lower and "mass" in goals_lower:
suggestions.append(
ObjectiveSuggestion(
name="mass",
goal="minimize",
extractor="extract_mass_from_bdf",
confidence=ConfidenceLevel.MEDIUM,
reason="Found 'minimize mass' in goals",
source="goals",
)
)
elif "minimize" in goals_lower and "weight" in goals_lower:
suggestions.append(
ObjectiveSuggestion(
name="mass",
goal="minimize",
extractor="extract_mass_from_bdf",
confidence=ConfidenceLevel.MEDIUM,
reason="Found 'minimize weight' in goals",
source="goals",
)
)
if "maximize" in goals_lower and "stiffness" in goals_lower:
suggestions.append(
ObjectiveSuggestion(
name="stiffness",
goal="maximize",
extractor="extract_displacement", # Inverse of displacement
confidence=ConfidenceLevel.MEDIUM,
reason="Found 'maximize stiffness' in goals",
source="goals",
)
)
self.context.suggested_objectives = suggestions
def _generate_constraint_suggestions(self) -> None:
"""Generate constraint suggestions from context."""
suggestions = []
# From preconfig
if self.context.preconfig and self.context.preconfig.constraints:
for const in self.context.preconfig.constraints:
suggestions.append(
ConstraintSuggestion(
name=const.type,
type="less_than" if "max" in const.type else "greater_than",
suggested_threshold=const.threshold,
units=const.units,
confidence=ConfidenceLevel.HIGH,
reason="Specified in intake.yaml",
source="preconfig",
)
)
# From requirements text
if self.context.requirements_text:
# Simple pattern matching for constraints
text = self.context.requirements_text
# Look for stress limits
stress_pattern = r"(?:max(?:imum)?|stress)\s*[:<]?\s*(\d+(?:\.\d+)?)\s*(?:MPa|mpa)"
matches = re.findall(stress_pattern, text, re.IGNORECASE)
if matches:
suggestions.append(
ConstraintSuggestion(
name="max_stress",
type="less_than",
suggested_threshold=float(matches[0]),
units="MPa",
confidence=ConfidenceLevel.MEDIUM,
reason=f"Found stress limit in requirements: {matches[0]} MPa",
source="requirements",
)
)
# Look for displacement limits
disp_pattern = (
r"(?:max(?:imum)?|displacement|deflection)\s*[:<]?\s*(\d+(?:\.\d+)?)\s*(?:mm|MM)"
)
matches = re.findall(disp_pattern, text, re.IGNORECASE)
if matches:
suggestions.append(
ConstraintSuggestion(
name="max_displacement",
type="less_than",
suggested_threshold=float(matches[0]),
units="mm",
confidence=ConfidenceLevel.MEDIUM,
reason=f"Found displacement limit in requirements: {matches[0]} mm",
source="requirements",
)
)
self.context.suggested_constraints = suggestions
def _get_extractor_for_target(self, target: str) -> str:
"""Map optimization target to extractor function."""
extractors = {
"mass": "extract_mass_from_bdf",
"displacement": "extract_displacement",
"stress": "extract_solid_stress",
"frequency": "extract_frequency",
"stiffness": "extract_displacement", # Inverse
"strain_energy": "extract_strain_energy",
}
return extractors.get(target.lower(), f"extract_{target}")
def _query_lac(self) -> None:
"""Query Learning Atomizer Core for similar studies."""
try:
from knowledge_base.lac import get_lac
lac = get_lac()
# Build query from context
query_parts = [self.study_name]
if self.context.goals_text:
query_parts.append(self.context.goals_text[:200])
query = " ".join(query_parts)
# Get similar studies
similar = lac.query_similar_optimizations(query)
# Get method recommendation
n_objectives = 1
if self.context.preconfig and self.context.preconfig.objectives:
n_objectives = len(self.context.preconfig.objectives.all_objectives)
recommendation = lac.get_best_method_for(
geometry_type="unknown", n_objectives=n_objectives
)
if recommendation:
self.context.recommended_method = recommendation.get("method")
logger.info(f"LAC query complete: {len(similar)} similar studies found")
except ImportError:
logger.debug("LAC not available")
except Exception as e:
logger.debug(f"LAC query failed: {e}")
def _save_context(self) -> None:
"""Save assembled context to study directory."""
# Ensure study directory exists
self.study_dir.mkdir(parents=True, exist_ok=True)
# Save context JSON
context_path = self.study_dir / "0_intake" / "study_context.json"
context_path.parent.mkdir(exist_ok=True)
self.context.save(context_path)
# Save introspection report
if self.context.introspection:
introspection_path = self.study_dir / "0_intake" / "introspection.json"
import json
with open(introspection_path, "w") as f:
json.dump(self.context.introspection.to_dict(), f, indent=2)
# Copy original context files
intake_dir = self.study_dir / "0_intake" / "original_context"
intake_dir.mkdir(parents=True, exist_ok=True)
context_source = self.inbox_folder / "context"
if context_source.exists():
for f in context_source.iterdir():
if f.is_file():
shutil.copy2(f, intake_dir / f.name)
# Copy intake.yaml
intake_yaml = self.inbox_folder / "intake.yaml"
if intake_yaml.exists():
shutil.copy2(intake_yaml, self.study_dir / "0_intake" / "intake.yaml")
logger.info(f"Saved context to {self.study_dir / '0_intake'}")
def process_intake(
inbox_folder: Path,
run_baseline: bool = True,
progress_callback: Optional[Callable[[str, float], None]] = None,
) -> StudyContext:
"""
Convenience function to process an intake folder.
Args:
inbox_folder: Path to inbox folder
run_baseline: Run baseline solve
progress_callback: Optional progress callback
Returns:
Complete StudyContext
"""
processor = IntakeProcessor(inbox_folder, progress_callback=progress_callback)
return processor.process(run_baseline=run_baseline)
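The bounds heuristic that `_generate_dv_suggestions` applies — 50% to 150% of the current value, with the pair flipped for negative values so that lower ≤ upper either way — can be isolated as a small sketch (the function name `suggest_bounds` is illustrative, not part of the module):

```python
# Sketch of the bounds heuristic from _generate_dv_suggestions:
# 50%-150% of the current value, flipped for negatives so lower <= upper.
from typing import Tuple


def suggest_bounds(value: float) -> Tuple[float, float]:
    if value > 0:
        return (value * 0.5, value * 1.5)
    # Covers zero and negative values; multiplying flips the ordering,
    # so the 1.5x product becomes the lower bound.
    return (value * 1.5, value * 0.5)


assert suggest_bounds(10.0) == (5.0, 15.0)   # positive: (50%, 150%)
assert suggest_bounds(-4.0) == (-6.0, -2.0)  # negative: order flipped
assert suggest_bounds(0.0) == (0.0, 0.0)     # degenerate: zero stays pinned
```

A zero-valued expression yields a degenerate (0, 0) range, so such candidates are effectively frozen unless the user widens the bounds in the canvas or in `intake.yaml`.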


@@ -70,15 +70,15 @@ def extract_part_mass(theSession, part, output_dir):
import json
results = {
'part_file': part.Name,
'mass_kg': 0.0,
'mass_g': 0.0,
'volume_mm3': 0.0,
'surface_area_mm2': 0.0,
'center_of_gravity_mm': [0.0, 0.0, 0.0],
'num_bodies': 0,
'success': False,
'error': None
"part_file": part.Name,
"mass_kg": 0.0,
"mass_g": 0.0,
"volume_mm3": 0.0,
"surface_area_mm2": 0.0,
"center_of_gravity_mm": [0.0, 0.0, 0.0],
"num_bodies": 0,
"success": False,
"error": None,
}
try:
@@ -88,10 +88,10 @@ def extract_part_mass(theSession, part, output_dir):
if body.IsSolidBody:
bodies.append(body)
results['num_bodies'] = len(bodies)
results["num_bodies"] = len(bodies)
if not bodies:
results['error'] = "No solid bodies found"
results["error"] = "No solid bodies found"
raise ValueError("No solid bodies found in part")
# Get the measure manager
@@ -104,30 +104,30 @@ def extract_part_mass(theSession, part, output_dir):
uc.GetBase("Area"),
uc.GetBase("Volume"),
uc.GetBase("Mass"),
uc.GetBase("Length")
uc.GetBase("Length"),
]
# Create mass properties measurement
measureBodies = measureManager.NewMassProperties(mass_units, 0.99, bodies)
if measureBodies:
results['mass_kg'] = measureBodies.Mass
results['mass_g'] = results['mass_kg'] * 1000.0
results["mass_kg"] = measureBodies.Mass
results["mass_g"] = results["mass_kg"] * 1000.0
try:
results['volume_mm3'] = measureBodies.Volume
results["volume_mm3"] = measureBodies.Volume
except:
pass
try:
results['surface_area_mm2'] = measureBodies.Area
results["surface_area_mm2"] = measureBodies.Area
except:
pass
try:
cog = measureBodies.Centroid
if cog:
results['center_of_gravity_mm'] = [cog.X, cog.Y, cog.Z]
results["center_of_gravity_mm"] = [cog.X, cog.Y, cog.Z]
except:
pass
@@ -136,26 +136,26 @@ def extract_part_mass(theSession, part, output_dir):
except:
pass
results['success'] = True
results["success"] = True
except Exception as e:
results['error'] = str(e)
results['success'] = False
results["error"] = str(e)
results["success"] = False
# Write results to JSON file
output_file = os.path.join(output_dir, "_temp_part_properties.json")
with open(output_file, 'w') as f:
with open(output_file, "w") as f:
json.dump(results, f, indent=2)
# Write simple mass value for backward compatibility
mass_file = os.path.join(output_dir, "_temp_mass.txt")
with open(mass_file, 'w') as f:
f.write(str(results['mass_kg']))
with open(mass_file, "w") as f:
f.write(str(results["mass_kg"]))
if not results['success']:
raise ValueError(results['error'])
if not results["success"]:
raise ValueError(results["error"])
return results['mass_kg']
return results["mass_kg"]
def find_or_open_part(theSession, part_path):
@@ -164,7 +164,7 @@ def find_or_open_part(theSession, part_path):
In NX, calling Parts.Open() on an already-loaded part raises 'File already exists'.
"""
part_name = os.path.splitext(os.path.basename(part_path))[0]
# Try to find in already-loaded parts
for part in theSession.Parts:
if part.Name == part_name:
@@ -174,9 +174,9 @@ def find_or_open_part(theSession, part_path):
return part, True
except:
pass
# Not found, open it
markId = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, f'Load {part_name}')
markId = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, f"Load {part_name}")
part, partLoadStatus = theSession.Parts.Open(part_path)
partLoadStatus.Dispose()
return part, False
@@ -194,26 +194,28 @@ def main(args):
"""
if len(args) < 1:
print("ERROR: No .sim file path provided")
print("Usage: run_journal.exe solve_simulation.py <sim_file_path> [solution_name] [expr1=val1] ...")
print(
"Usage: run_journal.exe solve_simulation.py <sim_file_path> [solution_name] [expr1=val1] ..."
)
return False
sim_file_path = args[0]
solution_name = args[1] if len(args) > 1 and args[1] != 'None' else None
solution_name = args[1] if len(args) > 1 and args[1] != "None" else None
# Parse expression updates
expression_updates = {}
for arg in args[2:]:
if '=' in arg:
name, value = arg.split('=', 1)
if "=" in arg:
name, value = arg.split("=", 1)
expression_updates[name] = float(value)
# Get working directory
working_dir = os.path.dirname(os.path.abspath(sim_file_path))
sim_filename = os.path.basename(sim_file_path)
print(f"[JOURNAL] " + "="*60)
print(f"[JOURNAL] " + "=" * 60)
print(f"[JOURNAL] NX SIMULATION SOLVER (Assembly FEM Workflow)")
print(f"[JOURNAL] " + "="*60)
print(f"[JOURNAL] " + "=" * 60)
print(f"[JOURNAL] Simulation: {sim_filename}")
print(f"[JOURNAL] Working directory: {working_dir}")
print(f"[JOURNAL] Solution: {solution_name or 'Solution 1'}")
@@ -226,7 +228,9 @@ def main(args):
# Set load options
theSession.Parts.LoadOptions.LoadLatest = False
theSession.Parts.LoadOptions.ComponentLoadMethod = NXOpen.LoadOptions.LoadMethod.FromDirectory
theSession.Parts.LoadOptions.ComponentLoadMethod = (
NXOpen.LoadOptions.LoadMethod.FromDirectory
)
theSession.Parts.LoadOptions.SetSearchDirectories([working_dir], [True])
theSession.Parts.LoadOptions.ComponentsToLoad = NXOpen.LoadOptions.LoadComponents.All
theSession.Parts.LoadOptions.PartLoadOption = NXOpen.LoadOptions.LoadOption.FullyLoad
@@ -240,7 +244,7 @@ def main(args):
pass
# Check for assembly FEM files
afm_files = [f for f in os.listdir(working_dir) if f.endswith('.afm')]
afm_files = [f for f in os.listdir(working_dir) if f.endswith(".afm")]
is_assembly = len(afm_files) > 0
if is_assembly and expression_updates:
@@ -262,11 +266,14 @@ def main(args):
except Exception as e:
print(f"[JOURNAL] FATAL ERROR: {e}")
import traceback
traceback.print_exc()
return False
def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expression_updates, working_dir):
def solve_assembly_fem_workflow(
theSession, sim_file_path, solution_name, expression_updates, working_dir
):
"""
Full assembly FEM workflow based on recorded NX journal.
@@ -285,8 +292,7 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
sim_file_full_path = os.path.join(working_dir, sim_filename)
print(f"[JOURNAL] Opening SIM file: {sim_filename}")
basePart, partLoadStatus = theSession.Parts.OpenActiveDisplay(
sim_file_full_path,
NXOpen.DisplayPartOption.AllowAdditional
sim_file_full_path, NXOpen.DisplayPartOption.AllowAdditional
)
partLoadStatus.Dispose()
@@ -330,7 +336,9 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
print(f"[JOURNAL] WARNING: M1_Blank_fem1_i.prt not found!")
# Load M1_Vertical_Support_Skeleton_fem1_i.prt (CRITICAL: idealized geometry for support)
skeleton_idealized_prt_path = os.path.join(working_dir, "M1_Vertical_Support_Skeleton_fem1_i.prt")
skeleton_idealized_prt_path = os.path.join(
working_dir, "M1_Vertical_Support_Skeleton_fem1_i.prt"
)
if os.path.exists(skeleton_idealized_prt_path):
print(f"[JOURNAL] Loading M1_Vertical_Support_Skeleton_fem1_i.prt...")
part3_skel, was_loaded = find_or_open_part(theSession, skeleton_idealized_prt_path)
@@ -347,11 +355,13 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
# Find and switch to M1_Blank part
try:
part3 = theSession.Parts.FindObject("M1_Blank")
markId3 = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Change Displayed Part")
markId3 = theSession.SetUndoMark(
NXOpen.Session.MarkVisibility.Visible, "Change Displayed Part"
)
status1, partLoadStatus3 = theSession.Parts.SetActiveDisplay(
part3,
NXOpen.DisplayPartOption.AllowAdditional,
NXOpen.PartDisplayPartWorkPartOption.UseLast
NXOpen.PartDisplayPartWorkPartOption.UseLast,
)
partLoadStatus3.Dispose()
@@ -366,10 +376,10 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
# Write expressions to a temp file and import (more reliable than editing one by one)
exp_file_path = os.path.join(working_dir, "_temp_expressions.exp")
with open(exp_file_path, 'w') as f:
with open(exp_file_path, "w") as f:
for expr_name, expr_value in expression_updates.items():
# Determine unit
if 'angle' in expr_name.lower() or 'vertical' in expr_name.lower():
if "angle" in expr_name.lower() or "vertical" in expr_name.lower():
unit_str = "Degrees"
else:
unit_str = "MilliMeter"
@@ -377,12 +387,13 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
print(f"[JOURNAL] {expr_name} = {expr_value} ({unit_str})")
print(f"[JOURNAL] Importing expressions from file...")
markId_import = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Import Expressions")
markId_import = theSession.SetUndoMark(
NXOpen.Session.MarkVisibility.Visible, "Import Expressions"
)
try:
expModified, errorMessages = workPart.Expressions.ImportFromFile(
exp_file_path,
NXOpen.ExpressionCollection.ImportMode.Replace
exp_file_path, NXOpen.ExpressionCollection.ImportMode.Replace
)
print(f"[JOURNAL] Expressions imported: {expModified} modified")
if errorMessages:
@@ -390,14 +401,18 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
# Update geometry after import
print(f"[JOURNAL] Rebuilding M1_Blank geometry...")
markId_update = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Invisible, "NX update")
markId_update = theSession.SetUndoMark(
NXOpen.Session.MarkVisibility.Invisible, "NX update"
)
nErrs = theSession.UpdateManager.DoUpdate(markId_update)
theSession.DeleteUndoMark(markId_update, "NX update")
print(f"[JOURNAL] M1_Blank geometry rebuilt ({nErrs} errors)")
# CRITICAL: Save M1_Blank after geometry update so FEM can read updated geometry
print(f"[JOURNAL] Saving M1_Blank...")
partSaveStatus_blank = workPart.Save(NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue)
partSaveStatus_blank = workPart.Save(
NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue
)
partSaveStatus_blank.Dispose()
print(f"[JOURNAL] M1_Blank saved")
@@ -445,11 +460,13 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
print(f"[JOURNAL] Updating {part_name}...")
linked_part = theSession.Parts.FindObject(part_name)
markId_linked = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, f"Update {part_name}")
markId_linked = theSession.SetUndoMark(
NXOpen.Session.MarkVisibility.Visible, f"Update {part_name}"
)
status_linked, partLoadStatus_linked = theSession.Parts.SetActiveDisplay(
linked_part,
NXOpen.DisplayPartOption.AllowAdditional,
NXOpen.PartDisplayPartWorkPartOption.UseLast
NXOpen.PartDisplayPartWorkPartOption.UseLast,
)
partLoadStatus_linked.Dispose()
@@ -457,14 +474,18 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
theSession.ApplicationSwitchImmediate("UG_APP_MODELING")
# Update to propagate linked expression changes
markId_linked_update = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Invisible, "NX update")
markId_linked_update = theSession.SetUndoMark(
NXOpen.Session.MarkVisibility.Invisible, "NX update"
)
nErrs_linked = theSession.UpdateManager.DoUpdate(markId_linked_update)
theSession.DeleteUndoMark(markId_linked_update, "NX update")
print(f"[JOURNAL] {part_name} geometry rebuilt ({nErrs_linked} errors)")
# CRITICAL: Save part after geometry update so FEM can read updated geometry
print(f"[JOURNAL] Saving {part_name}...")
partSaveStatus_linked = linked_part.Save(NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue)
partSaveStatus_linked = linked_part.Save(
NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue
)
partSaveStatus_linked.Dispose()
print(f"[JOURNAL] {part_name} saved")
@@ -482,7 +503,9 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
sim_part_name = os.path.splitext(sim_filename)[0] # e.g., "ASSY_M1_assyfem1_sim1"
print(f"[JOURNAL] Looking for sim part: {sim_part_name}")
markId_sim = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Change Displayed Part")
markId_sim = theSession.SetUndoMark(
NXOpen.Session.MarkVisibility.Visible, "Change Displayed Part"
)
try:
# First try to find it among loaded parts (like recorded journal)
@@ -490,7 +513,7 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
status_sim, partLoadStatus = theSession.Parts.SetActiveDisplay(
simPart1,
NXOpen.DisplayPartOption.AllowAdditional,
NXOpen.PartDisplayPartWorkPartOption.UseLast
NXOpen.PartDisplayPartWorkPartOption.UseLast,
)
partLoadStatus.Dispose()
print(f"[JOURNAL] Found and activated existing sim part")
@@ -498,8 +521,7 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
# Fallback: Open fresh if not found
print(f"[JOURNAL] Sim part not found, opening fresh: {sim_filename}")
basePart, partLoadStatus = theSession.Parts.OpenActiveDisplay(
sim_file_path,
NXOpen.DisplayPartOption.AllowAdditional
sim_file_path, NXOpen.DisplayPartOption.AllowAdditional
)
partLoadStatus.Dispose()
@@ -517,23 +539,29 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
print(f"[JOURNAL] Updating M1_Blank_fem1...")
try:
component2 = component1.FindObject("COMPONENT M1_Blank_fem1 1")
markId_fem1 = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Make Work Part")
markId_fem1 = theSession.SetUndoMark(
NXOpen.Session.MarkVisibility.Visible, "Make Work Part"
)
partLoadStatus5 = theSession.Parts.SetWorkComponent(
component2,
NXOpen.PartCollection.RefsetOption.Entire,
NXOpen.PartCollection.WorkComponentOption.Visible
NXOpen.PartCollection.WorkComponentOption.Visible,
)
workFemPart = theSession.Parts.BaseWork
partLoadStatus5.Dispose()
markId_update1 = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Update FE Model")
markId_update1 = theSession.SetUndoMark(
NXOpen.Session.MarkVisibility.Visible, "Update FE Model"
)
fEModel1 = workFemPart.FindObject("FEModel")
fEModel1.UpdateFemodel()
print(f"[JOURNAL] M1_Blank_fem1 updated")
# CRITICAL: Save FEM file after update to persist mesh changes
print(f"[JOURNAL] Saving M1_Blank_fem1...")
partSaveStatus_fem1 = workFemPart.Save(NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue)
partSaveStatus_fem1 = workFemPart.Save(
NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue
)
partSaveStatus_fem1.Dispose()
print(f"[JOURNAL] M1_Blank_fem1 saved")
except Exception as e:
@@ -543,23 +571,29 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
print(f"[JOURNAL] Updating M1_Vertical_Support_Skeleton_fem1...")
try:
component3 = component1.FindObject("COMPONENT M1_Vertical_Support_Skeleton_fem1 3")
markId_fem2 = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Make Work Part")
markId_fem2 = theSession.SetUndoMark(
NXOpen.Session.MarkVisibility.Visible, "Make Work Part"
)
partLoadStatus6 = theSession.Parts.SetWorkComponent(
component3,
NXOpen.PartCollection.RefsetOption.Entire,
NXOpen.PartCollection.WorkComponentOption.Visible
NXOpen.PartCollection.WorkComponentOption.Visible,
)
workFemPart = theSession.Parts.BaseWork
partLoadStatus6.Dispose()
markId_update2 = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Update FE Model")
markId_update2 = theSession.SetUndoMark(
NXOpen.Session.MarkVisibility.Visible, "Update FE Model"
)
fEModel2 = workFemPart.FindObject("FEModel")
fEModel2.UpdateFemodel()
print(f"[JOURNAL] M1_Vertical_Support_Skeleton_fem1 updated")
# CRITICAL: Save FEM file after update to persist mesh changes
print(f"[JOURNAL] Saving M1_Vertical_Support_Skeleton_fem1...")
partSaveStatus_fem2 = workFemPart.Save(NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue)
partSaveStatus_fem2 = workFemPart.Save(
NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue
)
partSaveStatus_fem2.Dispose()
print(f"[JOURNAL] M1_Vertical_Support_Skeleton_fem1 saved")
except Exception as e:
@@ -578,7 +612,7 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
partLoadStatus8 = theSession.Parts.SetWorkComponent(
component1,
NXOpen.PartCollection.RefsetOption.Entire,
NXOpen.PartCollection.WorkComponentOption.Visible
NXOpen.PartCollection.WorkComponentOption.Visible,
)
workAssyFemPart = theSession.Parts.BaseWork
displaySimPart = theSession.Parts.BaseDisplay
@@ -643,13 +677,17 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
elif numMerged == 0:
print(f"[JOURNAL] No nodes were merged (0 returned)")
if numDuplicates is None:
print(f"[JOURNAL] WARNING: IdentifyDuplicateNodes returned None - mesh may need display refresh")
print(
f"[JOURNAL] WARNING: IdentifyDuplicateNodes returned None - mesh may need display refresh"
)
else:
print(f"[JOURNAL] MergeDuplicateNodes returned None - batch mode limitation")
except Exception as merge_error:
print(f"[JOURNAL] MergeDuplicateNodes failed: {merge_error}")
if numDuplicates is None:
print(f"[JOURNAL] This combined with IdentifyDuplicateNodes=None suggests display issue")
print(
f"[JOURNAL] This combined with IdentifyDuplicateNodes=None suggests display issue"
)
theSession.SetUndoMarkName(markId_merge, "Duplicate Nodes")
duplicateNodesCheckBuilder1.Destroy()
@@ -658,6 +696,7 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
except Exception as e:
print(f"[JOURNAL] WARNING: Node merge: {e}")
import traceback
traceback.print_exc()
# ==========================================================================
@@ -673,7 +712,9 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
theSession.SetUndoMarkName(markId_labels, "Assembly Label Manager Dialog")
markId_labels2 = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Invisible, "Assembly Label Manager")
markId_labels2 = theSession.SetUndoMark(
NXOpen.Session.MarkVisibility.Invisible, "Assembly Label Manager"
)
# Set offsets for each FE model occurrence
# These offsets ensure unique node/element labels across components
@@ -720,7 +761,9 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
print(f"[JOURNAL] STEP 5b: Saving assembly FEM after all updates...")
try:
# Save the assembly FEM to persist all mesh updates and node merges
partSaveStatus_afem = workAssyFemPart.Save(NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue)
partSaveStatus_afem = workAssyFemPart.Save(
NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue
)
partSaveStatus_afem.Dispose()
print(f"[JOURNAL] Assembly FEM saved: {workAssyFemPart.Name}")
except Exception as e:
@@ -736,7 +779,7 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
partLoadStatus9 = theSession.Parts.SetWorkComponent(
NXOpen.Assemblies.Component.Null,
NXOpen.PartCollection.RefsetOption.Entire,
NXOpen.PartCollection.WorkComponentOption.Visible
NXOpen.PartCollection.WorkComponentOption.Visible,
)
workSimPart = theSession.Parts.BaseWork
partLoadStatus9.Dispose()
@@ -760,13 +803,15 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
psolutions1,
NXOpen.CAE.SimSolution.SolveOption.Solve,
NXOpen.CAE.SimSolution.SetupCheckOption.CompleteCheckAndOutputErrors,
NXOpen.CAE.SimSolution.SolveMode.Foreground # Use Foreground to ensure OP2 is complete
NXOpen.CAE.SimSolution.SolveMode.Foreground, # Use Foreground to ensure OP2 is complete
)
theSession.DeleteUndoMark(markId_solve2, None)
theSession.SetUndoMarkName(markId_solve, "Solve")
print(f"[JOURNAL] Solve completed: {numsolved} solved, {numfailed} failed, {numskipped} skipped")
print(
f"[JOURNAL] Solve completed: {numsolved} solved, {numfailed} failed, {numskipped} skipped"
)
# ==========================================================================
# STEP 7: SAVE ALL - Save all modified parts (FEM, SIM, PRT)
@@ -784,11 +829,14 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
except Exception as e:
print(f"[JOURNAL] ERROR solving: {e}")
import traceback
traceback.print_exc()
return False
def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_updates, working_dir):
def solve_simple_workflow(
theSession, sim_file_path, solution_name, expression_updates, working_dir
):
"""
Workflow for single-part simulations with optional expression updates.
@@ -802,8 +850,7 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
# Open the .sim file
basePart1, partLoadStatus1 = theSession.Parts.OpenActiveDisplay(
sim_file_path,
NXOpen.DisplayPartOption.AllowAdditional
sim_file_path, NXOpen.DisplayPartOption.AllowAdditional
)
partLoadStatus1.Dispose()
@@ -830,11 +877,11 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
part_type = type(part).__name__
# Skip FEM and SIM parts by type
if 'fem' in part_type.lower() or 'sim' in part_type.lower():
if "fem" in part_type.lower() or "sim" in part_type.lower():
continue
# Skip parts with _fem or _sim in name
if '_fem' in part_name or '_sim' in part_name:
if "_fem" in part_name or "_sim" in part_name:
continue
geom_part = part
@@ -845,25 +892,38 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
if geom_part is None:
print(f"[JOURNAL] Geometry part not loaded, searching for .prt file...")
for filename in os.listdir(working_dir):
if filename.endswith('.prt') and '_fem' not in filename.lower() and '_sim' not in filename.lower():
# Skip idealized parts (_i.prt), FEM parts, and SIM parts
if (
filename.endswith(".prt")
and "_fem" not in filename.lower()
and "_sim" not in filename.lower()
and "_i.prt" not in filename.lower()
):
prt_path = os.path.join(working_dir, filename)
print(f"[JOURNAL] Loading geometry part: {filename}")
try:
geom_part, partLoadStatus = theSession.Parts.Open(prt_path)
loaded_part, partLoadStatus = theSession.Parts.Open(prt_path)
partLoadStatus.Dispose()
print(f"[JOURNAL] Geometry part loaded: {geom_part.Name}")
break
# Check if load actually succeeded (Parts.Open can return None)
if loaded_part is not None:
geom_part = loaded_part
print(f"[JOURNAL] Geometry part loaded: {geom_part.Name}")
break
else:
print(f"[JOURNAL] WARNING: Parts.Open returned None for {filename}")
except Exception as e:
print(f"[JOURNAL] WARNING: Could not load {filename}: {e}")
if geom_part:
try:
# Switch to the geometry part for expression editing
markId_expr = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Update Expressions")
markId_expr = theSession.SetUndoMark(
NXOpen.Session.MarkVisibility.Visible, "Update Expressions"
)
status, partLoadStatus = theSession.Parts.SetActiveDisplay(
geom_part,
NXOpen.DisplayPartOption.AllowAdditional,
NXOpen.PartDisplayPartWorkPartOption.UseLast
NXOpen.PartDisplayPartWorkPartOption.UseLast,
)
partLoadStatus.Dispose()
@@ -874,10 +934,10 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
# Write expressions to temp file and import
exp_file_path = os.path.join(working_dir, "_temp_expressions.exp")
with open(exp_file_path, 'w') as f:
with open(exp_file_path, "w") as f:
for expr_name, expr_value in expression_updates.items():
# Determine unit based on name
if 'angle' in expr_name.lower():
if "angle" in expr_name.lower():
unit_str = "Degrees"
else:
unit_str = "MilliMeter"
@@ -886,8 +946,7 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
print(f"[JOURNAL] Importing expressions...")
expModified, errorMessages = workPart.Expressions.ImportFromFile(
exp_file_path,
NXOpen.ExpressionCollection.ImportMode.Replace
exp_file_path, NXOpen.ExpressionCollection.ImportMode.Replace
)
print(f"[JOURNAL] Expressions modified: {expModified}")
if errorMessages:
@@ -895,14 +954,19 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
# Update geometry
print(f"[JOURNAL] Rebuilding geometry...")
markId_update = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Invisible, "NX update")
markId_update = theSession.SetUndoMark(
NXOpen.Session.MarkVisibility.Invisible, "NX update"
)
nErrs = theSession.UpdateManager.DoUpdate(markId_update)
theSession.DeleteUndoMark(markId_update, "NX update")
print(f"[JOURNAL] Geometry rebuilt ({nErrs} errors)")
# Save geometry part
print(f"[JOURNAL] Saving geometry part...")
partSaveStatus_geom = workPart.Save(NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue)
partSaveStatus_geom = workPart.Save(
NXOpen.BasePart.SaveComponents.TrueValue,
NXOpen.BasePart.CloseAfterSave.FalseValue,
)
partSaveStatus_geom.Dispose()
# Clean up temp file
@@ -914,6 +978,7 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
except Exception as e:
print(f"[JOURNAL] ERROR updating expressions: {e}")
import traceback
traceback.print_exc()
else:
print(f"[JOURNAL] WARNING: Could not find geometry part for expression updates!")
@@ -928,13 +993,18 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
# The chain is: .prt (geometry) -> _i.prt (idealized) -> .fem (mesh)
idealized_part = None
for filename in os.listdir(working_dir):
if '_i.prt' in filename.lower():
if "_i.prt" in filename.lower():
idealized_path = os.path.join(working_dir, filename)
print(f"[JOURNAL] Loading idealized part: {filename}")
try:
idealized_part, partLoadStatus = theSession.Parts.Open(idealized_path)
loaded_part, partLoadStatus = theSession.Parts.Open(idealized_path)
partLoadStatus.Dispose()
print(f"[JOURNAL] Idealized part loaded: {idealized_part.Name}")
# Check if load actually succeeded (Parts.Open can return None)
if loaded_part is not None:
idealized_part = loaded_part
print(f"[JOURNAL] Idealized part loaded: {idealized_part.Name}")
else:
print(f"[JOURNAL] WARNING: Parts.Open returned None for idealized part")
except Exception as e:
print(f"[JOURNAL] WARNING: Could not load idealized part: {e}")
break
@@ -942,7 +1012,7 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
# Find the FEM part
fem_part = None
for part in theSession.Parts:
if '_fem' in part.Name.lower() or part.Name.lower().endswith('.fem'):
if "_fem" in part.Name.lower() or part.Name.lower().endswith(".fem"):
fem_part = part
print(f"[JOURNAL] Found FEM part: {part.Name}")
break
@@ -956,7 +1026,7 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
status, partLoadStatus = theSession.Parts.SetActiveDisplay(
fem_part,
NXOpen.DisplayPartOption.AllowAdditional,
NXOpen.PartDisplayPartWorkPartOption.SameAsDisplay # Critical fix!
NXOpen.PartDisplayPartWorkPartOption.SameAsDisplay, # Critical fix!
)
partLoadStatus.Dispose()
@@ -972,13 +1042,17 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
print(f"[JOURNAL] FE model updated")
# Save FEM
partSaveStatus_fem = workFemPart.Save(NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue)
partSaveStatus_fem = workFemPart.Save(
NXOpen.BasePart.SaveComponents.TrueValue,
NXOpen.BasePart.CloseAfterSave.FalseValue,
)
partSaveStatus_fem.Dispose()
print(f"[JOURNAL] FEM saved")
except Exception as e:
print(f"[JOURNAL] ERROR updating FEM: {e}")
import traceback
traceback.print_exc()
# =========================================================================
@@ -990,7 +1064,7 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
status, partLoadStatus = theSession.Parts.SetActiveDisplay(
workSimPart,
NXOpen.DisplayPartOption.AllowAdditional,
NXOpen.PartDisplayPartWorkPartOption.UseLast
NXOpen.PartDisplayPartWorkPartOption.UseLast,
)
partLoadStatus.Dispose()
@@ -1016,13 +1090,15 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
psolutions1,
NXOpen.CAE.SimSolution.SolveOption.Solve,
NXOpen.CAE.SimSolution.SetupCheckOption.CompleteCheckAndOutputErrors,
NXOpen.CAE.SimSolution.SolveMode.Foreground # Use Foreground to wait for completion
NXOpen.CAE.SimSolution.SolveMode.Foreground, # Use Foreground to wait for completion
)
theSession.DeleteUndoMark(markId_solve2, None)
theSession.SetUndoMarkName(markId_solve, "Solve")
print(f"[JOURNAL] Solve completed: {numsolved} solved, {numfailed} failed, {numskipped} skipped")
print(
f"[JOURNAL] Solve completed: {numsolved} solved, {numfailed} failed, {numskipped} skipped"
)
# Save all
try:
@@ -1035,6 +1111,6 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
return numfailed == 0
if __name__ == '__main__':
if __name__ == "__main__":
success = main(sys.argv[1:])
sys.exit(0 if success else 1)
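Both workflows above pick the unit string for each expression by keyword before writing the temp `.exp` file. A minimal standalone sketch of that heuristic (the helper name `infer_expression_unit` is hypothetical; NX is not required to run it, and the `vertical` keyword applies only in the assembly workflow):

```python
def infer_expression_unit(expr_name: str, assembly: bool = True) -> str:
    """Mirror the journal's keyword heuristic for expression units.

    Angle-like expressions are written as Degrees; everything else
    defaults to MilliMeter. The assembly workflow additionally treats
    'vertical' expressions as angles.
    """
    name = expr_name.lower()
    keywords = ("angle", "vertical") if assembly else ("angle",)
    if any(kw in name for kw in keywords):
        return "Degrees"
    return "MilliMeter"


print(infer_expression_unit("mirror_tilt_angle"))  # Degrees
print(infer_expression_unit("rib_thickness"))      # MilliMeter
```

Note the asymmetry: `vertical_support` maps to Degrees in the assembly path but MilliMeter in the simple path, which matters if the same expression names appear in both model types.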

View File

@@ -85,7 +85,7 @@
"created_by": {
"type": "string",
"description": "Who/what created the spec",
"enum": ["canvas", "claude", "api", "migration", "manual"]
"enum": ["canvas", "claude", "api", "migration", "manual", "dashboard_intake"]
},
"modified_by": {
"type": "string",
@@ -114,6 +114,17 @@
"engineering_context": {
"type": "string",
"description": "Real-world engineering scenario"
},
"status": {
"type": "string",
"description": "Study lifecycle status",
"enum": ["draft", "introspected", "configured", "validated", "ready", "running", "completed", "failed"],
"default": "draft"
},
"topic": {
"type": "string",
"description": "Topic folder for grouping related studies",
"pattern": "^[A-Za-z0-9_]+$"
}
}
},
@@ -215,6 +226,124 @@
"type": "boolean"
}
}
},
"introspection": {
"$ref": "#/definitions/introspection_data",
"description": "Model introspection results from intake workflow"
}
}
},
"introspection_data": {
"type": "object",
"description": "Model introspection results stored in the spec",
"properties": {
"timestamp": {
"type": "string",
"format": "date-time",
"description": "When introspection was run"
},
"solver_type": {
"type": "string",
"description": "Detected solver type"
},
"mass_kg": {
"type": "number",
"description": "Mass from expressions or mass properties"
},
"volume_mm3": {
"type": "number",
"description": "Volume from mass properties"
},
"expressions": {
"type": "array",
"description": "Discovered NX expressions",
"items": {
"$ref": "#/definitions/expression_info"
}
},
"baseline": {
"$ref": "#/definitions/baseline_data",
"description": "Baseline FEA solve results"
},
"warnings": {
"type": "array",
"description": "Warnings from introspection",
"items": {
"type": "string"
}
}
}
},
"expression_info": {
"type": "object",
"description": "Information about an NX expression from introspection",
"required": ["name"],
"properties": {
"name": {
"type": "string",
"description": "Expression name in NX"
},
"value": {
"type": "number",
"description": "Current value"
},
"units": {
"type": "string",
"description": "Physical units"
},
"formula": {
"type": "string",
"description": "Expression formula if any"
},
"is_candidate": {
"type": "boolean",
"description": "Whether this is a design variable candidate",
"default": false
},
"confidence": {
"type": "number",
"description": "Confidence that this is a design variable (0.0 to 1.0)",
"minimum": 0,
"maximum": 1
}
}
},
"baseline_data": {
"type": "object",
"description": "Results from baseline FEA solve",
"properties": {
"timestamp": {
"type": "string",
"format": "date-time",
"description": "When baseline was run"
},
"solve_time_seconds": {
"type": "number",
"description": "How long the solve took"
},
"mass_kg": {
"type": "number",
"description": "Computed mass from BDF/FEM"
},
"max_displacement_mm": {
"type": "number",
"description": "Max displacement result"
},
"max_stress_mpa": {
"type": "number",
"description": "Max von Mises stress"
},
"success": {
"type": "boolean",
"description": "Whether baseline solve succeeded",
"default": true
},
"error": {
"type": "string",
"description": "Error message if failed"
}
}
},
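A hypothetical instance of the new `introspection` block as it might land in `atomizer_spec.json` (field names follow the schema definitions above; all values are illustrative, not from a real study):

```python
# Illustrative introspection payload matching the introspection_data,
# expression_info, and baseline_data definitions.
introspection = {
    "timestamp": "2026-01-22T10:15:00Z",
    "solver_type": "NX Nastran",
    "mass_kg": 42.7,
    "volume_mm3": 5.3e6,
    "expressions": [
        {
            "name": "rib_thickness",       # required by expression_info
            "value": 6.0,
            "units": "mm",
            "is_candidate": True,
            "confidence": 0.9,             # must be in [0, 1]
        },
    ],
    "baseline": {
        "solve_time_seconds": 311.0,
        "max_displacement_mm": 0.012,
        "success": True,
    },
    "warnings": [],
}

# Minimal sanity checks mirroring the schema's constraints.
for expr in introspection["expressions"]:
    assert "name" in expr
    assert 0.0 <= expr.get("confidence", 0.0) <= 1.0
```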

View File

@@ -0,0 +1,31 @@
"""
Atomizer Validation System
==========================
Validates study configuration before optimization starts.
Components:
- ValidationGate: Main orchestrator for validation
- SpecChecker: Validates atomizer_spec.json
- TestTrialRunner: Runs 2-3 test trials to verify setup
Usage:
from optimization_engine.validation import ValidationGate
gate = ValidationGate(study_dir)
result = gate.validate(run_test_trials=True)
if result.passed:
gate.approve() # Start optimization
"""
from .gate import ValidationGate, ValidationResult, TestTrialResult
from .checker import SpecChecker, ValidationIssue
__all__ = [
"ValidationGate",
"ValidationResult",
"TestTrialResult",
"SpecChecker",
"ValidationIssue",
]

View File

@@ -0,0 +1,454 @@
"""
Specification Checker
=====================
Validates atomizer_spec.json (or optimization_config.json) for:
- Schema compliance
- Semantic correctness
- Anti-pattern detection
- Expression existence
This catches configuration errors BEFORE wasting time on failed trials.
"""
from __future__ import annotations
import json
import logging
from dataclasses import dataclass, field
from enum import Enum
from pathlib import Path
from typing import List, Dict, Any, Optional
logger = logging.getLogger(__name__)
class IssueSeverity(str, Enum):
"""Severity level for validation issues."""
ERROR = "error" # Must fix before proceeding
WARNING = "warning" # Should review, but can proceed
INFO = "info" # Informational note
@dataclass
class ValidationIssue:
"""A single validation issue."""
severity: IssueSeverity
code: str
message: str
path: Optional[str] = None # JSON path to the issue
suggestion: Optional[str] = None
def __str__(self) -> str:
prefix = {
IssueSeverity.ERROR: "[ERROR]",
IssueSeverity.WARNING: "[WARN]",
IssueSeverity.INFO: "[INFO]",
}[self.severity]
location = f" at {self.path}" if self.path else ""
return f"{prefix} {self.message}{location}"
@dataclass
class CheckResult:
"""Result of running the spec checker."""
valid: bool
issues: List[ValidationIssue] = field(default_factory=list)
@property
def errors(self) -> List[ValidationIssue]:
return [i for i in self.issues if i.severity == IssueSeverity.ERROR]
@property
def warnings(self) -> List[ValidationIssue]:
return [i for i in self.issues if i.severity == IssueSeverity.WARNING]
def add_error(self, code: str, message: str, path: str = None, suggestion: str = None):
self.issues.append(
ValidationIssue(
severity=IssueSeverity.ERROR,
code=code,
message=message,
path=path,
suggestion=suggestion,
)
)
self.valid = False
def add_warning(self, code: str, message: str, path: str = None, suggestion: str = None):
self.issues.append(
ValidationIssue(
severity=IssueSeverity.WARNING,
code=code,
message=message,
path=path,
suggestion=suggestion,
)
)
def add_info(self, code: str, message: str, path: str = None):
self.issues.append(
ValidationIssue(
severity=IssueSeverity.INFO,
code=code,
message=message,
path=path,
)
)
class SpecChecker:
"""
Validates study specification files.
Checks:
1. Required fields present
2. Design variable bounds valid
3. Expressions exist in model (if introspection available)
4. Extractors available for objectives/constraints
5. Anti-patterns (mass minimization without constraints, etc.)
"""
# Known extractors
KNOWN_EXTRACTORS = {
"extract_mass_from_bdf",
"extract_part_mass",
"extract_displacement",
"extract_solid_stress",
"extract_principal_stress",
"extract_frequency",
"extract_strain_energy",
"extract_temperature",
"extract_zernike_from_op2",
}
def __init__(
self,
spec_path: Optional[Path] = None,
available_expressions: Optional[List[str]] = None,
):
"""
Initialize the checker.
Args:
spec_path: Path to spec file (atomizer_spec.json or optimization_config.json)
available_expressions: List of expression names from introspection
"""
self.spec_path = spec_path
self.available_expressions = available_expressions or []
self.spec: Dict[str, Any] = {}
def check(self, spec_data: Optional[Dict[str, Any]] = None) -> CheckResult:
"""
Run all validation checks.
Args:
spec_data: Spec dict (or load from spec_path if not provided)
Returns:
CheckResult with all issues found
"""
result = CheckResult(valid=True)
# Load spec if not provided
if spec_data:
self.spec = spec_data
elif self.spec_path and self.spec_path.exists():
with open(self.spec_path) as f:
self.spec = json.load(f)
else:
result.add_error("SPEC_NOT_FOUND", "No specification file found")
return result
# Run checks
self._check_required_fields(result)
self._check_design_variables(result)
self._check_objectives(result)
self._check_constraints(result)
self._check_extractors(result)
self._check_anti_patterns(result)
self._check_files(result)
return result
def _check_required_fields(self, result: CheckResult) -> None:
"""Check that required fields are present."""
# Check for design variables
dvs = self.spec.get("design_variables", [])
if not dvs:
result.add_error(
"NO_DESIGN_VARIABLES",
"No design variables defined",
suggestion="Add at least one design variable to optimize",
)
# Check for objectives
objectives = self.spec.get("objectives", [])
if not objectives:
result.add_error(
"NO_OBJECTIVES",
"No objectives defined",
suggestion="Define at least one objective (e.g., minimize mass)",
)
# Check for simulation settings
sim = self.spec.get("simulation", {})
if not sim.get("sim_file"):
result.add_warning(
"NO_SIM_FILE", "No simulation file specified", path="simulation.sim_file"
)
def _check_design_variables(self, result: CheckResult) -> None:
"""Check design variable definitions."""
dvs = self.spec.get("design_variables", [])
for i, dv in enumerate(dvs):
param = dv.get("parameter", dv.get("expression_name", dv.get("name", f"dv_{i}")))
bounds = dv.get("bounds", [])
path = f"design_variables[{i}]"
# Handle both formats: [min, max] or {"min": x, "max": y}
if isinstance(bounds, dict):
min_val = bounds.get("min")
max_val = bounds.get("max")
elif isinstance(bounds, (list, tuple)) and len(bounds) == 2:
min_val, max_val = bounds
else:
result.add_error(
"INVALID_BOUNDS",
f"Design variable '{param}' has invalid bounds format",
path=path,
suggestion="Bounds must be [min, max] or {min: x, max: y}",
)
continue
# Convert to float if strings
try:
min_val = float(min_val)
max_val = float(max_val)
except (TypeError, ValueError):
result.add_error(
"INVALID_BOUNDS_TYPE",
f"Design variable '{param}' bounds must be numeric",
path=path,
)
continue
# Check bounds order
if min_val >= max_val:
result.add_error(
"BOUNDS_INVERTED",
f"Design variable '{param}': min ({min_val}) >= max ({max_val})",
path=path,
suggestion="Ensure min < max",
)
# Check for very wide bounds
if max_val > 0 and min_val > 0:
ratio = max_val / min_val
if ratio > 100:
result.add_warning(
"BOUNDS_TOO_WIDE",
f"Design variable '{param}' has very wide bounds (ratio: {ratio:.1f}x)",
path=path,
suggestion="Consider narrowing bounds for faster convergence",
)
# Check for very narrow bounds
if max_val > 0 and min_val > 0:
ratio = max_val / min_val
if ratio < 1.1:
result.add_warning(
"BOUNDS_TOO_NARROW",
f"Design variable '{param}' has very narrow bounds (ratio: {ratio:.2f}x)",
path=path,
suggestion="Consider widening bounds to explore more design space",
)
# Check expression exists (if introspection available)
if self.available_expressions and param not in self.available_expressions:
result.add_error(
"EXPRESSION_NOT_FOUND",
f"Expression '{param}' not found in model",
path=path,
suggestion=f"Available expressions: {', '.join(self.available_expressions[:5])}...",
)
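The bounds handling above can be exercised in isolation. This sketch (the `classify_bounds` helper is hypothetical) condenses the same rules: both accepted bounds formats, inversion, and the 100x-wide / 1.1x-narrow ratio heuristics:

```python
def classify_bounds(bounds):
    """Classify a design-variable bounds spec the way the checker does.

    Accepts [min, max] or {"min": x, "max": y}; returns the issue code
    that would be raised, or "OK" if the bounds pass every check.
    """
    if isinstance(bounds, dict):
        lo, hi = bounds.get("min"), bounds.get("max")
    elif isinstance(bounds, (list, tuple)) and len(bounds) == 2:
        lo, hi = bounds
    else:
        return "INVALID_BOUNDS"
    try:
        lo, hi = float(lo), float(hi)
    except (TypeError, ValueError):
        return "INVALID_BOUNDS_TYPE"
    if lo >= hi:
        return "BOUNDS_INVERTED"
    if lo > 0 and hi / lo > 100:
        return "BOUNDS_TOO_WIDE"    # warning in the real checker
    if lo > 0 and hi / lo < 1.1:
        return "BOUNDS_TOO_NARROW"  # warning in the real checker
    return "OK"


print(classify_bounds([2.0, 10.0]))           # OK
print(classify_bounds({"min": 5, "max": 5}))  # BOUNDS_INVERTED
print(classify_bounds([0.01, 50.0]))          # BOUNDS_TOO_WIDE
```

Note the ratio heuristics only fire for strictly positive bounds, so ranges spanning zero (e.g. an angle from -5 to +5) are never flagged as too wide or too narrow.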
def _check_objectives(self, result: CheckResult) -> None:
"""Check objective definitions."""
objectives = self.spec.get("objectives", [])
for i, obj in enumerate(objectives):
name = obj.get("name", f"objective_{i}")
# Handle both formats: "goal" or "direction"
goal = obj.get("goal", obj.get("direction", "")).lower()
path = f"objectives[{i}]"
# Check goal is valid
if goal not in ("minimize", "maximize"):
result.add_error(
"INVALID_GOAL",
f"Objective '{name}' has invalid goal: '{goal}'",
path=path,
suggestion="Use 'minimize' or 'maximize'",
)
# Check extraction is defined
extraction = obj.get("extraction", {})
if not extraction.get("action"):
result.add_warning(
"NO_EXTRACTOR",
f"Objective '{name}' has no extractor specified",
path=path,
)
def _check_constraints(self, result: CheckResult) -> None:
"""Check constraint definitions."""
constraints = self.spec.get("constraints", [])
for i, const in enumerate(constraints):
name = const.get("name", f"constraint_{i}")
const_type = const.get("type", "").lower()
threshold = const.get("threshold")
path = f"constraints[{i}]"
# Check type is valid
if const_type not in ("less_than", "greater_than", "equal_to"):
result.add_warning(
"INVALID_CONSTRAINT_TYPE",
f"Constraint '{name}' has unusual type: '{const_type}'",
path=path,
suggestion="Use 'less_than' or 'greater_than'",
)
# Check threshold is defined
if threshold is None:
result.add_error(
"NO_THRESHOLD",
f"Constraint '{name}' has no threshold defined",
path=path,
)
def _check_extractors(self, result: CheckResult) -> None:
"""Check that referenced extractors exist."""
# Check objective extractors
for obj in self.spec.get("objectives", []):
extraction = obj.get("extraction", {})
action = extraction.get("action", "")
if action and action not in self.KNOWN_EXTRACTORS:
result.add_warning(
"UNKNOWN_EXTRACTOR",
f"Extractor '{action}' is not in the standard library",
suggestion="Ensure custom extractor is available",
)
# Check constraint extractors
for const in self.spec.get("constraints", []):
extraction = const.get("extraction", {})
action = extraction.get("action", "")
if action and action not in self.KNOWN_EXTRACTORS:
result.add_warning(
"UNKNOWN_EXTRACTOR",
f"Extractor '{action}' is not in the standard library",
)
def _check_anti_patterns(self, result: CheckResult) -> None:
"""Check for common optimization anti-patterns."""
objectives = self.spec.get("objectives", [])
constraints = self.spec.get("constraints", [])
# Anti-pattern: Mass minimization without stress/displacement constraints
        has_mass_objective = any(
            "mass" in obj.get("name", "").lower()
            and obj.get("goal", obj.get("direction", "")).lower() == "minimize"
            for obj in objectives
        )
has_structural_constraint = any(
any(
kw in const.get("name", "").lower()
for kw in ["stress", "displacement", "deflection"]
)
for const in constraints
)
if has_mass_objective and not has_structural_constraint:
result.add_warning(
"MASS_NO_CONSTRAINT",
"Mass minimization without structural constraints",
suggestion="Add stress or displacement constraints to prevent over-optimization",
)
# Anti-pattern: Too many design variables for trial count
n_dvs = len(self.spec.get("design_variables", []))
n_trials = self.spec.get("optimization_settings", {}).get("n_trials", 100)
if n_dvs > 0 and n_trials / n_dvs < 10:
result.add_warning(
"LOW_TRIALS_PER_DV",
f"Only {n_trials / n_dvs:.1f} trials per design variable",
suggestion=f"Consider increasing trials to at least {n_dvs * 20} for better coverage",
)
# Anti-pattern: Too many objectives
n_objectives = len(objectives)
if n_objectives > 3:
result.add_warning(
"TOO_MANY_OBJECTIVES",
f"{n_objectives} objectives may lead to sparse Pareto front",
suggestion="Consider consolidating or using weighted objectives",
)
def _check_files(self, result: CheckResult) -> None:
"""Check that referenced files exist."""
if not self.spec_path:
return
study_dir = self.spec_path.parent.parent # Assuming spec is in 1_setup/
sim = self.spec.get("simulation", {})
sim_file = sim.get("sim_file")
if sim_file:
# Check multiple possible locations
possible_paths = [
study_dir / "1_model" / sim_file,
study_dir / "1_setup" / "model" / sim_file,
study_dir / sim_file,
]
found = any(p.exists() for p in possible_paths)
if not found:
result.add_error(
"SIM_FILE_NOT_FOUND",
f"Simulation file not found: {sim_file}",
path="simulation.sim_file",
suggestion="Ensure model files are copied to study directory",
)
def validate_spec(spec_path: Path, expressions: List[str] = None) -> CheckResult:
"""
Convenience function to validate a spec file.
Args:
spec_path: Path to spec file
expressions: List of available expressions (from introspection)
Returns:
CheckResult with validation issues
"""
checker = SpecChecker(spec_path, expressions)
return checker.check()
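
For orientation, the `LOW_TRIALS_PER_DV` heuristic above is easy to trace by hand. A minimal sketch with an illustrative spec dict (hypothetical values, not a real study; the key names mirror those read by `SpecChecker`):

```python
# Illustrative spec fragment: 8 design variables, 50 trials requested.
spec = {
    "design_variables": [{"name": f"dv_{i}"} for i in range(8)],
    "optimization_settings": {"n_trials": 50},
}

n_dvs = len(spec["design_variables"])
n_trials = spec["optimization_settings"]["n_trials"]

trials_per_dv = n_trials / n_dvs          # 50 / 8 = 6.25
fires = n_dvs > 0 and trials_per_dv < 10  # same threshold as the checker
suggested = n_dvs * 20                    # suggested minimum in the warning

print(trials_per_dv, fires, suggested)    # 6.25 True 160
```

With fewer than roughly 10 trials per design variable, the sampler cannot cover the design space, which is why the warning suggests at least `n_dvs * 20`.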


@@ -0,0 +1,508 @@
"""
Validation Gate
===============
The final checkpoint before optimization begins.
1. Validates the study specification
2. Runs 2-3 test trials to verify:
- Parameters actually update the model
- Mesh regenerates correctly
- Extractors work
- Results are different (not stuck)
3. Estimates runtime
4. Gets user approval
This is CRITICAL for catching the "mesh not updating" issue that
wastes hours of optimization time.
"""
from __future__ import annotations
import json
import logging
import random
import time
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path
from typing import Optional, List, Dict, Any, Callable
import numpy as np
from .checker import SpecChecker, CheckResult, IssueSeverity
logger = logging.getLogger(__name__)
@dataclass
class TestTrialResult:
"""Result of a single test trial."""
trial_number: int
parameters: Dict[str, float]
objectives: Dict[str, float]
constraints: Dict[str, float] = field(default_factory=dict)
solve_time_seconds: float = 0.0
success: bool = False
error: Optional[str] = None
def to_dict(self) -> Dict[str, Any]:
return {
"trial_number": self.trial_number,
"parameters": self.parameters,
"objectives": self.objectives,
"constraints": self.constraints,
"solve_time_seconds": self.solve_time_seconds,
"success": self.success,
"error": self.error,
}
@dataclass
class ValidationResult:
"""Complete validation result."""
passed: bool
timestamp: datetime = field(default_factory=datetime.now)
# Spec validation
spec_check: Optional[CheckResult] = None
# Test trials
test_trials: List[TestTrialResult] = field(default_factory=list)
results_vary: bool = False
variance_by_objective: Dict[str, float] = field(default_factory=dict)
# Runtime estimates
avg_solve_time: Optional[float] = None
estimated_total_runtime: Optional[float] = None
# Summary
errors: List[str] = field(default_factory=list)
warnings: List[str] = field(default_factory=list)
def add_error(self, message: str):
self.errors.append(message)
self.passed = False
def add_warning(self, message: str):
self.warnings.append(message)
def get_summary(self) -> str:
"""Get human-readable summary."""
lines = []
if self.passed:
lines.append("VALIDATION PASSED")
else:
lines.append("VALIDATION FAILED")
lines.append(f"\nSpec Validation:")
if self.spec_check:
lines.append(f" Errors: {len(self.spec_check.errors)}")
lines.append(f" Warnings: {len(self.spec_check.warnings)}")
lines.append(f"\nTest Trials:")
lines.append(
f" Completed: {len([t for t in self.test_trials if t.success])}/{len(self.test_trials)}"
)
lines.append(f" Results Vary: {'Yes' if self.results_vary else 'NO - PROBLEM!'}")
if self.variance_by_objective:
lines.append(" Variance by Objective:")
for obj, var in self.variance_by_objective.items():
lines.append(f" {obj}: {var:.6f}")
if self.avg_solve_time:
lines.append(f"\nRuntime Estimate:")
lines.append(f" Avg solve time: {self.avg_solve_time:.1f}s")
if self.estimated_total_runtime:
hours = self.estimated_total_runtime / 3600
lines.append(f" Est. total: {hours:.1f} hours")
return "\n".join(lines)
def to_dict(self) -> Dict[str, Any]:
return {
"passed": self.passed,
"timestamp": self.timestamp.isoformat(),
"spec_errors": len(self.spec_check.errors) if self.spec_check else 0,
"spec_warnings": len(self.spec_check.warnings) if self.spec_check else 0,
"test_trials": [t.to_dict() for t in self.test_trials],
"results_vary": self.results_vary,
"variance_by_objective": self.variance_by_objective,
"avg_solve_time": self.avg_solve_time,
"estimated_total_runtime": self.estimated_total_runtime,
"errors": self.errors,
"warnings": self.warnings,
}
class ValidationGate:
"""
Validates study setup before optimization.
This is the critical checkpoint that prevents wasted optimization time
by catching issues like:
- Missing files
- Invalid bounds
- Mesh not updating (all results identical)
- Broken extractors
"""
def __init__(
self,
study_dir: Path,
progress_callback: Optional[Callable[[str, float], None]] = None,
):
"""
Initialize the validation gate.
Args:
study_dir: Path to the study directory
progress_callback: Optional callback for progress updates
"""
self.study_dir = Path(study_dir)
self.progress_callback = progress_callback or (lambda m, p: None)
# Find spec file
self.spec_path = self._find_spec_path()
self.spec: Dict[str, Any] = {}
if self.spec_path and self.spec_path.exists():
with open(self.spec_path) as f:
self.spec = json.load(f)
def _find_spec_path(self) -> Optional[Path]:
"""Find the specification file."""
# Try atomizer_spec.json first (v2.0)
candidates = [
self.study_dir / "atomizer_spec.json",
self.study_dir / "1_setup" / "atomizer_spec.json",
self.study_dir / "optimization_config.json",
self.study_dir / "1_setup" / "optimization_config.json",
]
for path in candidates:
if path.exists():
return path
return None
def validate(
self,
run_test_trials: bool = True,
n_test_trials: int = 3,
available_expressions: Optional[List[str]] = None,
) -> ValidationResult:
"""
Run full validation.
Args:
run_test_trials: Whether to run test FEA solves
n_test_trials: Number of test trials (2-3 recommended)
available_expressions: Expression names from introspection
Returns:
ValidationResult with all findings
"""
result = ValidationResult(passed=True)
logger.info(f"Validating study: {self.study_dir.name}")
self._progress("Starting validation...", 0.0)
# Step 1: Check spec file exists
if not self.spec_path:
result.add_error("No specification file found")
return result
# Step 2: Validate spec
self._progress("Validating specification...", 0.1)
checker = SpecChecker(self.spec_path, available_expressions)
result.spec_check = checker.check(self.spec)
# Add spec errors to result
for issue in result.spec_check.errors:
result.add_error(str(issue))
for issue in result.spec_check.warnings:
result.add_warning(str(issue))
        # Stop if the spec has errors; warnings alone do not block validation
if result.spec_check.errors:
self._progress("Validation failed: spec errors", 1.0)
return result
# Step 3: Run test trials
if run_test_trials:
self._progress("Running test trials...", 0.2)
self._run_test_trials(result, n_test_trials)
# Step 4: Calculate estimates
self._progress("Calculating estimates...", 0.9)
self._calculate_estimates(result)
self._progress("Validation complete", 1.0)
return result
def _progress(self, message: str, percent: float):
"""Report progress."""
logger.info(f"[{percent * 100:.0f}%] {message}")
self.progress_callback(message, percent)
def _run_test_trials(self, result: ValidationResult, n_trials: int) -> None:
"""Run test trials to verify setup."""
try:
from optimization_engine.nx.solver import NXSolver
except ImportError:
result.add_warning("NXSolver not available - skipping test trials")
return
# Get design variables
design_vars = self.spec.get("design_variables", [])
if not design_vars:
result.add_error("No design variables to test")
return
# Get model directory
model_dir = self._find_model_dir()
if not model_dir:
result.add_error("Model directory not found")
return
# Get sim file
sim_file = self._find_sim_file(model_dir)
if not sim_file:
result.add_error("Simulation file not found")
return
solver = NXSolver()
for i in range(n_trials):
self._progress(f"Running test trial {i + 1}/{n_trials}...", 0.2 + (0.6 * i / n_trials))
trial_result = TestTrialResult(trial_number=i + 1, parameters={}, objectives={})
# Generate random parameters within bounds
params = {}
for dv in design_vars:
param_name = dv.get("parameter", dv.get("name"))
bounds = dv.get("bounds", [0, 1])
# Use random value within bounds
value = random.uniform(bounds[0], bounds[1])
params[param_name] = value
trial_result.parameters = params
try:
start_time = time.time()
# Run simulation
solve_result = solver.run_simulation(
sim_file=sim_file,
working_dir=model_dir,
expression_updates=params,
cleanup=True,
)
trial_result.solve_time_seconds = time.time() - start_time
if solve_result.get("success"):
trial_result.success = True
# Extract results
op2_file = solve_result.get("op2_file")
if op2_file:
objectives = self._extract_objectives(Path(op2_file), model_dir)
trial_result.objectives = objectives
else:
trial_result.success = False
trial_result.error = solve_result.get("error", "Unknown error")
except Exception as e:
trial_result.success = False
trial_result.error = str(e)
logger.error(f"Test trial {i + 1} failed: {e}")
result.test_trials.append(trial_result)
# Check if results vary
self._check_results_variance(result)
def _find_model_dir(self) -> Optional[Path]:
"""Find the model directory."""
candidates = [
self.study_dir / "1_model",
self.study_dir / "1_setup" / "model",
self.study_dir,
]
for path in candidates:
if path.exists() and list(path.glob("*.sim")):
return path
return None
def _find_sim_file(self, model_dir: Path) -> Optional[Path]:
"""Find the simulation file."""
# From spec
sim = self.spec.get("simulation", {})
sim_name = sim.get("sim_file")
if sim_name:
sim_path = model_dir / sim_name
if sim_path.exists():
return sim_path
# Search for .sim files
sim_files = list(model_dir.glob("*.sim"))
if sim_files:
return sim_files[0]
return None
def _extract_objectives(self, op2_file: Path, model_dir: Path) -> Dict[str, float]:
"""Extract objective values from results."""
objectives = {}
# Extract based on configured objectives
for obj in self.spec.get("objectives", []):
name = obj.get("name", "objective")
extraction = obj.get("extraction", {})
action = extraction.get("action", "")
try:
if "mass" in action.lower():
from optimization_engine.extractors.bdf_mass_extractor import (
extract_mass_from_bdf,
)
dat_files = list(model_dir.glob("*.dat"))
if dat_files:
objectives[name] = extract_mass_from_bdf(str(dat_files[0]))
elif "displacement" in action.lower():
from optimization_engine.extractors.extract_displacement import (
extract_displacement,
)
result = extract_displacement(op2_file, subcase=1)
objectives[name] = result.get("max_displacement", 0)
elif "stress" in action.lower():
from optimization_engine.extractors.extract_von_mises_stress import (
extract_solid_stress,
)
result = extract_solid_stress(op2_file, subcase=1)
objectives[name] = result.get("max_von_mises", 0)
except Exception as e:
logger.debug(f"Failed to extract {name}: {e}")
return objectives
    def _check_results_variance(self, result: ValidationResult) -> None:
        """Check if test trial results vary (indicating mesh is updating)."""
        successful_trials = [t for t in result.test_trials if t.success]
        if len(successful_trials) < 2:
            result.add_warning("Not enough successful trials to check variance")
            return
        # Track every objective; a single stuck objective marks the study as
        # not varying. (Setting results_vary inside the loop would let the
        # last objective checked silently overwrite earlier findings.)
        any_varies = False
        any_stuck = False
        for obj_name in successful_trials[0].objectives.keys():
            values = [t.objectives.get(obj_name, 0) for t in successful_trials]
            variance = np.var(values)
            result.variance_by_objective[obj_name] = variance
            mean_val = np.mean(values)
            if mean_val != 0:
                cv = np.sqrt(variance) / abs(mean_val)  # Coefficient of variation
                if cv < 0.001:  # Less than 0.1% variation
                    result.add_error(
                        f"Results for '{obj_name}' are nearly identical (CV={cv:.6f}). "
                        "The mesh may not be updating!"
                    )
                    any_stuck = True
                else:
                    any_varies = True
            else:
                # Can't calculate CV if mean is 0; fall back to raw variance
                if variance < 1e-10:
                    result.add_warning(f"Results for '{obj_name}' show no variation")
                else:
                    any_varies = True
        result.results_vary = any_varies and not any_stuck
        # Default to True if we couldn't check any objective
        if not result.variance_by_objective:
            result.results_vary = True
def _calculate_estimates(self, result: ValidationResult) -> None:
"""Calculate runtime estimates."""
successful_trials = [t for t in result.test_trials if t.success]
if successful_trials:
solve_times = [t.solve_time_seconds for t in successful_trials]
result.avg_solve_time = np.mean(solve_times)
# Get total trials from spec
n_trials = self.spec.get("optimization_settings", {}).get("n_trials", 100)
result.estimated_total_runtime = result.avg_solve_time * n_trials
def approve(self) -> bool:
"""
Mark the study as approved for optimization.
Creates an approval file to indicate validation passed.
"""
approval_file = self.study_dir / ".validation_approved"
try:
approval_file.write_text(datetime.now().isoformat())
logger.info(f"Study approved: {self.study_dir.name}")
return True
except Exception as e:
logger.error(f"Failed to approve: {e}")
return False
def is_approved(self) -> bool:
"""Check if study has been approved."""
approval_file = self.study_dir / ".validation_approved"
return approval_file.exists()
def save_result(self, result: ValidationResult) -> Path:
"""Save validation result to file."""
output_path = self.study_dir / "validation_result.json"
with open(output_path, "w") as f:
json.dump(result.to_dict(), f, indent=2)
return output_path
def validate_study(
study_dir: Path,
run_test_trials: bool = True,
n_test_trials: int = 3,
) -> ValidationResult:
"""
Convenience function to validate a study.
Args:
study_dir: Path to study directory
run_test_trials: Whether to run test FEA solves
n_test_trials: Number of test trials
Returns:
ValidationResult
"""
gate = ValidationGate(study_dir)
return gate.validate(run_test_trials=run_test_trials, n_test_trials=n_test_trials)
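
The coefficient-of-variation check in `_check_results_variance` is the heart of the "mesh not updating" detection. The same math can be sketched standalone (`results_vary` here is a hypothetical helper, not part of the gate; the 0.001 threshold matches the code above, and the values are synthetic):

```python
import numpy as np

def results_vary(values, cv_threshold=0.001):
    """Return (varies, cv) for one objective's values across test trials."""
    variance = np.var(values)
    mean_val = np.mean(values)
    if mean_val == 0:
        # CV is undefined at zero mean; fall back to raw variance
        return variance >= 1e-10, None
    cv = np.sqrt(variance) / abs(mean_val)  # coefficient of variation
    return cv >= cv_threshold, cv

# Stuck mesh: three trials produce near-identical mass values
print(results_vary([12.5000, 12.5001, 12.5000]))  # varies is False
# Healthy setup: values change when the parameters change
print(results_vary([12.5, 14.1, 11.8]))           # varies is True
```

A CV below 0.1% across randomized parameters almost always means the geometry or mesh is not regenerating between solves, which is exactly the failure mode the gate exists to catch before hours of optimization are wasted.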


@@ -4,7 +4,7 @@
"extraction_method": "ZernikeOPD_Annular",
"inner_radius_mm": 135.75,
"objectives_note": "Mass NOT in objective - WFE only",
"total_trials": 169,
"total_trials": 171,
"feasible_trials": 167,
"best_trial": {
"number": 163,
@@ -24,5 +24,5 @@
},
"iter_folder": "C:\\Users\\antoi\\Atomizer\\studies\\M1_Mirror\\m1_mirror_cost_reduction_lateral\\2_iterations\\iter164"
},
"timestamp": "2026-01-14T17:59:38.649254"
"timestamp": "2026-01-20T16:12:29.817282"
}


@@ -2,9 +2,9 @@
"meta": {
"version": "2.0",
"created": "2026-01-17T15:35:12.024432Z",
"modified": "2026-01-20T20:05:28.197219Z",
"modified": "2026-01-22T13:48:14.104039Z",
"created_by": "migration",
"modified_by": "test",
"modified_by": "canvas",
"study_name": "m1_mirror_cost_reduction_lateral",
"description": "Lateral support optimization with new U-joint expressions (lateral_inner_u, lateral_outer_u) for cost reduction model. Focus on WFE and MFG only - no mass objective.",
"tags": [
@@ -152,22 +152,6 @@
"y": 580
}
},
{
"name": "variable_1768938898079",
"expression_name": "expr_1768938898079",
"type": "continuous",
"bounds": {
"min": 0,
"max": 1
},
"baseline": 0.5,
"enabled": true,
"canvas_position": {
"x": -185.06035488622524,
"y": 91.62521000204346
},
"id": "dv_008"
},
{
"name": "test_dv",
"expression_name": "test_expr",
@@ -304,6 +288,27 @@
"y": 262.2501934501369
},
"id": "ext_004"
},
{
"name": "extractor_1769089672375",
"type": "custom_function",
"builtin": false,
"enabled": true,
"function": {
"name": "extract",
"source_code": "def extract(op2_path: str, config: dict = None) -> dict:\n \"\"\"\n Custom extractor function.\n \n Args:\n op2_path: Path to the OP2 results file\n config: Optional configuration dict\n \n Returns:\n Dictionary with extracted values\n \"\"\"\n # TODO: Implement extraction logic\n return {'value': 0.0}\n"
},
"outputs": [
{
"name": "value",
"metric": "custom"
}
],
"canvas_position": {
"x": 1114.5479601736847,
"y": 880.0345512775555
},
"id": "ext_006"
}
],
"objectives": [

tools/test_extraction.py

@@ -0,0 +1,38 @@
#!/usr/bin/env python3
"""Test extraction pipeline on existing OP2 file."""
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent))
from optimization_engine.extractors import extract_displacement, extract_solid_stress
op2_path = r"C:\Users\antoi\Atomizer\studies\_Other\Model_for_dev\support_arm_sim1-solution_1.op2"
print("Testing extractors on existing OP2 file...")
print(f"File: {op2_path}")
print(f"Exists: {Path(op2_path).exists()}")
print()
# Test displacement extraction
print("1. Displacement extraction:")
try:
result = extract_displacement(op2_path)
print(f" Max magnitude: {result.get('max_magnitude', 'N/A')} mm")
print(f" Full result: {result}")
except Exception as e:
print(f" ERROR: {e}")
# Test stress extraction
print()
print("2. Stress extraction (CTETRA):")
try:
result = extract_solid_stress(op2_path, element_type="ctetra")
print(f" Max von Mises: {result.get('max_von_mises', 'N/A')} MPa")
print(f" Full result: {result}")
except Exception as e:
print(f" ERROR: {e}")
print()
print("Done!")


@@ -0,0 +1,172 @@
"""
Test: J1 coefficient vs mean(WFE) for each subcase individually.
Hypothesis: At 90 deg (zenith), gravity is axially symmetric, so J1 should
closely match mean(WFE). At other angles (20, 40, 60), lateral gravity
components break symmetry, potentially causing J1 != mean.
"""
import numpy as np
from pathlib import Path
import sys
sys.path.insert(0, str(Path(__file__).parent.parent))
from optimization_engine.extractors.extract_zernike import (
ZernikeExtractor,
compute_zernike_coefficients,
DEFAULT_N_MODES,
)
def test_j1_vs_mean_per_subcase():
"""Test J1 vs mean for each subcase in real data."""
print("=" * 70)
print("J1 vs mean(WFE) PER SUBCASE")
print("=" * 70)
# Find OP2 files
studies_dir = Path(__file__).parent.parent / "studies"
# Look for M1 mirror studies specifically
op2_files = list(studies_dir.rglob("**/m1_mirror*/**/*.op2"))
if not op2_files:
op2_files = list(studies_dir.rglob("*.op2"))
if not op2_files:
print("No OP2 files found!")
return
# Use the first one
op2_file = op2_files[0]
print(f"\nUsing: {op2_file.relative_to(studies_dir.parent)}")
try:
extractor = ZernikeExtractor(op2_file)
subcases = list(extractor.displacements.keys())
print(f"Available subcases: {subcases}")
print(f"\n{'Subcase':<10} {'Mean(WFE)':<15} {'J1 Coeff':<15} {'|Diff|':<12} {'Diff %':<10}")
print("-" * 70)
results = []
for sc in sorted(subcases, key=lambda x: int(x) if x.isdigit() else 0):
try:
X, Y, WFE = extractor._build_dataframe(sc)
# Compute Zernike coefficients
coeffs, R_max = compute_zernike_coefficients(X, Y, WFE, DEFAULT_N_MODES)
j1 = coeffs[0]
wfe_mean = np.mean(WFE)
diff = abs(j1 - wfe_mean)
pct_diff = 100 * diff / abs(wfe_mean) if abs(wfe_mean) > 1e-6 else 0
print(f"{sc:<10} {wfe_mean:<15.4f} {j1:<15.4f} {diff:<12.4f} {pct_diff:<10.4f}")
results.append({
'subcase': sc,
'mean_wfe': wfe_mean,
'j1': j1,
'diff': diff,
'pct_diff': pct_diff
})
except Exception as e:
print(f"{sc:<10} ERROR: {e}")
# Also check RELATIVE WFE (e.g., 20 vs 90, 40 vs 90, 60 vs 90)
print(f"\n" + "=" * 70)
print("RELATIVE WFE (vs reference subcase)")
print("=" * 70)
# Find 90 or use first subcase as reference
ref_sc = '90' if '90' in subcases else subcases[0]
print(f"Reference subcase: {ref_sc}")
print(f"\n{'Relative':<15} {'Mean(WFE_rel)':<15} {'J1 Coeff':<15} {'|Diff|':<12} {'Diff %':<10}")
print("-" * 70)
ref_data = extractor.displacements[ref_sc]
ref_node_to_idx = {int(nid): i for i, nid in enumerate(ref_data['node_ids'])}
for sc in sorted(subcases, key=lambda x: int(x) if x.isdigit() else 0):
if sc == ref_sc:
continue
try:
target_data = extractor.displacements[sc]
X_rel, Y_rel, WFE_rel = [], [], []
for i, nid in enumerate(target_data['node_ids']):
nid = int(nid)
if nid not in ref_node_to_idx:
continue
ref_idx = ref_node_to_idx[nid]
geo = extractor.node_geometry.get(nid)
if geo is None:
continue
X_rel.append(geo[0])
Y_rel.append(geo[1])
target_wfe = target_data['disp'][i, 2] * extractor.wfe_factor
ref_wfe = ref_data['disp'][ref_idx, 2] * extractor.wfe_factor
WFE_rel.append(target_wfe - ref_wfe)
X_rel = np.array(X_rel)
Y_rel = np.array(Y_rel)
WFE_rel = np.array(WFE_rel)
# Compute Zernike on relative WFE
coeffs_rel, _ = compute_zernike_coefficients(X_rel, Y_rel, WFE_rel, DEFAULT_N_MODES)
j1_rel = coeffs_rel[0]
wfe_rel_mean = np.mean(WFE_rel)
diff_rel = abs(j1_rel - wfe_rel_mean)
pct_diff_rel = 100 * diff_rel / abs(wfe_rel_mean) if abs(wfe_rel_mean) > 1e-6 else 0
label = f"{sc} vs {ref_sc}"
print(f"{label:<15} {wfe_rel_mean:<15.4f} {j1_rel:<15.4f} {diff_rel:<12.4f} {pct_diff_rel:<10.4f}")
except Exception as e:
print(f"{sc} vs {ref_sc}: ERROR: {e}")
except Exception as e:
print(f"Error: {e}")
import traceback
traceback.print_exc()
def test_symmetry_analysis():
"""Analyze why J1 != mean for different subcases."""
print(f"\n" + "=" * 70)
print("SYMMETRY ANALYSIS")
print("=" * 70)
print("""
Theory: J1 (piston) should equal mean(WFE) when:
1. The aperture is circular/annular AND
2. The sampling is uniform in angle AND
3. The WFE has no bias correlated with position
At 90 deg (zenith):
- Gravity acts purely in Z direction
- Deformation should be axially symmetric
- J1 should closely match mean(WFE)
At 20/40/60 deg:
- Gravity has lateral (X,Y) components
- Deformation may have asymmetric patterns
- Tip/tilt (J2,J3) will be large
- But J1 vs mean should still be close IF sampling is uniform
The difference J1-mean comes from:
- Non-uniform radial sampling (mesh density varies)
- Correlation between WFE and position (asymmetric loading)
""")
if __name__ == "__main__":
test_j1_vs_mean_per_subcase()
test_symmetry_analysis()
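
The claim that the piston coefficient tracks mean(WFE) under uniform sampling can be checked without the OP2 machinery. A minimal sketch using an ordinary least-squares fit with a constant piston column and unnormalized tilt columns (this stands in for `compute_zernike_coefficients`, whose full Noll basis and centering are assumed, not reproduced here; all data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform sampling on a unit disk (area-uniform via sqrt on the radius)
n = 20000
r = np.sqrt(rng.uniform(0, 1, n))
t = rng.uniform(0, 2 * np.pi, n)
x, y = r * np.cos(t), r * np.sin(t)

# Synthetic WFE in nm: known piston + x-tilt + noise
wfe = 1000.0 + 50.0 * x + 5.0 * rng.standard_normal(n)

# Least-squares fit of [piston, tilt-x, tilt-y]
A = np.column_stack([np.ones(n), x, y])
coeffs, *_ = np.linalg.lstsq(A, wfe, rcond=None)
j1 = coeffs[0]

# Small, but not exactly zero: piston and tilt columns are only
# orthogonal in expectation over the random samples
print(abs(j1 - wfe.mean()))
```

For finite, non-uniform meshes the residual coupling between modes grows, which is the mechanism the per-subcase comparison above is probing.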


@@ -0,0 +1,349 @@
"""
Test: Is recentering Z per subcase equivalent to removing piston from relative WFE?
Theory:
- Option A: Recenter each subcase, then subtract
WFE_rel_A = (dZ_20 - mean(dZ_20)) - (dZ_90 - mean(dZ_90))
- Option B: Subtract raw, then remove piston via Zernike J1
WFE_rel_B = dZ_20 - dZ_90
Then filter J1 from Zernike fit
Mathematically:
WFE_rel_A = dZ_20 - dZ_90 - mean(dZ_20) + mean(dZ_90)
= (dZ_20 - dZ_90) - (mean(dZ_20) - mean(dZ_90))
If nodes are identical: mean(dZ_20) - mean(dZ_90) = mean(dZ_20 - dZ_90)
So: WFE_rel_A = WFE_rel_B - mean(WFE_rel_B)
This should equal WFE_rel_B with J1 (piston) removed, BUT only if:
1. Piston Zernike Z1 = 1 (constant)
2. Sampling is uniform (or Z1 coefficient = mean for non-uniform)
For annular apertures and non-uniform mesh sampling, this might not be exact!
"""
import numpy as np
from pathlib import Path
import sys
# Add project root to path
sys.path.insert(0, str(Path(__file__).parent.parent))
from optimization_engine.extractors.extract_zernike import (
ZernikeExtractor,
compute_zernike_coefficients,
compute_rms_metrics,
zernike_noll,
DEFAULT_N_MODES,
DEFAULT_FILTER_ORDERS,
)
def test_recentering_equivalence_synthetic():
"""Test with synthetic data where we control everything."""
print("=" * 70)
print("TEST 1: Synthetic Data")
print("=" * 70)
np.random.seed(42)
# Create annular aperture (like a telescope mirror)
n_points = 5000
r_inner = 0.3 # 30% central obscuration
r_outer = 1.0
# Generate random points in annulus
r = np.sqrt(np.random.uniform(r_inner**2, r_outer**2, n_points))
theta = np.random.uniform(0, 2*np.pi, n_points)
X = r * np.cos(theta) * 500 # Scale to 500mm radius
Y = r * np.sin(theta) * 500
# Simulate Z-displacements for two subcases (in mm, will convert to nm)
# Subcase 90: Some aberration pattern + piston
piston_90 = 0.001 # 1 micron mean displacement
dZ_90 = piston_90 + 0.0005 * (X**2 + Y**2) / 500**2 # Add some power
dZ_90 += 0.0002 * np.random.randn(n_points) # Add noise
# Subcase 20: Different aberration + different piston
piston_20 = 0.003 # 3 micron mean displacement
dZ_20 = piston_20 + 0.0008 * (X**2 + Y**2) / 500**2 # More power
dZ_20 += 0.0003 * X / 500 # Add some tilt
dZ_20 += 0.0002 * np.random.randn(n_points) # Add noise
# Convert to WFE in nm (2x for reflection, 1e6 for mm->nm)
wfe_factor = 2.0 * 1e6
WFE_90 = dZ_90 * wfe_factor
WFE_20 = dZ_20 * wfe_factor
print(f"\nInput data:")
print(f" Points: {n_points} (annular, r_inner={r_inner})")
print(f" Mean WFE_90: {np.mean(WFE_90):.2f} nm")
print(f" Mean WFE_20: {np.mean(WFE_20):.2f} nm")
print(f" Mean difference: {np.mean(WFE_20) - np.mean(WFE_90):.2f} nm")
# =========================================================================
# Option A: Recenter each subcase BEFORE subtraction
# =========================================================================
WFE_90_centered = WFE_90 - np.mean(WFE_90)
WFE_20_centered = WFE_20 - np.mean(WFE_20)
WFE_rel_A = WFE_20_centered - WFE_90_centered
# Fit Zernike to option A
coeffs_A, R_max_A = compute_zernike_coefficients(X, Y, WFE_rel_A, DEFAULT_N_MODES)
# Compute RMS metrics for option A
rms_A = compute_rms_metrics(X, Y, WFE_rel_A, DEFAULT_N_MODES, DEFAULT_FILTER_ORDERS)
print(f"\nOption A (recenter before subtraction):")
print(f" Mean WFE_rel_A: {np.mean(WFE_rel_A):.6f} nm (should be ~0)")
print(f" J1 (piston) coefficient: {coeffs_A[0]:.6f} nm")
print(f" Global RMS: {rms_A['global_rms_nm']:.2f} nm")
print(f" Filtered RMS (J1-J4 removed): {rms_A['filtered_rms_nm']:.2f} nm")
# =========================================================================
# Option B: Subtract raw, then analyze (current implementation)
# =========================================================================
WFE_rel_B = WFE_20 - WFE_90 # No recentering
# Fit Zernike to option B
coeffs_B, R_max_B = compute_zernike_coefficients(X, Y, WFE_rel_B, DEFAULT_N_MODES)
# Compute RMS metrics for option B
rms_B = compute_rms_metrics(X, Y, WFE_rel_B, DEFAULT_N_MODES, DEFAULT_FILTER_ORDERS)
print(f"\nOption B (current: subtract raw, filter J1-J4 after):")
print(f" Mean WFE_rel_B: {np.mean(WFE_rel_B):.2f} nm")
print(f" J1 (piston) coefficient: {coeffs_B[0]:.2f} nm")
print(f" Global RMS: {rms_B['global_rms_nm']:.2f} nm")
print(f" Filtered RMS (J1-J4 removed): {rms_B['filtered_rms_nm']:.2f} nm")
# =========================================================================
# Compare
# =========================================================================
print(f"\n" + "=" * 70)
print("COMPARISON")
print("=" * 70)
# Check if J1 coefficient in B equals the mean
j1_vs_mean_diff = abs(coeffs_B[0] - np.mean(WFE_rel_B))
print(f"\n1. Does J1 = mean(WFE)?")
print(f" J1 coefficient: {coeffs_B[0]:.6f} nm")
print(f" Mean of WFE: {np.mean(WFE_rel_B):.6f} nm")
print(f" Difference: {j1_vs_mean_diff:.6f} nm")
print(f" --> {'YES (negligible diff)' if j1_vs_mean_diff < 0.01 else 'NO (significant diff!)'}")
# Check if filtered RMS is the same
filtered_rms_diff = abs(rms_A['filtered_rms_nm'] - rms_B['filtered_rms_nm'])
print(f"\n2. Is filtered RMS the same?")
print(f" Option A filtered RMS: {rms_A['filtered_rms_nm']:.6f} nm")
print(f" Option B filtered RMS: {rms_B['filtered_rms_nm']:.6f} nm")
print(f" Difference: {filtered_rms_diff:.6f} nm")
print(f" --> {'EQUIVALENT' if filtered_rms_diff < 0.01 else 'NOT EQUIVALENT!'}")
# Check all coefficients (J2 onwards should be identical)
coeff_diffs = np.abs(coeffs_A[1:] - coeffs_B[1:]) # Skip J1
max_coeff_diff = np.max(coeff_diffs)
print(f"\n3. Are non-piston coefficients (J2+) identical?")
print(f" Max difference in J2-J{DEFAULT_N_MODES}: {max_coeff_diff:.6f} nm")
print(f" --> {'YES' if max_coeff_diff < 0.01 else 'NO!'}")
# The key insight: for non-uniform sampling, J1 might not equal mean exactly
# Let's check how Z1 (piston polynomial) behaves
x_c = X - np.mean(X)
y_c = Y - np.mean(Y)
R_max = np.max(np.hypot(x_c, y_c))
r_norm = np.hypot(x_c / R_max, y_c / R_max)
theta_norm = np.arctan2(y_c, x_c)
Z1 = zernike_noll(1, r_norm, theta_norm) # Piston polynomial
print(f"\n4. Piston polynomial Z1 statistics:")
print(f" Z1 should be constant = 1 for all points")
print(f" Mean(Z1): {np.mean(Z1):.6f}")
print(f" Std(Z1): {np.std(Z1):.6f}")
print(f" Min(Z1): {np.min(Z1):.6f}")
print(f" Max(Z1): {np.max(Z1):.6f}")
return filtered_rms_diff < 0.01
def test_recentering_with_real_data():
    """Test with real OP2 data if available."""
    print("\n" + "=" * 70)
    print("TEST 2: Real Data (if available)")
    print("=" * 70)

    # Look for a real study with OP2 data
    studies_dir = Path(__file__).parent.parent / "studies"
    op2_files = list(studies_dir.rglob("*.op2"))
    if not op2_files:
        print(" No OP2 files found in studies directory. Skipping real data test.")
        return None

    # Use the first one found
    op2_file = op2_files[0]
    print(f"\n Using: {op2_file.relative_to(studies_dir.parent)}")

    try:
        extractor = ZernikeExtractor(op2_file)
        subcases = list(extractor.displacements.keys())
        print(f" Available subcases: {subcases}")
        if len(subcases) < 2:
            print(" Need at least 2 subcases for relative comparison. Skipping.")
            return None

        # Pick two subcases
        ref_subcase = subcases[0]
        target_subcase = subcases[1]
        print(f" Comparing: {target_subcase} vs {ref_subcase}")

        # Get raw data
        X_t, Y_t, WFE_t = extractor._build_dataframe(target_subcase)
        X_r, Y_r, WFE_r = extractor._build_dataframe(ref_subcase)

        # Build node matching (same logic as extract_relative)
        target_data = extractor.displacements[target_subcase]
        ref_data = extractor.displacements[ref_subcase]
        ref_node_to_idx = {
            int(nid): i for i, nid in enumerate(ref_data['node_ids'])
        }

        X_common, Y_common = [], []
        WFE_target_common, WFE_ref_common = [], []
        for i, nid in enumerate(target_data['node_ids']):
            nid = int(nid)
            if nid not in ref_node_to_idx:
                continue
            ref_idx = ref_node_to_idx[nid]
            geo = extractor.node_geometry.get(nid)
            if geo is None:
                continue
            X_common.append(geo[0])
            Y_common.append(geo[1])
            WFE_target_common.append(target_data['disp'][i, 2] * extractor.wfe_factor)
            WFE_ref_common.append(ref_data['disp'][ref_idx, 2] * extractor.wfe_factor)

        X = np.array(X_common)
        Y = np.array(Y_common)
        WFE_target = np.array(WFE_target_common)
        WFE_ref = np.array(WFE_ref_common)

        print(f" Common nodes: {len(X)}")
        print(f" Mean WFE target ({target_subcase}): {np.mean(WFE_target):.2f} nm")
        print(f" Mean WFE ref ({ref_subcase}): {np.mean(WFE_ref):.2f} nm")

        # Option A: Recenter before
        WFE_target_centered = WFE_target - np.mean(WFE_target)
        WFE_ref_centered = WFE_ref - np.mean(WFE_ref)
        WFE_rel_A = WFE_target_centered - WFE_ref_centered
        rms_A = compute_rms_metrics(X, Y, WFE_rel_A, DEFAULT_N_MODES, DEFAULT_FILTER_ORDERS)

        # Option B: Current (no recentering)
        WFE_rel_B = WFE_target - WFE_ref
        rms_B = compute_rms_metrics(X, Y, WFE_rel_B, DEFAULT_N_MODES, DEFAULT_FILTER_ORDERS)

        print(f"\n Option A (recenter before):")
        print(f" Filtered RMS: {rms_A['filtered_rms_nm']:.4f} nm")
        print(f"\n Option B (current, filter after):")
        print(f" Filtered RMS: {rms_B['filtered_rms_nm']:.4f} nm")

        diff = abs(rms_A['filtered_rms_nm'] - rms_B['filtered_rms_nm'])
        pct_diff = 100 * diff / rms_B['filtered_rms_nm'] if rms_B['filtered_rms_nm'] > 0 else 0
        print(f"\n Difference: {diff:.4f} nm ({pct_diff:.4f}%)")
        print(f" --> {'EQUIVALENT' if pct_diff < 0.1 else 'NOT EQUIVALENT!'}")
        return pct_diff < 0.1
    except Exception as e:
        print(f" Error: {e}")
        import traceback
        traceback.print_exc()
        return None

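# --- Illustration (added sketch; background for the annular test below) ---
# On an annulus, the circle-normalized defocus shape 2*r^2 - 1 has a nonzero
# mean, so a least-squares fit that includes it can assign a piston coefficient
# that differs from the data mean. The helper below (hypothetical name, not
# used by the tests) demonstrates this with a minimal two-mode fit.
def demo_annulus_piston_bias(r_inner=0.5, n_points=5000, seed=1):
    """Fit [1, 2r^2 - 1] on an annulus; return (fitted_piston, data_mean)."""
    rng = np.random.default_rng(seed)
    # Uniform area sampling over the annulus [r_inner, 1]
    r = np.sqrt(rng.uniform(r_inner**2, 1.0, n_points))
    z4_radial = 2 * r**2 - 1  # radial part of circle-normalized defocus
    data = 3.0 * z4_radial  # pure "defocus", zero piston by construction
    design = np.column_stack([np.ones(n_points), z4_radial])
    coeffs, *_ = np.linalg.lstsq(design, data, rcond=None)
    return float(coeffs[0]), float(np.mean(data))
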
def test_annular_zernike_piston():
    """
    Specifically test whether J1 = mean for annular apertures.

    For a filled circular aperture with uniform sampling, J1 = mean exactly.
    For annular apertures or non-uniform sampling this may not hold, because
    standard circle Zernikes are no longer orthogonal over the sampled points,
    so the fitted piston coefficient can absorb parts of other modes.
    """
    print("\n" + "=" * 70)
    print("TEST 3: Annular Aperture - Does J1 = mean?")
    print("=" * 70)

    np.random.seed(123)

    # Test different inner radii
    for r_inner in [0.0, 0.2, 0.4, 0.6]:
        n_points = 10000
        r_outer = 1.0

        # Generate points in the annulus (sqrt gives uniform area density)
        if r_inner > 0:
            r = np.sqrt(np.random.uniform(r_inner**2, r_outer**2, n_points))
        else:
            r = np.sqrt(np.random.uniform(0, r_outer**2, n_points))
        theta = np.random.uniform(0, 2 * np.pi, n_points)
        X = r * np.cos(theta) * 500
        Y = r * np.sin(theta) * 500

        # Create WFE with known piston
        true_piston = 1000.0  # nm
        WFE = true_piston + 100 * np.random.randn(n_points)

        # Fit Zernike
        coeffs, _ = compute_zernike_coefficients(X, Y, WFE, DEFAULT_N_MODES)
        j1_coeff = coeffs[0]
        wfe_mean = np.mean(WFE)
        diff = abs(j1_coeff - wfe_mean)

        print(f"\n r_inner = {r_inner}:")
        print(f" True piston: {true_piston:.2f} nm")
        print(f" Mean(WFE): {wfe_mean:.2f} nm")
        print(f" J1 coefficient: {j1_coeff:.2f} nm")
        print(f" |J1 - mean|: {diff:.4f} nm")
        print(f" --> {'J1 ≈ mean' if diff < 1.0 else 'J1 ≠ mean!'}")

if __name__ == "__main__":
    print("\n" + "=" * 70)
    print("ZERNIKE RECENTERING EQUIVALENCE TEST")
    print("=" * 70)
    print("\nQuestion: Is recentering Z per subcase equivalent to")
    print(" removing piston (J1) from relative WFE?")
    print("=" * 70)

    # Run tests
    test1_passed = test_recentering_equivalence_synthetic()
    test2_passed = test_recentering_with_real_data()
    test_annular_zernike_piston()

    # Summary
    print("\n" + "=" * 70)
    print("SUMMARY")
    print("=" * 70)
    print(f"\nSynthetic data test: {'PASSED' if test1_passed else 'FAILED'}")
    if test2_passed is not None:
        print(f"Real data test: {'PASSED' if test2_passed else 'FAILED'}")
    else:
        print("Real data test: SKIPPED")

    print("\nConclusion:")
    if test1_passed:
        print(" For standard Zernike on circular/annular apertures,")
        print(" recentering Z before subtraction IS equivalent to")
        print(" filtering J1 (piston) after Zernike fit.")
        print("\n The current implementation is correct!")
    else:
        print(" WARNING: Recentering and J1 filtering are NOT equivalent!")
        print(" Consider updating the extractor to recenter before subtraction.")