# Compare commits

2 commits: `a3f18dc377` ... `a26914bbe8`

Commits in this range:

- `a26914bbe8`
- `3193831340`
```diff
@@ -1,7 +1,7 @@
 ---
 skill_id: SKILL_001
-version: 2.4
-last_updated: 2025-12-31
+version: 2.5
+last_updated: 2026-01-22
 type: reference
 code_dependencies:
 - optimization_engine/extractors/__init__.py
```
```diff
@@ -14,8 +14,8 @@ requires_skills:
 # Atomizer Quick Reference Cheatsheet
 
-**Version**: 2.4
-**Updated**: 2025-12-31
+**Version**: 2.5
+**Updated**: 2026-01-22
 **Purpose**: Rapid lookup for common operations. "I want X → Use Y"
 
 ---
```
```diff
@@ -37,6 +37,8 @@ requires_skills:
 | **Use SAT (Self-Aware Turbo)** | **SYS_16** | SAT v3 for high-efficiency neural-accelerated optimization |
 | Generate physics insight | SYS_17 | `python -m optimization_engine.insights generate <study>` |
 | **Manage knowledge/playbook** | **SYS_18** | `from optimization_engine.context import AtomizerPlaybook` |
+| **Automate dev tasks** | **DevLoop** | `python tools/devloop_cli.py start "task"` |
+| **Test dashboard UI** | **DevLoop** | `python tools/devloop_cli.py browser --level full` |
 
 ---
```
````diff
@@ -678,6 +680,67 @@ feedback.process_trial_result(
 
 ---
 
+## DevLoop Quick Reference
+
+Closed-loop development system using AI agents + Playwright testing.
+
+### CLI Commands
+
+| Task | Command |
+|------|---------|
+| Full dev cycle | `python tools/devloop_cli.py start "Create new study"` |
+| Plan only | `python tools/devloop_cli.py plan "Fix validation"` |
+| Implement plan | `python tools/devloop_cli.py implement` |
+| Test study files | `python tools/devloop_cli.py test --study support_arm` |
+| Analyze failures | `python tools/devloop_cli.py analyze` |
+| Browser smoke test | `python tools/devloop_cli.py browser` |
+| Browser full tests | `python tools/devloop_cli.py browser --level full` |
+| Check status | `python tools/devloop_cli.py status` |
+| Quick test | `python tools/devloop_cli.py quick` |
+
+### Browser Test Levels
+
+| Level | Description | Tests |
+|-------|-------------|-------|
+| `quick` | Smoke test (page loads) | 1 |
+| `home` | Home page verification | 2 |
+| `full` | All UI + study tests | 5+ |
+| `study` | Canvas/dashboard for specific study | 3 |
+
+### State Files (`.devloop/`)
+
+| File | Purpose |
+|------|---------|
+| `current_plan.json` | Current implementation plan |
+| `test_results.json` | Filesystem/API test results |
+| `browser_test_results.json` | Playwright test results |
+| `analysis.json` | Failure analysis |
+
+### Prerequisites
+
+```bash
+# Start backend
+cd atomizer-dashboard/backend && python -m uvicorn api.main:app --reload --port 8000
+
+# Start frontend
+cd atomizer-dashboard/frontend && npm run dev
+
+# Install Playwright (once)
+cd atomizer-dashboard/frontend && npx playwright install chromium
+```
+
+### Standalone Playwright Tests
+
+```bash
+cd atomizer-dashboard/frontend
+npm run test:e2e     # Run all E2E tests
+npm run test:e2e:ui  # Playwright UI mode
+```
+
+**Full documentation**: `docs/guides/DEVLOOP.md`
+
+---
+
 ## Report Generation Quick Reference (OP_08)
 
 Generate comprehensive study reports from optimization data.
````
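DevLoop's `.devloop/` state files are plain JSON, so they can be inspected without the CLI. A minimal sketch of reading one and reporting the outcome (the `summary` schema is taken from the result files added in this commit range; the helper name is illustrative):

```python
import json
import tempfile
from pathlib import Path

def load_summary(results_path: Path) -> str:
    """Return a one-line pass/fail summary from a DevLoop results file."""
    data = json.loads(results_path.read_text())
    s = data["summary"]
    return f"{s['passed']}/{s['total']} scenarios passed ({s['failed']} failed)"

# Exercise it against a file with the documented summary shape
with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "test_results.json"
    path.write_text(json.dumps({"scenarios": [], "summary": {"passed": 2, "failed": 0, "total": 2}}))
    summary_line = load_summary(path)  # "2/2 scenarios passed (0 failed)"
```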
---

**New file**: `.claude/skills/modules/study-readme-generator.md` (206 lines)

# Study README Generator Skill

**Skill ID**: STUDY_README_GENERATOR
**Version**: 1.0
**Purpose**: Generate intelligent, context-aware README.md files for optimization studies

## When to Use

This skill is invoked automatically during the study intake workflow when:

1. A study moves from `introspected` to `configured` status
2. The user explicitly requests README generation
3. A study is finalized from the inbox

## Input Context

The README generator receives:

```json
{
  "study_name": "bracket_mass_opt_v1",
  "topic": "Brackets",
  "description": "User's description from intake form",
  "spec": { /* Full AtomizerSpec v2.0 */ },
  "introspection": {
    "expressions": [...],
    "mass_kg": 1.234,
    "solver_type": "NX_Nastran"
  },
  "context_files": {
    "goals.md": "User's goals markdown content",
    "notes.txt": "Any additional notes"
  }
}
```
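Consumers of this context should treat optional fields defensively, since introspection data may be incomplete. A small sketch (the `context_headline` helper is illustrative, not part of the skill):

```python
def context_headline(ctx: dict) -> str:
    """Build a one-line study headline from the intake context."""
    name = ctx.get("study_name", "unknown_study")
    topic = ctx.get("topic", "TBD")
    mass = ctx.get("introspection", {}).get("mass_kg")
    mass_part = f", baseline {mass} kg" if mass is not None else ""
    return f"{name} ({topic}{mass_part})"

headline = context_headline({
    "study_name": "bracket_mass_opt_v1",
    "topic": "Brackets",
    "introspection": {"mass_kg": 1.234, "solver_type": "NX_Nastran"},
})  # "bracket_mass_opt_v1 (Brackets, baseline 1.234 kg)"
```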
## Output Format

Generate a README.md with these sections:

### 1. Title & Overview

```markdown
# {Study Name}

**Topic**: {Topic}
**Created**: {Date}
**Status**: {Status}

{One paragraph executive summary of the optimization goal}
```

### 2. Engineering Problem

```markdown
## Engineering Problem

{Describe the physical problem being solved}

### Model Description
- **Geometry**: {Describe the part/assembly}
- **Material**: {If known from introspection}
- **Baseline Mass**: {mass_kg} kg

### Loading Conditions
{Describe loads and boundary conditions if available}
```

### 3. Optimization Formulation

```markdown
## Optimization Formulation

### Design Variables ({count})
| Variable | Expression | Range | Units |
|----------|------------|-------|-------|
| {name} | {expr_name} | [{min}, {max}] | {units} |

### Objectives ({count})
| Objective | Direction | Weight | Source |
|-----------|-----------|--------|--------|
| {name} | {direction} | {weight} | {extractor} |

### Constraints ({count})
| Constraint | Condition | Threshold | Type |
|------------|-----------|-----------|------|
| {name} | {operator} | {threshold} | {type} |
```

### 4. Methodology

```markdown
## Methodology

### Algorithm
- **Primary**: {algorithm_type}
- **Max Trials**: {max_trials}
- **Surrogate**: {if enabled}

### Physics Extraction
{Describe extractors used}

### Convergence Criteria
{Describe stopping conditions}
```

### 5. Expected Outcomes

```markdown
## Expected Outcomes

Based on the optimization setup:
- Expected improvement: {estimate if baseline available}
- Key trade-offs: {identify from objectives/constraints}
- Risk factors: {any warnings from validation}
```
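The tables in the sections above are mechanical to emit once the spec is parsed. A sketch for the Design Variables rows (the variable dict keys are assumptions based on the placeholders above):

```python
def design_variable_table(variables) -> str:
    """Render the Design Variables markdown table from a list of variable dicts."""
    lines = [
        "| Variable | Expression | Range | Units |",
        "|----------|------------|-------|-------|",
    ]
    for v in variables:
        lines.append(f"| {v['name']} | {v['expr_name']} | [{v['min']}, {v['max']}] | {v['units']} |")
    return "\n".join(lines)

table = design_variable_table([
    {"name": "Web Thickness", "expr_name": "web_thickness", "min": 2.0, "max": 10.0, "units": "mm"},
])
```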
## Generation Guidelines

1. **Be Specific**: Use actual values from the spec, not placeholders
2. **Be Concise**: Engineers don't want to read novels
3. **Be Accurate**: Only state facts that can be verified from input
4. **Be Helpful**: Include insights that aid understanding
5. **No Fluff**: Avoid marketing language or excessive praise
## Claude Prompt Template

```
You are generating a README.md for an FEA optimization study.

CONTEXT:
{json_context}

RULES:
1. Use the actual data provided - never use placeholder values
2. Write in technical engineering language appropriate for structural engineers
3. Keep each section concise but complete
4. If information is missing, note it as "TBD" or skip the section
5. Include physical units wherever applicable
6. Format tables properly with alignment

Generate the README.md content:
```
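Filling the template is plain string substitution around a JSON dump of the context. A sketch (abridged template; the surrounding service code is not part of this file):

```python
import json

# Abridged form of the template above; only the CONTEXT slot is substituted.
PROMPT_TEMPLATE = (
    "You are generating a README.md for an FEA optimization study.\n\n"
    "CONTEXT:\n{json_context}\n\n"
    "Generate the README.md content:"
)

def build_prompt(context: dict) -> str:
    """Substitute the intake context into the prompt template."""
    return PROMPT_TEMPLATE.format(json_context=json.dumps(context, indent=2))

prompt = build_prompt({"study_name": "bracket_mass_opt_v1"})
```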
## Example Output

```markdown
# Bracket Mass Optimization V1

**Topic**: Simple_Bracket
**Created**: 2026-01-22
**Status**: Configured

Optimize the mass of a structural L-bracket while maintaining stress below yield and displacement within tolerance.

## Engineering Problem

### Model Description
- **Geometry**: L-shaped mounting bracket with web and flange
- **Material**: Steel (assumed based on typical applications)
- **Baseline Mass**: 0.847 kg

### Loading Conditions
Static loading with force applied at mounting holes. Fixed constraints at base.

## Optimization Formulation

### Design Variables (3)
| Variable | Expression | Range | Units |
|----------|------------|-------|-------|
| Web Thickness | web_thickness | [2.0, 10.0] | mm |
| Flange Width | flange_width | [15.0, 40.0] | mm |
| Fillet Radius | fillet_radius | [2.0, 8.0] | mm |

### Objectives (1)
| Objective | Direction | Weight | Source |
|-----------|-----------|--------|--------|
| Total Mass | minimize | 1.0 | mass_extractor |

### Constraints (1)
| Constraint | Condition | Threshold | Type |
|------------|-----------|-----------|------|
| Max Stress | <= | 250 MPa | hard |

## Methodology

### Algorithm
- **Primary**: TPE (Tree-structured Parzen Estimator)
- **Max Trials**: 100
- **Surrogate**: Disabled

### Physics Extraction
- Mass: Extracted from NX expression `total_mass`
- Stress: Von Mises stress from SOL101 static analysis

### Convergence Criteria
- Max trials: 100
- Early stopping: 20 trials without improvement

## Expected Outcomes

Based on the optimization setup:
- Expected improvement: 15-30% mass reduction (typical for thickness optimization)
- Key trade-offs: Mass vs. stress margin
- Risk factors: None identified
```
## Integration Points

- **Backend**: `api/services/claude_readme.py` calls the Claude API with this prompt
- **Endpoint**: `POST /api/intake/{study_name}/readme`
- **Trigger**: Automatic on status transition to `configured`
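The endpoint above can be exercised from any HTTP client. A sketch that only builds the request target (the request payload, if any, is not documented here, so it is left out):

```python
def readme_endpoint(base_url: str, study_name: str) -> str:
    """URL for POST /api/intake/{study_name}/readme."""
    return f"{base_url.rstrip('/')}/api/intake/{study_name}/readme"

url = readme_endpoint("http://localhost:8000", "bracket_mass_opt_v1")
# "http://localhost:8000/api/intake/bracket_mass_opt_v1/readme"
```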
---

**New file**: `.devloop/browser_test_results.json` (33 lines)

```json
{
  "timestamp": "2026-01-22T18:13:30.884945",
  "scenarios": [
    {
      "scenario_id": "browser_home_stats",
      "scenario_name": "Home page shows statistics",
      "passed": true,
      "duration_ms": 1413.166,
      "error": null,
      "details": {
        "navigated_to": "http://localhost:3003/",
        "found_selector": "text=Total Trials"
      }
    },
    {
      "scenario_id": "browser_expand_folder",
      "scenario_name": "Topic folder expands on click",
      "passed": true,
      "duration_ms": 2785.3219999999997,
      "error": null,
      "details": {
        "navigated_to": "http://localhost:3003/",
        "found_selector": "span:has-text('completed'), span:has-text('running'), span:has-text('paused')",
        "clicked": "button:has-text('trials')"
      }
    }
  ],
  "summary": {
    "passed": 2,
    "failed": 0,
    "total": 2
  }
}
```
---

**New file**: `.devloop/current_plan.json` (16 lines)

```json
{
  "objective": "Implement Dashboard Intake & AtomizerSpec Integration: Phase 1 - Create backend intake API routes (create, introspect, list, topics endpoints) and spec_manager service. The spec_models.py and JSON schema have already been updated with SpecStatus, IntrospectionData, BaselineData, and ExpressionInfo models. Now need to create: 1) backend/api/services/spec_manager.py for centralized spec CRUD, 2) backend/api/routes/intake.py with endpoints for creating inbox folders, running introspection, listing inbox contents, and listing topics, 3) Register the intake router in main.py. Reference the plan at docs/plans/DASHBOARD_INTAKE_ATOMIZERSPEC_INTEGRATION.md",
  "approach": "Fallback plan - manual implementation",
  "tasks": [
    {
      "id": "task_001",
      "description": "Implement: Implement Dashboard Intake & AtomizerSpec Integration: Phase 1 - Create backend intake API routes (create, introspect, list, topics endpoints) and spec_manager service. The spec_models.py and JSON schema have already been updated with SpecStatus, IntrospectionData, BaselineData, and ExpressionInfo models. Now need to create: 1) backend/api/services/spec_manager.py for centralized spec CRUD, 2) backend/api/routes/intake.py with endpoints for creating inbox folders, running introspection, listing inbox contents, and listing topics, 3) Register the intake router in main.py. Reference the plan at docs/plans/DASHBOARD_INTAKE_ATOMIZERSPEC_INTEGRATION.md",
      "file": "TBD",
      "priority": "high"
    }
  ],
  "test_scenarios": [],
  "acceptance_criteria": [
    "Implement Dashboard Intake & AtomizerSpec Integration: Phase 1 - Create backend intake API routes (create, introspect, list, topics endpoints) and spec_manager service. The spec_models.py and JSON schema have already been updated with SpecStatus, IntrospectionData, BaselineData, and ExpressionInfo models. Now need to create: 1) backend/api/services/spec_manager.py for centralized spec CRUD, 2) backend/api/routes/intake.py with endpoints for creating inbox folders, running introspection, listing inbox contents, and listing topics, 3) Register the intake router in main.py. Reference the plan at docs/plans/DASHBOARD_INTAKE_ATOMIZERSPEC_INTEGRATION.md"
  ]
}
```
---

**New file**: `.devloop/test_results.json` (64 lines)

```json
{
  "timestamp": "2026-01-22T21:10:54.742272",
  "scenarios": [
    {
      "scenario_id": "test_study_dir",
      "scenario_name": "Study directory exists: stage_3_arm",
      "passed": true,
      "duration_ms": 0.0,
      "error": null,
      "details": {
        "path": "C:\\Users\\antoi\\Atomizer\\studies\\Stage3\\stage_3_arm",
        "exists": true
      }
    },
    {
      "scenario_id": "test_spec",
      "scenario_name": "AtomizerSpec is valid JSON",
      "passed": true,
      "duration_ms": 1.045,
      "error": null,
      "details": {
        "valid_json": true
      }
    },
    {
      "scenario_id": "test_readme",
      "scenario_name": "README exists",
      "passed": true,
      "duration_ms": 0.0,
      "error": null,
      "details": {
        "path": "C:\\Users\\antoi\\Atomizer\\studies\\Stage3\\stage_3_arm\\README.md",
        "exists": true
      }
    },
    {
      "scenario_id": "test_run_script",
      "scenario_name": "run_optimization.py exists",
      "passed": true,
      "duration_ms": 0.0,
      "error": null,
      "details": {
        "path": "C:\\Users\\antoi\\Atomizer\\studies\\Stage3\\stage_3_arm\\run_optimization.py",
        "exists": true
      }
    },
    {
      "scenario_id": "test_model_dir",
      "scenario_name": "Model directory exists",
      "passed": true,
      "duration_ms": 0.0,
      "error": null,
      "details": {
        "path": "C:\\Users\\antoi\\Atomizer\\studies\\Stage3\\stage_3_arm\\1_setup\\model",
        "exists": true
      }
    }
  ],
  "summary": {
    "passed": 5,
    "failed": 0,
    "total": 5
  }
}
```
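The DevLoop result files share the same `scenarios`/`summary` schema, which makes post-processing straightforward. A sketch that flags failures and totals runtime (helper names are illustrative):

```python
def failing_scenarios(results: dict) -> list:
    """IDs of scenarios that did not pass."""
    return [s["scenario_id"] for s in results["scenarios"] if not s["passed"]]

def total_duration_ms(results: dict) -> float:
    """Sum of per-scenario durations in milliseconds."""
    return sum(s["duration_ms"] for s in results["scenarios"])

results = {
    "scenarios": [
        {"scenario_id": "test_spec", "passed": True, "duration_ms": 1.045},
        {"scenario_id": "test_readme", "passed": True, "duration_ms": 0.0},
    ],
    "summary": {"passed": 2, "failed": 0, "total": 2},
}
fails = failing_scenarios(results)      # []
duration = total_duration_ms(results)   # 1.045
```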
```diff
@@ -7,6 +7,10 @@
         "ATOMIZER_MODE": "user",
         "ATOMIZER_ROOT": "C:/Users/antoi/Atomizer"
       }
+    },
+    "nxopen-docs": {
+      "command": "C:/Users/antoi/CADtomaste/Atomaste-NXOpen-MCP/.venv/Scripts/python.exe",
+      "args": ["-m", "nxopen_mcp.server", "--data-dir", "C:/Users/antoi/CADtomaste/Atomaste-NXOpen-MCP/data"]
     }
   }
 }
```
```diff
@@ -13,7 +13,19 @@ import sys
 # Add parent directory to path to import optimization_engine
 sys.path.append(str(Path(__file__).parent.parent.parent.parent))
 
-from api.routes import optimization, claude, terminal, insights, context, files, nx, claude_code, spec
+from api.routes import (
+    optimization,
+    claude,
+    terminal,
+    insights,
+    context,
+    files,
+    nx,
+    claude_code,
+    spec,
+    devloop,
+    intake,
+)
 from api.websocket import optimization_stream
```

```diff
@@ -23,6 +35,7 @@ async def lifespan(app: FastAPI):
     """Manage application lifespan - start/stop session manager"""
     # Startup
     from api.routes.claude import get_session_manager
+
     manager = get_session_manager()
     await manager.start()
     print("Session manager started")
```

```diff
@@ -63,6 +76,9 @@ app.include_router(nx.router, prefix="/api", tags=["nx"])
 app.include_router(claude_code.router, prefix="/api", tags=["claude-code"])
 app.include_router(spec.router, prefix="/api", tags=["spec"])
 app.include_router(spec.validate_router, prefix="/api", tags=["spec"])
+app.include_router(devloop.router, prefix="/api", tags=["devloop"])
+app.include_router(intake.router, prefix="/api", tags=["intake"])
+
 
 @app.get("/")
 async def root():
```

```diff
@@ -70,11 +86,13 @@ async def root():
     dashboard_path = Path(__file__).parent.parent.parent / "dashboard-enhanced.html"
     return FileResponse(dashboard_path)
 
+
 @app.get("/health")
 async def health_check():
     """Health check endpoint with database status"""
     try:
         from api.services.conversation_store import ConversationStore
+
         store = ConversationStore()
         # Test database by creating/getting a health check session
         store.get_session("health_check")
```

```diff
@@ -87,12 +105,8 @@ async def health_check():
         "database": db_status,
     }
 
+
 if __name__ == "__main__":
     import uvicorn
-    uvicorn.run(
-        "main:app",
-        host="0.0.0.0",
-        port=8000,
-        reload=True,
-        log_level="info"
-    )
+    uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True, log_level="info")
+
```
|||||||
416
atomizer-dashboard/backend/api/routes/devloop.py
Normal file
416
atomizer-dashboard/backend/api/routes/devloop.py
Normal file
@@ -0,0 +1,416 @@
|
|||||||
|
"""
|
||||||
|
DevLoop API Endpoints - Closed-loop development orchestration.
|
||||||
|
|
||||||
|
Provides REST API and WebSocket for:
|
||||||
|
- Starting/stopping development cycles
|
||||||
|
- Monitoring progress
|
||||||
|
- Executing single phases
|
||||||
|
- Viewing history and learnings
|
||||||
|
"""
|
||||||
|
|
||||||
|
from fastapi import APIRouter, HTTPException, WebSocket, WebSocketDisconnect, BackgroundTasks
|
||||||
|
from pydantic import BaseModel, Field
|
||||||
|
from typing import Any, Dict, List, Optional
|
||||||
|
import asyncio
|
||||||
|
import json
|
||||||
|
import sys
|
||||||
|
from pathlib import Path
|
||||||
|
from datetime import datetime
|
||||||
|
|
||||||
|
# Add project root to path
|
||||||
|
sys.path.append(str(Path(__file__).parent.parent.parent.parent.parent))
|
||||||
|
|
||||||
|
router = APIRouter(prefix="/devloop", tags=["devloop"])
|
||||||
|
|
||||||
|
# Global orchestrator instance
|
||||||
|
_orchestrator = None
|
||||||
|
_active_cycle = None
|
||||||
|
_websocket_clients: List[WebSocket] = []
|
||||||
|
|
||||||
|
|
||||||
|
def get_orchestrator():
|
||||||
|
"""Get or create the DevLoop orchestrator."""
|
||||||
|
global _orchestrator
|
||||||
|
if _orchestrator is None:
|
||||||
|
from optimization_engine.devloop import DevLoopOrchestrator
|
||||||
|
|
||||||
|
_orchestrator = DevLoopOrchestrator(
|
||||||
|
{
|
||||||
|
"dashboard_url": "http://localhost:8000",
|
||||||
|
"websocket_url": "ws://localhost:8000",
|
||||||
|
"studies_dir": str(Path(__file__).parent.parent.parent.parent.parent / "studies"),
|
||||||
|
"learning_enabled": True,
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
|
# Subscribe to state updates
|
||||||
|
_orchestrator.subscribe(_broadcast_state_update)
|
||||||
|
|
||||||
|
return _orchestrator
|
||||||
|
|
||||||
|
|
||||||
|
def _broadcast_state_update(state):
|
||||||
|
"""Broadcast state updates to all WebSocket clients."""
|
||||||
|
asyncio.create_task(
|
||||||
|
_send_to_all_clients(
|
||||||
|
{
|
||||||
|
"type": "state_update",
|
||||||
|
"state": {
|
||||||
|
"phase": state.phase.value,
|
||||||
|
"iteration": state.iteration,
|
||||||
|
"current_task": state.current_task,
|
||||||
|
"last_update": state.last_update,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
async def _send_to_all_clients(message: Dict):
|
||||||
|
"""Send message to all connected WebSocket clients."""
|
||||||
|
disconnected = []
|
||||||
|
for client in _websocket_clients:
|
||||||
|
try:
|
||||||
|
await client.send_json(message)
|
||||||
|
except Exception:
|
||||||
|
disconnected.append(client)
|
||||||
|
|
||||||
|
# Clean up disconnected clients
|
||||||
|
for client in disconnected:
|
||||||
|
if client in _websocket_clients:
|
||||||
|
_websocket_clients.remove(client)
|
||||||
|
|
||||||
|
|
||||||
|
# ============================================================================
|
||||||
|
# Request/Response Models
|
||||||
|
# ============================================================================
|
||||||
|
|
||||||
|
|
||||||
|
class StartCycleRequest(BaseModel):
|
||||||
|
"""Request to start a development cycle."""
|
||||||
|
|
||||||
|
objective: str = Field(..., description="What to achieve")
|
||||||
|
context: Optional[Dict[str, Any]] = Field(default=None, description="Additional context")
|
||||||
|
max_iterations: Optional[int] = Field(default=10, description="Maximum iterations")
|
||||||
|
|
||||||
|
|
||||||
|
class StepRequest(BaseModel):
|
||||||
|
"""Request to execute a single step."""
|
||||||
|
|
||||||
|
phase: str = Field(..., description="Phase to execute: plan, implement, test, analyze")
|
||||||
|
data: Optional[Dict[str, Any]] = Field(default=None, description="Phase-specific data")
|
||||||
|
|
||||||
|
|
||||||
|
class CycleStatusResponse(BaseModel):
|
||||||
|
"""Response with cycle status."""
|
||||||
|
|
||||||
|
active: bool
|
||||||
|
phase: str
|
||||||
|
iteration: int
|
||||||
|
current_task: Optional[str]
|
||||||
|
last_update: str
|
||||||
|
|
||||||
|
|
||||||
|
# ============================================================================
|
||||||
|
# REST Endpoints
|
||||||
|
# ============================================================================
|
||||||
|
|
||||||
|
|
||||||
|
@router.get("/status")
|
||||||
|
async def get_status() -> CycleStatusResponse:
|
||||||
|
"""Get current DevLoop status."""
|
||||||
|
orchestrator = get_orchestrator()
|
||||||
|
state = orchestrator.get_state()
|
||||||
|
|
||||||
|
return CycleStatusResponse(
|
||||||
|
active=state["phase"] != "idle",
|
||||||
|
phase=state["phase"],
|
||||||
|
iteration=state["iteration"],
|
||||||
|
current_task=state.get("current_task"),
|
||||||
|
last_update=state["last_update"],
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@router.post("/start")
|
||||||
|
async def start_cycle(request: StartCycleRequest, background_tasks: BackgroundTasks):
|
||||||
|
"""
|
||||||
|
Start a new development cycle.
|
||||||
|
|
||||||
|
The cycle runs in the background and broadcasts progress via WebSocket.
|
||||||
|
"""
|
||||||
|
global _active_cycle
|
||||||
|
|
||||||
|
orchestrator = get_orchestrator()
|
||||||
|
|
||||||
|
# Check if already running
|
||||||
|
if orchestrator.state.phase.value != "idle":
|
||||||
|
raise HTTPException(status_code=409, detail="A development cycle is already running")
|
||||||
|
|
||||||
|
# Start cycle in background
|
||||||
|
async def run_cycle():
|
||||||
|
global _active_cycle
|
||||||
|
try:
|
||||||
|
result = await orchestrator.run_development_cycle(
|
||||||
|
objective=request.objective,
|
||||||
|
context=request.context,
|
||||||
|
max_iterations=request.max_iterations,
|
||||||
|
)
|
||||||
|
_active_cycle = result
|
||||||
|
|
||||||
|
# Broadcast completion
|
||||||
|
await _send_to_all_clients(
|
||||||
|
{
|
||||||
|
"type": "cycle_complete",
|
||||||
|
"result": {
|
||||||
|
"objective": result.objective,
|
||||||
|
"status": result.status,
|
||||||
|
"iterations": len(result.iterations),
|
||||||
|
"duration_seconds": result.total_duration_seconds,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
)
|
||||||
|
except Exception as e:
|
||||||
|
await _send_to_all_clients({"type": "cycle_error", "error": str(e)})
|
||||||
|
|
||||||
|
background_tasks.add_task(run_cycle)
|
||||||
|
|
||||||
|
return {
|
||||||
|
"message": "Development cycle started",
|
||||||
|
"objective": request.objective,
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
@router.post("/stop")
|
||||||
|
async def stop_cycle():
|
||||||
|
"""Stop the current development cycle."""
|
||||||
|
orchestrator = get_orchestrator()
|
||||||
|
|
||||||
|
if orchestrator.state.phase.value == "idle":
|
||||||
|
raise HTTPException(status_code=400, detail="No active cycle to stop")
|
||||||
|
|
||||||
|
# Set state to idle (will stop at next phase boundary)
|
||||||
|
orchestrator._update_state(phase=orchestrator.state.phase.__class__.IDLE, task="Stopping...")
|
||||||
|
|
||||||
|
return {"message": "Cycle stop requested"}
|
||||||
|
|
||||||
|
|
||||||
|
@router.post("/step")
|
||||||
|
async def execute_step(request: StepRequest):
|
||||||
|
"""
|
||||||
|
Execute a single phase step.
|
||||||
|
|
||||||
|
Useful for manual control or debugging.
|
||||||
|
"""
|
||||||
|
orchestrator = get_orchestrator()
|
||||||
|
|
||||||
|
if request.phase == "plan":
|
||||||
|
objective = request.data.get("objective", "") if request.data else ""
|
||||||
|
context = request.data.get("context") if request.data else None
|
||||||
|
result = await orchestrator.step_plan(objective, context)
|
||||||
|
|
||||||
|
elif request.phase == "implement":
|
||||||
|
plan = request.data if request.data else {}
|
||||||
|
result = await orchestrator.step_implement(plan)
|
||||||
|
|
||||||
|
elif request.phase == "test":
|
||||||
|
scenarios = request.data.get("scenarios", []) if request.data else []
|
||||||
|
result = await orchestrator.step_test(scenarios)
|
||||||
|
|
||||||
|
elif request.phase == "analyze":
|
||||||
|
test_results = request.data if request.data else {}
|
||||||
|
result = await orchestrator.step_analyze(test_results)
|
||||||
|
|
||||||
|
else:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=400,
|
||||||
|
detail=f"Unknown phase: {request.phase}. Valid: plan, implement, test, analyze",
|
||||||
|
)
|
||||||
|
|
||||||
|
return {"phase": request.phase, "result": result}
|
||||||
|
|
||||||
|
|
||||||
|
@router.get("/history")
async def get_history():
    """Get history of past development cycles."""
    orchestrator = get_orchestrator()
    return orchestrator.export_history()


@router.get("/last-cycle")
async def get_last_cycle():
    """Get details of the most recent cycle."""
    global _active_cycle

    if _active_cycle is None:
        raise HTTPException(status_code=404, detail="No cycle has been run yet")

    return {
        "objective": _active_cycle.objective,
        "status": _active_cycle.status,
        "start_time": _active_cycle.start_time,
        "end_time": _active_cycle.end_time,
        "iterations": [
            {
                "iteration": it.iteration,
                "success": it.success,
                "duration_seconds": it.duration_seconds,
                "has_plan": it.plan is not None,
                "has_tests": it.test_results is not None,
                "has_fixes": it.fixes is not None,
            }
            for it in _active_cycle.iterations
        ],
        "total_duration_seconds": _active_cycle.total_duration_seconds,
    }

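The per-iteration summary shape returned by `/last-cycle` can be checked in isolation. `Iteration` below is a hypothetical stand-in for the orchestrator's record type, using only the attribute names the endpoint reads:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Iteration:
    # Hypothetical stand-in; attribute names mirror those read by /last-cycle
    iteration: int
    success: bool
    duration_seconds: float
    plan: Optional[dict] = None
    test_results: Optional[dict] = None
    fixes: Optional[list] = None


def summarize(iterations):
    # Same per-iteration dict as the endpoint builds
    return [
        {
            "iteration": it.iteration,
            "success": it.success,
            "duration_seconds": it.duration_seconds,
            "has_plan": it.plan is not None,
            "has_tests": it.test_results is not None,
            "has_fixes": it.fixes is not None,
        }
        for it in iterations
    ]


summary = summarize([Iteration(1, True, 3.2, plan={"step": "a"})])
```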
@router.get("/health")
async def health_check():
    """Check DevLoop system health."""
    orchestrator = get_orchestrator()

    # Check dashboard connection
    from optimization_engine.devloop import DashboardTestRunner

    runner = DashboardTestRunner()
    dashboard_health = await runner.run_health_check()

    return {
        "devloop": "healthy",
        "orchestrator_state": orchestrator.get_state()["phase"],
        "dashboard": dashboard_health,
    }

# ============================================================================
# WebSocket Endpoint
# ============================================================================


@router.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    """
    WebSocket endpoint for real-time DevLoop updates.

    Messages sent:
    - state_update: Phase/iteration changes
    - cycle_complete: Cycle finished
    - cycle_error: Cycle failed
    - test_progress: Individual test results
    """
    await websocket.accept()
    _websocket_clients.append(websocket)

    orchestrator = get_orchestrator()

    try:
        # Send initial state
        await websocket.send_json(
            {
                "type": "connection_ack",
                "state": orchestrator.get_state(),
            }
        )

        # Handle incoming messages
        while True:
            try:
                data = await asyncio.wait_for(websocket.receive_json(), timeout=30.0)

                msg_type = data.get("type")

                if msg_type == "ping":
                    await websocket.send_json({"type": "pong"})

                elif msg_type == "get_state":
                    await websocket.send_json(
                        {
                            "type": "state",
                            "state": orchestrator.get_state(),
                        }
                    )

                elif msg_type == "start_cycle":
                    # Allow starting a cycle via WebSocket
                    objective = data.get("objective", "")
                    context = data.get("context")

                    asyncio.create_task(orchestrator.run_development_cycle(objective, context))

                    await websocket.send_json(
                        {
                            "type": "cycle_started",
                            "objective": objective,
                        }
                    )

            except asyncio.TimeoutError:
                # Send heartbeat
                await websocket.send_json({"type": "heartbeat"})

    except WebSocketDisconnect:
        pass
    finally:
        if websocket in _websocket_clients:
            _websocket_clients.remove(websocket)

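The request/reply part of this protocol (ping, get_state, start_cycle) is simple enough to factor into a pure function. `handle_message` is a hypothetical helper, not part of the route, and it ignores the fire-and-forget task scheduling:

```python
from typing import Any, Dict, Optional


def handle_message(data: Dict[str, Any], state: Dict[str, Any]) -> Optional[Dict[str, Any]]:
    """Map an incoming WebSocket message to its reply, per the handler above."""
    msg_type = data.get("type")
    if msg_type == "ping":
        return {"type": "pong"}
    if msg_type == "get_state":
        return {"type": "state", "state": state}
    if msg_type == "start_cycle":
        # The real handler also schedules run_development_cycle() as a task
        return {"type": "cycle_started", "objective": data.get("objective", "")}
    return None  # unknown message types get no reply
```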
# ============================================================================
# Convenience Endpoints for Common Tasks
# ============================================================================


@router.post("/create-study")
async def create_study_cycle(
    study_name: str,
    problem_statement: Optional[str] = None,
    background_tasks: BackgroundTasks = None,
):
    """
    Convenience endpoint to start a study creation cycle.

    This is a common workflow that combines planning, implementation, and testing.
    """
    orchestrator = get_orchestrator()

    context = {
        "study_name": study_name,
        "task_type": "create_study",
    }

    if problem_statement:
        context["problem_statement"] = problem_statement

    # Start the cycle
    async def run_cycle():
        result = await orchestrator.run_development_cycle(
            objective=f"Create optimization study: {study_name}",
            context=context,
        )
        return result

    if background_tasks:
        background_tasks.add_task(run_cycle)
        return {"message": f"Study creation cycle started for '{study_name}'"}
    else:
        result = await run_cycle()
        return {
            "message": f"Study '{study_name}' creation completed",
            "status": result.status,
            "iterations": len(result.iterations),
        }

@router.post("/run-tests")
async def run_tests(scenarios: List[Dict[str, Any]]):
    """
    Run a set of test scenarios directly.

    Useful for testing specific features without a full cycle.
    """
    from optimization_engine.devloop import DashboardTestRunner

    runner = DashboardTestRunner()
    results = await runner.run_test_suite(scenarios)

    return results
atomizer-dashboard/backend/api/routes/intake.py (new file, 1721 lines)
File diff suppressed because it is too large
@@ -245,17 +245,45 @@ def _get_study_error_info(study_dir: Path, results_dir: Path) -> dict:
 
 def _load_study_info(study_dir: Path, topic: Optional[str] = None) -> Optional[dict]:
     """Load study info from a study directory. Returns None if not a valid study."""
-    # Look for optimization config (check multiple locations)
-    config_file = study_dir / "optimization_config.json"
-    if not config_file.exists():
-        config_file = study_dir / "1_setup" / "optimization_config.json"
-    if not config_file.exists():
-        return None
+    # Look for config file - prefer atomizer_spec.json (v2.0), fall back to legacy optimization_config.json
+    config_file = None
+    is_atomizer_spec = False
+
+    # Check for AtomizerSpec v2.0 first
+    for spec_path in [
+        study_dir / "atomizer_spec.json",
+        study_dir / "1_setup" / "atomizer_spec.json",
+    ]:
+        if spec_path.exists():
+            config_file = spec_path
+            is_atomizer_spec = True
+            break
+
+    # Fall back to legacy optimization_config.json
+    if config_file is None:
+        for legacy_path in [
+            study_dir / "optimization_config.json",
+            study_dir / "1_setup" / "optimization_config.json",
+        ]:
+            if legacy_path.exists():
+                config_file = legacy_path
+                break
+
+    if config_file is None:
+        return None
 
     # Load config
     with open(config_file) as f:
         config = json.load(f)
+
+    # Normalize AtomizerSpec v2.0 to legacy format for compatibility
+    if is_atomizer_spec and "meta" in config:
+        # Extract study_name and description from meta
+        meta = config.get("meta", {})
+        config["study_name"] = meta.get("study_name", study_dir.name)
+        config["description"] = meta.get("description", "")
+        config["version"] = meta.get("version", "2.0")
 
     # Check if results directory exists (support both 2_results and 3_results)
     results_dir = study_dir / "2_results"
     if not results_dir.exists():
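The two-tier lookup added in this hunk can be exercised standalone. `find_config` is a hypothetical free-function mirror of the search order (spec locations first, legacy second):

```python
import tempfile
from pathlib import Path


def find_config(study_dir: Path):
    """Hypothetical mirror of the search order above: spec first, legacy second."""
    for name, is_spec in [("atomizer_spec.json", True), ("optimization_config.json", False)]:
        for candidate in [study_dir / name, study_dir / "1_setup" / name]:
            if candidate.exists():
                return candidate, is_spec
    return None, False


with tempfile.TemporaryDirectory() as tmp:
    study = Path(tmp)
    (study / "1_setup").mkdir()
    (study / "1_setup" / "optimization_config.json").write_text("{}")
    legacy_cfg, legacy_is_spec = find_config(study)  # only legacy present
    (study / "atomizer_spec.json").write_text("{}")
    spec_cfg, spec_is_spec = find_config(study)      # spec wins once present
```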
@@ -311,12 +339,21 @@ def _load_study_info(study_dir: Path, topic: Optional[str] = None) -> Optional[d
         best_trial = min(history, key=lambda x: x["objective"])
         best_value = best_trial["objective"]
 
-    # Get total trials from config (supports both formats)
-    total_trials = (
-        config.get("optimization_settings", {}).get("n_trials")
-        or config.get("optimization", {}).get("n_trials")
-        or config.get("trials", {}).get("n_trials", 50)
-    )
+    # Get total trials from config (supports AtomizerSpec v2.0 and legacy formats)
+    total_trials = None
+
+    # AtomizerSpec v2.0: optimization.budget.max_trials
+    if is_atomizer_spec:
+        total_trials = config.get("optimization", {}).get("budget", {}).get("max_trials")
+
+    # Legacy formats
+    if total_trials is None:
+        total_trials = (
+            config.get("optimization_settings", {}).get("n_trials")
+            or config.get("optimization", {}).get("n_trials")
+            or config.get("optimization", {}).get("max_trials")
+            or config.get("trials", {}).get("n_trials", 100)
+        )
 
     # Get accurate status using process detection
     status = get_accurate_study_status(study_dir.name, trial_count, total_trials, has_db)
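The resolution order can be verified with plain dicts. `resolve_total_trials` is a hypothetical mirror of the logic in the hunk above:

```python
def resolve_total_trials(config: dict, is_atomizer_spec: bool) -> int:
    """Mirror of the resolution order above: v2.0 budget first, then legacy keys."""
    total = None
    if is_atomizer_spec:
        total = config.get("optimization", {}).get("budget", {}).get("max_trials")
    if total is None:
        total = (
            config.get("optimization_settings", {}).get("n_trials")
            or config.get("optimization", {}).get("n_trials")
            or config.get("optimization", {}).get("max_trials")
            or config.get("trials", {}).get("n_trials", 100)  # final default
        )
    return total


v2 = {"optimization": {"budget": {"max_trials": 60}}}
legacy = {"optimization_settings": {"n_trials": 40}}
```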
@@ -380,7 +417,12 @@ async def list_studies():
             continue
 
         # Check if this is a study (flat structure) or a topic folder (nested structure)
-        is_study = (item / "1_setup").exists() or (item / "optimization_config.json").exists()
+        # Support both AtomizerSpec v2.0 (atomizer_spec.json) and legacy (optimization_config.json)
+        is_study = (
+            (item / "1_setup").exists()
+            or (item / "atomizer_spec.json").exists()
+            or (item / "optimization_config.json").exists()
+        )
 
         if is_study:
            # Flat structure: study directly in studies/
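The flat-vs-topic detection reduces to a three-marker predicate. A standalone sketch (the predicate name is hypothetical; the markers are the same three checked above):

```python
import tempfile
from pathlib import Path


def is_study_dir(item: Path) -> bool:
    # Same three markers as the check above
    return (
        (item / "1_setup").exists()
        or (item / "atomizer_spec.json").exists()
        or (item / "optimization_config.json").exists()
    )


with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    flat = root / "flat_study"
    flat.mkdir()
    (flat / "atomizer_spec.json").write_text("{}")
    topic = root / "topic_folder"
    topic.mkdir()  # no markers: treated as a topic folder, not a study
    flat_is = is_study_dir(flat)
    topic_is = is_study_dir(topic)
```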
@@ -396,10 +438,12 @@ async def list_studies():
                 if sub_item.name.startswith("."):
                     continue
 
-                # Check if this subdirectory is a study
-                sub_is_study = (sub_item / "1_setup").exists() or (
-                    sub_item / "optimization_config.json"
-                ).exists()
+                # Check if this subdirectory is a study (AtomizerSpec v2.0 or legacy)
+                sub_is_study = (
+                    (sub_item / "1_setup").exists()
+                    or (sub_item / "atomizer_spec.json").exists()
+                    or (sub_item / "optimization_config.json").exists()
+                )
                 if sub_is_study:
                     study_info = _load_study_info(sub_item, topic=item.name)
                     if study_info:
atomizer-dashboard/backend/api/services/claude_readme.py (new file, 396 lines)
@@ -0,0 +1,396 @@
"""
|
||||||
|
Claude README Generator Service
|
||||||
|
|
||||||
|
Generates intelligent README.md files for optimization studies
|
||||||
|
using Claude Code CLI (not API) with study context from AtomizerSpec.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import asyncio
|
||||||
|
import json
|
||||||
|
import subprocess
|
||||||
|
from datetime import datetime
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Any, Dict, Optional
|
||||||
|
|
||||||
|
# Base directory
|
||||||
|
ATOMIZER_ROOT = Path(__file__).parent.parent.parent.parent.parent
|
||||||
|
|
||||||
|
# Load skill prompt
|
||||||
|
SKILL_PATH = ATOMIZER_ROOT / ".claude" / "skills" / "modules" / "study-readme-generator.md"
|
||||||
|
|
||||||
|
|
||||||
|
def load_skill_prompt() -> str:
|
||||||
|
"""Load the README generator skill prompt."""
|
||||||
|
if SKILL_PATH.exists():
|
||||||
|
return SKILL_PATH.read_text(encoding="utf-8")
|
||||||
|
return ""
|
||||||
|
|
||||||
|
|
||||||
|
class ClaudeReadmeGenerator:
|
||||||
|
"""Generate README.md files using Claude Code CLI."""
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
self.skill_prompt = load_skill_prompt()
|
||||||
|
|
||||||
|
def generate_readme(
|
||||||
|
self,
|
||||||
|
study_name: str,
|
||||||
|
spec: Dict[str, Any],
|
||||||
|
context_files: Optional[Dict[str, str]] = None,
|
||||||
|
topic: Optional[str] = None,
|
||||||
|
) -> str:
|
||||||
|
"""
|
||||||
|
Generate a README.md for a study using Claude Code CLI.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
study_name: Name of the study
|
||||||
|
spec: Full AtomizerSpec v2.0 dict
|
||||||
|
context_files: Optional dict of {filename: content} for context
|
||||||
|
topic: Optional topic folder name
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Generated README.md content
|
||||||
|
"""
|
||||||
|
# Build context for Claude
|
||||||
|
context = self._build_context(study_name, spec, context_files, topic)
|
||||||
|
|
||||||
|
# Build the prompt
|
||||||
|
prompt = self._build_prompt(context)
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Run Claude Code CLI synchronously
|
||||||
|
result = self._run_claude_cli(prompt)
|
||||||
|
|
||||||
|
# Extract markdown content from response
|
||||||
|
readme_content = self._extract_markdown(result)
|
||||||
|
|
||||||
|
if readme_content:
|
||||||
|
return readme_content
|
||||||
|
|
||||||
|
# If no markdown found, return the whole response
|
||||||
|
return result if result else self._generate_fallback_readme(study_name, spec)
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
print(f"Claude CLI error: {e}")
|
||||||
|
return self._generate_fallback_readme(study_name, spec)
|
||||||
|
|
||||||
|
    async def generate_readme_async(
        self,
        study_name: str,
        spec: Dict[str, Any],
        context_files: Optional[Dict[str, str]] = None,
        topic: Optional[str] = None,
    ) -> str:
        """Async version of generate_readme."""
        # Run in a thread pool so the event loop is not blocked
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None, lambda: self.generate_readme(study_name, spec, context_files, topic)
        )

    def _run_claude_cli(self, prompt: str) -> str:
        """Run Claude Code CLI and get the response."""
        try:
            # Use the claude CLI with --print flag for non-interactive output
            result = subprocess.run(
                ["claude", "--print", prompt],
                capture_output=True,
                text=True,
                timeout=120,  # 2 minute timeout
                cwd=str(ATOMIZER_ROOT),
            )

            if result.returncode != 0:
                error_msg = result.stderr or "Unknown error"
                raise Exception(f"Claude CLI error: {error_msg}")

            return result.stdout.strip()

        except subprocess.TimeoutExpired:
            raise Exception("Request timed out")
        except FileNotFoundError:
            raise Exception("Claude CLI not found. Make sure 'claude' is in PATH.")

    def _build_context(
        self,
        study_name: str,
        spec: Dict[str, Any],
        context_files: Optional[Dict[str, str]],
        topic: Optional[str],
    ) -> Dict[str, Any]:
        """Build the context object for Claude."""
        meta = spec.get("meta", {})
        model = spec.get("model", {})
        introspection = model.get("introspection", {}) or {}

        context = {
            "study_name": study_name,
            "topic": topic or meta.get("topic", "Other"),
            "description": meta.get("description", ""),
            "created": meta.get("created", datetime.now().isoformat()),
            "status": meta.get("status", "draft"),
            "design_variables": spec.get("design_variables", []),
            "extractors": spec.get("extractors", []),
            "objectives": spec.get("objectives", []),
            "constraints": spec.get("constraints", []),
            "optimization": spec.get("optimization", {}),
            "introspection": {
                "mass_kg": introspection.get("mass_kg"),
                "volume_mm3": introspection.get("volume_mm3"),
                "solver_type": introspection.get("solver_type"),
                "expressions": introspection.get("expressions", []),
                "expressions_count": len(introspection.get("expressions", [])),
            },
            "model_files": {
                "sim": model.get("sim", {}).get("path") if model.get("sim") else None,
                "prt": model.get("prt", {}).get("path") if model.get("prt") else None,
                "fem": model.get("fem", {}).get("path") if model.get("fem") else None,
            },
        }

        # Add context files if provided
        if context_files:
            context["context_files"] = context_files

        return context

    def _build_prompt(self, context: Dict[str, Any]) -> str:
        """Build the prompt for Claude CLI."""

        # Build the context-files section if available
        context_files_section = ""
        if context.get("context_files"):
            context_files_section = "\n\n## User-Provided Context Files\n\nIMPORTANT: Use this information to understand the optimization goals, design variables, objectives, and constraints:\n\n"
            for filename, content in context.get("context_files", {}).items():
                context_files_section += f"### {filename}\n```\n{content}\n```\n\n"

        # Remove context_files from the JSON dump to avoid duplication
        context_for_json = {k: v for k, v in context.items() if k != "context_files"}

        prompt = f"""Generate a README.md for this FEA optimization study.

## Study Technical Data

```json
{json.dumps(context_for_json, indent=2, default=str)}
```
{context_files_section}
## Requirements

1. Use the EXACT values from the technical data - no placeholders
2. If context files are provided, extract:
- Design variable bounds (min/max)
- Optimization objectives (minimize/maximize what)
- Constraints (stress limits, etc.)
- Any specific requirements mentioned

3. Format the README with these sections:
- Title (# Study Name)
- Overview (topic, date, status, description from context)
- Engineering Problem (what we're optimizing and why - from context files)
- Model Information (mass, solver, files)
- Design Variables (if context specifies bounds, include them in a table)
- Optimization Objectives (from context files)
- Constraints (from context files)
- Expressions Found (table of discovered expressions, highlight candidates)
- Next Steps (what needs to be configured)

4. Keep it professional and concise
5. Use proper markdown table formatting
6. Include units where applicable
7. For expressions table, show: name, value, units, is_candidate

Generate ONLY the README.md content in markdown format, no explanations:"""

        return prompt

    def _extract_markdown(self, response: str) -> Optional[str]:
        """Extract markdown content from a Claude response."""
        if not response:
            return None

        # If the response starts with #, it's already markdown
        if response.strip().startswith("#"):
            return response.strip()

        # Try to find a fenced markdown block
        if "```markdown" in response:
            start = response.find("```markdown") + len("```markdown")
            end = response.find("```", start)
            if end > start:
                return response[start:end].strip()

        if "```md" in response:
            start = response.find("```md") + len("```md")
            end = response.find("```", start)
            if end > start:
                return response[start:end].strip()

        # Look for the first # heading
        lines = response.split("\n")
        for i, line in enumerate(lines):
            if line.strip().startswith("# "):
                return "\n".join(lines[i:]).strip()

        return None

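The extraction precedence (raw markdown, then a fenced block, then the first `#` heading) can be exercised without the class. `extract_markdown` below is a hypothetical free-function mirror of the method; the backtick fence is built programmatically so the sketch stays self-contained:

```python
from typing import Optional

FENCE = "`" * 3  # a literal triple-backtick fence


def extract_markdown(response: str) -> Optional[str]:
    """Hypothetical mirror of the method's precedence order."""
    if not response:
        return None
    if response.strip().startswith("#"):
        return response.strip()
    for tag in (FENCE + "markdown", FENCE + "md"):
        if tag in response:
            start = response.find(tag) + len(tag)
            end = response.find(FENCE, start)
            if end > start:
                return response[start:end].strip()
    lines = response.split("\n")
    for i, line in enumerate(lines):
        if line.strip().startswith("# "):
            return "\n".join(lines[i:]).strip()
    return None
```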
    def _generate_fallback_readme(self, study_name: str, spec: Dict[str, Any]) -> str:
        """Generate a basic README if Claude fails."""
        meta = spec.get("meta", {})
        model = spec.get("model", {})
        introspection = model.get("introspection", {}) or {}
        dvs = spec.get("design_variables", [])
        objs = spec.get("objectives", [])
        cons = spec.get("constraints", [])
        opt = spec.get("optimization", {})
        expressions = introspection.get("expressions", [])

        lines = [
            f"# {study_name.replace('_', ' ').title()}",
            "",
            f"**Topic**: {meta.get('topic', 'Other')}",
            f"**Created**: {meta.get('created', 'Unknown')[:10] if meta.get('created') else 'Unknown'}",
            f"**Status**: {meta.get('status', 'draft')}",
            "",
        ]

        if meta.get("description"):
            lines.extend([meta["description"], ""])

        # Model Information
        lines.extend(
            [
                "## Model Information",
                "",
            ]
        )

        if introspection.get("mass_kg"):
            lines.append(f"- **Mass**: {introspection['mass_kg']:.2f} kg")

        sim_path = model.get("sim", {}).get("path") if model.get("sim") else None
        if sim_path:
            lines.append(f"- **Simulation**: {sim_path}")

        lines.append("")

        # Expressions Found
        if expressions:
            lines.extend(
                [
                    "## Expressions Found",
                    "",
                    "| Name | Value | Units | Candidate |",
                    "|------|-------|-------|-----------|",
                ]
            )
            for expr in expressions:
                is_candidate = "✓" if expr.get("is_candidate") else ""
                value = f"{expr.get('value', '-')}"
                units = expr.get("units", "-")
                lines.append(f"| {expr.get('name', '-')} | {value} | {units} | {is_candidate} |")
            lines.append("")

        # Design Variables (if configured)
        if dvs:
            lines.extend(
                [
                    "## Design Variables",
                    "",
                    "| Variable | Expression | Range | Units |",
                    "|----------|------------|-------|-------|",
                ]
            )
            for dv in dvs:
                bounds = dv.get("bounds", {})
                units = dv.get("units", "-")
                lines.append(
                    f"| {dv.get('name', 'Unknown')} | "
                    f"{dv.get('expression_name', '-')} | "
                    f"[{bounds.get('min', '-')}, {bounds.get('max', '-')}] | "
                    f"{units} |"
                )
            lines.append("")

        # Objectives
        if objs:
            lines.extend(
                [
                    "## Objectives",
                    "",
                    "| Objective | Direction | Weight |",
                    "|-----------|-----------|--------|",
                ]
            )
            for obj in objs:
                lines.append(
                    f"| {obj.get('name', 'Unknown')} | "
                    f"{obj.get('direction', 'minimize')} | "
                    f"{obj.get('weight', 1.0)} |"
                )
            lines.append("")

        # Constraints
        if cons:
            lines.extend(
                [
                    "## Constraints",
                    "",
                    "| Constraint | Condition | Threshold |",
                    "|------------|-----------|-----------|",
                ]
            )
            for con in cons:
                lines.append(
                    f"| {con.get('name', 'Unknown')} | "
                    f"{con.get('operator', '<=')} | "
                    f"{con.get('threshold', '-')} |"
                )
            lines.append("")

        # Algorithm
        algo = opt.get("algorithm", {})
        budget = opt.get("budget", {})
        lines.extend(
            [
                "## Methodology",
                "",
                f"- **Algorithm**: {algo.get('type', 'TPE')}",
                f"- **Max Trials**: {budget.get('max_trials', 100)}",
                "",
            ]
        )

        # Next Steps
        lines.extend(
            [
                "## Next Steps",
                "",
            ]
        )

        if not dvs:
            lines.append("- [ ] Configure design variables from discovered expressions")
        if not objs:
            lines.append("- [ ] Define optimization objectives")
        if not dvs and not objs:
            lines.append("- [ ] Open in Canvas Builder to complete configuration")
        else:
            lines.append("- [ ] Run baseline solve to validate setup")
            lines.append("- [ ] Finalize study to move to studies folder")

        lines.append("")

        return "\n".join(lines)

# Singleton instance
_generator: Optional[ClaudeReadmeGenerator] = None


def get_readme_generator() -> ClaudeReadmeGenerator:
    """Get the singleton README generator instance."""
    global _generator
    if _generator is None:
        _generator = ClaudeReadmeGenerator()
    return _generator
@@ -26,6 +26,7 @@ class ContextBuilder:
         study_id: Optional[str] = None,
         conversation_history: Optional[List[Dict[str, Any]]] = None,
         canvas_state: Optional[Dict[str, Any]] = None,
+        spec_path: Optional[str] = None,
     ) -> str:
         """
         Build full system prompt with context.
@@ -35,6 +36,7 @@ class ContextBuilder:
             study_id: Optional study name to provide context for
             conversation_history: Optional recent messages for continuity
             canvas_state: Optional canvas state (nodes, edges) from the UI
+            spec_path: Optional path to the atomizer_spec.json file
 
         Returns:
             Complete system prompt string
@@ -45,7 +47,7 @@ class ContextBuilder:
         if canvas_state:
             node_count = len(canvas_state.get("nodes", []))
             print(f"[ContextBuilder] Including canvas context with {node_count} nodes")
-            parts.append(self._canvas_context(canvas_state))
+            parts.append(self._canvas_context(canvas_state, spec_path))
         else:
             print("[ContextBuilder] No canvas state provided")
@@ -57,7 +59,7 @@ class ContextBuilder:
         if conversation_history:
             parts.append(self._conversation_context(conversation_history))
 
-        parts.append(self._mode_instructions(mode))
+        parts.append(self._mode_instructions(mode, spec_path))
 
         return "\n\n---\n\n".join(parts)
@@ -298,7 +300,7 @@ Important guidelines:
 
         return context
 
-    def _canvas_context(self, canvas_state: Dict[str, Any]) -> str:
+    def _canvas_context(self, canvas_state: Dict[str, Any], spec_path: Optional[str] = None) -> str:
         """
         Build context from canvas state (nodes and edges).
@@ -317,6 +319,8 @@ Important guidelines:
             context += f"**Study Name**: {study_name}\n"
             if study_path:
                 context += f"**Study Path**: {study_path}\n"
+            if spec_path:
+                context += f"**Spec File**: `{spec_path}`\n"
             context += "\n"
 
         # Group nodes by type
@@ -438,61 +442,100 @@ Important guidelines:
|
|||||||
context += f"Total edges: {len(edges)}\n"
|
context += f"Total edges: {len(edges)}\n"
|
||||||
context += "Flow: Design Variables → Model → Solver → Extractors → Objectives/Constraints → Algorithm\n\n"
|
context += "Flow: Design Variables → Model → Solver → Extractors → Objectives/Constraints → Algorithm\n\n"
|
||||||
|
|
||||||
# Canvas modification instructions
|
# Instructions will be in _mode_instructions based on spec_path
|
||||||
context += """## Canvas Modification Tools
|
|
||||||
|
|
||||||
**For AtomizerSpec v2.0 studies (preferred):**
|
|
||||||
Use spec tools when working with v2.0 studies (check if study uses `atomizer_spec.json`):
|
|
||||||
- `spec_modify` - Modify spec values using JSONPath (e.g., "design_variables[0].bounds.min")
|
|
||||||
- `spec_add_node` - Add design variables, extractors, objectives, or constraints
|
|
||||||
- `spec_remove_node` - Remove nodes from the spec
|
|
||||||
- `spec_add_custom_extractor` - Add a Python-based custom extractor function
|
|
||||||
|
|
||||||
**For Legacy Canvas (optimization_config.json):**
|
|
||||||
- `canvas_add_node` - Add a new node (designVar, extractor, objective, constraint)
|
|
||||||
- `canvas_update_node` - Update node properties (bounds, weights, names)
|
|
||||||
- `canvas_remove_node` - Remove a node from the canvas
|
|
||||||
- `canvas_connect_nodes` - Create an edge between nodes
|
|
||||||
|
|
||||||
**Example user requests you can handle:**
|
|
||||||
- "Add a design variable called hole_diameter with range 5-15 mm" → Use spec_add_node or canvas_add_node
|
|
||||||
- "Change the weight of wfe_40_20 to 8" → Use spec_modify or canvas_update_node
|
|
||||||
- "Remove the constraint node" → Use spec_remove_node or canvas_remove_node
|
|
||||||
- "Add a custom extractor that computes stress ratio" → Use spec_add_custom_extractor
|
|
||||||
|
|
||||||
Always respond with confirmation of changes made to the canvas/spec.
|
|
||||||
"""
|
|
||||||
|
|
||||||
return context
|
return context
|
||||||
|
|
||||||
-    def _mode_instructions(self, mode: str) -> str:
+    def _mode_instructions(self, mode: str, spec_path: Optional[str] = None) -> str:
         """Mode-specific instructions"""
         if mode == "power":
-            return """# Power Mode Instructions
+            instructions = """# Power Mode Instructions
 
 You have **FULL ACCESS** to modify Atomizer studies. **DO NOT ASK FOR PERMISSION** - just do it.
 
-## Direct Actions (no confirmation needed):
+## CRITICAL: How to Modify the Spec
-- **Add design variables**: Use `canvas_add_node` or `spec_add_node` with node_type="designVar"
-- **Add extractors**: Use `canvas_add_node` with node_type="extractor"
-- **Add objectives**: Use `canvas_add_node` with node_type="objective"
-- **Add constraints**: Use `canvas_add_node` with node_type="constraint"
-- **Update node properties**: Use `canvas_update_node` or `spec_modify`
-- **Remove nodes**: Use `canvas_remove_node`
-- **Edit atomizer_spec.json directly**: Use the Edit tool
-
-## For custom extractors with Python code:
-Use `spec_add_custom_extractor` to add a custom function.
-
-## IMPORTANT:
-- You have --dangerously-skip-permissions enabled
-- The user has explicitly granted you power mode access
-- **ACT IMMEDIATELY** when asked to add/modify/remove things
-- Explain what you did AFTER doing it, not before
-- Do NOT say "I need permission" - you already have it
-
-Example: If user says "add a volume extractor", immediately use canvas_add_node to add it.
-
 """
 
+            if spec_path:
+                instructions += f"""**The spec file is at**: `{spec_path}`
+
+When asked to add/modify/remove design variables, extractors, objectives, or constraints:
+1. **Read the spec file first** using the Read tool
+2. **Edit the spec file** using the Edit tool to make precise changes
+3. **Confirm what you changed** in your response
+
+### AtomizerSpec v2.0 Structure
+
+The spec has these main arrays you can modify:
+- `design_variables` - Parameters to optimize
+- `extractors` - Physics extraction functions
+- `objectives` - What to minimize/maximize
+- `constraints` - Limits that must be satisfied
+
+### Example: Add a Design Variable
+
+To add a design variable called "thickness" with bounds [1, 10]:
+
+1. Read the spec: `Read({spec_path})`
+2. Find the `"design_variables": [...]` array
+3. Add a new entry like:
+```json
+{{
+  "id": "dv_thickness",
+  "name": "thickness",
+  "expression_name": "thickness",
+  "type": "continuous",
+  "bounds": {{"min": 1, "max": 10}},
+  "baseline": 5,
+  "units": "mm",
+  "enabled": true
+}}
+```
+4. Use Edit tool to insert this into the array
+
+### Example: Add an Objective
+
+To add a "minimize mass" objective:
+```json
+{{
+  "id": "obj_mass",
+  "name": "mass",
+  "direction": "minimize",
+  "weight": 1.0,
+  "source": {{
+    "extractor_id": "ext_mass",
+    "output_name": "mass"
+  }}
+}}
+```
+
+### Example: Add an Extractor
+
+To add a mass extractor:
+```json
+{{
+  "id": "ext_mass",
+  "name": "mass",
+  "type": "mass",
+  "builtin": true,
+  "outputs": [{{"name": "mass", "units": "kg"}}]
+}}
+```
+
+"""
+            else:
+                instructions += """No spec file is currently set. Ask the user which study they want to work with.
+
+"""
+
+            instructions += """## IMPORTANT Rules:
+- You have --dangerously-skip-permissions enabled
+- **ACT IMMEDIATELY** when asked to add/modify/remove things
+- Use the **Edit** tool to modify the spec file directly
+- Generate unique IDs like `dv_<name>`, `ext_<name>`, `obj_<name>`, `con_<name>`
+- Explain what you changed AFTER doing it, not before
+- Do NOT say "I need permission" - you already have it
+"""
+            return instructions
         else:
             return """# User Mode Instructions
 
@@ -503,29 +546,11 @@ You can help with optimization workflows:
 - Generate reports
 - Explain FEA concepts
 
-**For code modifications**, suggest switching to Power Mode.
+**For modifying studies**, the user needs to switch to Power Mode.
 
-Available tools:
-- `list_studies`, `get_study_status`, `create_study`
-- `run_optimization`, `stop_optimization`, `get_optimization_status`
-- `get_trial_data`, `analyze_convergence`, `compare_trials`, `get_best_design`
-- `generate_report`, `export_data`
-- `explain_physics`, `recommend_method`, `query_extractors`
+In user mode you can:
+- Read and explain study configurations
+- Analyze optimization results
+- Provide recommendations
+- Answer questions about FEA and optimization
-
-**AtomizerSpec v2.0 Tools (preferred for new studies):**
-- `spec_get` - Get the full AtomizerSpec for a study
-- `spec_modify` - Modify spec values using JSONPath (e.g., "design_variables[0].bounds.min")
-- `spec_add_node` - Add design variables, extractors, objectives, or constraints
-- `spec_remove_node` - Remove nodes from the spec
-- `spec_validate` - Validate spec against JSON Schema
-- `spec_add_custom_extractor` - Add a Python-based custom extractor function
-- `spec_create_from_description` - Create a new study from natural language description
-
-**Canvas Tools (for visual workflow builder):**
-- `validate_canvas_intent` - Validate a canvas-generated optimization intent
-- `execute_canvas_intent` - Create a study from a canvas intent
-- `interpret_canvas_intent` - Analyze intent and provide recommendations
-
-When you receive a message containing "INTENT:" followed by JSON, this is from the Canvas UI.
-Parse the intent and use the appropriate canvas tool to process it.
 """
@@ -1,11 +1,15 @@
 """
 Session Manager
 
-Manages persistent Claude Code sessions with MCP integration.
+Manages persistent Claude Code sessions with direct file editing.
 Fixed for Windows compatibility - uses subprocess.Popen with ThreadPoolExecutor.
+
+Strategy: Claude edits atomizer_spec.json directly using Edit/Write tools
+(no MCP dependency for reliability).
 """
 
 import asyncio
+import hashlib
 import json
 import os
 import subprocess
@@ -26,6 +30,10 @@ MCP_SERVER_PATH = ATOMIZER_ROOT / "mcp-server" / "atomizer-tools"
 # Thread pool for subprocess operations (Windows compatible)
 _executor = ThreadPoolExecutor(max_workers=4)
 
+import logging
+
+logger = logging.getLogger(__name__)
+
 
 @dataclass
 class ClaudeSession:
@@ -130,6 +138,7 @@ class SessionManager:
         Send a message to a session and stream the response.
 
         Uses synchronous subprocess.Popen via ThreadPoolExecutor for Windows compatibility.
+        Claude edits atomizer_spec.json directly using Edit/Write tools (no MCP).
 
         Args:
             session_id: The session ID
@@ -147,45 +156,48 @@ class SessionManager:
         # Store user message
         self.store.add_message(session_id, "user", message)
 
+        # Get spec path and hash BEFORE Claude runs (to detect changes)
+        spec_path = self._get_spec_path(session.study_id) if session.study_id else None
+        spec_hash_before = self._get_file_hash(spec_path) if spec_path else None
+
         # Build context with conversation history AND canvas state
         history = self.store.get_history(session_id, limit=10)
         full_prompt = self.context_builder.build(
             mode=session.mode,
             study_id=session.study_id,
             conversation_history=history[:-1],
-            canvas_state=canvas_state,  # Pass canvas state for context
+            canvas_state=canvas_state,
+            spec_path=str(spec_path) if spec_path else None,  # Tell Claude where the spec is
         )
         full_prompt += f"\n\nUser: {message}\n\nRespond helpfully and concisely:"
 
-        # Build CLI arguments
+        # Build CLI arguments - NO MCP for reliability
         cli_args = ["claude", "--print"]
 
-        # Ensure MCP config exists
-        mcp_config_path = ATOMIZER_ROOT / f".claude-mcp-{session_id}.json"
-        if not mcp_config_path.exists():
-            mcp_config = self._build_mcp_config(session.mode)
-            with open(mcp_config_path, "w") as f:
-                json.dump(mcp_config, f)
-        cli_args.extend(["--mcp-config", str(mcp_config_path)])
 
         if session.mode == "user":
-            cli_args.extend([
-                "--allowedTools",
-                "Read Write(**/STUDY_REPORT.md) Write(**/3_results/*.md) Bash(python:*) mcp__atomizer-tools__*"
-            ])
+            # User mode: limited tools
+            cli_args.extend(
+                [
+                    "--allowedTools",
+                    "Read Bash(python:*)",
+                ]
+            )
         else:
+            # Power mode: full access to edit files
             cli_args.append("--dangerously-skip-permissions")
 
         cli_args.append("-")  # Read from stdin
 
         full_response = ""
         tool_calls: List[Dict] = []
+        process: Optional[subprocess.Popen] = None
 
         try:
             loop = asyncio.get_event_loop()
 
             # Run subprocess in thread pool (Windows compatible)
             def run_claude():
+                nonlocal process
                 try:
                     process = subprocess.Popen(
                         cli_args,
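The pattern the hunk above relies on — running a blocking `subprocess` call inside a thread pool so the async generator stays responsive on Windows — can be sketched independently of the Atomizer code. This is a minimal illustration; the function and variable names here are hypothetical, not from the repository:

```python
import asyncio
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=4)


async def run_cli(args, stdin_text: str) -> dict:
    """Run a blocking CLI invocation in a worker thread and await the result."""

    def blocking_call():
        try:
            proc = subprocess.Popen(
                args,
                stdin=subprocess.PIPE,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                text=True,
            )
            out, err = proc.communicate(input=stdin_text, timeout=30)
            return {"stdout": out, "stderr": err, "returncode": proc.returncode}
        except subprocess.TimeoutExpired:
            # TimeoutExpired is only raised by communicate(), so proc exists here
            proc.kill()
            return {"error": "timeout"}

    loop = asyncio.get_running_loop()
    # run_in_executor keeps the event loop free while the subprocess blocks
    return await loop.run_in_executor(_executor, blocking_call)


result = asyncio.run(run_cli([sys.executable, "-c", "print(input())"], "hello"))
```

Keeping a `process` reference visible to the `except` handler (as the diff does with `nonlocal process`) matters when `Popen` itself can fail before `communicate` runs.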
@@ -194,8 +206,8 @@ class SessionManager:
                         stderr=subprocess.PIPE,
                         cwd=str(ATOMIZER_ROOT),
                         text=True,
-                        encoding='utf-8',
-                        errors='replace',
+                        encoding="utf-8",
+                        errors="replace",
                     )
                     stdout, stderr = process.communicate(input=full_prompt, timeout=300)
                     return {
@@ -204,10 +216,13 @@ class SessionManager:
                         "returncode": process.returncode,
                     }
                 except subprocess.TimeoutExpired:
-                    process.kill()
+                    if process:
+                        process.kill()
                     return {"error": "Response timeout (5 minutes)"}
                 except FileNotFoundError:
-                    return {"error": "Claude CLI not found in PATH. Install with: npm install -g @anthropic-ai/claude-code"}
+                    return {
+                        "error": "Claude CLI not found in PATH. Install with: npm install -g @anthropic-ai/claude-code"
+                    }
                 except Exception as e:
                     return {"error": str(e)}
 
@@ -219,24 +234,14 @@ class SessionManager:
             full_response = result["stdout"] or ""
 
             if full_response:
-                # Check if response contains canvas modifications (from MCP tools)
-                import logging
-                logger = logging.getLogger(__name__)
-
-                modifications = self._extract_canvas_modifications(full_response)
-                logger.info(f"[SEND_MSG] Found {len(modifications)} canvas modifications to send")
-
-                for mod in modifications:
-                    logger.info(f"[SEND_MSG] Sending canvas_modification: {mod.get('action')} {mod.get('nodeType')}")
-                    yield {"type": "canvas_modification", "modification": mod}
-
-                # Always send the text response
+                # Always send the text response first
                 yield {"type": "text", "content": full_response}
 
             if result["returncode"] != 0 and result["stderr"]:
-                yield {"type": "error", "message": f"CLI error: {result['stderr']}"}
+                logger.warning(f"[SEND_MSG] CLI stderr: {result['stderr']}")
 
         except Exception as e:
+            logger.error(f"[SEND_MSG] Exception: {e}")
             yield {"type": "error", "message": str(e)}
 
         # Store assistant response
@@ -248,8 +253,46 @@ class SessionManager:
             tool_calls=tool_calls if tool_calls else None,
         )
 
+        # Check if spec was modified by comparing hashes
+        if spec_path and session.mode == "power" and session.study_id:
+            spec_hash_after = self._get_file_hash(spec_path)
+            if spec_hash_before != spec_hash_after:
+                logger.info(f"[SEND_MSG] Spec file was modified! Sending update.")
+                spec_update = await self._check_spec_updated(session.study_id)
+                if spec_update:
+                    yield {
+                        "type": "spec_updated",
+                        "spec": spec_update,
+                        "tool": "direct_edit",
+                        "reason": "Claude modified spec file directly",
+                    }
+
         yield {"type": "done", "tool_calls": tool_calls}
 
+    def _get_spec_path(self, study_id: str) -> Optional[Path]:
+        """Get the atomizer_spec.json path for a study."""
+        if not study_id:
+            return None
+
+        if study_id.startswith("draft_"):
+            spec_path = ATOMIZER_ROOT / "studies" / "_inbox" / study_id / "atomizer_spec.json"
+        else:
+            spec_path = ATOMIZER_ROOT / "studies" / study_id / "atomizer_spec.json"
+            if not spec_path.exists():
+                spec_path = ATOMIZER_ROOT / "studies" / study_id / "1_setup" / "atomizer_spec.json"
+
+        return spec_path if spec_path.exists() else None
+
+    def _get_file_hash(self, path: Optional[Path]) -> Optional[str]:
+        """Get MD5 hash of a file for change detection."""
+        if not path or not path.exists():
+            return None
+        try:
+            with open(path, "rb") as f:
+                return hashlib.md5(f.read()).hexdigest()
+        except Exception:
+            return None
+
     async def switch_mode(
         self,
         session_id: str,
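The change-detection idea added in this hunk — hash the spec file before the CLI runs, hash it again afterwards, and treat a differing digest as "Claude edited the file" — reduces to a few lines. A self-contained sketch using a temp directory (paths and names here are illustrative only):

```python
import hashlib
import tempfile
from pathlib import Path
from typing import Optional


def file_hash(path: Path) -> Optional[str]:
    """MD5 digest of a file's bytes; None if the file doesn't exist."""
    if not path.exists():
        return None
    return hashlib.md5(path.read_bytes()).hexdigest()


# Simulate an external edit happening between two hash snapshots
spec = Path(tempfile.mkdtemp()) / "atomizer_spec.json"
spec.write_text('{"version": 1}', encoding="utf-8")
before = file_hash(spec)
spec.write_text('{"version": 2}', encoding="utf-8")  # the "file was edited" step
after = file_hash(spec)
changed = before != after
```

MD5 is fine here because the hash is used only for change detection, not for security.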
@@ -313,6 +356,7 @@ class SessionManager:
         """
         import re
         import logging
+
         logger = logging.getLogger(__name__)
 
         modifications = []
@@ -327,14 +371,16 @@ class SessionManager:
 
         try:
             # Method 1: Look for JSON in code fences
-            code_block_pattern = r'```(?:json)?\s*([\s\S]*?)```'
+            code_block_pattern = r"```(?:json)?\s*([\s\S]*?)```"
             for match in re.finditer(code_block_pattern, response):
                 block_content = match.group(1).strip()
                 try:
                     obj = json.loads(block_content)
-                    if isinstance(obj, dict) and 'modification' in obj:
-                        logger.info(f"[CANVAS_MOD] Found modification in code fence: {obj['modification']}")
-                        modifications.append(obj['modification'])
+                    if isinstance(obj, dict) and "modification" in obj:
+                        logger.info(
+                            f"[CANVAS_MOD] Found modification in code fence: {obj['modification']}"
+                        )
+                        modifications.append(obj["modification"])
                 except json.JSONDecodeError:
                     continue
 
@@ -342,7 +388,7 @@ class SessionManager:
             # This handles nested objects correctly
             i = 0
             while i < len(response):
-                if response[i] == '{':
+                if response[i] == "{":
                     # Found a potential JSON start, find matching close
                     brace_count = 1
                     j = i + 1
@@ -354,14 +400,14 @@ class SessionManager:
 
                         if escape_next:
                             escape_next = False
-                        elif char == '\\':
+                        elif char == "\\":
                             escape_next = True
                         elif char == '"' and not escape_next:
                             in_string = not in_string
                         elif not in_string:
-                            if char == '{':
+                            if char == "{":
                                 brace_count += 1
-                            elif char == '}':
+                            elif char == "}":
                                 brace_count -= 1
                         j += 1
 
@@ -369,11 +415,13 @@ class SessionManager:
                     potential_json = response[i:j]
                     try:
                         obj = json.loads(potential_json)
-                        if isinstance(obj, dict) and 'modification' in obj:
-                            mod = obj['modification']
+                        if isinstance(obj, dict) and "modification" in obj:
+                            mod = obj["modification"]
                             # Avoid duplicates
                             if mod not in modifications:
-                                logger.info(f"[CANVAS_MOD] Found inline modification: action={mod.get('action')}, nodeType={mod.get('nodeType')}")
+                                logger.info(
+                                    f"[CANVAS_MOD] Found inline modification: action={mod.get('action')}, nodeType={mod.get('nodeType')}"
+                                )
                                 modifications.append(mod)
                     except json.JSONDecodeError as e:
                         # Not valid JSON, skip
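The brace-matching loop being reformatted in these hunks implements a common technique: scan for `{`, track nesting depth while skipping braces inside string literals (and honoring backslash escapes), then attempt `json.loads` on each balanced span. A minimal standalone version of that technique — illustrative, not the repository's exact helper:

```python
import json


def extract_json_objects(text: str) -> list:
    """Return every parseable top-level {...} object embedded in free text."""
    objects = []
    i = 0
    while i < len(text):
        if text[i] == "{":
            depth, j = 1, i + 1
            in_string = escape = False
            while j < len(text) and depth > 0:
                ch = text[j]
                if escape:
                    escape = False          # previous char was a backslash
                elif ch == "\\":
                    escape = True
                elif ch == '"':
                    in_string = not in_string
                elif not in_string:
                    if ch == "{":
                        depth += 1
                    elif ch == "}":
                        depth -= 1
                j += 1
            try:
                objects.append(json.loads(text[i:j]))
            except json.JSONDecodeError:
                pass                        # balanced span was not valid JSON
            i = j
        else:
            i += 1
    return objects


found = extract_json_objects('noise {"a": {"b": 1}} more {"c": "x}y"} tail')
```

The string-state tracking is what makes this more robust than a regex: the `}` inside `"x}y"` does not close the object early.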
@@ -388,6 +436,43 @@ class SessionManager:
         logger.info(f"[CANVAS_MOD] Extracted {len(modifications)} modification(s)")
         return modifications
 
+    async def _check_spec_updated(self, study_id: str) -> Optional[Dict]:
+        """
+        Check if the atomizer_spec.json was modified and return the updated spec.
+
+        For drafts in _inbox/, we check the spec file directly.
+        """
+        import logging
+
+        logger = logging.getLogger(__name__)
+
+        try:
+            # Determine spec path based on study_id
+            if study_id.startswith("draft_"):
+                spec_path = ATOMIZER_ROOT / "studies" / "_inbox" / study_id / "atomizer_spec.json"
+            else:
+                # Regular study path
+                spec_path = ATOMIZER_ROOT / "studies" / study_id / "atomizer_spec.json"
+                if not spec_path.exists():
+                    spec_path = (
+                        ATOMIZER_ROOT / "studies" / study_id / "1_setup" / "atomizer_spec.json"
+                    )
+
+            if not spec_path.exists():
+                logger.debug(f"[SPEC_CHECK] Spec not found at {spec_path}")
+                return None
+
+            # Read and return the spec
+            with open(spec_path, "r", encoding="utf-8") as f:
+                spec = json.load(f)
+
+            logger.info(f"[SPEC_CHECK] Loaded spec from {spec_path}")
+            return spec
+
+        except Exception as e:
+            logger.error(f"[SPEC_CHECK] Error checking spec: {e}")
+            return None
+
     def _build_mcp_config(self, mode: Literal["user", "power"]) -> dict:
         """Build MCP configuration for Claude"""
         return {
@@ -47,11 +47,13 @@ from optimization_engine.config.spec_validator import (
 
 class SpecManagerError(Exception):
     """Base error for SpecManager operations."""
+
     pass
 
 
 class SpecNotFoundError(SpecManagerError):
     """Raised when spec file doesn't exist."""
+
     pass
 
 
@@ -118,7 +120,7 @@ class SpecManager:
         if not self.spec_path.exists():
             raise SpecNotFoundError(f"Spec not found: {self.spec_path}")
 
-        with open(self.spec_path, 'r', encoding='utf-8') as f:
+        with open(self.spec_path, "r", encoding="utf-8") as f:
             data = json.load(f)
 
         if validate:
@@ -141,14 +143,15 @@ class SpecManager:
         if not self.spec_path.exists():
             raise SpecNotFoundError(f"Spec not found: {self.spec_path}")
 
-        with open(self.spec_path, 'r', encoding='utf-8') as f:
+        with open(self.spec_path, "r", encoding="utf-8") as f:
             return json.load(f)
 
     def save(
         self,
         spec: Union[AtomizerSpec, Dict[str, Any]],
         modified_by: str = "api",
-        expected_hash: Optional[str] = None
+        expected_hash: Optional[str] = None,
+        skip_validation: bool = False,
     ) -> str:
         """
         Save spec with validation and broadcast.
@@ -157,6 +160,7 @@ class SpecManager:
             spec: Spec to save (AtomizerSpec or dict)
             modified_by: Who/what is making the change
             expected_hash: If provided, verify current file hash matches
+            skip_validation: If True, skip strict validation (for draft specs)
 
         Returns:
             New spec hash
@@ -167,7 +171,7 @@ class SpecManager:
         """
         # Convert to dict if needed
         if isinstance(spec, AtomizerSpec):
-            data = spec.model_dump(mode='json')
+            data = spec.model_dump(mode="json")
         else:
             data = spec
 
@@ -176,24 +180,30 @@ class SpecManager:
             current_hash = self.get_hash()
             if current_hash != expected_hash:
                 raise SpecConflictError(
-                    "Spec was modified by another client",
-                    current_hash=current_hash
+                    "Spec was modified by another client", current_hash=current_hash
                 )
 
         # Update metadata
-        now = datetime.now(timezone.utc).isoformat().replace('+00:00', 'Z')
+        now = datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")
         data["meta"]["modified"] = now
         data["meta"]["modified_by"] = modified_by
 
-        # Validate
-        self.validator.validate(data, strict=True)
+        # Validate (skip for draft specs or when explicitly requested)
+        status = data.get("meta", {}).get("status", "draft")
+        is_draft = status in ("draft", "introspected", "configured")
+
+        if not skip_validation and not is_draft:
+            self.validator.validate(data, strict=True)
+        elif not skip_validation:
+            # For draft specs, just validate non-strictly (collect warnings only)
+            self.validator.validate(data, strict=False)
 
         # Compute new hash
         new_hash = self._compute_hash(data)
 
         # Atomic write (write to temp, then rename)
-        temp_path = self.spec_path.with_suffix('.tmp')
-        with open(temp_path, 'w', encoding='utf-8') as f:
+        temp_path = self.spec_path.with_suffix(".tmp")
+        with open(temp_path, "w", encoding="utf-8") as f:
             json.dump(data, f, indent=2, ensure_ascii=False)
 
         temp_path.replace(self.spec_path)
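The write path in the hunk above follows the standard atomic-replace recipe: serialize to a sibling temp file, then rename it over the target so concurrent readers never observe a half-written spec. A self-contained sketch of that recipe (the paths here are illustrative):

```python
import json
import tempfile
from pathlib import Path


def atomic_write_json(path: Path, data: dict) -> None:
    """Write JSON to `path` atomically via a temp file in the same directory."""
    tmp = path.with_suffix(".tmp")
    with open(tmp, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=2, ensure_ascii=False)
    # Path.replace (os.replace) is an atomic rename on POSIX, and overwrites
    # an existing destination on Windows as well
    tmp.replace(path)


target = Path(tempfile.mkdtemp()) / "spec.json"
atomic_write_json(target, {"meta": {"modified_by": "api"}})
loaded = json.loads(target.read_text(encoding="utf-8"))
```

Keeping the temp file on the same filesystem as the target is what makes the final rename atomic; a temp file in `/tmp` would turn the rename into a copy on many setups.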
@@ -202,12 +212,9 @@ class SpecManager:
         self._last_hash = new_hash
 
         # Broadcast to subscribers
-        self._broadcast({
-            "type": "spec_updated",
-            "hash": new_hash,
-            "modified_by": modified_by,
-            "timestamp": now
-        })
+        self._broadcast(
+            {"type": "spec_updated", "hash": new_hash, "modified_by": modified_by, "timestamp": now}
+        )
 
         return new_hash
 
@@ -219,7 +226,7 @@ class SpecManager:
         """Get current spec hash."""
         if not self.spec_path.exists():
             return ""
-        with open(self.spec_path, 'r', encoding='utf-8') as f:
+        with open(self.spec_path, "r", encoding="utf-8") as f:
            data = json.load(f)
         return self._compute_hash(data)
 
@@ -240,12 +247,7 @@ class SpecManager:
     # Patch Operations
     # =========================================================================
 
-    def patch(
-        self,
-        path: str,
-        value: Any,
-        modified_by: str = "api"
-    ) -> AtomizerSpec:
+    def patch(self, path: str, value: Any, modified_by: str = "api") -> AtomizerSpec:
         """
         Apply a JSONPath-style modification.
 
@@ -306,7 +308,7 @@ class SpecManager:
         """Parse JSONPath into parts."""
         # Handle both dot notation and bracket notation
         parts = []
-        for part in re.split(r'\.|\[|\]', path):
+        for part in re.split(r"\.|\[|\]", path):
             if part:
                 parts.append(part)
         return parts
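The `re.split(r"\.|\[|\]", path)` approach in this hunk turns both dot and bracket notation into one flat list of parts, with the empty strings produced by consecutive delimiters (such as `].`) filtered out. A compact equivalent of that parsing step:

```python
import re


def parse_jsonpath(path: str) -> list:
    """Split a JSONPath-style string on '.', '[' and ']', dropping empties."""
    return [part for part in re.split(r"\.|\[|\]", path) if part]


parts = parse_jsonpath("design_variables[0].bounds.min")
```

Array indices come back as strings (`"0"` here), so a consumer walking the spec still has to decide per step whether to index a list or a dict.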
@@ -316,10 +318,7 @@ class SpecManager:
     # =========================================================================
 
     def add_node(
-        self,
-        node_type: str,
-        node_data: Dict[str, Any],
-        modified_by: str = "canvas"
+        self, node_type: str, node_data: Dict[str, Any], modified_by: str = "canvas"
     ) -> str:
         """
         Add a new node (design var, extractor, objective, constraint).
@@ -353,20 +352,19 @@ class SpecManager:
         self.save(data, modified_by)
 
         # Broadcast node addition
-        self._broadcast({
-            "type": "node_added",
-            "node_type": node_type,
-            "node_id": node_id,
-            "modified_by": modified_by
-        })
+        self._broadcast(
+            {
+                "type": "node_added",
+                "node_type": node_type,
+                "node_id": node_id,
+                "modified_by": modified_by,
+            }
+        )
 
         return node_id
 
     def update_node(
-        self,
-        node_id: str,
-        updates: Dict[str, Any],
-        modified_by: str = "canvas"
+        self, node_id: str, updates: Dict[str, Any], modified_by: str = "canvas"
     ) -> None:
         """
         Update an existing node.
@@ -396,11 +394,7 @@ class SpecManager:
 
         self.save(data, modified_by)
 
-    def remove_node(
-        self,
-        node_id: str,
-        modified_by: str = "canvas"
-    ) -> None:
+    def remove_node(self, node_id: str, modified_by: str = "canvas") -> None:
         """
         Remove a node and all edges referencing it.
 
@@ -427,24 +421,18 @@ class SpecManager:
         # Remove edges referencing this node
         if "canvas" in data and data["canvas"] and "edges" in data["canvas"]:
             data["canvas"]["edges"] = [
-                e for e in data["canvas"]["edges"]
+                e
+                for e in data["canvas"]["edges"]
                 if e.get("source") != node_id and e.get("target") != node_id
             ]
 
         self.save(data, modified_by)
 
         # Broadcast node removal
-        self._broadcast({
-            "type": "node_removed",
-            "node_id": node_id,
-            "modified_by": modified_by
-        })
+        self._broadcast({"type": "node_removed", "node_id": node_id, "modified_by": modified_by})
 
     def update_node_position(
-        self,
-        node_id: str,
-        position: Dict[str, float],
-        modified_by: str = "canvas"
+        self, node_id: str, position: Dict[str, float], modified_by: str = "canvas"
     ) -> None:
         """
         Update a node's canvas position.
@@ -456,12 +444,7 @@ class SpecManager:
|
|||||||
"""
|
"""
|
||||||
self.update_node(node_id, {"canvas_position": position}, modified_by)
|
self.update_node(node_id, {"canvas_position": position}, modified_by)
|
||||||
|
|
||||||
def add_edge(
|
def add_edge(self, source: str, target: str, modified_by: str = "canvas") -> None:
|
||||||
self,
|
|
||||||
source: str,
|
|
||||||
target: str,
|
|
||||||
modified_by: str = "canvas"
|
|
||||||
) -> None:
|
|
||||||
"""
|
"""
|
||||||
Add a canvas edge between nodes.
|
Add a canvas edge between nodes.
|
||||||
|
|
||||||
@@ -483,19 +466,11 @@ class SpecManager:
|
|||||||
if edge.get("source") == source and edge.get("target") == target:
|
if edge.get("source") == source and edge.get("target") == target:
|
||||||
return # Already exists
|
return # Already exists
|
||||||
|
|
||||||
data["canvas"]["edges"].append({
|
data["canvas"]["edges"].append({"source": source, "target": target})
|
||||||
"source": source,
|
|
||||||
"target": target
|
|
||||||
})
|
|
||||||
|
|
||||||
self.save(data, modified_by)
|
self.save(data, modified_by)
|
||||||
|
|
||||||
def remove_edge(
|
def remove_edge(self, source: str, target: str, modified_by: str = "canvas") -> None:
|
||||||
self,
|
|
||||||
source: str,
|
|
||||||
target: str,
|
|
||||||
modified_by: str = "canvas"
|
|
||||||
) -> None:
|
|
||||||
"""
|
"""
|
||||||
Remove a canvas edge.
|
Remove a canvas edge.
|
||||||
|
|
||||||
@@ -508,7 +483,8 @@ class SpecManager:
|
|||||||
|
|
||||||
if "canvas" in data and data["canvas"] and "edges" in data["canvas"]:
|
if "canvas" in data and data["canvas"] and "edges" in data["canvas"]:
|
||||||
data["canvas"]["edges"] = [
|
data["canvas"]["edges"] = [
|
||||||
e for e in data["canvas"]["edges"]
|
e
|
||||||
|
for e in data["canvas"]["edges"]
|
||||||
if not (e.get("source") == source and e.get("target") == target)
|
if not (e.get("source") == source and e.get("target") == target)
|
||||||
]
|
]
|
||||||
|
|
||||||
@@ -524,7 +500,7 @@ class SpecManager:
|
|||||||
code: str,
|
code: str,
|
||||||
outputs: List[str],
|
outputs: List[str],
|
||||||
description: Optional[str] = None,
|
description: Optional[str] = None,
|
||||||
modified_by: str = "claude"
|
modified_by: str = "claude",
|
||||||
) -> str:
|
) -> str:
|
||||||
"""
|
"""
|
||||||
Add a custom extractor function.
|
Add a custom extractor function.
|
||||||
@@ -546,9 +522,7 @@ class SpecManager:
|
|||||||
try:
|
try:
|
||||||
compile(code, f"<custom:{name}>", "exec")
|
compile(code, f"<custom:{name}>", "exec")
|
||||||
except SyntaxError as e:
|
except SyntaxError as e:
|
||||||
raise SpecValidationError(
|
raise SpecValidationError(f"Invalid Python syntax: {e.msg} at line {e.lineno}")
|
||||||
f"Invalid Python syntax: {e.msg} at line {e.lineno}"
|
|
||||||
)
|
|
||||||
|
|
||||||
data = self.load_raw()
|
data = self.load_raw()
|
||||||
|
|
||||||
@@ -561,13 +535,9 @@ class SpecManager:
|
|||||||
"name": description or f"Custom: {name}",
|
"name": description or f"Custom: {name}",
|
||||||
"type": "custom_function",
|
"type": "custom_function",
|
||||||
"builtin": False,
|
"builtin": False,
|
||||||
"function": {
|
"function": {"name": name, "module": "custom_extractors.dynamic", "source_code": code},
|
||||||
"name": name,
|
|
||||||
"module": "custom_extractors.dynamic",
|
|
||||||
"source_code": code
|
|
||||||
},
|
|
||||||
"outputs": [{"name": o, "metric": "custom"} for o in outputs],
|
"outputs": [{"name": o, "metric": "custom"} for o in outputs],
|
||||||
"canvas_position": self._auto_position("extractor", data)
|
"canvas_position": self._auto_position("extractor", data),
|
||||||
}
|
}
|
||||||
|
|
||||||
data["extractors"].append(extractor)
|
data["extractors"].append(extractor)
|
||||||
@@ -580,7 +550,7 @@ class SpecManager:
|
|||||||
extractor_id: str,
|
extractor_id: str,
|
||||||
code: Optional[str] = None,
|
code: Optional[str] = None,
|
||||||
outputs: Optional[List[str]] = None,
|
outputs: Optional[List[str]] = None,
|
||||||
modified_by: str = "claude"
|
modified_by: str = "claude",
|
||||||
) -> None:
|
) -> None:
|
||||||
"""
|
"""
|
||||||
Update an existing custom function.
|
Update an existing custom function.
|
||||||
@@ -611,9 +581,7 @@ class SpecManager:
|
|||||||
try:
|
try:
|
||||||
compile(code, f"<custom:{extractor_id}>", "exec")
|
compile(code, f"<custom:{extractor_id}>", "exec")
|
||||||
except SyntaxError as e:
|
except SyntaxError as e:
|
||||||
raise SpecValidationError(
|
raise SpecValidationError(f"Invalid Python syntax: {e.msg} at line {e.lineno}")
|
||||||
f"Invalid Python syntax: {e.msg} at line {e.lineno}"
|
|
||||||
)
|
|
||||||
if "function" not in extractor:
|
if "function" not in extractor:
|
||||||
extractor["function"] = {}
|
extractor["function"] = {}
|
||||||
extractor["function"]["source_code"] = code
|
extractor["function"]["source_code"] = code
|
||||||
@@ -672,7 +640,7 @@ class SpecManager:
|
|||||||
"design_variable": "dv",
|
"design_variable": "dv",
|
||||||
"extractor": "ext",
|
"extractor": "ext",
|
||||||
"objective": "obj",
|
"objective": "obj",
|
||||||
"constraint": "con"
|
"constraint": "con",
|
||||||
}
|
}
|
||||||
prefix = prefix_map.get(node_type, node_type[:3])
|
prefix = prefix_map.get(node_type, node_type[:3])
|
||||||
|
|
||||||
@@ -697,7 +665,7 @@ class SpecManager:
|
|||||||
"design_variable": "design_variables",
|
"design_variable": "design_variables",
|
||||||
"extractor": "extractors",
|
"extractor": "extractors",
|
||||||
"objective": "objectives",
|
"objective": "objectives",
|
||||||
"constraint": "constraints"
|
"constraint": "constraints",
|
||||||
}
|
}
|
||||||
return section_map.get(node_type, node_type + "s")
|
return section_map.get(node_type, node_type + "s")
|
||||||
|
|
||||||
@@ -709,7 +677,7 @@ class SpecManager:
|
|||||||
"design_variable": 50,
|
"design_variable": 50,
|
||||||
"extractor": 740,
|
"extractor": 740,
|
||||||
"objective": 1020,
|
"objective": 1020,
|
||||||
"constraint": 1020
|
"constraint": 1020,
|
||||||
}
|
}
|
||||||
|
|
||||||
x = x_positions.get(node_type, 400)
|
x = x_positions.get(node_type, 400)
|
||||||
@@ -729,11 +697,123 @@ class SpecManager:
|
|||||||
|
|
||||||
return {"x": x, "y": y}
|
return {"x": x, "y": y}
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# Intake Workflow Methods
|
||||||
|
# =========================================================================
|
||||||
|
|
||||||
|
def update_status(self, status: str, modified_by: str = "api") -> None:
|
||||||
|
"""
|
||||||
|
Update the spec status field.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
status: New status (draft, introspected, configured, validated, ready, running, completed, failed)
|
||||||
|
modified_by: Who/what is making the change
|
||||||
|
"""
|
||||||
|
data = self.load_raw()
|
||||||
|
data["meta"]["status"] = status
|
||||||
|
self.save(data, modified_by)
|
||||||
|
|
||||||
|
def get_status(self) -> str:
|
||||||
|
"""
|
||||||
|
Get the current spec status.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Current status string
|
||||||
|
"""
|
||||||
|
if not self.exists():
|
||||||
|
return "unknown"
|
||||||
|
data = self.load_raw()
|
||||||
|
return data.get("meta", {}).get("status", "draft")
|
||||||
|
|
||||||
|
def add_introspection(
|
||||||
|
self, introspection_data: Dict[str, Any], modified_by: str = "introspection"
|
||||||
|
) -> None:
|
||||||
|
"""
|
||||||
|
Add introspection data to the spec's model section.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
introspection_data: Dict with timestamp, expressions, mass_kg, etc.
|
||||||
|
modified_by: Who/what is making the change
|
||||||
|
"""
|
||||||
|
data = self.load_raw()
|
||||||
|
|
||||||
|
if "model" not in data:
|
||||||
|
data["model"] = {}
|
||||||
|
|
||||||
|
data["model"]["introspection"] = introspection_data
|
||||||
|
data["meta"]["status"] = "introspected"
|
||||||
|
|
||||||
|
self.save(data, modified_by)
|
||||||
|
|
||||||
|
def add_baseline(
|
||||||
|
self, baseline_data: Dict[str, Any], modified_by: str = "baseline_solve"
|
||||||
|
) -> None:
|
||||||
|
"""
|
||||||
|
Add baseline solve results to introspection data.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
baseline_data: Dict with timestamp, solve_time_seconds, mass_kg, etc.
|
||||||
|
modified_by: Who/what is making the change
|
||||||
|
"""
|
||||||
|
data = self.load_raw()
|
||||||
|
|
||||||
|
if "model" not in data:
|
||||||
|
data["model"] = {}
|
||||||
|
if "introspection" not in data["model"] or data["model"]["introspection"] is None:
|
||||||
|
data["model"]["introspection"] = {}
|
||||||
|
|
||||||
|
data["model"]["introspection"]["baseline"] = baseline_data
|
||||||
|
|
||||||
|
# Update status based on baseline success
|
||||||
|
if baseline_data.get("success", False):
|
||||||
|
data["meta"]["status"] = "validated"
|
||||||
|
|
||||||
|
self.save(data, modified_by)
|
||||||
|
|
||||||
|
def set_topic(self, topic: str, modified_by: str = "api") -> None:
|
||||||
|
"""
|
||||||
|
Set the spec's topic field.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
topic: Topic folder name
|
||||||
|
modified_by: Who/what is making the change
|
||||||
|
"""
|
||||||
|
data = self.load_raw()
|
||||||
|
data["meta"]["topic"] = topic
|
||||||
|
self.save(data, modified_by)
|
||||||
|
|
||||||
|
def get_introspection(self) -> Optional[Dict[str, Any]]:
|
||||||
|
"""
|
||||||
|
Get introspection data from spec.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Introspection dict or None if not present
|
||||||
|
"""
|
||||||
|
if not self.exists():
|
||||||
|
return None
|
||||||
|
data = self.load_raw()
|
||||||
|
return data.get("model", {}).get("introspection")
|
||||||
|
|
||||||
|
def get_design_candidates(self) -> List[Dict[str, Any]]:
|
||||||
|
"""
|
||||||
|
Get expressions marked as design variable candidates.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
List of expression dicts where is_candidate=True
|
||||||
|
"""
|
||||||
|
introspection = self.get_introspection()
|
||||||
|
if not introspection:
|
||||||
|
return []
|
||||||
|
|
||||||
|
expressions = introspection.get("expressions", [])
|
||||||
|
return [e for e in expressions if e.get("is_candidate", False)]
|
||||||
|
|
||||||
|
|
||||||
# =========================================================================
|
# =========================================================================
|
||||||
# Factory Function
|
# Factory Function
|
||||||
# =========================================================================
|
# =========================================================================
|
||||||
|
|
||||||
|
|
||||||
def get_spec_manager(study_path: Union[str, Path]) -> SpecManager:
|
def get_spec_manager(study_path: Union[str, Path]) -> SpecManager:
|
||||||
"""
|
"""
|
||||||
Get a SpecManager instance for a study.
|
Get a SpecManager instance for a study.
|
||||||
|
|||||||
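The intake workflow methods added above drive a simple status lifecycle: `draft` → `introspected` (after `add_introspection`) → `validated` (after a successful `add_baseline`). A minimal runnable sketch of that transition logic, using an in-memory dict in place of the real spec file (`SpecFile` is a hypothetical stand-in, not part of the codebase — only the transitions mirror the diff):

```python
from typing import Any, Dict, List


class SpecFile:
    """Hypothetical in-memory stand-in for SpecManager's load_raw()/save()."""

    def __init__(self) -> None:
        self.data: Dict[str, Any] = {"meta": {"status": "draft"}}

    def add_introspection(self, introspection: Dict[str, Any]) -> None:
        # Store introspection under model and advance the status
        self.data.setdefault("model", {})["introspection"] = introspection
        self.data["meta"]["status"] = "introspected"

    def add_baseline(self, baseline: Dict[str, Any]) -> None:
        model = self.data.setdefault("model", {})
        if not model.get("introspection"):
            model["introspection"] = {}
        model["introspection"]["baseline"] = baseline
        # Status only advances when the baseline solve succeeded
        if baseline.get("success", False):
            self.data["meta"]["status"] = "validated"

    def get_design_candidates(self) -> List[Dict[str, Any]]:
        # Filter introspected expressions flagged as design-variable candidates
        introspection = self.data.get("model", {}).get("introspection") or {}
        expressions = introspection.get("expressions", [])
        return [e for e in expressions if e.get("is_candidate", False)]


spec = SpecFile()
spec.add_introspection({"expressions": [
    {"name": "arm_thk", "is_candidate": True},
    {"name": "density", "is_candidate": False},
]})
spec.add_baseline({"success": True, "mass_kg": 1.2})
```

After the successful baseline, `spec.data["meta"]["status"]` is `"validated"` and `get_design_candidates()` returns only the flagged expressions.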
@@ -9,6 +9,7 @@ import Analysis from './pages/Analysis';
 import Insights from './pages/Insights';
 import Results from './pages/Results';
 import CanvasView from './pages/CanvasView';
+import Studio from './pages/Studio';

 const queryClient = new QueryClient({
   defaultOptions: {
@@ -32,6 +33,10 @@ function App() {
           <Route path="canvas" element={<CanvasView />} />
           <Route path="canvas/*" element={<CanvasView />} />

+          {/* Studio - unified study creation environment */}
+          <Route path="studio" element={<Studio />} />
+          <Route path="studio/:draftId" element={<Studio />} />
+
           {/* Study pages - with sidebar layout */}
           <Route element={<MainLayout />}>
             <Route path="setup" element={<Setup />} />
atomizer-dashboard/frontend/src/api/intake.ts (new file, 411 lines)
@@ -0,0 +1,411 @@
+/**
+ * Intake API Client
+ *
+ * API client methods for the study intake workflow.
+ */
+
+import {
+  CreateInboxRequest,
+  CreateInboxResponse,
+  IntrospectRequest,
+  IntrospectResponse,
+  ListInboxResponse,
+  ListTopicsResponse,
+  InboxStudyDetail,
+  GenerateReadmeResponse,
+  FinalizeRequest,
+  FinalizeResponse,
+  UploadFilesResponse,
+} from '../types/intake';
+
+const API_BASE = '/api';
+
+/**
+ * Intake API client for study creation workflow.
+ */
+export const intakeApi = {
+  /**
+   * Create a new inbox study folder with initial spec.
+   */
+  async createInbox(request: CreateInboxRequest): Promise<CreateInboxResponse> {
+    const response = await fetch(`${API_BASE}/intake/create`, {
+      method: 'POST',
+      headers: { 'Content-Type': 'application/json' },
+      body: JSON.stringify(request),
+    });
+
+    if (!response.ok) {
+      const error = await response.json();
+      throw new Error(error.detail || 'Failed to create inbox study');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * Run NX introspection on an inbox study.
+   */
+  async introspect(request: IntrospectRequest): Promise<IntrospectResponse> {
+    const response = await fetch(`${API_BASE}/intake/introspect`, {
+      method: 'POST',
+      headers: { 'Content-Type': 'application/json' },
+      body: JSON.stringify(request),
+    });
+
+    if (!response.ok) {
+      const error = await response.json();
+      throw new Error(error.detail || 'Introspection failed');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * List all studies in the inbox.
+   */
+  async listInbox(): Promise<ListInboxResponse> {
+    const response = await fetch(`${API_BASE}/intake/list`);
+
+    if (!response.ok) {
+      throw new Error('Failed to fetch inbox studies');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * List existing topic folders.
+   */
+  async listTopics(): Promise<ListTopicsResponse> {
+    const response = await fetch(`${API_BASE}/intake/topics`);
+
+    if (!response.ok) {
+      throw new Error('Failed to fetch topics');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * Get detailed information about an inbox study.
+   */
+  async getInboxStudy(studyName: string): Promise<InboxStudyDetail> {
+    const response = await fetch(`${API_BASE}/intake/${encodeURIComponent(studyName)}`);
+
+    if (!response.ok) {
+      const error = await response.json();
+      throw new Error(error.detail || 'Failed to fetch inbox study');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * Delete an inbox study.
+   */
+  async deleteInboxStudy(studyName: string): Promise<{ success: boolean; deleted: string }> {
+    const response = await fetch(`${API_BASE}/intake/${encodeURIComponent(studyName)}`, {
+      method: 'DELETE',
+    });
+
+    if (!response.ok) {
+      const error = await response.json();
+      throw new Error(error.detail || 'Failed to delete inbox study');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * Generate README for an inbox study using Claude AI.
+   */
+  async generateReadme(studyName: string): Promise<GenerateReadmeResponse> {
+    const response = await fetch(
+      `${API_BASE}/intake/${encodeURIComponent(studyName)}/readme`,
+      { method: 'POST' }
+    );
+
+    if (!response.ok) {
+      const error = await response.json();
+      throw new Error(error.detail || 'README generation failed');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * Finalize an inbox study and move to studies directory.
+   */
+  async finalize(studyName: string, request: FinalizeRequest): Promise<FinalizeResponse> {
+    const response = await fetch(
+      `${API_BASE}/intake/${encodeURIComponent(studyName)}/finalize`,
+      {
+        method: 'POST',
+        headers: { 'Content-Type': 'application/json' },
+        body: JSON.stringify(request),
+      }
+    );
+
+    if (!response.ok) {
+      const error = await response.json();
+      throw new Error(error.detail || 'Finalization failed');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * Upload model files to an inbox study.
+   */
+  async uploadFiles(studyName: string, files: File[]): Promise<UploadFilesResponse> {
+    const formData = new FormData();
+    files.forEach((file) => {
+      formData.append('files', file);
+    });
+
+    const response = await fetch(
+      `${API_BASE}/intake/${encodeURIComponent(studyName)}/upload`,
+      {
+        method: 'POST',
+        body: formData,
+      }
+    );
+
+    if (!response.ok) {
+      const error = await response.json();
+      throw new Error(error.detail || 'File upload failed');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * Upload context files to an inbox study.
+   * Context files help Claude understand optimization goals.
+   */
+  async uploadContextFiles(studyName: string, files: File[]): Promise<UploadFilesResponse> {
+    const formData = new FormData();
+    files.forEach((file) => {
+      formData.append('files', file);
+    });
+
+    const response = await fetch(
+      `${API_BASE}/intake/${encodeURIComponent(studyName)}/context`,
+      {
+        method: 'POST',
+        body: formData,
+      }
+    );
+
+    if (!response.ok) {
+      const error = await response.json();
+      throw new Error(error.detail || 'Context file upload failed');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * List context files for an inbox study.
+   */
+  async listContextFiles(studyName: string): Promise<{
+    study_name: string;
+    context_files: Array<{ name: string; path: string; size: number; extension: string }>;
+    total: number;
+  }> {
+    const response = await fetch(
+      `${API_BASE}/intake/${encodeURIComponent(studyName)}/context`
+    );
+
+    if (!response.ok) {
+      throw new Error('Failed to list context files');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * Delete a context file from an inbox study.
+   */
+  async deleteContextFile(studyName: string, filename: string): Promise<{ success: boolean; deleted: string }> {
+    const response = await fetch(
+      `${API_BASE}/intake/${encodeURIComponent(studyName)}/context/${encodeURIComponent(filename)}`,
+      { method: 'DELETE' }
+    );
+
+    if (!response.ok) {
+      const error = await response.json();
+      throw new Error(error.detail || 'Failed to delete context file');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * Create design variables from selected expressions.
+   */
+  async createDesignVariables(
+    studyName: string,
+    expressionNames: string[],
+    options?: { autoBounds?: boolean; boundFactor?: number }
+  ): Promise<{
+    success: boolean;
+    study_name: string;
+    created: Array<{
+      id: string;
+      name: string;
+      expression_name: string;
+      bounds_min: number;
+      bounds_max: number;
+      baseline: number;
+      units: string | null;
+    }>;
+    total_created: number;
+  }> {
+    const response = await fetch(
+      `${API_BASE}/intake/${encodeURIComponent(studyName)}/design-variables`,
+      {
+        method: 'POST',
+        headers: { 'Content-Type': 'application/json' },
+        body: JSON.stringify({
+          expression_names: expressionNames,
+          auto_bounds: options?.autoBounds ?? true,
+          bound_factor: options?.boundFactor ?? 0.5,
+        }),
+      }
+    );
+
+    if (!response.ok) {
+      const error = await response.json();
+      throw new Error(error.detail || 'Failed to create design variables');
+    }
+
+    return response.json();
+  },
+
+  // ===========================================================================
+  // Studio Endpoints (Atomizer Studio - Unified Creation Environment)
+  // ===========================================================================
+
+  /**
+   * Create an anonymous draft study for Studio workflow.
+   * Returns a temporary draft_id that can be renamed during finalization.
+   */
+  async createDraft(): Promise<{
+    success: boolean;
+    draft_id: string;
+    inbox_path: string;
+    spec_path: string;
+    status: string;
+  }> {
+    const response = await fetch(`${API_BASE}/intake/draft`, {
+      method: 'POST',
+    });
+
+    if (!response.ok) {
+      const error = await response.json();
+      throw new Error(error.detail || 'Failed to create draft');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * Get extracted text content from context files.
+   * Used for AI context injection.
+   */
+  async getContextContent(studyName: string): Promise<{
+    success: boolean;
+    study_name: string;
+    content: string;
+    files_read: Array<{
+      name: string;
+      extension: string;
+      size: number;
+      status: string;
+      characters?: number;
+      error?: string;
+    }>;
+    total_characters: number;
+  }> {
+    const response = await fetch(
+      `${API_BASE}/intake/${encodeURIComponent(studyName)}/context/content`
+    );
+
+    if (!response.ok) {
+      const error = await response.json();
+      throw new Error(error.detail || 'Failed to get context content');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * Finalize a Studio draft with rename support.
+   * Enhanced version that supports renaming draft_xxx to proper names.
+   */
+  async finalizeStudio(
+    studyName: string,
+    request: {
+      topic: string;
+      newName?: string;
+      runBaseline?: boolean;
+    }
+  ): Promise<{
+    success: boolean;
+    original_name: string;
+    final_name: string;
+    final_path: string;
+    status: string;
+    baseline_success: boolean | null;
+    readme_generated: boolean;
+  }> {
+    const response = await fetch(
+      `${API_BASE}/intake/${encodeURIComponent(studyName)}/finalize/studio`,
+      {
+        method: 'POST',
+        headers: { 'Content-Type': 'application/json' },
+        body: JSON.stringify({
+          topic: request.topic,
+          new_name: request.newName,
+          run_baseline: request.runBaseline ?? false,
+        }),
+      }
+    );
+
+    if (!response.ok) {
+      const error = await response.json();
+      throw new Error(error.detail || 'Studio finalization failed');
+    }
+
+    return response.json();
+  },
+
+  /**
+   * Get complete draft information for Studio UI.
+   * Convenience endpoint that returns everything the Studio needs.
+   */
+  async getStudioDraft(studyName: string): Promise<{
+    success: boolean;
+    draft_id: string;
+    spec: Record<string, unknown>;
+    model_files: string[];
+    context_files: string[];
+    introspection_available: boolean;
+    design_variable_count: number;
+    objective_count: number;
+  }> {
+    const response = await fetch(
+      `${API_BASE}/intake/${encodeURIComponent(studyName)}/studio`
+    );
+
+    if (!response.ok) {
+      const error = await response.json();
+      throw new Error(error.detail || 'Failed to get studio draft');
+    }
+
+    return response.json();
+  },
+};
+
+export default intakeApi;
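The `createDesignVariables` call above forwards `auto_bounds` and `bound_factor` (defaulting to `true` and `0.5`) to the backend, which returns `bounds_min`/`bounds_max` per variable. The server-side rule is not shown in this diff; a plausible reading, sketched here in Python purely as an assumption, is symmetric bounds of baseline ± bound_factor · |baseline|:

```python
# Hypothetical sketch of auto-bounds derivation for one design variable.
# The actual backend logic is NOT in this diff; this only illustrates one
# reasonable interpretation of bound_factor = 0.5 around the baseline value.
def auto_bounds(baseline: float, bound_factor: float = 0.5) -> tuple[float, float]:
    """Return (bounds_min, bounds_max) as baseline +/- bound_factor * |baseline|."""
    delta = abs(baseline) * bound_factor
    return baseline - delta, baseline + delta


# A baseline of 10.0 with the default factor gives bounds (5.0, 15.0);
# using abs() keeps the interval ordered for negative baselines too.
print(auto_bounds(10.0))
print(auto_bounds(-4.0))
```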
@@ -777,6 +777,8 @@ function SpecRendererInner({
         onConnect={onConnect}
         onInit={(instance) => {
           reactFlowInstance.current = instance;
+          // Auto-fit view on init with padding
+          setTimeout(() => instance.fitView({ padding: 0.2, duration: 300 }), 100);
         }}
         onDragOver={onDragOver}
         onDrop={onDrop}
@@ -785,6 +787,7 @@ function SpecRendererInner({
         onPaneClick={onPaneClick}
         nodeTypes={nodeTypes}
         fitView
+        fitViewOptions={{ padding: 0.2, includeHiddenNodes: false }}
         deleteKeyCode={null} // We handle delete ourselves
         nodesDraggable={editable}
         nodesConnectable={editable}
@@ -0,0 +1,342 @@
+/**
+ * DevLoopPanel - Control panel for closed-loop development
+ *
+ * Features:
+ * - Start/stop development cycles
+ * - Real-time phase monitoring
+ * - Iteration history view
+ * - Test result visualization
+ */
+
+import { useState, useEffect, useCallback } from 'react';
+import {
+  PlayCircle,
+  StopCircle,
+  RefreshCw,
+  CheckCircle,
+  XCircle,
+  AlertCircle,
+  Clock,
+  ListChecks,
+  Zap,
+  ChevronDown,
+  ChevronRight,
+} from 'lucide-react';
+import useWebSocket from 'react-use-websocket';
+
+interface LoopState {
+  phase: string;
+  iteration: number;
+  current_task: string | null;
+  last_update: string;
+}
+
+interface CycleResult {
+  objective: string;
+  status: string;
+  iterations: number;
+  duration_seconds: number;
+}
+
+interface TestResult {
+  scenario_id: string;
+  scenario_name: string;
+  passed: boolean;
+  duration_ms: number;
+  error?: string;
+}
+
+const PHASE_COLORS: Record<string, string> = {
+  idle: 'bg-gray-500',
+  planning: 'bg-blue-500',
+  implementing: 'bg-purple-500',
+  testing: 'bg-yellow-500',
+  analyzing: 'bg-orange-500',
+  fixing: 'bg-red-500',
+  verifying: 'bg-green-500',
+};
+
+const PHASE_ICONS: Record<string, React.ReactNode> = {
+  idle: <Clock className="w-4 h-4" />,
+  planning: <ListChecks className="w-4 h-4" />,
+  implementing: <Zap className="w-4 h-4" />,
+  testing: <RefreshCw className="w-4 h-4 animate-spin" />,
+  analyzing: <AlertCircle className="w-4 h-4" />,
+  fixing: <Zap className="w-4 h-4" />,
+  verifying: <CheckCircle className="w-4 h-4" />,
+};
+
+export function DevLoopPanel() {
+  const [state, setState] = useState<LoopState>({
+    phase: 'idle',
+    iteration: 0,
+    current_task: null,
+    last_update: new Date().toISOString(),
+  });
+  const [objective, setObjective] = useState('');
+  const [history, setHistory] = useState<CycleResult[]>([]);
+  const [testResults, setTestResults] = useState<TestResult[]>([]);
+  const [expanded, setExpanded] = useState(true);
+  const [isStarting, setIsStarting] = useState(false);
+
+  // WebSocket connection for real-time updates
+  const { lastJsonMessage, readyState } = useWebSocket(
+    'ws://localhost:8000/api/devloop/ws',
+    {
+      shouldReconnect: () => true,
+      reconnectInterval: 3000,
+    }
+  );
+
+  // Handle WebSocket messages
+  useEffect(() => {
+    if (!lastJsonMessage) return;
+
+    const msg = lastJsonMessage as any;
+
+    switch (msg.type) {
+      case 'connection_ack':
+      case 'state_update':
+      case 'state':
+        if (msg.state) {
+          setState(msg.state);
+        }
+        break;
+      case 'cycle_complete':
+        setHistory(prev => [msg.result, ...prev].slice(0, 10));
+        setIsStarting(false);
+        break;
+      case 'cycle_error':
+        console.error('DevLoop error:', msg.error);
+        setIsStarting(false);
+        break;
+      case 'test_progress':
+        if (msg.result) {
+          setTestResults(prev => [...prev, msg.result]);
+        }
+        break;
+    }
+  }, [lastJsonMessage]);
+
+  // Start a development cycle
+  const startCycle = useCallback(async () => {
+    if (!objective.trim()) return;
+
+    setIsStarting(true);
+    setTestResults([]);
+
+    try {
+      const response = await fetch('http://localhost:8000/api/devloop/start', {
+        method: 'POST',
+        headers: { 'Content-Type': 'application/json' },
+        body: JSON.stringify({
+          objective: objective.trim(),
+          max_iterations: 10,
+        }),
+      });
+
+      if (!response.ok) {
+        const error = await response.json();
+        console.error('Failed to start cycle:', error);
+        setIsStarting(false);
+      }
+    } catch (error) {
+      console.error('Failed to start cycle:', error);
+      setIsStarting(false);
+    }
+  }, [objective]);
+
+  // Stop the current cycle
+  const stopCycle = useCallback(async () => {
+    try {
+      await fetch('http://localhost:8000/api/devloop/stop', {
+        method: 'POST',
+      });
+    } catch (error) {
+      console.error('Failed to stop cycle:', error);
+    }
+  }, []);
+
+  // Quick start: Create support_arm study
+  const quickStartSupportArm = useCallback(() => {
+    setObjective('Create support_arm optimization study with 5 design variables (center_space, arm_thk, arm_angle, end_thk, base_thk), objectives (minimize displacement, minimize mass), and stress constraint (< 30% yield)');
+    // Auto-start after a brief delay
+    setTimeout(() => {
+      startCycle();
+    }, 500);
+  }, [startCycle]);
+
+  const isActive = state.phase !== 'idle';
+  const wsConnected = readyState === WebSocket.OPEN;
+
+  return (
+    <div className="bg-gray-900 rounded-lg border border-gray-700 overflow-hidden">
+      {/* Header */}
+      <div
+        className="flex items-center justify-between px-4 py-3 bg-gray-800 cursor-pointer"
+        onClick={() => setExpanded(!expanded)}
+      >
+        <div className="flex items-center gap-2">
+          {expanded ? (
+            <ChevronDown className="w-4 h-4 text-gray-400" />
+          ) : (
+            <ChevronRight className="w-4 h-4 text-gray-400" />
+          )}
+          <RefreshCw className="w-5 h-5 text-blue-400" />
+          <h3 className="font-semibold text-white">DevLoop Control</h3>
+        </div>
+
+        {/* Status indicator */}
+        <div className="flex items-center gap-2">
+          <div
+            className={`w-2 h-2 rounded-full ${
+              wsConnected ? 'bg-green-500' : 'bg-red-500'
+            }`}
+          />
|
||||||
|
<span className={`px-2 py-1 text-xs rounded ${PHASE_COLORS[state.phase]} text-white`}>
|
||||||
|
{state.phase.toUpperCase()}
|
||||||
|
</span>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{expanded && (
|
||||||
|
<div className="p-4 space-y-4">
|
||||||
|
{/* Objective Input */}
|
||||||
|
<div>
|
||||||
|
<label className="block text-sm text-gray-400 mb-1">
|
||||||
|
Development Objective
|
||||||
|
</label>
|
||||||
|
<textarea
|
||||||
|
value={objective}
|
||||||
|
onChange={(e) => setObjective(e.target.value)}
|
||||||
|
placeholder="e.g., Create support_arm optimization study..."
|
||||||
|
className="w-full px-3 py-2 bg-gray-800 border border-gray-600 rounded text-white text-sm resize-none h-20"
|
||||||
|
disabled={isActive}
|
||||||
|
/>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Quick Actions */}
|
||||||
|
<div className="flex gap-2">
|
||||||
|
<button
|
||||||
|
onClick={quickStartSupportArm}
|
||||||
|
disabled={isActive}
|
||||||
|
className="px-3 py-1.5 bg-purple-600 hover:bg-purple-700 disabled:bg-gray-600 text-white text-sm rounded flex items-center gap-1"
|
||||||
|
>
|
||||||
|
<Zap className="w-4 h-4" />
|
||||||
|
Quick: support_arm
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Control Buttons */}
|
||||||
|
<div className="flex gap-2">
|
||||||
|
{!isActive ? (
|
||||||
|
<button
|
||||||
|
onClick={startCycle}
|
||||||
|
disabled={!objective.trim() || isStarting}
|
||||||
|
className="flex-1 px-4 py-2 bg-green-600 hover:bg-green-700 disabled:bg-gray-600 text-white rounded flex items-center justify-center gap-2"
|
||||||
|
>
|
||||||
|
<PlayCircle className="w-5 h-5" />
|
||||||
|
{isStarting ? 'Starting...' : 'Start Cycle'}
|
||||||
|
</button>
|
||||||
|
) : (
|
||||||
|
<button
|
||||||
|
onClick={stopCycle}
|
||||||
|
className="flex-1 px-4 py-2 bg-red-600 hover:bg-red-700 text-white rounded flex items-center justify-center gap-2"
|
||||||
|
>
|
||||||
|
<StopCircle className="w-5 h-5" />
|
||||||
|
Stop Cycle
|
||||||
|
</button>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Current Phase Progress */}
|
||||||
|
{isActive && (
|
||||||
|
<div className="bg-gray-800 rounded p-3 space-y-2">
|
||||||
|
<div className="flex items-center gap-2">
|
||||||
|
{PHASE_ICONS[state.phase]}
|
||||||
|
<span className="text-sm text-white font-medium">
|
||||||
|
{state.phase.charAt(0).toUpperCase() + state.phase.slice(1)}
|
||||||
|
</span>
|
||||||
|
<span className="text-xs text-gray-400">
|
||||||
|
Iteration {state.iteration + 1}
|
||||||
|
</span>
|
||||||
|
</div>
|
||||||
|
{state.current_task && (
|
||||||
|
<p className="text-xs text-gray-400 truncate">
|
||||||
|
{state.current_task}
|
||||||
|
</p>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
|
||||||
|
{/* Test Results */}
|
||||||
|
{testResults.length > 0 && (
|
||||||
|
<div className="bg-gray-800 rounded p-3">
|
||||||
|
<h4 className="text-sm font-medium text-white mb-2">Test Results</h4>
|
||||||
|
<div className="space-y-1 max-h-32 overflow-y-auto">
|
||||||
|
{testResults.map((test, i) => (
|
||||||
|
<div
|
||||||
|
key={`${test.scenario_id}-${i}`}
|
||||||
|
className="flex items-center gap-2 text-xs"
|
||||||
|
>
|
||||||
|
{test.passed ? (
|
||||||
|
<CheckCircle className="w-3 h-3 text-green-500" />
|
||||||
|
) : (
|
||||||
|
<XCircle className="w-3 h-3 text-red-500" />
|
||||||
|
)}
|
||||||
|
<span className="text-gray-300 truncate flex-1">
|
||||||
|
{test.scenario_name}
|
||||||
|
</span>
|
||||||
|
<span className="text-gray-500">
|
||||||
|
{test.duration_ms.toFixed(0)}ms
|
||||||
|
</span>
|
||||||
|
</div>
|
||||||
|
))}
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
|
||||||
|
{/* History */}
|
||||||
|
{history.length > 0 && (
|
||||||
|
<div className="bg-gray-800 rounded p-3">
|
||||||
|
<h4 className="text-sm font-medium text-white mb-2">Recent Cycles</h4>
|
||||||
|
<div className="space-y-2">
|
||||||
|
{history.slice(0, 3).map((cycle, i) => (
|
||||||
|
<div
|
||||||
|
key={i}
|
||||||
|
className="flex items-center justify-between text-xs"
|
||||||
|
>
|
||||||
|
<span className="text-gray-300 truncate flex-1">
|
||||||
|
{cycle.objective.substring(0, 40)}...
|
||||||
|
</span>
|
||||||
|
<span
|
||||||
|
className={`px-1.5 py-0.5 rounded ${
|
||||||
|
cycle.status === 'completed'
|
||||||
|
? 'bg-green-900 text-green-300'
|
||||||
|
: 'bg-yellow-900 text-yellow-300'
|
||||||
|
}`}
|
||||||
|
>
|
||||||
|
{cycle.status}
|
||||||
|
</span>
|
||||||
|
</div>
|
||||||
|
))}
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
|
||||||
|
{/* Phase Legend */}
|
||||||
|
<div className="grid grid-cols-4 gap-2 text-xs">
|
||||||
|
{Object.entries(PHASE_COLORS).map(([phase, color]) => (
|
||||||
|
<div key={phase} className="flex items-center gap-1">
|
||||||
|
<div className={`w-2 h-2 rounded ${color}`} />
|
||||||
|
<span className="text-gray-400 capitalize">{phase}</span>
|
||||||
|
</div>
|
||||||
|
))}
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
export default DevLoopPanel;
|
||||||
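The message handling in the `useEffect` above is easier to unit-test when factored as a pure function over a state slice. A minimal sketch, assuming simplified message and state shapes; `applyDevLoopMessage`, `DevLoopMessage`, and `PanelSlice` are illustrative names, not part of the component:

```typescript
// Hypothetical pure helper mirroring the switch in the component's useEffect.
// Message/state shapes are assumptions for illustration only.
type DevLoopMessage =
  | { type: 'connection_ack' | 'state_update' | 'state'; state?: { phase: string } }
  | { type: 'cycle_complete'; result: unknown }
  | { type: 'cycle_error'; error: string }
  | { type: 'test_progress'; result?: unknown };

interface PanelSlice {
  phase: string;
  history: unknown[];
  testResults: unknown[];
  isStarting: boolean;
}

function applyDevLoopMessage(s: PanelSlice, msg: DevLoopMessage): PanelSlice {
  switch (msg.type) {
    case 'connection_ack':
    case 'state_update':
    case 'state':
      // State messages replace the phase when a payload is present
      return msg.state ? { ...s, phase: msg.state.phase } : s;
    case 'cycle_complete':
      // Keep only the 10 most recent cycle results, newest first
      return { ...s, history: [msg.result, ...s.history].slice(0, 10), isStarting: false };
    case 'cycle_error':
      return { ...s, isStarting: false };
    case 'test_progress':
      return msg.result ? { ...s, testResults: [...s.testResults, msg.result] } : s;
    default:
      return s;
  }
}
```

Factoring this way lets the dispatch logic be tested without mounting the component or opening a socket.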
@@ -0,0 +1,292 @@
/**
 * ContextFileUpload - Upload context files for study configuration
 *
 * Allows uploading markdown, text, PDF, and image files that help
 * Claude understand optimization goals and generate better documentation.
 */

import React, { useState, useEffect, useRef, useCallback } from 'react';
import { Upload, FileText, X, Loader2, AlertCircle, CheckCircle, Trash2, BookOpen } from 'lucide-react';
import { intakeApi } from '../../api/intake';

interface ContextFileUploadProps {
  studyName: string;
  onUploadComplete: () => void;
}

interface ContextFile {
  name: string;
  path: string;
  size: number;
  extension: string;
}

interface FileStatus {
  file: File;
  status: 'pending' | 'uploading' | 'success' | 'error';
  message?: string;
}

const VALID_EXTENSIONS = ['.md', '.txt', '.pdf', '.png', '.jpg', '.jpeg', '.json', '.csv'];

export const ContextFileUpload: React.FC<ContextFileUploadProps> = ({
  studyName,
  onUploadComplete,
}) => {
  const [contextFiles, setContextFiles] = useState<ContextFile[]>([]);
  const [pendingFiles, setPendingFiles] = useState<FileStatus[]>([]);
  const [isUploading, setIsUploading] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const fileInputRef = useRef<HTMLInputElement>(null);

  // Load existing context files
  const loadContextFiles = useCallback(async () => {
    try {
      const response = await intakeApi.listContextFiles(studyName);
      setContextFiles(response.context_files);
    } catch (err) {
      console.error('Failed to load context files:', err);
    }
  }, [studyName]);

  useEffect(() => {
    loadContextFiles();
  }, [loadContextFiles]);

  const validateFile = (file: File): { valid: boolean; reason?: string } => {
    const ext = '.' + file.name.split('.').pop()?.toLowerCase();
    if (!VALID_EXTENSIONS.includes(ext)) {
      return { valid: false, reason: `Invalid type: ${ext}` };
    }
    // Max 10MB per file
    if (file.size > 10 * 1024 * 1024) {
      return { valid: false, reason: 'File too large (max 10MB)' };
    }
    return { valid: true };
  };

  const addFiles = useCallback((newFiles: File[]) => {
    const validFiles: FileStatus[] = [];

    for (const file of newFiles) {
      // Skip duplicates
      if (pendingFiles.some(f => f.file.name === file.name)) {
        continue;
      }
      if (contextFiles.some(f => f.name === file.name)) {
        continue;
      }

      const validation = validateFile(file);
      if (validation.valid) {
        validFiles.push({ file, status: 'pending' });
      } else {
        validFiles.push({ file, status: 'error', message: validation.reason });
      }
    }

    setPendingFiles(prev => [...prev, ...validFiles]);
  }, [pendingFiles, contextFiles]);

  const handleFileSelect = useCallback((e: React.ChangeEvent<HTMLInputElement>) => {
    const selectedFiles = Array.from(e.target.files || []);
    addFiles(selectedFiles);
    e.target.value = '';
  }, [addFiles]);

  const removeFile = (index: number) => {
    setPendingFiles(prev => prev.filter((_, i) => i !== index));
  };

  const handleUpload = async () => {
    const filesToUpload = pendingFiles.filter(f => f.status === 'pending');
    if (filesToUpload.length === 0) return;

    setIsUploading(true);
    setError(null);

    try {
      const response = await intakeApi.uploadContextFiles(
        studyName,
        filesToUpload.map(f => f.file)
      );

      // Update pending file statuses
      const uploadResults = new Map(
        response.uploaded_files.map(f => [f.name, f.status === 'uploaded'])
      );

      setPendingFiles(prev => prev.map(f => {
        if (f.status !== 'pending') return f;
        const success = uploadResults.get(f.file.name);
        return {
          ...f,
          status: success ? 'success' : 'error',
          message: success ? undefined : 'Upload failed',
        };
      }));

      // Refresh and clear after a moment
      setTimeout(() => {
        setPendingFiles(prev => prev.filter(f => f.status !== 'success'));
        loadContextFiles();
        onUploadComplete();
      }, 1500);

    } catch (err) {
      setError(err instanceof Error ? err.message : 'Upload failed');
    } finally {
      setIsUploading(false);
    }
  };

  const handleDeleteFile = async (filename: string) => {
    try {
      await intakeApi.deleteContextFile(studyName, filename);
      loadContextFiles();
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Delete failed');
    }
  };

  const pendingCount = pendingFiles.filter(f => f.status === 'pending').length;

  const formatSize = (bytes: number) => {
    if (bytes < 1024) return `${bytes} B`;
    if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
    return `${(bytes / 1024 / 1024).toFixed(1)} MB`;
  };

  return (
    <div className="space-y-3">
      <div className="flex items-center justify-between">
        <h5 className="text-sm font-medium text-dark-300 flex items-center gap-2">
          <BookOpen className="w-4 h-4 text-purple-400" />
          Context Files
        </h5>
        <button
          onClick={() => fileInputRef.current?.click()}
          className="flex items-center gap-1.5 px-2 py-1 rounded text-xs font-medium
                     bg-purple-500/10 text-purple-400 hover:bg-purple-500/20
                     transition-colors"
        >
          <Upload className="w-3 h-3" />
          Add Context
        </button>
      </div>

      <p className="text-xs text-dark-500">
        Add .md, .txt, or .pdf files describing your optimization goals. Claude will use these to generate documentation.
      </p>

      {/* Error Display */}
      {error && (
        <div className="p-2 rounded-lg bg-red-500/10 border border-red-500/30 text-red-400 text-xs flex items-center gap-2">
          <AlertCircle className="w-3 h-3 flex-shrink-0" />
          {error}
          <button onClick={() => setError(null)} className="ml-auto hover:text-white">
            <X className="w-3 h-3" />
          </button>
        </div>
      )}

      {/* Existing Context Files */}
      {contextFiles.length > 0 && (
        <div className="space-y-1">
          {contextFiles.map((file) => (
            <div
              key={file.name}
              className="flex items-center justify-between p-2 rounded-lg bg-purple-500/5 border border-purple-500/20"
            >
              <div className="flex items-center gap-2">
                <FileText className="w-4 h-4 text-purple-400" />
                <span className="text-sm text-white">{file.name}</span>
                <span className="text-xs text-dark-500">{formatSize(file.size)}</span>
              </div>
              <button
                onClick={() => handleDeleteFile(file.name)}
                className="p-1 hover:bg-white/10 rounded text-dark-400 hover:text-red-400"
                title="Delete file"
              >
                <Trash2 className="w-3 h-3" />
              </button>
            </div>
          ))}
        </div>
      )}

      {/* Pending Files */}
      {pendingFiles.length > 0 && (
        <div className="space-y-1">
          {pendingFiles.map((f, i) => (
            <div
              key={i}
              className={`flex items-center justify-between p-2 rounded-lg
                          ${f.status === 'error' ? 'bg-red-500/10' :
                            f.status === 'success' ? 'bg-green-500/10' :
                            'bg-dark-700'}`}
            >
              <div className="flex items-center gap-2">
                {f.status === 'pending' && <FileText className="w-4 h-4 text-dark-400" />}
                {f.status === 'uploading' && <Loader2 className="w-4 h-4 text-purple-400 animate-spin" />}
                {f.status === 'success' && <CheckCircle className="w-4 h-4 text-green-400" />}
                {f.status === 'error' && <AlertCircle className="w-4 h-4 text-red-400" />}
                <span className={`text-sm ${f.status === 'error' ? 'text-red-400' :
                                            f.status === 'success' ? 'text-green-400' :
                                            'text-white'}`}>
                  {f.file.name}
                </span>
                {f.message && (
                  <span className="text-xs text-red-400">({f.message})</span>
                )}
              </div>
              {f.status === 'pending' && (
                <button
                  onClick={() => removeFile(i)}
                  className="p-1 hover:bg-white/10 rounded text-dark-400 hover:text-white"
                >
                  <X className="w-3 h-3" />
                </button>
              )}
            </div>
          ))}
        </div>
      )}

      {/* Upload Button */}
      {pendingCount > 0 && (
        <button
          onClick={handleUpload}
          disabled={isUploading}
          className="w-full flex items-center justify-center gap-2 px-3 py-2 rounded-lg
                     bg-purple-500 text-white text-sm font-medium
                     hover:bg-purple-400 disabled:opacity-50 disabled:cursor-not-allowed
                     transition-colors"
        >
          {isUploading ? (
            <>
              <Loader2 className="w-4 h-4 animate-spin" />
              Uploading...
            </>
          ) : (
            <>
              <Upload className="w-4 h-4" />
              Upload {pendingCount} {pendingCount === 1 ? 'File' : 'Files'}
            </>
          )}
        </button>
      )}

      <input
        ref={fileInputRef}
        type="file"
        multiple
        accept={VALID_EXTENSIONS.join(',')}
        onChange={handleFileSelect}
        className="hidden"
      />
    </div>
  );
};

export default ContextFileUpload;
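The extension and size checks in `validateFile` above can be exercised outside the browser by writing them against a plain name/size pair instead of a DOM `File`. A sketch under that assumption; `validateContextFile`, `CONTEXT_EXTENSIONS`, and `MAX_CONTEXT_FILE_BYTES` are illustrative names:

```typescript
// Standalone sketch of the component's validateFile logic (names assumed).
const CONTEXT_EXTENSIONS = ['.md', '.txt', '.pdf', '.png', '.jpg', '.jpeg', '.json', '.csv'];
const MAX_CONTEXT_FILE_BYTES = 10 * 1024 * 1024; // 10MB, matching the component's limit

function validateContextFile(name: string, size: number): { valid: boolean; reason?: string } {
  // Same extension derivation as the component: last dot-separated segment, lowercased
  const ext = '.' + (name.split('.').pop() ?? '').toLowerCase();
  if (!CONTEXT_EXTENSIONS.includes(ext)) {
    return { valid: false, reason: `Invalid type: ${ext}` };
  }
  if (size > MAX_CONTEXT_FILE_BYTES) {
    return { valid: false, reason: 'File too large (max 10MB)' };
  }
  return { valid: true };
}
```

Note that a name without a dot (e.g. `README`) yields `.readme` as its "extension" and is rejected, which matches the component's behavior.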
@@ -0,0 +1,227 @@
/**
 * CreateStudyCard - Card for initiating new study creation
 *
 * Displays a prominent card on the Home page that allows users to
 * create a new study through the intake workflow.
 */

import React, { useState } from 'react';
import { Plus, Loader2 } from 'lucide-react';
import { intakeApi } from '../../api/intake';
import { TopicInfo } from '../../types/intake';

interface CreateStudyCardProps {
  topics: TopicInfo[];
  onStudyCreated: (studyName: string) => void;
}

export const CreateStudyCard: React.FC<CreateStudyCardProps> = ({
  topics,
  onStudyCreated,
}) => {
  const [isExpanded, setIsExpanded] = useState(false);
  const [studyName, setStudyName] = useState('');
  const [description, setDescription] = useState('');
  const [selectedTopic, setSelectedTopic] = useState('');
  const [newTopic, setNewTopic] = useState('');
  const [isCreating, setIsCreating] = useState(false);
  const [error, setError] = useState<string | null>(null);

  const handleCreate = async () => {
    if (!studyName.trim()) {
      setError('Study name is required');
      return;
    }

    // Validate study name format
    const nameRegex = /^[a-z0-9_]+$/;
    if (!nameRegex.test(studyName)) {
      setError('Study name must be lowercase with underscores only (e.g., my_study_name)');
      return;
    }

    setIsCreating(true);
    setError(null);

    try {
      const topic = newTopic.trim() || selectedTopic || undefined;
      await intakeApi.createInbox({
        study_name: studyName.trim(),
        description: description.trim() || undefined,
        topic,
      });

      // Reset form
      setStudyName('');
      setDescription('');
      setSelectedTopic('');
      setNewTopic('');
      setIsExpanded(false);

      onStudyCreated(studyName.trim());
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Failed to create study');
    } finally {
      setIsCreating(false);
    }
  };

  if (!isExpanded) {
    return (
      <button
        onClick={() => setIsExpanded(true)}
        className="w-full glass rounded-xl p-6 border border-dashed border-primary-400/30
                   hover:border-primary-400/60 hover:bg-primary-400/5 transition-all
                   flex items-center justify-center gap-3 group"
      >
        <div className="w-12 h-12 rounded-xl bg-primary-400/10 flex items-center justify-center
                        group-hover:bg-primary-400/20 transition-colors">
          <Plus className="w-6 h-6 text-primary-400" />
        </div>
        <div className="text-left">
          <h3 className="text-lg font-semibold text-white">Create New Study</h3>
          <p className="text-sm text-dark-400">Set up a new optimization study</p>
        </div>
      </button>
    );
  }

  return (
    <div className="glass-strong rounded-xl border border-primary-400/20 overflow-hidden">
      {/* Header */}
      <div className="px-6 py-4 border-b border-primary-400/10 flex items-center justify-between">
        <div className="flex items-center gap-3">
          <div className="w-10 h-10 rounded-lg bg-primary-400/10 flex items-center justify-center">
            <Plus className="w-5 h-5 text-primary-400" />
          </div>
          <h3 className="text-lg font-semibold text-white">Create New Study</h3>
        </div>
        <button
          onClick={() => setIsExpanded(false)}
          className="text-dark-400 hover:text-white transition-colors text-sm"
        >
          Cancel
        </button>
      </div>

      {/* Form */}
      <div className="p-6 space-y-4">
        {/* Study Name */}
        <div>
          <label className="block text-sm font-medium text-dark-300 mb-2">
            Study Name <span className="text-red-400">*</span>
          </label>
          <input
            type="text"
            value={studyName}
            onChange={(e) => setStudyName(e.target.value.toLowerCase().replace(/[^a-z0-9_]/g, '_'))}
            placeholder="my_optimization_study"
            className="w-full px-4 py-2.5 rounded-lg bg-dark-800 border border-dark-600
                       text-white placeholder-dark-500 focus:border-primary-400
                       focus:outline-none focus:ring-1 focus:ring-primary-400/50"
          />
          <p className="mt-1 text-xs text-dark-500">
            Lowercase letters, numbers, and underscores only
          </p>
        </div>

        {/* Description */}
        <div>
          <label className="block text-sm font-medium text-dark-300 mb-2">
            Description
          </label>
          <textarea
            value={description}
            onChange={(e) => setDescription(e.target.value)}
            placeholder="Brief description of the optimization goal..."
            rows={2}
            className="w-full px-4 py-2.5 rounded-lg bg-dark-800 border border-dark-600
                       text-white placeholder-dark-500 focus:border-primary-400
                       focus:outline-none focus:ring-1 focus:ring-primary-400/50 resize-none"
          />
        </div>

        {/* Topic Selection */}
        <div>
          <label className="block text-sm font-medium text-dark-300 mb-2">
            Topic Folder
          </label>
          <div className="flex gap-2">
            <select
              value={selectedTopic}
              onChange={(e) => {
                setSelectedTopic(e.target.value);
                setNewTopic('');
              }}
              className="flex-1 px-4 py-2.5 rounded-lg bg-dark-800 border border-dark-600
                         text-white focus:border-primary-400 focus:outline-none
                         focus:ring-1 focus:ring-primary-400/50"
            >
              <option value="">Select existing topic...</option>
              {topics.map((topic) => (
                <option key={topic.name} value={topic.name}>
                  {topic.name} ({topic.study_count} studies)
                </option>
              ))}
            </select>
            <span className="text-dark-500 self-center">or</span>
            <input
              type="text"
              value={newTopic}
              onChange={(e) => {
                setNewTopic(e.target.value.replace(/[^A-Za-z0-9_]/g, '_'));
                setSelectedTopic('');
              }}
              placeholder="New_Topic"
              className="flex-1 px-4 py-2.5 rounded-lg bg-dark-800 border border-dark-600
                         text-white placeholder-dark-500 focus:border-primary-400
                         focus:outline-none focus:ring-1 focus:ring-primary-400/50"
            />
          </div>
        </div>

        {/* Error Message */}
        {error && (
          <div className="p-3 rounded-lg bg-red-500/10 border border-red-500/30 text-red-400 text-sm">
            {error}
          </div>
        )}

        {/* Actions */}
        <div className="flex justify-end gap-3 pt-2">
          <button
            onClick={() => setIsExpanded(false)}
            className="px-4 py-2 rounded-lg border border-dark-600 text-dark-300
                       hover:border-dark-500 hover:text-white transition-colors"
          >
            Cancel
          </button>
          <button
            onClick={handleCreate}
            disabled={isCreating || !studyName.trim()}
            className="px-6 py-2 rounded-lg font-medium transition-all disabled:opacity-50
                       flex items-center gap-2"
            style={{
              background: 'linear-gradient(135deg, #00d4e6 0%, #0891b2 100%)',
              color: '#000',
            }}
          >
            {isCreating ? (
              <>
                <Loader2 className="w-4 h-4 animate-spin" />
                Creating...
              </>
            ) : (
              <>
                <Plus className="w-4 h-4" />
                Create Study
              </>
            )}
          </button>
        </div>
      </div>
    </div>
  );
};

export default CreateStudyCard;
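The study-name handling above happens in two places: the input's `onChange` normalizes as the user types, and `handleCreate` re-checks with a regex before submitting. A sketch of both steps as standalone helpers; the function names are illustrative, not part of the component:

```typescript
// Sketch of the study-name normalization and validation used in CreateStudyCard
// (helper names assumed; the component inlines both expressions).
const STUDY_NAME_RE = /^[a-z0-9_]+$/;

// Mirrors the input's onChange: lowercase, replace anything else with '_'
function normalizeStudyName(raw: string): string {
  return raw.toLowerCase().replace(/[^a-z0-9_]/g, '_');
}

// Mirrors the regex check in handleCreate
function isValidStudyName(name: string): boolean {
  return STUDY_NAME_RE.test(name);
}
```

Normalization guarantees any non-empty input passes the final check, so the regex in `handleCreate` mainly guards programmatic callers.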
@@ -0,0 +1,270 @@
/**
 * ExpressionList - Display discovered expressions with selection capability
 *
 * Shows expressions from NX introspection, allowing users to:
 * - View all discovered expressions
 * - See which are design variable candidates (auto-detected)
 * - Select/deselect expressions to use as design variables
 * - View expression values and units
 */

import React, { useState } from 'react';
import {
  Check,
  Search,
  AlertTriangle,
  Sparkles,
  Info,
  Variable,
} from 'lucide-react';
import { ExpressionInfo } from '../../types/intake';

interface ExpressionListProps {
  /** Expression data from introspection */
  expressions: ExpressionInfo[];
  /** Mass from introspection (kg) */
  massKg?: number | null;
  /** Currently selected expressions (to become DVs) */
  selectedExpressions: string[];
  /** Callback when selection changes */
  onSelectionChange: (selected: string[]) => void;
  /** Whether in read-only mode */
  readOnly?: boolean;
  /** Compact display mode */
  compact?: boolean;
}

export const ExpressionList: React.FC<ExpressionListProps> = ({
  expressions,
  massKg,
  selectedExpressions,
  onSelectionChange,
  readOnly = false,
  compact = false,
}) => {
  const [filter, setFilter] = useState('');
  const [showCandidatesOnly, setShowCandidatesOnly] = useState(true);

  // Filter expressions based on search and candidate toggle
  const filteredExpressions = expressions.filter((expr) => {
    const matchesSearch = filter === '' ||
      expr.name.toLowerCase().includes(filter.toLowerCase());
    const matchesCandidate = !showCandidatesOnly || expr.is_candidate;
    return matchesSearch && matchesCandidate;
  });

  // Sort: candidates first, then by confidence, then alphabetically
  const sortedExpressions = [...filteredExpressions].sort((a, b) => {
    if (a.is_candidate !== b.is_candidate) {
      return a.is_candidate ? -1 : 1;
    }
    if (a.confidence !== b.confidence) {
      return b.confidence - a.confidence;
    }
    return a.name.localeCompare(b.name);
  });

  const toggleExpression = (name: string) => {
    if (readOnly) return;

    if (selectedExpressions.includes(name)) {
      onSelectionChange(selectedExpressions.filter(n => n !== name));
    } else {
      onSelectionChange([...selectedExpressions, name]);
    }
  };

  const selectAllCandidates = () => {
    const candidateNames = expressions
      .filter(e => e.is_candidate)
      .map(e => e.name);
    onSelectionChange(candidateNames);
  };

  const clearSelection = () => {
    onSelectionChange([]);
  };

  const candidateCount = expressions.filter(e => e.is_candidate).length;

  if (expressions.length === 0) {
    return (
      <div className="p-4 rounded-lg bg-dark-700/50 border border-dark-600">
        <div className="flex items-center gap-2 text-dark-400">
          <AlertTriangle className="w-4 h-4" />
          <span>No expressions found. Run introspection to discover model parameters.</span>
        </div>
      </div>
    );
  }

  return (
    <div className="space-y-3">
      {/* Header with stats */}
      <div className="flex items-center justify-between">
        <div className="flex items-center gap-3">
          <h5 className="text-sm font-medium text-dark-300 flex items-center gap-2">
            <Variable className="w-4 h-4" />
            Discovered Expressions
          </h5>
          <span className="text-xs text-dark-500">
            {expressions.length} total, {candidateCount} candidates
          </span>
          {massKg && (
            <span className="text-xs text-primary-400">
              Mass: {massKg.toFixed(3)} kg
            </span>
          )}
        </div>
        {!readOnly && selectedExpressions.length > 0 && (
          <span className="text-xs text-green-400">
            {selectedExpressions.length} selected
          </span>
        )}
      </div>

      {/* Controls */}
      {!compact && (
        <div className="flex items-center gap-3">
          {/* Search */}
          <div className="relative flex-1 max-w-xs">
            <Search className="absolute left-2.5 top-1/2 -translate-y-1/2 w-4 h-4 text-dark-500" />
            <input
              type="text"
              placeholder="Search expressions..."
              value={filter}
              onChange={(e) => setFilter(e.target.value)}
              className="w-full pl-8 pr-3 py-1.5 text-sm rounded-lg bg-dark-700 border border-dark-600
                         text-white placeholder-dark-500 focus:border-primary-500/50 focus:outline-none"
            />
          </div>

          {/* Show candidates only toggle */}
          <label className="flex items-center gap-2 text-xs text-dark-400 cursor-pointer">
            <input
              type="checkbox"
              checked={showCandidatesOnly}
              onChange={(e) => setShowCandidatesOnly(e.target.checked)}
              className="w-4 h-4 rounded border-dark-500 bg-dark-700 text-primary-500
                         focus:ring-primary-500/30"
            />
            Candidates only
          </label>

          {/* Quick actions */}
          {!readOnly && (
            <div className="flex items-center gap-2">
              <button
                onClick={selectAllCandidates}
                className="px-2 py-1 text-xs rounded bg-primary-500/10 text-primary-400
                           hover:bg-primary-500/20 transition-colors"
              >
                Select all candidates
              </button>
              <button
                onClick={clearSelection}
                className="px-2 py-1 text-xs rounded bg-dark-600 text-dark-400
                           hover:bg-dark-500 transition-colors"
              >
                Clear
              </button>
            </div>
          )}
        </div>
      )}

      {/* Expression list */}
      <div className={`rounded-lg border border-dark-600 overflow-hidden ${
        compact ? 'max-h-48' : 'max-h-72'
      } overflow-y-auto`}>
        <table className="w-full text-sm">
          <thead className="bg-dark-700 sticky top-0">
            <tr>
              {!readOnly && (
                <th className="w-8 px-2 py-2"></th>
              )}
              <th className="px-3 py-2 text-left text-dark-400 font-medium">Name</th>
              <th className="px-3 py-2 text-right text-dark-400 font-medium w-24">Value</th>
              <th className="px-3 py-2 text-left text-dark-400 font-medium w-16">Units</th>
              <th className="px-3 py-2 text-center text-dark-400 font-medium w-20">Candidate</th>
            </tr>
          </thead>
|
||||||
|
<tbody className="divide-y divide-dark-700">
|
||||||
|
{sortedExpressions.map((expr) => {
|
||||||
|
const isSelected = selectedExpressions.includes(expr.name);
|
||||||
|
return (
|
||||||
|
<tr
|
||||||
|
key={expr.name}
|
||||||
|
onClick={() => toggleExpression(expr.name)}
|
||||||
|
className={`
|
||||||
|
${readOnly ? '' : 'cursor-pointer hover:bg-dark-700/50'}
|
||||||
|
${isSelected ? 'bg-primary-500/10' : ''}
|
||||||
|
transition-colors
|
||||||
|
`}
|
||||||
|
>
|
||||||
|
{!readOnly && (
|
||||||
|
<td className="px-2 py-2">
|
||||||
|
<div className={`w-5 h-5 rounded border flex items-center justify-center
|
||||||
|
${isSelected
|
||||||
|
? 'bg-primary-500 border-primary-500'
|
||||||
|
: 'border-dark-500 bg-dark-700'
|
||||||
|
}`}
|
||||||
|
>
|
||||||
|
{isSelected && <Check className="w-3 h-3 text-white" />}
|
||||||
|
</div>
|
||||||
|
</td>
|
||||||
|
)}
|
||||||
|
<td className="px-3 py-2">
|
||||||
|
<div className="flex items-center gap-2">
|
||||||
|
<code className={`text-xs ${isSelected ? 'text-primary-300' : 'text-white'}`}>
|
||||||
|
{expr.name}
|
||||||
|
</code>
|
||||||
|
{expr.formula && (
|
||||||
|
<span className="text-xs text-dark-500" title={expr.formula}>
|
||||||
|
<Info className="w-3 h-3" />
|
||||||
|
</span>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
</td>
|
||||||
|
<td className="px-3 py-2 text-right font-mono text-xs text-dark-300">
|
||||||
|
{expr.value !== null ? expr.value.toFixed(3) : '-'}
|
||||||
|
</td>
|
||||||
|
<td className="px-3 py-2 text-xs text-dark-400">
|
||||||
|
{expr.units || '-'}
|
||||||
|
</td>
|
||||||
|
<td className="px-3 py-2 text-center">
|
||||||
|
{expr.is_candidate ? (
|
||||||
|
<span className="inline-flex items-center gap-1 px-1.5 py-0.5 rounded text-xs
|
||||||
|
bg-green-500/10 text-green-400">
|
||||||
|
<Sparkles className="w-3 h-3" />
|
||||||
|
{Math.round(expr.confidence * 100)}%
|
||||||
|
</span>
|
||||||
|
) : (
|
||||||
|
<span className="text-xs text-dark-500">-</span>
|
||||||
|
)}
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
);
|
||||||
|
})}
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
|
||||||
|
{sortedExpressions.length === 0 && (
|
||||||
|
<div className="px-4 py-8 text-center text-dark-500">
|
||||||
|
No expressions match your filter
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Help text */}
|
||||||
|
{!readOnly && !compact && (
|
||||||
|
<p className="text-xs text-dark-500">
|
||||||
|
Select expressions to use as design variables. Candidates (marked with %) are
|
||||||
|
automatically identified based on naming patterns and units.
|
||||||
|
</p>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
};
|
||||||
|
|
||||||
|
export default ExpressionList;
|
||||||
@@ -0,0 +1,348 @@
/**
 * FileDropzone - Drag and drop file upload component
 *
 * Supports drag-and-drop or click-to-browse for model files.
 * Accepts .prt, .sim, .fem, .afem files.
 */

import React, { useState, useCallback, useRef } from 'react';
import { Upload, FileText, X, Loader2, AlertCircle, CheckCircle } from 'lucide-react';
import { intakeApi } from '../../api/intake';

interface FileDropzoneProps {
  studyName: string;
  onUploadComplete: () => void;
  compact?: boolean;
}

interface FileStatus {
  file: File;
  status: 'pending' | 'uploading' | 'success' | 'error';
  message?: string;
}

const VALID_EXTENSIONS = ['.prt', '.sim', '.fem', '.afem'];

export const FileDropzone: React.FC<FileDropzoneProps> = ({
  studyName,
  onUploadComplete,
  compact = false,
}) => {
  const [isDragging, setIsDragging] = useState(false);
  const [files, setFiles] = useState<FileStatus[]>([]);
  const [isUploading, setIsUploading] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const fileInputRef = useRef<HTMLInputElement>(null);

  const validateFile = (file: File): { valid: boolean; reason?: string } => {
    const ext = '.' + file.name.split('.').pop()?.toLowerCase();
    if (!VALID_EXTENSIONS.includes(ext)) {
      return { valid: false, reason: `Invalid type: ${ext}` };
    }
    // Max 500MB per file
    if (file.size > 500 * 1024 * 1024) {
      return { valid: false, reason: 'File too large (max 500MB)' };
    }
    return { valid: true };
  };

  const handleDragEnter = useCallback((e: React.DragEvent) => {
    e.preventDefault();
    e.stopPropagation();
    setIsDragging(true);
  }, []);

  const handleDragLeave = useCallback((e: React.DragEvent) => {
    e.preventDefault();
    e.stopPropagation();
    setIsDragging(false);
  }, []);

  const handleDragOver = useCallback((e: React.DragEvent) => {
    e.preventDefault();
    e.stopPropagation();
  }, []);

  const addFiles = useCallback((newFiles: File[]) => {
    const validFiles: FileStatus[] = [];

    for (const file of newFiles) {
      // Skip duplicates
      if (files.some(f => f.file.name === file.name)) {
        continue;
      }

      const validation = validateFile(file);
      if (validation.valid) {
        validFiles.push({ file, status: 'pending' });
      } else {
        validFiles.push({ file, status: 'error', message: validation.reason });
      }
    }

    setFiles(prev => [...prev, ...validFiles]);
  }, [files]);

  const handleDrop = useCallback((e: React.DragEvent) => {
    e.preventDefault();
    e.stopPropagation();
    setIsDragging(false);

    const droppedFiles = Array.from(e.dataTransfer.files);
    addFiles(droppedFiles);
  }, [addFiles]);

  const handleFileSelect = useCallback((e: React.ChangeEvent<HTMLInputElement>) => {
    const selectedFiles = Array.from(e.target.files || []);
    addFiles(selectedFiles);
    // Reset input so the same file can be selected again
    e.target.value = '';
  }, [addFiles]);

  const removeFile = (index: number) => {
    setFiles(prev => prev.filter((_, i) => i !== index));
  };

  const handleUpload = async () => {
    const pendingFiles = files.filter(f => f.status === 'pending');
    if (pendingFiles.length === 0) return;

    setIsUploading(true);
    setError(null);

    try {
      // Upload files
      const response = await intakeApi.uploadFiles(
        studyName,
        pendingFiles.map(f => f.file)
      );

      // Update file statuses based on response
      const uploadResults = new Map(
        response.uploaded_files.map(f => [f.name, f.status === 'uploaded'])
      );

      setFiles(prev => prev.map(f => {
        if (f.status !== 'pending') return f;
        const success = uploadResults.get(f.file.name);
        return {
          ...f,
          status: success ? 'success' : 'error',
          message: success ? undefined : 'Upload failed',
        };
      }));

      // Clear successful uploads after a moment and refresh
      setTimeout(() => {
        setFiles(prev => prev.filter(f => f.status !== 'success'));
        onUploadComplete();
      }, 1500);

    } catch (err) {
      setError(err instanceof Error ? err.message : 'Upload failed');
      setFiles(prev => prev.map(f =>
        f.status === 'pending'
          ? { ...f, status: 'error', message: 'Upload failed' }
          : f
      ));
    } finally {
      setIsUploading(false);
    }
  };

  const pendingCount = files.filter(f => f.status === 'pending').length;

  if (compact) {
    // Compact inline version
    return (
      <div className="space-y-2">
        <div className="flex items-center gap-2">
          <button
            onClick={() => fileInputRef.current?.click()}
            className="flex items-center gap-2 px-3 py-1.5 rounded-lg text-sm font-medium
                       bg-dark-700 text-dark-300 hover:bg-dark-600 hover:text-white
                       transition-colors"
          >
            <Upload className="w-4 h-4" />
            Add Files
          </button>
          {pendingCount > 0 && (
            <button
              onClick={handleUpload}
              disabled={isUploading}
              className="flex items-center gap-2 px-3 py-1.5 rounded-lg text-sm font-medium
                         bg-primary-500/10 text-primary-400 hover:bg-primary-500/20
                         disabled:opacity-50 transition-colors"
            >
              {isUploading ? (
                <Loader2 className="w-4 h-4 animate-spin" />
              ) : (
                <Upload className="w-4 h-4" />
              )}
              Upload {pendingCount} {pendingCount === 1 ? 'File' : 'Files'}
            </button>
          )}
        </div>

        {/* File list */}
        {files.length > 0 && (
          <div className="flex flex-wrap gap-2">
            {files.map((f, i) => (
              <span
                key={i}
                className={`inline-flex items-center gap-1.5 px-2 py-1 rounded text-xs
                  ${f.status === 'error' ? 'bg-red-500/10 text-red-400' :
                    f.status === 'success' ? 'bg-green-500/10 text-green-400' :
                    'bg-dark-700 text-dark-300'}`}
              >
                {f.status === 'uploading' && <Loader2 className="w-3 h-3 animate-spin" />}
                {f.status === 'success' && <CheckCircle className="w-3 h-3" />}
                {f.status === 'error' && <AlertCircle className="w-3 h-3" />}
                {f.file.name}
                {f.status === 'pending' && (
                  <button onClick={() => removeFile(i)} className="hover:text-white">
                    <X className="w-3 h-3" />
                  </button>
                )}
              </span>
            ))}
          </div>
        )}

        <input
          ref={fileInputRef}
          type="file"
          multiple
          accept={VALID_EXTENSIONS.join(',')}
          onChange={handleFileSelect}
          className="hidden"
        />
      </div>
    );
  }

  // Full dropzone version
  return (
    <div className="space-y-4">
      {/* Dropzone */}
      <div
        onDragEnter={handleDragEnter}
        onDragLeave={handleDragLeave}
        onDragOver={handleDragOver}
        onDrop={handleDrop}
        onClick={() => fileInputRef.current?.click()}
        className={`
          relative border-2 border-dashed rounded-xl p-6 cursor-pointer
          transition-all duration-200
          ${isDragging
            ? 'border-primary-400 bg-primary-400/5'
            : 'border-dark-600 hover:border-primary-400/50 hover:bg-white/5'
          }
        `}
      >
        <div className="flex flex-col items-center text-center">
          <div className={`w-12 h-12 rounded-full flex items-center justify-center mb-3
            ${isDragging ? 'bg-primary-400/20 text-primary-400' : 'bg-dark-700 text-dark-400'}`}>
            <Upload className="w-6 h-6" />
          </div>
          <p className="text-white font-medium mb-1">
            {isDragging ? 'Drop files here' : 'Drop model files here'}
          </p>
          <p className="text-sm text-dark-400">
            or <span className="text-primary-400">click to browse</span>
          </p>
          <p className="text-xs text-dark-500 mt-2">
            Accepts: {VALID_EXTENSIONS.join(', ')}
          </p>
        </div>
      </div>

      {/* Error */}
      {error && (
        <div className="p-3 rounded-lg bg-red-500/10 border border-red-500/30 text-red-400 text-sm flex items-center gap-2">
          <AlertCircle className="w-4 h-4 flex-shrink-0" />
          {error}
        </div>
      )}

      {/* File List */}
      {files.length > 0 && (
        <div className="space-y-2">
          <h5 className="text-sm font-medium text-dark-300">Files to Upload</h5>
          <div className="space-y-1">
            {files.map((f, i) => (
              <div
                key={i}
                className={`flex items-center justify-between p-2 rounded-lg
                  ${f.status === 'error' ? 'bg-red-500/10' :
                    f.status === 'success' ? 'bg-green-500/10' :
                    'bg-dark-700'}`}
              >
                <div className="flex items-center gap-2">
                  {f.status === 'pending' && <FileText className="w-4 h-4 text-dark-400" />}
                  {f.status === 'uploading' && <Loader2 className="w-4 h-4 text-primary-400 animate-spin" />}
                  {f.status === 'success' && <CheckCircle className="w-4 h-4 text-green-400" />}
                  {f.status === 'error' && <AlertCircle className="w-4 h-4 text-red-400" />}
                  <span className={`text-sm ${f.status === 'error' ? 'text-red-400' :
                    f.status === 'success' ? 'text-green-400' :
                    'text-white'}`}>
                    {f.file.name}
                  </span>
                  {f.message && (
                    <span className="text-xs text-red-400">({f.message})</span>
                  )}
                </div>
                {f.status === 'pending' && (
                  <button
                    onClick={(e) => {
                      e.stopPropagation();
                      removeFile(i);
                    }}
                    className="p-1 hover:bg-white/10 rounded text-dark-400 hover:text-white"
                  >
                    <X className="w-4 h-4" />
                  </button>
                )}
              </div>
            ))}
          </div>

          {/* Upload Button */}
          {pendingCount > 0 && (
            <button
              onClick={handleUpload}
              disabled={isUploading}
              className="w-full flex items-center justify-center gap-2 px-4 py-2 rounded-lg
                         bg-primary-500 text-white font-medium
                         hover:bg-primary-400 disabled:opacity-50 disabled:cursor-not-allowed
                         transition-colors"
            >
              {isUploading ? (
                <>
                  <Loader2 className="w-4 h-4 animate-spin" />
                  Uploading...
                </>
              ) : (
                <>
                  <Upload className="w-4 h-4" />
                  Upload {pendingCount} {pendingCount === 1 ? 'File' : 'Files'}
                </>
              )}
            </button>
          )}
        </div>
      )}

      <input
        ref={fileInputRef}
        type="file"
        multiple
        accept={VALID_EXTENSIONS.join(',')}
        onChange={handleFileSelect}
        className="hidden"
      />
    </div>
  );
};

export default FileDropzone;
@@ -0,0 +1,272 @@
/**
 * FinalizeModal - Modal for finalizing an inbox study
 *
 * Allows user to:
 * - Select/create topic folder
 * - Choose whether to run baseline FEA
 * - See progress during finalization
 */

import React, { useState, useEffect } from 'react';
import {
  X,
  Folder,
  CheckCircle,
  Loader2,
  AlertCircle,
} from 'lucide-react';
import { intakeApi } from '../../api/intake';
import { TopicInfo, InboxStudyDetail } from '../../types/intake';

interface FinalizeModalProps {
  studyName: string;
  topics: TopicInfo[];
  onClose: () => void;
  onFinalized: (finalPath: string) => void;
}

export const FinalizeModal: React.FC<FinalizeModalProps> = ({
  studyName,
  topics,
  onClose,
  onFinalized,
}) => {
  const [studyDetail, setStudyDetail] = useState<InboxStudyDetail | null>(null);
  const [selectedTopic, setSelectedTopic] = useState('');
  const [newTopic, setNewTopic] = useState('');
  const [runBaseline, setRunBaseline] = useState(true);
  const [isLoading, setIsLoading] = useState(true);
  const [isFinalizing, setIsFinalizing] = useState(false);
  const [progress, setProgress] = useState<string>('');
  const [error, setError] = useState<string | null>(null);

  // Load study detail
  useEffect(() => {
    const loadStudy = async () => {
      try {
        const detail = await intakeApi.getInboxStudy(studyName);
        setStudyDetail(detail);
        // Pre-select topic if set in spec
        if (detail.spec.meta.topic) {
          setSelectedTopic(detail.spec.meta.topic);
        }
      } catch (err) {
        setError(err instanceof Error ? err.message : 'Failed to load study');
      } finally {
        setIsLoading(false);
      }
    };
    loadStudy();
  }, [studyName]);

  const handleFinalize = async () => {
    const topic = newTopic.trim() || selectedTopic;
    if (!topic) {
      setError('Please select or create a topic folder');
      return;
    }

    setIsFinalizing(true);
    setError(null);
    setProgress('Starting finalization...');

    try {
      setProgress('Validating study configuration...');
      await new Promise((r) => setTimeout(r, 500)); // Visual feedback

      if (runBaseline) {
        setProgress('Running baseline FEA solve...');
      }

      const result = await intakeApi.finalize(studyName, {
        topic,
        run_baseline: runBaseline,
      });

      setProgress('Finalization complete!');
      await new Promise((r) => setTimeout(r, 500));

      onFinalized(result.final_path);
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Finalization failed');
      setIsFinalizing(false);
    }
  };

  return (
    <div className="fixed inset-0 z-50 flex items-center justify-center bg-dark-900/80 backdrop-blur-sm">
      <div className="w-full max-w-lg glass-strong rounded-xl border border-primary-400/20 overflow-hidden">
        {/* Header */}
        <div className="px-6 py-4 border-b border-primary-400/10 flex items-center justify-between">
          <div className="flex items-center gap-3">
            <div className="w-10 h-10 rounded-lg bg-primary-400/10 flex items-center justify-center">
              <Folder className="w-5 h-5 text-primary-400" />
            </div>
            <div>
              <h3 className="text-lg font-semibold text-white">Finalize Study</h3>
              <p className="text-sm text-dark-400">{studyName}</p>
            </div>
          </div>
          {!isFinalizing && (
            <button
              onClick={onClose}
              className="p-2 hover:bg-white/5 rounded-lg transition-colors text-dark-400 hover:text-white"
            >
              <X className="w-5 h-5" />
            </button>
          )}
        </div>

        {/* Content */}
        <div className="p-6 space-y-6">
          {isLoading ? (
            <div className="flex items-center justify-center py-8">
              <Loader2 className="w-6 h-6 animate-spin text-primary-400" />
            </div>
          ) : isFinalizing ? (
            /* Progress View */
            <div className="text-center py-8 space-y-4">
              <Loader2 className="w-12 h-12 animate-spin text-primary-400 mx-auto" />
              <p className="text-white font-medium">{progress}</p>
              <p className="text-sm text-dark-400">
                Please wait while your study is being finalized...
              </p>
            </div>
          ) : (
            <>
              {/* Error Display */}
              {error && (
                <div className="p-3 rounded-lg bg-red-500/10 border border-red-500/30 text-red-400 text-sm flex items-center gap-2">
                  <AlertCircle className="w-4 h-4 flex-shrink-0" />
                  {error}
                </div>
              )}

              {/* Study Summary */}
              {studyDetail && (
                <div className="p-4 rounded-lg bg-dark-800 space-y-2">
                  <h4 className="text-sm font-medium text-dark-300">Study Summary</h4>
                  <div className="grid grid-cols-2 gap-4 text-sm">
                    <div>
                      <span className="text-dark-500">Status:</span>
                      <span className="ml-2 text-white capitalize">
                        {studyDetail.spec.meta.status}
                      </span>
                    </div>
                    <div>
                      <span className="text-dark-500">Model Files:</span>
                      <span className="ml-2 text-white">
                        {studyDetail.files.sim.length + studyDetail.files.prt.length + studyDetail.files.fem.length}
                      </span>
                    </div>
                    <div>
                      <span className="text-dark-500">Design Variables:</span>
                      <span className="ml-2 text-white">
                        {studyDetail.spec.design_variables?.length || 0}
                      </span>
                    </div>
                    <div>
                      <span className="text-dark-500">Objectives:</span>
                      <span className="ml-2 text-white">
                        {studyDetail.spec.objectives?.length || 0}
                      </span>
                    </div>
                  </div>
                </div>
              )}

              {/* Topic Selection */}
              <div>
                <label className="block text-sm font-medium text-dark-300 mb-2">
                  Topic Folder <span className="text-red-400">*</span>
                </label>
                <div className="flex gap-2">
                  <select
                    value={selectedTopic}
                    onChange={(e) => {
                      setSelectedTopic(e.target.value);
                      setNewTopic('');
                    }}
                    className="flex-1 px-4 py-2.5 rounded-lg bg-dark-800 border border-dark-600
                               text-white focus:border-primary-400 focus:outline-none
                               focus:ring-1 focus:ring-primary-400/50"
                  >
                    <option value="">Select existing topic...</option>
                    {topics.map((topic) => (
                      <option key={topic.name} value={topic.name}>
                        {topic.name} ({topic.study_count} studies)
                      </option>
                    ))}
                  </select>
                  <span className="text-dark-500 self-center">or</span>
                  <input
                    type="text"
                    value={newTopic}
                    onChange={(e) => {
                      setNewTopic(e.target.value.replace(/[^A-Za-z0-9_]/g, '_'));
                      setSelectedTopic('');
                    }}
                    placeholder="New_Topic"
                    className="flex-1 px-4 py-2.5 rounded-lg bg-dark-800 border border-dark-600
                               text-white placeholder-dark-500 focus:border-primary-400
                               focus:outline-none focus:ring-1 focus:ring-primary-400/50"
                  />
                </div>
                <p className="mt-1 text-xs text-dark-500">
                  Study will be created at: studies/{newTopic || selectedTopic || '<topic>'}/{studyName}/
                </p>
              </div>

              {/* Baseline Option */}
              <div>
                <label className="flex items-center gap-3 cursor-pointer">
                  <input
                    type="checkbox"
                    checked={runBaseline}
                    onChange={(e) => setRunBaseline(e.target.checked)}
                    className="w-4 h-4 rounded border-dark-600 bg-dark-800 text-primary-400
                               focus:ring-primary-400/50"
                  />
                  <div>
                    <span className="text-white font-medium">Run baseline FEA solve</span>
                    <p className="text-xs text-dark-500">
                      Validates the model and captures baseline performance metrics
                    </p>
                  </div>
                </label>
              </div>
            </>
          )}
        </div>

        {/* Footer */}
        {!isLoading && !isFinalizing && (
          <div className="px-6 py-4 border-t border-primary-400/10 flex justify-end gap-3">
            <button
              onClick={onClose}
              className="px-4 py-2 rounded-lg border border-dark-600 text-dark-300
                         hover:border-dark-500 hover:text-white transition-colors"
            >
              Cancel
            </button>
            <button
              onClick={handleFinalize}
              disabled={!selectedTopic && !newTopic.trim()}
              className="px-6 py-2 rounded-lg font-medium transition-all disabled:opacity-50
                         flex items-center gap-2"
              style={{
                background: 'linear-gradient(135deg, #00d4e6 0%, #0891b2 100%)',
                color: '#000',
              }}
            >
              <CheckCircle className="w-4 h-4" />
              Finalize Study
            </button>
          </div>
        )}
      </div>
    </div>
  );
};

export default FinalizeModal;
@@ -0,0 +1,147 @@
|
|||||||
|
/**
|
||||||
|
* InboxSection - Section displaying inbox studies on Home page
|
||||||
|
*
|
||||||
|
* Shows the "Create New Study" card and lists all inbox studies
|
||||||
|
* with their current status and available actions.
|
||||||
|
*/
|
||||||
|
|
||||||
|
import React, { useState, useEffect, useCallback } from 'react';
|
||||||
|
import { Inbox, RefreshCw, ChevronDown, ChevronRight } from 'lucide-react';
|
||||||
|
import { intakeApi } from '../../api/intake';
|
||||||
|
import { InboxStudy, TopicInfo } from '../../types/intake';
|
||||||
|
import { CreateStudyCard } from './CreateStudyCard';
|
||||||
|
import { InboxStudyCard } from './InboxStudyCard';
|
||||||
|
import { FinalizeModal } from './FinalizeModal';
|
||||||
|
|
||||||
|
interface InboxSectionProps {
|
||||||
|
onStudyFinalized?: () => void;
|
||||||
|
}
|
||||||
|
|
||||||
|
export const InboxSection: React.FC<InboxSectionProps> = ({ onStudyFinalized }) => {
|
||||||
|
const [inboxStudies, setInboxStudies] = useState<InboxStudy[]>([]);
|
||||||
|
const [topics, setTopics] = useState<TopicInfo[]>([]);
|
||||||
|
const [isLoading, setIsLoading] = useState(true);
|
||||||
|
const [isExpanded, setIsExpanded] = useState(true);
|
||||||
|
const [selectedStudyForFinalize, setSelectedStudyForFinalize] = useState<string | null>(null);
|
||||||
|
|
||||||
|
const loadData = useCallback(async () => {
|
||||||
|
setIsLoading(true);
|
||||||
|
try {
|
||||||
|
const [inboxResponse, topicsResponse] = await Promise.all([
|
||||||
|
intakeApi.listInbox(),
|
||||||
|
intakeApi.listTopics(),
|
||||||
|
]);
|
||||||
|
setInboxStudies(inboxResponse.studies);
|
||||||
|
setTopics(topicsResponse.topics);
|
||||||
|
} catch (err) {
|
||||||
|
console.error('Failed to load inbox data:', err);
|
||||||
|
} finally {
|
||||||
|
setIsLoading(false);
|
||||||
|
}
|
||||||
|
}, []);
|
||||||
|
|
||||||
|
useEffect(() => {
|
||||||
|
loadData();
|
||||||
|
}, [loadData]);
|
||||||
|
|
||||||
|
const handleStudyCreated = (_studyName: string) => {
|
||||||
|
loadData();
|
||||||
|
};
|
||||||
|
|
||||||
|
const handleStudyFinalized = (_finalPath: string) => {
|
||||||
|
setSelectedStudyForFinalize(null);
|
||||||
|
loadData();
|
||||||
|
onStudyFinalized?.();
|
||||||
|
};
|
||||||
|
|
||||||
|
const pendingStudies = inboxStudies.filter(
|
||||||
|
(s) => !['ready', 'running', 'completed'].includes(s.status)
|
||||||
|
);
|
  return (
    <div className="space-y-4">
      {/* Section Header */}
      <button
        onClick={() => setIsExpanded(!isExpanded)}
        className="w-full flex items-center justify-between px-2 py-1 hover:bg-white/5 rounded-lg transition-colors"
      >
        <div className="flex items-center gap-3">
          <div className="w-8 h-8 rounded-lg bg-primary-400/10 flex items-center justify-center">
            <Inbox className="w-4 h-4 text-primary-400" />
          </div>
          <div className="text-left">
            <h2 className="text-lg font-semibold text-white">Study Inbox</h2>
            <p className="text-sm text-dark-400">
              {pendingStudies.length} pending studies
            </p>
          </div>
        </div>
        <div className="flex items-center gap-2">
          <button
            onClick={(e) => {
              e.stopPropagation();
              loadData();
            }}
            className="p-2 hover:bg-white/5 rounded-lg transition-colors text-dark-400 hover:text-primary-400"
            title="Refresh"
          >
            <RefreshCw className={`w-4 h-4 ${isLoading ? 'animate-spin' : ''}`} />
          </button>
          {isExpanded ? (
            <ChevronDown className="w-5 h-5 text-dark-400" />
          ) : (
            <ChevronRight className="w-5 h-5 text-dark-400" />
          )}
        </div>
      </button>

      {/* Content */}
      {isExpanded && (
        <div className="space-y-4">
          {/* Create Study Card */}
          <CreateStudyCard topics={topics} onStudyCreated={handleStudyCreated} />

          {/* Inbox Studies List */}
          {inboxStudies.length > 0 && (
            <div className="space-y-3">
              <h3 className="text-sm font-medium text-dark-400 px-2">
                Inbox Studies ({inboxStudies.length})
              </h3>
              {inboxStudies.map((study) => (
                <InboxStudyCard
                  key={study.study_name}
                  study={study}
                  onRefresh={loadData}
                  onSelect={setSelectedStudyForFinalize}
                />
              ))}
            </div>
          )}

          {/* Empty State */}
          {!isLoading && inboxStudies.length === 0 && (
            <div className="text-center py-8 text-dark-400">
              <Inbox className="w-12 h-12 mx-auto mb-3 opacity-30" />
              <p>No studies in inbox</p>
              <p className="text-sm text-dark-500">
                Create a new study to get started
              </p>
            </div>
          )}
        </div>
      )}

      {/* Finalize Modal */}
      {selectedStudyForFinalize && (
        <FinalizeModal
          studyName={selectedStudyForFinalize}
          topics={topics}
          onClose={() => setSelectedStudyForFinalize(null)}
          onFinalized={handleStudyFinalized}
        />
      )}
    </div>
  );
};

export default InboxSection;
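The card component added in the next hunk defers its detail fetch until the card is first expanded. That guard reduces to a small pure predicate; the sketch below is a hypothetical standalone form (the names mirror the component's state variables, but the function itself does not exist in the source):

```typescript
// Hypothetical standalone form of the lazy-load guard used in the expand effect:
// fetch detail only when the card is open and nothing is loaded or in flight.
function shouldLoadDetail(
  isExpanded: boolean,
  hasDetail: boolean,
  isLoadingDetail: boolean,
): boolean {
  return isExpanded && !hasDetail && !isLoadingDetail;
}
```

This keeps collapsed cards cheap: the detail request fires once per card, on first expansion, and never re-fires while a request is pending.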
@@ -0,0 +1,455 @@
/**
 * InboxStudyCard - Card displaying an inbox study with actions
 *
 * Shows study status, files, and provides actions for:
 * - Running introspection
 * - Generating README
 * - Finalizing the study
 */

import React, { useState, useEffect } from 'react';
import {
  FileText,
  Folder,
  Trash2,
  Play,
  CheckCircle,
  Clock,
  AlertCircle,
  Loader2,
  ChevronDown,
  ChevronRight,
  Sparkles,
  ArrowRight,
  Eye,
  Save,
} from 'lucide-react';
import { InboxStudy, SpecStatus, ExpressionInfo, InboxStudyDetail } from '../../types/intake';
import { intakeApi } from '../../api/intake';
import { FileDropzone } from './FileDropzone';
import { ContextFileUpload } from './ContextFileUpload';
import { ExpressionList } from './ExpressionList';

interface InboxStudyCardProps {
  study: InboxStudy;
  onRefresh: () => void;
  onSelect: (studyName: string) => void;
}

const statusConfig: Record<SpecStatus, { icon: React.ReactNode; color: string; label: string }> = {
  draft: {
    icon: <Clock className="w-4 h-4" />,
    color: 'text-dark-400 bg-dark-600',
    label: 'Draft',
  },
  introspected: {
    icon: <CheckCircle className="w-4 h-4" />,
    color: 'text-blue-400 bg-blue-500/10',
    label: 'Introspected',
  },
  configured: {
    icon: <CheckCircle className="w-4 h-4" />,
    color: 'text-green-400 bg-green-500/10',
    label: 'Configured',
  },
  validated: {
    icon: <CheckCircle className="w-4 h-4" />,
    color: 'text-green-400 bg-green-500/10',
    label: 'Validated',
  },
  ready: {
    icon: <CheckCircle className="w-4 h-4" />,
    color: 'text-primary-400 bg-primary-500/10',
    label: 'Ready',
  },
  running: {
    icon: <Play className="w-4 h-4" />,
    color: 'text-yellow-400 bg-yellow-500/10',
    label: 'Running',
  },
  completed: {
    icon: <CheckCircle className="w-4 h-4" />,
    color: 'text-green-400 bg-green-500/10',
    label: 'Completed',
  },
  failed: {
    icon: <AlertCircle className="w-4 h-4" />,
    color: 'text-red-400 bg-red-500/10',
    label: 'Failed',
  },
};

export const InboxStudyCard: React.FC<InboxStudyCardProps> = ({
  study,
  onRefresh,
  onSelect,
}) => {
  const [isExpanded, setIsExpanded] = useState(false);
  const [isIntrospecting, setIsIntrospecting] = useState(false);
  const [isGeneratingReadme, setIsGeneratingReadme] = useState(false);
  const [isDeleting, setIsDeleting] = useState(false);
  const [error, setError] = useState<string | null>(null);

  // Introspection data (fetched when expanded)
  const [studyDetail, setStudyDetail] = useState<InboxStudyDetail | null>(null);
  const [isLoadingDetail, setIsLoadingDetail] = useState(false);
  const [selectedExpressions, setSelectedExpressions] = useState<string[]>([]);
  const [showReadme, setShowReadme] = useState(false);
  const [readmeContent, setReadmeContent] = useState<string | null>(null);
  const [isSavingDVs, setIsSavingDVs] = useState(false);
  const [dvSaveMessage, setDvSaveMessage] = useState<string | null>(null);

  const status = statusConfig[study.status] || statusConfig.draft;

  // Fetch study details when expanded for the first time
  useEffect(() => {
    if (isExpanded && !studyDetail && !isLoadingDetail) {
      loadStudyDetail();
    }
  }, [isExpanded]);

  const loadStudyDetail = async () => {
    setIsLoadingDetail(true);
    try {
      const detail = await intakeApi.getInboxStudy(study.study_name);
      setStudyDetail(detail);

      // Auto-select candidate expressions
      const introspection = detail.spec?.model?.introspection;
      if (introspection?.expressions) {
        const candidates = introspection.expressions
          .filter((e: ExpressionInfo) => e.is_candidate)
          .map((e: ExpressionInfo) => e.name);
        setSelectedExpressions(candidates);
      }
    } catch (err) {
      console.error('Failed to load study detail:', err);
    } finally {
      setIsLoadingDetail(false);
    }
  };

  const handleIntrospect = async () => {
    setIsIntrospecting(true);
    setError(null);
    try {
      await intakeApi.introspect({ study_name: study.study_name });
      // Reload study detail to get new introspection data
      await loadStudyDetail();
      onRefresh();
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Introspection failed');
    } finally {
      setIsIntrospecting(false);
    }
  };

  const handleGenerateReadme = async () => {
    setIsGeneratingReadme(true);
    setError(null);
    try {
      const response = await intakeApi.generateReadme(study.study_name);
      setReadmeContent(response.content);
      setShowReadme(true);
      onRefresh();
    } catch (err) {
      setError(err instanceof Error ? err.message : 'README generation failed');
    } finally {
      setIsGeneratingReadme(false);
    }
  };

  const handleDelete = async () => {
    if (!confirm(`Delete inbox study "${study.study_name}"? This cannot be undone.`)) {
      return;
    }
    setIsDeleting(true);
    try {
      await intakeApi.deleteInboxStudy(study.study_name);
      onRefresh();
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Delete failed');
      setIsDeleting(false);
    }
  };

  const handleSaveDesignVariables = async () => {
    if (selectedExpressions.length === 0) {
      setError('Please select at least one expression to use as a design variable');
      return;
    }

    setIsSavingDVs(true);
    setError(null);
    setDvSaveMessage(null);

    try {
      const result = await intakeApi.createDesignVariables(study.study_name, selectedExpressions);
      setDvSaveMessage(`Created ${result.total_created} design variable(s)`);
      // Reload study detail to see updated spec
      await loadStudyDetail();
      onRefresh();
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Failed to save design variables');
    } finally {
      setIsSavingDVs(false);
    }
  };

  const canIntrospect = study.status === 'draft' && study.model_files.length > 0;
  const canGenerateReadme = study.status === 'introspected';
  const canFinalize = ['introspected', 'configured'].includes(study.status);
  const canSaveDVs = study.status === 'introspected' && selectedExpressions.length > 0;

  return (
    <div className="glass rounded-xl border border-primary-400/10 overflow-hidden">
      {/* Header - Always visible */}
      <button
        onClick={() => setIsExpanded(!isExpanded)}
        className="w-full px-4 py-3 flex items-center justify-between hover:bg-white/5 transition-colors"
      >
        <div className="flex items-center gap-3">
          <div className="w-10 h-10 rounded-lg bg-dark-700 flex items-center justify-center">
            <Folder className="w-5 h-5 text-primary-400" />
          </div>
          <div className="text-left">
            <h4 className="text-white font-medium">{study.study_name}</h4>
            {study.description && (
              <p className="text-sm text-dark-400 truncate max-w-[300px]">
                {study.description}
              </p>
            )}
          </div>
        </div>
        <div className="flex items-center gap-3">
          {/* Status Badge */}
          <span className={`inline-flex items-center gap-1.5 px-2.5 py-1 rounded-full text-xs font-medium ${status.color}`}>
            {status.icon}
            {status.label}
          </span>
          {/* File Count */}
          <span className="text-dark-500 text-sm">
            {study.model_files.length} files
          </span>
          {/* Expand Icon */}
          {isExpanded ? (
            <ChevronDown className="w-4 h-4 text-dark-400" />
          ) : (
            <ChevronRight className="w-4 h-4 text-dark-400" />
          )}
        </div>
      </button>

      {/* Expanded Content */}
      {isExpanded && (
        <div className="px-4 pb-4 space-y-4 border-t border-primary-400/10 pt-4">
          {/* Error Display */}
          {error && (
            <div className="p-3 rounded-lg bg-red-500/10 border border-red-500/30 text-red-400 text-sm flex items-center gap-2">
              <AlertCircle className="w-4 h-4 flex-shrink-0" />
              {error}
            </div>
          )}

          {/* Success Message */}
          {dvSaveMessage && (
            <div className="p-3 rounded-lg bg-green-500/10 border border-green-500/30 text-green-400 text-sm flex items-center gap-2">
              <CheckCircle className="w-4 h-4 flex-shrink-0" />
              {dvSaveMessage}
            </div>
          )}

          {/* Files Section */}
          {study.model_files.length > 0 && (
            <div>
              <h5 className="text-sm font-medium text-dark-300 mb-2">Model Files</h5>
              <div className="flex flex-wrap gap-2">
                {study.model_files.map((file) => (
                  <span
                    key={file}
                    className="inline-flex items-center gap-1.5 px-2 py-1 rounded bg-dark-700 text-dark-300 text-xs"
                  >
                    <FileText className="w-3 h-3" />
                    {file}
                  </span>
                ))}
              </div>
            </div>
          )}

          {/* Model File Upload Section */}
          <div>
            <h5 className="text-sm font-medium text-dark-300 mb-2">Upload Model Files</h5>
            <FileDropzone
              studyName={study.study_name}
              onUploadComplete={onRefresh}
              compact={true}
            />
          </div>

          {/* Context File Upload Section */}
          <ContextFileUpload
            studyName={study.study_name}
            onUploadComplete={onRefresh}
          />

          {/* Introspection Results - Expressions */}
          {isLoadingDetail && (
            <div className="flex items-center gap-2 text-dark-400 text-sm py-4">
              <Loader2 className="w-4 h-4 animate-spin" />
              Loading introspection data...
            </div>
          )}

          {studyDetail?.spec?.model?.introspection?.expressions &&
            studyDetail.spec.model.introspection.expressions.length > 0 && (
              <ExpressionList
                expressions={studyDetail.spec.model.introspection.expressions}
                massKg={studyDetail.spec.model.introspection.mass_kg}
                selectedExpressions={selectedExpressions}
                onSelectionChange={setSelectedExpressions}
                readOnly={study.status === 'configured'}
                compact={true}
              />
            )}

          {/* README Preview Section */}
          {(readmeContent || study.status === 'configured') && (
            <div className="space-y-2">
              <div className="flex items-center justify-between">
                <h5 className="text-sm font-medium text-dark-300 flex items-center gap-2">
                  <FileText className="w-4 h-4" />
                  README.md
                </h5>
                <button
                  onClick={() => setShowReadme(!showReadme)}
                  className="flex items-center gap-1 px-2 py-1 text-xs rounded bg-dark-600
                             text-dark-300 hover:bg-dark-500 transition-colors"
                >
                  <Eye className="w-3 h-3" />
                  {showReadme ? 'Hide' : 'Preview'}
                </button>
              </div>
              {showReadme && readmeContent && (
                <div className="max-h-64 overflow-y-auto rounded-lg border border-dark-600
                                bg-dark-800 p-4">
                  <pre className="text-xs text-dark-300 whitespace-pre-wrap font-mono">
                    {readmeContent}
                  </pre>
                </div>
              )}
            </div>
          )}

          {/* No Files Warning */}
          {study.model_files.length === 0 && (
            <div className="p-3 rounded-lg bg-yellow-500/10 border border-yellow-500/30 text-yellow-400 text-sm flex items-center gap-2">
              <AlertCircle className="w-4 h-4 flex-shrink-0" />
              No model files found. Upload .prt, .sim, or .fem files to continue.
            </div>
          )}

          {/* Actions */}
          <div className="flex flex-wrap gap-2">
            {/* Introspect */}
            {canIntrospect && (
              <button
                onClick={handleIntrospect}
                disabled={isIntrospecting}
                className="flex items-center gap-2 px-3 py-1.5 rounded-lg text-sm font-medium
                           bg-blue-500/10 text-blue-400 hover:bg-blue-500/20
                           disabled:opacity-50 transition-colors"
              >
                {isIntrospecting ? (
                  <Loader2 className="w-4 h-4 animate-spin" />
                ) : (
                  <Play className="w-4 h-4" />
                )}
                Introspect Model
              </button>
            )}

            {/* Save Design Variables */}
            {canSaveDVs && (
              <button
                onClick={handleSaveDesignVariables}
                disabled={isSavingDVs}
                className="flex items-center gap-2 px-3 py-1.5 rounded-lg text-sm font-medium
                           bg-green-500/10 text-green-400 hover:bg-green-500/20
                           disabled:opacity-50 transition-colors"
              >
                {isSavingDVs ? (
                  <Loader2 className="w-4 h-4 animate-spin" />
                ) : (
                  <Save className="w-4 h-4" />
                )}
                Save as DVs ({selectedExpressions.length})
              </button>
            )}

            {/* Generate README */}
            {canGenerateReadme && (
              <button
                onClick={handleGenerateReadme}
                disabled={isGeneratingReadme}
                className="flex items-center gap-2 px-3 py-1.5 rounded-lg text-sm font-medium
                           bg-purple-500/10 text-purple-400 hover:bg-purple-500/20
                           disabled:opacity-50 transition-colors"
              >
                {isGeneratingReadme ? (
                  <Loader2 className="w-4 h-4 animate-spin" />
                ) : (
                  <Sparkles className="w-4 h-4" />
                )}
                Generate README
              </button>
            )}

            {/* Finalize */}
            {canFinalize && (
              <button
                onClick={() => onSelect(study.study_name)}
                className="flex items-center gap-2 px-3 py-1.5 rounded-lg text-sm font-medium
                           bg-primary-500/10 text-primary-400 hover:bg-primary-500/20
                           transition-colors"
              >
                <ArrowRight className="w-4 h-4" />
                Finalize Study
              </button>
            )}

            {/* Delete */}
            <button
              onClick={handleDelete}
              disabled={isDeleting}
              className="flex items-center gap-2 px-3 py-1.5 rounded-lg text-sm font-medium
                         bg-red-500/10 text-red-400 hover:bg-red-500/20
                         disabled:opacity-50 transition-colors ml-auto"
            >
              {isDeleting ? (
                <Loader2 className="w-4 h-4 animate-spin" />
              ) : (
                <Trash2 className="w-4 h-4" />
              )}
              Delete
            </button>
          </div>

          {/* Workflow Hint */}
          {study.status === 'draft' && study.model_files.length > 0 && (
            <p className="text-xs text-dark-500">
              Next step: Run introspection to discover expressions and model properties.
            </p>
          )}
          {study.status === 'introspected' && (
            <p className="text-xs text-dark-500">
              Next step: Generate README with Claude AI, then finalize to create the study.
            </p>
          )}
        </div>
      )}
    </div>
  );
};

export default InboxStudyCard;
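The card derives its action availability (`canIntrospect`, `canGenerateReadme`, `canFinalize`, `canSaveDVs`) from the study status and file/selection counts. As a hypothetical standalone sketch (the helper `availableActions` and the `StudyLike` shape are illustrative, not part of the source), the gating reduces to:

```typescript
// Hypothetical extraction of InboxStudyCard's action gating, for illustration only.
interface StudyLike {
  status: string;
  model_files: string[];
}

function availableActions(study: StudyLike, selectedExpressionCount: number) {
  return {
    // Introspection needs a draft study with at least one uploaded model file
    canIntrospect: study.status === 'draft' && study.model_files.length > 0,
    // README generation is only offered once introspection has run
    canGenerateReadme: study.status === 'introspected',
    // Finalization is allowed from either post-introspection state
    canFinalize: ['introspected', 'configured'].includes(study.status),
    // Saving design variables also requires a non-empty expression selection
    canSaveDVs: study.status === 'introspected' && selectedExpressionCount > 0,
  };
}
```

This mirrors the intended workflow order: upload files, introspect, select expressions, then finalize.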
13
atomizer-dashboard/frontend/src/components/intake/index.ts
Normal file
@@ -0,0 +1,13 @@
/**
 * Intake Components Index
 *
 * Export all intake workflow components.
 */

export { CreateStudyCard } from './CreateStudyCard';
export { InboxStudyCard } from './InboxStudyCard';
export { FinalizeModal } from './FinalizeModal';
export { InboxSection } from './InboxSection';
export { FileDropzone } from './FileDropzone';
export { ContextFileUpload } from './ContextFileUpload';
export { ExpressionList } from './ExpressionList';
@@ -0,0 +1,254 @@
/**
 * StudioBuildDialog - Final dialog to name and build the study
 */

import React, { useState, useEffect } from 'react';
import { X, Loader2, FolderOpen, AlertCircle, CheckCircle, Sparkles, Play } from 'lucide-react';
import { intakeApi } from '../../api/intake';

interface StudioBuildDialogProps {
  draftId: string;
  onClose: () => void;
  onBuildComplete: (finalPath: string, finalName: string) => void;
}

interface Topic {
  name: string;
  study_count: number;
}

export const StudioBuildDialog: React.FC<StudioBuildDialogProps> = ({
  draftId,
  onClose,
  onBuildComplete,
}) => {
  const [studyName, setStudyName] = useState('');
  const [topic, setTopic] = useState('');
  const [newTopic, setNewTopic] = useState('');
  const [useNewTopic, setUseNewTopic] = useState(false);
  const [topics, setTopics] = useState<Topic[]>([]);
  const [isBuilding, setIsBuilding] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const [validationErrors, setValidationErrors] = useState<string[]>([]);

  // Load topics
  useEffect(() => {
    loadTopics();
  }, []);

  const loadTopics = async () => {
    try {
      const response = await intakeApi.listTopics();
      setTopics(response.topics);
      if (response.topics.length > 0) {
        setTopic(response.topics[0].name);
      }
    } catch (err) {
      console.error('Failed to load topics:', err);
    }
  };

  // Validate study name
  useEffect(() => {
    const errors: string[] = [];

    if (studyName.length > 0) {
      if (studyName.length < 3) {
        errors.push('Name must be at least 3 characters');
      }
      if (!/^[a-z0-9_]+$/.test(studyName)) {
        errors.push('Use only lowercase letters, numbers, and underscores');
      }
      if (studyName.startsWith('draft_')) {
        errors.push('Name cannot start with "draft_"');
      }
    }

    setValidationErrors(errors);
  }, [studyName]);

  const handleBuild = async () => {
    const finalTopic = useNewTopic ? newTopic : topic;

    if (!studyName || !finalTopic) {
      setError('Please provide both a study name and topic');
      return;
    }

    if (validationErrors.length > 0) {
      setError('Please fix validation errors');
      return;
    }

    setIsBuilding(true);
    setError(null);

    try {
      const response = await intakeApi.finalizeStudio(draftId, {
        topic: finalTopic,
        newName: studyName,
        runBaseline: false,
      });

      onBuildComplete(response.final_path, response.final_name);
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Build failed');
    } finally {
      setIsBuilding(false);
    }
  };

  const isValid = studyName.length >= 3 &&
    validationErrors.length === 0 &&
    (topic || (useNewTopic && newTopic));

  return (
    <div className="fixed inset-0 bg-black/50 flex items-center justify-center z-50">
      <div className="bg-dark-850 border border-dark-700 rounded-xl shadow-xl w-full max-w-lg mx-4">
        {/* Header */}
        <div className="flex items-center justify-between p-4 border-b border-dark-700">
          <div className="flex items-center gap-2">
            <Sparkles className="w-5 h-5 text-primary-400" />
            <h2 className="text-lg font-semibold text-white">Build Study</h2>
          </div>
          <button
            onClick={onClose}
            className="p-1 hover:bg-dark-700 rounded text-dark-400 hover:text-white transition-colors"
          >
            <X className="w-5 h-5" />
          </button>
        </div>

        {/* Content */}
        <div className="p-6 space-y-6">
          {/* Study Name */}
          <div>
            <label className="block text-sm font-medium text-dark-300 mb-2">
              Study Name
            </label>
            <input
              type="text"
              value={studyName}
              onChange={(e) => setStudyName(e.target.value.toLowerCase().replace(/[^a-z0-9_]/g, '_'))}
              placeholder="my_optimization_study"
              className="w-full bg-dark-700 border border-dark-600 rounded-lg px-3 py-2 text-white placeholder-dark-500 focus:outline-none focus:border-primary-400"
            />
            {validationErrors.length > 0 && (
              <div className="mt-2 space-y-1">
                {validationErrors.map((err, i) => (
                  <p key={i} className="text-xs text-red-400 flex items-center gap-1">
                    <AlertCircle className="w-3 h-3" />
                    {err}
                  </p>
                ))}
              </div>
            )}
            {studyName.length >= 3 && validationErrors.length === 0 && (
              <p className="mt-2 text-xs text-green-400 flex items-center gap-1">
                <CheckCircle className="w-3 h-3" />
                Name is valid
              </p>
            )}
          </div>

          {/* Topic Selection */}
          <div>
            <label className="block text-sm font-medium text-dark-300 mb-2">
              Topic Folder
            </label>

            {!useNewTopic && topics.length > 0 && (
              <div className="space-y-2">
                <select
                  value={topic}
                  onChange={(e) => setTopic(e.target.value)}
                  className="w-full bg-dark-700 border border-dark-600 rounded-lg px-3 py-2 text-white focus:outline-none focus:border-primary-400"
                >
                  {topics.map((t) => (
                    <option key={t.name} value={t.name}>
                      {t.name} ({t.study_count} studies)
                    </option>
                  ))}
                </select>
                <button
                  onClick={() => setUseNewTopic(true)}
                  className="text-sm text-primary-400 hover:text-primary-300"
                >
                  + Create new topic
                </button>
              </div>
            )}

            {(useNewTopic || topics.length === 0) && (
              <div className="space-y-2">
                <input
                  type="text"
                  value={newTopic}
                  onChange={(e) => setNewTopic(e.target.value.replace(/[^A-Za-z0-9_]/g, '_'))}
                  placeholder="NewTopic"
                  className="w-full bg-dark-700 border border-dark-600 rounded-lg px-3 py-2 text-white placeholder-dark-500 focus:outline-none focus:border-primary-400"
                />
                {topics.length > 0 && (
                  <button
                    onClick={() => setUseNewTopic(false)}
                    className="text-sm text-dark-400 hover:text-white"
                  >
                    Use existing topic
                  </button>
                )}
              </div>
            )}
          </div>

          {/* Preview */}
          <div className="p-3 bg-dark-700/50 rounded-lg">
            <p className="text-xs text-dark-400 mb-1">Study will be created at:</p>
            <p className="text-sm text-white font-mono flex items-center gap-2">
              <FolderOpen className="w-4 h-4 text-primary-400" />
              studies/{useNewTopic ? newTopic || '...' : topic}/{studyName || '...'}
            </p>
          </div>

          {/* Error */}
          {error && (
            <div className="p-3 rounded-lg bg-red-500/10 border border-red-500/30 text-red-400 text-sm flex items-center gap-2">
              <AlertCircle className="w-4 h-4 flex-shrink-0" />
              {error}
            </div>
          )}
        </div>

        {/* Footer */}
        <div className="flex items-center justify-end gap-3 p-4 border-t border-dark-700">
          <button
            onClick={onClose}
            disabled={isBuilding}
            className="px-4 py-2 text-sm text-dark-300 hover:text-white hover:bg-dark-700 rounded-lg transition-colors"
          >
            Cancel
          </button>
          <button
            onClick={handleBuild}
            disabled={!isValid || isBuilding}
            className="flex items-center gap-2 px-4 py-2 text-sm font-medium bg-primary-500 text-white rounded-lg hover:bg-primary-400 disabled:opacity-50 disabled:cursor-not-allowed transition-colors"
          >
            {isBuilding ? (
              <>
                <Loader2 className="w-4 h-4 animate-spin" />
                Building...
              </>
            ) : (
              <>
                <Play className="w-4 h-4" />
                Build Study
              </>
            )}
          </button>
        </div>
      </div>
    </div>
  );
};

export default StudioBuildDialog;
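The dialog enforces its naming rules in two places: an `onChange` sanitizer that coerces keystrokes into the allowed character set, and a validation effect that accumulates error messages. A hypothetical standalone sketch of those rules (the function names are illustrative; the source keeps this logic inline in the component):

```typescript
// Hypothetical extraction of StudioBuildDialog's name handling, for illustration only.

// Mirrors the input's onChange handler: lowercase, then replace disallowed chars with '_'.
function sanitizeStudyName(raw: string): string {
  return raw.toLowerCase().replace(/[^a-z0-9_]/g, '_');
}

// Mirrors the validation effect: no errors while the field is still empty.
function validateStudyName(name: string): string[] {
  const errors: string[] = [];
  if (name.length === 0) return errors;
  if (name.length < 3) {
    errors.push('Name must be at least 3 characters');
  }
  if (!/^[a-z0-9_]+$/.test(name)) {
    errors.push('Use only lowercase letters, numbers, and underscores');
  }
  if (name.startsWith('draft_')) {
    errors.push('Name cannot start with "draft_"');
  }
  return errors;
}
```

Because the sanitizer runs on every keystroke, the character-set rule can only fail on programmatically set values; the length and `draft_` rules are the ones a user will actually see.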
375
atomizer-dashboard/frontend/src/components/studio/StudioChat.tsx
Normal file
@@ -0,0 +1,375 @@
/**
 * StudioChat - Context-aware AI chat for Studio
 *
 * Uses the existing useChat hook to communicate with Claude via WebSocket.
 * Injects model files and context documents into the conversation.
 */

import React, { useRef, useEffect, useState, useMemo } from 'react';
import { Send, Loader2, Sparkles, FileText, Wifi, WifiOff, Bot, User, File, AlertCircle } from 'lucide-react';
import { useChat } from '../../hooks/useChat';
import { useSpecStore, useSpec } from '../../hooks/useSpecStore';
import { MarkdownRenderer } from '../MarkdownRenderer';
import { ToolCallCard } from '../chat/ToolCallCard';

interface StudioChatProps {
  draftId: string;
  contextFiles: string[];
  contextContent: string;
  modelFiles: string[];
  onSpecUpdated: () => void;
}

export const StudioChat: React.FC<StudioChatProps> = ({
  draftId,
  contextFiles,
  contextContent,
  modelFiles,
  onSpecUpdated,
}) => {
  const messagesEndRef = useRef<HTMLDivElement>(null);
  const inputRef = useRef<HTMLTextAreaElement>(null);
  const [input, setInput] = useState('');
  const [hasInjectedContext, setHasInjectedContext] = useState(false);

  // Get spec store for canvas updates
  const spec = useSpec();
  const { reloadSpec, setSpecFromWebSocket } = useSpecStore();

  // Build canvas state with full context for Claude
  const canvasState = useMemo(() => ({
    nodes: [],
    edges: [],
    studyName: draftId,
    studyPath: `_inbox/${draftId}`,
    // Include file info for Claude context
    modelFiles,
    contextFiles,
    contextContent: contextContent.substring(0, 50000), // Limit context size
  }), [draftId, modelFiles, contextFiles, contextContent]);

  // Use the chat hook with WebSocket
  // Power mode gives Claude write permissions to modify the spec
  const {
    messages,
    isThinking,
    error,
    isConnected,
    sendMessage,
    updateCanvasState,
  } = useChat({
    studyId: draftId,
    mode: 'power', // Power mode = --dangerously-skip-permissions = can write files
    useWebSocket: true,
    canvasState,
    onError: (err) => console.error('[StudioChat] Error:', err),
    onSpecUpdated: (newSpec) => {
      // Claude modified the spec - update the store directly
      console.log('[StudioChat] Spec updated by Claude');
      setSpecFromWebSocket(newSpec, draftId);
      onSpecUpdated();
    },
    onCanvasModification: (modification) => {
      // Claude wants to modify canvas - reload the spec
      console.log('[StudioChat] Canvas modification:', modification);
      reloadSpec();
      onSpecUpdated();
    },
  });

  // Update canvas state when context changes
  useEffect(() => {
    updateCanvasState(canvasState);
  }, [canvasState, updateCanvasState]);

  // Scroll to bottom when messages change
  useEffect(() => {
    messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
  }, [messages]);

  // Auto-focus input
  useEffect(() => {
    inputRef.current?.focus();
  }, []);

  // Build context summary for display
  const contextSummary = useMemo(() => {
    const parts: string[] = [];
    if (modelFiles.length > 0) {
      parts.push(`${modelFiles.length} model file${modelFiles.length > 1 ? 's' : ''}`);
    }
    if (contextFiles.length > 0) {
      parts.push(`${contextFiles.length} context doc${contextFiles.length > 1 ? 's' : ''}`);
    }
    if (contextContent) {
      parts.push(`${contextContent.length.toLocaleString()} chars context`);
    }
    return parts.join(', ');
  }, [modelFiles, contextFiles, contextContent]);

  const handleSend = () => {
    if (!input.trim() || isThinking) return;

    let messageToSend = input.trim();

    // On first message, inject full context so Claude has everything it needs
    if (!hasInjectedContext && (modelFiles.length > 0 || contextContent)) {
      const contextParts: string[] = [];

      // Add model files info
      if (modelFiles.length > 0) {
        contextParts.push(`**Model Files Uploaded:**\n${modelFiles.map(f => `- ${f}`).join('\n')}`);
      }

      // Add context document content (full text)
      if (contextContent) {
        contextParts.push(`**Context Documents Content:**\n\`\`\`\n${contextContent.substring(0, 30000)}\n\`\`\``);
      }

      // Add current spec state
      if (spec) {
        const dvCount = spec.design_variables?.length || 0;
        const objCount = spec.objectives?.length || 0;
        const extCount = spec.extractors?.length || 0;
        if (dvCount > 0 || objCount > 0 || extCount > 0) {
          contextParts.push(`**Current Configuration:** ${dvCount} design variables, ${objCount} objectives, ${extCount} extractors
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (contextParts.length > 0) {
|
||||||
|
messageToSend = `${contextParts.join('\n\n')}\n\n---\n\n**User Request:** ${messageToSend}`;
|
||||||
|
}
|
||||||
|
|
||||||
|
setHasInjectedContext(true);
|
||||||
|
}
|
||||||
|
|
||||||
|
sendMessage(messageToSend);
|
||||||
|
setInput('');
|
||||||
|
};
|
||||||
|
|
||||||
|
const handleKeyDown = (e: React.KeyboardEvent) => {
|
||||||
|
if (e.key === 'Enter' && !e.shiftKey) {
|
||||||
|
e.preventDefault();
|
||||||
|
handleSend();
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
// Welcome message for empty state
|
||||||
|
const showWelcome = messages.length === 0;
|
||||||
|
|
||||||
|
// Check if we have any context
|
||||||
|
const hasContext = modelFiles.length > 0 || contextContent.length > 0;
|
||||||
|
|
||||||
|
return (
|
||||||
|
<div className="h-full flex flex-col">
|
||||||
|
{/* Header */}
|
||||||
|
<div className="p-3 border-b border-dark-700 flex-shrink-0">
|
||||||
|
<div className="flex items-center justify-between mb-2">
|
||||||
|
<div className="flex items-center gap-2">
|
||||||
|
<Sparkles className="w-5 h-5 text-primary-400" />
|
||||||
|
<span className="font-medium text-white">Studio Assistant</span>
|
||||||
|
</div>
|
||||||
|
<span className={`flex items-center gap-1 text-xs px-2 py-0.5 rounded ${
|
||||||
|
isConnected
|
||||||
|
? 'text-green-400 bg-green-400/10'
|
||||||
|
: 'text-red-400 bg-red-400/10'
|
||||||
|
}`}>
|
||||||
|
{isConnected ? <Wifi className="w-3 h-3" /> : <WifiOff className="w-3 h-3" />}
|
||||||
|
{isConnected ? 'Connected' : 'Disconnected'}
|
||||||
|
</span>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Context indicator */}
|
||||||
|
{contextSummary && (
|
||||||
|
<div className="flex items-center gap-2 text-xs">
|
||||||
|
<div className="flex items-center gap-1 text-amber-400 bg-amber-400/10 px-2 py-1 rounded">
|
||||||
|
<FileText className="w-3 h-3" />
|
||||||
|
<span>{contextSummary}</span>
|
||||||
|
</div>
|
||||||
|
{hasContext && !hasInjectedContext && (
|
||||||
|
<span className="text-dark-500">Will be sent with first message</span>
|
||||||
|
)}
|
||||||
|
{hasInjectedContext && (
|
||||||
|
<span className="text-green-500">Context sent</span>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Messages */}
|
||||||
|
<div className="flex-1 overflow-y-auto p-3 space-y-4">
|
||||||
|
{/* Welcome message with context awareness */}
|
||||||
|
{showWelcome && (
|
||||||
|
<div className="flex gap-3">
|
||||||
|
<div className="flex-shrink-0 w-8 h-8 rounded-lg flex items-center justify-center bg-primary-500/20 text-primary-400">
|
||||||
|
<Bot className="w-4 h-4" />
|
||||||
|
</div>
|
||||||
|
<div className="flex-1 bg-dark-700 rounded-lg px-4 py-3 text-sm text-dark-100">
|
||||||
|
<MarkdownRenderer content={hasContext
|
||||||
|
? `I can see you've uploaded files. Here's what I have access to:
|
||||||
|
|
||||||
|
${modelFiles.length > 0 ? `**Model Files:** ${modelFiles.join(', ')}` : ''}
|
||||||
|
${contextContent ? `\n**Context Document:** ${contextContent.substring(0, 200)}...` : ''}
|
||||||
|
|
||||||
|
Tell me what you want to optimize and I'll help you configure the study!`
|
||||||
|
: `Welcome to Atomizer Studio! I'm here to help you configure your optimization study.
|
||||||
|
|
||||||
|
**What I can do:**
|
||||||
|
- Read your uploaded context documents
|
||||||
|
- Help set up design variables, objectives, and constraints
|
||||||
|
- Create extractors for physics outputs
|
||||||
|
- Suggest optimization strategies
|
||||||
|
|
||||||
|
Upload your model files and any requirements documents, then tell me what you want to optimize!`} />
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
|
||||||
|
{/* File context display (only if we have files but no messages yet) */}
|
||||||
|
{showWelcome && modelFiles.length > 0 && (
|
||||||
|
<div className="bg-dark-800/50 rounded-lg p-3 border border-dark-700">
|
||||||
|
<p className="text-xs text-dark-400 mb-2 font-medium">Loaded Files:</p>
|
||||||
|
<div className="flex flex-wrap gap-2">
|
||||||
|
{modelFiles.map((file, idx) => (
|
||||||
|
<span key={idx} className="flex items-center gap-1 text-xs bg-blue-500/10 text-blue-400 px-2 py-1 rounded">
|
||||||
|
<File className="w-3 h-3" />
|
||||||
|
{file}
|
||||||
|
</span>
|
||||||
|
))}
|
||||||
|
{contextFiles.map((file, idx) => (
|
||||||
|
<span key={idx} className="flex items-center gap-1 text-xs bg-amber-500/10 text-amber-400 px-2 py-1 rounded">
|
||||||
|
<FileText className="w-3 h-3" />
|
||||||
|
{file}
|
||||||
|
</span>
|
||||||
|
))}
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
|
||||||
|
{/* Chat messages */}
|
||||||
|
{messages.map((msg) => {
|
||||||
|
const isAssistant = msg.role === 'assistant';
|
||||||
|
const isSystem = msg.role === 'system';
|
||||||
|
|
||||||
|
// System messages
|
||||||
|
if (isSystem) {
|
||||||
|
return (
|
||||||
|
<div key={msg.id} className="flex justify-center my-2">
|
||||||
|
<div className="px-3 py-1 bg-dark-700/50 rounded-full text-xs text-dark-400 border border-dark-600">
|
||||||
|
{msg.content}
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
return (
|
||||||
|
<div
|
||||||
|
key={msg.id}
|
||||||
|
className={`flex gap-3 ${isAssistant ? '' : 'flex-row-reverse'}`}
|
||||||
|
>
|
||||||
|
{/* Avatar */}
|
||||||
|
<div
|
||||||
|
className={`flex-shrink-0 w-8 h-8 rounded-lg flex items-center justify-center ${
|
||||||
|
isAssistant
|
||||||
|
? 'bg-primary-500/20 text-primary-400'
|
||||||
|
: 'bg-dark-600 text-dark-300'
|
||||||
|
}`}
|
||||||
|
>
|
||||||
|
{isAssistant ? <Bot className="w-4 h-4" /> : <User className="w-4 h-4" />}
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Message content */}
|
||||||
|
<div
|
||||||
|
className={`flex-1 max-w-[85%] rounded-lg px-4 py-3 text-sm ${
|
||||||
|
isAssistant
|
||||||
|
? 'bg-dark-700 text-dark-100'
|
||||||
|
: 'bg-primary-500 text-white ml-auto'
|
||||||
|
}`}
|
||||||
|
>
|
||||||
|
{isAssistant ? (
|
||||||
|
<>
|
||||||
|
{msg.content && <MarkdownRenderer content={msg.content} />}
|
||||||
|
{msg.isStreaming && !msg.content && (
|
||||||
|
<span className="text-dark-400">Thinking...</span>
|
||||||
|
)}
|
||||||
|
{/* Tool calls */}
|
||||||
|
{msg.toolCalls && msg.toolCalls.length > 0 && (
|
||||||
|
<div className="mt-3 space-y-2">
|
||||||
|
{msg.toolCalls.map((tool, idx) => (
|
||||||
|
<ToolCallCard key={idx} toolCall={tool} />
|
||||||
|
))}
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
</>
|
||||||
|
) : (
|
||||||
|
<span className="whitespace-pre-wrap">{msg.content}</span>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
})}
|
||||||
|
|
||||||
|
{/* Thinking indicator */}
|
||||||
|
{isThinking && messages.length > 0 && !messages[messages.length - 1]?.isStreaming && (
|
||||||
|
<div className="flex gap-3">
|
||||||
|
<div className="flex-shrink-0 w-8 h-8 rounded-lg flex items-center justify-center bg-primary-500/20 text-primary-400">
|
||||||
|
<Bot className="w-4 h-4" />
|
||||||
|
</div>
|
||||||
|
<div className="bg-dark-700 rounded-lg px-4 py-3 flex items-center gap-2">
|
||||||
|
<Loader2 className="w-4 h-4 text-primary-400 animate-spin" />
|
||||||
|
<span className="text-sm text-dark-300">Thinking...</span>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
|
||||||
|
{/* Error display */}
|
||||||
|
{error && (
|
||||||
|
<div className="flex gap-3">
|
||||||
|
<div className="flex-shrink-0 w-8 h-8 rounded-lg flex items-center justify-center bg-red-500/20 text-red-400">
|
||||||
|
<AlertCircle className="w-4 h-4" />
|
||||||
|
</div>
|
||||||
|
<div className="flex-1 px-4 py-3 bg-red-500/10 rounded-lg text-sm text-red-400 border border-red-500/30">
|
||||||
|
{error}
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
|
||||||
|
<div ref={messagesEndRef} />
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Input */}
|
||||||
|
<div className="p-3 border-t border-dark-700 flex-shrink-0">
|
||||||
|
<div className="flex gap-2">
|
||||||
|
<textarea
|
||||||
|
ref={inputRef}
|
||||||
|
value={input}
|
||||||
|
onChange={(e) => setInput(e.target.value)}
|
||||||
|
onKeyDown={handleKeyDown}
|
||||||
|
placeholder={isConnected ? "Ask about your optimization..." : "Connecting..."}
|
||||||
|
disabled={!isConnected}
|
||||||
|
rows={1}
|
||||||
|
className="flex-1 bg-dark-700 border border-dark-600 rounded-lg px-3 py-2 text-sm text-white placeholder-dark-400 resize-none focus:outline-none focus:border-primary-400 disabled:opacity-50"
|
||||||
|
/>
|
||||||
|
<button
|
||||||
|
onClick={handleSend}
|
||||||
|
disabled={!input.trim() || isThinking || !isConnected}
|
||||||
|
className="p-2 bg-primary-500 text-white rounded-lg hover:bg-primary-400 disabled:opacity-50 disabled:cursor-not-allowed transition-colors"
|
||||||
|
>
|
||||||
|
{isThinking ? (
|
||||||
|
<Loader2 className="w-5 h-5 animate-spin" />
|
||||||
|
) : (
|
||||||
|
<Send className="w-5 h-5" />
|
||||||
|
)}
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
{!isConnected && (
|
||||||
|
<p className="text-xs text-dark-500 mt-1">
|
||||||
|
Waiting for connection to Claude...
|
||||||
|
</p>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
};
|
||||||
|
|
||||||
|
export default StudioChat;
|
||||||
@@ -0,0 +1,117 @@
/**
 * StudioContextFiles - Context document upload and display
 */

import React, { useState, useRef } from 'react';
import { FileText, Upload, Trash2, Loader2 } from 'lucide-react';
import { intakeApi } from '../../api/intake';

interface StudioContextFilesProps {
  draftId: string;
  files: string[];
  onUploadComplete: () => void;
}

export const StudioContextFiles: React.FC<StudioContextFilesProps> = ({
  draftId,
  files,
  onUploadComplete,
}) => {
  const [isUploading, setIsUploading] = useState(false);
  const [deleting, setDeleting] = useState<string | null>(null);
  const fileInputRef = useRef<HTMLInputElement>(null);

  const VALID_EXTENSIONS = ['.md', '.txt', '.pdf', '.json', '.csv', '.docx'];

  const handleFileSelect = async (e: React.ChangeEvent<HTMLInputElement>) => {
    const selectedFiles = Array.from(e.target.files || []);
    if (selectedFiles.length === 0) return;

    e.target.value = '';
    setIsUploading(true);

    try {
      await intakeApi.uploadContextFiles(draftId, selectedFiles);
      onUploadComplete();
    } catch (err) {
      console.error('Failed to upload context files:', err);
    } finally {
      setIsUploading(false);
    }
  };

  const deleteFile = async (filename: string) => {
    setDeleting(filename);

    try {
      await intakeApi.deleteContextFile(draftId, filename);
      onUploadComplete();
    } catch (err) {
      console.error('Failed to delete context file:', err);
    } finally {
      setDeleting(null);
    }
  };

  const getFileIcon = (_filename: string) => {
    return <FileText className="w-3.5 h-3.5 text-amber-400" />;
  };

  return (
    <div className="space-y-2">
      {/* File List */}
      {files.length > 0 && (
        <div className="space-y-1">
          {files.map((name) => (
            <div
              key={name}
              className="flex items-center gap-2 px-2 py-1.5 rounded bg-dark-700/50 text-sm group"
            >
              {getFileIcon(name)}
              <span className="text-dark-200 truncate flex-1">{name}</span>
              <button
                onClick={() => deleteFile(name)}
                disabled={deleting === name}
                className="p-1 opacity-0 group-hover:opacity-100 hover:bg-red-500/20 rounded text-red-400 transition-all"
              >
                {deleting === name ? (
                  <Loader2 className="w-3 h-3 animate-spin" />
                ) : (
                  <Trash2 className="w-3 h-3" />
                )}
              </button>
            </div>
          ))}
        </div>
      )}

      {/* Upload Button */}
      <button
        onClick={() => fileInputRef.current?.click()}
        disabled={isUploading}
        className="w-full flex items-center justify-center gap-2 px-3 py-2 rounded-lg
                   border border-dashed border-dark-600 text-dark-400 text-sm
                   hover:border-primary-400/50 hover:text-primary-400 hover:bg-primary-400/5
                   disabled:opacity-50 transition-colors"
      >
        {isUploading ? (
          <Loader2 className="w-4 h-4 animate-spin" />
        ) : (
          <Upload className="w-4 h-4" />
        )}
        {isUploading ? 'Uploading...' : 'Add context files'}
      </button>

      <input
        ref={fileInputRef}
        type="file"
        multiple
        accept={VALID_EXTENSIONS.join(',')}
        onChange={handleFileSelect}
        className="hidden"
      />
    </div>
  );
};

export default StudioContextFiles;
@@ -0,0 +1,242 @@
/**
 * StudioDropZone - Smart file drop zone for Studio
 *
 * Handles both model files (.sim, .prt, .fem) and context files (.pdf, .md, .txt)
 */

import React, { useState, useCallback, useRef } from 'react';
import { Upload, X, Loader2, AlertCircle, CheckCircle, File } from 'lucide-react';
import { intakeApi } from '../../api/intake';

interface StudioDropZoneProps {
  draftId: string;
  type: 'model' | 'context';
  files: string[];
  onUploadComplete: () => void;
}

interface FileStatus {
  file: File;
  status: 'pending' | 'uploading' | 'success' | 'error';
  message?: string;
}

const MODEL_EXTENSIONS = ['.prt', '.sim', '.fem', '.afem'];
const CONTEXT_EXTENSIONS = ['.md', '.txt', '.pdf', '.json', '.csv', '.docx'];

export const StudioDropZone: React.FC<StudioDropZoneProps> = ({
  draftId,
  type,
  files,
  onUploadComplete,
}) => {
  const [isDragging, setIsDragging] = useState(false);
  const [pendingFiles, setPendingFiles] = useState<FileStatus[]>([]);
  const [isUploading, setIsUploading] = useState(false);
  const fileInputRef = useRef<HTMLInputElement>(null);

  const validExtensions = type === 'model' ? MODEL_EXTENSIONS : CONTEXT_EXTENSIONS;

  const validateFile = (file: File): { valid: boolean; reason?: string } => {
    const ext = '.' + file.name.split('.').pop()?.toLowerCase();
    if (!validExtensions.includes(ext)) {
      return { valid: false, reason: `Invalid type: ${ext}` };
    }
    if (file.size > 500 * 1024 * 1024) {
      return { valid: false, reason: 'File too large (max 500MB)' };
    }
    return { valid: true };
  };

  const handleDragEnter = useCallback((e: React.DragEvent) => {
    e.preventDefault();
    e.stopPropagation();
    setIsDragging(true);
  }, []);

  const handleDragLeave = useCallback((e: React.DragEvent) => {
    e.preventDefault();
    e.stopPropagation();
    setIsDragging(false);
  }, []);

  const handleDragOver = useCallback((e: React.DragEvent) => {
    e.preventDefault();
    e.stopPropagation();
  }, []);

  const addFiles = useCallback((newFiles: File[]) => {
    const validFiles: FileStatus[] = [];

    for (const file of newFiles) {
      if (pendingFiles.some(f => f.file.name === file.name)) {
        continue;
      }

      const validation = validateFile(file);
      validFiles.push({
        file,
        status: validation.valid ? 'pending' : 'error',
        message: validation.reason,
      });
    }

    setPendingFiles(prev => [...prev, ...validFiles]);
  }, [pendingFiles, validExtensions]);

  const handleDrop = useCallback((e: React.DragEvent) => {
    e.preventDefault();
    e.stopPropagation();
    setIsDragging(false);
    addFiles(Array.from(e.dataTransfer.files));
  }, [addFiles]);

  const handleFileSelect = useCallback((e: React.ChangeEvent<HTMLInputElement>) => {
    addFiles(Array.from(e.target.files || []));
    e.target.value = '';
  }, [addFiles]);

  const removeFile = (index: number) => {
    setPendingFiles(prev => prev.filter((_, i) => i !== index));
  };

  const uploadFiles = async () => {
    const toUpload = pendingFiles.filter(f => f.status === 'pending');
    if (toUpload.length === 0) return;

    setIsUploading(true);

    try {
      const uploadFn = type === 'model'
        ? intakeApi.uploadFiles
        : intakeApi.uploadContextFiles;

      const response = await uploadFn(draftId, toUpload.map(f => f.file));

      const results = new Map(
        response.uploaded_files.map(f => [f.name, f.status === 'uploaded'])
      );

      setPendingFiles(prev => prev.map(f => {
        if (f.status !== 'pending') return f;
        const success = results.get(f.file.name);
        return {
          ...f,
          status: success ? 'success' : 'error',
          message: success ? undefined : 'Upload failed',
        };
      }));

      setTimeout(() => {
        setPendingFiles(prev => prev.filter(f => f.status !== 'success'));
        onUploadComplete();
      }, 1000);

    } catch (err) {
      setPendingFiles(prev => prev.map(f =>
        f.status === 'pending'
          ? { ...f, status: 'error', message: 'Upload failed' }
          : f
      ));
    } finally {
      setIsUploading(false);
    }
  };

  // Auto-upload when files are added
  React.useEffect(() => {
    const pending = pendingFiles.filter(f => f.status === 'pending');
    if (pending.length > 0 && !isUploading) {
      uploadFiles();
    }
  }, [pendingFiles, isUploading]);

  return (
    <div className="space-y-2">
      {/* Drop Zone */}
      <div
        onDragEnter={handleDragEnter}
        onDragLeave={handleDragLeave}
        onDragOver={handleDragOver}
        onDrop={handleDrop}
        onClick={() => fileInputRef.current?.click()}
        className={`
          relative border-2 border-dashed rounded-lg p-4 cursor-pointer
          transition-all duration-200 text-center
          ${isDragging
            ? 'border-primary-400 bg-primary-400/5'
            : 'border-dark-600 hover:border-primary-400/50 hover:bg-white/5'
          }
        `}
      >
        <div className={`w-8 h-8 rounded-full flex items-center justify-center mx-auto mb-2
          ${isDragging ? 'bg-primary-400/20 text-primary-400' : 'bg-dark-700 text-dark-400'}`}>
          <Upload className="w-4 h-4" />
        </div>
        <p className="text-sm text-dark-300">
          {isDragging ? 'Drop files here' : 'Drop or click to add'}
        </p>
        <p className="text-xs text-dark-500 mt-1">
          {validExtensions.join(', ')}
        </p>
      </div>

      {/* Existing Files */}
      {files.length > 0 && (
        <div className="space-y-1">
          {files.map((name, i) => (
            <div
              key={i}
              className="flex items-center gap-2 px-2 py-1.5 rounded bg-dark-700/50 text-sm"
            >
              <File className="w-3.5 h-3.5 text-dark-400" />
              <span className="text-dark-200 truncate flex-1">{name}</span>
              <CheckCircle className="w-3.5 h-3.5 text-green-400" />
            </div>
          ))}
        </div>
      )}

      {/* Pending Files */}
      {pendingFiles.length > 0 && (
        <div className="space-y-1">
          {pendingFiles.map((f, i) => (
            <div
              key={i}
              className={`flex items-center gap-2 px-2 py-1.5 rounded text-sm
                ${f.status === 'error' ? 'bg-red-500/10' :
                  f.status === 'success' ? 'bg-green-500/10' : 'bg-dark-700'}`}
            >
              {f.status === 'pending' && <Loader2 className="w-3.5 h-3.5 text-primary-400 animate-spin" />}
              {f.status === 'uploading' && <Loader2 className="w-3.5 h-3.5 text-primary-400 animate-spin" />}
              {f.status === 'success' && <CheckCircle className="w-3.5 h-3.5 text-green-400" />}
              {f.status === 'error' && <AlertCircle className="w-3.5 h-3.5 text-red-400" />}
              <span className={`truncate flex-1 ${f.status === 'error' ? 'text-red-400' : 'text-dark-200'}`}>
                {f.file.name}
              </span>
              {f.message && (
                <span className="text-xs text-red-400">({f.message})</span>
              )}
              {f.status === 'pending' && (
                <button onClick={(e) => { e.stopPropagation(); removeFile(i); }} className="p-0.5 hover:bg-white/10 rounded">
                  <X className="w-3 h-3 text-dark-400" />
                </button>
              )}
            </div>
          ))}
        </div>
      )}

      <input
        ref={fileInputRef}
        type="file"
        multiple
        accept={validExtensions.join(',')}
        onChange={handleFileSelect}
        className="hidden"
      />
    </div>
  );
};

export default StudioDropZone;
@@ -0,0 +1,172 @@
/**
 * StudioParameterList - Display and add discovered parameters as design variables
 */

import React, { useState, useEffect } from 'react';
import { Plus, Check, SlidersHorizontal, Loader2 } from 'lucide-react';
import { intakeApi } from '../../api/intake';

interface Expression {
  name: string;
  value: number | null;
  units: string | null;
  is_candidate: boolean;
  confidence: number;
}

interface StudioParameterListProps {
  draftId: string;
  onParameterAdded: () => void;
}

export const StudioParameterList: React.FC<StudioParameterListProps> = ({
  draftId,
  onParameterAdded,
}) => {
  const [expressions, setExpressions] = useState<Expression[]>([]);
  const [addedParams, setAddedParams] = useState<Set<string>>(new Set());
  const [adding, setAdding] = useState<string | null>(null);
  const [loading, setLoading] = useState(true);

  // Load expressions from spec introspection
  useEffect(() => {
    loadExpressions();
  }, [draftId]);

  const loadExpressions = async () => {
    setLoading(true);
    try {
      const data = await intakeApi.getStudioDraft(draftId);
      const introspection = (data.spec as any)?.model?.introspection;

      if (introspection?.expressions) {
        setExpressions(introspection.expressions);

        // Check which are already added as DVs
        const existingDVs = new Set<string>(
          ((data.spec as any)?.design_variables || []).map((dv: any) => dv.expression_name as string)
        );
        setAddedParams(existingDVs);
      }
    } catch (err) {
      console.error('Failed to load expressions:', err);
    } finally {
      setLoading(false);
    }
  };

  const addAsDesignVariable = async (expressionName: string) => {
    setAdding(expressionName);

    try {
      await intakeApi.createDesignVariables(draftId, [expressionName]);
      setAddedParams(prev => new Set([...prev, expressionName]));
      onParameterAdded();
    } catch (err) {
      console.error('Failed to add design variable:', err);
    } finally {
      setAdding(null);
    }
  };

  // Sort: candidates first, then by confidence
  const sortedExpressions = [...expressions].sort((a, b) => {
    if (a.is_candidate !== b.is_candidate) {
      return b.is_candidate ? 1 : -1;
    }
    return (b.confidence || 0) - (a.confidence || 0);
  });

  // Show only candidates by default, with option to show all
  const [showAll, setShowAll] = useState(false);
  const displayExpressions = showAll
    ? sortedExpressions
    : sortedExpressions.filter(e => e.is_candidate);

  if (loading) {
    return (
      <div className="flex items-center justify-center py-4">
        <Loader2 className="w-5 h-5 text-primary-400 animate-spin" />
      </div>
    );
  }

  if (expressions.length === 0) {
    return (
      <p className="text-xs text-dark-500 italic py-2">
        No expressions found. Try running introspection.
      </p>
    );
  }

  const candidateCount = expressions.filter(e => e.is_candidate).length;

  return (
    <div className="space-y-2">
      {/* Header with toggle */}
      <div className="flex items-center justify-between text-xs text-dark-400">
        <span>{candidateCount} candidates</span>
        <button
          onClick={() => setShowAll(!showAll)}
          className="hover:text-primary-400 transition-colors"
        >
          {showAll ? 'Show candidates only' : `Show all (${expressions.length})`}
        </button>
      </div>

      {/* Parameter List */}
      <div className="space-y-1 max-h-48 overflow-y-auto">
        {displayExpressions.map((expr) => {
          const isAdded = addedParams.has(expr.name);
          const isAdding = adding === expr.name;

          return (
            <div
              key={expr.name}
              className={`flex items-center gap-2 px-2 py-1.5 rounded text-sm
                ${isAdded ? 'bg-green-500/10' : 'bg-dark-700/50 hover:bg-dark-700'}
                transition-colors`}
            >
              <SlidersHorizontal className="w-3.5 h-3.5 text-dark-400 flex-shrink-0" />
              <div className="flex-1 min-w-0">
                <span className={`block truncate ${isAdded ? 'text-green-400' : 'text-dark-200'}`}>
                  {expr.name}
                </span>
                {expr.value !== null && (
                  <span className="text-xs text-dark-500">
                    = {expr.value}{expr.units ? ` ${expr.units}` : ''}
                  </span>
                )}
              </div>

              {isAdded ? (
                <Check className="w-4 h-4 text-green-400 flex-shrink-0" />
              ) : (
                <button
                  onClick={() => addAsDesignVariable(expr.name)}
                  disabled={isAdding}
                  className="p-1 hover:bg-primary-400/20 rounded text-primary-400 transition-colors disabled:opacity-50"
                  title="Add as design variable"
                >
                  {isAdding ? (
                    <Loader2 className="w-3.5 h-3.5 animate-spin" />
                  ) : (
                    <Plus className="w-3.5 h-3.5" />
                  )}
                </button>
              )}
            </div>
          );
        })}
      </div>

      {displayExpressions.length === 0 && (
        <p className="text-xs text-dark-500 italic py-2">
          No candidate parameters found. Click "Show all" to see all expressions.
        </p>
      )}
    </div>
  );
};

export default StudioParameterList;
atomizer-dashboard/frontend/src/components/studio/index.ts (Normal file, +11)
@@ -0,0 +1,11 @@
/**
 * Studio Components Index
 *
 * Export all Studio-related components.
 */

export { StudioDropZone } from './StudioDropZone';
export { StudioParameterList } from './StudioParameterList';
export { StudioContextFiles } from './StudioContextFiles';
export { StudioChat } from './StudioChat';
export { StudioBuildDialog } from './StudioBuildDialog';
@@ -18,12 +18,15 @@ import {
  FolderOpen,
  Maximize2,
  X,
  Layers,
  Sparkles,
  Settings2
} from 'lucide-react';
import { useStudy } from '../context/StudyContext';
import { Study } from '../types';
import { apiClient } from '../api/client';
import { MarkdownRenderer } from '../components/MarkdownRenderer';
import { InboxSection } from '../components/intake';

const Home: React.FC = () => {
  const { studies, setSelectedStudy, refreshStudies, isLoading } = useStudy();
@@ -174,6 +177,18 @@ const Home: React.FC = () => {
           />
         </div>
         <div className="flex items-center gap-3">
+          <button
+            onClick={() => navigate('/studio')}
+            className="flex items-center gap-2 px-4 py-2 rounded-lg transition-all font-medium hover:-translate-y-0.5"
+            style={{
+              background: 'linear-gradient(135deg, #f59e0b 0%, #d97706 100%)',
+              color: '#000',
+              boxShadow: '0 4px 15px rgba(245, 158, 11, 0.3)'
+            }}
+          >
+            <Sparkles className="w-4 h-4" />
+            New Study
+          </button>
           <button
             onClick={() => navigate('/canvas')}
             className="flex items-center gap-2 px-4 py-2 rounded-lg transition-all font-medium hover:-translate-y-0.5"
@@ -250,6 +265,11 @@ const Home: React.FC = () => {
         </div>
       </div>
+
+      {/* Inbox Section - Study Creation Workflow */}
+      <div className="mb-8">
+        <InboxSection onStudyFinalized={refreshStudies} />
+      </div>

       {/* Two-column layout: Table + Preview */}
       <div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
         {/* Study Table */}
@@ -407,6 +427,19 @@ const Home: React.FC = () => {
                   <Layers className="w-4 h-4" />
                   Canvas
                 </button>
+                <button
+                  onClick={() => navigate(`/studio/${selectedPreview.id}`)}
+                  className="flex items-center gap-2 px-4 py-2.5 rounded-lg transition-all font-medium whitespace-nowrap hover:-translate-y-0.5"
+                  style={{
+                    background: 'rgba(8, 15, 26, 0.85)',
+                    border: '1px solid rgba(245, 158, 11, 0.3)',
+                    color: '#f59e0b'
+                  }}
+                  title="Edit study configuration with AI assistant"
+                >
+                  <Settings2 className="w-4 h-4" />
+                  Studio
+                </button>
                 <button
                   onClick={() => handleSelectStudy(selectedPreview)}
                   className="flex items-center gap-2 px-5 py-2.5 rounded-lg transition-all font-semibold whitespace-nowrap hover:-translate-y-0.5"
atomizer-dashboard/frontend/src/pages/Studio.tsx (new file, 672 lines)
@@ -0,0 +1,672 @@
/**
 * Atomizer Studio - Unified Study Creation Environment
 *
 * A drag-and-drop workspace for creating optimization studies with:
 * - File upload (models + context documents)
 * - Visual canvas configuration
 * - AI-powered assistance
 * - One-click build to final study
 */

import { useState, useEffect, useCallback, useRef } from 'react';
import { useNavigate, useParams } from 'react-router-dom';
import {
  Home,
  ChevronRight,
  Upload,
  FileText,
  Settings,
  Sparkles,
  Save,
  RefreshCw,
  Trash2,
  MessageSquare,
  Layers,
  CheckCircle,
  AlertCircle,
  Loader2,
  X,
  ChevronLeft,
  ChevronRight as ChevronRightIcon,
  GripVertical,
} from 'lucide-react';
import { intakeApi } from '../api/intake';
import { SpecRenderer } from '../components/canvas/SpecRenderer';
import { NodePalette } from '../components/canvas/palette/NodePalette';
import { NodeConfigPanelV2 } from '../components/canvas/panels/NodeConfigPanelV2';
import { useSpecStore, useSpec, useSpecLoading } from '../hooks/useSpecStore';
import { StudioDropZone } from '../components/studio/StudioDropZone';
import { StudioParameterList } from '../components/studio/StudioParameterList';
import { StudioContextFiles } from '../components/studio/StudioContextFiles';
import { StudioChat } from '../components/studio/StudioChat';
import { StudioBuildDialog } from '../components/studio/StudioBuildDialog';

interface DraftState {
  draftId: string | null;
  status: 'idle' | 'creating' | 'ready' | 'error';
  error: string | null;
  modelFiles: string[];
  contextFiles: string[];
  contextContent: string;
  introspectionAvailable: boolean;
  designVariableCount: number;
  objectiveCount: number;
}

export default function Studio() {
  const navigate = useNavigate();
  const { draftId: urlDraftId } = useParams<{ draftId: string }>();

  // Draft state
  const [draft, setDraft] = useState<DraftState>({
    draftId: null,
    status: 'idle',
    error: null,
    modelFiles: [],
    contextFiles: [],
    contextContent: '',
    introspectionAvailable: false,
    designVariableCount: 0,
    objectiveCount: 0,
  });

  // UI state
  const [leftPanelWidth, setLeftPanelWidth] = useState(320);
  const [rightPanelCollapsed, setRightPanelCollapsed] = useState(false);
  const [showBuildDialog, setShowBuildDialog] = useState(false);
  const [isIntrospecting, setIsIntrospecting] = useState(false);
  const [notification, setNotification] = useState<{ type: 'success' | 'error' | 'info'; message: string } | null>(null);

  // Resize state
  const isResizing = useRef(false);
  const minPanelWidth = 280;
  const maxPanelWidth = 500;

  // Spec store for canvas
  const spec = useSpec();
  const specLoading = useSpecLoading();
  const { loadSpec, clearSpec } = useSpecStore();

  // Handle panel resize
  const handleMouseDown = useCallback((e: React.MouseEvent) => {
    e.preventDefault();
    isResizing.current = true;
    document.body.style.cursor = 'col-resize';
    document.body.style.userSelect = 'none';
  }, []);

  useEffect(() => {
    const handleMouseMove = (e: MouseEvent) => {
      if (!isResizing.current) return;
      const newWidth = Math.min(maxPanelWidth, Math.max(minPanelWidth, e.clientX));
      setLeftPanelWidth(newWidth);
    };

    const handleMouseUp = () => {
      isResizing.current = false;
      document.body.style.cursor = '';
      document.body.style.userSelect = '';
    };

    document.addEventListener('mousemove', handleMouseMove);
    document.addEventListener('mouseup', handleMouseUp);

    return () => {
      document.removeEventListener('mousemove', handleMouseMove);
      document.removeEventListener('mouseup', handleMouseUp);
    };
  }, []);

  // Initialize or load draft on mount
  useEffect(() => {
    if (urlDraftId) {
      loadDraft(urlDraftId);
    } else {
      createNewDraft();
    }

    return () => {
      // Cleanup: clear spec when leaving Studio
      clearSpec();
    };
  }, [urlDraftId]);

  // Create a new draft
  const createNewDraft = async () => {
    setDraft(prev => ({ ...prev, status: 'creating', error: null }));

    try {
      const response = await intakeApi.createDraft();

      setDraft({
        draftId: response.draft_id,
        status: 'ready',
        error: null,
        modelFiles: [],
        contextFiles: [],
        contextContent: '',
        introspectionAvailable: false,
        designVariableCount: 0,
        objectiveCount: 0,
      });

      // Update URL without navigation
      window.history.replaceState(null, '', `/studio/${response.draft_id}`);

      // Load the empty spec for this draft
      await loadSpec(response.draft_id);

      showNotification('info', 'New studio session started. Drop your files to begin.');
    } catch (err) {
      setDraft(prev => ({
        ...prev,
        status: 'error',
        error: err instanceof Error ? err.message : 'Failed to create draft',
      }));
    }
  };

  // Load existing draft or study
  const loadDraft = async (studyId: string) => {
    setDraft(prev => ({ ...prev, status: 'creating', error: null }));

    // Check if this is a draft (in _inbox) or an existing study
    const isDraft = studyId.startsWith('draft_');

    if (isDraft) {
      // Load from intake API
      try {
        const response = await intakeApi.getStudioDraft(studyId);

        // Also load context content if there are context files
        let contextContent = '';
        if (response.context_files.length > 0) {
          try {
            const contextResponse = await intakeApi.getContextContent(studyId);
            contextContent = contextResponse.content;
          } catch {
            // Ignore context loading errors
          }
        }

        setDraft({
          draftId: response.draft_id,
          status: 'ready',
          error: null,
          modelFiles: response.model_files,
          contextFiles: response.context_files,
          contextContent,
          introspectionAvailable: response.introspection_available,
          designVariableCount: response.design_variable_count,
          objectiveCount: response.objective_count,
        });

        // Load the spec
        await loadSpec(studyId);

        showNotification('info', `Resuming draft: ${studyId}`);
      } catch (err) {
        // Draft doesn't exist, create new one
        createNewDraft();
      }
    } else {
      // Load existing study directly via spec store
      try {
        await loadSpec(studyId);

        // Get counts from loaded spec
        const loadedSpec = useSpecStore.getState().spec;

        setDraft({
          draftId: studyId,
          status: 'ready',
          error: null,
          modelFiles: [], // Existing studies don't track files separately
          contextFiles: [],
          contextContent: '',
          introspectionAvailable: true, // Assume introspection was done
          designVariableCount: loadedSpec?.design_variables?.length || 0,
          objectiveCount: loadedSpec?.objectives?.length || 0,
        });

        showNotification('info', `Editing study: ${studyId}`);
      } catch (err) {
        setDraft(prev => ({
          ...prev,
          status: 'error',
          error: err instanceof Error ? err.message : 'Failed to load study',
        }));
      }
    }
  };

  // Refresh draft data
  const refreshDraft = async () => {
    if (!draft.draftId) return;

    const isDraft = draft.draftId.startsWith('draft_');

    if (isDraft) {
      try {
        const response = await intakeApi.getStudioDraft(draft.draftId);

        // Also refresh context content
        let contextContent = draft.contextContent;
        if (response.context_files.length > 0) {
          try {
            const contextResponse = await intakeApi.getContextContent(draft.draftId);
            contextContent = contextResponse.content;
          } catch {
            // Keep existing content
          }
        }

        setDraft(prev => ({
          ...prev,
          modelFiles: response.model_files,
          contextFiles: response.context_files,
          contextContent,
          introspectionAvailable: response.introspection_available,
          designVariableCount: response.design_variable_count,
          objectiveCount: response.objective_count,
        }));

        // Reload spec
        await loadSpec(draft.draftId);
      } catch (err) {
        showNotification('error', 'Failed to refresh draft');
      }
    } else {
      // For existing studies, just reload the spec
      try {
        await loadSpec(draft.draftId);

        const loadedSpec = useSpecStore.getState().spec;
        setDraft(prev => ({
          ...prev,
          designVariableCount: loadedSpec?.design_variables?.length || 0,
          objectiveCount: loadedSpec?.objectives?.length || 0,
        }));
      } catch (err) {
        showNotification('error', 'Failed to refresh study');
      }
    }
  };

  // Run introspection
  const runIntrospection = async () => {
    if (!draft.draftId || draft.modelFiles.length === 0) {
      showNotification('error', 'Please upload model files first');
      return;
    }

    setIsIntrospecting(true);

    try {
      const response = await intakeApi.introspect({ study_name: draft.draftId });

      showNotification('success', `Found ${response.expressions_count} expressions (${response.candidates_count} candidates)`);

      // Refresh draft state
      await refreshDraft();
    } catch (err) {
      showNotification('error', err instanceof Error ? err.message : 'Introspection failed');
    } finally {
      setIsIntrospecting(false);
    }
  };

  // Handle file upload complete
  const handleUploadComplete = useCallback(() => {
    refreshDraft();
    showNotification('success', 'Files uploaded successfully');
  }, [draft.draftId]);

  // Handle build complete
  const handleBuildComplete = (finalPath: string, finalName: string) => {
    setShowBuildDialog(false);
    showNotification('success', `Study "${finalName}" created successfully!`);

    // Navigate to the new study
    setTimeout(() => {
      navigate(`/canvas/${finalPath.replace('studies/', '')}`);
    }, 1500);
  };

  // Reset draft
  const resetDraft = async () => {
    if (!draft.draftId) return;

    if (!confirm('Are you sure you want to reset? This will delete all uploaded files and configurations.')) {
      return;
    }

    try {
      await intakeApi.deleteInboxStudy(draft.draftId);
      await createNewDraft();
    } catch (err) {
      showNotification('error', 'Failed to reset draft');
    }
  };

  // Show notification
  const showNotification = (type: 'success' | 'error' | 'info', message: string) => {
    setNotification({ type, message });
    setTimeout(() => setNotification(null), 4000);
  };

  // Can always save/build - even empty studies can be saved for later
  const canBuild = draft.draftId !== null;

  // Loading state
  if (draft.status === 'creating') {
    return (
      <div className="min-h-screen bg-dark-900 flex items-center justify-center">
        <div className="text-center">
          <Loader2 className="w-12 h-12 text-primary-400 animate-spin mx-auto mb-4" />
          <p className="text-dark-300">Initializing Studio...</p>
        </div>
      </div>
    );
  }

  // Error state
  if (draft.status === 'error') {
    return (
      <div className="min-h-screen bg-dark-900 flex items-center justify-center">
        <div className="text-center max-w-md">
          <AlertCircle className="w-12 h-12 text-red-400 mx-auto mb-4" />
          <h2 className="text-xl font-semibold text-white mb-2">Failed to Initialize</h2>
          <p className="text-dark-400 mb-4">{draft.error}</p>
          <button
            onClick={createNewDraft}
            className="px-4 py-2 bg-primary-500 text-white rounded-lg hover:bg-primary-400 transition-colors"
          >
            Try Again
          </button>
        </div>
      </div>
    );
  }

  return (
    <div className="min-h-screen bg-dark-900 flex flex-col">
      {/* Header */}
      <header className="h-14 bg-dark-850 border-b border-dark-700 flex items-center justify-between px-4 flex-shrink-0">
        {/* Left: Navigation */}
        <div className="flex items-center gap-3">
          <button
            onClick={() => navigate('/')}
            className="p-2 hover:bg-dark-700 rounded-lg text-dark-400 hover:text-white transition-colors"
          >
            <Home className="w-5 h-5" />
          </button>
          <ChevronRight className="w-4 h-4 text-dark-600" />
          <div className="flex items-center gap-2">
            <Sparkles className="w-5 h-5 text-primary-400" />
            <span className="text-white font-medium">Atomizer Studio</span>
          </div>
          {draft.draftId && (
            <>
              <ChevronRight className="w-4 h-4 text-dark-600" />
              <span className="text-dark-400 text-sm font-mono">{draft.draftId}</span>
            </>
          )}
        </div>

        {/* Right: Actions */}
        <div className="flex items-center gap-2">
          <button
            onClick={resetDraft}
            className="flex items-center gap-2 px-3 py-1.5 text-sm text-dark-400 hover:text-white hover:bg-dark-700 rounded-lg transition-colors"
          >
            <Trash2 className="w-4 h-4" />
            Reset
          </button>
          <button
            onClick={() => setShowBuildDialog(true)}
            disabled={!canBuild}
            className="flex items-center gap-2 px-4 py-1.5 text-sm font-medium bg-primary-500 text-white rounded-lg hover:bg-primary-400 disabled:opacity-50 disabled:cursor-not-allowed transition-colors"
          >
            <Save className="w-4 h-4" />
            Save & Name Study
          </button>
        </div>
      </header>

      {/* Main Content */}
      <div className="flex-1 flex overflow-hidden">
        {/* Left Panel: Resources (Resizable) */}
        <div
          className="bg-dark-850 border-r border-dark-700 flex flex-col flex-shrink-0 relative"
          style={{ width: leftPanelWidth }}
        >
          <div className="flex-1 overflow-y-auto p-4 space-y-6">
            {/* Drop Zone */}
            <section>
              <h3 className="text-sm font-medium text-dark-300 mb-3 flex items-center gap-2">
                <Upload className="w-4 h-4" />
                Model Files
              </h3>
              {draft.draftId && (
                <StudioDropZone
                  draftId={draft.draftId}
                  type="model"
                  files={draft.modelFiles}
                  onUploadComplete={handleUploadComplete}
                />
              )}
            </section>

            {/* Introspection */}
            {draft.modelFiles.length > 0 && (
              <section>
                <div className="flex items-center justify-between mb-3">
                  <h3 className="text-sm font-medium text-dark-300 flex items-center gap-2">
                    <Settings className="w-4 h-4" />
                    Parameters
                  </h3>
                  <button
                    onClick={runIntrospection}
                    disabled={isIntrospecting}
                    className="flex items-center gap-1 px-2 py-1 text-xs text-primary-400 hover:bg-primary-400/10 rounded transition-colors disabled:opacity-50"
                  >
                    {isIntrospecting ? (
                      <Loader2 className="w-3 h-3 animate-spin" />
                    ) : (
                      <RefreshCw className="w-3 h-3" />
                    )}
                    {isIntrospecting ? 'Scanning...' : 'Scan'}
                  </button>
                </div>
                {draft.draftId && draft.introspectionAvailable && (
                  <StudioParameterList
                    draftId={draft.draftId}
                    onParameterAdded={refreshDraft}
                  />
                )}
                {!draft.introspectionAvailable && (
                  <p className="text-xs text-dark-500 italic">
                    Click "Scan" to discover parameters from your model.
                  </p>
                )}
              </section>
            )}

            {/* Context Files */}
            <section>
              <h3 className="text-sm font-medium text-dark-300 mb-3 flex items-center gap-2">
                <FileText className="w-4 h-4" />
                Context Documents
              </h3>
              {draft.draftId && (
                <StudioContextFiles
                  draftId={draft.draftId}
                  files={draft.contextFiles}
                  onUploadComplete={handleUploadComplete}
                />
              )}
              <p className="text-xs text-dark-500 mt-2">
                Upload requirements, goals, or specs. The AI will read these.
              </p>

              {/* Show context preview if loaded */}
              {draft.contextContent && (
                <div className="mt-3 p-2 bg-dark-700/50 rounded-lg border border-dark-600">
                  <p className="text-xs text-amber-400 mb-1 font-medium">Context Loaded:</p>
                  <p className="text-xs text-dark-400 line-clamp-3">
                    {draft.contextContent.substring(0, 200)}...
                  </p>
                </div>
              )}
            </section>

            {/* Node Palette - EXPANDED, not collapsed */}
            <section>
              <h3 className="text-sm font-medium text-dark-300 mb-3 flex items-center gap-2">
                <Layers className="w-4 h-4" />
                Components
              </h3>
              <NodePalette
                collapsed={false}
                showToggle={false}
                className="!w-full !border-0 !bg-transparent"
              />
            </section>
          </div>

          {/* Resize Handle */}
          <div
            className="absolute right-0 top-0 bottom-0 w-1 cursor-col-resize hover:bg-primary-500/50 transition-colors group"
            onMouseDown={handleMouseDown}
          >
            <div className="absolute right-0 top-1/2 -translate-y-1/2 w-4 h-8 flex items-center justify-center opacity-0 group-hover:opacity-100 transition-opacity">
              <GripVertical className="w-3 h-3 text-dark-400" />
            </div>
          </div>
        </div>

        {/* Center: Canvas */}
        <div className="flex-1 relative bg-dark-900">
          {draft.draftId && (
            <SpecRenderer
              studyId={draft.draftId}
              editable={true}
              showLoadingOverlay={false}
            />
          )}

          {/* Empty state */}
          {!specLoading && (!spec || Object.keys(spec).length === 0) && (
            <div className="absolute inset-0 flex items-center justify-center pointer-events-none">
              <div className="text-center max-w-md p-8">
                <div className="w-20 h-20 rounded-full bg-dark-800 flex items-center justify-center mx-auto mb-6">
                  <Sparkles className="w-10 h-10 text-primary-400" />
                </div>
                <h2 className="text-2xl font-semibold text-white mb-3">
                  Welcome to Atomizer Studio
                </h2>
                <p className="text-dark-400 mb-6">
                  Drop your model files on the left, or drag components from the palette to start building your optimization study.
                </p>
                <div className="flex flex-col gap-2 text-sm text-dark-500">
                  <div className="flex items-center gap-2">
                    <CheckCircle className="w-4 h-4 text-green-400" />
                    <span>Upload .sim, .prt, .fem files</span>
                  </div>
                  <div className="flex items-center gap-2">
                    <CheckCircle className="w-4 h-4 text-green-400" />
                    <span>Add context documents (PDF, MD, TXT)</span>
                  </div>
                  <div className="flex items-center gap-2">
                    <CheckCircle className="w-4 h-4 text-green-400" />
                    <span>Configure with AI assistance</span>
                  </div>
                </div>
              </div>
            </div>
          )}
        </div>

        {/* Right Panel: Assistant + Config - wider for better chat UX */}
        <div
          className={`bg-dark-850 border-l border-dark-700 flex flex-col transition-all duration-300 flex-shrink-0 ${
            rightPanelCollapsed ? 'w-12' : 'w-[480px]'
          }`}
        >
          {/* Collapse toggle */}
          <button
            onClick={() => setRightPanelCollapsed(!rightPanelCollapsed)}
            className="absolute right-0 top-1/2 -translate-y-1/2 z-10 p-1 bg-dark-700 border border-dark-600 rounded-l-lg hover:bg-dark-600 transition-colors"
            style={{ marginRight: rightPanelCollapsed ? '48px' : '480px' }}
          >
            {rightPanelCollapsed ? (
              <ChevronLeft className="w-4 h-4 text-dark-400" />
            ) : (
              <ChevronRightIcon className="w-4 h-4 text-dark-400" />
            )}
          </button>

          {!rightPanelCollapsed && (
            <div className="flex-1 flex flex-col overflow-hidden">
              {/* Chat */}
              <div className="flex-1 overflow-hidden">
                {draft.draftId && (
                  <StudioChat
                    draftId={draft.draftId}
                    contextFiles={draft.contextFiles}
                    contextContent={draft.contextContent}
                    modelFiles={draft.modelFiles}
                    onSpecUpdated={refreshDraft}
                  />
                )}
              </div>

              {/* Config Panel (when node selected) */}
              <NodeConfigPanelV2 />
            </div>
          )}

          {rightPanelCollapsed && (
            <div className="flex flex-col items-center py-4 gap-4">
              <MessageSquare className="w-5 h-5 text-dark-400" />
            </div>
          )}
        </div>
      </div>

      {/* Notification Toast */}
      {notification && (
        <div
          className={`fixed bottom-4 right-4 flex items-center gap-3 px-4 py-3 rounded-lg shadow-lg z-50 animate-slide-up ${
            notification.type === 'success'
              ? 'bg-green-500/10 border border-green-500/30 text-green-400'
              : notification.type === 'error'
              ? 'bg-red-500/10 border border-red-500/30 text-red-400'
              : 'bg-primary-500/10 border border-primary-500/30 text-primary-400'
          }`}
        >
          {notification.type === 'success' && <CheckCircle className="w-5 h-5" />}
          {notification.type === 'error' && <AlertCircle className="w-5 h-5" />}
          {notification.type === 'info' && <Sparkles className="w-5 h-5" />}
          <span>{notification.message}</span>
          <button
            onClick={() => setNotification(null)}
            className="p-1 hover:bg-white/10 rounded"
          >
            <X className="w-4 h-4" />
          </button>
        </div>
      )}

      {/* Build Dialog */}
      {showBuildDialog && draft.draftId && (
        <StudioBuildDialog
          draftId={draft.draftId}
          onClose={() => setShowBuildDialog(false)}
          onBuildComplete={handleBuildComplete}
        />
      )}
    </div>
  );
}
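The resize handler in Studio.tsx clamps the new panel width with `Math.min(maxPanelWidth, Math.max(minPanelWidth, e.clientX))`. Extracted as a pure helper (the function name `clampPanelWidth` is an assumption; the 280/500 bounds mirror `minPanelWidth`/`maxPanelWidth` above), the logic is easy to verify in isolation:

```typescript
// Sketch: pure version of the panel-width clamp used by the mousemove
// handler. Values below min snap to min, above max snap to max.
function clampPanelWidth(x: number, min = 280, max = 500): number {
  return Math.min(max, Math.max(min, x));
}
```

Keeping the clamp as a pure function also lets the mousemove handler stay a one-liner: `setLeftPanelWidth(clampPanelWidth(e.clientX))`.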
atomizer-dashboard/frontend/src/types/intake.ts (new file, 201 lines)
@@ -0,0 +1,201 @@
/**
 * Intake Workflow TypeScript Types
 *
 * Types for the study intake/creation workflow.
 */

// ============================================================================
// Status Types
// ============================================================================

export type SpecStatus =
  | 'draft'
  | 'introspected'
  | 'configured'
  | 'validated'
  | 'ready'
  | 'running'
  | 'completed'
  | 'failed';

// ============================================================================
// Expression/Introspection Types
// ============================================================================

export interface ExpressionInfo {
  /** Expression name in NX */
  name: string;
  /** Current value */
  value: number | null;
  /** Physical units */
  units: string | null;
  /** Expression formula if any */
  formula: string | null;
  /** Whether this is a design variable candidate */
  is_candidate: boolean;
  /** Confidence that this is a DV (0-1) */
  confidence: number;
}

export interface BaselineData {
  /** When baseline was run */
  timestamp: string;
  /** How long the solve took */
  solve_time_seconds: number;
  /** Computed mass from BDF/FEM */
  mass_kg: number | null;
  /** Max displacement result */
  max_displacement_mm: number | null;
  /** Max von Mises stress */
  max_stress_mpa: number | null;
  /** Whether baseline solve succeeded */
  success: boolean;
  /** Error message if failed */
  error: string | null;
}

export interface IntrospectionData {
  /** When introspection was run */
  timestamp: string;
  /** Detected solver type */
  solver_type: string | null;
  /** Mass from expressions or properties */
  mass_kg: number | null;
  /** Volume from mass properties */
  volume_mm3: number | null;
  /** Discovered expressions */
  expressions: ExpressionInfo[];
  /** Baseline solve results */
  baseline: BaselineData | null;
  /** Warnings from introspection */
  warnings: string[];
}

// ============================================================================
// Request/Response Types
// ============================================================================

export interface CreateInboxRequest {
  study_name: string;
  description?: string;
  topic?: string;
}

export interface CreateInboxResponse {
  success: boolean;
  study_name: string;
  inbox_path: string;
  spec_path: string;
  status: SpecStatus;
}

export interface IntrospectRequest {
  study_name: string;
  model_file?: string;
}

export interface IntrospectResponse {
  success: boolean;
  study_name: string;
  status: SpecStatus;
  expressions_count: number;
  candidates_count: number;
  mass_kg: number | null;
  warnings: string[];
}

export interface InboxStudy {
  study_name: string;
  status: SpecStatus;
  description: string | null;
  topic: string | null;
  created: string | null;
  modified: string | null;
  model_files: string[];
  has_context: boolean;
}

export interface ListInboxResponse {
  studies: InboxStudy[];
  total: number;
}

export interface TopicInfo {
  name: string;
  study_count: number;
  path: string;
}

export interface ListTopicsResponse {
  topics: TopicInfo[];
  total: number;
}

export interface InboxStudyDetail {
  study_name: string;
  inbox_path: string;
  spec: import('./atomizer-spec').AtomizerSpec;
  files: {
    sim: string[];
    prt: string[];
    fem: string[];
  };
  context_files: string[];
}

// ============================================================================
|
// Finalize Types
|
||||||
|
// ============================================================================
|
||||||
|
|
||||||
|
export interface FinalizeRequest {
|
||||||
|
topic: string;
|
||||||
|
run_baseline?: boolean;
|
||||||
|
}
|
||||||
|
|
||||||
|
export interface FinalizeProgress {
|
||||||
|
step: string;
|
||||||
|
progress: number;
|
||||||
|
message: string;
|
||||||
|
completed: boolean;
|
||||||
|
error?: string;
|
||||||
|
}
|
||||||
|
|
||||||
|
export interface FinalizeResponse {
|
||||||
|
success: boolean;
|
||||||
|
study_name: string;
|
||||||
|
final_path: string;
|
||||||
|
status: SpecStatus;
|
||||||
|
baseline?: BaselineData;
|
||||||
|
readme_generated: boolean;
|
||||||
|
}
|
||||||
|
|
||||||
|
// ============================================================================
|
||||||
|
// README Generation Types
|
||||||
|
// ============================================================================
|
||||||
|
|
||||||
|
export interface GenerateReadmeRequest {
|
||||||
|
study_name: string;
|
||||||
|
}
|
||||||
|
|
||||||
|
export interface GenerateReadmeResponse {
|
||||||
|
success: boolean;
|
||||||
|
content: string;
|
||||||
|
path: string;
|
||||||
|
}
|
||||||
|
|
||||||
|
// ============================================================================
|
||||||
|
// Upload Types
|
||||||
|
// ============================================================================
|
||||||
|
|
||||||
|
export interface UploadFilesResponse {
|
||||||
|
success: boolean;
|
||||||
|
study_name: string;
|
||||||
|
uploaded_files: Array<{
|
||||||
|
name: string;
|
||||||
|
status: 'uploaded' | 'rejected' | 'skipped';
|
||||||
|
path?: string;
|
||||||
|
size?: number;
|
||||||
|
reason?: string;
|
||||||
|
}>;
|
||||||
|
total_uploaded: number;
|
||||||
|
}
|
||||||
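These response shapes lend themselves to small type-safe helpers on the client. As a sketch (the helper `tallyUploads` is illustrative only, not part of the dashboard codebase), a function that folds the `uploaded_files` array of an `UploadFilesResponse`-shaped payload into per-status counts could look like:

```typescript
// Illustrative helper, not part of the dashboard code: tally per-status
// counts from the uploaded_files array of an UploadFilesResponse.
type UploadStatus = 'uploaded' | 'rejected' | 'skipped';

interface UploadedFile {
  name: string;
  status: UploadStatus;
  path?: string;
  size?: number;
  reason?: string;
}

function tallyUploads(files: UploadedFile[]): Record<UploadStatus, number> {
  const counts: Record<UploadStatus, number> = { uploaded: 0, rejected: 0, skipped: 0 };
  for (const f of files) {
    counts[f.status] += 1;
  }
  return counts;
}

const counts = tallyUploads([
  { name: 'bracket.sim', status: 'uploaded', path: '/inbox/bracket.sim', size: 1024 },
  { name: 'notes.txt', status: 'rejected', reason: 'unsupported extension' },
]);
console.log(counts); // { uploaded: 1, rejected: 1, skipped: 0 }
```

Using the literal union for `status` lets the compiler reject any count key the server never sends.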
4 atomizer-dashboard/frontend/test-results/.last-run.json Normal file
@@ -0,0 +1,4 @@
{
  "status": "passed",
  "failedTests": []
}
171 atomizer-dashboard/frontend/tests/e2e/home.spec.ts Normal file
@@ -0,0 +1,171 @@
import { test, expect } from '@playwright/test';

/**
 * Home Page E2E Tests
 *
 * Tests the study list page at /
 * Covers: study loading, topic expansion, navigation
 */

test.describe('Home Page - Study List', () => {

  test.beforeEach(async ({ page }) => {
    // Navigate to home page
    await page.goto('/');
  });

  test('displays page header', async ({ page }) => {
    // Check header is visible
    await expect(page.locator('header')).toBeVisible();

    // Check for key header elements - Studies heading (exact match to avoid Inbox Studies)
    await expect(page.getByRole('heading', { name: 'Studies', exact: true })).toBeVisible({ timeout: 10000 });
  });

  test('shows aggregate statistics cards', async ({ page }) => {
    // Wait for stats to load
    await expect(page.getByText('Total Studies')).toBeVisible();
    await expect(page.getByText('Running')).toBeVisible();
    await expect(page.getByText('Total Trials')).toBeVisible();
    await expect(page.getByText('Best Overall')).toBeVisible();
  });

  test('loads studies table with topic folders', async ({ page }) => {
    // Wait for studies section (exact match to avoid Inbox Studies)
    await expect(page.getByRole('heading', { name: 'Studies', exact: true })).toBeVisible();

    // Wait for loading to complete - either see folders or empty state
    // Folders have "trials" text in them
    const folderLocator = page.locator('button:has-text("trials")');
    const emptyStateLocator = page.getByText('No studies found');

    // Wait for either studies loaded or empty state (10s timeout)
    await expect(folderLocator.first().or(emptyStateLocator)).toBeVisible({ timeout: 10000 });
  });

  test('expands topic folder to show studies', async ({ page }) => {
    // Wait for folders to load
    const folderButton = page.locator('button:has-text("trials")').first();

    // Wait for folder to be visible (studies loaded)
    await expect(folderButton).toBeVisible({ timeout: 10000 });

    // Click to expand
    await folderButton.click();

    // After expansion, study rows should be visible (they have status badges)
    // Status badges contain: running, completed, idle, paused, not_started
    const statusBadges = page.locator('span:has-text("running"), span:has-text("completed"), span:has-text("idle"), span:has-text("paused"), span:has-text("not_started")');
    await expect(statusBadges.first()).toBeVisible({ timeout: 5000 });
  });

  test('clicking study shows preview panel', async ({ page }) => {
    // Wait for and expand first folder
    const folderButton = page.locator('button:has-text("trials")').first();
    await expect(folderButton).toBeVisible({ timeout: 10000 });
    await folderButton.click();

    // Wait for expanded content and click first study row
    const studyRow = page.locator('.bg-dark-850\\/50 > div').first();
    await expect(studyRow).toBeVisible({ timeout: 5000 });
    await studyRow.click();

    // Preview panel should show with buttons - use exact match to avoid header nav button
    await expect(page.getByRole('button', { name: 'Canvas', exact: true })).toBeVisible({ timeout: 5000 });
    await expect(page.getByRole('button', { name: 'Open' })).toBeVisible();
  });

  test('Open button navigates to dashboard', async ({ page }) => {
    // Wait for and expand first folder
    const folderButton = page.locator('button:has-text("trials")').first();
    await expect(folderButton).toBeVisible({ timeout: 10000 });
    await folderButton.click();

    // Wait for and click study row
    const studyRow = page.locator('.bg-dark-850\\/50 > div').first();
    await expect(studyRow).toBeVisible({ timeout: 5000 });
    await studyRow.click();

    // Wait for and click Open button
    const openButton = page.getByRole('button', { name: 'Open' });
    await expect(openButton).toBeVisible({ timeout: 5000 });
    await openButton.click();

    // Should navigate to dashboard
    await expect(page).toHaveURL(/\/dashboard/);
  });

  test('Canvas button navigates to canvas view', async ({ page }) => {
    // Wait for and expand first folder
    const folderButton = page.locator('button:has-text("trials")').first();
    await expect(folderButton).toBeVisible({ timeout: 10000 });
    await folderButton.click();

    // Wait for and click study row
    const studyRow = page.locator('.bg-dark-850\\/50 > div').first();
    await expect(studyRow).toBeVisible({ timeout: 5000 });
    await studyRow.click();

    // Wait for and click Canvas button (exact match to avoid header nav)
    const canvasButton = page.getByRole('button', { name: 'Canvas', exact: true });
    await expect(canvasButton).toBeVisible({ timeout: 5000 });
    await canvasButton.click();

    // Should navigate to canvas
    await expect(page).toHaveURL(/\/canvas\//);
  });

  test('refresh button reloads studies', async ({ page }) => {
    // Find the main studies section refresh button (the one with visible text "Refresh")
    const refreshButton = page.getByText('Refresh');
    await expect(refreshButton).toBeVisible({ timeout: 5000 });

    // Click refresh
    await refreshButton.click();

    // Should show loading state or complete quickly
    // Just verify no errors occurred (exact match to avoid Inbox Studies)
    await expect(page.getByRole('heading', { name: 'Studies', exact: true })).toBeVisible();
  });
});

/**
 * Inbox Section Tests
 *
 * Tests the new study intake workflow
 */
test.describe('Home Page - Inbox Section', () => {

  test.beforeEach(async ({ page }) => {
    await page.goto('/');
  });

  test('displays inbox section with header', async ({ page }) => {
    // Check for Study Inbox heading (section is expanded by default)
    const inboxHeading = page.getByRole('heading', { name: 'Study Inbox' });
    await expect(inboxHeading).toBeVisible({ timeout: 10000 });
  });

  test('inbox section shows pending count', async ({ page }) => {
    // Section should show pending studies count
    const pendingText = page.getByText(/\d+ pending studies/);
    await expect(pendingText).toBeVisible({ timeout: 10000 });
  });

  test('inbox has new study button', async ({ page }) => {
    // Section is expanded by default, look for the New Study button
    const newStudyButton = page.getByRole('button', { name: /New Study/ });
    await expect(newStudyButton).toBeVisible({ timeout: 10000 });
  });

  test('clicking new study shows create form', async ({ page }) => {
    // Click the New Study button
    const newStudyButton = page.getByRole('button', { name: /New Study/ });
    await expect(newStudyButton).toBeVisible({ timeout: 10000 });
    await newStudyButton.click();

    // Form should expand with input fields
    const studyNameInput = page.getByPlaceholder(/my_study/i).or(page.locator('input[type="text"]').first());
    await expect(studyNameInput).toBeVisible({ timeout: 5000 });
  });
});
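These specs navigate with relative URLs (`page.goto('/')`), which only resolves if the Playwright config supplies a `baseURL`. A minimal sketch of such a config, assuming the dashboard dev server runs on a local port (the port, file name, and test directory here are assumptions, not taken from the repo):

```typescript
// playwright.config.ts - minimal sketch; the port and testDir are assumptions.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests/e2e',
  use: {
    // Makes page.goto('/') resolve against the dashboard dev server.
    baseURL: 'http://localhost:5173',
  },
});
```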
390
atomizer.py
390
atomizer.py
@@ -34,26 +34,42 @@ from typing import Optional
|
|||||||
PROJECT_ROOT = Path(__file__).parent
|
PROJECT_ROOT = Path(__file__).parent
|
||||||
sys.path.insert(0, str(PROJECT_ROOT))
|
sys.path.insert(0, str(PROJECT_ROOT))
|
||||||
|
|
||||||
from optimization_engine.processors.surrogates.auto_trainer import AutoTrainer, check_training_status
|
from optimization_engine.processors.surrogates.auto_trainer import (
|
||||||
|
AutoTrainer,
|
||||||
|
check_training_status,
|
||||||
|
)
|
||||||
from optimization_engine.config.template_loader import (
|
from optimization_engine.config.template_loader import (
|
||||||
create_study_from_template,
|
create_study_from_template,
|
||||||
list_templates,
|
list_templates,
|
||||||
get_template
|
get_template,
|
||||||
)
|
|
||||||
from optimization_engine.validators.study_validator import (
|
|
||||||
validate_study,
|
|
||||||
list_studies,
|
|
||||||
quick_check
|
|
||||||
)
|
)
|
||||||
|
from optimization_engine.validators.study_validator import validate_study, list_studies, quick_check
|
||||||
|
|
||||||
|
|
||||||
|
# New UX System imports (lazy loaded to avoid import errors)
|
||||||
|
def get_intake_processor():
|
||||||
|
from optimization_engine.intake import IntakeProcessor
|
||||||
|
|
||||||
|
return IntakeProcessor
|
||||||
|
|
||||||
|
|
||||||
|
def get_validation_gate():
|
||||||
|
from optimization_engine.validation import ValidationGate
|
||||||
|
|
||||||
|
return ValidationGate
|
||||||
|
|
||||||
|
|
||||||
|
def get_report_generator():
|
||||||
|
from optimization_engine.reporting.html_report import HTMLReportGenerator
|
||||||
|
|
||||||
|
return HTMLReportGenerator
|
||||||
|
|
||||||
|
|
||||||
def setup_logging(verbose: bool = False) -> None:
|
def setup_logging(verbose: bool = False) -> None:
|
||||||
"""Configure logging."""
|
"""Configure logging."""
|
||||||
level = logging.DEBUG if verbose else logging.INFO
|
level = logging.DEBUG if verbose else logging.INFO
|
||||||
logging.basicConfig(
|
logging.basicConfig(
|
||||||
level=level,
|
level=level, format="%(asctime)s [%(levelname)s] %(message)s", datefmt="%H:%M:%S"
|
||||||
format='%(asctime)s [%(levelname)s] %(message)s',
|
|
||||||
datefmt='%H:%M:%S'
|
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
@@ -95,7 +111,7 @@ def cmd_neural_optimize(args) -> int:
|
|||||||
study_name=args.study,
|
study_name=args.study,
|
||||||
min_points=args.min_points,
|
min_points=args.min_points,
|
||||||
epochs=args.epochs,
|
epochs=args.epochs,
|
||||||
retrain_threshold=args.retrain_every
|
retrain_threshold=args.retrain_every,
|
||||||
)
|
)
|
||||||
|
|
||||||
status = trainer.get_status()
|
status = trainer.get_status()
|
||||||
@@ -103,8 +119,8 @@ def cmd_neural_optimize(args) -> int:
|
|||||||
print(f" Model version: v{status['model_version']}")
|
print(f" Model version: v{status['model_version']}")
|
||||||
|
|
||||||
# Determine workflow phase
|
# Determine workflow phase
|
||||||
has_trained_model = status['model_version'] > 0
|
has_trained_model = status["model_version"] > 0
|
||||||
current_points = status['total_points']
|
current_points = status["total_points"]
|
||||||
|
|
||||||
if has_trained_model and current_points >= args.min_points:
|
if has_trained_model and current_points >= args.min_points:
|
||||||
print("\n[3/5] Neural model available - starting neural-accelerated optimization...")
|
print("\n[3/5] Neural model available - starting neural-accelerated optimization...")
|
||||||
@@ -138,11 +154,7 @@ def _run_exploration_phase(args, trainer: AutoTrainer) -> int:
|
|||||||
# Run FEA optimization
|
# Run FEA optimization
|
||||||
import subprocess
|
import subprocess
|
||||||
|
|
||||||
cmd = [
|
cmd = [sys.executable, str(run_script), "--trials", str(fea_trials)]
|
||||||
sys.executable,
|
|
||||||
str(run_script),
|
|
||||||
"--trials", str(fea_trials)
|
|
||||||
]
|
|
||||||
|
|
||||||
if args.resume:
|
if args.resume:
|
||||||
cmd.append("--resume")
|
cmd.append("--resume")
|
||||||
@@ -155,7 +167,7 @@ def _run_exploration_phase(args, trainer: AutoTrainer) -> int:
|
|||||||
elapsed = time.time() - start_time
|
elapsed = time.time() - start_time
|
||||||
|
|
||||||
print("-" * 60)
|
print("-" * 60)
|
||||||
print(f"FEA optimization completed in {elapsed/60:.1f} minutes")
|
print(f"FEA optimization completed in {elapsed / 60:.1f} minutes")
|
||||||
|
|
||||||
# Check if we can now train
|
# Check if we can now train
|
||||||
print("\n[5/5] Checking training data...")
|
print("\n[5/5] Checking training data...")
|
||||||
@@ -169,7 +181,7 @@ def _run_exploration_phase(args, trainer: AutoTrainer) -> int:
|
|||||||
print(" Training failed - check logs")
|
print(" Training failed - check logs")
|
||||||
else:
|
else:
|
||||||
status = trainer.get_status()
|
status = trainer.get_status()
|
||||||
remaining = args.min_points - status['total_points']
|
remaining = args.min_points - status["total_points"]
|
||||||
print(f" {status['total_points']} points collected")
|
print(f" {status['total_points']} points collected")
|
||||||
print(f" Need {remaining} more for neural training")
|
print(f" Need {remaining} more for neural training")
|
||||||
|
|
||||||
@@ -188,12 +200,7 @@ def _run_neural_phase(args, trainer: AutoTrainer) -> int:
|
|||||||
# Run with neural acceleration
|
# Run with neural acceleration
|
||||||
import subprocess
|
import subprocess
|
||||||
|
|
||||||
cmd = [
|
cmd = [sys.executable, str(run_script), "--trials", str(args.trials), "--enable-nn"]
|
||||||
sys.executable,
|
|
||||||
str(run_script),
|
|
||||||
"--trials", str(args.trials),
|
|
||||||
"--enable-nn"
|
|
||||||
]
|
|
||||||
|
|
||||||
if args.resume:
|
if args.resume:
|
||||||
cmd.append("--resume")
|
cmd.append("--resume")
|
||||||
@@ -206,7 +213,7 @@ def _run_neural_phase(args, trainer: AutoTrainer) -> int:
|
|||||||
elapsed = time.time() - start_time
|
elapsed = time.time() - start_time
|
||||||
|
|
||||||
print("-" * 60)
|
print("-" * 60)
|
||||||
print(f"Neural optimization completed in {elapsed/60:.1f} minutes")
|
print(f"Neural optimization completed in {elapsed / 60:.1f} minutes")
|
||||||
|
|
||||||
# Check for retraining
|
# Check for retraining
|
||||||
print("\n[5/5] Checking if retraining needed...")
|
print("\n[5/5] Checking if retraining needed...")
|
||||||
@@ -228,10 +235,7 @@ def cmd_create_study(args) -> int:
|
|||||||
print(f"Creating study '{args.name}' from template '{args.template}'...")
|
print(f"Creating study '{args.name}' from template '{args.template}'...")
|
||||||
|
|
||||||
try:
|
try:
|
||||||
study_path = create_study_from_template(
|
study_path = create_study_from_template(template_name=args.template, study_name=args.name)
|
||||||
template_name=args.template,
|
|
||||||
study_name=args.name
|
|
||||||
)
|
|
||||||
print(f"\nSuccess! Study created at: {study_path}")
|
print(f"\nSuccess! Study created at: {study_path}")
|
||||||
return 0
|
return 0
|
||||||
except FileNotFoundError as e:
|
except FileNotFoundError as e:
|
||||||
@@ -290,7 +294,7 @@ def cmd_status(args) -> int:
|
|||||||
print(f" Model version: v{status['model_version']}")
|
print(f" Model version: v{status['model_version']}")
|
||||||
print(f" Should train: {status['should_train']}")
|
print(f" Should train: {status['should_train']}")
|
||||||
|
|
||||||
if status['latest_model']:
|
if status["latest_model"]:
|
||||||
print(f" Latest model: {status['latest_model']}")
|
print(f" Latest model: {status['latest_model']}")
|
||||||
|
|
||||||
else:
|
else:
|
||||||
@@ -305,8 +309,8 @@ def cmd_status(args) -> int:
|
|||||||
|
|
||||||
for study in studies:
|
for study in studies:
|
||||||
icon = "[OK]" if study["is_ready"] else "[!]"
|
icon = "[OK]" if study["is_ready"] else "[!]"
|
||||||
trials_info = f"{study['trials']} trials" if study['trials'] > 0 else "no trials"
|
trials_info = f"{study['trials']} trials" if study["trials"] > 0 else "no trials"
|
||||||
pareto_info = f", {study['pareto']} Pareto" if study['pareto'] > 0 else ""
|
pareto_info = f", {study['pareto']} Pareto" if study["pareto"] > 0 else ""
|
||||||
print(f" {icon} {study['name']}")
|
print(f" {icon} {study['name']}")
|
||||||
print(f" Status: {study['status']} ({trials_info}{pareto_info})")
|
print(f" Status: {study['status']} ({trials_info}{pareto_info})")
|
||||||
|
|
||||||
@@ -317,11 +321,7 @@ def cmd_train(args) -> int:
|
|||||||
"""Trigger neural network training."""
|
"""Trigger neural network training."""
|
||||||
print(f"Training neural model for study: {args.study}")
|
print(f"Training neural model for study: {args.study}")
|
||||||
|
|
||||||
trainer = AutoTrainer(
|
trainer = AutoTrainer(study_name=args.study, min_points=args.min_points, epochs=args.epochs)
|
||||||
study_name=args.study,
|
|
||||||
min_points=args.min_points,
|
|
||||||
epochs=args.epochs
|
|
||||||
)
|
|
||||||
|
|
||||||
status = trainer.get_status()
|
status = trainer.get_status()
|
||||||
print(f"\nCurrent status:")
|
print(f"\nCurrent status:")
|
||||||
@@ -329,8 +329,10 @@ def cmd_train(args) -> int:
|
|||||||
print(f" Min threshold: {args.min_points}")
|
print(f" Min threshold: {args.min_points}")
|
||||||
|
|
||||||
if args.force or trainer.should_train():
|
if args.force or trainer.should_train():
|
||||||
if args.force and status['total_points'] < args.min_points:
|
if args.force and status["total_points"] < args.min_points:
|
||||||
print(f"\nWarning: Force training with {status['total_points']} points (< {args.min_points})")
|
print(
|
||||||
|
f"\nWarning: Force training with {status['total_points']} points (< {args.min_points})"
|
||||||
|
)
|
||||||
|
|
||||||
print("\nStarting training...")
|
print("\nStarting training...")
|
||||||
model_path = trainer.train()
|
model_path = trainer.train()
|
||||||
@@ -342,7 +344,7 @@ def cmd_train(args) -> int:
|
|||||||
print("\nTraining failed - check logs")
|
print("\nTraining failed - check logs")
|
||||||
return 1
|
return 1
|
||||||
else:
|
else:
|
||||||
needed = args.min_points - status['total_points']
|
needed = args.min_points - status["total_points"]
|
||||||
print(f"\nNot enough data for training. Need {needed} more points.")
|
print(f"\nNot enough data for training. Need {needed} more points.")
|
||||||
print("Use --force to train anyway.")
|
print("Use --force to train anyway.")
|
||||||
return 1
|
return 1
|
||||||
@@ -355,6 +357,269 @@ def cmd_validate(args) -> int:
|
|||||||
return 0 if validation.is_ready_to_run else 1
|
return 0 if validation.is_ready_to_run else 1
|
||||||
|
|
||||||
|
|
||||||
|
# ============================================================================
|
||||||
|
# NEW UX SYSTEM COMMANDS
|
||||||
|
# ============================================================================
|
||||||
|
|
||||||
|
|
||||||
|
def cmd_intake(args) -> int:
|
||||||
|
"""Process an intake folder into a study."""
|
||||||
|
IntakeProcessor = get_intake_processor()
|
||||||
|
|
||||||
|
# Determine inbox folder
|
||||||
|
inbox_path = Path(args.folder)
|
||||||
|
|
||||||
|
if not inbox_path.is_absolute():
|
||||||
|
inbox_dir = PROJECT_ROOT / "studies" / "_inbox"
|
||||||
|
if (inbox_dir / args.folder).exists():
|
||||||
|
inbox_path = inbox_dir / args.folder
|
||||||
|
elif (PROJECT_ROOT / "studies" / args.folder).exists():
|
||||||
|
inbox_path = PROJECT_ROOT / "studies" / args.folder
|
||||||
|
|
||||||
|
if not inbox_path.exists():
|
||||||
|
print(f"Error: Folder not found: {inbox_path}")
|
||||||
|
return 1
|
||||||
|
|
||||||
|
print(f"Processing intake: {inbox_path}")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
def progress(message: str, percent: float):
|
||||||
|
bar_width = 30
|
||||||
|
filled = int(bar_width * percent)
|
||||||
|
bar = "=" * filled + "-" * (bar_width - filled)
|
||||||
|
print(f"\r[{bar}] {percent * 100:5.1f}% {message}", end="", flush=True)
|
||||||
|
if percent >= 1.0:
|
||||||
|
print()
|
||||||
|
|
||||||
|
try:
|
||||||
|
processor = IntakeProcessor(inbox_path, progress_callback=progress)
|
||||||
|
context = processor.process(
|
||||||
|
run_baseline=not args.skip_baseline,
|
||||||
|
copy_files=True,
|
||||||
|
run_introspection=True,
|
||||||
|
)
|
||||||
|
|
||||||
|
print("\n" + "=" * 60)
|
||||||
|
print("INTAKE COMPLETE")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
summary = context.get_context_summary()
|
||||||
|
print(f"\nStudy: {context.study_name}")
|
||||||
|
print(f"Location: {processor.study_dir}")
|
||||||
|
print(f"\nContext loaded:")
|
||||||
|
print(f" Model: {'Yes' if summary['has_model'] else 'No'}")
|
||||||
|
print(f" Introspection: {'Yes' if summary['has_introspection'] else 'No'}")
|
||||||
|
print(f" Baseline: {'Yes' if summary['has_baseline'] else 'No'}")
|
||||||
|
print(
|
||||||
|
f" Expressions: {summary['num_expressions']} ({summary['num_dv_candidates']} candidates)"
|
||||||
|
)
|
||||||
|
|
||||||
|
if context.has_baseline:
|
||||||
|
print(f"\nBaseline: {context.get_baseline_summary()}")
|
||||||
|
|
||||||
|
if summary["warnings"]:
|
||||||
|
print(f"\nWarnings:")
|
||||||
|
for w in summary["warnings"]:
|
||||||
|
print(f" - {w}")
|
||||||
|
|
||||||
|
print(f"\nNext: atomizer gate {context.study_name}")
|
||||||
|
return 0
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
print(f"\nError: {e}")
|
||||||
|
if args.verbose:
|
||||||
|
import traceback
|
||||||
|
|
||||||
|
traceback.print_exc()
|
||||||
|
return 1
|
||||||
|
|
||||||
|
|
||||||
|
def cmd_gate(args) -> int:
|
||||||
|
"""Run validation gate before optimization."""
|
||||||
|
ValidationGate = get_validation_gate()
|
||||||
|
|
||||||
|
study_path = Path(args.study)
|
||||||
|
if not study_path.is_absolute():
|
||||||
|
study_path = PROJECT_ROOT / "studies" / args.study
|
||||||
|
|
||||||
|
if not study_path.exists():
|
||||||
|
print(f"Error: Study not found: {study_path}")
|
||||||
|
return 1
|
||||||
|
|
||||||
|
print(f"Validation Gate: {study_path.name}")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
def progress(message: str, percent: float):
|
||||||
|
bar_width = 30
|
||||||
|
filled = int(bar_width * percent)
|
||||||
|
bar = "=" * filled + "-" * (bar_width - filled)
|
||||||
|
print(f"\r[{bar}] {percent * 100:5.1f}% {message}", end="", flush=True)
|
||||||
|
if percent >= 1.0:
|
||||||
|
print()
|
||||||
|
|
||||||
|
try:
|
||||||
|
gate = ValidationGate(study_path, progress_callback=progress)
|
||||||
|
result = gate.validate(
|
||||||
|
run_test_trials=not args.skip_trials,
|
||||||
|
n_test_trials=args.trials,
|
||||||
|
)
|
||||||
|
|
||||||
|
print("\n" + "=" * 60)
|
||||||
|
if result.passed:
|
||||||
|
print("VALIDATION PASSED")
|
||||||
|
else:
|
||||||
|
print("VALIDATION FAILED")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
# Show test trials
|
||||||
|
if result.test_trials:
|
||||||
|
print(
|
||||||
|
f"\nTest Trials: {len([t for t in result.test_trials if t.success])}/{len(result.test_trials)} passed"
|
||||||
|
)
|
||||||
|
|
||||||
|
if result.results_vary:
|
||||||
|
print("Results vary: Yes (mesh updating correctly)")
|
||||||
|
else:
|
||||||
|
print("Results vary: NO - MESH MAY NOT BE UPDATING!")
|
||||||
|
|
||||||
|
# Results table
|
||||||
|
print(f"\n{'Trial':<8} {'Status':<8} {'Time':<8}", end="")
|
||||||
|
if result.test_trials and result.test_trials[0].objectives:
|
||||||
|
for obj in list(result.test_trials[0].objectives.keys())[:3]:
|
||||||
|
print(f" {obj[:10]:<12}", end="")
|
||||||
|
print()
|
||||||
|
|
||||||
|
for trial in result.test_trials:
|
||||||
|
status = "OK" if trial.success else "FAIL"
|
||||||
|
print(
|
||||||
|
f"{trial.trial_number:<8} {status:<8} {trial.solve_time_seconds:<8.1f}", end=""
|
||||||
|
)
|
||||||
|
for val in list(trial.objectives.values())[:3]:
|
||||||
|
print(f" {val:<12.4f}", end="")
|
||||||
|
print()
|
||||||
|
|
||||||
|
# Runtime estimate
|
||||||
|
if result.avg_solve_time:
|
||||||
|
print(f"\nRuntime Estimate:")
|
||||||
|
print(f" Avg solve: {result.avg_solve_time:.1f}s")
|
||||||
|
if result.estimated_total_runtime:
|
||||||
|
print(f" Total: {result.estimated_total_runtime / 3600:.1f}h")
|
||||||
|
|
||||||
|
# Errors
|
||||||
|
if result.errors:
|
||||||
|
print(f"\nErrors:")
|
||||||
|
for err in result.errors:
|
||||||
|
print(f" - {err}")
|
||||||
|
|
||||||
|
if result.passed and args.approve:
|
||||||
|
gate.approve()
|
||||||
|
print(f"\nStudy approved for optimization!")
|
||||||
|
elif result.passed:
|
||||||
|
print(f"\nTo approve: atomizer gate {args.study} --approve")
|
||||||
|
|
||||||
|
gate.save_result(result)
|
||||||
|
return 0 if result.passed else 1
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
print(f"\nError: {e}")
|
||||||
|
if args.verbose:
|
||||||
|
import traceback
|
||||||
|
|
||||||
|
traceback.print_exc()
|
||||||
|
return 1
|
||||||
|
|
||||||
|
|
||||||
|
def cmd_finalize(args) -> int:
|
||||||
|
"""Generate final report for a study."""
|
||||||
|
HTMLReportGenerator = get_report_generator()
|
||||||
|
|
||||||
|
study_path = Path(args.study)
|
||||||
|
if not study_path.is_absolute():
|
||||||
|
study_path = PROJECT_ROOT / "studies" / args.study
|
||||||
|
|
||||||
|
if not study_path.exists():
|
||||||
|
print(f"Error: Study not found: {study_path}")
|
||||||
|
return 1
|
||||||
|
|
||||||
|
print(f"Generating report for: {study_path.name}")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
try:
|
||||||
|
generator = HTMLReportGenerator(study_path)
|
||||||
|
report_path = generator.generate(include_pdf=getattr(args, "pdf", False))
|
            print("\nReport generated successfully!")
            print(f"  HTML: {report_path}")
            print(f"  Data: {report_path.parent / 'data'}")

            if getattr(args, "open", False):
                import webbrowser

                webbrowser.open(str(report_path))
            else:
                print(f"\nOpen in browser: file://{report_path}")

            return 0

    except Exception as e:
        print(f"\nError: {e}")
        if args.verbose:
            import traceback

            traceback.print_exc()
        return 1


def cmd_list_studies(args) -> int:
    """List all studies and inbox items."""
    studies_dir = PROJECT_ROOT / "studies"

    print("Atomizer Studies")
    print("=" * 60)

    # Inbox items
    inbox_dir = studies_dir / "_inbox"
    if inbox_dir.exists():
        inbox_items = [d for d in inbox_dir.iterdir() if d.is_dir() and not d.name.startswith(".")]
        if inbox_items:
            print("\nPending Intake (_inbox/):")
            for item in sorted(inbox_items):
                has_config = (item / "intake.yaml").exists()
                has_model = bool(list(item.glob("**/*.sim")))
                status = []
                if has_config:
                    status.append("yaml")
                if has_model:
                    status.append("model")
                print(f"  {item.name:<30} [{', '.join(status) or 'empty'}]")

    # Active studies
    print("\nStudies:")
    for study_dir in sorted(studies_dir.iterdir()):
        if (
            study_dir.is_dir()
            and not study_dir.name.startswith("_")
            and not study_dir.name.startswith(".")
        ):
            has_spec = (study_dir / "atomizer_spec.json").exists() or (
                study_dir / "optimization_config.json"
            ).exists()
            has_db = any(study_dir.rglob("study.db"))
            has_approval = (study_dir / ".validation_approved").exists()

            status = []
            if has_spec:
                status.append("configured")
            if has_approval:
                status.append("approved")
            if has_db:
                status.append("has_data")

            print(f"  {study_dir.name:<30} [{', '.join(status) or 'new'}]")

    return 0
def main():
    parser = argparse.ArgumentParser(
        description="Atomizer - Neural-Accelerated Structural Optimization",
@@ -372,7 +637,7 @@ Examples:

  # Manual training
  python atomizer.py train --study my_study --epochs 100
""",
    )

    parser.add_argument("-v", "--verbose", action="store_true", help="Verbose output")
@@ -381,13 +646,14 @@ Examples:

    # neural-optimize command
    neural_parser = subparsers.add_parser(
        "neural-optimize", help="Run neural-accelerated optimization (main workflow)"
    )
    neural_parser.add_argument("--study", "-s", required=True, help="Study name")
    neural_parser.add_argument("--trials", "-n", type=int, default=500, help="Total trials")
    neural_parser.add_argument("--min-points", type=int, default=50, help="Min points for training")
    neural_parser.add_argument(
        "--retrain-every", type=int, default=50, help="Retrain after N new points"
    )
    neural_parser.add_argument("--epochs", type=int, default=100, help="Training epochs")
    neural_parser.add_argument("--resume", action="store_true", help="Resume existing study")

@@ -414,6 +680,31 @@ Examples:
    validate_parser = subparsers.add_parser("validate", help="Validate study setup")
    validate_parser.add_argument("--study", "-s", required=True, help="Study name")

    # ========================================================================
    # NEW UX SYSTEM COMMANDS
    # ========================================================================

    # intake command
    intake_parser = subparsers.add_parser("intake", help="Process an intake folder into a study")
    intake_parser.add_argument("folder", help="Path to intake folder")
    intake_parser.add_argument("--skip-baseline", action="store_true", help="Skip baseline solve")

    # gate command (validation gate)
    gate_parser = subparsers.add_parser("gate", help="Run validation gate with test trials")
    gate_parser.add_argument("study", help="Study name or path")
    gate_parser.add_argument("--skip-trials", action="store_true", help="Skip test trials")
    gate_parser.add_argument("--trials", type=int, default=3, help="Number of test trials")
    gate_parser.add_argument("--approve", action="store_true", help="Approve if validation passes")

    # list command
    list_studies_parser = subparsers.add_parser("list", help="List all studies and inbox items")

    # finalize command
    finalize_parser = subparsers.add_parser("finalize", help="Generate final HTML report")
    finalize_parser.add_argument("study", help="Study name or path")
    finalize_parser.add_argument("--pdf", action="store_true", help="Also generate PDF")
    finalize_parser.add_argument("--open", action="store_true", help="Open report in browser")

    args = parser.parse_args()

    if not args.command:
@@ -429,7 +720,12 @@ Examples:
        "list-templates": cmd_list_templates,
        "status": cmd_status,
        "train": cmd_train,
        "validate": cmd_validate,
        # New UX commands
        "intake": cmd_intake,
        "gate": cmd_gate,
        "list": cmd_list_studies,
        "finalize": cmd_finalize,
    }

    handler = commands.get(args.command)

docs/guides/DEVLOOP.md (new file, 540 lines)
@@ -0,0 +1,540 @@
# DevLoop - Closed-Loop Development System

## Overview

DevLoop is Atomizer's autonomous development cycle system that coordinates AI agents and automated testing to create a closed-loop development workflow.

**Key Features:**
- Uses your existing CLI subscriptions - no API keys needed
- Playwright browser testing for UI verification
- Multiple test types: API, browser, CLI, filesystem
- Automatic analysis and fix iterations
- Persistent state in `.devloop/` directory

```
+-----------------------------------------------------------------------------+
|                 ATOMIZER DEVLOOP - CLOSED-LOOP DEVELOPMENT                  |
+-----------------------------------------------------------------------------+
|                                                                             |
|   +----------+     +----------+     +------------+     +----------+         |
|   |   PLAN   |---->|  BUILD   |---->|    TEST    |---->| ANALYZE  |         |
|   |  Gemini  |     |  Claude  |     | Playwright |     |  Gemini  |         |
|   | OpenCode |     |   CLI    |     |   + API    |     | OpenCode |         |
|   +----------+     +----------+     +------------+     +----------+         |
|        ^                                                     |              |
|        |                                                     |              |
|        +-----------------------------------------------------+              |
|                       FIX LOOP (max iterations)                             |
+-----------------------------------------------------------------------------+
```

## Quick Start

### CLI Commands

```bash
# Full development cycle
python tools/devloop_cli.py start "Create new bracket study"

# Step-by-step execution
python tools/devloop_cli.py plan "Fix dashboard validation"
python tools/devloop_cli.py implement
python tools/devloop_cli.py test --study support_arm
python tools/devloop_cli.py analyze

# Browser UI tests (Playwright)
python tools/devloop_cli.py browser                      # Quick smoke test
python tools/devloop_cli.py browser --level home         # Home page tests
python tools/devloop_cli.py browser --level full         # All UI tests
python tools/devloop_cli.py browser --study support_arm  # Study-specific

# Check status
python tools/devloop_cli.py status

# Quick test with support_arm study
python tools/devloop_cli.py quick
```

### Prerequisites

1. **Backend running**: `cd atomizer-dashboard/backend && python -m uvicorn api.main:app --reload --port 8000`
2. **Frontend running**: `cd atomizer-dashboard/frontend && npm run dev`
3. **Playwright browsers installed**: `cd atomizer-dashboard/frontend && npx playwright install chromium`

## Architecture

### Directory Structure

```
optimization_engine/devloop/
+-- __init__.py               # Module exports
+-- orchestrator.py           # DevLoopOrchestrator - full cycle coordination
+-- cli_bridge.py             # DevLoopCLIOrchestrator - CLI-based execution
|   +-- ClaudeCodeCLI         # Claude Code CLI wrapper
|   +-- OpenCodeCLI           # OpenCode (Gemini) CLI wrapper
+-- test_runner.py            # DashboardTestRunner - test execution
+-- browser_scenarios.py      # Pre-built Playwright scenarios
+-- planning.py               # GeminiPlanner - strategic planning
+-- analyzer.py               # ProblemAnalyzer - failure analysis
+-- claude_bridge.py          # ClaudeCodeBridge - Claude API integration

tools/
+-- devloop_cli.py            # CLI entry point

.devloop/                     # Persistent state directory
+-- current_plan.json         # Current planning state
+-- test_results.json         # Latest filesystem/API test results
+-- browser_test_results.json # Latest browser test results
+-- analysis.json             # Latest analysis results
```

### Core Components

| Component | Location | Purpose |
|-----------|----------|---------|
| `DevLoopCLIOrchestrator` | `cli_bridge.py` | CLI-based cycle orchestration |
| `ClaudeCodeCLI` | `cli_bridge.py` | Execute Claude Code CLI commands |
| `OpenCodeCLI` | `cli_bridge.py` | Execute OpenCode (Gemini) CLI commands |
| `DashboardTestRunner` | `test_runner.py` | Run all test types |
| `get_browser_scenarios()` | `browser_scenarios.py` | Pre-built Playwright tests |
| `DevLoopOrchestrator` | `orchestrator.py` | API-based orchestration (WebSocket) |
| `GeminiPlanner` | `planning.py` | Gemini API planning |
| `ProblemAnalyzer` | `analyzer.py` | Failure analysis |

### CLI Tools Configuration

DevLoop uses your existing CLI subscriptions:

```python
# In cli_bridge.py
CLAUDE_PATH = r"C:\Users\antoi\.local\bin\claude.exe"
OPENCODE_PATH = r"C:\Users\antoi\AppData\Roaming\npm\opencode.cmd"
```

## CLI Commands Reference

### `start` - Full Development Cycle

Runs the complete PLAN -> BUILD -> TEST -> ANALYZE -> FIX loop.

```bash
python tools/devloop_cli.py start "Create support_arm study" --max-iterations 5
```

**Arguments:**
- `objective` (required): What to achieve
- `--max-iterations`: Maximum fix iterations (default: 5)

**Flow:**
1. Gemini creates an implementation plan
2. Claude Code implements the plan
3. Tests verify the implementation
4. If tests fail: Gemini analyzes, Claude fixes, and the loop repeats
5. Exits on success or when max iterations is reached
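The control flow above can be sketched in plain Python. The callables (`plan`, `implement`, `run_tests`, `analyze`) are hypothetical stand-ins for the four phases, not DevLoop's actual interfaces:

```python
def run_cycle(objective, plan, implement, run_tests, analyze, max_iterations=5):
    """Generic PLAN -> BUILD -> TEST -> ANALYZE -> FIX loop.

    Each argument after `objective` is a callable supplied by the caller;
    this mirrors the control flow only, not DevLoop's real API.
    """
    current_plan = plan(objective)
    for iteration in range(1, max_iterations + 1):
        implement(current_plan)
        results = run_tests()
        if all(r["status"] == "pass" for r in results):
            return {"success": True, "iterations": iteration}
        # Failure analysis produces the fix plan that feeds the next iteration
        current_plan = analyze(results)
    return {"success": False, "iterations": max_iterations}
```

The loop exits early on the first fully green test run, which is why `--max-iterations` bounds cost but rarely determines runtime.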

### `plan` - Create Implementation Plan

Uses Gemini (via OpenCode) to create a strategic plan.

```bash
python tools/devloop_cli.py plan "Fix dashboard validation"
python tools/devloop_cli.py plan "Add new extractor" --context context.json
```

**Output:** Saves the plan to `.devloop/current_plan.json`

**Plan structure:**
```json
{
  "objective": "Fix dashboard validation",
  "approach": "Update validation logic in spec_validator.py",
  "tasks": [
    {
      "id": "task_001",
      "description": "Update bounds validation",
      "file": "optimization_engine/config/spec_validator.py",
      "priority": "high"
    }
  ],
  "test_scenarios": [
    {
      "id": "test_001",
      "name": "Validation passes for valid spec",
      "type": "api",
      "steps": [...]
    }
  ],
  "acceptance_criteria": ["All validation tests pass"]
}
```

### `implement` - Execute Plan with Claude Code

Implements the current plan using the Claude Code CLI.

```bash
python tools/devloop_cli.py implement
python tools/devloop_cli.py implement --plan custom_plan.json
```

**Arguments:**
- `--plan`: Custom plan file (default: `.devloop/current_plan.json`)

**Output:** Reports the files modified and success/failure.

### `test` - Run Tests

Runs filesystem, API, or custom tests for a study.

```bash
python tools/devloop_cli.py test --study support_arm
python tools/devloop_cli.py test --scenarios custom_tests.json
```

**Arguments:**
- `--study`: Study name (generates standard tests)
- `--scenarios`: Custom test scenarios JSON file

**Standard study tests:**
1. Study directory exists
2. `atomizer_spec.json` is valid JSON
3. `README.md` exists
4. `run_optimization.py` exists
5. `1_setup/model/` directory exists

**Output:** Saves results to `.devloop/test_results.json`

### `browser` - Run Playwright UI Tests

Runs browser-based UI tests using Playwright.

```bash
python tools/devloop_cli.py browser                     # Quick smoke test
python tools/devloop_cli.py browser --level home        # Home page tests
python tools/devloop_cli.py browser --level full        # All UI tests
python tools/devloop_cli.py browser --level study --study support_arm
```

**Arguments:**
- `--level`: Test level (`quick`, `home`, `full`, `study`)
- `--study`: Study name for study-specific tests

**Test Levels:**

| Level | Tests | Description |
|-------|-------|-------------|
| `quick` | 1 | Smoke test - page loads |
| `home` | 2 | Home page stats + folder expansion |
| `full` | 5+ | All UI + study-specific |
| `study` | 3 | Canvas and dashboard for a specific study |

**Output:** Saves results to `.devloop/browser_test_results.json`

### `analyze` - Analyze Test Results

Uses Gemini (via OpenCode) to analyze failures and create fix plans.

```bash
python tools/devloop_cli.py analyze
python tools/devloop_cli.py analyze --results custom_results.json
```

**Arguments:**
- `--results`: Custom results file (default: `.devloop/test_results.json`)

**Output:** Saves analysis to `.devloop/analysis.json`

### `status` - View Current State

Shows the current DevLoop state.

```bash
python tools/devloop_cli.py status
```

**Output:**
```
DevLoop Status
============================================================

Current Plan: Fix dashboard validation
  Tasks: 3

Last Test Results:
  Passed: 4/5

Last Analysis:
  Issues: 1

============================================================
CLI Tools:
  - Claude Code: C:\Users\antoi\.local\bin\claude.exe
  - OpenCode: C:\Users\antoi\AppData\Roaming\npm\opencode.cmd
```

### `quick` - Quick Test

Runs the tests for the `support_arm` study as a quick verification.

```bash
python tools/devloop_cli.py quick
```

## Test Types

### Filesystem Tests

Check that files and directories exist, that JSON parses, and that content matches.

```json
{
  "id": "test_fs_001",
  "name": "Study directory exists",
  "type": "filesystem",
  "steps": [
    {"action": "check_exists", "path": "studies/my_study"}
  ],
  "expected_outcome": {"exists": true}
}
```

**Actions:**
- `check_exists` - Verify path exists
- `check_json_valid` - Parse JSON file
- `check_file_contains` - Search for content
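A minimal interpreter for these three actions might look like the sketch below. This is illustrative only, not the actual `DashboardTestRunner` code, and the `contains` key carrying the search string is an assumption:

```python
import json
from pathlib import Path

def run_filesystem_step(step: dict, root: Path) -> bool:
    """Execute one filesystem test step; return True on pass."""
    path = root / step["path"]
    action = step["action"]
    if action == "check_exists":
        return path.exists()
    if action == "check_json_valid":
        try:
            json.loads(path.read_text())
            return True
        except (OSError, json.JSONDecodeError):
            return False
    if action == "check_file_contains":
        # `contains` is assumed to carry the search string for this action
        return path.is_file() and step["contains"] in path.read_text()
    raise ValueError(f"Unknown action: {action}")
```

A scenario then passes when every step returns `True` and the result matches `expected_outcome`.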

### API Tests

Test REST endpoints.

```json
{
  "id": "test_api_001",
  "name": "Get study spec",
  "type": "api",
  "steps": [
    {"action": "get", "endpoint": "/api/studies/my_study/spec"}
  ],
  "expected_outcome": {"status_code": 200}
}
```

**Actions:**
- `get` - HTTP GET
- `post` - HTTP POST with `data`
- `put` - HTTP PUT with `data`
- `delete` - HTTP DELETE
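One way to turn such a step into an HTTP request is sketched below with the standard library. The real runner may use an async client instead, so the base URL and the JSON encoding of `data` are assumptions:

```python
import json
from urllib.request import Request

def build_api_request(step: dict, base_url: str = "http://localhost:8000") -> Request:
    """Translate an API test step into a urllib Request (constructed, not sent)."""
    method = step["action"].upper()
    url = base_url + step["endpoint"]
    body = None
    if method in ("POST", "PUT") and "data" in step:
        # `data` from the step is sent as a JSON body
        body = json.dumps(step["data"]).encode("utf-8")
    req = Request(url, data=body, method=method)
    if body is not None:
        req.add_header("Content-Type", "application/json")
    return req
```

Sending the request and comparing the response code against `expected_outcome["status_code"]` completes the step.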

### Browser Tests (Playwright)

Test UI interactions.

```json
{
  "id": "test_browser_001",
  "name": "Canvas loads nodes",
  "type": "browser",
  "steps": [
    {"action": "navigate", "url": "/canvas/support_arm"},
    {"action": "wait_for", "selector": ".react-flow__node"},
    {"action": "click", "selector": "[data-testid='node-dv_001']"}
  ],
  "expected_outcome": {"status": "pass"},
  "timeout_ms": 20000
}
```

**Actions:**
- `navigate` - Go to URL
- `wait_for` - Wait for selector
- `click` - Click element
- `fill` - Fill input with value
- `screenshot` - Take screenshot
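Dispatching these actions onto a Playwright `page` can be sketched as below. The `page` argument is duck-typed (anything exposing `goto`, `wait_for_selector`, `click`, `fill`, `screenshot` works), and joining relative URLs onto the Vite dev-server base is an assumption:

```python
def run_browser_step(page, step: dict, base_url: str = "http://localhost:3003"):
    """Apply one browser test step to a Playwright-like page object."""
    action = step["action"]
    if action == "navigate":
        page.goto(base_url + step["url"])
    elif action == "wait_for":
        page.wait_for_selector(step["selector"])
    elif action == "click":
        page.click(step["selector"])
    elif action == "fill":
        page.fill(step["selector"], step["value"])
    elif action == "screenshot":
        page.screenshot(path=step.get("path", "screenshot.png"))
    else:
        raise ValueError(f"Unknown browser action: {action}")
```

Because the page is duck-typed, the dispatcher can be exercised with a stub in unit tests and with a real Playwright page in the loop.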

### CLI Tests

Execute shell commands.

```json
{
  "id": "test_cli_001",
  "name": "Run optimization test",
  "type": "cli",
  "steps": [
    {"command": "python run_optimization.py --test", "cwd": "studies/my_study"}
  ],
  "expected_outcome": {"returncode": 0}
}
```
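Executing a CLI step reduces to `subprocess.run`. A sketch under the assumption that `expected_outcome` only constrains the return code (the `timeout_s` key is illustrative):

```python
import subprocess

def run_cli_step(step: dict, expected: dict) -> bool:
    """Run one CLI test step and compare the return code."""
    result = subprocess.run(
        step["command"],
        shell=True,                    # commands are given as single strings
        cwd=step.get("cwd"),
        capture_output=True,
        text=True,
        timeout=step.get("timeout_s", 120),
    )
    return result.returncode == expected.get("returncode", 0)
```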

## Browser Test Scenarios

Pre-built scenarios in `browser_scenarios.py`:

```python
from optimization_engine.devloop.browser_scenarios import get_browser_scenarios

# Get scenarios by level
scenarios = get_browser_scenarios(level="full", study_name="support_arm")

# Available functions
get_browser_scenarios(level, study_name)  # Main entry point
get_study_browser_scenarios(study_name)   # Study-specific tests
get_ui_verification_scenarios()           # Home page tests
get_chat_verification_scenarios()         # Chat panel tests
```

## Standalone Playwright Tests

In addition to the DevLoop integration, you can run standalone Playwright tests:

```bash
cd atomizer-dashboard/frontend

# Run all E2E tests
npm run test:e2e

# Run with Playwright UI
npm run test:e2e:ui

# Run specific test file
npx playwright test tests/e2e/home.spec.ts
```

**Test files:**
- `tests/e2e/home.spec.ts` - Home page tests (8 tests)

## API Integration

DevLoop also provides REST API endpoints when the dashboard backend is running:

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/devloop/status` | GET | Current loop status |
| `/api/devloop/start` | POST | Start development cycle |
| `/api/devloop/stop` | POST | Stop current cycle |
| `/api/devloop/step` | POST | Execute single phase |
| `/api/devloop/history` | GET | View past cycles |
| `/api/devloop/health` | GET | System health check |
| `/api/devloop/ws` | WebSocket | Real-time updates |

**Start a cycle via API:**
```bash
curl -X POST http://localhost:8000/api/devloop/start \
  -H "Content-Type: application/json" \
  -d '{"objective": "Create support_arm study", "max_iterations": 5}'
```

## State Files

DevLoop maintains state in `.devloop/`:

| File | Purpose | Updated By |
|------|---------|------------|
| `current_plan.json` | Current implementation plan | `plan` command |
| `test_results.json` | Filesystem/API test results | `test` command |
| `browser_test_results.json` | Browser test results | `browser` command |
| `analysis.json` | Failure analysis | `analyze` command |

## Example Workflows

### Create a New Study

```bash
# Full autonomous cycle
python tools/devloop_cli.py start "Create bracket_lightweight study with mass and displacement objectives"

# Or step by step
python tools/devloop_cli.py plan "Create bracket_lightweight study"
python tools/devloop_cli.py implement
python tools/devloop_cli.py test --study bracket_lightweight
python tools/devloop_cli.py browser --study bracket_lightweight
```

### Debug a Dashboard Issue

```bash
# Plan the fix
python tools/devloop_cli.py plan "Fix canvas node selection not updating panel"

# Implement
python tools/devloop_cli.py implement

# Test UI
python tools/devloop_cli.py browser --level full

# If tests fail, analyze
python tools/devloop_cli.py analyze

# Fix and retest loop...
```

### Verify Study Before Running

```bash
# File structure tests
python tools/devloop_cli.py test --study my_study

# Browser tests (canvas loads, etc.)
python tools/devloop_cli.py browser --level study --study my_study
```

## Troubleshooting

### Browser Tests Fail

1. **Ensure the frontend is running**: `npm run dev` in `atomizer-dashboard/frontend`
2. **Check the port**: DevLoop uses `localhost:3003` (Vite default)
3. **Install browsers**: `npx playwright install chromium`

### CLI Tools Not Found

Check the paths in `cli_bridge.py`:
```python
CLAUDE_PATH = r"C:\Users\antoi\.local\bin\claude.exe"
OPENCODE_PATH = r"C:\Users\antoi\AppData\Roaming\npm\opencode.cmd"
```

### API Tests Fail

1. **Ensure the backend is running**: Port 8000
2. **Check endpoint paths**: May need the `/api/` prefix

### Tests Timeout

Increase the timeout in the test scenario:
```json
{
  "timeout_ms": 30000
}
```

### Unclosed Client Session Warning

This is a known aiohttp warning on Windows. Tests still pass correctly.

## Integration with LAC

DevLoop records learnings to LAC (Learning Atomizer Core):

```python
from knowledge_base.lac import get_lac

lac = get_lac()

# Record after a successful cycle
lac.record_insight(
    category="success_pattern",
    context="DevLoop created support_arm study",
    insight="TPE sampler works well for 4-variable bracket problems",
    confidence=0.9,
)
```

## Future Enhancements

1. **Parallel test execution** - Run independent tests concurrently
2. **Visual diff** - Show code changes in dashboard
3. **Smart rollback** - Automatic rollback on regression
4. **Branch management** - Auto-create feature branches
5. **Cost tracking** - Monitor CLI usage

docs/plans/ATOMIZER_STUDIO.md (new file, 144 lines)
@@ -0,0 +1,144 @@
# Atomizer Studio - Technical Implementation Plan

**Version**: 1.0
**Date**: January 24, 2026
**Status**: In Progress
**Author**: Atomizer Team

---

## 1. Executive Summary

**Atomizer Studio** is a unified, drag-and-drop study creation environment that consolidates file management, visual configuration (Canvas), and AI assistance into a single real-time workspace. It replaces the legacy wizard-based approach with a modern "Studio" experience.

### Core Principles

| Principle | Implementation |
|-----------|----------------|
| **Drag & Drop First** | No wizards, no forms. Drop files, see results. |
| **AI-Native** | Claude sees everything: files, parameters, goals. It proposes, you approve. |
| **Zero Commitment** | Work in "Draft Mode" until ready. Nothing is permanent until "Build". |

---

## 2. Architecture

### 2.1 The Draft Workflow

```
User Opens /studio
        │
        ▼
POST /intake/draft ──► Creates studies/_inbox/draft_{id}/
        │
        ▼
User Drops Files ──► Auto-Introspection ──► Parameters Discovered
        │
        ▼
AI Reads Context ──► Proposes Configuration ──► Canvas Updates
        │
        ▼
User Clicks BUILD ──► Finalize ──► studies/{topic}/{name}/
```

### 2.2 Interface Layout

```
┌───────────────────┬───────────────────────────────────────────────┬───────────────────┐
│ RESOURCES (Left)  │                CANVAS (Center)                │ ASSISTANT (Right) │
├───────────────────┼───────────────────────────────────────────────┼───────────────────┤
│ ▼ DROP ZONE       │                                               │                   │
│   [Drag files]    │   ┌───────┐      ┌────────┐      ┌─────────┐  │  "I see you want  │
│                   │   │ Model ├─────►│ Solver ├─────►│ Extract │  │   to minimize     │
│ ▼ MODEL FILES     │   └───────┘      └────────┘      └────┬────┘  │   mass. Adding    │
│   • bracket.sim   │                                       │       │   objective..."   │
│   • bracket.prt   │                                  ┌────▼────┐  │                   │
│                   │                                  │ Objectiv│  │  [Apply Changes]  │
│ ▼ PARAMETERS      │                                  └─────────┘  │                   │
│   • thickness     │                                               │                   │
│   • rib_count     │              AtomizerSpec v2.0                │                   │
│                   │               (Draft Mode)                    │                   │
│ ▼ CONTEXT         │                                               │                   │
│   • goals.pdf     │                                               │ [ Chat Input... ] │
├───────────────────┼───────────────────────────────────────────────┼───────────────────┤
│ [ Reset Draft ]   │        [ Validate ]   [ BUILD STUDY ]         │                   │
└───────────────────┴───────────────────────────────────────────────┴───────────────────┘
```

---

## 3. Implementation Phases

### Phase 1: Backend API Enhancements
- `POST /intake/draft` - Create anonymous draft
- `GET /intake/{id}/context/content` - Extract text from uploaded files
- Enhanced `POST /intake/{id}/finalize` with rename support

### Phase 2: Frontend Studio Shell
- `/studio` route with 3-column layout
- DropZone component with file categorization
- Integrated Canvas in draft mode
- Parameter discovery panel

### Phase 3: AI Integration
- Context-aware chat system
- Spec modification via Claude
- Real-time canvas updates

### Phase 4: Polish & Testing
- Full DevLoop testing
- Edge case handling
- UX polish and animations

---

## 4. File Structure

```
atomizer-dashboard/
├── frontend/src/
│   ├── pages/
│   │   └── Studio.tsx
│   ├── components/
│   │   └── studio/
│   │       ├── StudioLayout.tsx
│   │       ├── ResourcePanel.tsx
│   │       ├── StudioCanvas.tsx
│   │       ├── StudioChat.tsx
│   │       ├── DropZone.tsx
│   │       ├── ParameterList.tsx
│   │       ├── ContextFileList.tsx
│   │       └── BuildDialog.tsx
│   └── hooks/
│       └── useDraft.ts
├── backend/api/
│   └── routes/
│       └── intake.py (enhanced)
```

---

## 5. API Endpoints

### New Endpoints

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/intake/draft` | POST | Create anonymous draft study |
| `/intake/{id}/context/content` | GET | Extract text from context files |

### Enhanced Endpoints

| Endpoint | Change |
|----------|--------|
| `/intake/{id}/finalize` | Added `new_name` parameter for rename |
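The core behavior behind `POST /intake/draft` - allocating an anonymous `draft_{id}` folder under `studies/_inbox/` - can be sketched as a plain helper. The function name, the use of `uuid`, and seeding an initial spec file are illustrative assumptions; the real route lives in `intake.py`:

```python
import json
import uuid
from pathlib import Path

def create_draft(inbox_dir: Path) -> Path:
    """Allocate an anonymous draft folder under studies/_inbox/.

    Hypothetical helper mirroring what POST /intake/draft would do.
    """
    draft_id = uuid.uuid4().hex[:8]
    draft_dir = inbox_dir / f"draft_{draft_id}"
    draft_dir.mkdir(parents=True, exist_ok=False)
    # Seed an empty spec so later operations always have a file to write to
    (draft_dir / "atomizer_spec.json").write_text(json.dumps({"draft": True}))
    return draft_dir
```

Finalizing a draft then amounts to renaming and moving this folder to `studies/{topic}/{name}/`.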

---

## 6. Status

- [x] Plan documented
- [ ] Phase 1: Backend
- [ ] Phase 2: Frontend Shell
- [ ] Phase 3: AI Integration
- [ ] Phase 4: Testing & Polish

docs/plans/ATOMIZER_UX_SYSTEM.md (new file, 1191 lines; diff suppressed because it is too large)

docs/plans/DASHBOARD_INTAKE_ATOMIZERSPEC_INTEGRATION.md (new file, 637 lines)
@@ -0,0 +1,637 @@
# Dashboard Intake & AtomizerSpec Integration Plan

**Version**: 1.0
**Author**: Atomizer Team
**Date**: January 22, 2026
**Status**: APPROVED FOR IMPLEMENTATION
**Dependencies**: ATOMIZER_UX_SYSTEM.md, UNIFIED_CONFIGURATION_ARCHITECTURE.md

---

## Executive Summary

This plan implements visual study creation in the Atomizer Dashboard, with `atomizer_spec.json` as the **single source of truth** for all study configuration. Engineers can:

1. **Drop files** into the dashboard to create a new study
2. **See introspection results** inline (expressions, mass, solver type)
3. **Open Canvas** for detailed configuration (one click from the create card)
4. **Generate a README with Claude** (intelligent, not template-based)
5. **Run a baseline solve** with real-time progress via WebSocket
6. **Finalize** to move the study from the inbox to the studies folder

**Key Principle**: Every operation reads from or writes to `atomizer_spec.json`. Nothing bypasses the spec.
|
|
||||||
|
---

## 1. Architecture Overview

### 1.1 AtomizerSpec as Central Data Hub

```
┌─────────────────────────────────────────────────────────────────────────────────┐
│                     ATOMIZER_SPEC.JSON - CENTRAL DATA HUB                       │
├─────────────────────────────────────────────────────────────────────────────────┤
│                                                                                 │
│  INPUTS (write to spec)            SPEC              OUTPUTS (read spec)        │
│  ┌──────────────────┐          ┌──────────┐          ┌──────────────────┐       │
│  │ File Upload      │          │          │          │ Canvas Builder   │       │
│  │ Introspection    │ ───────→ │ atomizer │ ───────→ │ Dashboard Views  │       │
│  │ Claude Interview │          │ _spec    │          │ Optimization Run │       │
│  │ Canvas Edits     │          │ .json    │          │ README Generator │       │
│  │ Manual Edit      │          │          │          │ Report Generator │       │
│  └──────────────────┘          └──────────┘          └──────────────────┘       │
│                                     │                                           │
│                                     │ validates against                         │
│                                     ↓                                           │
│                             ┌──────────────────┐                                │
│                             │ atomizer_spec    │                                │
│                             │ _v2.json         │                                │
│                             │ (JSON Schema)    │                                │
│                             └──────────────────┘                                │
│                                                                                 │
└─────────────────────────────────────────────────────────────────────────────────┘
```

### 1.2 Study Creation Flow

```
┌─────────────────────────────────────────────────────────────────────────────────┐
│                              STUDY CREATION FLOW                                │
├─────────────────────────────────────────────────────────────────────────────────┤
│                                                                                 │
│  1. DROP FILES       2. INTROSPECT        3. CLAUDE README     4. FINALIZE      │
│  ┌────────────┐      ┌──────────────┐     ┌──────────────┐     ┌──────────┐     │
│  │ .sim .prt  │  →   │ Expressions  │  →  │ Analyzes     │  →  │ Baseline │     │
│  │ .fem _i.prt│      │ Mass props   │     │ context+model│     │ solve    │     │
│  │ goals.md   │      │ Solver type  │     │ Writes full  │     │ Update   │     │
│  └────────────┘      └──────────────┘     │ README.md    │     │ README   │     │
│        │                    │             └──────────────┘     │ Archive  │     │
│        ↓                    ↓                    │             │ inbox    │     │
│  Creates initial      Updates spec         Claude skill        └──────────┘     │
│  atomizer_spec.json   with introspection   for study docs           │           │
│  status="draft"       status="introspected"                         ↓           │
│                                                                  studies/       │
│                                                                   {topic}/      │
│                                                                    {name}/      │
└─────────────────────────────────────────────────────────────────────────────────┘
```

### 1.3 Spec Status State Machine

```
draft → introspected → configured → validated → ready → running → completed
  │          │             │            │         │        │          │
  │          │             │            │         │        │          └─ optimization done
  │          │             │            │         │        └─ optimization started
  │          │             │            │         └─ can start optimization
  │          │             │            └─ baseline solve done
  │          │             └─ DVs, objectives, constraints set (Claude or Canvas)
  │          └─ introspection done
  └─ files uploaded, minimal spec created
```

---
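The lifecycle above can be enforced with a small transition check. A minimal sketch, assuming each status may only advance to its immediate successor (the `VALID_TRANSITIONS` map and `advance_status` helper are illustrative, not part of the codebase; the plan itself only defines the happy path, so reachable "failed" states beyond `running` are an assumption):

```python
# Hypothetical guard for the draft -> ... -> completed lifecycle.
VALID_TRANSITIONS = {
    "draft": {"introspected"},
    "introspected": {"configured"},
    "configured": {"validated"},
    "validated": {"ready"},
    "ready": {"running"},
    "running": {"completed", "failed"},
}

def advance_status(current: str, target: str) -> str:
    """Return the new status, or raise if the transition is not allowed."""
    if target not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal status transition: {current} -> {target}")
    return target

# A study must be introspected before it can be configured.
print(advance_status("draft", "introspected"))  # introspected
```

A guard like this would live wherever the spec's `meta.status` is written (e.g. a `SpecManager.update_status`), so no endpoint can skip a lifecycle stage.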

## 2. AtomizerSpec Schema Extensions

### 2.1 New Fields in SpecMeta

Add to `optimization_engine/config/spec_models.py`:

```python
class SpecStatus(str, Enum):
    """Study lifecycle status."""
    DRAFT = "draft"
    INTROSPECTED = "introspected"
    CONFIGURED = "configured"
    VALIDATED = "validated"
    READY = "ready"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"


class SpecCreatedBy(str, Enum):
    """Who/what created the spec."""
    CANVAS = "canvas"
    CLAUDE = "claude"
    API = "api"
    MIGRATION = "migration"
    MANUAL = "manual"
    DASHBOARD_INTAKE = "dashboard_intake"  # NEW


class SpecMeta(BaseModel):
    """Metadata about the spec."""
    version: str = Field(..., pattern=r"^2\.\d+$")
    study_name: str
    created: Optional[datetime] = None
    modified: Optional[datetime] = None
    created_by: Optional[SpecCreatedBy] = None
    modified_by: Optional[str] = None
    status: SpecStatus = SpecStatus.DRAFT  # NEW
    topic: Optional[str] = None  # NEW - folder grouping
```

### 2.2 IntrospectionData Model

New model for storing introspection results in the spec:

```python
class ExpressionInfo(BaseModel):
    """Information about an NX expression."""
    name: str
    value: Optional[float] = None
    units: Optional[str] = None
    formula: Optional[str] = None
    is_candidate: bool = False
    confidence: float = 0.0  # 0.0 to 1.0


class BaselineData(BaseModel):
    """Results from baseline FEA solve."""
    timestamp: datetime
    solve_time_seconds: float
    mass_kg: Optional[float] = None
    max_displacement_mm: Optional[float] = None
    max_stress_mpa: Optional[float] = None
    success: bool = True
    error: Optional[str] = None


class IntrospectionData(BaseModel):
    """Model introspection results."""
    timestamp: datetime
    solver_type: Optional[str] = None
    mass_kg: Optional[float] = None
    volume_mm3: Optional[float] = None
    expressions: List[ExpressionInfo] = []
    baseline: Optional[BaselineData] = None
    warnings: List[str] = []
```

### 2.3 Extended ModelConfig

```python
class ModelConfig(BaseModel):
    """Model file configuration."""
    sim: Optional[SimFile] = None
    fem: Optional[str] = None
    prt: Optional[str] = None
    idealized_prt: Optional[str] = None  # NEW - critical for mesh updating
    introspection: Optional[IntrospectionData] = None  # NEW
```
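To show how introspection results nest under `model.introspection`, here is a small standalone sketch using dataclass stand-ins for the Pydantic models above (field names follow §2.2; the sample values are illustrative):

```python
# Dataclass stand-ins for the §2.2 Pydantic models (illustrative only).
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class ExpressionInfo:
    name: str
    value: Optional[float] = None
    is_candidate: bool = False
    confidence: float = 0.0

@dataclass
class IntrospectionData:
    timestamp: datetime
    solver_type: Optional[str] = None
    mass_kg: Optional[float] = None
    expressions: List[ExpressionInfo] = field(default_factory=list)

# Introspection writes its findings back into the spec's model section.
intro = IntrospectionData(
    timestamp=datetime.now(timezone.utc),
    solver_type="NX Nastran",
    mass_kg=2.34,
    expressions=[
        ExpressionInfo("rib_thickness", 5.0, is_candidate=True, confidence=0.9),
        ExpressionInfo("mesh_size", 4.0),  # present but not a DV candidate
    ],
)

# The dashboard highlights only candidate expressions as design variables.
candidates = [e.name for e in intro.expressions if e.is_candidate]
print(candidates)  # ['rib_thickness']
```

This is also why `confidence` lives on each expression: the UI can sort candidates by it rather than treating every parametric expression as equally likely to be a design variable.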

### 2.4 JSON Schema Updates

Add to `optimization_engine/schemas/atomizer_spec_v2.json`:

```json
{
  "definitions": {
    "SpecMeta": {
      "properties": {
        "status": {
          "type": "string",
          "enum": ["draft", "introspected", "configured", "validated", "ready", "running", "completed", "failed"],
          "default": "draft"
        },
        "topic": {
          "type": "string",
          "pattern": "^[A-Za-z0-9_]+$",
          "description": "Topic folder for grouping related studies"
        }
      }
    },
    "IntrospectionData": {
      "type": "object",
      "properties": {
        "timestamp": { "type": "string", "format": "date-time" },
        "solver_type": { "type": "string" },
        "mass_kg": { "type": "number" },
        "volume_mm3": { "type": "number" },
        "expressions": {
          "type": "array",
          "items": { "$ref": "#/definitions/ExpressionInfo" }
        },
        "baseline": { "$ref": "#/definitions/BaselineData" },
        "warnings": { "type": "array", "items": { "type": "string" } }
      }
    },
    "ExpressionInfo": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "value": { "type": "number" },
        "units": { "type": "string" },
        "formula": { "type": "string" },
        "is_candidate": { "type": "boolean", "default": false },
        "confidence": { "type": "number", "minimum": 0, "maximum": 1 }
      },
      "required": ["name"]
    },
    "BaselineData": {
      "type": "object",
      "properties": {
        "timestamp": { "type": "string", "format": "date-time" },
        "solve_time_seconds": { "type": "number" },
        "mass_kg": { "type": "number" },
        "max_displacement_mm": { "type": "number" },
        "max_stress_mpa": { "type": "number" },
        "success": { "type": "boolean", "default": true },
        "error": { "type": "string" }
      }
    }
  }
}
```

---
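Two of the rules above (the `status` enum and the `topic` pattern) can be mirrored with a tiny stdlib check; a real implementation would validate the whole spec against `atomizer_spec_v2.json` with a JSON Schema validator, so this sketch is only an illustration:

```python
# Minimal stdlib mirror of two SpecMeta schema rules (illustrative only;
# full validation should use a JSON Schema validator against atomizer_spec_v2.json).
import re

ALLOWED_STATUS = {"draft", "introspected", "configured", "validated",
                  "ready", "running", "completed", "failed"}
TOPIC_RE = re.compile(r"^[A-Za-z0-9_]+$")

def check_meta(meta: dict) -> list:
    """Return a list of human-readable violations for the SpecMeta fields."""
    errors = []
    status = meta.get("status", "draft")
    if status not in ALLOWED_STATUS:
        errors.append(f"invalid status: {status!r}")
    topic = meta.get("topic")
    if topic is not None and not TOPIC_RE.match(topic):
        errors.append(f"topic must match ^[A-Za-z0-9_]+$: {topic!r}")
    return errors

print(check_meta({"status": "ready", "topic": "M1_Mirror"}))  # []
print(check_meta({"status": "done", "topic": "M1 Mirror"}))   # two violations
```

The `^[A-Za-z0-9_]+$` topic pattern matters because the topic becomes a folder name under `studies/`, so spaces and path separators must be rejected before finalization.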

## 3. Backend Implementation

### 3.1 New File: `backend/api/routes/intake.py`

Core intake API endpoints:

| Endpoint | Method | Purpose | Status After |
|----------|--------|---------|--------------|
| `/api/intake/create` | POST | Create inbox folder with initial spec | draft |
| `/api/intake/introspect` | POST | Run NX introspection, update spec | introspected |
| `/api/intake/readme/generate` | POST | Claude generates README + config suggestions | configured |
| `/api/intake/finalize` | POST | Baseline solve, move to studies folder | validated/ready |
| `/api/intake/list` | GET | List inbox folders with status | - |
| `/api/intake/topics` | GET | List existing topic folders | - |

**Key Implementation Details:**

1. **Create** - Creates folder structure + minimal `atomizer_spec.json`
2. **Introspect** - Runs NX introspection, updates spec with expressions, mass, solver type
3. **Generate README** - Calls Claude with spec + goals.md, returns README + suggested config
4. **Finalize** - Full workflow: copy files, baseline solve (optional), move to studies, archive inbox

### 3.2 New File: `backend/api/services/spec_manager.py`

Centralized spec operations:
```python
class SpecManager:
    """Single source of truth for spec operations."""

    def load(self) -> dict: ...
    def save(self, spec: dict) -> None: ...
    def update_status(self, status: str, modified_by: str) -> dict: ...
    def add_introspection(self, data: dict) -> dict: ...
    def get_status(self) -> str: ...
    def exists(self) -> bool: ...
```
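The interface above leaves the write strategy open. Since several endpoints write the same `atomizer_spec.json`, a natural choice is an atomic save (write to a temp file, then `os.replace`), so a crash mid-write never leaves a half-written spec. A minimal sketch under that assumption (constructor signature and file layout are illustrative, not the real implementation):

```python
# Illustrative SpecManager with atomic saves; details beyond the §3.2
# interface (constructor, file layout) are assumptions.
import json
import os
import tempfile
from pathlib import Path

class SpecManager:
    def __init__(self, study_dir: Path):
        self.spec_path = Path(study_dir) / "atomizer_spec.json"

    def exists(self) -> bool:
        return self.spec_path.is_file()

    def load(self) -> dict:
        return json.loads(self.spec_path.read_text())

    def save(self, spec: dict) -> None:
        # Write to a sibling temp file, then atomically swap it in.
        fd, tmp = tempfile.mkstemp(dir=self.spec_path.parent, suffix=".tmp")
        with os.fdopen(fd, "w") as f:
            json.dump(spec, f, indent=2)
        os.replace(tmp, self.spec_path)

    def update_status(self, status: str, modified_by: str) -> dict:
        spec = self.load()
        spec.setdefault("meta", {}).update(status=status, modified_by=modified_by)
        self.save(spec)
        return spec

# Usage
with tempfile.TemporaryDirectory() as d:
    mgr = SpecManager(Path(d))
    mgr.save({"meta": {"study_name": "demo", "status": "draft"}})
    spec = mgr.update_status("introspected", modified_by="dashboard_intake")
print(spec["meta"]["status"])  # introspected
```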

### 3.3 New File: `backend/api/services/claude_readme.py`

Claude-powered README generation:

- Loads skill from `.claude/skills/modules/study-readme-generator.md`
- Builds prompt with spec + goals
- Returns README content + suggested DVs/objectives/constraints
- Uses Claude API (claude-sonnet-4-20250514)

### 3.4 WebSocket for Finalization Progress

The `/api/intake/finalize` endpoint will support WebSocket for real-time progress:

```typescript
// Progress steps
const steps = [
  'Creating study folder',
  'Copying model files',
  'Running introspection',
  'Running baseline solve',
  'Extracting baseline results',
  'Generating README with Claude',
  'Moving to studies folder',
  'Archiving inbox'
];
```

---
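On the backend side, each step could be reported as one JSON message per WebSocket frame. The message shape below (`step`, `total`, `label`, `percent`) is an assumption for illustration, not a defined protocol:

```python
# Sketch of the per-step progress messages the finalize WebSocket might emit.
# The message shape is an assumption; only the step labels come from the plan.
import json

STEPS = [
    "Creating study folder", "Copying model files", "Running introspection",
    "Running baseline solve", "Extracting baseline results",
    "Generating README with Claude", "Moving to studies folder", "Archiving inbox",
]

def progress_events():
    """Yield one serialized progress message per finalization step."""
    total = len(STEPS)
    for i, label in enumerate(STEPS, start=1):
        yield json.dumps({"step": i, "total": total, "label": label,
                          "percent": round(100 * i / total)})

events = list(progress_events())
print(json.loads(events[-1])["percent"])  # 100
```

In a real handler each message would be sent after the corresponding step completes (and the running-solve step could emit sub-messages, as the ProgressModal mockup in §4.4 suggests).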

## 4. Frontend Implementation

### 4.1 Component Structure

```
frontend/src/components/home/
├── CreateStudyCard.tsx       # Main study creation UI
├── IntrospectionResults.tsx  # Display introspection data
├── TopicSelector.tsx         # Topic dropdown + new topic input
├── StudyFilesPanel.tsx       # File display in preview
└── index.ts                  # Exports

frontend/src/components/common/
└── ProgressModal.tsx         # Finalization progress display
```

### 4.2 CreateStudyCard States

```typescript
type CardState =
  | 'empty'          // No files, just showing dropzone
  | 'staged'         // Files selected, ready to upload
  | 'uploading'      // Uploading files to inbox
  | 'introspecting'  // Running introspection
  | 'ready'          // Introspection done, can finalize or open canvas
  | 'finalizing'     // Running finalization
  | 'complete';      // Study created, showing success
```
### 4.3 CreateStudyCard UI

```
╔═══════════════════════════════════════════════════════════════╗
║  + Create New Study                           [Open Canvas]   ║
╠═══════════════════════════════════════════════════════════════╣
║                                                               ║
║  Study Name                                                   ║
║  ┌─────────────────────────────────────────────────────────┐  ║
║  │ bracket_optimization_v1                                 │  ║
║  └─────────────────────────────────────────────────────────┘  ║
║                                                               ║
║  Topic                                                        ║
║  ┌─────────────────────────────────────────────────────────┐  ║
║  │ M1_Mirror                                           ▼   │  ║
║  └─────────────────────────────────────────────────────────┘  ║
║  ○ Brackets  ○ M1_Mirror  ○ Support_Arms  ● + New Topic       ║
║                                                               ║
║  ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐   ║
║  │              📁 Drop model files here                  │   ║
║  │              .sim  .prt  .fem  _i.prt                  │   ║
║  │                 or click to browse                     │   ║
║  └ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘   ║
║                                                               ║
║  Files                                              [Clear]   ║
║  ┌─────────────────────────────────────────────────────────┐  ║
║  │ ✓ bracket_sim1.sim          1.2 MB                      │  ║
║  │ ✓ bracket.prt               3.4 MB                      │  ║
║  │ ✓ bracket_fem1.fem          2.1 MB                      │  ║
║  │ ✓ bracket_fem1_i.prt        0.8 MB  ← Idealized!        │  ║
║  └─────────────────────────────────────────────────────────┘  ║
║                                                               ║
║  ▼ Model Information                              ✓ Ready     ║
║  ┌─────────────────────────────────────────────────────────┐  ║
║  │ Solver: NX Nastran                                      │  ║
║  │ Estimated Mass: 2.34 kg                                 │  ║
║  │                                                         │  ║
║  │ Design Variable Candidates (5 found):                   │  ║
║  │ ★ rib_thickness = 5.0 mm    [2.5 - 10.0]                │  ║
║  │ ★ web_height = 20.0 mm      [10.0 - 40.0]               │  ║
║  │ ★ flange_width = 15.0 mm    [7.5 - 30.0]                │  ║
║  └─────────────────────────────────────────────────────────┘  ║
║                                                               ║
║  ┌─────────────────────────────────────────────────────────┐  ║
║  │ ☑ Run baseline solve (recommended for accurate values)  │  ║
║  │ ☑ Generate README with Claude                           │  ║
║  └─────────────────────────────────────────────────────────┘  ║
║                                                               ║
║  [ Finalize Study ]                     [ Open Canvas → ]     ║
║                                                               ║
╚═══════════════════════════════════════════════════════════════╝
```

### 4.4 ProgressModal UI

```
╔════════════════════════════════════════════════════════════════╗
║  Creating Study                                          [X]   ║
╠════════════════════════════════════════════════════════════════╣
║                                                                ║
║  bracket_optimization_v1                                       ║
║  Topic: M1_Mirror                                              ║
║                                                                ║
║  ┌──────────────────────────────────────────────────────────┐  ║
║  │                                                          │  ║
║  │  ✓ Creating study folder                          0.5s   │  ║
║  │  ✓ Copying model files                            1.2s   │  ║
║  │  ✓ Running introspection                          3.4s   │  ║
║  │  ● Running baseline solve...                             │  ║
║  │    ├─ Updating parameters                                │  ║
║  │    ├─ Meshing...                                         │  ║
║  │    └─ Solving (iteration 2/5)                            │  ║
║  │  ○ Extracting baseline results                           │  ║
║  │  ○ Generating README with Claude                         │  ║
║  │  ○ Moving to studies folder                              │  ║
║  │  ○ Archiving inbox                                       │  ║
║  │                                                          │  ║
║  └──────────────────────────────────────────────────────────┘  ║
║                                                                ║
║  [━━━━━━━━━━━━━━━━░░░░░░░░░░░░░░░░░░░]  42%                    ║
║                                                                ║
║  Estimated time remaining: ~45 seconds                         ║
║                                                                ║
╚════════════════════════════════════════════════════════════════╝
```
### 4.5 StudyFilesPanel (in Preview)

```
┌────────────────────────────────────────────────────────────────┐
│ 📁 Model Files (4)                              [+ Add]  [↗]   │
├────────────────────────────────────────────────────────────────┤
│ 📦 bracket_sim1.sim          1.2 MB         Simulation         │
│ 📐 bracket.prt               3.4 MB         Geometry           │
│ 🔷 bracket_fem1.fem          2.1 MB         FEM                │
│ 🔶 bracket_fem1_i.prt        0.8 MB         Idealized Part     │
└────────────────────────────────────────────────────────────────┘
```

---
## 5. Claude Skill for README Generation

### 5.1 Skill Location

`.claude/skills/modules/study-readme-generator.md`

### 5.2 Skill Purpose

Claude analyzes the AtomizerSpec and generates:

1. **Comprehensive README.md** - Not a template, but intelligent documentation
2. **Suggested design variables** - Based on introspection candidates
3. **Suggested objectives** - Based on goals.md or reasonable defaults
4. **Suggested extractors** - Mapped to objectives
5. **Suggested constraints** - If mentioned in goals

### 5.3 README Structure

1. **Title & Overview** - Study name, description, quick stats
2. **Optimization Goals** - Primary objective, constraints summary
3. **Model Information** - Solver, baseline mass, warnings
4. **Design Variables** - Table with baseline, bounds, units
5. **Extractors & Objectives** - Physics extraction mapping
6. **Constraints** - Limits and thresholds
7. **Recommended Approach** - Algorithm, trial budget
8. **Files** - Model file listing

---
## 6. Home Page Integration

### 6.1 Layout

```
┌──────────────────────────────────────────────────────────────────┐
│ [Logo]                            [Canvas Builder]  [Refresh]    │
├──────────────────────────────────────────────────────────────────┤
│ ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐           │
│ │ Studies  │  │ Running  │  │ Trials   │  │ Best     │           │
│ │   15     │  │   2      │  │  1,234   │  │ 2.34e-3  │           │
│ └──────────┘  └──────────┘  └──────────┘  └──────────┘           │
├─────────────────────────────┬────────────────────────────────────┤
│ ┌─────────────────────────┐ │ Study Preview                      │
│ │ + Create New Study      │ │ ┌────────────────────────────────┐ │
│ │                         │ │ │ 📁 Model Files (4)        [+]  │ │
│ │ [CreateStudyCard]       │ │ │ (StudyFilesPanel)              │ │
│ │                         │ │ └────────────────────────────────┘ │
│ └─────────────────────────┘ │                                    │
│                             │ README.md                          │
│ Studies (15)                │ (MarkdownRenderer)                 │
│ ▶ M1_Mirror (5)             │                                    │
│ ▶ Brackets (3)              │                                    │
│ ▼ Other (2)                 │                                    │
│   └─ test_study  ● Running  │                                    │
└─────────────────────────────┴────────────────────────────────────┘
```

---
## 7. File Changes Summary

| File | Action | Est. Lines |
|------|--------|------------|
| **Backend** | | |
| `backend/api/routes/intake.py` | CREATE | ~350 |
| `backend/api/services/spec_manager.py` | CREATE | ~80 |
| `backend/api/services/claude_readme.py` | CREATE | ~150 |
| `backend/api/main.py` | MODIFY | +5 |
| **Schema/Models** | | |
| `optimization_engine/config/spec_models.py` | MODIFY | +60 |
| `optimization_engine/schemas/atomizer_spec_v2.json` | MODIFY | +50 |
| **Frontend** | | |
| `frontend/src/components/home/CreateStudyCard.tsx` | CREATE | ~400 |
| `frontend/src/components/home/IntrospectionResults.tsx` | CREATE | ~120 |
| `frontend/src/components/home/TopicSelector.tsx` | CREATE | ~80 |
| `frontend/src/components/home/StudyFilesPanel.tsx` | CREATE | ~100 |
| `frontend/src/components/common/ProgressModal.tsx` | CREATE | ~150 |
| `frontend/src/pages/Home.tsx` | MODIFY | +80 |
| `frontend/src/api/client.ts` | MODIFY | +100 |
| `frontend/src/types/atomizer-spec.ts` | MODIFY | +40 |
| **Skills** | | |
| `.claude/skills/modules/study-readme-generator.md` | CREATE | ~120 |

**Total: ~1,885 lines**

---
## 8. Implementation Order

### Phase 1: Backend Foundation (Day 1)
1. Update `spec_models.py` with new fields (status, IntrospectionData)
2. Update JSON schema
3. Create `spec_manager.py` service
4. Create `intake.py` routes (create, introspect, list, topics)
5. Register in `main.py`
6. Test with curl/Postman

### Phase 2: Claude Integration (Day 1-2)
1. Create `study-readme-generator.md` skill
2. Create `claude_readme.py` service
3. Add `/readme/generate` endpoint
4. Test README generation

### Phase 3: Frontend Components (Day 2-3)
1. Add TypeScript types
2. Add API client methods
3. Create `TopicSelector` component
4. Create `IntrospectionResults` component
5. Create `ProgressModal` component
6. Create `CreateStudyCard` component
7. Create `StudyFilesPanel` component

### Phase 4: Home Page Integration (Day 3)
1. Modify `Home.tsx` layout
2. Integrate `CreateStudyCard` above study list
3. Add `StudyFilesPanel` to study preview
4. Test full flow

### Phase 5: Finalization & WebSocket (Day 4)
1. Implement `/finalize` endpoint with baseline solve
2. Add WebSocket progress updates
3. Implement inbox archiving
4. End-to-end testing
5. Documentation updates

---
## 9. Validation Gate Integration

The Validation Gate runs 2-3 test trials before full optimization to catch:

- Mesh not updating (identical results)
- Extractor failures
- Constraint evaluation errors

**Integration point**: After the study is finalized and before optimization starts, a "Validate" button appears in the study preview that runs the gate.

---
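The "mesh not updating" check amounts to comparing the probe trials' results: if different parameter sets return effectively identical values, the geometry is almost certainly not regenerating. A minimal sketch (the helper name and tolerance are assumptions):

```python
# Sketch of the gate's identical-results check. If 2-3 probe trials with
# different design-variable values produce near-identical objective values,
# the mesh is likely frozen. rel_tol is an illustrative choice.
import math

def mesh_appears_frozen(trial_results, rel_tol=1e-9):
    """True if all probe trials produced effectively identical values."""
    first = trial_results[0]
    return all(math.isclose(r, first, rel_tol=rel_tol) for r in trial_results[1:])

print(mesh_appears_frozen([2.3400001, 2.3400001, 2.3400001]))  # True
print(mesh_appears_frozen([2.34, 2.18, 2.51]))                 # False
```

A positive result here is exactly the failure mode the `idealized_prt` field in §2.3 exists to prevent, so the gate's warning should point the user at the missing `*_i.prt`.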
## 10. Success Criteria

- [ ] User can create study by dropping files in dashboard
- [ ] Introspection runs automatically after upload
- [ ] Introspection results show inline with design candidates highlighted
- [ ] "Open Canvas" button works, loading spec into canvas
- [ ] Claude generates comprehensive README from spec + goals
- [ ] Baseline solve runs with WebSocket progress display
- [ ] Study moves to correct topic folder
- [ ] Inbox folder is archived after success
- [ ] `atomizer_spec.json` is the ONLY configuration file used
- [ ] Spec status updates correctly through workflow
- [ ] Canvas can load and edit spec from inbox (pre-finalization)

---
## 11. Error Handling

### Baseline Solve Failure

If the baseline solve fails:

- Still create the study
- Set `spec.meta.status = "configured"` (not "validated")
- Store error in `spec.model.introspection.baseline.error`
- README notes baseline was attempted but failed
- User can retry baseline later or proceed without it

### Missing Idealized Part

If `*_i.prt` is not found:

- Add CRITICAL warning to introspection
- Highlight in UI with warning icon
- Still allow study creation (user may add later)
- README includes warning about mesh not updating

### Introspection Failure

If NX introspection fails:

- Store error in spec
- Allow manual configuration via Canvas
- User can retry introspection after fixing issues

---
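The baseline-failure policy above can be condensed into one small function: the study is still created, the status is capped at `"configured"`, and the error lands on the spec. A sketch (the helper name and plain-dict spec shape are assumptions matching §2.2):

```python
# Sketch of the §11 baseline-failure policy. Dict shapes mirror the §2.2
# models; the helper itself is illustrative.
def apply_baseline_result(spec: dict, baseline: dict) -> dict:
    """Record a baseline solve result and advance (or cap) the spec status."""
    intro = spec.setdefault("model", {}).setdefault("introspection", {})
    intro["baseline"] = baseline
    spec.setdefault("meta", {})["status"] = (
        "validated" if baseline.get("success") else "configured"
    )
    return spec

spec = apply_baseline_result(
    {"meta": {"study_name": "demo"}},
    {"success": False, "error": "solver exited with code 3"},
)
print(spec["meta"]["status"])  # configured
```

Because the error is stored on the spec rather than in a log, a later "retry baseline" action has everything it needs to show the user what failed.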
## 12. Future Enhancements (Out of Scope)

- PDF extraction with Claude Vision
- Image analysis for sketches in context/
- Batch study creation (multiple studies at once)
- Study templates from existing studies
- Auto-retry failed baseline with different parameters

---

*Document created: January 22, 2026*
*Approved for implementation*
@@ -110,31 +110,48 @@ frequency = result['frequency']  # Hz

```python
from optimization_engine.extractors.extract_von_mises_stress import extract_solid_stress

# RECOMMENDED: Check ALL solid element types (returns max across all)
result = extract_solid_stress(op2_file, subcase=1)

# Or specify a single element type
result = extract_solid_stress(op2_file, subcase=1, element_type='chexa')

# Returns: {
#   'max_von_mises': float,     # MPa (auto-converted from kPa)
#   'max_stress_element': int,
#   'element_type': str,        # e.g., 'CHEXA', 'CTETRA'
#   'units': 'MPa'
# }

max_stress = result['max_von_mises']  # MPa
```

**IMPORTANT (Updated 2026-01-22):**
- By default, checks ALL solid types: CTETRA, CHEXA, CPENTA, CPYRAM
- CHEXA elements often have the highest stress (not CTETRA!)
- Auto-converts from kPa to MPa (NX kg-mm-s unit system outputs kPa)
- Returns Elemental Nodal stress (peak), not Elemental Centroid (averaged)

### E4: BDF Mass Extraction

**Module**: `optimization_engine.extractors.extract_mass_from_bdf`

```python
from optimization_engine.extractors import extract_mass_from_bdf

result = extract_mass_from_bdf(bdf_file)
# Returns: {
#   'total_mass': float,    # kg (primary key)
#   'mass_kg': float,       # kg
#   'mass_g': float,        # grams
#   'cg': [x, y, z],        # center of gravity
#   'num_elements': int
# }

mass_kg = result['mass_kg']  # kg
```

**Note**: Uses `BDFMassExtractor` internally. Reads mass from element geometry and material density in the BDF/DAT file. NX kg-mm-s unit system - mass is directly in kg.

### E5: CAD Expression Mass
1 knowledge_base/lac/optimization_memory/arm_support.jsonl Normal file
@@ -0,0 +1 @@
{"timestamp": "2026-01-22T21:10:37.955211", "study_name": "stage_3_arm", "geometry_type": "arm_support", "method": "TPE", "objectives": ["displacement", "mass"], "n_objectives": 2, "design_vars": 3, "trials": 21, "converged": false, "convergence_trial": null, "convergence_ratio": null, "best_value": null, "best_params": null, "notes": ""}
@@ -9,3 +9,11 @@
|
|||||||
{"timestamp": "2026-01-01T21:06:37.877252", "category": "failure", "context": "V13 optimization had 45 FEA failures (34% failure rate)", "insight": "rib_thickness parameter has CAD geometry constraint at ~9mm. All trials with rib_thickness > 9.0 failed. Set max to 9.0 (was 12.0). This is a critical CAD constraint not documented anywhere - the NX model geometry breaks with thicker radial ribs.", "confidence": 0.95, "tags": ["m1_mirror", "cad_constraint", "rib_thickness", "V13", "parameter_bounds"]}
|
{"timestamp": "2026-01-01T21:06:37.877252", "category": "failure", "context": "V13 optimization had 45 FEA failures (34% failure rate)", "insight": "rib_thickness parameter has CAD geometry constraint at ~9mm. All trials with rib_thickness > 9.0 failed. Set max to 9.0 (was 12.0). This is a critical CAD constraint not documented anywhere - the NX model geometry breaks with thicker radial ribs.", "confidence": 0.95, "tags": ["m1_mirror", "cad_constraint", "rib_thickness", "V13", "parameter_bounds"]}
|
||||||
{"timestamp": "2026-01-06T11:00:00.000000", "category": "failure", "context": "flat_back_final study failed at journal line 1042. params.exp contained '[mm]description=Best design from V10...' which is not a valid NX expression.", "insight": "CONFIG DATA LEAKAGE INTO EXPRESSIONS: When config contains a 'starting_design' section with documentation fields like 'description', these string values get passed to NX as expressions if not filtered. The fix is to check isinstance(value, (int, float)) before adding to expressions dict. NEVER blindly iterate config dictionaries and pass to NX - always filter by type. The journal failed because NX cannot create an expression named 'description' with a string value.", "confidence": 1.0, "tags": ["nx", "expressions", "config", "starting_design", "type-filtering", "journal-failure"]}
|
{"timestamp": "2026-01-06T11:00:00.000000", "category": "failure", "context": "flat_back_final study failed at journal line 1042. params.exp contained '[mm]description=Best design from V10...' which is not a valid NX expression.", "insight": "CONFIG DATA LEAKAGE INTO EXPRESSIONS: When config contains a 'starting_design' section with documentation fields like 'description', these string values get passed to NX as expressions if not filtered. The fix is to check isinstance(value, (int, float)) before adding to expressions dict. NEVER blindly iterate config dictionaries and pass to NX - always filter by type. The journal failed because NX cannot create an expression named 'description' with a string value.", "confidence": 1.0, "tags": ["nx", "expressions", "config", "starting_design", "type-filtering", "journal-failure"]}
|
||||||
{"timestamp": "2026-01-13T11:00:00.000000", "category": "failure", "context": "Created m1_mirror_flatback_lateral study without README.md despite: (1) OP_01 protocol requiring it, (2) PRIOR LAC FAILURE entry from 2025-12-17 documenting same mistake", "insight": "REPEATED FAILURE - DID NOT LEARN FROM LAC: This exact failure was documented on 2025-12-17 with clear remediation (use TodoWrite to track ALL required outputs). Yet I repeated the same mistake. ROOT CAUSE: Did not read failure.jsonl at session start as required by CLAUDE.md initialization steps. The CLAUDE.md explicitly says MANDATORY: Read knowledge_base/lac/session_insights/failure.jsonl. I skipped this step. FIX: Actually follow the initialization protocol. When creating studies, the checklist MUST include README.md and I must verify its creation before declaring the study complete.", "confidence": 1.0, "tags": ["study-creation", "readme", "repeated-failure", "lac-not-read", "session-initialization", "process-discipline"], "severity": "critical", "rule": "At session start, ACTUALLY READ failure.jsonl as mandated. When creating studies, use TodoWrite with explicit README.md item and verify completion."}
{"timestamp": "2026-01-22T13:27:00", "category": "failure", "context": "DevLoop end-to-end test of support_arm study - NX solver failed to load geometry parts", "insight": "NX SOLVER PART LOADING: When running FEA on a new study, the NX journal may fail with NoneType error when trying to load geometry/idealized parts. The issue is that Parts.Open() returns a tuple (part, status) but the code expects just the part. Also need to ensure the part paths are absolute. Fix: Check return tuple and use absolute paths for part loading.", "confidence": 0.9, "tags": ["nx", "solver", "part-loading", "devloop", "support_arm"], "severity": "high"}
{"timestamp": "2026-01-22T13:37:05.354753", "category": "failure", "context": "Importing extractors from optimization_engine.extractors", "insight": "extract_displacement and extract_mass_from_bdf were not exported in __init__.py __all__ list. Always verify new extractors are added to both imports AND __all__ exports.", "confidence": 0.95, "tags": ["extractors", "imports", "python"]}
{"timestamp": "2026-01-22T13:37:05.357090", "category": "failure", "context": "NX solver failing to load geometry parts in solve_simulation.py", "insight": "Parts.Open() can return (None, status) instead of (part, status). Must check if loaded_part is not None before accessing .Name attribute. Fixed around line 852 in solve_simulation.py.", "confidence": 0.95, "tags": ["nx", "solver", "parts", "null-check"]}
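A sketch of the defensive loading pattern from this entry, assuming `parts` behaves like NXOpen's `Session.Parts` collection whose `Open()` returns a `(part, load_status)` tuple; the helper name is illustrative.

```python
from pathlib import Path

def load_part_checked(parts, part_path):
    """Open an NX part defensively: absolute path, tuple unpack, None check."""
    abs_path = str(Path(part_path).resolve())  # NX needs absolute paths
    loaded_part, status = parts.Open(abs_path)
    if loaded_part is None:  # guard before touching attributes like .Name
        raise RuntimeError(f"NX failed to load part {abs_path} (status={status})")
    return loaded_part
```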
{"timestamp": "2026-01-22T13:37:05.357090", "category": "failure", "context": "Nastran solve failing with memory allocation error", "insight": "Nastran may request large memory (28GB+) and fail if not available. Check support_arm_sim1-solution_1.log for memory error code 12. May need to configure memory limits in Nastran or close other applications.", "confidence": 0.8, "tags": ["nastran", "memory", "solver", "error"]}
{"timestamp": "2026-01-22T15:12:01.584128", "category": "failure", "context": "DevLoop closed-loop development system", "insight": "DevLoop was built but NOT used in this session. Claude defaulted to manual debugging instead of using devloop_cli.py. Need to make DevLoop the default workflow for any multi-step task. Add reminder in CLAUDE.md to use DevLoop for any task with 3+ steps.", "confidence": 0.95, "tags": ["devloop", "process", "automation", "workflow"]}
{"timestamp": "2026-01-22T15:23:37.040324", "category": "failure", "context": "NXSolver initialization with license_server parameter", "insight": "NXSolver does NOT have license_server in __init__. It reads from SPLM_LICENSE_SERVER env var. Set os.environ before creating solver.", "confidence": 1.0, "tags": ["nxsolver", "license", "config", "gotcha"]}
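The gotcha above in code form: `NXSolver` has no `license_server` keyword, so the environment variable must be set before the solver is constructed. The server address below is a placeholder, not the real license host.

```python
import os

# Set SPLM_LICENSE_SERVER *before* creating the solver; NXSolver reads it
# from the environment at construction time.
os.environ["SPLM_LICENSE_SERVER"] = "28000@license-host"

# solver = NXSolver(...)  # hypothetical call; now picks up the env var
print(os.environ["SPLM_LICENSE_SERVER"])
```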
{"timestamp": "2026-01-22T21:00:03.480993", "category": "failure", "context": "Stage 3 arm baseline test: stress=641.8 MPa vs limit=82.5 MPa", "insight": "Stage 3 arm baseline design has stress 641.8 MPa, far exceeding 30% Al yield (82.5 MPa). Either the constraint is too restrictive for this geometry, or design needs significant thickening. Consider relaxing constraint to 200 MPa (73% yield) like support_arm study, or find stiff/light designs.", "confidence": 0.9, "tags": ["stage3_arm", "stress_constraint", "infeasible_baseline"]}
{"timestamp": "2026-01-22T21:10:37.955211", "category": "failure", "context": "Stage 3 arm optimization: 21 trials, 0 feasible (stress 600-680 MPa vs 200 MPa limit)", "insight": "Stage 3 arm geometry has INHERENT HIGH STRESS CONCENTRATIONS. Even 200 MPa (73% yield) constraint is impossible to satisfy with current design variables (arm_thk, center_space, end_thk). All 21 trials showed stress 600-680 MPa regardless of parameters. This geometry needs: (1) stress-reducing features (fillets), (2) higher yield material, or (3) redesigned load paths. DO NOT use stress constraint <600 MPa for this geometry without redesign.", "confidence": 1.0, "tags": ["stage3_arm", "stress_constraint", "geometry_limitation", "infeasible"]}
@@ -1,2 +1,3 @@
{"timestamp": "2025-12-24T08:13:38.642843", "category": "protocol_clarification", "context": "SYS_14 Neural Acceleration with dashboard integration", "insight": "When running neural surrogate turbo optimization, FEA validation trials MUST be logged to Optuna for dashboard visibility. Use optuna.create_study() with load_if_exists=True, then for each FEA result: trial=study.ask(), set params via suggest_float(), set objectives as user_attrs, then study.tell(trial, weighted_sum).", "confidence": 0.95, "tags": ["SYS_14", "neural", "optuna", "dashboard", "turbo"]}
{"timestamp": "2025-12-28T10:15:00", "category": "protocol_clarification", "context": "SYS_14 v2.3 update with TrialManager integration", "insight": "SYS_14 Neural Acceleration protocol updated to v2.3. Now uses TrialManager for consistent trial_NNNN naming instead of iter{N}. Key components: (1) TrialManager for folder+DB management, (2) DashboardDB for Optuna-compatible schema, (3) Trial numbers are monotonically increasing and NEVER reset. Reference implementation: studies/M1_Mirror/m1_mirror_cost_reduction_flat_back_V5/run_turbo_optimization.py", "confidence": 0.95, "tags": ["SYS_14", "trial_manager", "dashboard_db", "v2.3"]}
{"timestamp": "2026-01-22T21:10:37.956764", "category": "protocol_clarification", "context": "Stage 3 arm study uses 1_model instead of 1_setup/model", "insight": "Dashboard intake creates studies with 1_model/ folder for CAD files, not the standard 1_setup/model/ structure. The run_optimization.py template uses MODEL_DIR = STUDY_DIR / 1_model for these intake-created studies. When fixing/completing intake studies, do NOT move files to 1_setup/model - just use the existing 1_model path.", "confidence": 0.9, "tags": ["study_structure", "dashboard_intake", "1_model", "paths"]}
@@ -9,3 +9,12 @@
{"timestamp": "2025-12-29T09:47:47.612485", "category": "success_pattern", "context": "Disk space optimization for FEA studies", "insight": "Per-trial FEA files are ~150MB but only OP2+JSON (~70MB) are essential. PRT/FEM/SIM/DAT are copies of master files and can be deleted after study completion. Archive to dalidou server for long-term storage.", "confidence": 0.95, "tags": ["disk_optimization", "archival", "study_management", "dalidou"], "related_files": ["optimization_engine/utils/study_archiver.py", "docs/protocols/operations/OP_07_DISK_OPTIMIZATION.md"]}
{"timestamp": "2026-01-02T14:30:00", "category": "success_pattern", "context": "Study Interview Mode implementation and routing update", "insight": "STUDY CREATION DEFAULT: Interview Mode is now the DEFAULT for all study creation requests. Triggers: create a study, new study, set up study, optimize this, minimize mass - any study creation intent. Benefits: (1) Material-aware validation checks stress vs yield, (2) Anti-pattern detection warns about mass-no-constraint, (3) Auto extractor mapping E1-E10, (4) State persistence for interrupted sessions, (5) Blueprint generation with full validation. Skip with: skip interview, quick setup, manual config. Implementation: optimization_engine/interview/ with StudyInterviewEngine, QuestionEngine, EngineeringValidator, StudyBlueprint. All 129 tests passing.", "confidence": 1.0, "tags": ["interview_mode", "study_creation", "default", "validation", "anti_pattern", "materials"], "related_files": [".claude/skills/modules/study-interview-mode.md", "docs/protocols/operations/OP_01_CREATE_STUDY.md", "optimization_engine/interview/study_interview.py"]}
{"timestamp": "2026-01-02T14:45:00", "category": "success_pattern", "context": "Study Interview Mode implementation complete", "insight": "INTERVIEW MODE DEFAULT: Study creation now uses Interview Mode by default for all study creation requests. This is a major usability improvement. Triggers: create a study, new study, set up, optimize this - any study creation intent. Key features: (1) Material-aware validation with 12 materials and fuzzy name matching, (2) Anti-pattern detection for 12 common mistakes, (3) Auto extractor mapping E1-E24, (4) 7-phase interview flow, (5) State persistence for interrupted sessions, (6) Blueprint validation before generation. Skip with: skip interview, quick setup, manual. Implementation in optimization_engine/interview/ with 129 tests passing. Full documentation in: .claude/skills/modules/study-interview-mode.md, docs/protocols/operations/OP_01_CREATE_STUDY.md", "confidence": 1.0, "tags": ["interview_mode", "study_creation", "default", "usability", "materials", "anti_pattern", "validation"], "related_files": [".claude/skills/modules/study-interview-mode.md", "docs/protocols/operations/OP_01_CREATE_STUDY.md", "optimization_engine/interview/"]}
{"timestamp": "2026-01-22T13:00:00", "category": "success_pattern", "context": "DevLoop closed-loop development system implementation", "insight": "DEVLOOP PATTERN: Implemented autonomous development cycle that coordinates Gemini (planning) + Claude Code (implementation) + Dashboard (testing) + LAC (learning). 7-stage loop: PLAN -> BUILD -> TEST -> ANALYZE -> FIX -> VERIFY -> LOOP. Key components: (1) DevLoopOrchestrator in optimization_engine/devloop/, (2) DashboardTestRunner for automated testing, (3) GeminiPlanner for strategic planning with mock fallback, (4) ClaudeCodeBridge for implementation, (5) ProblemAnalyzer for failure analysis. API at /api/devloop/* with WebSocket for real-time updates. CLI tool at tools/devloop_cli.py. Frontend panel DevLoopPanel.tsx. Test with: python tools/devloop_cli.py test --study support_arm", "confidence": 0.95, "tags": ["devloop", "automation", "testing", "gemini", "claude", "dashboard", "closed-loop"], "related_files": ["optimization_engine/devloop/orchestrator.py", "tools/devloop_cli.py", "docs/guides/DEVLOOP.md"]}
{"timestamp": "2026-01-22T13:37:05.355957", "category": "success_pattern", "context": "Extracting mass from Nastran BDF files", "insight": "Use BDFMassExtractor from bdf_mass_extractor.py for reliable mass extraction. It uses elem.Mass() which handles unit conversions properly. The simpler extract_mass_from_bdf.py now wraps this.", "confidence": 0.9, "tags": ["mass", "bdf", "extraction", "pyNastran"]}
{"timestamp": "2026-01-22T13:47:38.696196", "category": "success_pattern", "context": "Stress extraction from NX Nastran OP2 files", "insight": "pyNastran returns stress in kPa for NX kg-mm-s unit system. Divide by 1000 to get MPa. Must check ALL solid element types (CTETRA, CHEXA, CPENTA, CPYRAM) to find true max. Elemental Nodal gives peak stress (143.5 MPa), Elemental Centroid gives averaged (100.3 MPa).", "confidence": 0.95, "tags": ["stress", "extraction", "units", "pyNastran", "nastran"]}
{"timestamp": "2026-01-22T15:12:01.584128", "category": "success_pattern", "context": "Dashboard study discovery", "insight": "Dashboard now supports atomizer_spec.json as primary config. Updated _load_study_info() in optimization.py to check atomizer_spec.json first, then fall back to optimization_config.json. Studies with atomizer_spec.json are now discoverable.", "confidence": 0.9, "tags": ["dashboard", "atomizer_spec", "config", "v2.0"]}
{"timestamp": "2026-01-22T15:12:01.584128", "category": "success_pattern", "context": "Extracting stress from NX Nastran results", "insight": "CONFIRMED: pyNastran returns stress in kPa for NX kg-mm-s unit system. Divide by 1000 for MPa. Must check ALL solid types (CTETRA, CHEXA, CPENTA, CPYRAM) - CHEXA often has highest stress. Elemental Nodal (143.5 MPa) vs Elemental Centroid (100.3 MPa) - use Nodal for conservative peak stress.", "confidence": 1.0, "tags": ["stress", "extraction", "units", "nastran", "verified"]}
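Unit-conversion sketch for the entry above: pyNastran reports von Mises stress in kPa for NX's kg-mm-s system, so divide by 1000 for MPa and take the maximum across ALL solid element types. The dict below stands in for per-element-type peaks read from an OP2 file; values match the figures in the entry.

```python
KPA_PER_MPA = 1000.0

def max_solid_stress_mpa(peak_kpa_by_type: dict) -> float:
    """Max von Mises stress in MPa over CTETRA/CHEXA/CPENTA/CPYRAM peaks."""
    return max(peak_kpa_by_type.values()) / KPA_PER_MPA

peaks_kpa = {"CTETRA": 100_300.0, "CHEXA": 143_500.0, "CPENTA": 98_000.0}
print(max_solid_stress_mpa(peaks_kpa))  # CHEXA governs: 143.5 MPa
```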
{"timestamp": "2026-01-22T15:23:37.040324", "category": "success_pattern", "context": "Creating new study with DevLoop workflow", "insight": "DevLoop workflow: plan -> create dirs -> copy models -> atomizer_spec.json -> validate canvas -> run_optimization.py -> devloop test -> FEA validation. 8 steps completed for support_arm_lightweight.", "confidence": 0.95, "tags": ["devloop", "workflow", "study_creation", "success"]}
{"timestamp": "2026-01-22T15:23:37.040324", "category": "success_pattern", "context": "Single-objective optimization with constraints", "insight": "Single-objective with constraints: one objective in array, constraints use threshold+operator, penalty in objective function, canvas edges ext->obj for objective, ext->con for constraints.", "confidence": 0.9, "tags": ["optimization", "single_objective", "constraints", "canvas"]}
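The "penalty in objective function" part of this pattern as a sketch; the stress limit and penalty weight are illustrative, not values from a specific study.

```python
STRESS_LIMIT_MPA = 200.0

def penalized_objective(mass_kg: float, stress_mpa: float) -> float:
    """Mass objective plus a large penalty when the stress constraint is violated."""
    penalty = 0.0
    if stress_mpa > STRESS_LIMIT_MPA:  # constraint: stress <= threshold
        penalty = 1e3 * (stress_mpa - STRESS_LIMIT_MPA) / STRESS_LIMIT_MPA
    return mass_kg + penalty

print(penalized_objective(4.2, 150.0))  # feasible -> 4.2
print(penalized_objective(4.2, 250.0))  # infeasible -> 254.2
```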
{"timestamp": "2026-01-22T16:15:11.449264", "category": "success_pattern", "context": "Atomizer UX System implementation - January 2026", "insight": "New study workflow: (1) Put files in studies/_inbox/project_name/models/, (2) Optionally add intake.yaml and context/goals.md, (3) Run atomizer intake project_name, (4) Run atomizer gate study_name to validate with test trials, (5) If passed, approve with --approve flag, (6) Run optimization, (7) Run atomizer finalize study_name to generate interactive HTML report. The CLI commands are: intake, gate, list, finalize.", "confidence": 1.0, "tags": ["workflow", "ux", "cli", "intake", "validation", "report"]}
{"timestamp": "2026-01-22T21:10:37.956764", "category": "success_pattern", "context": "Stage 3 arm study setup and execution with DevLoop", "insight": "DevLoop test command (devloop_cli.py test --study) successfully validated study setup before optimization. The 5 standard tests (directory, spec JSON, README, run_optimization.py, model dir) caught structure issues early. Full workflow: (1) Copy model files, (2) Create atomizer_spec.json with extractors/objectives/constraints, (3) Create run_optimization.py from template, (4) Create README.md, (5) Run DevLoop tests, (6) Execute optimization.", "confidence": 0.95, "tags": ["devloop", "study_creation", "workflow", "testing"]}
@@ -1 +1,2 @@
{"timestamp": "2025-12-29T12:00:00", "category": "user_preference", "context": "Git remote configuration", "insight": "GitHub repository URL is https://github.com/Anto01/Atomizer.git (private repo). Always push to both origin (Gitea at 192.168.86.50:3000) and github remote.", "confidence": 1.0, "tags": ["git", "github", "remote", "configuration"]}
{"timestamp": "2026-01-22T16:13:41.159557", "category": "user_preference", "context": "Atomizer UX architecture decision - January 2026", "insight": "NO DASHBOARD API - Use Claude Code CLI as the primary interface. The user (engineer) interacts with Atomizer through: (1) Claude Code chat in terminal - natural language, (2) CLI commands like atomizer intake/gate/finalize, (3) Dashboard is for VIEWING only (monitoring, reports), not for configuration. All study creation, validation, and management goes through Claude Code or CLI.", "confidence": 1.0, "tags": ["architecture", "ux", "cli", "dashboard", "claude-code"]}
@@ -1,2 +1,6 @@
{"timestamp": "2025-12-24T08:13:38.641823", "category": "workaround", "context": "Turbo optimization study structure", "insight": "Turbo studies use 3_results/ not 2_results/. Dashboard already supports both. Use study.db for Optuna-format (dashboard compatible), study_custom.db for internal custom tracking. Backfill script (scripts/backfill_optuna.py) can convert existing trials.", "confidence": 0.9, "tags": ["turbo", "study_structure", "optuna", "dashboard"]}
{"timestamp": "2025-12-28T10:15:00", "category": "workaround", "context": "Custom database schema not showing in dashboard", "insight": "DASHBOARD COMPATIBILITY: If a study uses custom database schema instead of Optuna's (missing trial_values, trial_params, trial_user_attributes tables), the dashboard won't show trials. Use convert_custom_to_optuna() from dashboard_db.py to convert. This function drops all tables and recreates with Optuna-compatible schema, migrating all trial data.", "confidence": 0.95, "tags": ["dashboard", "optuna", "database", "schema", "migration"]}
{"timestamp": "2026-01-22T13:37:05.353675", "category": "workaround", "context": "NX installation paths on this machine", "insight": "The working NX installation is DesigncenterNX2512, NOT NX2506 or NX2412. NX2506 only has ThermalFlow components. Always use C:\\Program Files\\Siemens\\DesigncenterNX2512 for NX_INSTALL_DIR.", "confidence": 1.0, "tags": ["nx", "installation", "path", "config"]}
{"timestamp": "2026-01-22T15:12:01.584128", "category": "workaround", "context": "Nastran failing with 28GB memory allocation error", "insight": "Bun processes can consume 10-15GB of memory in background. When Nastran fails with memory allocation error, check Task Manager for Bun processes and kill them. Command: Get-Process -Name bun | Stop-Process -Force", "confidence": 1.0, "tags": ["nastran", "memory", "bun", "workaround"]}
{"timestamp": "2026-01-22T15:12:01.584128", "category": "workaround", "context": "NX installation paths", "insight": "CONFIRMED: Working NX installation is DesigncenterNX2512 at C:\\Program Files\\Siemens\\DesigncenterNX2512. NX2506 only has ThermalFlow. NX2412 exists but DesigncenterNX2512 is the primary working install.", "confidence": 1.0, "tags": ["nx", "installation", "path", "verified"]}
{"timestamp": "2026-01-22T15:23:37.040324", "category": "workaround", "context": "DevLoop test runner looking in wrong study path", "insight": "DevLoop test_runner.py was hardcoded to look in studies/_Other. Fixed devloop_cli.py to search flat structure first, then nested. Study path resolution now dynamic.", "confidence": 1.0, "tags": ["devloop", "bug", "fixed", "study_path"]}
optimization_engine/cli/__init__.py (new file, 19 lines)
@@ -0,0 +1,19 @@
"""
Atomizer CLI
============

Command-line interface for Atomizer operations.

Commands:
- atomizer intake <folder> - Process an intake folder
- atomizer validate <study> - Validate a study before running
- atomizer finalize <study> - Generate final report

Usage:
    from optimization_engine.cli import main
    main()
"""

from .main import main, app

__all__ = ["main", "app"]
optimization_engine/cli/main.py (new file, 383 lines)
@@ -0,0 +1,383 @@
"""
Atomizer CLI Main Entry Point
=============================

Provides the `atomizer` command with subcommands:
- intake: Process an intake folder
- validate: Validate a study
- finalize: Generate final report
- list: List studies

Usage:
    atomizer intake bracket_project
    atomizer validate bracket_mass_opt
    atomizer finalize bracket_mass_opt --format html
"""

from __future__ import annotations

import sys
from pathlib import Path
from typing import Optional
import argparse
import logging


def setup_logging(verbose: bool = False):
    """Setup logging configuration."""
    level = logging.DEBUG if verbose else logging.INFO
    logging.basicConfig(
        level=level,
        format="%(message)s",
    )


def find_project_root() -> Path:
    """Find the Atomizer project root."""
    current = Path(__file__).parent
    while current != current.parent:
        if (current / "CLAUDE.md").exists():
            return current
        current = current.parent
    return Path.cwd()


def cmd_intake(args):
    """Process an intake folder."""
    from optimization_engine.intake import IntakeProcessor

    # Determine inbox folder
    inbox_path = Path(args.folder)

    if not inbox_path.is_absolute():
        # Check if it's in _inbox
        project_root = find_project_root()
        inbox_dir = project_root / "studies" / "_inbox"

        if (inbox_dir / args.folder).exists():
            inbox_path = inbox_dir / args.folder
        elif (project_root / "studies" / args.folder).exists():
            inbox_path = project_root / "studies" / args.folder

    if not inbox_path.exists():
        print(f"Error: Folder not found: {inbox_path}")
        return 1

    print(f"Processing intake: {inbox_path}")
    print("=" * 60)

    # Progress callback
    def progress(message: str, percent: float):
        bar_width = 30
        filled = int(bar_width * percent)
        bar = "=" * filled + "-" * (bar_width - filled)
        print(f"\r[{bar}] {percent * 100:5.1f}% {message}", end="", flush=True)
        if percent >= 1.0:
            print()  # Newline at end

    try:
        processor = IntakeProcessor(
            inbox_path,
            progress_callback=progress if not args.quiet else None,
        )

        context = processor.process(
            run_baseline=not args.skip_baseline,
            copy_files=True,
            run_introspection=True,
        )

        print("\n" + "=" * 60)
        print("INTAKE COMPLETE")
        print("=" * 60)

        # Show summary
        summary = context.get_context_summary()
        print(f"\nStudy: {context.study_name}")
        print(f"Location: {processor.study_dir}")
        print(f"\nContext loaded:")
        print(f" Model: {'Yes' if summary['has_model'] else 'No'}")
        print(f" Introspection: {'Yes' if summary['has_introspection'] else 'No'}")
        print(f" Baseline: {'Yes' if summary['has_baseline'] else 'No'}")
        print(f" Goals: {'Yes' if summary['has_goals'] else 'No'}")
        print(f" Pre-config: {'Yes' if summary['has_preconfig'] else 'No'}")
        print(
            f" Expressions: {summary['num_expressions']} ({summary['num_dv_candidates']} candidates)"
        )

        if context.has_baseline:
            print(f"\nBaseline: {context.get_baseline_summary()}")

        if summary["warnings"]:
            print(f"\nWarnings:")
            for w in summary["warnings"]:
                print(f" - {w}")

        if args.interview:
            print(f"\nTo continue with interview: atomizer interview {context.study_name}")
        elif args.canvas:
            print(f"\nOpen dashboard to configure in Canvas mode")
        else:
            print(f"\nNext steps:")
            print(f" 1. Review context in {processor.study_dir / '0_intake'}")
            print(f" 2. Configure study via interview or canvas")
            print(f" 3. Run: atomizer validate {context.study_name}")

        return 0

    except Exception as e:
        print(f"\nError: {e}")
        if args.verbose:
            import traceback

            traceback.print_exc()
        return 1


def cmd_validate(args):
    """Validate a study before running."""
    from optimization_engine.validation import ValidationGate

    # Find study directory
    study_path = Path(args.study)

    if not study_path.is_absolute():
        project_root = find_project_root()
        study_path = project_root / "studies" / args.study

    if not study_path.exists():
        print(f"Error: Study not found: {study_path}")
        return 1

    print(f"Validating study: {study_path.name}")
    print("=" * 60)

    # Progress callback
    def progress(message: str, percent: float):
        bar_width = 30
        filled = int(bar_width * percent)
        bar = "=" * filled + "-" * (bar_width - filled)
        print(f"\r[{bar}] {percent * 100:5.1f}% {message}", end="", flush=True)
        if percent >= 1.0:
            print()

    try:
        gate = ValidationGate(
            study_path,
            progress_callback=progress if not args.quiet else None,
        )

        result = gate.validate(
            run_test_trials=not args.skip_trials,
            n_test_trials=args.trials,
        )

        print("\n" + "=" * 60)
        if result.passed:
            print("VALIDATION PASSED")
        else:
            print("VALIDATION FAILED")
        print("=" * 60)

        # Show spec validation
        if result.spec_check:
            print(f"\nSpec Validation:")
            print(f" Errors: {len(result.spec_check.errors)}")
            print(f" Warnings: {len(result.spec_check.warnings)}")

            for issue in result.spec_check.errors:
                print(f" [ERROR] {issue.message}")
            for issue in result.spec_check.warnings[:5]:  # Limit warnings shown
                print(f" [WARN] {issue.message}")

        # Show test trials
        if result.test_trials:
            print(f"\nTest Trials:")
            successful = [t for t in result.test_trials if t.success]
            print(f" Completed: {len(successful)}/{len(result.test_trials)}")

            if result.results_vary:
                print(f" Results vary: Yes (good!)")
            else:
                print(f" Results vary: NO - MESH MAY NOT BE UPDATING!")

            # Show trial results table
            print(f"\n {'Trial':<8} {'Status':<10} {'Time (s)':<10}", end="")
            if successful and successful[0].objectives:
                for obj in list(successful[0].objectives.keys())[:3]:
                    print(f" {obj:<12}", end="")
            print()
            print(" " + "-" * 50)

            for trial in result.test_trials:
                status = "OK" if trial.success else "FAIL"
                print(
                    f" {trial.trial_number:<8} {status:<10} {trial.solve_time_seconds:<10.1f}",
                    end="",
                )
                for val in list(trial.objectives.values())[:3]:
                    print(f" {val:<12.4f}", end="")
                print()

        # Show estimates
        if result.avg_solve_time:
            print(f"\nRuntime Estimate:")
            print(f" Avg solve time: {result.avg_solve_time:.1f}s")
            if result.estimated_total_runtime:
                hours = result.estimated_total_runtime / 3600
                print(f" Est. total: {hours:.1f} hours")

        # Show errors
        if result.errors:
            print(f"\nErrors:")
            for err in result.errors:
                print(f" - {err}")

        # Approve if passed and requested
        if result.passed:
            if args.approve:
                gate.approve()
                print(f"\nStudy approved for optimization.")
            else:
                print(f"\nTo approve and start: atomizer validate {args.study} --approve")
|
||||||
|
|
||||||
|
# Save result
|
||||||
|
output_path = gate.save_result(result)
|
||||||
|
print(f"\nResult saved: {output_path}")
|
||||||
|
|
||||||
|
return 0 if result.passed else 1
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
print(f"\nError: {e}")
|
||||||
|
if args.verbose:
|
||||||
|
import traceback
|
||||||
|
|
||||||
|
traceback.print_exc()
|
||||||
|
return 1
|
||||||
|
|
||||||
|
|
||||||
|
def cmd_list(args):
|
||||||
|
"""List available studies."""
|
||||||
|
project_root = find_project_root()
|
||||||
|
studies_dir = project_root / "studies"
|
||||||
|
|
||||||
|
print("Available Studies:")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
# List inbox items
|
||||||
|
inbox_dir = studies_dir / "_inbox"
|
||||||
|
if inbox_dir.exists():
|
||||||
|
inbox_items = [d for d in inbox_dir.iterdir() if d.is_dir() and not d.name.startswith(".")]
|
||||||
|
if inbox_items:
|
||||||
|
print("\nPending Intake (_inbox/):")
|
||||||
|
for item in sorted(inbox_items):
|
||||||
|
has_config = (item / "intake.yaml").exists()
|
||||||
|
has_model = bool(list(item.glob("**/*.sim")))
|
||||||
|
status = []
|
||||||
|
if has_config:
|
||||||
|
status.append("config")
|
||||||
|
if has_model:
|
||||||
|
status.append("model")
|
||||||
|
print(f" {item.name:<30} [{', '.join(status) or 'empty'}]")
|
||||||
|
|
||||||
|
# List active studies
|
||||||
|
print("\nActive Studies:")
|
||||||
|
for study_dir in sorted(studies_dir.iterdir()):
|
||||||
|
if (
|
||||||
|
study_dir.is_dir()
|
||||||
|
and not study_dir.name.startswith("_")
|
||||||
|
and not study_dir.name.startswith(".")
|
||||||
|
):
|
||||||
|
# Check status
|
||||||
|
has_spec = (study_dir / "atomizer_spec.json").exists() or (
|
||||||
|
study_dir / "optimization_config.json"
|
||||||
|
).exists()
|
||||||
|
has_db = (study_dir / "3_results" / "study.db").exists() or (
|
||||||
|
study_dir / "2_results" / "study.db"
|
||||||
|
).exists()
|
||||||
|
has_approval = (study_dir / ".validation_approved").exists()
|
||||||
|
|
||||||
|
status = []
|
||||||
|
if has_spec:
|
||||||
|
status.append("configured")
|
||||||
|
if has_approval:
|
||||||
|
status.append("approved")
|
||||||
|
if has_db:
|
||||||
|
status.append("has_results")
|
||||||
|
|
||||||
|
print(f" {study_dir.name:<30} [{', '.join(status) or 'new'}]")
|
||||||
|
|
||||||
|
return 0
|
||||||
|
|
||||||
|
|
||||||
|
def cmd_finalize(args):
|
||||||
|
"""Generate final report for a study."""
|
||||||
|
print(f"Finalize command not yet implemented for: {args.study}")
|
||||||
|
print("This will generate the interactive HTML report.")
|
||||||
|
return 0
|
||||||
|
|
||||||
|
|
||||||
|
def create_parser() -> argparse.ArgumentParser:
|
||||||
|
"""Create the argument parser."""
|
||||||
|
parser = argparse.ArgumentParser(
|
||||||
|
prog="atomizer",
|
||||||
|
description="Atomizer - FEA Optimization Command Line Interface",
|
||||||
|
)
|
||||||
|
parser.add_argument("-v", "--verbose", action="store_true", help="Verbose output")
|
||||||
|
parser.add_argument("-q", "--quiet", action="store_true", help="Minimal output")
|
||||||
|
|
||||||
|
subparsers = parser.add_subparsers(dest="command", help="Available commands")
|
||||||
|
|
||||||
|
# intake command
|
||||||
|
intake_parser = subparsers.add_parser("intake", help="Process an intake folder")
|
||||||
|
intake_parser.add_argument("folder", help="Path to intake folder")
|
||||||
|
intake_parser.add_argument("--skip-baseline", action="store_true", help="Skip baseline solve")
|
||||||
|
intake_parser.add_argument(
|
||||||
|
"--interview", action="store_true", help="Continue to interview mode"
|
||||||
|
)
|
||||||
|
intake_parser.add_argument("--canvas", action="store_true", help="Open in canvas mode")
|
||||||
|
intake_parser.set_defaults(func=cmd_intake)
|
||||||
|
|
||||||
|
# validate command
|
||||||
|
validate_parser = subparsers.add_parser("validate", help="Validate a study")
|
||||||
|
validate_parser.add_argument("study", help="Study name or path")
|
||||||
|
validate_parser.add_argument("--skip-trials", action="store_true", help="Skip test trials")
|
||||||
|
validate_parser.add_argument("--trials", type=int, default=3, help="Number of test trials")
|
||||||
|
validate_parser.add_argument(
|
||||||
|
"--approve", action="store_true", help="Approve if validation passes"
|
||||||
|
)
|
||||||
|
validate_parser.set_defaults(func=cmd_validate)
|
||||||
|
|
||||||
|
# list command
|
||||||
|
list_parser = subparsers.add_parser("list", help="List studies")
|
||||||
|
list_parser.set_defaults(func=cmd_list)
|
||||||
|
|
||||||
|
# finalize command
|
||||||
|
finalize_parser = subparsers.add_parser("finalize", help="Generate final report")
|
||||||
|
finalize_parser.add_argument("study", help="Study name or path")
|
||||||
|
finalize_parser.add_argument("--format", choices=["html", "pdf", "all"], default="html")
|
||||||
|
finalize_parser.set_defaults(func=cmd_finalize)
|
||||||
|
|
||||||
|
return parser
|
||||||
|
|
||||||
|
|
||||||
|
def main(args=None):
|
||||||
|
"""Main entry point."""
|
||||||
|
parser = create_parser()
|
||||||
|
parsed_args = parser.parse_args(args)
|
||||||
|
|
||||||
|
setup_logging(getattr(parsed_args, "verbose", False))
|
||||||
|
|
||||||
|
if parsed_args.command is None:
|
||||||
|
parser.print_help()
|
||||||
|
return 0
|
||||||
|
|
||||||
|
return parsed_args.func(parsed_args)
|
||||||
|
|
||||||
|
|
||||||
|
# For typer/click compatibility
|
||||||
|
app = main
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
sys.exit(main())
|
||||||
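The CLI above dispatches subcommands through argparse's `set_defaults(func=...)` idiom: each subparser binds its handler, and `main()` just calls `args.func(args)`. A minimal self-contained sketch of that wiring (the `atomizer-demo` program name and `hello` subcommand are illustrative, not part of the real CLI):

```python
import argparse


def cmd_hello(args):
    """Stand-in command handler; real handlers return an exit code."""
    return 0 if args.name else 1


def build_parser() -> argparse.ArgumentParser:
    # Each subcommand binds its handler via set_defaults(func=...),
    # so the entry point can dispatch with args.func(args).
    parser = argparse.ArgumentParser(prog="atomizer-demo")
    sub = parser.add_subparsers(dest="command")
    hello = sub.add_parser("hello")
    hello.add_argument("name")
    hello.set_defaults(func=cmd_hello)
    return parser


parser = build_parser()
args = parser.parse_args(["hello", "world"])
exit_code = args.func(args)  # dispatches to cmd_hello
```

Checking `parsed_args.command is None` before calling `func`, as `main()` does, matters because a bare invocation with no subcommand leaves `func` unset.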
@@ -7,7 +7,7 @@ They provide validation and type safety for the unified configuration system.
 
 from datetime import datetime
 from enum import Enum
-from typing import Any, Dict, List, Literal, Optional, Union
+from typing import Any, Dict, List, Literal, Optional, Tuple, Union
 from pydantic import BaseModel, Field, field_validator, model_validator
 import re
@@ -16,17 +16,34 @@ import re
 # Enums
 # ============================================================================
 
 
 class SpecCreatedBy(str, Enum):
     """Who/what created the spec."""
 
     CANVAS = "canvas"
     CLAUDE = "claude"
     API = "api"
     MIGRATION = "migration"
     MANUAL = "manual"
+    DASHBOARD_INTAKE = "dashboard_intake"
+
+
+class SpecStatus(str, Enum):
+    """Study lifecycle status."""
+
+    DRAFT = "draft"
+    INTROSPECTED = "introspected"
+    CONFIGURED = "configured"
+    VALIDATED = "validated"
+    READY = "ready"
+    RUNNING = "running"
+    COMPLETED = "completed"
+    FAILED = "failed"
 
 
 class SolverType(str, Enum):
     """Supported solver types."""
 
     NASTRAN = "nastran"
     NX_NASTRAN = "NX_Nastran"
     ABAQUS = "abaqus"
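The new `SpecStatus` enum follows the same `str, Enum` base as the others in this file. A minimal sketch of why that base matters for serialization (illustrative `Status` enum with two members, not the real class):

```python
from enum import Enum


class Status(str, Enum):
    # A str-valued Enum: members compare equal to their string value,
    # which makes JSON round-trips and raw-string parsing cheap.
    DRAFT = "draft"
    READY = "ready"


# Parsing a raw string back into the enum member:
s = Status("draft")
```

Because members subclass `str`, a value loaded from JSON can be compared directly against the enum without an explicit conversion step.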
@@ -34,6 +51,7 @@ class SolverType(str, Enum):
 
 
 class SubcaseType(str, Enum):
     """Subcase analysis types."""
 
     STATIC = "static"
     MODAL = "modal"
     THERMAL = "thermal"
@@ -42,6 +60,7 @@ class SubcaseType(str, Enum):
 
 
 class DesignVariableType(str, Enum):
     """Design variable types."""
 
     CONTINUOUS = "continuous"
     INTEGER = "integer"
     CATEGORICAL = "categorical"
@@ -49,6 +68,7 @@ class DesignVariableType(str, Enum):
 
 
 class ExtractorType(str, Enum):
     """Physics extractor types."""
 
     DISPLACEMENT = "displacement"
     FREQUENCY = "frequency"
     STRESS = "stress"
@@ -62,18 +82,21 @@ class ExtractorType(str, Enum):
 
 
 class OptimizationDirection(str, Enum):
     """Optimization direction."""
 
     MINIMIZE = "minimize"
     MAXIMIZE = "maximize"
 
 
 class ConstraintType(str, Enum):
     """Constraint types."""
 
     HARD = "hard"
     SOFT = "soft"
 
 
 class ConstraintOperator(str, Enum):
     """Constraint comparison operators."""
 
     LE = "<="
     GE = ">="
     LT = "<"
@@ -83,6 +106,7 @@ class ConstraintOperator(str, Enum):
 
 
 class PenaltyMethod(str, Enum):
     """Penalty methods for constraints."""
 
     LINEAR = "linear"
     QUADRATIC = "quadratic"
     EXPONENTIAL = "exponential"
@@ -90,6 +114,7 @@ class PenaltyMethod(str, Enum):
 
 
 class AlgorithmType(str, Enum):
     """Optimization algorithm types."""
 
     TPE = "TPE"
     CMA_ES = "CMA-ES"
     NSGA_II = "NSGA-II"
@@ -100,6 +125,7 @@ class AlgorithmType(str, Enum):
 
 
 class SurrogateType(str, Enum):
     """Surrogate model types."""
 
     MLP = "MLP"
     GNN = "GNN"
     ENSEMBLE = "ensemble"
@@ -109,58 +135,104 @@ class SurrogateType(str, Enum):
 # Position Model
 # ============================================================================
 
 
 class CanvasPosition(BaseModel):
     """Canvas position for nodes."""
 
     x: float = 0
     y: float = 0
 
 
+# ============================================================================
+# Introspection Models (for intake workflow)
+# ============================================================================
+
+
+class ExpressionInfo(BaseModel):
+    """Information about an NX expression from introspection."""
+
+    name: str = Field(..., description="Expression name in NX")
+    value: Optional[float] = Field(default=None, description="Current value")
+    units: Optional[str] = Field(default=None, description="Physical units")
+    formula: Optional[str] = Field(default=None, description="Expression formula if any")
+    is_candidate: bool = Field(
+        default=False, description="Whether this is a design variable candidate"
+    )
+    confidence: float = Field(
+        default=0.0, ge=0.0, le=1.0, description="Confidence that this is a DV"
+    )
+
+
+class BaselineData(BaseModel):
+    """Results from baseline FEA solve."""
+
+    timestamp: datetime = Field(..., description="When baseline was run")
+    solve_time_seconds: float = Field(..., description="How long the solve took")
+    mass_kg: Optional[float] = Field(default=None, description="Computed mass from BDF/FEM")
+    max_displacement_mm: Optional[float] = Field(
+        default=None, description="Max displacement result"
+    )
+    max_stress_mpa: Optional[float] = Field(default=None, description="Max von Mises stress")
+    success: bool = Field(default=True, description="Whether baseline solve succeeded")
+    error: Optional[str] = Field(default=None, description="Error message if failed")
+
+
+class IntrospectionData(BaseModel):
+    """Model introspection results stored in the spec."""
+
+    timestamp: datetime = Field(..., description="When introspection was run")
+    solver_type: Optional[str] = Field(default=None, description="Detected solver type")
+    mass_kg: Optional[float] = Field(
+        default=None, description="Mass from expressions or properties"
+    )
+    volume_mm3: Optional[float] = Field(default=None, description="Volume from mass properties")
+    expressions: List[ExpressionInfo] = Field(
+        default_factory=list, description="Discovered expressions"
+    )
+    baseline: Optional[BaselineData] = Field(default=None, description="Baseline solve results")
+    warnings: List[str] = Field(default_factory=list, description="Warnings from introspection")
+
+    def get_design_candidates(self) -> List[ExpressionInfo]:
+        """Return expressions marked as design variable candidates."""
+        return [e for e in self.expressions if e.is_candidate]
+
+
 # ============================================================================
 # Meta Models
 # ============================================================================
 
 
 class SpecMeta(BaseModel):
     """Metadata about the spec."""
-    version: str = Field(
-        ...,
-        pattern=r"^2\.\d+$",
-        description="Schema version (e.g., '2.0')"
-    )
-    created: Optional[datetime] = Field(
-        default=None,
-        description="When the spec was created"
-    )
+    version: str = Field(..., pattern=r"^2\.\d+$", description="Schema version (e.g., '2.0')")
+    created: Optional[datetime] = Field(default=None, description="When the spec was created")
     modified: Optional[datetime] = Field(
-        default=None,
-        description="When the spec was last modified"
+        default=None, description="When the spec was last modified"
     )
     created_by: Optional[SpecCreatedBy] = Field(
-        default=None,
-        description="Who/what created the spec"
-    )
-    modified_by: Optional[str] = Field(
-        default=None,
-        description="Who/what last modified the spec"
+        default=None, description="Who/what created the spec"
     )
+    modified_by: Optional[str] = Field(default=None, description="Who/what last modified the spec")
     study_name: str = Field(
         ...,
         min_length=3,
         max_length=100,
         pattern=r"^[a-z0-9_]+$",
-        description="Unique study identifier (snake_case)"
+        description="Unique study identifier (snake_case)",
     )
     description: Optional[str] = Field(
-        default=None,
-        max_length=1000,
-        description="Human-readable description"
+        default=None, max_length=1000, description="Human-readable description"
    )
-    tags: Optional[List[str]] = Field(
-        default=None,
-        description="Tags for categorization"
-    )
+    tags: Optional[List[str]] = Field(default=None, description="Tags for categorization")
     engineering_context: Optional[str] = Field(
-        default=None,
-        description="Real-world engineering context"
+        default=None, description="Real-world engineering context"
     )
+    status: SpecStatus = Field(default=SpecStatus.DRAFT, description="Study lifecycle status")
+    topic: Optional[str] = Field(
+        default=None,
+        pattern=r"^[A-Za-z0-9_]+$",
+        description="Topic folder for grouping related studies",
+    )
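`SpecMeta` leans on pattern-constrained string fields: `study_name` must match `^[a-z0-9_]+$` with a 3-100 character length, and the new `topic` field must match `^[A-Za-z0-9_]+$`. A stdlib-only sketch of the same checks, useful for reasoning about which names the schema will accept (the helper function is illustrative, not part of the codebase):

```python
import re

# Patterns copied from the SpecMeta field constraints.
STUDY_NAME_RE = re.compile(r"^[a-z0-9_]+$")
TOPIC_RE = re.compile(r"^[A-Za-z0-9_]+$")


def is_valid_study_name(name: str) -> bool:
    # Pydantic enforces min_length=3 / max_length=100 alongside the
    # pattern; this mirrors all three constraints.
    return 3 <= len(name) <= 100 and STUDY_NAME_RE.fullmatch(name) is not None
```

Note the asymmetry: `study_name` is strictly snake_case (lowercase only), while `topic` additionally permits uppercase letters.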
@@ -168,15 +240,20 @@ class SpecMeta(BaseModel):
 # Model Configuration Models
 # ============================================================================
 
 
 class NxPartConfig(BaseModel):
     """NX geometry part file configuration."""
 
     path: Optional[str] = Field(default=None, description="Path to .prt file")
     hash: Optional[str] = Field(default=None, description="File hash for change detection")
-    idealized_part: Optional[str] = Field(default=None, description="Idealized part filename (_i.prt)")
+    idealized_part: Optional[str] = Field(
+        default=None, description="Idealized part filename (_i.prt)"
+    )
 
 
 class FemConfig(BaseModel):
     """FEM mesh file configuration."""
 
     path: Optional[str] = Field(default=None, description="Path to .fem file")
     element_count: Optional[int] = Field(default=None, description="Number of elements")
     node_count: Optional[int] = Field(default=None, description="Number of nodes")
@@ -184,6 +261,7 @@ class FemConfig(BaseModel):
 
 
 class Subcase(BaseModel):
     """Simulation subcase definition."""
 
     id: int
     name: Optional[str] = None
     type: Optional[SubcaseType] = None
@@ -191,18 +269,18 @@ class Subcase(BaseModel):
 
 
 class SimConfig(BaseModel):
     """Simulation file configuration."""
 
     path: str = Field(..., description="Path to .sim file")
     solver: SolverType = Field(..., description="Solver type")
     solution_type: Optional[str] = Field(
-        default=None,
-        pattern=r"^SOL\d+$",
-        description="Solution type (e.g., SOL101)"
+        default=None, pattern=r"^SOL\d+$", description="Solution type (e.g., SOL101)"
     )
     subcases: Optional[List[Subcase]] = Field(default=None, description="Defined subcases")
 
 
 class NxSettings(BaseModel):
     """NX runtime settings."""
 
     nx_install_path: Optional[str] = None
     simulation_timeout_s: Optional[int] = Field(default=None, ge=60, le=7200)
     auto_start_nx: Optional[bool] = None
@@ -210,23 +288,31 @@ class NxSettings(BaseModel):
 
 
 class ModelConfig(BaseModel):
     """NX model files and configuration."""
 
     nx_part: Optional[NxPartConfig] = None
     fem: Optional[FemConfig] = None
-    sim: SimConfig
+    sim: Optional[SimConfig] = Field(
+        default=None, description="Simulation file config (required for optimization)"
+    )
     nx_settings: Optional[NxSettings] = None
+    introspection: Optional[IntrospectionData] = Field(
+        default=None, description="Model introspection results from intake"
+    )
 
 
 # ============================================================================
 # Design Variable Models
 # ============================================================================
 
 
 class DesignVariableBounds(BaseModel):
     """Design variable bounds."""
 
     min: float
     max: float
 
-    @model_validator(mode='after')
-    def validate_bounds(self) -> 'DesignVariableBounds':
+    @model_validator(mode="after")
+    def validate_bounds(self) -> "DesignVariableBounds":
         if self.min >= self.max:
             raise ValueError(f"min ({self.min}) must be less than max ({self.max})")
         return self
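`DesignVariableBounds` uses an after-model validator to enforce the `min < max` invariant across two fields, something a per-field constraint cannot express. The same invariant can be sketched with a stdlib dataclass, which avoids a Pydantic dependency for illustration (the `Bounds` class is a hypothetical mirror, not the real model):

```python
from dataclasses import dataclass


@dataclass
class Bounds:
    """Stdlib mirror of the DesignVariableBounds invariant."""

    min: float
    max: float

    def __post_init__(self):
        # Cross-field check, equivalent to the @model_validator(mode="after")
        # hook: reject any range where min is not strictly below max.
        if self.min >= self.max:
            raise ValueError(f"min ({self.min}) must be less than max ({self.max})")
```

The `mode="after"` choice in the real model means the check runs on the coerced field values, so string inputs like `"1.5"` are already floats by the time the comparison happens.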
@@ -234,16 +320,13 @@ class DesignVariableBounds(BaseModel):
 
 
 class DesignVariable(BaseModel):
     """A design variable to optimize."""
-    id: str = Field(
-        ...,
-        pattern=r"^dv_\d{3}$",
-        description="Unique identifier (pattern: dv_XXX)"
-    )
+    id: str = Field(..., pattern=r"^dv_\d{3}$", description="Unique identifier (pattern: dv_XXX)")
     name: str = Field(..., description="Human-readable name")
     expression_name: str = Field(
         ...,
         pattern=r"^[a-zA-Z_][a-zA-Z0-9_]*$",
-        description="NX expression name (must match model)"
+        description="NX expression name (must match model)",
     )
     type: DesignVariableType = Field(..., description="Variable type")
     bounds: DesignVariableBounds = Field(..., description="Value bounds")
@@ -259,8 +342,10 @@ class DesignVariable(BaseModel):
 # Extractor Models
 # ============================================================================
 
 
 class ExtractorConfig(BaseModel):
     """Type-specific extractor configuration."""
 
     inner_radius_mm: Optional[float] = None
     outer_radius_mm: Optional[float] = None
     n_modes: Optional[int] = None
@@ -279,6 +364,7 @@ class ExtractorConfig(BaseModel):
 
 
 class CustomFunction(BaseModel):
     """Custom function definition for custom_function extractors."""
 
     name: Optional[str] = Field(default=None, description="Function name")
     module: Optional[str] = Field(default=None, description="Python module path")
     signature: Optional[str] = Field(default=None, description="Function signature")
@@ -287,32 +373,33 @@ class CustomFunction(BaseModel):
 
 
 class ExtractorOutput(BaseModel):
     """Output definition for an extractor."""
 
     name: str = Field(..., description="Output name (used by objectives/constraints)")
-    metric: Optional[str] = Field(default=None, description="Specific metric (max, total, rms, etc.)")
+    metric: Optional[str] = Field(
+        default=None, description="Specific metric (max, total, rms, etc.)"
+    )
     subcase: Optional[int] = Field(default=None, description="Subcase ID for this output")
     units: Optional[str] = None
 
 
 class Extractor(BaseModel):
     """Physics extractor that computes outputs from FEA."""
-    id: str = Field(
-        ...,
-        pattern=r"^ext_\d{3}$",
-        description="Unique identifier (pattern: ext_XXX)"
-    )
+    id: str = Field(..., pattern=r"^ext_\d{3}$", description="Unique identifier (pattern: ext_XXX)")
     name: str = Field(..., description="Human-readable name")
     type: ExtractorType = Field(..., description="Extractor type")
     builtin: bool = Field(default=True, description="Whether this is a built-in extractor")
-    config: Optional[ExtractorConfig] = Field(default=None, description="Type-specific configuration")
+    config: Optional[ExtractorConfig] = Field(
+        default=None, description="Type-specific configuration"
+    )
     function: Optional[CustomFunction] = Field(
-        default=None,
-        description="Custom function definition (for custom_function type)"
+        default=None, description="Custom function definition (for custom_function type)"
     )
     outputs: List[ExtractorOutput] = Field(..., min_length=1, description="Output values")
     canvas_position: Optional[CanvasPosition] = None
 
-    @model_validator(mode='after')
-    def validate_custom_function(self) -> 'Extractor':
+    @model_validator(mode="after")
+    def validate_custom_function(self) -> "Extractor":
         if self.type == ExtractorType.CUSTOM_FUNCTION and self.function is None:
             raise ValueError("custom_function extractor requires function definition")
         return self
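`Extractor.validate_custom_function` is a conditional-requirement check: `function` is optional for every extractor type except `custom_function`. The logic, reduced to a stdlib sketch (the two-member `Kind` enum and `check_extractor` helper are illustrative, not the real API):

```python
from enum import Enum
from typing import Optional


class Kind(str, Enum):
    DISPLACEMENT = "displacement"
    CUSTOM_FUNCTION = "custom_function"


def check_extractor(kind: Kind, function: Optional[str]) -> None:
    # Mirrors the after-model validator: the function payload is only
    # mandatory when the extractor type is custom_function.
    if kind is Kind.CUSTOM_FUNCTION and function is None:
        raise ValueError("custom_function extractor requires function definition")
```

Putting this in an after-model validator rather than making `function` required keeps the field optional for the built-in extractor types while still failing fast on an underspecified custom one.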
@@ -322,19 +409,18 @@ class Extractor(BaseModel):
 # Objective Models
 # ============================================================================
 
 
 class ObjectiveSource(BaseModel):
     """Source reference for objective value."""
 
     extractor_id: str = Field(..., description="Reference to extractor")
     output_name: str = Field(..., description="Which output from the extractor")
 
 
 class Objective(BaseModel):
     """Optimization objective."""
-    id: str = Field(
-        ...,
-        pattern=r"^obj_\d{3}$",
-        description="Unique identifier (pattern: obj_XXX)"
-    )
+    id: str = Field(..., pattern=r"^obj_\d{3}$", description="Unique identifier (pattern: obj_XXX)")
     name: str = Field(..., description="Human-readable name")
     direction: OptimizationDirection = Field(..., description="Optimization direction")
     weight: float = Field(default=1.0, ge=0, description="Weight for weighted sum")
@@ -349,14 +435,17 @@ class Objective(BaseModel):
|
|||||||
# Constraint Models
|
# Constraint Models
|
||||||
# ============================================================================
|
# ============================================================================
|
||||||
|
|
||||||
|
|
||||||
class ConstraintSource(BaseModel):
|
class ConstraintSource(BaseModel):
|
||||||
"""Source reference for constraint value."""
|
"""Source reference for constraint value."""
|
||||||
|
|
||||||
extractor_id: str
|
extractor_id: str
|
||||||
output_name: str
|
output_name: str
|
||||||
|
|
||||||
|
|
||||||
class PenaltyConfig(BaseModel):
|
class PenaltyConfig(BaseModel):
|
||||||
"""Penalty method configuration for constraints."""
|
"""Penalty method configuration for constraints."""
|
||||||
|
|
||||||
method: Optional[PenaltyMethod] = None
|
method: Optional[PenaltyMethod] = None
|
||||||
weight: Optional[float] = None
|
weight: Optional[float] = None
|
||||||
margin: Optional[float] = Field(default=None, description="Soft margin before penalty kicks in")
|
margin: Optional[float] = Field(default=None, description="Soft margin before penalty kicks in")
|
||||||
@@ -364,11 +453,8 @@ class PenaltyConfig(BaseModel):
|
|||||||
|
|
||||||
class Constraint(BaseModel):
|
class Constraint(BaseModel):
|
||||||
"""Hard or soft constraint."""
|
"""Hard or soft constraint."""
|
||||||
id: str = Field(
|
|
||||||
...,
|
id: str = Field(..., pattern=r"^con_\d{3}$", description="Unique identifier (pattern: con_XXX)")
|
||||||
pattern=r"^con_\d{3}$",
|
|
||||||
description="Unique identifier (pattern: con_XXX)"
|
|
||||||
)
|
|
||||||
name: str
|
name: str
|
||||||
type: ConstraintType = Field(..., description="Constraint type")
|
type: ConstraintType = Field(..., description="Constraint type")
|
||||||
operator: ConstraintOperator = Field(..., description="Comparison operator")
|
operator: ConstraintOperator = Field(..., description="Comparison operator")
|
||||||
@@ -383,8 +469,10 @@ class Constraint(BaseModel):
|
|||||||
# Optimization Models
|
# Optimization Models
|
||||||
# ============================================================================
|
# ============================================================================
|
||||||
|
|
||||||
|
|
||||||
class AlgorithmConfig(BaseModel):
|
class AlgorithmConfig(BaseModel):
|
||||||
"""Algorithm-specific settings."""
|
"""Algorithm-specific settings."""
|
||||||
|
|
||||||
population_size: Optional[int] = None
|
population_size: Optional[int] = None
|
||||||
n_generations: Optional[int] = None
|
n_generations: Optional[int] = None
|
||||||
mutation_prob: Optional[float] = None
|
mutation_prob: Optional[float] = None
|
||||||
@@ -399,22 +487,24 @@ class AlgorithmConfig(BaseModel):
|
|||||||
|
|
||||||
class Algorithm(BaseModel):
|
class Algorithm(BaseModel):
|
||||||
"""Optimization algorithm configuration."""
|
"""Optimization algorithm configuration."""
|
||||||
|
|
||||||
type: AlgorithmType
|
type: AlgorithmType
|
||||||
config: Optional[AlgorithmConfig] = None
|
config: Optional[AlgorithmConfig] = None
|
||||||
|
|
||||||
|
|
||||||
class OptimizationBudget(BaseModel):
|
class OptimizationBudget(BaseModel):
|
||||||
"""Computational budget for optimization."""
|
"""Computational budget for optimization."""
|
||||||
|
|
||||||
max_trials: Optional[int] = Field(default=None, ge=1, le=10000)
|
max_trials: Optional[int] = Field(default=None, ge=1, le=10000)
|
||||||
max_time_hours: Optional[float] = None
|
max_time_hours: Optional[float] = None
|
||||||
convergence_patience: Optional[int] = Field(
|
convergence_patience: Optional[int] = Field(
|
||||||
default=None,
|
default=None, description="Stop if no improvement for N trials"
|
||||||
description="Stop if no improvement for N trials"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
class SurrogateConfig(BaseModel):
|
class SurrogateConfig(BaseModel):
|
||||||
"""Neural surrogate model configuration."""
|
"""Neural surrogate model configuration."""
|
||||||
|
|
||||||
n_models: Optional[int] = None
|
n_models: Optional[int] = None
|
||||||
architecture: Optional[List[int]] = None
|
architecture: Optional[List[int]] = None
|
||||||
train_every_n_trials: Optional[int] = None
|
train_every_n_trials: Optional[int] = None
|
||||||
@@ -425,6 +515,7 @@ class SurrogateConfig(BaseModel):
|
|||||||
|
|
||||||
class Surrogate(BaseModel):
|
class Surrogate(BaseModel):
|
||||||
"""Surrogate model settings."""
|
"""Surrogate model settings."""
|
||||||
|
|
||||||
enabled: Optional[bool] = None
|
enabled: Optional[bool] = None
|
||||||
type: Optional[SurrogateType] = None
|
type: Optional[SurrogateType] = None
|
||||||
config: Optional[SurrogateConfig] = None
|
config: Optional[SurrogateConfig] = None
|
||||||
@@ -432,6 +523,7 @@ class Surrogate(BaseModel):
|
|||||||
|
|
||||||
class OptimizationConfig(BaseModel):
|
class OptimizationConfig(BaseModel):
|
||||||
"""Optimization algorithm configuration."""
|
"""Optimization algorithm configuration."""
|
||||||
|
|
||||||
algorithm: Algorithm
|
algorithm: Algorithm
|
||||||
budget: OptimizationBudget
|
budget: OptimizationBudget
|
||||||
surrogate: Optional[Surrogate] = None
|
surrogate: Optional[Surrogate] = None
|
||||||
@@ -442,8 +534,10 @@ class OptimizationConfig(BaseModel):
|
|||||||
# Workflow Models
|
# Workflow Models
|
||||||
# ============================================================================
|
# ============================================================================
|
||||||
|
|
||||||
|
|
||||||
class WorkflowStage(BaseModel):
|
class WorkflowStage(BaseModel):
|
||||||
"""A stage in a multi-stage optimization workflow."""
|
"""A stage in a multi-stage optimization workflow."""
|
||||||
|
|
||||||
id: str
|
id: str
|
||||||
name: str
|
name: str
|
||||||
algorithm: Optional[str] = None
|
algorithm: Optional[str] = None
|
||||||
@@ -453,6 +547,7 @@ class WorkflowStage(BaseModel):
|
|||||||
|
|
||||||
class WorkflowTransition(BaseModel):
|
class WorkflowTransition(BaseModel):
|
||||||
"""Transition between workflow stages."""
|
"""Transition between workflow stages."""
|
||||||
|
|
||||||
from_: str = Field(..., alias="from")
|
from_: str = Field(..., alias="from")
|
||||||
to: str
|
to: str
|
||||||
condition: Optional[str] = None
|
condition: Optional[str] = None
|
||||||
@@ -463,6 +558,7 @@ class WorkflowTransition(BaseModel):
|
|||||||
|
|
||||||
class Workflow(BaseModel):
|
class Workflow(BaseModel):
|
||||||
"""Multi-stage optimization workflow."""
|
"""Multi-stage optimization workflow."""
|
||||||
|
|
||||||
stages: Optional[List[WorkflowStage]] = None
|
stages: Optional[List[WorkflowStage]] = None
|
||||||
transitions: Optional[List[WorkflowTransition]] = None
|
transitions: Optional[List[WorkflowTransition]] = None
|
||||||
|
|
||||||
@@ -471,8 +567,10 @@ class Workflow(BaseModel):
|
|||||||
# Reporting Models
|
# Reporting Models
|
||||||
# ============================================================================
|
# ============================================================================
|
||||||
|
|
||||||
|
|
||||||
class InsightConfig(BaseModel):
|
class InsightConfig(BaseModel):
|
||||||
"""Insight-specific configuration."""
|
"""Insight-specific configuration."""
|
||||||
|
|
||||||
include_html: Optional[bool] = None
|
include_html: Optional[bool] = None
|
||||||
show_pareto_evolution: Optional[bool] = None
|
show_pareto_evolution: Optional[bool] = None
|
||||||
|
|
||||||
@@ -482,6 +580,7 @@ class InsightConfig(BaseModel):
|
|||||||
|
|
||||||
class Insight(BaseModel):
|
class Insight(BaseModel):
|
||||||
"""Reporting insight definition."""
|
"""Reporting insight definition."""
|
||||||
|
|
||||||
type: Optional[str] = None
|
type: Optional[str] = None
|
||||||
for_trials: Optional[str] = None
|
for_trials: Optional[str] = None
|
||||||
config: Optional[InsightConfig] = None
|
config: Optional[InsightConfig] = None
|
||||||
@@ -489,6 +588,7 @@ class Insight(BaseModel):
|
|||||||
|
|
||||||
class ReportingConfig(BaseModel):
|
class ReportingConfig(BaseModel):
|
||||||
"""Reporting configuration."""
|
"""Reporting configuration."""
|
||||||
|
|
||||||
auto_report: Optional[bool] = None
|
auto_report: Optional[bool] = None
|
||||||
report_triggers: Optional[List[str]] = None
|
report_triggers: Optional[List[str]] = None
|
||||||
insights: Optional[List[Insight]] = None
|
insights: Optional[List[Insight]] = None
|
||||||
@@ -498,8 +598,10 @@ class ReportingConfig(BaseModel):
|
|||||||
# Canvas Models
|
# Canvas Models
|
||||||
# ============================================================================
|
# ============================================================================
|
||||||
|
|
||||||
|
|
||||||
class CanvasViewport(BaseModel):
|
class CanvasViewport(BaseModel):
|
||||||
"""Canvas viewport settings."""
|
"""Canvas viewport settings."""
|
||||||
|
|
||||||
x: float = 0
|
x: float = 0
|
||||||
y: float = 0
|
y: float = 0
|
||||||
zoom: float = 1.0
|
zoom: float = 1.0
|
||||||
@@ -507,6 +609,7 @@ class CanvasViewport(BaseModel):
|
|||||||
|
|
||||||
class CanvasEdge(BaseModel):
|
class CanvasEdge(BaseModel):
|
||||||
"""Connection between canvas nodes."""
|
"""Connection between canvas nodes."""
|
||||||
|
|
||||||
source: str
|
source: str
|
||||||
target: str
|
target: str
|
||||||
sourceHandle: Optional[str] = None
|
sourceHandle: Optional[str] = None
|
||||||
@@ -515,6 +618,7 @@ class CanvasEdge(BaseModel):
|
|||||||
|
|
||||||
class CanvasGroup(BaseModel):
|
class CanvasGroup(BaseModel):
|
||||||
"""Grouping of canvas nodes."""
|
"""Grouping of canvas nodes."""
|
||||||
|
|
||||||
id: str
|
id: str
|
||||||
name: str
|
name: str
|
||||||
node_ids: List[str]
|
node_ids: List[str]
|
||||||
@@ -522,6 +626,7 @@ class CanvasGroup(BaseModel):
|
|||||||
|
|
||||||
class CanvasConfig(BaseModel):
|
class CanvasConfig(BaseModel):
|
||||||
"""Canvas UI state (persisted for reconstruction)."""
|
"""Canvas UI state (persisted for reconstruction)."""
|
||||||
|
|
||||||
layout_version: Optional[str] = None
|
layout_version: Optional[str] = None
|
||||||
viewport: Optional[CanvasViewport] = None
|
viewport: Optional[CanvasViewport] = None
|
||||||
edges: Optional[List[CanvasEdge]] = None
|
edges: Optional[List[CanvasEdge]] = None
|
||||||
@@ -532,6 +637,7 @@ class CanvasConfig(BaseModel):
|
|||||||
# Main AtomizerSpec Model
|
# Main AtomizerSpec Model
|
||||||
# ============================================================================
|
# ============================================================================
|
||||||
|
|
||||||
|
|
||||||
class AtomizerSpec(BaseModel):
|
class AtomizerSpec(BaseModel):
|
||||||
"""
|
"""
|
||||||
AtomizerSpec v2.0 - The unified configuration schema for Atomizer optimization studies.
|
AtomizerSpec v2.0 - The unified configuration schema for Atomizer optimization studies.
|
||||||
@@ -542,36 +648,32 @@ class AtomizerSpec(BaseModel):
|
|||||||
- Claude Assistant (reading and modifying)
|
- Claude Assistant (reading and modifying)
|
||||||
- Optimization Engine (execution)
|
- Optimization Engine (execution)
|
||||||
"""
|
"""
|
||||||
|
|
||||||
meta: SpecMeta = Field(..., description="Metadata about the spec")
|
meta: SpecMeta = Field(..., description="Metadata about the spec")
|
||||||
model: ModelConfig = Field(..., description="NX model files and configuration")
|
model: ModelConfig = Field(..., description="NX model files and configuration")
|
||||||
design_variables: List[DesignVariable] = Field(
|
design_variables: List[DesignVariable] = Field(
|
||||||
...,
|
default_factory=list,
|
||||||
min_length=1,
|
|
||||||
max_length=50,
|
max_length=50,
|
||||||
description="Design variables to optimize"
|
description="Design variables to optimize (required for running)",
|
||||||
)
|
)
|
||||||
extractors: List[Extractor] = Field(
|
extractors: List[Extractor] = Field(
|
||||||
...,
|
default_factory=list, description="Physics extractors (required for running)"
|
||||||
min_length=1,
|
|
||||||
description="Physics extractors"
|
|
||||||
)
|
)
|
||||||
objectives: List[Objective] = Field(
|
objectives: List[Objective] = Field(
|
||||||
...,
|
default_factory=list,
|
||||||
min_length=1,
|
|
||||||
max_length=5,
|
max_length=5,
|
||||||
description="Optimization objectives"
|
description="Optimization objectives (required for running)",
|
||||||
)
|
)
|
||||||
constraints: Optional[List[Constraint]] = Field(
|
constraints: Optional[List[Constraint]] = Field(
|
||||||
default=None,
|
default=None, description="Hard and soft constraints"
|
||||||
description="Hard and soft constraints"
|
|
||||||
)
|
)
|
||||||
optimization: OptimizationConfig = Field(..., description="Algorithm configuration")
|
optimization: OptimizationConfig = Field(..., description="Algorithm configuration")
|
||||||
workflow: Optional[Workflow] = Field(default=None, description="Multi-stage workflow")
|
workflow: Optional[Workflow] = Field(default=None, description="Multi-stage workflow")
|
||||||
reporting: Optional[ReportingConfig] = Field(default=None, description="Reporting config")
|
reporting: Optional[ReportingConfig] = Field(default=None, description="Reporting config")
|
||||||
canvas: Optional[CanvasConfig] = Field(default=None, description="Canvas UI state")
|
canvas: Optional[CanvasConfig] = Field(default=None, description="Canvas UI state")
|
||||||
|
|
||||||
@model_validator(mode='after')
|
@model_validator(mode="after")
|
||||||
def validate_references(self) -> 'AtomizerSpec':
|
def validate_references(self) -> "AtomizerSpec":
|
||||||
"""Validate that all references are valid."""
|
"""Validate that all references are valid."""
|
||||||
# Collect valid extractor IDs and their outputs
|
# Collect valid extractor IDs and their outputs
|
||||||
extractor_outputs: Dict[str, set] = {}
|
extractor_outputs: Dict[str, set] = {}
|
||||||
@@ -638,13 +740,44 @@ class AtomizerSpec(BaseModel):
|
|||||||
"""Check if this is a multi-objective optimization."""
|
"""Check if this is a multi-objective optimization."""
|
||||||
return len(self.objectives) > 1
|
return len(self.objectives) > 1
|
||||||
|
|
||||||
|
def is_ready_for_optimization(self) -> Tuple[bool, List[str]]:
|
||||||
|
"""
|
||||||
|
Check if spec is complete enough to run optimization.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Tuple of (is_ready, list of missing requirements)
|
||||||
|
"""
|
||||||
|
missing = []
|
||||||
|
|
||||||
|
# Check required fields for optimization
|
||||||
|
if not self.model.sim:
|
||||||
|
missing.append("No simulation file (.sim) configured")
|
||||||
|
|
||||||
|
if not self.design_variables:
|
||||||
|
missing.append("No design variables defined")
|
||||||
|
|
||||||
|
if not self.extractors:
|
||||||
|
missing.append("No extractors defined")
|
||||||
|
|
||||||
|
if not self.objectives:
|
||||||
|
missing.append("No objectives defined")
|
||||||
|
|
||||||
|
# Check that enabled DVs have valid bounds
|
||||||
|
for dv in self.get_enabled_design_variables():
|
||||||
|
if dv.bounds.min >= dv.bounds.max:
|
||||||
|
missing.append(f"Design variable '{dv.name}' has invalid bounds")
|
||||||
|
|
||||||
|
return len(missing) == 0, missing
|
||||||
|
|
||||||
|
|
||||||
# ============================================================================
|
# ============================================================================
|
||||||
# Validation Response Models
|
# Validation Response Models
|
||||||
# ============================================================================
|
# ============================================================================
|
||||||
|
|
||||||
|
|
||||||
class ValidationError(BaseModel):
|
class ValidationError(BaseModel):
|
||||||
"""A validation error."""
|
"""A validation error."""
|
||||||
|
|
||||||
type: str # 'schema', 'semantic', 'reference'
|
type: str # 'schema', 'semantic', 'reference'
|
||||||
path: List[str]
|
path: List[str]
|
||||||
message: str
|
message: str
|
||||||
@@ -652,6 +785,7 @@ class ValidationError(BaseModel):
|
|||||||
|
|
||||||
class ValidationWarning(BaseModel):
|
class ValidationWarning(BaseModel):
|
||||||
"""A validation warning."""
|
"""A validation warning."""
|
||||||
|
|
||||||
type: str
|
type: str
|
||||||
path: List[str]
|
path: List[str]
|
||||||
message: str
|
message: str
|
||||||
@@ -659,6 +793,7 @@ class ValidationWarning(BaseModel):
|
|||||||
|
|
||||||
class ValidationSummary(BaseModel):
|
class ValidationSummary(BaseModel):
|
||||||
"""Summary of spec contents."""
|
"""Summary of spec contents."""
|
||||||
|
|
||||||
design_variables: int
|
design_variables: int
|
||||||
extractors: int
|
extractors: int
|
||||||
objectives: int
|
objectives: int
|
||||||
@@ -668,6 +803,7 @@ class ValidationSummary(BaseModel):
|
|||||||
|
|
||||||
class ValidationReport(BaseModel):
|
class ValidationReport(BaseModel):
|
||||||
"""Full validation report."""
|
"""Full validation report."""
|
||||||
|
|
||||||
valid: bool
|
valid: bool
|
||||||
errors: List[ValidationError]
|
errors: List[ValidationError]
|
||||||
warnings: List[ValidationWarning]
|
warnings: List[ValidationWarning]
|
||||||
|
|||||||
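The spec fields above switch from required (`...`, `min_length=1`) to draft-friendly defaults (`default_factory=list`), with `is_ready_for_optimization` collecting every missing requirement instead of raising on the first. A minimal standalone sketch of that collect-all-errors pattern, using hypothetical `Spec`/`Bounds` stand-ins rather than the real Pydantic models:

```python
# Sketch of the readiness-check pattern behind AtomizerSpec.is_ready_for_optimization.
# Spec and Bounds here are hypothetical stand-ins; the real models live in the
# optimization_engine schemas module.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Bounds:
    min: float
    max: float


@dataclass
class Spec:
    sim: str = ""
    design_variables: List[Bounds] = field(default_factory=list)
    objectives: List[str] = field(default_factory=list)

    def is_ready(self) -> Tuple[bool, List[str]]:
        # Collect every missing requirement instead of failing on the first one,
        # so the UI can show the full checklist to the user.
        missing = []
        if not self.sim:
            missing.append("No simulation file (.sim) configured")
        if not self.design_variables:
            missing.append("No design variables defined")
        if not self.objectives:
            missing.append("No objectives defined")
        for dv in self.design_variables:
            if dv.min >= dv.max:
                missing.append("Design variable has invalid bounds")
        return len(missing) == 0, missing


ready, problems = Spec(sim="model.sim", design_variables=[Bounds(0, 1)]).is_ready()
print(ready, problems)  # False ['No objectives defined']
```

The design choice is that a half-built spec should parse and save cleanly; completeness is only enforced at the point where a run is actually requested.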
optimization_engine/devloop/__init__.py (new file, 68 lines)
@@ -0,0 +1,68 @@
+"""
+Atomizer DevLoop - Closed-Loop Development System
+
+This module provides autonomous development cycle capabilities:
+1. Gemini Pro for strategic planning and analysis
+2. Claude Code (Opus 4.5) for implementation
+3. Dashboard testing for verification
+4. LAC integration for persistent learning
+
+The DevLoop orchestrates the full cycle:
+PLAN (Gemini) -> BUILD (Claude) -> TEST (Dashboard) -> ANALYZE (Gemini) -> FIX (Claude) -> VERIFY
+
+Example usage:
+    from optimization_engine.devloop import DevLoopOrchestrator
+
+    orchestrator = DevLoopOrchestrator()
+    result = await orchestrator.run_development_cycle(
+        objective="Create support_arm optimization study"
+    )
+"""
+
+
+# Lazy imports to avoid circular dependencies
+def __getattr__(name):
+    if name == "DevLoopOrchestrator":
+        from .orchestrator import DevLoopOrchestrator
+
+        return DevLoopOrchestrator
+    elif name == "LoopPhase":
+        from .orchestrator import LoopPhase
+
+        return LoopPhase
+    elif name == "LoopState":
+        from .orchestrator import LoopState
+
+        return LoopState
+    elif name == "DashboardTestRunner":
+        from .test_runner import DashboardTestRunner
+
+        return DashboardTestRunner
+    elif name == "TestScenario":
+        from .test_runner import TestScenario
+
+        return TestScenario
+    elif name == "GeminiPlanner":
+        from .planning import GeminiPlanner
+
+        return GeminiPlanner
+    elif name == "ProblemAnalyzer":
+        from .analyzer import ProblemAnalyzer
+
+        return ProblemAnalyzer
+    elif name == "ClaudeCodeBridge":
+        from .claude_bridge import ClaudeCodeBridge
+
+        return ClaudeCodeBridge
+    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
+
+
+__all__ = [
+    "DevLoopOrchestrator",
+    "LoopPhase",
+    "LoopState",
+    "DashboardTestRunner",
+    "TestScenario",
+    "GeminiPlanner",
+    "ProblemAnalyzer",
+]
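The package `__init__.py` above relies on PEP 562 module-level `__getattr__` so submodules are imported only on first attribute access. The same mechanism can be exercised with a throwaway in-memory module (built here via `types.ModuleType` purely for demonstration; in the package the function simply lives in `devloop/__init__.py`):

```python
# Demonstrates the PEP 562 module-level __getattr__ lazy-import pattern used
# by devloop/__init__.py, with a module constructed in memory for the demo.
import sys
import types

source = '''
def __getattr__(name):
    # Import the heavyweight dependency only when the attribute is first accessed
    if name == "loads":
        import json
        return json.loads
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
'''

mod = types.ModuleType("lazy_demo")
exec(source, mod.__dict__)
sys.modules["lazy_demo"] = mod

import lazy_demo

# The json import above does not run until this line triggers __getattr__
print(lazy_demo.loads("[1, 2, 3]"))  # [1, 2, 3]
```

Raising `AttributeError` for unknown names keeps `hasattr` and tooling well-behaved, which is why the package version ends with the same `raise`.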
optimization_engine/devloop/analyzer.py (new file, 421 lines)
@@ -0,0 +1,421 @@
+"""
+Problem Analyzer - Analyze test results and generate fix plans using Gemini.
+
+Handles:
+- Root cause analysis from test failures
+- Pattern detection across failures
+- Fix plan generation
+- Priority assessment
+"""
+
+import asyncio
+import json
+import logging
+from dataclasses import dataclass, field
+from datetime import datetime
+from typing import Any, Dict, List, Optional
+
+logger = logging.getLogger(__name__)
+
+
+@dataclass
+class Issue:
+    """A detected issue from test results."""
+
+    id: str
+    description: str
+    severity: str = "medium"  # "critical", "high", "medium", "low"
+    category: str = "unknown"
+    affected_files: List[str] = field(default_factory=list)
+    test_ids: List[str] = field(default_factory=list)
+    root_cause: Optional[str] = None
+
+
+@dataclass
+class FixPlan:
+    """Plan for fixing an issue."""
+
+    issue_id: str
+    approach: str
+    steps: List[Dict] = field(default_factory=list)
+    estimated_effort: str = "medium"
+    rollback_steps: List[str] = field(default_factory=list)
+
+
+@dataclass
+class AnalysisReport:
+    """Complete analysis report."""
+
+    timestamp: str = field(default_factory=lambda: datetime.now().isoformat())
+    issues_found: bool = False
+    issues: List[Issue] = field(default_factory=list)
+    fix_plans: Dict[str, FixPlan] = field(default_factory=dict)
+    patterns: List[Dict] = field(default_factory=list)
+    recommendations: List[str] = field(default_factory=list)
+
+
+class ProblemAnalyzer:
+    """
+    Gemini-powered analysis of test failures and improvement opportunities.
+
+    Capabilities:
+    - Deep analysis of test results
+    - Root cause identification
+    - Pattern detection across failures
+    - Fix plan generation with priority
+    """
+
+    def __init__(self, gemini_planner: Optional[Any] = None):
+        """
+        Initialize the analyzer.
+
+        Args:
+            gemini_planner: GeminiPlanner instance for API access
+        """
+        self._planner = gemini_planner
+        self._history: List[AnalysisReport] = []
+
+    @property
+    def planner(self):
+        """Get or create Gemini planner."""
+        if self._planner is None:
+            from .planning import GeminiPlanner
+
+            self._planner = GeminiPlanner()
+        return self._planner
+
+    async def analyze_test_results(self, test_report: Dict) -> Dict:
+        """
+        Perform deep analysis of test results.
+
+        Args:
+            test_report: Test report from DashboardTestRunner
+
+        Returns:
+            Analysis dict with issues, fix_plans, patterns
+        """
+        summary = test_report.get("summary", {})
+        scenarios = test_report.get("scenarios", [])
+
+        # Quick return if all passed
+        if summary.get("failed", 0) == 0:
+            return {
+                "issues_found": False,
+                "issues": [],
+                "fix_plans": {},
+                "patterns": [],
+                "recommendations": ["All tests passed!"],
+            }
+
+        # Analyze failures
+        failures = [s for s in scenarios if not s.get("passed", True)]
+
+        # Use Gemini for deep analysis if available
+        if self.planner.client != "mock":
+            return await self._gemini_analysis(test_report, failures)
+        else:
+            return self._rule_based_analysis(test_report, failures)
+
+    async def _gemini_analysis(self, test_report: Dict, failures: List[Dict]) -> Dict:
+        """Use Gemini for sophisticated analysis."""
+        prompt = self._build_analysis_prompt(test_report, failures)
+
+        try:
+            loop = asyncio.get_event_loop()
+            response = await loop.run_in_executor(
+                None, lambda: self.planner._model.generate_content(prompt)
+            )
+
+            text = response.text
+
+            # Parse JSON from response
+            if "```json" in text:
+                start = text.find("```json") + 7
+                end = text.find("```", start)
+                json_str = text[start:end].strip()
+                analysis = json.loads(json_str)
+            else:
+                analysis = self._rule_based_analysis(test_report, failures)
+
+            logger.info(f"Gemini analysis found {len(analysis.get('issues', []))} issues")
+            return analysis
+
+        except Exception as e:
+            logger.error(f"Gemini analysis failed: {e}, falling back to rule-based")
+            return self._rule_based_analysis(test_report, failures)
+
+    def _build_analysis_prompt(self, test_report: Dict, failures: List[Dict]) -> str:
+        """Build analysis prompt for Gemini."""
+        return f"""## Test Failure Analysis
+
+### Test Report Summary
+- Total Tests: {test_report.get("summary", {}).get("total", 0)}
+- Passed: {test_report.get("summary", {}).get("passed", 0)}
+- Failed: {test_report.get("summary", {}).get("failed", 0)}
+
+### Failed Tests
+{json.dumps(failures, indent=2)}
+
+### Analysis Required
+
+Analyze these test failures and provide:
+
+1. **Root Cause Analysis**: What caused each failure?
+2. **Pattern Detection**: Are there recurring issues?
+3. **Fix Priority**: Which issues should be addressed first?
+4. **Implementation Plan**: Specific code changes needed
+
+Output as JSON:
+```json
+{{
+  "issues_found": true,
+  "issues": [
+    {{
+      "id": "issue_001",
+      "description": "What went wrong",
+      "severity": "high|medium|low",
+      "category": "api|ui|config|filesystem|logic",
+      "affected_files": ["path/to/file.py"],
+      "test_ids": ["test_001"],
+      "root_cause": "Why it happened"
+    }}
+  ],
+  "fix_plans": {{
+    "issue_001": {{
+      "issue_id": "issue_001",
+      "approach": "How to fix it",
+      "steps": [
+        {{"action": "edit", "file": "path/to/file.py", "description": "Change X to Y"}}
+      ],
+      "estimated_effort": "low|medium|high",
+      "rollback_steps": ["How to undo if needed"]
+    }}
+  }},
+  "patterns": [
+    {{"pattern": "Common issue type", "occurrences": 3, "suggestion": "Systemic fix"}}
+  ],
+  "recommendations": [
+    "High-level improvement suggestions"
+  ]
+}}
+```
+
+Focus on actionable, specific fixes that Claude Code can implement.
+"""
+
+    def _rule_based_analysis(self, test_report: Dict, failures: List[Dict]) -> Dict:
+        """Rule-based analysis when Gemini is not available."""
+        issues = []
+        fix_plans = {}
+        patterns = []
+
+        # Categorize failures
+        api_failures = []
+        filesystem_failures = []
+        browser_failures = []
+        cli_failures = []
+
+        for failure in failures:
+            scenario_id = failure.get("scenario_id", "unknown")
+            error = failure.get("error", "")
+            details = failure.get("details", {})
+
+            # Detect issue type
+            if "api" in scenario_id.lower() or "status_code" in details:
+                api_failures.append(failure)
+            elif "filesystem" in scenario_id.lower() or "exists" in details:
+                filesystem_failures.append(failure)
+            elif "browser" in scenario_id.lower():
+                browser_failures.append(failure)
+            elif "cli" in scenario_id.lower() or "command" in details:
+                cli_failures.append(failure)
+
+        # Generate issues for API failures
+        for i, failure in enumerate(api_failures):
+            issue_id = f"api_issue_{i + 1}"
+            status = failure.get("details", {}).get("status_code", "unknown")
+
+            issues.append(
+                {
+                    "id": issue_id,
+                    "description": f"API request failed with status {status}",
+                    "severity": "high" if status in [500, 503] else "medium",
+                    "category": "api",
+                    "affected_files": self._guess_api_files(failure),
+                    "test_ids": [failure.get("scenario_id")],
+                    "root_cause": failure.get("error", "Unknown API error"),
+                }
+            )
+
+            fix_plans[issue_id] = {
+                "issue_id": issue_id,
+                "approach": "Check API endpoint implementation",
+                "steps": [
+                    {"action": "check", "description": "Verify endpoint exists in routes"},
+                    {"action": "test", "description": "Run endpoint manually with curl"},
+                ],
+                "estimated_effort": "medium",
+                "rollback_steps": [],
+            }
+
+        # Generate issues for filesystem failures
+        for i, failure in enumerate(filesystem_failures):
+            issue_id = f"fs_issue_{i + 1}"
+            path = failure.get("details", {}).get("path", "unknown path")
+
+            issues.append(
+                {
+                    "id": issue_id,
+                    "description": f"Expected file/directory not found: {path}",
+                    "severity": "high",
+                    "category": "filesystem",
+                    "affected_files": [path],
+                    "test_ids": [failure.get("scenario_id")],
+                    "root_cause": "File was not created during implementation",
+                }
+            )
+
+            fix_plans[issue_id] = {
+                "issue_id": issue_id,
+                "approach": "Create missing file/directory",
+                "steps": [
+                    {"action": "create", "path": path, "description": f"Create {path}"},
+                ],
+                "estimated_effort": "low",
+                "rollback_steps": [f"Remove {path}"],
+            }
+
+        # Detect patterns
+        if len(api_failures) > 1:
+            patterns.append(
+                {
+                    "pattern": "Multiple API failures",
+                    "occurrences": len(api_failures),
+                    "suggestion": "Check if backend server is running",
+                }
+            )
+
+        if len(filesystem_failures) > 1:
+            patterns.append(
+                {
+                    "pattern": "Multiple missing files",
+                    "occurrences": len(filesystem_failures),
+                    "suggestion": "Review study creation process",
+                }
+            )
+
+        # Generate recommendations
+        recommendations = []
+        if api_failures:
+            recommendations.append("Verify backend API is running on port 8000")
+        if filesystem_failures:
+            recommendations.append("Check that study directory structure is correctly created")
+        if browser_failures:
+            recommendations.append("Ensure frontend is running on port 3000")
+        if cli_failures:
+            recommendations.append("Check Python environment and script paths")
+
+        return {
+            "issues_found": len(issues) > 0,
+            "issues": issues,
+            "fix_plans": fix_plans,
+            "patterns": patterns,
+            "recommendations": recommendations,
+        }
+
+    def _guess_api_files(self, failure: Dict) -> List[str]:
+        """Guess which API files might be affected."""
+        endpoint = failure.get("details", {}).get("response", {})
+
+        # Common API file patterns
+        return [
+            "atomizer-dashboard/backend/api/routes/",
+            "atomizer-dashboard/backend/api/services/",
+        ]
+
+    async def analyze_iteration_history(self, iterations: List[Dict]) -> Dict:
+        """
+        Analyze patterns across multiple iterations.
+
+        Args:
+            iterations: List of IterationResult dicts
+
+        Returns:
+            Cross-iteration analysis
+        """
+        recurring_issues = {}
+        success_rate = 0
+
+        for iteration in iterations:
+            if iteration.get("success"):
+                success_rate += 1
+
+            # Track recurring issues
+            analysis = iteration.get("analysis", {})
+            for issue in analysis.get("issues", []):
+                issue_type = issue.get("category", "unknown")
+                if issue_type not in recurring_issues:
+                    recurring_issues[issue_type] = 0
+                recurring_issues[issue_type] += 1
+
+        total = len(iterations) or 1
+
+        return {
+            "total_iterations": len(iterations),
+            "success_rate": success_rate / total,
|
||||||
|
"recurring_issues": recurring_issues,
|
||||||
|
"most_common_issue": max(recurring_issues, key=recurring_issues.get)
|
||||||
|
if recurring_issues
|
||||||
|
else None,
|
||||||
|
"recommendation": self._generate_meta_recommendation(
|
||||||
|
recurring_issues, success_rate / total
|
||||||
|
),
|
||||||
|
}
|
||||||
|
|
||||||
|
def _generate_meta_recommendation(self, recurring_issues: Dict, success_rate: float) -> str:
|
||||||
|
"""Generate high-level recommendation based on iteration history."""
|
||||||
|
if success_rate >= 0.8:
|
||||||
|
return "Development cycle is healthy. Minor issues detected."
|
||||||
|
elif success_rate >= 0.5:
|
||||||
|
most_common = (
|
||||||
|
max(recurring_issues, key=recurring_issues.get) if recurring_issues else "unknown"
|
||||||
|
)
|
||||||
|
return f"Focus on fixing {most_common} issues to improve success rate."
|
||||||
|
else:
|
||||||
|
return (
|
||||||
|
"Development cycle needs attention. Consider reviewing architecture or test design."
|
||||||
|
)
|
||||||
|
|
||||||
|
def get_priority_queue(self, analysis: Dict) -> List[Dict]:
|
||||||
|
"""
|
||||||
|
Get issues sorted by priority for fixing.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
analysis: Analysis result dict
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Sorted list of issues with their fix plans
|
||||||
|
"""
|
||||||
|
issues = analysis.get("issues", [])
|
||||||
|
fix_plans = analysis.get("fix_plans", {})
|
||||||
|
|
||||||
|
# Priority order
|
||||||
|
severity_order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
|
||||||
|
|
||||||
|
# Sort by severity
|
||||||
|
sorted_issues = sorted(
|
||||||
|
issues, key=lambda x: severity_order.get(x.get("severity", "medium"), 2)
|
||||||
|
)
|
||||||
|
|
||||||
|
# Attach fix plans
|
||||||
|
queue = []
|
||||||
|
for issue in sorted_issues:
|
||||||
|
issue_id = issue.get("id")
|
||||||
|
queue.append(
|
||||||
|
{
|
||||||
|
"issue": issue,
|
||||||
|
"fix_plan": fix_plans.get(issue_id),
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
|
return queue
|
||||||
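The severity ordering used by `get_priority_queue` can be exercised on its own. A minimal sketch, using the same `severity_order` map and sort key as the source (the sample issue dicts are hypothetical, invented only for illustration):

```python
# Same ordering and sort key as get_priority_queue; sample issues are made up.
severity_order = {"critical": 0, "high": 1, "medium": 2, "low": 3}

issues = [
    {"id": "fs_issue_1", "severity": "low"},
    {"id": "api_issue_1", "severity": "critical"},
    {"id": "ui_issue_1"},  # missing severity falls back to "medium"
]

sorted_issues = sorted(
    issues, key=lambda x: severity_order.get(x.get("severity", "medium"), 2)
)
print([i["id"] for i in sorted_issues])  # ['api_issue_1', 'ui_issue_1', 'fs_issue_1']
```

Note that both an absent `severity` key and an unrecognized severity string land at priority 2, so malformed issues sort alongside "medium" rather than floating to the top.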
optimization_engine/devloop/browser_scenarios.py (new file, 170 lines)
@@ -0,0 +1,170 @@
"""
Browser Test Scenarios for DevLoop
Pre-built Playwright scenarios that can be used for dashboard verification.

These scenarios use the same structure as DashboardTestRunner browser tests
but provide ready-made tests for common dashboard operations.
"""

from typing import Dict, List, Optional


def get_study_browser_scenarios(study_name: str) -> List[Dict]:
    """
    Get browser test scenarios for a specific study.

    Args:
        study_name: The study to test

    Returns:
        List of browser test scenarios
    """
    return [
        {
            "id": "browser_home_loads",
            "name": "Home page loads with studies",
            "type": "browser",
            "steps": [
                {"action": "navigate", "url": "/"},
                {"action": "wait_for", "selector": "text=Studies"},
                {"action": "wait_for", "selector": "button:has-text('trials')"},
            ],
            "expected_outcome": {"status": "pass"},
            "timeout_ms": 15000,
        },
        {
            "id": "browser_canvas_loads",
            "name": f"Canvas loads for {study_name}",
            "type": "browser",
            "steps": [
                {"action": "navigate", "url": f"/canvas/{study_name}"},
                # Wait for ReactFlow nodes to render
                {"action": "wait_for", "selector": ".react-flow__node"},
            ],
            "expected_outcome": {"status": "pass"},
            "timeout_ms": 20000,
        },
        {
            "id": "browser_dashboard_loads",
            "name": f"Dashboard loads for {study_name}",
            "type": "browser",
            "steps": [
                {"action": "navigate", "url": "/dashboard"},
                # Wait for dashboard main element to load
                {"action": "wait_for", "selector": "main"},
            ],
            "expected_outcome": {"status": "pass"},
            "timeout_ms": 15000,
        },
    ]


def get_ui_verification_scenarios() -> List[Dict]:
    """
    Get scenarios for verifying UI components.

    These are general UI health checks, not study-specific.
    """
    return [
        {
            "id": "browser_home_stats",
            "name": "Home page shows statistics",
            "type": "browser",
            "steps": [
                {"action": "navigate", "url": "/"},
                {"action": "wait_for", "selector": "text=Total Studies"},
                {"action": "wait_for", "selector": "text=Running"},
                {"action": "wait_for", "selector": "text=Total Trials"},
            ],
            "expected_outcome": {"status": "pass"},
            "timeout_ms": 10000,
        },
        {
            "id": "browser_expand_folder",
            "name": "Topic folder expands on click",
            "type": "browser",
            "steps": [
                {"action": "navigate", "url": "/"},
                {"action": "wait_for", "selector": "button:has-text('trials')"},
                {"action": "click", "selector": "button:has-text('trials')"},
                # After click, should see study status badges
                {
                    "action": "wait_for",
                    "selector": "span:has-text('completed'), span:has-text('running'), span:has-text('paused')",
                },
            ],
            "expected_outcome": {"status": "pass"},
            "timeout_ms": 10000,
        },
    ]


def get_chat_verification_scenarios() -> List[Dict]:
    """
    Get scenarios for verifying chat/Claude integration.
    """
    return [
        {
            "id": "browser_chat_panel",
            "name": "Chat panel opens",
            "type": "browser",
            "steps": [
                {"action": "navigate", "url": "/canvas/support_arm"},
                {"action": "wait_for", "selector": ".react-flow__node"},
                # Look for chat toggle or chat panel
                {
                    "action": "click",
                    "selector": "button[aria-label='Chat'], button:has-text('Chat')",
                },
                {"action": "wait_for", "selector": "textarea, input[type='text']"},
            ],
            "expected_outcome": {"status": "pass"},
            "timeout_ms": 15000,
        },
    ]


# Standard scenario sets
STANDARD_BROWSER_SCENARIOS: Dict[str, List[Dict]] = {
    "quick": [
        {
            "id": "browser_smoke",
            "name": "Dashboard smoke test",
            "type": "browser",
            "steps": [
                {"action": "navigate", "url": "/"},
                {"action": "wait_for", "selector": "text=Studies"},
            ],
            "expected_outcome": {"status": "pass"},
            "timeout_ms": 10000,
        }
    ],
    "home": get_ui_verification_scenarios(),
    "full": get_ui_verification_scenarios() + get_study_browser_scenarios("support_arm"),
}


def get_browser_scenarios(level: str = "quick", study_name: Optional[str] = None) -> List[Dict]:
    """
    Get browser scenarios by level.

    Args:
        level: "quick" (smoke), "home" (home page), "full" (all scenarios)
        study_name: Optional study name for study-specific tests

    Returns:
        List of browser test scenarios
    """
    if level == "quick":
        return STANDARD_BROWSER_SCENARIOS["quick"]
    elif level == "home":
        return STANDARD_BROWSER_SCENARIOS["home"]
    elif level == "full":
        scenarios = list(STANDARD_BROWSER_SCENARIOS["full"])
        if study_name:
            scenarios.extend(get_study_browser_scenarios(study_name))
        return scenarios
    elif level == "study" and study_name:
        return get_study_browser_scenarios(study_name)
    else:
        return STANDARD_BROWSER_SCENARIOS["quick"]
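Every scenario above shares one shape: `id`, `name`, `type`, `steps` (each step carrying an `action`), `expected_outcome`, and `timeout_ms`. A minimal sketch of a schema check built on that observation; the `validate_scenario` helper and `REQUIRED_KEYS` constant are hypothetical additions, not part of the module:

```python
from typing import Dict, List

# Hypothetical helper: keys every scenario in this module carries.
REQUIRED_KEYS = {"id", "name", "type", "steps", "expected_outcome", "timeout_ms"}

def validate_scenario(scenario: Dict) -> List[str]:
    """Return a list of problems; an empty list means the scenario looks well-formed."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - scenario.keys())]
    for step in scenario.get("steps", []):
        if "action" not in step:
            problems.append(f"step without action: {step}")
    return problems

smoke = {
    "id": "browser_smoke",
    "name": "Dashboard smoke test",
    "type": "browser",
    "steps": [{"action": "navigate", "url": "/"}],
    "expected_outcome": {"status": "pass"},
    "timeout_ms": 10000,
}
print(validate_scenario(smoke))  # []
```

Running such a check before handing scenarios to the browser runner turns a silent timeout into an immediate, named error.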
optimization_engine/devloop/claude_bridge.py (new file, 392 lines)
@@ -0,0 +1,392 @@
"""
Claude Code Bridge - Interface between DevLoop and Claude Code execution.

Handles:
- Translating Gemini plans into Claude Code instructions
- Executing code changes through OpenCode extension or CLI
- Capturing implementation results
"""

import asyncio
import json
import logging
import os
import subprocess
from dataclasses import dataclass
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List, Optional

logger = logging.getLogger(__name__)


@dataclass
class ImplementationResult:
    """Result of a Claude Code implementation."""

    status: str  # "success", "partial", "error"
    files_modified: List[str]
    warnings: List[str]
    errors: List[str]
    duration_seconds: float


class ClaudeCodeBridge:
    """
    Bridge between Gemini plans and Claude Code execution.

    Supports multiple execution modes:
    - CLI: Direct Claude Code CLI invocation
    - API: Anthropic API for code generation (if API key available)
    - Manual: Generate instructions for human execution
    """

    def __init__(self, config: Optional[Dict] = None):
        """
        Initialize the bridge.

        Args:
            config: Configuration with execution mode and API settings
        """
        self.config = config or {}
        self.workspace = Path(self.config.get("workspace", "C:/Users/antoi/Atomizer"))
        self.execution_mode = self.config.get("mode", "cli")
        self._client = None

    @property
    def client(self):
        """Lazy-load Anthropic client if API mode."""
        if self._client is None and self.execution_mode == "api":
            try:
                import anthropic

                api_key = self.config.get("api_key") or os.environ.get("ANTHROPIC_API_KEY")
                if api_key:
                    self._client = anthropic.Anthropic(api_key=api_key)
                    logger.info("Anthropic client initialized")
            except ImportError:
                logger.warning("anthropic package not installed")
        return self._client

    def create_implementation_session(self, plan: Dict) -> str:
        """
        Generate Claude Code instruction from Gemini plan.

        Args:
            plan: Plan dict from GeminiPlanner

        Returns:
            Formatted instruction string for Claude Code
        """
        objective = plan.get("objective", "Unknown objective")
        approach = plan.get("approach", "")
        tasks = plan.get("tasks", [])
        acceptance_criteria = plan.get("acceptance_criteria", [])

        instruction = f"""## Implementation Task: {objective}

### Approach
{approach}

### Tasks to Complete
"""

        for i, task in enumerate(tasks, 1):
            instruction += f"""
{i}. **{task.get("description", "Task")}**
   - File: `{task.get("file", "TBD")}`
   - Priority: {task.get("priority", "medium")}
"""
            if task.get("code_hint"):
                instruction += f"   - Hint: {task.get('code_hint')}\n"
            if task.get("dependencies"):
                instruction += f"   - Depends on: {', '.join(task['dependencies'])}\n"

        instruction += """
### Acceptance Criteria
"""
        for criterion in acceptance_criteria:
            instruction += f"- [ ] {criterion}\n"

        instruction += """
### Constraints
- Maintain existing API contracts
- Follow Atomizer coding standards
- Ensure AtomizerSpec v2.0 compatibility
- Create README.md for any new study
- Use existing extractors from SYS_12 when possible
"""

        return instruction

    async def execute_plan(self, plan: Dict) -> Dict:
        """
        Execute an implementation plan.

        Args:
            plan: Plan dict from GeminiPlanner

        Returns:
            Implementation result dict
        """
        instruction = self.create_implementation_session(plan)

        if self.execution_mode == "cli":
            return await self._execute_via_cli(instruction, plan)
        elif self.execution_mode == "api":
            return await self._execute_via_api(instruction, plan)
        else:
            return await self._execute_manual(instruction, plan)

    async def _execute_via_cli(self, instruction: str, plan: Dict) -> Dict:
        """Execute through Claude Code CLI."""
        start_time = datetime.now()

        # Write instruction to temp file
        instruction_file = self.workspace / ".devloop_instruction.md"
        instruction_file.write_text(instruction)

        files_modified = []
        warnings = []
        errors = []

        try:
            # Try to invoke Claude Code CLI
            # Note: This assumes claude-code or similar CLI is available
            result = subprocess.run(
                [
                    "powershell",
                    "-Command",
                    f"cd {self.workspace}; claude --print '{instruction_file}'",
                ],
                capture_output=True,
                text=True,
                timeout=300,  # 5 minute timeout
                cwd=str(self.workspace),
            )

            if result.returncode == 0:
                # Parse output for modified files
                output = result.stdout
                for line in output.split("\n"):
                    if "Modified:" in line or "Created:" in line:
                        parts = line.split(":", 1)
                        if len(parts) > 1:
                            files_modified.append(parts[1].strip())

                status = "success"
            else:
                errors.append(result.stderr or "CLI execution failed")
                status = "error"

        except subprocess.TimeoutExpired:
            errors.append("CLI execution timed out after 5 minutes")
            status = "error"
        except FileNotFoundError:
            # Claude CLI not found, fall back to manual mode
            logger.warning("Claude CLI not found, switching to manual mode")
            return await self._execute_manual(instruction, plan)
        except Exception as e:
            errors.append(str(e))
            status = "error"
        finally:
            # Clean up temp file
            if instruction_file.exists():
                instruction_file.unlink()

        duration = (datetime.now() - start_time).total_seconds()

        return {
            "status": status,
            "files": files_modified,
            "warnings": warnings,
            "errors": errors,
            "duration_seconds": duration,
        }

    async def _execute_via_api(self, instruction: str, plan: Dict) -> Dict:
        """Execute through Anthropic API for code generation."""
        if not self.client:
            return await self._execute_manual(instruction, plan)

        start_time = datetime.now()
        files_modified = []
        warnings = []
        errors = []

        try:
            # Use Claude API for code generation
            response = self.client.messages.create(
                model="claude-sonnet-4-20250514",
                max_tokens=8192,
                messages=[
                    {
                        "role": "user",
                        "content": f"""You are implementing code for the Atomizer FEA optimization framework.

{instruction}

For each file that needs to be created or modified, output the complete file content in this format:

### FILE: path/to/file.py
```python
# file content here
```

Be thorough and implement all tasks completely.
""",
                    }
                ],
            )

            # Parse response for file contents
            content = response.content[0].text

            # Extract files from response
            import re

            file_pattern = r"### FILE: (.+?)\n```\w*\n(.*?)```"
            matches = re.findall(file_pattern, content, re.DOTALL)

            for file_path, file_content in matches:
                try:
                    full_path = self.workspace / file_path.strip()
                    full_path.parent.mkdir(parents=True, exist_ok=True)
                    full_path.write_text(file_content.strip())
                    files_modified.append(str(file_path.strip()))
                    logger.info(f"Created/modified: {file_path}")
                except Exception as e:
                    errors.append(f"Failed to write {file_path}: {e}")

            status = "success" if files_modified else "partial"

        except Exception as e:
            errors.append(str(e))
            status = "error"

        duration = (datetime.now() - start_time).total_seconds()

        return {
            "status": status,
            "files": files_modified,
            "warnings": warnings,
            "errors": errors,
            "duration_seconds": duration,
        }

    async def _execute_manual(self, instruction: str, plan: Dict) -> Dict:
        """
        Generate manual instructions (when automation not available).

        Saves instruction to file for human execution.
        """
        start_time = datetime.now()

        # Save instruction for manual execution
        output_file = self.workspace / ".devloop" / "pending_instruction.md"
        output_file.parent.mkdir(parents=True, exist_ok=True)
        output_file.write_text(instruction)

        logger.info(f"Manual instruction saved to: {output_file}")

        return {
            "status": "pending_manual",
            "instruction_file": str(output_file),
            "files": [],
            "warnings": ["Automated execution not available. Please execute manually."],
            "errors": [],
            "duration_seconds": (datetime.now() - start_time).total_seconds(),
        }

    async def execute_fix(self, fix_plan: Dict) -> Dict:
        """
        Execute a specific fix from analysis.

        Args:
            fix_plan: Fix plan dict from ProblemAnalyzer

        Returns:
            Fix result dict
        """
        issue_id = fix_plan.get("issue_id", "unknown")
        approach = fix_plan.get("approach", "")
        steps = fix_plan.get("steps", [])

        instruction = f"""## Bug Fix: {issue_id}

### Approach
{approach}

### Steps
"""
        for i, step in enumerate(steps, 1):
            instruction += f"{i}. {step.get('description', step.get('action', 'Step'))}\n"
            if step.get("file"):
                instruction += f"   File: `{step['file']}`\n"

        instruction += """
### Verification
After implementing the fix, verify that:
1. The specific test case passes
2. No regressions are introduced
3. Code follows Atomizer patterns
"""

        # Execute as a mini-plan
        return await self.execute_plan(
            {
                "objective": f"Fix: {issue_id}",
                "approach": approach,
                "tasks": [
                    {
                        "description": step.get("description", step.get("action")),
                        "file": step.get("file"),
                        "priority": "high",
                    }
                    for step in steps
                ],
                "acceptance_criteria": [
                    "Original test passes",
                    "No new errors introduced",
                ],
            }
        )

    def get_execution_status(self) -> Dict:
        """Get current execution status."""
        pending_file = self.workspace / ".devloop" / "pending_instruction.md"

        return {
            "mode": self.execution_mode,
            "workspace": str(self.workspace),
            "has_pending_instruction": pending_file.exists(),
            "api_available": self.client is not None,
        }

    async def verify_implementation(self, expected_files: List[str]) -> Dict:
        """
        Verify that implementation created expected files.

        Args:
            expected_files: List of file paths that should exist

        Returns:
            Verification result
        """
        missing = []
        found = []

        for file_path in expected_files:
            path = (
                self.workspace / file_path if not Path(file_path).is_absolute() else Path(file_path)
            )
            if path.exists():
                found.append(str(file_path))
            else:
                missing.append(str(file_path))

        return {
            "complete": len(missing) == 0,
            "found": found,
            "missing": missing,
        }
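The `### FILE:` parsing in `_execute_via_api` can be checked in isolation. A minimal sketch using the same `file_pattern` regex as the source; the sample response text is invented for illustration:

```python
import re

# Same pattern as _execute_via_api: group 1 is the path, group 2 the fenced body.
file_pattern = r"### FILE: (.+?)\n```\w*\n(.*?)```"

# Hypothetical model response in the expected format.
response_text = (
    "### FILE: demo/hello.py\n"
    "```python\n"
    "print('hello')\n"
    "```\n"
)

matches = re.findall(file_pattern, response_text, re.DOTALL)
for path, body in matches:
    print(path.strip(), "->", body.strip())  # demo/hello.py -> print('hello')
```

The non-greedy `(.*?)` plus `re.DOTALL` is what lets one pattern pull out several fenced files from a single response without the first match swallowing the rest.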
optimization_engine/devloop/cli_bridge.py (new file, 652 lines)
@@ -0,0 +1,652 @@
|
|||||||
|
"""
|
||||||
|
CLI Bridge - Execute AI tasks through Claude Code CLI and OpenCode CLI.
|
||||||
|
|
||||||
|
Uses your existing subscriptions via CLI tools:
|
||||||
|
- Claude Code CLI (claude.exe) for implementation
|
||||||
|
- OpenCode CLI (opencode) for Gemini planning
|
||||||
|
|
||||||
|
No API keys needed - leverages your CLI subscriptions.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import asyncio
|
||||||
|
import json
|
||||||
|
import logging
|
||||||
|
import os
|
||||||
|
import subprocess
|
||||||
|
import tempfile
|
||||||
|
from dataclasses import dataclass
|
||||||
|
from datetime import datetime
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Any, Dict, List, Optional, Tuple
|
||||||
|
import re
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class CLIResult:
|
||||||
|
"""Result from CLI execution."""
|
||||||
|
|
||||||
|
success: bool
|
||||||
|
output: str
|
||||||
|
error: str
|
||||||
|
duration_seconds: float
|
||||||
|
files_modified: List[str]
|
||||||
|
|
||||||
|
|
||||||
|
class ClaudeCodeCLI:
|
||||||
|
"""
|
||||||
|
Execute tasks through Claude Code CLI.
|
||||||
|
|
||||||
|
Uses: claude.exe --print for non-interactive execution
|
||||||
|
"""
|
||||||
|
|
||||||
|
CLAUDE_PATH = r"C:\Users\antoi\.local\bin\claude.exe"
|
||||||
|
|
||||||
|
def __init__(self, workspace: Path):
|
||||||
|
self.workspace = workspace
|
||||||
|
|
||||||
|
async def execute(
|
||||||
|
self,
|
||||||
|
prompt: str,
|
||||||
|
timeout: int = 300,
|
||||||
|
model: str = "opus",
|
||||||
|
) -> CLIResult:
|
||||||
|
"""
|
||||||
|
Execute a prompt through Claude Code CLI.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
prompt: The instruction/prompt to execute
|
||||||
|
timeout: Timeout in seconds
|
||||||
|
model: Model to use (opus, sonnet, haiku)
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
CLIResult with output and modified files
|
||||||
|
"""
|
||||||
|
start_time = datetime.now()
|
||||||
|
|
||||||
|
# Build command
|
||||||
|
cmd = [
|
||||||
|
self.CLAUDE_PATH,
|
||||||
|
"--print", # Non-interactive mode
|
||||||
|
"--model",
|
||||||
|
model,
|
||||||
|
"--permission-mode",
|
||||||
|
"acceptEdits", # Auto-accept edits
|
||||||
|
prompt,
|
||||||
|
]
|
||||||
|
|
||||||
|
logger.info(f"Executing Claude Code CLI: {prompt[:100]}...")
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Run in workspace directory
|
||||||
|
result = subprocess.run(
|
||||||
|
cmd,
|
||||||
|
capture_output=True,
|
||||||
|
text=True,
|
||||||
|
timeout=timeout,
|
||||||
|
cwd=str(self.workspace),
|
||||||
|
env={**os.environ, "TERM": "dumb"}, # Disable colors
|
||||||
|
)
|
||||||
|
|
||||||
|
output = result.stdout
|
||||||
|
error = result.stderr
|
||||||
|
success = result.returncode == 0
|
||||||
|
|
||||||
|
# Extract modified files from output
|
||||||
|
files_modified = self._extract_modified_files(output)
|
||||||
|
|
||||||
|
duration = (datetime.now() - start_time).total_seconds()
|
||||||
|
|
||||||
|
logger.info(
|
||||||
|
f"Claude Code completed in {duration:.1f}s, modified {len(files_modified)} files"
|
||||||
|
)
|
||||||
|
|
||||||
|
return CLIResult(
|
||||||
|
success=success,
|
||||||
|
output=output,
|
||||||
|
error=error,
|
||||||
|
duration_seconds=duration,
|
||||||
|
files_modified=files_modified,
|
||||||
|
)
|
||||||
|
|
||||||
|
except subprocess.TimeoutExpired:
|
||||||
|
return CLIResult(
|
||||||
|
success=False,
|
||||||
|
output="",
|
||||||
|
error=f"Timeout after {timeout}s",
|
||||||
|
duration_seconds=timeout,
|
||||||
|
files_modified=[],
|
||||||
|
)
|
||||||
|
except Exception as e:
|
||||||
|
return CLIResult(
|
||||||
|
success=False,
|
||||||
|
output="",
|
||||||
|
error=str(e),
|
||||||
|
duration_seconds=(datetime.now() - start_time).total_seconds(),
|
||||||
|
files_modified=[],
|
||||||
|
)
|
||||||
|
|
||||||
|
def _extract_modified_files(self, output: str) -> List[str]:
|
||||||
|
"""Extract list of modified files from Claude Code output."""
|
||||||
|
files = []
|
||||||
|
|
||||||
|
# Look for file modification patterns
|
||||||
|
patterns = [
|
||||||
|
r"(?:Created|Modified|Wrote|Updated|Edited):\s*[`'\"]?([^\s`'\"]+)[`'\"]?",
|
||||||
|
r"Writing to [`'\"]?([^\s`'\"]+)[`'\"]?",
|
||||||
|
r"File saved: ([^\s]+)",
|
||||||
|
]
|
||||||
|
|
||||||
|
for pattern in patterns:
|
||||||
|
matches = re.findall(pattern, output, re.IGNORECASE)
|
||||||
|
files.extend(matches)
|
||||||
|
|
||||||
|
return list(set(files))
|
||||||
|
|
||||||
|
async def execute_with_context(
|
||||||
|
self,
|
||||||
|
prompt: str,
|
||||||
|
context_files: List[str],
|
||||||
|
timeout: int = 300,
|
||||||
|
) -> CLIResult:
|
||||||
|
"""
|
||||||
|
Execute with additional context files loaded.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
prompt: The instruction
|
||||||
|
context_files: Files to read as context
|
||||||
|
timeout: Timeout in seconds
|
||||||
|
"""
|
||||||
|
# Build prompt with context
|
||||||
|
context_prompt = prompt
|
||||||
|
|
||||||
|
if context_files:
|
||||||
|
context_prompt += "\n\nContext files to consider:\n"
|
||||||
|
for f in context_files:
|
||||||
|
context_prompt += f"- {f}\n"
|
||||||
|
|
||||||
|
return await self.execute(context_prompt, timeout)
|
||||||
|
|
||||||
|
|
||||||
|
class OpenCodeCLI:
|
||||||
|
"""
|
||||||
|
Execute tasks through OpenCode CLI (Gemini).
|
||||||
|
|
||||||
|
Uses: opencode run for non-interactive execution
|
||||||
|
"""
|
||||||
|
|
||||||
|
OPENCODE_PATH = r"C:\Users\antoi\AppData\Roaming\npm\opencode.cmd"
|
||||||
|
|
||||||
|
def __init__(self, workspace: Path):
|
||||||
|
self.workspace = workspace
|
||||||
|
|
||||||
|
async def execute(
|
||||||
|
self,
|
||||||
|
prompt: str,
|
||||||
|
timeout: int = 180,
|
||||||
|
model: str = "google/gemini-3-pro-preview",
|
||||||
|
) -> CLIResult:
|
||||||
|
"""
|
||||||
|
Execute a prompt through OpenCode CLI.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
prompt: The instruction/prompt
|
||||||
|
timeout: Timeout in seconds
|
||||||
|
model: Model to use
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
CLIResult with output
|
||||||
|
"""
|
||||||
|
start_time = datetime.now()
|
||||||
|
|
||||||
|
# Build command
|
||||||
|
cmd = [self.OPENCODE_PATH, "run", "--model", model, prompt]
|
||||||
|
|
||||||
|
logger.info(f"Executing OpenCode CLI: {prompt[:100]}...")
|
||||||
|
|
||||||
|
try:
|
||||||
|
result = subprocess.run(
|
||||||
|
cmd,
|
||||||
|
capture_output=True,
|
||||||
|
text=True,
|
||||||
|
timeout=timeout,
|
||||||
|
cwd=str(self.workspace),
|
||||||
|
env={**os.environ, "TERM": "dumb"},
|
||||||
|
)
|
||||||
|
|
||||||
|
output = result.stdout
|
||||||
|
error = result.stderr
|
||||||
|
success = result.returncode == 0
|
||||||
|
|
||||||
|
duration = (datetime.now() - start_time).total_seconds()
|
||||||
|
|
||||||
|
logger.info(f"OpenCode completed in {duration:.1f}s")
|
||||||
|
|
||||||
|
return CLIResult(
|
||||||
|
success=success,
|
||||||
|
output=output,
|
||||||
|
error=error,
|
||||||
|
duration_seconds=duration,
|
||||||
|
files_modified=[], # OpenCode typically doesn't modify files directly
|
||||||
|
)
|
||||||
|
|
||||||
|
except subprocess.TimeoutExpired:
|
||||||
|
return CLIResult(
|
||||||
|
success=False,
|
||||||
|
output="",
|
||||||
|
error=f"Timeout after {timeout}s",
|
||||||
|
duration_seconds=timeout,
|
||||||
|
files_modified=[],
|
||||||
|
)
|
||||||
|
except Exception as e:
|
||||||
|
return CLIResult(
|
||||||
|
success=False,
|
||||||
|
output="",
|
||||||
|
error=str(e),
|
||||||
|
duration_seconds=(datetime.now() - start_time).total_seconds(),
|
||||||
|
files_modified=[],
|
||||||
|
)
|
||||||
|
|
||||||
|
    async def plan(self, objective: str, context: Dict = None) -> Dict:
        """
        Create an implementation plan using Gemini via OpenCode.

        Args:
            objective: What to achieve
            context: Additional context

        Returns:
            Plan dict with tasks and test scenarios
        """
        prompt = f"""You are a strategic planner for Atomizer, an FEA optimization framework.

## Objective
{objective}

## Context
{json.dumps(context, indent=2) if context else "None provided"}

## Task
Create a detailed implementation plan in JSON format with:
1. tasks: List of implementation tasks for Claude Code
2. test_scenarios: Tests to verify implementation
3. acceptance_criteria: Success conditions

Output ONLY valid JSON in this format:
```json
{{
  "objective": "{objective}",
  "approach": "Brief description",
  "tasks": [
    {{
      "id": "task_001",
      "description": "What to do",
      "file": "path/to/file.py",
      "priority": "high"
    }}
  ],
  "test_scenarios": [
    {{
      "id": "test_001",
      "name": "Test name",
      "type": "filesystem",
      "steps": [{{"action": "check_exists", "path": "some/path"}}],
      "expected_outcome": {{"exists": true}}
    }}
  ],
  "acceptance_criteria": [
    "Criterion 1"
  ]
}}
```
"""

        result = await self.execute(prompt)

        if not result.success:
            logger.error(f"OpenCode planning failed: {result.error}")
            return self._fallback_plan(objective, context)

        # Parse JSON from output
        try:
            # Find JSON block in output
            output = result.output

            if "```json" in output:
                start = output.find("```json") + 7
                end = output.find("```", start)
                json_str = output[start:end].strip()
            elif "```" in output:
                start = output.find("```") + 3
                end = output.find("```", start)
                json_str = output[start:end].strip()
            else:
                # Try to find a JSON object directly
                match = re.search(r"\{.*\}", output, re.DOTALL)
                if match:
                    json_str = match.group()
                else:
                    return self._fallback_plan(objective, context)

            plan = json.loads(json_str)
            logger.info(f"Plan created with {len(plan.get('tasks', []))} tasks")
            return plan

        except json.JSONDecodeError as e:
            logger.error(f"Failed to parse plan JSON: {e}")
            return self._fallback_plan(objective, context)

    def _fallback_plan(self, objective: str, context: Dict = None) -> Dict:
        """Generate a fallback plan when Gemini fails."""
        logger.warning("Using fallback plan")

        return {
            "objective": objective,
            "approach": "Fallback plan - manual implementation",
            "tasks": [
                {
                    "id": "task_001",
                    "description": f"Implement: {objective}",
                    "file": "TBD",
                    "priority": "high",
                }
            ],
            "test_scenarios": [],
            "acceptance_criteria": [objective],
        }
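The fenced-JSON extraction inside `plan()` is duplicated in `analyze()` below; the same parsing logic can be factored into one helper. A minimal sketch (hypothetical name `extract_json_block`; the `FENCE` constant is assembled from single backticks only so this example can itself sit inside a fenced document):

```python
import json
import re

FENCE = "`" * 3  # three backticks

def extract_json_block(output: str):
    """Extract a JSON object: prefer a json-tagged fence, then any fence, then a bare {...}."""
    json_fence = FENCE + "json"
    if json_fence in output:
        start = output.find(json_fence) + len(json_fence)
        end = output.find(FENCE, start)
        candidate = output[start:end].strip()
    elif FENCE in output:
        start = output.find(FENCE) + len(FENCE)
        end = output.find(FENCE, start)
        candidate = output[start:end].strip()
    else:
        match = re.search(r"\{.*\}", output, re.DOTALL)
        if not match:
            return None
        candidate = match.group()
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return None

reply = "Here is the plan:\n" + FENCE + "json\n" + '{"tasks": [1, 2]}' + "\n" + FENCE
plan = extract_json_block(reply)
```

Returning `None` on any parse failure gives the callers one uniform signal to fall back on, instead of the mix of early returns and exception handlers used above.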
    async def analyze(self, test_results: Dict) -> Dict:
        """
        Analyze test results using Gemini via OpenCode.

        Args:
            test_results: Test report from dashboard

        Returns:
            Analysis with issues and fix plans
        """
        summary = test_results.get("summary", {})
        scenarios = test_results.get("scenarios", [])

        if summary.get("failed", 0) == 0:
            return {
                "issues_found": False,
                "issues": [],
                "fix_plans": {},
                "recommendations": ["All tests passed!"],
            }

        failures = [s for s in scenarios if not s.get("passed", True)]

        prompt = f"""Analyze these test failures for the Atomizer FEA optimization framework:

## Test Summary
- Total: {summary.get("total", 0)}
- Passed: {summary.get("passed", 0)}
- Failed: {summary.get("failed", 0)}

## Failed Tests
{json.dumps(failures, indent=2)}

## Task
Provide root cause analysis and fix plans in JSON:

```json
{{
  "issues_found": true,
  "issues": [
    {{
      "id": "issue_001",
      "description": "What went wrong",
      "severity": "high",
      "root_cause": "Why it failed"
    }}
  ],
  "fix_plans": {{
    "issue_001": {{
      "approach": "How to fix",
      "steps": [{{"action": "edit", "file": "path", "description": "change"}}]
    }}
  }},
  "recommendations": ["suggestion"]
}}
```
"""

        result = await self.execute(prompt)

        if not result.success:
            return self._fallback_analysis(failures)

        try:
            output = result.output
            if "```json" in output:
                start = output.find("```json") + 7
                end = output.find("```", start)
                json_str = output[start:end].strip()
            else:
                match = re.search(r"\{.*\}", output, re.DOTALL)
                json_str = match.group() if match else "{}"

            return json.loads(json_str)

        except json.JSONDecodeError:
            return self._fallback_analysis(failures)

    def _fallback_analysis(self, failures: List[Dict]) -> Dict:
        """Generate a fallback analysis."""
        issues = []
        fix_plans = {}

        for i, failure in enumerate(failures):
            issue_id = f"issue_{i + 1}"
            issues.append(
                {
                    "id": issue_id,
                    "description": failure.get("error", "Unknown error"),
                    "severity": "medium",
                    "root_cause": "Requires investigation",
                }
            )
            fix_plans[issue_id] = {
                "approach": "Manual investigation required",
                "steps": [],
            }

        return {
            "issues_found": len(issues) > 0,
            "issues": issues,
            "fix_plans": fix_plans,
            "recommendations": ["Review failed tests manually"],
        }
class DevLoopCLIOrchestrator:
    """
    Orchestrate DevLoop using CLI tools.

    - OpenCode (Gemini) for planning and analysis
    - Claude Code for implementation and fixes
    """

    def __init__(self, workspace: Path = None):
        self.workspace = workspace or Path("C:/Users/antoi/Atomizer")
        self.claude = ClaudeCodeCLI(self.workspace)
        self.opencode = OpenCodeCLI(self.workspace)
        self.iteration = 0

    async def run_cycle(
        self,
        objective: str,
        context: Dict = None,
        max_iterations: int = 5,
    ) -> Dict:
        """
        Run a complete development cycle.

        Args:
            objective: What to achieve
            context: Additional context
            max_iterations: Maximum fix iterations

        Returns:
            Cycle report
        """
        from .test_runner import DashboardTestRunner

        start_time = datetime.now()
        results = {
            "objective": objective,
            "iterations": [],
            "status": "in_progress",
        }

        logger.info(f"Starting DevLoop cycle: {objective}")

        # Phase 1: Plan (Gemini via OpenCode)
        logger.info("Phase 1: Planning with Gemini...")
        plan = await self.opencode.plan(objective, context)

        iteration = 0
        while iteration < max_iterations:
            iteration += 1
            iter_result = {"iteration": iteration}

            # Phase 2: Implement (Claude Code)
            logger.info(f"Phase 2 (iter {iteration}): Implementing with Claude Code...")
            impl_result = await self._implement(plan)
            iter_result["implementation"] = {
                "success": impl_result.success,
                "files_modified": impl_result.files_modified,
            }

            # Phase 3: Test (Dashboard)
            logger.info(f"Phase 3 (iter {iteration}): Testing...")
            test_runner = DashboardTestRunner()
            test_results = await test_runner.run_test_suite(plan.get("test_scenarios", []))
            iter_result["test_results"] = test_results

            # Check if all tests pass
            summary = test_results.get("summary", {})
            if summary.get("failed", 0) == 0:
                logger.info("All tests passed!")
                results["iterations"].append(iter_result)
                results["status"] = "success"
                break

            # Phase 4: Analyze (Gemini via OpenCode)
            logger.info(f"Phase 4 (iter {iteration}): Analyzing failures...")
            analysis = await self.opencode.analyze(test_results)
            iter_result["analysis"] = analysis

            if not analysis.get("issues_found"):
                results["status"] = "success"
                results["iterations"].append(iter_result)
                break

            # Phase 5: Fix (Claude Code)
            logger.info(f"Phase 5 (iter {iteration}): Fixing issues...")
            fix_result = await self._fix(analysis)
            iter_result["fixes"] = {
                "success": fix_result.success,
                "files_modified": fix_result.files_modified,
            }

            results["iterations"].append(iter_result)

        if results["status"] == "in_progress":
            results["status"] = "max_iterations_reached"

        results["duration_seconds"] = (datetime.now() - start_time).total_seconds()

        logger.info(f"DevLoop cycle completed: {results['status']}")

        return results
    async def _implement(self, plan: Dict) -> CLIResult:
        """Implement the plan using Claude Code."""
        tasks = plan.get("tasks", [])

        if not tasks:
            return CLIResult(
                success=True,
                output="No tasks to implement",
                error="",
                duration_seconds=0,
                files_modified=[],
            )

        # Build implementation prompt
        prompt = f"""Implement the following tasks for Atomizer:

## Objective
{plan.get("objective", "Unknown")}

## Approach
{plan.get("approach", "Follow best practices")}

## Tasks
"""
        for task in tasks:
            prompt += f"""
### {task.get("id", "task")}: {task.get("description", "")}
- File: {task.get("file", "TBD")}
- Priority: {task.get("priority", "medium")}
"""

        prompt += """
## Requirements
- Follow Atomizer coding standards
- Use AtomizerSpec v2.0 format
- Create README.md for any new study
- Use existing extractors from optimization_engine/extractors/
"""

        return await self.claude.execute(prompt, timeout=300)

    async def _fix(self, analysis: Dict) -> CLIResult:
        """Apply fixes using Claude Code."""
        issues = analysis.get("issues", [])
        fix_plans = analysis.get("fix_plans", {})

        if not issues:
            return CLIResult(
                success=True,
                output="No issues to fix",
                error="",
                duration_seconds=0,
                files_modified=[],
            )

        # Build fix prompt
        prompt = "Fix the following issues:\n\n"

        for issue in issues:
            issue_id = issue.get("id", "unknown")
            prompt += f"""
## Issue: {issue_id}
- Description: {issue.get("description", "")}
- Root Cause: {issue.get("root_cause", "Unknown")}
- Severity: {issue.get("severity", "medium")}
"""

            fix_plan = fix_plans.get(issue_id, {})
            if fix_plan:
                prompt += f"- Fix Approach: {fix_plan.get('approach', 'Investigate')}\n"
                for step in fix_plan.get("steps", []):
                    prompt += f"  - {step.get('description', step.get('action', 'step'))}\n"

        return await self.claude.execute(prompt, timeout=300)

    async def step_plan(self, objective: str, context: Dict = None) -> Dict:
        """Execute only the planning phase."""
        return await self.opencode.plan(objective, context)

    async def step_implement(self, plan: Dict) -> CLIResult:
        """Execute only the implementation phase."""
        return await self._implement(plan)

    async def step_analyze(self, test_results: Dict) -> Dict:
        """Execute only the analysis phase."""
        return await self.opencode.analyze(test_results)
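Stripped of the CLI calls, `run_cycle()` above is an implement/test loop that stops on the first green test run or when the iteration budget is exhausted. The control flow can be sketched with stub phases (hypothetical `run_loop` helper; in DevLoop the callables are the Claude Code and dashboard calls):

```python
def run_loop(phases, max_iterations=5):
    """Iterate implement -> test until the failure count hits zero or the budget runs out."""
    status = "in_progress"
    iterations = []
    for i in range(1, max_iterations + 1):
        phases["implement"]()
        failed = phases["test"]()  # number of failing tests this iteration
        iterations.append({"iteration": i, "failed": failed})
        if failed == 0:
            status = "success"
            break
    else:
        # for/else: only reached when the loop ran out without a break
        status = "max_iterations_reached"
    return {"status": status, "iterations": iterations}

# Stub test phase that fails twice, then passes on the third iteration.
remaining = {"failures": 2}
def flaky_test():
    fails = remaining["failures"]
    remaining["failures"] = max(0, fails - 1)
    return fails

report = run_loop({"implement": lambda: None, "test": flaky_test})
```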
561 optimization_engine/devloop/orchestrator.py Normal file
@@ -0,0 +1,561 @@
"""
DevLoop Orchestrator - Master controller for closed-loop development.

Coordinates:
- Gemini Pro: Strategic planning, analysis, test design
- Claude Code: Implementation, code changes, fixes
- Dashboard: Automated testing, verification
- LAC: Learning capture and retrieval
"""

import asyncio
import json
import logging
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from pathlib import Path
from typing import Any, Callable, Dict, List, Optional

logger = logging.getLogger(__name__)


class LoopPhase(Enum):
    """Current phase in the development loop."""

    IDLE = "idle"
    PLANNING = "planning"
    IMPLEMENTING = "implementing"
    TESTING = "testing"
    ANALYZING = "analyzing"
    FIXING = "fixing"
    VERIFYING = "verifying"


@dataclass
class LoopState:
    """Current state of the development loop."""

    phase: LoopPhase = LoopPhase.IDLE
    iteration: int = 0
    current_task: Optional[str] = None
    test_results: Optional[Dict] = None
    analysis: Optional[Dict] = None
    last_update: str = field(default_factory=lambda: datetime.now().isoformat())


@dataclass
class IterationResult:
    """Result of a single development iteration."""

    iteration: int
    plan: Optional[Dict] = None
    implementation: Optional[Dict] = None
    test_results: Optional[Dict] = None
    analysis: Optional[Dict] = None
    fixes: Optional[List[Dict]] = None
    verification: Optional[Dict] = None
    success: bool = False
    duration_seconds: float = 0.0


@dataclass
class CycleReport:
    """Complete report for a development cycle."""

    objective: str
    start_time: str = field(default_factory=lambda: datetime.now().isoformat())
    end_time: Optional[str] = None
    iterations: List[IterationResult] = field(default_factory=list)
    status: str = "in_progress"
    total_duration_seconds: float = 0.0
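The timestamp fields above use `field(default_factory=...)` rather than a plain default on purpose: a plain default is evaluated once at class-definition time and would stamp every instance with the same moment, while a factory runs per instance. A minimal sketch of the difference (hypothetical `Stamped` class; a deterministic counter stands in for the clock):

```python
from dataclasses import dataclass, field
from itertools import count

_tick = count()  # stand-in "clock" so the example is deterministic

@dataclass
class Stamped:
    # default_factory runs once per instance, so each object gets a fresh value
    created: int = field(default_factory=lambda: next(_tick))

a, b = Stamped(), Stamped()
```

With `created: int = next(_tick)` instead, both `a.created` and `b.created` would hold the single value computed when the class body ran.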
class DevLoopOrchestrator:
    """
    Autonomous development loop orchestrator.

    Coordinates Gemini (planning) + Claude Code (implementation) + Dashboard (testing)
    in a continuous improvement cycle.

    Flow:
    1. Gemini: Plan features/fixes
    2. Claude Code: Implement
    3. Dashboard: Test
    4. Gemini: Analyze results
    5. Claude Code: Fix issues
    6. Dashboard: Verify
    7. Loop back with learnings
    """

    def __init__(
        self,
        config: Optional[Dict] = None,
        gemini_client: Optional[Any] = None,
        claude_bridge: Optional[Any] = None,
        dashboard_runner: Optional[Any] = None,
    ):
        """
        Initialize the orchestrator.

        Args:
            config: Configuration dict with API keys and settings
            gemini_client: Pre-configured Gemini client (optional)
            claude_bridge: Pre-configured Claude Code bridge (optional)
            dashboard_runner: Pre-configured Dashboard test runner (optional)
        """
        self.config = config or self._default_config()
        self.state = LoopState()
        self.subscribers: List[Callable] = []

        # Initialize components lazily
        self._gemini = gemini_client
        self._claude_bridge = claude_bridge
        self._dashboard = dashboard_runner
        self._lac = None

        # History for learning
        self.cycle_history: List[CycleReport] = []

    def _default_config(self) -> Dict:
        """Default configuration."""
        return {
            "max_iterations": 10,
            "auto_fix_threshold": "high",  # Only auto-fix high+ severity
            "learning_enabled": True,
            "dashboard_url": "http://localhost:3000",
            "websocket_url": "ws://localhost:8000",
            "test_timeout_ms": 30000,
        }

    @property
    def gemini(self):
        """Lazy-load the Gemini planner."""
        if self._gemini is None:
            from .planning import GeminiPlanner

            self._gemini = GeminiPlanner(self.config.get("gemini", {}))
        return self._gemini

    @property
    def claude_bridge(self):
        """Lazy-load the Claude Code bridge."""
        if self._claude_bridge is None:
            from .claude_bridge import ClaudeCodeBridge

            self._claude_bridge = ClaudeCodeBridge(self.config.get("claude", {}))
        return self._claude_bridge

    @property
    def dashboard(self):
        """Lazy-load the Dashboard test runner."""
        if self._dashboard is None:
            from .test_runner import DashboardTestRunner

            self._dashboard = DashboardTestRunner(self.config)
        return self._dashboard

    @property
    def lac(self):
        """Lazy-load LAC (Learning Atomizer Core)."""
        if self._lac is None and self.config.get("learning_enabled", True):
            try:
                from knowledge_base.lac import get_lac

                self._lac = get_lac()
            except ImportError:
                logger.warning("LAC not available, learning disabled")
        return self._lac

    def subscribe(self, callback: Callable[[LoopState], None]):
        """Subscribe to state updates."""
        self.subscribers.append(callback)

    def unsubscribe(self, callback: Callable):
        """Unsubscribe from state updates."""
        if callback in self.subscribers:
            self.subscribers.remove(callback)

    def _notify_subscribers(self):
        """Notify all subscribers of a state change."""
        self.state.last_update = datetime.now().isoformat()
        for callback in self.subscribers:
            try:
                callback(self.state)
            except Exception as e:
                logger.error(f"Subscriber error: {e}")

    def _update_state(self, phase: Optional[LoopPhase] = None, task: Optional[str] = None):
        """Update state and notify subscribers."""
        if phase:
            self.state.phase = phase
        if task:
            self.state.current_task = task
        self._notify_subscribers()
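`_notify_subscribers()` wraps each callback in its own try/except so one misbehaving subscriber cannot prevent the rest from seeing the state change. The same isolation pattern, self-contained:

```python
import logging

logger = logging.getLogger(__name__)

def notify_all(subscribers, state):
    """Call every subscriber; log failures instead of letting one abort the rest."""
    delivered = 0
    for callback in subscribers:
        try:
            callback(state)
            delivered += 1
        except Exception as e:
            logger.error(f"Subscriber error: {e}")
    return delivered

seen = []
def bad(_state):
    raise RuntimeError("boom")

# The failing subscriber is first, yet both appends still receive the state.
count = notify_all([bad, seen.append, seen.append], {"phase": "testing"})
```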
    async def run_development_cycle(
        self,
        objective: str,
        context: Optional[Dict] = None,
        max_iterations: Optional[int] = None,
    ) -> CycleReport:
        """
        Execute a complete development cycle.

        Args:
            objective: What to achieve (e.g., "Create support_arm optimization study")
            context: Additional context (study spec, problem statement, etc.)
            max_iterations: Override default max iterations

        Returns:
            CycleReport with all iteration results
        """
        max_iter = max_iterations or self.config.get("max_iterations", 10)

        report = CycleReport(objective=objective)
        start_time = datetime.now()

        logger.info(f"Starting development cycle: {objective}")

        try:
            while not self._is_objective_complete(report) and len(report.iterations) < max_iter:
                iteration_result = await self._run_iteration(objective, context)
                report.iterations.append(iteration_result)

                # Record learning from successful patterns
                if iteration_result.success and self.lac:
                    await self._record_learning(iteration_result)

                # Check for max iterations
                if len(report.iterations) >= max_iter:
                    report.status = "max_iterations_reached"
                    logger.warning(f"Max iterations ({max_iter}) reached")
                    break

        except Exception as e:
            report.status = f"error: {str(e)}"
            logger.error(f"Development cycle error: {e}")

        report.end_time = datetime.now().isoformat()
        report.total_duration_seconds = (datetime.now() - start_time).total_seconds()

        if report.status == "in_progress":
            report.status = "completed"

        self.cycle_history.append(report)
        self._update_state(LoopPhase.IDLE)

        return report

    def _is_objective_complete(self, report: CycleReport) -> bool:
        """Check whether the objective has been achieved."""
        if not report.iterations:
            return False

        last_iter = report.iterations[-1]

        # Success if the last iteration passed all tests
        if last_iter.success and last_iter.test_results:
            tests = last_iter.test_results
            if tests.get("summary", {}).get("failed", 0) == 0:
                return True

        return False

    async def _run_iteration(self, objective: str, context: Optional[Dict]) -> IterationResult:
        """Run a single iteration through all phases."""
        start_time = datetime.now()
        result = IterationResult(iteration=self.state.iteration)

        try:
            # Phase 1: Planning (Gemini)
            self._update_state(LoopPhase.PLANNING, "Creating implementation plan")
            result.plan = await self._planning_phase(objective, context)

            # Phase 2: Implementation (Claude Code)
            self._update_state(LoopPhase.IMPLEMENTING, "Implementing changes")
            result.implementation = await self._implementation_phase(result.plan)

            # Phase 3: Testing (Dashboard)
            self._update_state(LoopPhase.TESTING, "Running tests")
            result.test_results = await self._testing_phase(result.plan)
            self.state.test_results = result.test_results

            # Phase 4: Analysis (Gemini)
            self._update_state(LoopPhase.ANALYZING, "Analyzing results")
            result.analysis = await self._analysis_phase(result.test_results)
            self.state.analysis = result.analysis

            # Phases 5-6: Fix & verify if needed
            if result.analysis and result.analysis.get("issues_found"):
                self._update_state(LoopPhase.FIXING, "Implementing fixes")
                result.fixes = await self._fixing_phase(result.analysis)

                self._update_state(LoopPhase.VERIFYING, "Verifying fixes")
                result.verification = await self._verification_phase(result.fixes)
                result.success = result.verification.get("all_passed", False)
            else:
                result.success = True

        except Exception as e:
            logger.error(f"Iteration {self.state.iteration} failed: {e}")
            result.success = False

        result.duration_seconds = (datetime.now() - start_time).total_seconds()
        self.state.iteration += 1

        return result
    async def _planning_phase(self, objective: str, context: Optional[Dict]) -> Dict:
        """Gemini creates the implementation plan."""
        # Gather context
        historical_learnings = []
        if self.lac:
            historical_learnings = self.lac.get_relevant_insights(objective)

        plan_request = {
            "objective": objective,
            "context": context or {},
            "previous_results": self.state.test_results,
            "historical_learnings": historical_learnings,
        }

        try:
            plan = await self.gemini.create_plan(plan_request)
            logger.info(f"Plan created with {len(plan.get('tasks', []))} tasks")
            return plan
        except Exception as e:
            logger.error(f"Planning phase failed: {e}")
            return {"error": str(e), "tasks": [], "test_scenarios": []}

    async def _implementation_phase(self, plan: Dict) -> Dict:
        """Claude Code implements the plan."""
        if not plan or plan.get("error"):
            return {"status": "skipped", "reason": "No valid plan"}

        try:
            result = await self.claude_bridge.execute_plan(plan)
            return {
                "status": result.get("status", "unknown"),
                "files_modified": result.get("files", []),
                "warnings": result.get("warnings", []),
            }
        except Exception as e:
            logger.error(f"Implementation phase failed: {e}")
            return {"status": "error", "error": str(e)}

    async def _testing_phase(self, plan: Dict) -> Dict:
        """Dashboard runs automated tests."""
        test_scenarios = plan.get("test_scenarios", [])

        if not test_scenarios:
            # Generate default tests based on the objective
            test_scenarios = self._generate_default_tests(plan)

        try:
            results = await self.dashboard.run_test_suite(test_scenarios)
            return results
        except Exception as e:
            logger.error(f"Testing phase failed: {e}")
            return {
                "status": "error",
                "error": str(e),
                "summary": {"passed": 0, "failed": 1, "total": 1},
            }

    def _generate_default_tests(self, plan: Dict) -> List[Dict]:
        """Generate default test scenarios based on the plan."""
        objective = plan.get("objective", "")

        tests = []

        # Study creation tests
        if "study" in objective.lower() or "create" in objective.lower():
            tests.extend(
                [
                    {
                        "id": "test_study_exists",
                        "name": "Study directory exists",
                        "type": "filesystem",
                        "check": "directory_exists",
                    },
                    {
                        "id": "test_spec_valid",
                        "name": "AtomizerSpec is valid",
                        "type": "api",
                        "endpoint": "/api/studies/{study_id}/spec/validate",
                    },
                    {
                        "id": "test_dashboard_loads",
                        "name": "Dashboard loads study",
                        "type": "browser",
                        "action": "load_study",
                    },
                ]
            )

        # Optimization tests ("optimi" matches both "optimize" and "optimization")
        if "optimi" in objective.lower():
            tests.extend(
                [
                    {
                        "id": "test_run_trial",
                        "name": "Single trial executes",
                        "type": "cli",
                        "command": "python run_optimization.py --test",
                    },
                ]
            )

        return tests
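The default-test generation keys off substrings of the objective ("study"/"create" versus "optimi"), and an objective can match both branches. The selection rule in isolation (hypothetical `default_test_ids` helper; the ids are the ones used in `_generate_default_tests` above):

```python
def default_test_ids(objective: str):
    """Pick default test ids by keyword, mirroring the branches of _generate_default_tests."""
    obj = objective.lower()
    ids = []
    if "study" in obj or "create" in obj:
        ids += ["test_study_exists", "test_spec_valid", "test_dashboard_loads"]
    if "optimi" in obj:  # substring matches both "optimize" and "optimization"
        ids += ["test_run_trial"]
    return ids

# An objective mentioning creation AND optimization gets both groups.
ids = default_test_ids("Create support_arm optimization study")
```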
    async def _analysis_phase(self, test_results: Dict) -> Dict:
        """Gemini analyzes test results."""
        try:
            from .analyzer import ProblemAnalyzer

            analyzer = ProblemAnalyzer(self.gemini)
            return await analyzer.analyze_test_results(test_results)
        except Exception as e:
            logger.error(f"Analysis phase failed: {e}")
            return {
                "issues_found": True,
                "issues": [{"description": str(e), "severity": "high"}],
                "fix_plans": {},
            }

    async def _fixing_phase(self, analysis: Dict) -> List[Dict]:
        """Claude Code implements fixes."""
        fixes = []

        for issue in analysis.get("issues", []):
            fix_plan = analysis.get("fix_plans", {}).get(issue.get("id", "unknown"))

            if fix_plan:
                try:
                    result = await self.claude_bridge.execute_fix(fix_plan)
                    fixes.append(
                        {
                            "issue_id": issue.get("id"),
                            "status": result.get("status"),
                            "files_modified": result.get("files", []),
                        }
                    )
                except Exception as e:
                    fixes.append(
                        {
                            "issue_id": issue.get("id"),
                            "status": "error",
                            "error": str(e),
                        }
                    )

        return fixes

    async def _verification_phase(self, fixes: List[Dict]) -> Dict:
        """Dashboard verifies the fixes."""
        # Re-run tests for each fix
        all_passed = True
        verification_results = []

        for fix in fixes:
            if fix.get("status") == "error":
                all_passed = False
                verification_results.append(
                    {
                        "issue_id": fix.get("issue_id"),
                        "passed": False,
                        "reason": fix.get("error"),
                    }
                )
            else:
                # Run a targeted test
                result = await self.dashboard.verify_fix(fix)
                verification_results.append(result)
                if not result.get("passed", False):
                    all_passed = False

        return {
            "all_passed": all_passed,
            "results": verification_results,
        }

    async def _record_learning(self, iteration: IterationResult):
        """Store successful patterns for future reference."""
        if not self.lac:
            return

        try:
            self.lac.record_insight(
                category="success_pattern",
                context=f"DevLoop iteration {iteration.iteration}",
                insight=f"Successfully completed: {iteration.plan.get('objective', 'unknown')}",
                confidence=0.8,
                tags=["devloop", "success"],
            )
        except Exception as e:
            logger.warning(f"Failed to record learning: {e}")

    # ========================================================================
    # Single-step operations (for manual control)
    # ========================================================================
async def step_plan(self, objective: str, context: Optional[Dict] = None) -> Dict:
|
||||||
|
"""Execute only the planning phase."""
|
||||||
|
self._update_state(LoopPhase.PLANNING, objective)
|
||||||
|
plan = await self._planning_phase(objective, context)
|
||||||
|
self._update_state(LoopPhase.IDLE)
|
||||||
|
return plan
|
||||||
|
|
||||||
|
async def step_implement(self, plan: Dict) -> Dict:
|
||||||
|
"""Execute only the implementation phase."""
|
||||||
|
self._update_state(LoopPhase.IMPLEMENTING)
|
||||||
|
result = await self._implementation_phase(plan)
|
||||||
|
self._update_state(LoopPhase.IDLE)
|
||||||
|
return result
|
||||||
|
|
||||||
|
async def step_test(self, scenarios: List[Dict]) -> Dict:
|
||||||
|
"""Execute only the testing phase."""
|
||||||
|
self._update_state(LoopPhase.TESTING)
|
||||||
|
result = await self._testing_phase({"test_scenarios": scenarios})
|
||||||
|
self._update_state(LoopPhase.IDLE)
|
||||||
|
return result
|
||||||
|
|
||||||
|
async def step_analyze(self, test_results: Dict) -> Dict:
|
||||||
|
"""Execute only the analysis phase."""
|
||||||
|
self._update_state(LoopPhase.ANALYZING)
|
||||||
|
result = await self._analysis_phase(test_results)
|
||||||
|
self._update_state(LoopPhase.IDLE)
|
||||||
|
return result
|
||||||
|
|
||||||
|
def get_state(self) -> Dict:
|
||||||
|
"""Get current state as dict."""
|
||||||
|
return {
|
||||||
|
"phase": self.state.phase.value,
|
||||||
|
"iteration": self.state.iteration,
|
||||||
|
"current_task": self.state.current_task,
|
||||||
|
"test_results": self.state.test_results,
|
||||||
|
"last_update": self.state.last_update,
|
||||||
|
}
|
||||||
|
|
||||||
|
def export_history(self, filepath: Optional[Path] = None) -> Dict:
|
||||||
|
"""Export cycle history for analysis."""
|
||||||
|
history = {
|
||||||
|
"exported_at": datetime.now().isoformat(),
|
||||||
|
"total_cycles": len(self.cycle_history),
|
||||||
|
"cycles": [
|
||||||
|
{
|
||||||
|
"objective": c.objective,
|
||||||
|
"status": c.status,
|
||||||
|
"iterations": len(c.iterations),
|
||||||
|
"duration_seconds": c.total_duration_seconds,
|
||||||
|
}
|
||||||
|
for c in self.cycle_history
|
||||||
|
],
|
||||||
|
}
|
||||||
|
|
||||||
|
if filepath:
|
||||||
|
with open(filepath, "w") as f:
|
||||||
|
json.dump(history, f, indent=2)
|
||||||
|
|
||||||
|
return history
|
||||||
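Each `step_*` helper above follows the same bracket pattern: enter the phase, run it, return to `IDLE`. A minimal standalone sketch of that pattern (the `MiniLoop` class and its bodies are invented for illustration, not part of the orchestrator module):

```python
import asyncio
from enum import Enum


class LoopPhase(Enum):
    IDLE = "idle"
    PLANNING = "planning"


class MiniLoop:
    """Toy stand-in for the orchestrator's phase bracketing (illustrative only)."""

    def __init__(self):
        self.phase = LoopPhase.IDLE

    def _update_state(self, phase: LoopPhase):
        self.phase = phase

    async def step_plan(self, objective: str) -> dict:
        # Enter phase, do the work, always return to IDLE
        self._update_state(LoopPhase.PLANNING)
        plan = {"objective": objective, "tasks": []}  # real code calls _planning_phase
        self._update_state(LoopPhase.IDLE)
        return plan


mini = MiniLoop()
plan = asyncio.run(mini.step_plan("add spec validation"))
print(plan["objective"], mini.phase.value)  # add spec validation idle
```

Because every helper restores `IDLE` before returning, `get_state()` observed between steps always reflects a quiescent loop.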
optimization_engine/devloop/planning.py (new file, +451 lines)
@@ -0,0 +1,451 @@
"""
Gemini Planner - Strategic planning and test design using Gemini Pro.

Handles:
- Implementation planning from objectives
- Test scenario generation
- Architecture decisions
- Risk assessment
"""

import asyncio
import json
import logging
import os
from dataclasses import dataclass
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List, Optional

logger = logging.getLogger(__name__)


@dataclass
class PlanTask:
    """A single task in the implementation plan."""

    id: str
    description: str
    file: Optional[str] = None
    code_hint: Optional[str] = None
    priority: str = "medium"
    dependencies: List[str] = None

    def __post_init__(self):
        if self.dependencies is None:
            self.dependencies = []


@dataclass
class TestScenario:
    """A test scenario for dashboard verification."""

    id: str
    name: str
    type: str  # "api", "browser", "cli", "filesystem"
    steps: List[Dict] = None
    expected_outcome: Dict = None

    def __post_init__(self):
        if self.steps is None:
            self.steps = []
        if self.expected_outcome is None:
            self.expected_outcome = {"status": "pass"}


class GeminiPlanner:
    """
    Strategic planner using Gemini Pro.

    Generates:
    - Implementation tasks for Claude Code
    - Test scenarios for dashboard verification
    - Architecture decisions
    - Risk assessments
    """

    def __init__(self, config: Optional[Dict] = None):
        """
        Initialize the planner.

        Args:
            config: Configuration with API key and model settings
        """
        self.config = config or {}
        self._client = None
        self._model = None

    @property
    def client(self):
        """Lazy-load Gemini client."""
        if self._client is None:
            try:
                import google.generativeai as genai

                api_key = self.config.get("api_key") or os.environ.get("GEMINI_API_KEY")
                if not api_key:
                    raise ValueError("GEMINI_API_KEY not set")

                genai.configure(api_key=api_key)
                self._client = genai

                model_name = self.config.get("model", "gemini-2.0-flash-thinking-exp-01-21")
                self._model = genai.GenerativeModel(model_name)

                logger.info(f"Gemini client initialized with model: {model_name}")

            except ImportError:
                logger.warning("google-generativeai not installed, using mock planner")
                self._client = "mock"

        return self._client

    async def create_plan(self, request: Dict) -> Dict:
        """
        Create an implementation plan from an objective.

        Args:
            request: Dict with:
                - objective: What to achieve
                - context: Additional context (study spec, etc.)
                - previous_results: Results from last iteration
                - historical_learnings: Relevant LAC insights

        Returns:
            Plan dict with tasks, test_scenarios, risks
        """
        objective = request.get("objective", "")
        context = request.get("context", {})
        previous_results = request.get("previous_results")
        learnings = request.get("historical_learnings", [])

        # Build planning prompt
        prompt = self._build_planning_prompt(objective, context, previous_results, learnings)

        # Get response from Gemini
        if self.client == "mock":
            plan = self._mock_plan(objective, context)
        else:
            plan = await self._query_gemini(prompt)

        return plan

    def _build_planning_prompt(
        self,
        objective: str,
        context: Dict,
        previous_results: Optional[Dict],
        learnings: List[Dict],
    ) -> str:
        """Build the planning prompt for Gemini."""

        prompt = f"""## Atomizer Development Planning Session

### Objective
{objective}

### Context
{json.dumps(context, indent=2) if context else "No additional context provided."}

### Previous Iteration Results
{json.dumps(previous_results, indent=2) if previous_results else "First iteration - no previous results."}

### Historical Learnings (from LAC)
{self._format_learnings(learnings)}

### Required Outputs

Generate a detailed implementation plan in JSON format with the following structure:

```json
{{
  "objective": "{objective}",
  "approach": "Brief description of the approach",
  "tasks": [
    {{
      "id": "task_001",
      "description": "What to do",
      "file": "path/to/file.py",
      "code_hint": "Pseudo-code or pattern to use",
      "priority": "high|medium|low",
      "dependencies": ["task_000"]
    }}
  ],
  "test_scenarios": [
    {{
      "id": "test_001",
      "name": "Test name",
      "type": "api|browser|cli|filesystem",
      "steps": [
        {{"action": "navigate", "target": "/canvas"}}
      ],
      "expected_outcome": {{"status": "pass", "assertions": []}}
    }}
  ],
  "risks": [
    {{
      "description": "What could go wrong",
      "mitigation": "How to handle it",
      "severity": "high|medium|low"
    }}
  ],
  "acceptance_criteria": [
    "Criteria 1",
    "Criteria 2"
  ]
}}
```

### Guidelines

1. **Tasks should be specific and actionable** - Each task should be completable by Claude Code
2. **Test scenarios must be verifiable** - Use dashboard endpoints and browser actions
3. **Consider Atomizer architecture** - Use existing extractors (SYS_12), follow AtomizerSpec v2.0
4. **Apply historical learnings** - Avoid known failure patterns

### Important Atomizer Patterns

- Studies use `atomizer_spec.json` (AtomizerSpec v2.0)
- Design variables have bounds: {{"min": X, "max": Y}}
- Objectives use extractors: E1 (displacement), E3 (stress), E4 (mass)
- Constraints define limits with operators: <, >, <=, >=

Output ONLY the JSON plan, no additional text.
"""
        return prompt

    def _format_learnings(self, learnings: List[Dict]) -> str:
        """Format LAC learnings for the prompt."""
        if not learnings:
            return "No relevant historical learnings."

        formatted = []
        for learning in learnings[:5]:  # Limit to 5 most relevant
            formatted.append(
                f"- [{learning.get('category', 'insight')}] {learning.get('insight', '')}"
            )

        return "\n".join(formatted)

    async def _query_gemini(self, prompt: str) -> Dict:
        """Query Gemini and parse response."""
        try:
            # Run in executor to not block
            loop = asyncio.get_event_loop()
            response = await loop.run_in_executor(
                None, lambda: self._model.generate_content(prompt)
            )

            # Extract JSON from response
            text = response.text

            # Try to parse JSON
            try:
                # Find JSON block
                if "```json" in text:
                    start = text.find("```json") + 7
                    end = text.find("```", start)
                    json_str = text[start:end].strip()
                elif "```" in text:
                    start = text.find("```") + 3
                    end = text.find("```", start)
                    json_str = text[start:end].strip()
                else:
                    json_str = text.strip()

                plan = json.loads(json_str)
                logger.info(f"Gemini plan parsed: {len(plan.get('tasks', []))} tasks")
                return plan

            except json.JSONDecodeError as e:
                logger.error(f"Failed to parse Gemini response: {e}")
                return {
                    "objective": "Parse error",
                    "error": str(e),
                    "raw_response": text[:500],
                    "tasks": [],
                    "test_scenarios": [],
                }

        except Exception as e:
            logger.error(f"Gemini query failed: {e}")
            return {
                "objective": "Query error",
                "error": str(e),
                "tasks": [],
                "test_scenarios": [],
            }

    def _mock_plan(self, objective: str, context: Dict) -> Dict:
        """Generate a mock plan for testing without Gemini API."""
        logger.info("Using mock planner (Gemini not available)")

        # Detect objective type
        is_study_creation = any(
            kw in objective.lower() for kw in ["create", "study", "new", "setup"]
        )

        tasks = []
        test_scenarios = []

        if is_study_creation:
            study_name = context.get("study_name", "support_arm")

            tasks = [
                {
                    "id": "task_001",
                    "description": f"Create study directory structure for {study_name}",
                    "file": f"studies/_Other/{study_name}/",
                    "priority": "high",
                    "dependencies": [],
                },
                {
                    "id": "task_002",
                    "description": "Copy NX model files to study directory",
                    "file": f"studies/_Other/{study_name}/1_setup/model/",
                    "priority": "high",
                    "dependencies": ["task_001"],
                },
                {
                    "id": "task_003",
                    "description": "Create AtomizerSpec v2.0 configuration",
                    "file": f"studies/_Other/{study_name}/atomizer_spec.json",
                    "priority": "high",
                    "dependencies": ["task_002"],
                },
                {
                    "id": "task_004",
                    "description": "Create run_optimization.py script",
                    "file": f"studies/_Other/{study_name}/run_optimization.py",
                    "priority": "high",
                    "dependencies": ["task_003"],
                },
                {
                    "id": "task_005",
                    "description": "Create README.md documentation",
                    "file": f"studies/_Other/{study_name}/README.md",
                    "priority": "medium",
                    "dependencies": ["task_003"],
                },
            ]

            test_scenarios = [
                {
                    "id": "test_001",
                    "name": "Study directory exists",
                    "type": "filesystem",
                    "steps": [{"action": "check_exists", "path": f"studies/_Other/{study_name}"}],
                    "expected_outcome": {"exists": True},
                },
                {
                    "id": "test_002",
                    "name": "AtomizerSpec is valid",
                    "type": "api",
                    "steps": [
                        {"action": "get", "endpoint": f"/api/studies/{study_name}/spec/validate"}
                    ],
                    "expected_outcome": {"valid": True},
                },
                {
                    "id": "test_003",
                    "name": "Dashboard loads study",
                    "type": "browser",
                    "steps": [
                        {"action": "navigate", "url": f"/canvas/{study_name}"},
                        {"action": "wait_for", "selector": "[data-testid='canvas-container']"},
                    ],
                    "expected_outcome": {"loaded": True},
                },
            ]

        return {
            "objective": objective,
            "approach": "Mock plan for development testing",
            "tasks": tasks,
            "test_scenarios": test_scenarios,
            "risks": [
                {
                    "description": "NX model files may have dependencies",
                    "mitigation": "Copy all related files (_i.prt, .fem, .sim)",
                    "severity": "high",
                }
            ],
            "acceptance_criteria": [
                "Study directory structure created",
                "AtomizerSpec validates without errors",
                "Dashboard loads study canvas",
            ],
        }

    async def analyze_codebase(self, query: str) -> Dict:
        """
        Use Gemini to analyze codebase state.

        Args:
            query: What to analyze (e.g., "current dashboard components")

        Returns:
            Analysis results
        """
        # This would integrate with codebase scanning
        # For now, return a stub
        return {
            "query": query,
            "analysis": "Codebase analysis not yet implemented",
            "recommendations": [],
        }

    async def generate_test_scenarios(
        self,
        feature: str,
        context: Optional[Dict] = None,
    ) -> List[Dict]:
        """
        Generate test scenarios for a specific feature.

        Args:
            feature: Feature to test (e.g., "study creation", "spec validation")
            context: Additional context

        Returns:
            List of test scenarios
        """
        prompt = f"""Generate test scenarios for the Atomizer feature: {feature}

Context: {json.dumps(context, indent=2) if context else "None"}

Output as JSON array of test scenarios:
```json
[
  {{
    "id": "test_001",
    "name": "Test name",
    "type": "api|browser|cli|filesystem",
    "steps": [...],
    "expected_outcome": {{...}}
  }}
]
```
"""

        if self.client == "mock":
            return self._mock_plan(feature, context or {}).get("test_scenarios", [])

        # Query Gemini
        try:
            loop = asyncio.get_event_loop()
            response = await loop.run_in_executor(
                None, lambda: self._model.generate_content(prompt)
            )

            text = response.text
            if "```json" in text:
                start = text.find("```json") + 7
                end = text.find("```", start)
                json_str = text[start:end].strip()
                return json.loads(json_str)

        except Exception as e:
            logger.error(f"Failed to generate test scenarios: {e}")

        return []
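Both `_query_gemini` and `generate_test_scenarios` use the same fence-stripping idiom to recover a JSON payload from an LLM reply. Extracted as a standalone helper it behaves as below; `extract_json_block` is an illustrative name, not a function in the module (the triple-backtick marker is built at runtime only to keep it out of this listing):

```python
import json

FENCE = "`" * 3  # the literal triple-backtick marker used by Markdown code fences


def extract_json_block(text: str):
    """Pull a JSON payload out of an LLM reply, fenced (```json or ```) or bare."""
    marker = FENCE + "json"
    if marker in text:
        start = text.find(marker) + len(marker)  # same as the module's `+ 7`
        end = text.find(FENCE, start)
        json_str = text[start:end].strip()
    elif FENCE in text:
        start = text.find(FENCE) + len(FENCE)
        end = text.find(FENCE, start)
        json_str = text[start:end].strip()
    else:
        json_str = text.strip()
    return json.loads(json_str)


reply = "Here is the plan:\n" + FENCE + "json\n" + '{"tasks": [{"id": "task_001"}]}' + "\n" + FENCE
plan = extract_json_block(reply)
print(plan["tasks"][0]["id"])  # task_001
```

Note that `json.JSONDecodeError` still propagates for malformed payloads; in the module that case is caught and turned into a "Parse error" plan.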
optimization_engine/devloop/test_runner.py (new file, +585 lines)
@@ -0,0 +1,585 @@
"""
Dashboard Test Runner - Automated testing through the Atomizer dashboard.

Supports test types:
- API tests (REST endpoint verification)
- Browser tests (UI interaction via Playwright)
- CLI tests (command line execution)
- Filesystem tests (file/directory verification)
"""

import asyncio
import json
import logging
import subprocess
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List, Optional

import aiohttp

logger = logging.getLogger(__name__)


@dataclass
class TestStep:
    """A single step in a test scenario."""

    action: str
    target: Optional[str] = None
    data: Optional[Dict] = None
    timeout_ms: int = 5000


@dataclass
class TestScenario:
    """A complete test scenario."""

    id: str
    name: str
    type: str  # "api", "browser", "cli", "filesystem"
    steps: List[Dict] = field(default_factory=list)
    expected_outcome: Dict = field(default_factory=lambda: {"status": "pass"})
    timeout_ms: int = 30000


@dataclass
class TestResult:
    """Result of a single test."""

    scenario_id: str
    scenario_name: str
    passed: bool
    duration_ms: float
    error: Optional[str] = None
    details: Optional[Dict] = None


@dataclass
class TestReport:
    """Complete test report."""

    timestamp: str = field(default_factory=lambda: datetime.now().isoformat())
    scenarios: List[TestResult] = field(default_factory=list)
    summary: Dict = field(default_factory=lambda: {"passed": 0, "failed": 0, "total": 0})


class DashboardTestRunner:
    """
    Automated test runner for Atomizer dashboard.

    Executes test scenarios against:
    - Backend API endpoints
    - Frontend UI (via Playwright if available)
    - CLI commands
    - Filesystem assertions
    """

    def __init__(self, config: Optional[Dict] = None):
        """
        Initialize the test runner.

        Args:
            config: Configuration with dashboard URLs and timeouts
        """
        self.config = config or {}
        self.base_url = self.config.get("dashboard_url", "http://localhost:8000")
        self.ws_url = self.config.get("websocket_url", "ws://localhost:8000")
        self.timeout_ms = self.config.get("test_timeout_ms", 30000)
        self.studies_dir = Path(self.config.get("studies_dir", "C:/Users/antoi/Atomizer/studies"))

        self._session: Optional[aiohttp.ClientSession] = None
        self._ws: Optional[aiohttp.ClientWebSocketResponse] = None
        self._playwright = None
        self._browser = None

    async def connect(self):
        """Initialize connections."""
        if self._session is None:
            self._session = aiohttp.ClientSession(
                timeout=aiohttp.ClientTimeout(total=self.timeout_ms / 1000)
            )

    async def disconnect(self):
        """Clean up connections."""
        if self._ws:
            await self._ws.close()
            self._ws = None
        if self._session:
            await self._session.close()
            self._session = None
        if self._browser:
            await self._browser.close()
            self._browser = None

    async def run_test_suite(self, scenarios: List[Dict]) -> Dict:
        """
        Run a complete test suite.

        Args:
            scenarios: List of test scenario dicts

        Returns:
            Test report as dict
        """
        await self.connect()

        report = TestReport()

        for scenario_dict in scenarios:
            scenario = self._parse_scenario(scenario_dict)
            start_time = datetime.now()

            try:
                result = await self._execute_scenario(scenario)
                result.duration_ms = (datetime.now() - start_time).total_seconds() * 1000
                report.scenarios.append(result)

                if result.passed:
                    report.summary["passed"] += 1
                else:
                    report.summary["failed"] += 1

            except Exception as e:
                logger.error(f"Scenario {scenario.id} failed with error: {e}")
                report.scenarios.append(
                    TestResult(
                        scenario_id=scenario.id,
                        scenario_name=scenario.name,
                        passed=False,
                        duration_ms=(datetime.now() - start_time).total_seconds() * 1000,
                        error=str(e),
                    )
                )
                report.summary["failed"] += 1

            report.summary["total"] += 1

        return {
            "timestamp": report.timestamp,
            "scenarios": [self._result_to_dict(r) for r in report.scenarios],
            "summary": report.summary,
        }

    def _parse_scenario(self, scenario_dict: Dict) -> TestScenario:
        """Parse a scenario dict into TestScenario."""
        return TestScenario(
            id=scenario_dict.get("id", "unknown"),
            name=scenario_dict.get("name", "Unnamed test"),
            type=scenario_dict.get("type", "api"),
            steps=scenario_dict.get("steps", []),
            expected_outcome=scenario_dict.get("expected_outcome", {"status": "pass"}),
            timeout_ms=scenario_dict.get("timeout_ms", self.timeout_ms),
        )

    def _result_to_dict(self, result: TestResult) -> Dict:
        """Convert TestResult to dict."""
        return {
            "scenario_id": result.scenario_id,
            "scenario_name": result.scenario_name,
            "passed": result.passed,
            "duration_ms": result.duration_ms,
            "error": result.error,
            "details": result.details,
        }

    async def _execute_scenario(self, scenario: TestScenario) -> TestResult:
        """Execute a single test scenario."""
        logger.info(f"Executing test: {scenario.name} ({scenario.type})")

        if scenario.type == "api":
            return await self._execute_api_scenario(scenario)
        elif scenario.type == "browser":
            return await self._execute_browser_scenario(scenario)
        elif scenario.type == "cli":
            return await self._execute_cli_scenario(scenario)
        elif scenario.type == "filesystem":
            return await self._execute_filesystem_scenario(scenario)
        else:
            return TestResult(
                scenario_id=scenario.id,
                scenario_name=scenario.name,
                passed=False,
                duration_ms=0,
                error=f"Unknown test type: {scenario.type}",
            )

    async def _execute_api_scenario(self, scenario: TestScenario) -> TestResult:
        """Execute an API test scenario."""
        details = {}

        for step in scenario.steps:
            action = step.get("action", "get").lower()
            endpoint = step.get("endpoint", step.get("target", "/"))
            data = step.get("data")

            url = f"{self.base_url}{endpoint}"

            try:
                if action == "get":
                    async with self._session.get(url) as resp:
                        details["status_code"] = resp.status
                        details["response"] = await resp.json()

                elif action == "post":
                    async with self._session.post(url, json=data) as resp:
                        details["status_code"] = resp.status
                        details["response"] = await resp.json()

                elif action == "put":
                    async with self._session.put(url, json=data) as resp:
                        details["status_code"] = resp.status
                        details["response"] = await resp.json()

                elif action == "delete":
                    async with self._session.delete(url) as resp:
                        details["status_code"] = resp.status
                        details["response"] = await resp.json()

            except aiohttp.ClientError as e:
                return TestResult(
                    scenario_id=scenario.id,
                    scenario_name=scenario.name,
                    passed=False,
                    duration_ms=0,
                    error=f"API request failed: {e}",
                    details={"url": url, "action": action},
                )
            except json.JSONDecodeError:
                details["response"] = "Non-JSON response"

        # Check expected outcome
        passed = self._check_outcome(details, scenario.expected_outcome)

        return TestResult(
            scenario_id=scenario.id,
            scenario_name=scenario.name,
            passed=passed,
            duration_ms=0,
            details=details,
        )

    async def _execute_browser_scenario(self, scenario: TestScenario) -> TestResult:
        """Execute a browser test scenario using Playwright."""
        try:
            from playwright.async_api import async_playwright
        except ImportError:
            logger.warning("Playwright not available, skipping browser test")
            return TestResult(
                scenario_id=scenario.id,
                scenario_name=scenario.name,
                passed=True,  # Skip, don't fail
                duration_ms=0,
                error="Playwright not installed - test skipped",
            )

        details = {}

        try:
            async with async_playwright() as p:
                browser = await p.chromium.launch(headless=True)
                page = await browser.new_page()

                for step in scenario.steps:
                    action = step.get("action", "navigate")

                    if action == "navigate":
                        url = step.get("url", "/")
                        # Use frontend URL (port 3003 for Vite dev server)
                        full_url = f"http://localhost:3003{url}" if url.startswith("/") else url
                        await page.goto(full_url, timeout=scenario.timeout_ms)
                        details["navigated_to"] = full_url

                    elif action == "wait_for":
                        selector = step.get("selector")
                        if selector:
                            await page.wait_for_selector(selector, timeout=scenario.timeout_ms)
                            details["found_selector"] = selector

                    elif action == "click":
                        selector = step.get("selector")
                        if selector:
                            await page.click(selector)
                            details["clicked"] = selector

                    elif action == "fill":
                        selector = step.get("selector")
                        value = step.get("value", "")
                        if selector:
                            await page.fill(selector, value)
                            details["filled"] = {selector: value}

                    elif action == "screenshot":
                        path = step.get("path", f"test_{scenario.id}.png")
                        await page.screenshot(path=path)
                        details["screenshot"] = path

                await browser.close()

            passed = True

        except Exception as e:
            return TestResult(
                scenario_id=scenario.id,
                scenario_name=scenario.name,
                passed=False,
                duration_ms=0,
                error=f"Browser test failed: {e}",
                details=details,
            )

        return TestResult(
            scenario_id=scenario.id,
            scenario_name=scenario.name,
            passed=passed,
            duration_ms=0,
            details=details,
        )

    async def _execute_cli_scenario(self, scenario: TestScenario) -> TestResult:
        """Execute a CLI test scenario."""
        details = {}

        for step in scenario.steps:
            command = step.get("command", step.get("target", ""))
            cwd = step.get("cwd", str(self.studies_dir))

            if not command:
                continue

            try:
                # Use PowerShell on Windows
                result = subprocess.run(
                    ["powershell", "-Command", command],
                    capture_output=True,
                    text=True,
                    cwd=cwd,
                    timeout=scenario.timeout_ms / 1000,
                )

                details["command"] = command
                details["returncode"] = result.returncode
                details["stdout"] = result.stdout[:1000] if result.stdout else ""
                details["stderr"] = result.stderr[:1000] if result.stderr else ""

                if result.returncode != 0:
                    return TestResult(
                        scenario_id=scenario.id,
                        scenario_name=scenario.name,
                        passed=False,
                        duration_ms=0,
                        error=f"Command failed with code {result.returncode}",
                        details=details,
                    )

            except subprocess.TimeoutExpired:
                return TestResult(
                    scenario_id=scenario.id,
                    scenario_name=scenario.name,
                    passed=False,
                    duration_ms=0,
                    error=f"Command timed out after {scenario.timeout_ms}ms",
                    details={"command": command},
                )
            except Exception as e:
                return TestResult(
                    scenario_id=scenario.id,
                    scenario_name=scenario.name,
                    passed=False,
                    duration_ms=0,
                    error=f"CLI execution failed: {e}",
                    details={"command": command},
                )

        passed = self._check_outcome(details, scenario.expected_outcome)

        return TestResult(
            scenario_id=scenario.id,
            scenario_name=scenario.name,
            passed=passed,
            duration_ms=0,
            details=details,
        )

    async def _execute_filesystem_scenario(self, scenario: TestScenario) -> TestResult:
        """Execute a filesystem test scenario."""
        details = {}

        for step in scenario.steps:
            action = step.get("action", "check_exists")
            path_str = step.get("path", "")

            # Resolve relative paths
            if not Path(path_str).is_absolute():
                path = self.studies_dir.parent / path_str
            else:
                path = Path(path_str)

            if action == "check_exists":
                exists = path.exists()
                details["path"] = str(path)
                details["exists"] = exists

                if scenario.expected_outcome.get("exists", True) != exists:
                    return TestResult(
                        scenario_id=scenario.id,
                        scenario_name=scenario.name,
                        passed=False,
                        duration_ms=0,
                        error=f"Path {'does not exist' if not exists else 'exists but should not'}: {path}",
                        details=details,
                    )

            elif action == "check_file_contains":
                content_check = step.get("contains", "")
                if path.exists() and path.is_file():
                    content = path.read_text()
|
||||||
|
contains = content_check in content
|
||||||
|
details["contains"] = contains
|
||||||
|
details["search_term"] = content_check
|
||||||
|
|
||||||
|
if not contains:
|
||||||
|
return TestResult(
|
||||||
|
scenario_id=scenario.id,
|
||||||
|
scenario_name=scenario.name,
|
||||||
|
passed=False,
|
||||||
|
duration_ms=0,
|
||||||
|
error=f"File does not contain: {content_check}",
|
||||||
|
details=details,
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
return TestResult(
|
||||||
|
scenario_id=scenario.id,
|
||||||
|
scenario_name=scenario.name,
|
||||||
|
passed=False,
|
||||||
|
duration_ms=0,
|
||||||
|
error=f"File not found: {path}",
|
||||||
|
details=details,
|
||||||
|
)
|
||||||
|
|
||||||
|
elif action == "check_json_valid":
|
||||||
|
if path.exists() and path.is_file():
|
||||||
|
try:
|
||||||
|
with open(path) as f:
|
||||||
|
json.load(f)
|
||||||
|
details["valid_json"] = True
|
||||||
|
except json.JSONDecodeError as e:
|
||||||
|
return TestResult(
|
||||||
|
scenario_id=scenario.id,
|
||||||
|
scenario_name=scenario.name,
|
||||||
|
passed=False,
|
||||||
|
duration_ms=0,
|
||||||
|
error=f"Invalid JSON: {e}",
|
||||||
|
details={"path": str(path)},
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
return TestResult(
|
||||||
|
scenario_id=scenario.id,
|
||||||
|
scenario_name=scenario.name,
|
||||||
|
passed=False,
|
||||||
|
duration_ms=0,
|
||||||
|
error=f"File not found: {path}",
|
||||||
|
details=details,
|
||||||
|
)
|
||||||
|
|
||||||
|
return TestResult(
|
||||||
|
scenario_id=scenario.id,
|
||||||
|
scenario_name=scenario.name,
|
||||||
|
passed=True,
|
||||||
|
duration_ms=0,
|
||||||
|
details=details,
|
||||||
|
)
|
||||||
|
|
||||||
|
def _check_outcome(self, details: Dict, expected: Dict) -> bool:
|
||||||
|
"""Check if test details match expected outcome."""
|
||||||
|
for key, expected_value in expected.items():
|
||||||
|
if key not in details:
|
||||||
|
continue
|
||||||
|
|
||||||
|
actual_value = details[key]
|
||||||
|
|
||||||
|
# Handle nested dicts
|
||||||
|
if isinstance(expected_value, dict) and isinstance(actual_value, dict):
|
||||||
|
if not self._check_outcome(actual_value, expected_value):
|
||||||
|
return False
|
||||||
|
# Handle lists
|
||||||
|
elif isinstance(expected_value, list) and isinstance(actual_value, list):
|
||||||
|
if expected_value != actual_value:
|
||||||
|
return False
|
||||||
|
# Handle simple values
|
||||||
|
elif actual_value != expected_value:
|
||||||
|
return False
|
||||||
|
|
||||||
|
return True
|
||||||
|
|
||||||
|
async def verify_fix(self, fix: Dict) -> Dict:
|
||||||
|
"""
|
||||||
|
Verify that a specific fix was successful.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
fix: Fix dict with issue_id and files_modified
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Verification result
|
||||||
|
"""
|
||||||
|
issue_id = fix.get("issue_id", "unknown")
|
||||||
|
files_modified = fix.get("files_modified", [])
|
||||||
|
|
||||||
|
# Run quick verification
|
||||||
|
passed = True
|
||||||
|
details = {}
|
||||||
|
|
||||||
|
# Check that modified files exist
|
||||||
|
for file_path in files_modified:
|
||||||
|
path = Path(file_path)
|
||||||
|
if not path.exists():
|
||||||
|
passed = False
|
||||||
|
details["missing_file"] = str(path)
|
||||||
|
break
|
||||||
|
|
||||||
|
# Could add more sophisticated verification here
|
||||||
|
|
||||||
|
return {
|
||||||
|
"issue_id": issue_id,
|
||||||
|
"passed": passed,
|
||||||
|
"details": details,
|
||||||
|
}
|
||||||
|
|
||||||
|
async def run_health_check(self) -> Dict:
|
||||||
|
"""
|
||||||
|
Run a quick health check on dashboard components.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Health status dict
|
||||||
|
"""
|
||||||
|
await self.connect()
|
||||||
|
|
||||||
|
health = {
|
||||||
|
"timestamp": datetime.now().isoformat(),
|
||||||
|
"api": "unknown",
|
||||||
|
"frontend": "unknown",
|
||||||
|
"websocket": "unknown",
|
||||||
|
}
|
||||||
|
|
||||||
|
# Check API
|
||||||
|
try:
|
||||||
|
async with self._session.get(f"{self.base_url}/health") as resp:
|
||||||
|
if resp.status == 200:
|
||||||
|
health["api"] = "healthy"
|
||||||
|
else:
|
||||||
|
health["api"] = f"unhealthy (status {resp.status})"
|
||||||
|
except Exception as e:
|
||||||
|
health["api"] = f"error: {e}"
|
||||||
|
|
||||||
|
# Check frontend (if available)
|
||||||
|
try:
|
||||||
|
async with self._session.get("http://localhost:3000") as resp:
|
||||||
|
if resp.status == 200:
|
||||||
|
health["frontend"] = "healthy"
|
||||||
|
else:
|
||||||
|
health["frontend"] = f"unhealthy (status {resp.status})"
|
||||||
|
except Exception as e:
|
||||||
|
health["frontend"] = f"error: {e}"
|
||||||
|
|
||||||
|
return health
|
||||||
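The lenient matching used by `_check_outcome` above (missing keys are skipped, nested dicts recurse) can be exercised in isolation. A minimal sketch, assuming a free-function variant `check_outcome` (hypothetical name) with the same semantics as the method:

```python
def check_outcome(details: dict, expected: dict) -> bool:
    """Return True when every expected key that is present in details matches.

    Keys absent from details are skipped, mirroring the scenario runner's
    lenient matching; nested dicts are compared recursively.
    """
    for key, expected_value in expected.items():
        if key not in details:
            continue
        actual_value = details[key]
        if isinstance(expected_value, dict) and isinstance(actual_value, dict):
            if not check_outcome(actual_value, expected_value):
                return False
        elif actual_value != expected_value:
            return False
    return True


# A missing key is ignored; a nested mismatch fails.
print(check_outcome({"returncode": 0}, {"returncode": 0, "stdout": "ok"}))  # True
print(check_outcome({"a": {"b": 1}}, {"a": {"b": 2}}))  # False
```

Note the asymmetry this implies: an expected outcome can never fail on a detail the scenario simply did not record, which keeps scenarios forward-compatible as new detail keys are added.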
@@ -65,6 +65,16 @@ from optimization_engine.extractors.extract_zernike_figure import (
     extract_zernike_figure_rms,
 )
+
+# Displacement extraction
+from optimization_engine.extractors.extract_displacement import (
+    extract_displacement,
+)
+
+# Mass extraction from BDF
+from optimization_engine.extractors.extract_mass_from_bdf import (
+    extract_mass_from_bdf,
+)
 
 # Part mass and material extractor (from NX .prt files)
 from optimization_engine.extractors.extract_part_mass_material import (
     extract_part_mass_material,
@@ -145,72 +155,76 @@ from optimization_engine.extractors.spec_extractor_builder import (
 )
 
 __all__ = [
+    # Displacement extraction
+    "extract_displacement",
+    # Mass extraction (from BDF)
+    "extract_mass_from_bdf",
     # Part mass & material (from .prt)
-    'extract_part_mass_material',
-    'extract_part_mass',
-    'extract_part_material',
-    'PartMassExtractor',
+    "extract_part_mass_material",
+    "extract_part_mass",
+    "extract_part_material",
+    "PartMassExtractor",
     # Stress extractors
-    'extract_solid_stress',
-    'extract_principal_stress',
-    'extract_max_principal_stress',
-    'extract_min_principal_stress',
+    "extract_solid_stress",
+    "extract_principal_stress",
+    "extract_max_principal_stress",
+    "extract_min_principal_stress",
     # Strain energy
-    'extract_strain_energy',
-    'extract_total_strain_energy',
-    'extract_strain_energy_density',
+    "extract_strain_energy",
+    "extract_total_strain_energy",
+    "extract_strain_energy_density",
     # SPC forces / reactions
-    'extract_spc_forces',
-    'extract_total_reaction_force',
-    'extract_reaction_component',
-    'check_force_equilibrium',
+    "extract_spc_forces",
+    "extract_total_reaction_force",
+    "extract_reaction_component",
+    "check_force_equilibrium",
     # Zernike (telescope mirrors) - Standard Z-only method
-    'ZernikeExtractor',
-    'extract_zernike_from_op2',
-    'extract_zernike_filtered_rms',
-    'extract_zernike_relative_rms',
+    "ZernikeExtractor",
+    "extract_zernike_from_op2",
+    "extract_zernike_filtered_rms",
+    "extract_zernike_relative_rms",
     # Zernike OPD (RECOMMENDED - uses actual geometry, no shape assumption)
     # Supports annular apertures via inner_radius parameter
-    'ZernikeOPDExtractor',
-    'extract_zernike_opd',
-    'extract_zernike_opd_filtered_rms',
-    'compute_zernike_coefficients_annular',
+    "ZernikeOPDExtractor",
+    "extract_zernike_opd",
+    "extract_zernike_opd_filtered_rms",
+    "compute_zernike_coefficients_annular",
     # Zernike Analytic (parabola-based with lateral displacement correction)
-    'ZernikeAnalyticExtractor',
-    'extract_zernike_analytic',
-    'extract_zernike_analytic_filtered_rms',
-    'compare_zernike_methods',
+    "ZernikeAnalyticExtractor",
+    "extract_zernike_analytic",
+    "extract_zernike_analytic_filtered_rms",
+    "compare_zernike_methods",
     # Backwards compatibility (deprecated)
-    'ZernikeFigureExtractor',
-    'extract_zernike_figure',
-    'extract_zernike_figure_rms',
+    "ZernikeFigureExtractor",
+    "extract_zernike_figure",
+    "extract_zernike_figure_rms",
     # Temperature (Phase 3 - thermal)
-    'extract_temperature',
-    'extract_temperature_gradient',
-    'extract_heat_flux',
-    'get_max_temperature',
+    "extract_temperature",
+    "extract_temperature_gradient",
+    "extract_heat_flux",
+    "get_max_temperature",
     # Modal mass (Phase 3 - dynamics)
-    'extract_modal_mass',
-    'extract_frequencies',
-    'get_first_frequency',
-    'get_modal_mass_ratio',
+    "extract_modal_mass",
+    "extract_frequencies",
+    "get_first_frequency",
+    "get_modal_mass_ratio",
     # Part introspection (Phase 4)
-    'introspect_part',
-    'get_expressions_dict',
-    'get_expression_value',
-    'print_introspection_summary',
+    "introspect_part",
+    "get_expressions_dict",
+    "get_expression_value",
+    "print_introspection_summary",
     # Custom extractor loader (Phase 5)
-    'CustomExtractor',
-    'CustomExtractorLoader',
-    'CustomExtractorContext',
-    'ExtractorSecurityError',
-    'ExtractorValidationError',
-    'load_custom_extractors',
-    'execute_custom_extractor',
-    'validate_custom_extractor',
+    "CustomExtractor",
+    "CustomExtractorLoader",
+    "CustomExtractorContext",
+    "ExtractorSecurityError",
+    "ExtractorValidationError",
+    "load_custom_extractors",
+    "execute_custom_extractor",
+    "validate_custom_extractor",
     # Spec extractor builder
-    'SpecExtractorBuilder',
-    'build_extractors_from_spec',
-    'get_extractor_outputs',
-    'list_available_builtin_extractors',
+    "SpecExtractorBuilder",
+    "build_extractors_from_spec",
+    "get_extractor_outputs",
+    "list_available_builtin_extractors",
 ]
@@ -1,26 +1,30 @@
 """
-Extract mass from Nastran BDF/DAT file as fallback when OP2 doesn't have GRDPNT
+Extract mass from Nastran BDF/DAT file.
+
+This module provides a simple wrapper around the BDFMassExtractor class.
 """
 
 from pathlib import Path
 from typing import Dict, Any
-import re
+
+from optimization_engine.extractors.bdf_mass_extractor import BDFMassExtractor
 
 
 def extract_mass_from_bdf(bdf_file: Path) -> Dict[str, Any]:
     """
-    Extract mass from Nastran BDF file by parsing material and element definitions.
-
-    This is a fallback when OP2 doesn't have PARAM,GRDPNT output.
+    Extract mass from Nastran BDF file.
 
     Args:
         bdf_file: Path to .dat or .bdf file
 
     Returns:
         dict: {
-            'mass_kg': total mass in kg,
-            'mass_g': total mass in grams,
-            'method': 'bdf_calculation'
+            'total_mass': mass in kg (primary key),
+            'mass_kg': mass in kg,
+            'mass_g': mass in grams,
+            'cg': center of gravity [x, y, z],
+            'num_elements': number of elements,
+            'breakdown': mass by element type
         }
     """
     bdf_file = Path(bdf_file)
@@ -28,35 +32,23 @@ def extract_mass_from_bdf(bdf_file: Path) -> Dict[str, Any]:
     if not bdf_file.exists():
         raise FileNotFoundError(f"BDF file not found: {bdf_file}")
 
-    # Parse using pyNastran BDF reader
-    from pyNastran.bdf.bdf import read_bdf
-
-    model = read_bdf(str(bdf_file), validate=False, xref=True, punch=False,
-                     encoding='utf-8', log=None, debug=False, mode='msc')
-
-    # Calculate total mass by summing element masses
-    # model.mass_properties() returns (mass, cg, inertia)
-    mass_properties = model.mass_properties()
-    mass_ton = mass_properties[0]  # Mass in tons (ton-mm-sec)
-
-    # NX Nastran typically uses ton-mm-sec units
-    mass_kg = mass_ton * 1000.0  # Convert tons to kg
-    mass_g = mass_kg * 1000.0  # Convert kg to grams
-
-    return {
-        'mass_kg': mass_kg,
-        'mass_g': mass_g,
-        'mass_ton': mass_ton,
-        'method': 'bdf_calculation',
-        'units': 'ton-mm-sec (converted to kg/g)'
-    }
+    extractor = BDFMassExtractor(str(bdf_file))
+    result = extractor.extract_mass()
+
+    # Add 'total_mass' as primary key for compatibility
+    result["total_mass"] = result["mass_kg"]
+
+    return result
 
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     import sys
 
     if len(sys.argv) > 1:
         bdf_file = Path(sys.argv[1])
         result = extract_mass_from_bdf(bdf_file)
         print(f"Mass from BDF: {result['mass_kg']:.6f} kg ({result['mass_g']:.3f} g)")
+        print(f"CG: {result['cg']}")
+        print(f"Elements: {result['num_elements']}")
     else:
         print(f"Usage: python {sys.argv[0]} <bdf_file>")
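The compatibility shim added in this hunk (`result["total_mass"] = result["mass_kg"]`) can be shown standalone. A minimal sketch; `add_total_mass_alias` and the `raw` dict are illustrative stand-ins, not the real `BDFMassExtractor` API:

```python
def add_total_mass_alias(result: dict) -> dict:
    """Mirror 'mass_kg' under 'total_mass' so older callers keep working."""
    result["total_mass"] = result["mass_kg"]
    return result


# Stand-in for the kind of dict the extractor's extract_mass() might return.
raw = {"mass_kg": 0.125, "mass_g": 125.0, "cg": [0.0, 0.0, 5.0], "num_elements": 4200}
out = add_total_mass_alias(raw)
print(out["total_mass"])  # 0.125
```

Because the alias references the same value rather than recomputing it, the two keys cannot drift apart.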
@@ -1,74 +1,86 @@
 """
-Extract maximum von Mises stress from structural analysis
-Auto-generated by Atomizer Phase 3 - pyNastran Research Agent
-
-Pattern: solid_stress
-Element Type: CTETRA
-Result Type: stress
-API: model.ctetra_stress[subcase] or model.chexa_stress[subcase]
+Extract maximum von Mises stress from structural analysis.
+
+Supports all solid element types (CTETRA, CHEXA, CPENTA, CPYRAM) and
+shell elements (CQUAD4, CTRIA3).
+
+Unit Note: NX Nastran in kg-mm-s outputs stress in kPa. This extractor
+converts to MPa (divide by 1000) for engineering use.
 """
 
 from pathlib import Path
-from typing import Dict, Any
+from typing import Dict, Any, Optional
 import numpy as np
 from pyNastran.op2.op2 import OP2
 
 
-def extract_solid_stress(op2_file: Path, subcase: int = 1, element_type: str = 'ctetra'):
-    """Extract stress from solid elements."""
-    from pyNastran.op2.op2 import OP2
-    import numpy as np
-
-    model = OP2()
+def extract_solid_stress(
+    op2_file: Path,
+    subcase: int = 1,
+    element_type: Optional[str] = None,
+    convert_to_mpa: bool = True,
+) -> Dict[str, Any]:
+    """
+    Extract maximum von Mises stress from solid elements.
+
+    Args:
+        op2_file: Path to OP2 results file
+        subcase: Subcase ID (default 1)
+        element_type: Specific element type to check ('ctetra', 'chexa', etc.)
+            If None, checks ALL solid element types and returns max.
+        convert_to_mpa: If True, divide by 1000 to convert kPa to MPa (default True)
+
+    Returns:
+        dict with 'max_von_mises' (in MPa if convert_to_mpa=True),
+        'max_stress_element', and 'element_type'
+    """
+    model = OP2(debug=False, log=None)
     model.read_op2(str(op2_file))
 
-    # Get stress object for element type
-    # Different element types have different stress attributes
-    stress_attr_map = {
-        'ctetra': 'ctetra_stress',
-        'chexa': 'chexa_stress',
-        'cquad4': 'cquad4_stress',
-        'ctria3': 'ctria3_stress'
-    }
-
-    stress_attr = stress_attr_map.get(element_type.lower())
-    if not stress_attr:
-        raise ValueError(f"Unknown element type: {element_type}")
-
-    # Access stress through op2_results container
-    # pyNastran structure: model.op2_results.stress.cquad4_stress[subcase]
-    stress_dict = None
-
-    if hasattr(model, 'op2_results') and hasattr(model.op2_results, 'stress'):
-        stress_container = model.op2_results.stress
-        if hasattr(stress_container, stress_attr):
-            stress_dict = getattr(stress_container, stress_attr)
-
-    if stress_dict is None:
-        raise ValueError(f"No {element_type} stress results in OP2. Available attributes: {[a for a in dir(model) if 'stress' in a.lower()]}")
-
-    # stress_dict is a dictionary with subcase IDs as keys
-    available_subcases = list(stress_dict.keys())
-    if not available_subcases:
-        raise ValueError(f"No stress data found in OP2 file")
-
-    # Use the specified subcase or first available
-    if subcase in available_subcases:
-        actual_subcase = subcase
-    else:
-        actual_subcase = available_subcases[0]
-
-    stress = stress_dict[actual_subcase]
-
-    itime = 0
-
-    # Extract von Mises if available
-    if stress.is_von_mises:  # Property, not method
-        # Different element types have von Mises at different column indices
-        # Shell elements (CQUAD4, CTRIA3): 8 columns, von Mises at column 7
-        # Solid elements (CTETRA, CHEXA): 10 columns, von Mises at column 9
+    # All solid element types to check
+    solid_element_types = ["ctetra", "chexa", "cpenta", "cpyram"]
+    shell_element_types = ["cquad4", "ctria3"]
+
+    # If specific element type requested, only check that one
+    if element_type:
+        element_types_to_check = [element_type.lower()]
+    else:
+        # Check all solid types by default
+        element_types_to_check = solid_element_types
+
+    if not hasattr(model, "op2_results") or not hasattr(model.op2_results, "stress"):
+        raise ValueError("No stress results in OP2 file")
+
+    stress_container = model.op2_results.stress
+
+    # Find max stress across all requested element types
+    max_stress = 0.0
+    max_stress_elem = 0
+    max_stress_type = None
+
+    for elem_type in element_types_to_check:
+        stress_attr = f"{elem_type}_stress"
+
+        if not hasattr(stress_container, stress_attr):
+            continue
+
+        stress_dict = getattr(stress_container, stress_attr)
+        if not stress_dict:
+            continue
+
+        # Get subcase
+        available_subcases = list(stress_dict.keys())
+        if not available_subcases:
+            continue
+
+        actual_subcase = subcase if subcase in available_subcases else available_subcases[0]
+        stress = stress_dict[actual_subcase]
+
+        if not stress.is_von_mises:
+            continue
+
+        # Determine von Mises column
         ncols = stress.data.shape[2]
 
         if ncols == 8:
             # Shell elements - von Mises is last column
             von_mises_col = 7
@@ -76,27 +88,37 @@ def extract_solid_stress(op2_file: Path, subcase: int = 1, element_type: str = '
             # Solid elements - von Mises is column 9
             von_mises_col = 9
         else:
-            # Unknown format, try last column
             von_mises_col = ncols - 1
 
+        itime = 0
         von_mises = stress.data[itime, :, von_mises_col]
-        max_stress = float(np.max(von_mises))
+        elem_max = float(np.max(von_mises))
 
-        # Get element info
-        element_ids = [eid for (eid, node) in stress.element_node]
-        max_stress_elem = element_ids[np.argmax(von_mises)]
+        if elem_max > max_stress:
+            max_stress = elem_max
+            element_ids = [eid for (eid, node) in stress.element_node]
+            max_stress_elem = int(element_ids[np.argmax(von_mises)])
+            max_stress_type = elem_type.upper()
 
-        return {
-            'max_von_mises': max_stress,
-            'max_stress_element': int(max_stress_elem)
-        }
-    else:
-        raise ValueError("von Mises stress not available")
+    if max_stress_type is None:
+        raise ValueError(f"No stress results found for element types: {element_types_to_check}")
+
+    # Convert from kPa to MPa (NX kg-mm-s unit system outputs kPa)
+    if convert_to_mpa:
+        max_stress = max_stress / 1000.0
+
+    return {
+        "max_von_mises": max_stress,
+        "max_stress_element": max_stress_elem,
+        "element_type": max_stress_type,
+        "units": "MPa" if convert_to_mpa else "kPa",
+    }
 
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     # Example usage
     import sys
 
     if len(sys.argv) > 1:
         op2_file = Path(sys.argv[1])
         result = extract_solid_stress(op2_file)
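Two rules in the rewritten extractor are pure logic and worth isolating: the von Mises column choice (shells carry 8 data columns with von Mises last; solids carry 10 with von Mises at index 9; anything else falls back to the last column) and the kPa-to-MPa conversion. A sketch; `von_mises_column` and `kpa_to_mpa` are hypothetical helper names, and the 10-column condition is assumed from the diff's comments:

```python
def von_mises_column(ncols: int) -> int:
    """Index of the von Mises column in a pyNastran stress.data array.

    Shell results (CQUAD4, CTRIA3): 8 columns, von Mises at index 7.
    Solid results (CTETRA, CHEXA, ...): 10 columns, von Mises at index 9.
    Unknown layouts fall back to the last column.
    """
    if ncols == 8:
        return 7
    if ncols == 10:
        return 9
    return ncols - 1


def kpa_to_mpa(value_kpa: float) -> float:
    """kg-mm-s stress output is in kPa; divide by 1000 for MPa."""
    return value_kpa / 1000.0


print(von_mises_column(10))   # 9
print(kpa_to_mpa(250000.0))   # 250.0
```

Keeping the conversion behind a `convert_to_mpa` flag, as the diff does, lets callers who work in consistent kPa units opt out without rescaling afterwards.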
@@ -473,23 +473,33 @@ def extract_displacements_by_subcase(
         ngt = darr.node_gridtype.astype(int)
         node_ids = ngt if ngt.ndim == 1 else ngt[:, 0]
 
-        # Try to identify subcase from subtitle or isubcase
+        # Try to identify subcase from subtitle, label, or isubcase
         subtitle = getattr(darr, 'subtitle', None)
+        op2_label = getattr(darr, 'label', None)
         isubcase = getattr(darr, 'isubcase', None)
 
-        # Extract numeric from subtitle
-        label = None
-        if isinstance(subtitle, str):
-            import re
+        # Extract numeric from subtitle first, then label, then isubcase
+        import re
+        subcase_id = None
+
+        # Priority 1: subtitle (e.g., "GRAVITY 20 DEG")
+        if isinstance(subtitle, str) and subtitle.strip():
             m = re.search(r'-?\d+', subtitle)
             if m:
-                label = m.group(0)
+                subcase_id = m.group(0)
 
-        if label is None and isinstance(isubcase, int):
-            label = str(isubcase)
+        # Priority 2: label field (e.g., "90 SUBCASE 1")
+        if subcase_id is None and isinstance(op2_label, str) and op2_label.strip():
+            m = re.search(r'-?\d+', op2_label)
+            if m:
+                subcase_id = m.group(0)
 
-        if label:
-            result[label] = {
+        # Priority 3: isubcase number
+        if subcase_id is None and isinstance(isubcase, int):
+            subcase_id = str(isubcase)
+
+        if subcase_id:
+            result[subcase_id] = {
                 'node_ids': node_ids.astype(int),
                 'disp': dmat.copy()
            }
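The three-priority subcase-ID resolution added in this hunk (first integer in the subtitle, then in the OP2 label, then the `isubcase` number) can be sketched as a standalone function; `resolve_subcase_id` is a hypothetical restructuring of the inline logic, not a name from the diff:

```python
import re
from typing import Optional


def resolve_subcase_id(subtitle, label, isubcase) -> Optional[str]:
    """First integer found in subtitle, then in label, then the isubcase number."""
    for text in (subtitle, label):
        if isinstance(text, str) and text.strip():
            m = re.search(r'-?\d+', text)
            if m:
                return m.group(0)
    if isinstance(isubcase, int):
        return str(isubcase)
    return None


print(resolve_subcase_id("GRAVITY 20 DEG", None, 1))   # '20'
print(resolve_subcase_id("", "90 SUBCASE 1", 1))       # '90'
print(resolve_subcase_id(None, None, 3))               # '3'
```

Note that the result is always a string key, so displacement arrays from differently labelled subcases land in one consistently keyed dict.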
optimization_engine/intake/__init__.py — Normal file (46 lines)
@@ -0,0 +1,46 @@
"""
Atomizer Intake System
======================

Provides structured intake processing for optimization studies.

Components:
- IntakeConfig: Pydantic schema for intake.yaml
- StudyContext: Complete assembled context for study creation
- IntakeProcessor: File handling and processing
- ContextAssembler: Combines all context sources

Usage:
    from optimization_engine.intake import IntakeProcessor, IntakeConfig

    processor = IntakeProcessor(inbox_folder)
    context = processor.process()
"""

from .config import (
    IntakeConfig,
    StudyConfig,
    ObjectiveConfig,
    ConstraintConfig,
    DesignVariableConfig,
    BudgetConfig,
    AlgorithmConfig,
    MaterialConfig,
)
from .context import StudyContext, IntrospectionData, BaselineResult
from .processor import IntakeProcessor

__all__ = [
    "IntakeConfig",
    "StudyConfig",
    "ObjectiveConfig",
    "ConstraintConfig",
    "DesignVariableConfig",
    "BudgetConfig",
    "AlgorithmConfig",
    "MaterialConfig",
    "StudyContext",
    "IntrospectionData",
    "BaselineResult",
    "IntakeProcessor",
]
371
optimization_engine/intake/config.py
Normal file
371
optimization_engine/intake/config.py
Normal file
@@ -0,0 +1,371 @@
|
|||||||
|
"""
|
||||||
|
Intake Configuration Schema
|
||||||
|
===========================
|
||||||
|
|
||||||
|
Pydantic models for intake.yaml configuration files.
|
||||||
|
|
||||||
|
These models define the structure of pre-configuration that users can
|
||||||
|
provide to skip interview questions and speed up study setup.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Optional, List, Literal, Union, Any, Dict
|
||||||
|
from pydantic import BaseModel, Field, field_validator, model_validator
|
||||||
|
import yaml
|
||||||
|
|
||||||
|
|
||||||
|
class ObjectiveConfig(BaseModel):
|
||||||
|
"""Configuration for an optimization objective."""
|
||||||
|
|
||||||
|
goal: Literal["minimize", "maximize"]
|
||||||
|
target: str = Field(
|
||||||
|
description="What to optimize: mass, displacement, stress, frequency, stiffness, or custom name"
|
||||||
|
)
|
||||||
|
weight: float = Field(default=1.0, ge=0.0, le=10.0)
|
||||||
|
extractor: Optional[str] = Field(
|
||||||
|
default=None, description="Custom extractor function name (auto-detected if not specified)"
|
||||||
|
)
|
||||||
|
|
||||||
|
@field_validator("target")
|
||||||
|
@classmethod
|
||||||
|
def validate_target(cls, v: str) -> str:
|
||||||
|
"""Normalize target names."""
|
||||||
|
known_targets = {
|
||||||
|
"mass",
|
||||||
|
"weight",
|
||||||
|
"displacement",
|
||||||
|
"deflection",
|
||||||
|
"stress",
|
||||||
|
"frequency",
|
||||||
|
"stiffness",
|
||||||
|
"strain_energy",
|
||||||
|
"volume",
|
||||||
|
}
|
||||||
|
normalized = v.lower().strip()
|
||||||
|
# Map common aliases
|
||||||
|
aliases = {
|
||||||
|
"weight": "mass",
|
||||||
|
"deflection": "displacement",
|
||||||
|
}
|
||||||
|
return aliases.get(normalized, normalized)
|
||||||
|
|
||||||
|
|
||||||
|
class ConstraintConfig(BaseModel):
|
||||||
|
"""Configuration for an optimization constraint."""
|
||||||
|
|
||||||
|
type: str = Field(
|
||||||
|
        description="Constraint type: max_stress, max_displacement, min_frequency, etc."
    )
    threshold: float
    units: str = ""
    description: Optional[str] = None

    @field_validator("type")
    @classmethod
    def normalize_type(cls, v: str) -> str:
        """Normalize constraint type names."""
        return v.lower().strip().replace(" ", "_")


class DesignVariableConfig(BaseModel):
    """Configuration for a design variable."""

    name: str = Field(description="NX expression name")
    bounds: tuple[float, float] = Field(description="(min, max) bounds")
    units: Optional[str] = None
    description: Optional[str] = None
    step: Optional[float] = Field(default=None, description="Step size for discrete variables")

    @field_validator("bounds")
    @classmethod
    def validate_bounds(cls, v: tuple[float, float]) -> tuple[float, float]:
        """Ensure bounds are valid."""
        if len(v) != 2:
            raise ValueError("Bounds must be a tuple of (min, max)")
        if v[0] >= v[1]:
            raise ValueError(f"Lower bound ({v[0]}) must be less than upper bound ({v[1]})")
        return v

    @property
    def range(self) -> float:
        """Get the range of the design variable."""
        return self.bounds[1] - self.bounds[0]

    @property
    def range_ratio(self) -> float:
        """Get the ratio of upper to lower bound."""
        if self.bounds[0] == 0:
            return float("inf")
        return self.bounds[1] / self.bounds[0]


class BudgetConfig(BaseModel):
    """Configuration for optimization budget."""

    max_trials: int = Field(default=100, ge=1, le=10000)
    timeout_per_trial: int = Field(default=300, ge=10, le=7200, description="Seconds per FEA solve")
    target_runtime: Optional[str] = Field(
        default=None, description="Target total runtime (e.g., '2h', '30m')"
    )

    def get_target_runtime_seconds(self) -> Optional[int]:
        """Parse target_runtime string to seconds."""
        if not self.target_runtime:
            return None

        runtime = self.target_runtime.lower().strip()

        if runtime.endswith("h"):
            return int(float(runtime[:-1]) * 3600)
        elif runtime.endswith("m"):
            return int(float(runtime[:-1]) * 60)
        elif runtime.endswith("s"):
            return int(float(runtime[:-1]))
        else:
            # Assume seconds
            return int(float(runtime))
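The suffix parsing in `get_target_runtime_seconds` can be exercised in isolation. The standalone helper below mirrors that logic outside the Pydantic model (the name `parse_runtime` is mine, not part of the module):

```python
from typing import Optional

def parse_runtime(runtime: Optional[str]) -> Optional[int]:
    """Mirror of BudgetConfig.get_target_runtime_seconds: '2h' -> 7200, '30m' -> 1800."""
    if not runtime:
        return None
    runtime = runtime.lower().strip()
    if runtime.endswith("h"):
        return int(float(runtime[:-1]) * 3600)
    if runtime.endswith("m"):
        return int(float(runtime[:-1]) * 60)
    if runtime.endswith("s"):
        return int(float(runtime[:-1]))
    # Bare numbers are treated as seconds
    return int(float(runtime))

print(parse_runtime("2h"), parse_runtime("30m"), parse_runtime("1.5h"))  # 7200 1800 5400
```

Note that fractional values like `"1.5h"` work because the digits go through `float()` before truncation to `int`.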


class AlgorithmConfig(BaseModel):
    """Configuration for optimization algorithm."""

    method: Literal["auto", "TPE", "CMA-ES", "NSGA-II", "random"] = "auto"
    neural_acceleration: bool = Field(
        default=False, description="Enable surrogate model for speedup"
    )
    priority: Literal["speed", "accuracy", "balanced"] = "balanced"
    seed: Optional[int] = Field(default=None, description="Random seed for reproducibility")


class MaterialConfig(BaseModel):
    """Configuration for material properties."""

    name: str
    yield_stress: Optional[float] = Field(default=None, ge=0, description="Yield stress in MPa")
    ultimate_stress: Optional[float] = Field(
        default=None, ge=0, description="Ultimate stress in MPa"
    )
    density: Optional[float] = Field(default=None, ge=0, description="Density in kg/m3")
    youngs_modulus: Optional[float] = Field(
        default=None, ge=0, description="Young's modulus in GPa"
    )
    poissons_ratio: Optional[float] = Field(
        default=None, ge=0, le=0.5, description="Poisson's ratio"
    )


class ObjectivesConfig(BaseModel):
    """Configuration for all objectives."""

    primary: ObjectiveConfig
    secondary: Optional[List[ObjectiveConfig]] = None

    @property
    def is_multi_objective(self) -> bool:
        """Check if this is a multi-objective problem."""
        return self.secondary is not None and len(self.secondary) > 0

    @property
    def all_objectives(self) -> List[ObjectiveConfig]:
        """Get all objectives as a flat list."""
        objectives = [self.primary]
        if self.secondary:
            objectives.extend(self.secondary)
        return objectives


class StudyConfig(BaseModel):
    """Configuration for study metadata."""

    name: Optional[str] = Field(
        default=None, description="Study name (auto-generated from folder if omitted)"
    )
    type: Literal["single_objective", "multi_objective"] = "single_objective"
    description: Optional[str] = None
    tags: Optional[List[str]] = None


class IntakeConfig(BaseModel):
    """
    Complete intake.yaml configuration schema.

    All fields are optional - anything not specified will be asked
    in the interview or auto-detected from introspection.
    """

    study: Optional[StudyConfig] = None
    objectives: Optional[ObjectivesConfig] = None
    constraints: Optional[List[ConstraintConfig]] = None
    design_variables: Optional[List[DesignVariableConfig]] = None
    budget: Optional[BudgetConfig] = None
    algorithm: Optional[AlgorithmConfig] = None
    material: Optional[MaterialConfig] = None
    notes: Optional[str] = None

    @classmethod
    def from_yaml(cls, yaml_path: Union[str, Path]) -> "IntakeConfig":
        """Load configuration from a YAML file."""
        yaml_path = Path(yaml_path)

        if not yaml_path.exists():
            raise FileNotFoundError(f"Intake config not found: {yaml_path}")

        with open(yaml_path, "r", encoding="utf-8") as f:
            data = yaml.safe_load(f)

        if data is None:
            return cls()

        return cls.model_validate(data)

    @classmethod
    def from_yaml_safe(cls, yaml_path: Union[str, Path]) -> Optional["IntakeConfig"]:
        """Load configuration from YAML, returning None if file doesn't exist."""
        yaml_path = Path(yaml_path)

        if not yaml_path.exists():
            return None

        try:
            return cls.from_yaml(yaml_path)
        except Exception:
            return None

    def to_yaml(self, yaml_path: Union[str, Path]) -> None:
        """Save configuration to a YAML file."""
        yaml_path = Path(yaml_path)

        data = self.model_dump(exclude_none=True)

        with open(yaml_path, "w", encoding="utf-8") as f:
            yaml.dump(data, f, default_flow_style=False, sort_keys=False)

    def get_value(self, key: str) -> Optional[Any]:
        """
        Get a configuration value by dot-notation key.

        Examples:
            config.get_value("study.name")
            config.get_value("budget.max_trials")
            config.get_value("objectives.primary.goal")
        """
        parts = key.split(".")
        value: Any = self

        for part in parts:
            if value is None:
                return None
            if hasattr(value, part):
                value = getattr(value, part)
            elif isinstance(value, dict):
                value = value.get(part)
            else:
                return None

        return value
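The dot-notation traversal above works the same over attributes and plain dicts. As a standalone sketch (a free function rather than the `IntakeConfig` method, with a hypothetical config dict):

```python
from typing import Any, Optional

def get_value(obj: Any, key: str) -> Optional[Any]:
    """Walk a dot-notation key across attributes and dicts, as IntakeConfig.get_value does."""
    value: Any = obj
    for part in key.split("."):
        if value is None:
            return None
        if hasattr(value, part):
            value = getattr(value, part)
        elif isinstance(value, dict):
            value = value.get(part)
        else:
            return None
    return value

config = {"budget": {"max_trials": 100}, "study": {"name": "bracket_v1"}}
print(get_value(config, "budget.max_trials"))  # 100
print(get_value(config, "study.missing"))      # None
```

Missing segments short-circuit to `None` rather than raising, which suits the "everything optional" contract of `IntakeConfig`.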

    def is_complete(self) -> bool:
        """Check if all required configuration is provided."""
        return (
            self.objectives is not None
            and self.design_variables is not None
            and len(self.design_variables) > 0
        )

    def get_missing_fields(self) -> List[str]:
        """Get list of fields that still need to be configured."""
        missing = []

        if self.objectives is None:
            missing.append("objectives")
        if self.design_variables is None or len(self.design_variables) == 0:
            missing.append("design_variables")
        if self.constraints is None:
            missing.append("constraints (recommended)")
        if self.budget is None:
            missing.append("budget")

        return missing

    @model_validator(mode="after")
    def validate_consistency(self) -> "IntakeConfig":
        """Validate consistency between configuration sections."""
        # Check study type matches objectives
        if self.study and self.objectives:
            is_multi = self.objectives.is_multi_objective
            declared_multi = self.study.type == "multi_objective"

            if is_multi and not declared_multi:
                # Auto-correct study type
                self.study.type = "multi_objective"

        return self


# Common material presets
MATERIAL_PRESETS: Dict[str, MaterialConfig] = {
    "aluminum_6061_t6": MaterialConfig(
        name="Aluminum 6061-T6",
        yield_stress=276,
        ultimate_stress=310,
        density=2700,
        youngs_modulus=68.9,
        poissons_ratio=0.33,
    ),
    "aluminum_7075_t6": MaterialConfig(
        name="Aluminum 7075-T6",
        yield_stress=503,
        ultimate_stress=572,
        density=2810,
        youngs_modulus=71.7,
        poissons_ratio=0.33,
    ),
    "steel_a36": MaterialConfig(
        name="Steel A36",
        yield_stress=250,
        ultimate_stress=400,
        density=7850,
        youngs_modulus=200,
        poissons_ratio=0.26,
    ),
    "stainless_304": MaterialConfig(
        name="Stainless Steel 304",
        yield_stress=215,
        ultimate_stress=505,
        density=8000,
        youngs_modulus=193,
        poissons_ratio=0.29,
    ),
    "titanium_6al4v": MaterialConfig(
        name="Titanium Ti-6Al-4V",
        yield_stress=880,
        ultimate_stress=950,
        density=4430,
        youngs_modulus=113.8,
        poissons_ratio=0.342,
    ),
}


def get_material_preset(name: str) -> Optional[MaterialConfig]:
    """
    Get a material preset by name (fuzzy matching).

    Examples:
        get_material_preset("6061")   # Returns aluminum_6061_t6
        get_material_preset("steel")  # Returns steel_a36
    """
    name_lower = name.lower().replace("-", "_").replace(" ", "_")

    # Direct match
    if name_lower in MATERIAL_PRESETS:
        return MATERIAL_PRESETS[name_lower]

    # Partial match
    for key, material in MATERIAL_PRESETS.items():
        if name_lower in key or name_lower in material.name.lower():
            return material

    return None
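The two-stage lookup (exact key first, then substring match on key or display name) can be sketched without the Pydantic models. Here `PRESETS` is a hypothetical subset of the table above, mapping keys to display names only:

```python
from typing import Optional

# Hypothetical subset of MATERIAL_PRESETS, keyed the same way
PRESETS = {
    "aluminum_6061_t6": "Aluminum 6061-T6",
    "steel_a36": "Steel A36",
}

def lookup(name: str) -> Optional[str]:
    """Direct match first, then substring match on key or display name."""
    name_lower = name.lower().replace("-", "_").replace(" ", "_")
    if name_lower in PRESETS:
        return PRESETS[name_lower]
    for key, display in PRESETS.items():
        if name_lower in key or name_lower in display.lower():
            return display
    return None

print(lookup("6061"), lookup("steel"), lookup("inconel"))
```

Because the partial match iterates in insertion order, an ambiguous query returns the first preset it hits; queries with no match fall through to `None`.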

540  optimization_engine/intake/context.py  Normal file
@@ -0,0 +1,540 @@
"""
Study Context
=============

Complete assembled context for study creation, combining:
- Model introspection results
- Context files (goals.md, PDFs, images)
- Pre-configuration (intake.yaml)
- LAC memory (similar studies, recommendations)

This context object is used by both Interview Mode and Canvas Mode
to provide intelligent suggestions and pre-filled values.
"""

from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path
from typing import Optional, List, Dict, Any
from enum import Enum
import json


class ConfidenceLevel(str, Enum):
    """Confidence level for suggestions."""

    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"


@dataclass
class ExpressionInfo:
    """Information about an NX expression."""

    name: str
    value: Optional[float] = None
    units: Optional[str] = None
    formula: Optional[str] = None
    type: str = "Number"
    is_design_candidate: bool = False
    confidence: ConfidenceLevel = ConfidenceLevel.MEDIUM
    reason: Optional[str] = None

    def to_dict(self) -> Dict[str, Any]:
        return {
            "name": self.name,
            "value": self.value,
            "units": self.units,
            "formula": self.formula,
            "type": self.type,
            "is_design_candidate": self.is_design_candidate,
            "confidence": self.confidence.value,
            "reason": self.reason,
        }


@dataclass
class SolutionInfo:
    """Information about an NX solution."""

    name: str
    type: str  # SOL 101, SOL 103, etc.
    description: Optional[str] = None


@dataclass
class BoundaryConditionInfo:
    """Information about a boundary condition."""

    name: str
    type: str  # Fixed, Pinned, etc.
    location: Optional[str] = None


@dataclass
class LoadInfo:
    """Information about a load."""

    name: str
    type: str  # Force, Pressure, etc.
    magnitude: Optional[float] = None
    units: Optional[str] = None
    location: Optional[str] = None


@dataclass
class MaterialInfo:
    """Information about a material in the model."""

    name: str
    yield_stress: Optional[float] = None
    density: Optional[float] = None
    youngs_modulus: Optional[float] = None


@dataclass
class MeshInfo:
    """Information about the mesh."""

    element_count: int = 0
    node_count: int = 0
    element_types: List[str] = field(default_factory=list)
    quality_metrics: Dict[str, float] = field(default_factory=dict)


@dataclass
class BaselineResult:
    """Results from baseline solve."""

    mass_kg: Optional[float] = None
    max_displacement_mm: Optional[float] = None
    max_stress_mpa: Optional[float] = None
    max_strain: Optional[float] = None
    first_frequency_hz: Optional[float] = None
    strain_energy_j: Optional[float] = None
    solve_time_seconds: Optional[float] = None
    success: bool = False
    error: Optional[str] = None

    def to_dict(self) -> Dict[str, Any]:
        return {
            "mass_kg": self.mass_kg,
            "max_displacement_mm": self.max_displacement_mm,
            "max_stress_mpa": self.max_stress_mpa,
            "max_strain": self.max_strain,
            "first_frequency_hz": self.first_frequency_hz,
            "strain_energy_j": self.strain_energy_j,
            "solve_time_seconds": self.solve_time_seconds,
            "success": self.success,
            "error": self.error,
        }

    def get_summary(self) -> str:
        """Get a human-readable summary of baseline results."""
        if not self.success:
            return f"Baseline solve failed: {self.error or 'Unknown error'}"

        parts = []
        if self.mass_kg is not None:
            parts.append(f"mass={self.mass_kg:.2f}kg")
        if self.max_displacement_mm is not None:
            parts.append(f"disp={self.max_displacement_mm:.3f}mm")
        if self.max_stress_mpa is not None:
            parts.append(f"stress={self.max_stress_mpa:.1f}MPa")
        if self.first_frequency_hz is not None:
            parts.append(f"freq={self.first_frequency_hz:.1f}Hz")

        return ", ".join(parts) if parts else "No results"
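The summary builder only emits fields that were actually populated, so partial baselines still format cleanly. A trimmed stand-in (not the real `BaselineResult`, just two of its fields) shows the three output shapes:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Baseline:
    """Trimmed stand-in for BaselineResult, just enough for get_summary."""
    success: bool = False
    error: Optional[str] = None
    mass_kg: Optional[float] = None
    max_stress_mpa: Optional[float] = None

    def get_summary(self) -> str:
        if not self.success:
            return f"Baseline solve failed: {self.error or 'Unknown error'}"
        parts = []
        if self.mass_kg is not None:
            parts.append(f"mass={self.mass_kg:.2f}kg")
        if self.max_stress_mpa is not None:
            parts.append(f"stress={self.max_stress_mpa:.1f}MPa")
        return ", ".join(parts) if parts else "No results"

print(Baseline(success=True, mass_kg=1.234, max_stress_mpa=187.56).get_summary())
print(Baseline(success=False, error="license").get_summary())
```

A failed solve reports its error, a successful one joins whatever metrics exist, and a successful solve with no metrics degrades to `"No results"`.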


@dataclass
class IntrospectionData:
    """Complete introspection results from NX model."""

    success: bool = False
    timestamp: Optional[datetime] = None
    error: Optional[str] = None

    # Part information
    expressions: List[ExpressionInfo] = field(default_factory=list)
    bodies: List[Dict[str, Any]] = field(default_factory=list)

    # Simulation information
    solutions: List[SolutionInfo] = field(default_factory=list)
    boundary_conditions: List[BoundaryConditionInfo] = field(default_factory=list)
    loads: List[LoadInfo] = field(default_factory=list)
    materials: List[MaterialInfo] = field(default_factory=list)
    mesh_info: Optional[MeshInfo] = None

    # Available result types (from OP2)
    available_results: Dict[str, bool] = field(default_factory=dict)
    subcases: List[int] = field(default_factory=list)

    # Baseline solve
    baseline: Optional[BaselineResult] = None

    def get_expression_names(self) -> List[str]:
        """Get list of all expression names."""
        return [e.name for e in self.expressions]

    def get_design_candidates(self) -> List[ExpressionInfo]:
        """Get expressions that look like design variables."""
        return [e for e in self.expressions if e.is_design_candidate]

    def get_expression(self, name: str) -> Optional[ExpressionInfo]:
        """Get expression by name."""
        for expr in self.expressions:
            if expr.name == name:
                return expr
        return None

    def get_solver_type(self) -> Optional[str]:
        """Get the primary solver type (SOL 101, etc.)."""
        if self.solutions:
            return self.solutions[0].type
        return None

    def to_dict(self) -> Dict[str, Any]:
        """Convert to dictionary for JSON serialization."""
        return {
            "success": self.success,
            "timestamp": self.timestamp.isoformat() if self.timestamp else None,
            "error": self.error,
            "expressions": [e.to_dict() for e in self.expressions],
            "solutions": [{"name": s.name, "type": s.type} for s in self.solutions],
            "boundary_conditions": [
                {"name": bc.name, "type": bc.type} for bc in self.boundary_conditions
            ],
            "loads": [
                {"name": l.name, "type": l.type, "magnitude": l.magnitude} for l in self.loads
            ],
            "materials": [{"name": m.name, "yield_stress": m.yield_stress} for m in self.materials],
            "available_results": self.available_results,
            "subcases": self.subcases,
            "baseline": self.baseline.to_dict() if self.baseline else None,
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "IntrospectionData":
        """Create from dictionary."""
        introspection = cls(
            success=data.get("success", False),
            error=data.get("error"),
        )

        if data.get("timestamp"):
            introspection.timestamp = datetime.fromisoformat(data["timestamp"])

        # Parse expressions
        for expr_data in data.get("expressions", []):
            introspection.expressions.append(
                ExpressionInfo(
                    name=expr_data["name"],
                    value=expr_data.get("value"),
                    units=expr_data.get("units"),
                    formula=expr_data.get("formula"),
                    type=expr_data.get("type", "Number"),
                    is_design_candidate=expr_data.get("is_design_candidate", False),
                    confidence=ConfidenceLevel(expr_data.get("confidence", "medium")),
                )
            )

        # Parse solutions
        for sol_data in data.get("solutions", []):
            introspection.solutions.append(
                SolutionInfo(
                    name=sol_data["name"],
                    type=sol_data["type"],
                )
            )

        introspection.available_results = data.get("available_results", {})
        introspection.subcases = data.get("subcases", [])

        # Parse baseline
        if data.get("baseline"):
            baseline_data = data["baseline"]
            introspection.baseline = BaselineResult(
                mass_kg=baseline_data.get("mass_kg"),
                max_displacement_mm=baseline_data.get("max_displacement_mm"),
                max_stress_mpa=baseline_data.get("max_stress_mpa"),
                solve_time_seconds=baseline_data.get("solve_time_seconds"),
                success=baseline_data.get("success", False),
                error=baseline_data.get("error"),
            )

        return introspection
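The `to_dict`/`from_dict` pair gives `IntrospectionData` a JSON-safe round trip: nested dataclasses are flattened to plain dicts on the way out and rebuilt on the way in. A minimal analogue of that pattern, with a hypothetical `Snapshot` holding one nested list:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Expr:
    name: str
    value: float = 0.0

@dataclass
class Snapshot:
    """Minimal analogue of IntrospectionData's dict round trip."""
    success: bool = False
    expressions: List[Expr] = field(default_factory=list)

    def to_dict(self) -> Dict[str, Any]:
        return {
            "success": self.success,
            "expressions": [{"name": e.name, "value": e.value} for e in self.expressions],
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "Snapshot":
        snap = cls(success=data.get("success", False))
        for e in data.get("expressions", []):
            snap.expressions.append(Expr(name=e["name"], value=e.get("value", 0.0)))
        return snap

snap = Snapshot(success=True, expressions=[Expr("thickness_mm", 4.0)])
restored = Snapshot.from_dict(snap.to_dict())
print(restored == snap)  # dataclass equality compares fields, so this prints True
```

Using `data.get(...)` with defaults on the way back in keeps loading tolerant of older serialized files that lack newer fields.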


@dataclass
class DVSuggestion:
    """Suggested design variable."""

    name: str
    current_value: Optional[float] = None
    suggested_bounds: Optional[tuple[float, float]] = None
    units: Optional[str] = None
    confidence: ConfidenceLevel = ConfidenceLevel.MEDIUM
    reason: str = ""
    source: str = "introspection"  # introspection, preconfig, lac
    lac_insight: Optional[str] = None

    def to_dict(self) -> Dict[str, Any]:
        return {
            "name": self.name,
            "current_value": self.current_value,
            "suggested_bounds": list(self.suggested_bounds) if self.suggested_bounds else None,
            "units": self.units,
            "confidence": self.confidence.value,
            "reason": self.reason,
            "source": self.source,
            "lac_insight": self.lac_insight,
        }


@dataclass
class ObjectiveSuggestion:
    """Suggested optimization objective."""

    name: str
    goal: str  # minimize, maximize
    extractor: str
    confidence: ConfidenceLevel = ConfidenceLevel.MEDIUM
    reason: str = ""
    source: str = "goals"


@dataclass
class ConstraintSuggestion:
    """Suggested optimization constraint."""

    name: str
    type: str  # less_than, greater_than
    suggested_threshold: Optional[float] = None
    units: Optional[str] = None
    confidence: ConfidenceLevel = ConfidenceLevel.MEDIUM
    reason: str = ""
    source: str = "requirements"


@dataclass
class ImageAnalysis:
    """Analysis result from Claude Vision for an image."""

    image_path: Path
    component_type: Optional[str] = None
    dimensions: List[str] = field(default_factory=list)
    load_conditions: List[str] = field(default_factory=list)
    annotations: List[str] = field(default_factory=list)
    suggestions: List[str] = field(default_factory=list)
    raw_analysis: Optional[str] = None


@dataclass
class LACInsight:
    """Insight from Learning Atomizer Core."""

    study_name: str
    similarity_score: float
    geometry_type: str
    method_used: str
    objectives: List[str]
    trials_to_convergence: Optional[int] = None
    success: bool = True
    lesson: Optional[str] = None


@dataclass
class StudyContext:
    """
    Complete context for study creation.

    This is the central data structure that combines all information
    gathered during intake processing, ready for use by Interview Mode
    or Canvas Mode.
    """

    # === Identity ===
    study_name: str
    source_folder: Path
    created_at: datetime = field(default_factory=datetime.now)

    # === Model Files ===
    sim_file: Optional[Path] = None
    fem_file: Optional[Path] = None
    prt_file: Optional[Path] = None
    idealized_prt_file: Optional[Path] = None

    # === From Introspection ===
    introspection: Optional[IntrospectionData] = None

    # === From Context Files ===
    goals_text: Optional[str] = None
    requirements_text: Optional[str] = None
    constraints_text: Optional[str] = None
    notes_text: Optional[str] = None
    image_analyses: List[ImageAnalysis] = field(default_factory=list)

    # === From intake.yaml ===
    preconfig: Optional[Any] = None  # IntakeConfig, imported dynamically to avoid circular import

    # === From LAC ===
    similar_studies: List[LACInsight] = field(default_factory=list)
    recommended_method: Optional[str] = None
    known_issues: List[str] = field(default_factory=list)
    user_preferences: Dict[str, Any] = field(default_factory=dict)

    # === Derived Suggestions ===
    suggested_dvs: List[DVSuggestion] = field(default_factory=list)
    suggested_objectives: List[ObjectiveSuggestion] = field(default_factory=list)
    suggested_constraints: List[ConstraintSuggestion] = field(default_factory=list)

    # === Status ===
    warnings: List[str] = field(default_factory=list)
    errors: List[str] = field(default_factory=list)

    @property
    def has_introspection(self) -> bool:
        """Check if introspection data is available."""
        return self.introspection is not None and self.introspection.success

    @property
    def has_baseline(self) -> bool:
        """Check if baseline results are available."""
        return (
            self.introspection is not None
            and self.introspection.baseline is not None
            and self.introspection.baseline.success
        )

    @property
    def has_preconfig(self) -> bool:
        """Check if pre-configuration is available."""
        return self.preconfig is not None

    @property
    def ready_for_interview(self) -> bool:
        """Check if context is ready for interview mode."""
        return self.has_introspection and len(self.errors) == 0

    @property
    def ready_for_canvas(self) -> bool:
        """Check if context is ready for canvas mode."""
        return self.has_introspection and self.sim_file is not None

    def get_baseline_summary(self) -> str:
        """Get human-readable baseline summary."""
        if self.introspection is None:
            return "No baseline data"
        if self.introspection.baseline is None:
            return "No baseline data"
        return self.introspection.baseline.get_summary()

    def get_missing_required(self) -> List[str]:
        """Get list of missing required items."""
        missing = []

        if self.sim_file is None:
            missing.append("Simulation file (.sim)")
        if not self.has_introspection:
            missing.append("Model introspection")

        return missing

    def get_context_summary(self) -> Dict[str, Any]:
        """Get a summary of loaded context for display."""
        return {
            "study_name": self.study_name,
            "has_model": self.sim_file is not None,
            "has_introspection": self.has_introspection,
            "has_baseline": self.has_baseline,
            "has_goals": self.goals_text is not None,
            "has_requirements": self.requirements_text is not None,
            "has_preconfig": self.has_preconfig,
            "num_expressions": len(self.introspection.expressions) if self.introspection else 0,
            "num_dv_candidates": (
                len(self.introspection.get_design_candidates()) if self.introspection else 0
            ),
            "num_similar_studies": len(self.similar_studies),
            "warnings": self.warnings,
            "errors": self.errors,
        }

    def to_interview_context(self) -> Dict[str, Any]:
        """Get context formatted for interview mode."""
        return {
            "study_name": self.study_name,
            "baseline": (
                self.introspection.baseline.to_dict()
                if self.introspection is not None and self.introspection.baseline is not None
                else None
            ),
            "expressions": (
                [e.to_dict() for e in self.introspection.expressions] if self.introspection else []
            ),
            "design_candidates": (
                [e.to_dict() for e in self.introspection.get_design_candidates()]
                if self.introspection
                else []
            ),
            "solver_type": self.introspection.get_solver_type() if self.introspection else None,
            "goals_text": self.goals_text,
            "requirements_text": self.requirements_text,
            "preconfig": self.preconfig.model_dump() if self.preconfig else None,
            "suggested_dvs": [dv.to_dict() for dv in self.suggested_dvs],
            "similar_studies": [
                {"name": s.study_name, "method": s.method_used, "similarity": s.similarity_score}
                for s in self.similar_studies
            ],
            "recommended_method": self.recommended_method,
        }

    def save(self, output_path: Path) -> None:
        """Save context to JSON file."""
        data = {
            "study_name": self.study_name,
            "source_folder": str(self.source_folder),
            "created_at": self.created_at.isoformat(),
            "sim_file": str(self.sim_file) if self.sim_file else None,
            "fem_file": str(self.fem_file) if self.fem_file else None,
            "prt_file": str(self.prt_file) if self.prt_file else None,
            "introspection": self.introspection.to_dict() if self.introspection else None,
            "goals_text": self.goals_text,
            "requirements_text": self.requirements_text,
            "suggested_dvs": [dv.to_dict() for dv in self.suggested_dvs],
            "warnings": self.warnings,
            "errors": self.errors,
        }

        with open(output_path, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=2)

    @classmethod
    def load(cls, input_path: Path) -> "StudyContext":
        """Load context from JSON file."""
        with open(input_path, "r", encoding="utf-8") as f:
            data = json.load(f)

        context = cls(
            study_name=data["study_name"],
            source_folder=Path(data["source_folder"]),
            created_at=datetime.fromisoformat(data["created_at"]),
        )

        if data.get("sim_file"):
            context.sim_file = Path(data["sim_file"])
        if data.get("fem_file"):
            context.fem_file = Path(data["fem_file"])
        if data.get("prt_file"):
            context.prt_file = Path(data["prt_file"])

        if data.get("introspection"):
            context.introspection = IntrospectionData.from_dict(data["introspection"])

        context.goals_text = data.get("goals_text")
        context.requirements_text = data.get("requirements_text")
        context.warnings = data.get("warnings", [])
        context.errors = data.get("errors", [])

        return context
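`StudyContext.save`/`load` persist only JSON-friendly values: `Path` fields are stringified on save and rebuilt on load. The standalone sketch below shows just that Path-conversion pattern over a plain dict (the `save`/`load` helpers here are illustrative, not the class methods):

```python
import json
import tempfile
from pathlib import Path

def save(data: dict, path: Path) -> None:
    """Stringify Path fields before writing, as StudyContext.save does."""
    payload = dict(data)
    payload["sim_file"] = str(data["sim_file"]) if data.get("sim_file") else None
    path.write_text(json.dumps(payload, indent=2), encoding="utf-8")

def load(path: Path) -> dict:
    """Rebuild Path fields after reading, as StudyContext.load does."""
    data = json.loads(path.read_text(encoding="utf-8"))
    if data.get("sim_file"):
        data["sim_file"] = Path(data["sim_file"])
    return data

with tempfile.TemporaryDirectory() as tmp:
    target = Path(tmp) / "context.json"
    save({"study_name": "bracket_v1", "sim_file": Path("model.sim")}, target)
    restored = load(target)

print(restored["study_name"], restored["sim_file"])
```

Converting at the boundary keeps the in-memory object typed (`Path`, `datetime`) while the file on disk stays portable JSON.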

789  optimization_engine/intake/processor.py  Normal file
@@ -0,0 +1,789 @@
"""
|
||||||
|
Intake Processor
|
||||||
|
================
|
||||||
|
|
||||||
|
Processes intake folders to create study context:
|
||||||
|
1. Validates folder structure
|
||||||
|
2. Copies model files to study directory
|
||||||
|
3. Parses intake.yaml pre-configuration
|
||||||
|
4. Extracts text from context files (goals.md, PDFs)
|
||||||
|
5. Runs model introspection
|
||||||
|
6. Optionally runs baseline solve
|
||||||
|
7. Assembles complete StudyContext
|
||||||
|
|
||||||
|
Usage:
|
||||||
|
from optimization_engine.intake import IntakeProcessor
|
||||||
|
|
||||||
|
processor = IntakeProcessor(Path("studies/_inbox/my_project"))
|
||||||
|
context = processor.process(run_baseline=True)
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import logging
|
||||||
|
import shutil
|
||||||
|
import re
|
||||||
|
from datetime import datetime
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Optional, List, Callable, Dict, Any
|
||||||
|
|
||||||
|
from .config import IntakeConfig, DesignVariableConfig
|
||||||
|
from .context import (
|
||||||
|
StudyContext,
|
||||||
|
IntrospectionData,
|
||||||
|
ExpressionInfo,
|
||||||
|
SolutionInfo,
|
||||||
|
BaselineResult,
|
||||||
|
DVSuggestion,
|
||||||
|
ObjectiveSuggestion,
|
||||||
|
ConstraintSuggestion,
|
||||||
|
ConfidenceLevel,
|
||||||
|
)
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
|
class IntakeError(Exception):
    """Error during intake processing."""

    pass


class IntakeProcessor:
    """
    Processes an intake folder to create a complete StudyContext.

    The processor handles:
    - File discovery and validation
    - Model file copying
    - Configuration parsing
    - Context file extraction
    - Model introspection (via NX journals)
    - Baseline solve (optional)
    - Suggestion generation
    """

    def __init__(
        self,
        inbox_folder: Path,
        studies_dir: Optional[Path] = None,
        progress_callback: Optional[Callable[[str, float], None]] = None,
    ):
        """
        Initialize the intake processor.

        Args:
            inbox_folder: Path to the intake folder (in _inbox/)
            studies_dir: Base studies directory (default: auto-detect)
            progress_callback: Optional callback for progress updates (message, percent)
        """
        self.inbox_folder = Path(inbox_folder)
        self.progress_callback = progress_callback or (lambda m, p: None)

        # Validate inbox folder exists
        if not self.inbox_folder.exists():
            raise IntakeError(f"Inbox folder not found: {self.inbox_folder}")

        # Determine study name from folder name
        self.study_name = self.inbox_folder.name
        if self.study_name.startswith("_"):
            # Strip leading underscore (used for examples)
            self.study_name = self.study_name[1:]

        # Set studies directory
        if studies_dir is None:
            # Find project root
            current = Path(__file__).parent
            while current != current.parent:
                if (current / "CLAUDE.md").exists():
                    studies_dir = current / "studies"
                    break
                current = current.parent
            else:
                studies_dir = Path.cwd() / "studies"

        self.studies_dir = Path(studies_dir)
        self.study_dir = self.studies_dir / self.study_name

        # Initialize context
        self.context = StudyContext(
            study_name=self.study_name,
            source_folder=self.inbox_folder,
        )

    def process(
        self,
        run_baseline: bool = True,
        copy_files: bool = True,
        run_introspection: bool = True,
    ) -> StudyContext:
        """
        Process the intake folder and create StudyContext.

        Args:
            run_baseline: Run a baseline FEA solve to get actual values
            copy_files: Copy model files to study directory
            run_introspection: Run NX model introspection

        Returns:
            Complete StudyContext ready for interview or canvas
        """
        logger.info(f"Processing intake: {self.inbox_folder}")

        try:
            # Step 1: Discover files
            self._progress("Discovering files...", 0.0)
            self._discover_files()

            # Step 2: Parse intake.yaml
            self._progress("Parsing configuration...", 0.1)
            self._parse_config()

            # Step 3: Extract context files
            self._progress("Extracting context...", 0.2)
            self._extract_context_files()

            # Step 4: Copy model files
            if copy_files:
                self._progress("Copying model files...", 0.3)
                self._copy_model_files()

            # Step 5: Run introspection
            if run_introspection:
                self._progress("Introspecting model...", 0.4)
                self._run_introspection()

            # Step 6: Run baseline solve
            if run_baseline and self.context.sim_file:
                self._progress("Running baseline solve...", 0.6)
                self._run_baseline_solve()

            # Step 7: Generate suggestions
            self._progress("Generating suggestions...", 0.8)
            self._generate_suggestions()

            # Step 8: Save context
            self._progress("Saving context...", 0.9)
            self._save_context()

            self._progress("Complete!", 1.0)

        except Exception as e:
            self.context.errors.append(str(e))
            logger.error(f"Intake processing failed: {e}")
            raise

        return self.context

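The callback contract used by `process()` can be sketched in isolation: each step reports a message and a fraction in [0, 1], with the step fractions mirroring the pipeline above. The recording callback and the step list here are illustrative, not part of the module.

```python
# Sketch of the (message, percent) progress-callback contract, assuming the
# step fractions used by process() above. Steps 4-6 only fire when enabled.
events = []

def record(message: str, percent: float) -> None:
    # A progress_callback compatible with IntakeProcessor.
    events.append((message, percent))

steps = [
    ("Discovering files...", 0.0),
    ("Parsing configuration...", 0.1),
    ("Extracting context...", 0.2),
    ("Copying model files...", 0.3),
    ("Introspecting model...", 0.4),
    ("Running baseline solve...", 0.6),
    ("Generating suggestions...", 0.8),
    ("Saving context...", 0.9),
    ("Complete!", 1.0),
]
for message, percent in steps:
    record(message, percent)
```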
    def _progress(self, message: str, percent: float) -> None:
        """Report progress."""
        logger.info(f"[{percent * 100:.0f}%] {message}")
        self.progress_callback(message, percent)

    def _discover_files(self) -> None:
        """Discover model and context files in the inbox folder."""

        # Look for model files
        models_dir = self.inbox_folder / "models"
        if models_dir.exists():
            search_dir = models_dir
        else:
            # Fall back to root folder
            search_dir = self.inbox_folder

        # Find simulation file (required)
        sim_files = list(search_dir.glob("*.sim"))
        if sim_files:
            self.context.sim_file = sim_files[0]
            logger.info(f"Found sim file: {self.context.sim_file.name}")
        else:
            self.context.warnings.append("No .sim file found in models/")

        # Find FEM file
        fem_files = list(search_dir.glob("*.fem"))
        if fem_files:
            self.context.fem_file = fem_files[0]
            logger.info(f"Found fem file: {self.context.fem_file.name}")

        # Find part file
        prt_files = [f for f in search_dir.glob("*.prt") if "_i.prt" not in f.name.lower()]
        if prt_files:
            self.context.prt_file = prt_files[0]
            logger.info(f"Found prt file: {self.context.prt_file.name}")

        # Find idealized part (CRITICAL!)
        idealized_files = list(search_dir.glob("*_i.prt")) + list(search_dir.glob("*_I.prt"))
        if idealized_files:
            self.context.idealized_prt_file = idealized_files[0]
            logger.info(f"Found idealized prt: {self.context.idealized_prt_file.name}")
        else:
            self.context.warnings.append(
                "No idealized part (*_i.prt) found - mesh may not update during optimization!"
            )

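The discovery filters above separate regular parts from the idealized part: anything ending in `_i.prt` is excluded from the regular `.prt` pick and captured separately. A standalone restatement of that filter, with hypothetical file names:

```python
from pathlib import PurePath

# Restates the _discover_files() name filters on invented file names:
# regular parts exclude *_i.prt; the idealized part is matched separately.
names = ["bracket.prt", "bracket_i.prt", "bracket.sim", "bracket.fem"]

regular = [n for n in names if PurePath(n).suffix == ".prt" and "_i.prt" not in n.lower()]
idealized = [n for n in names if n.lower().endswith("_i.prt")]
```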
    def _parse_config(self) -> None:
        """Parse intake.yaml if present."""
        config_path = self.inbox_folder / "intake.yaml"

        if config_path.exists():
            try:
                self.context.preconfig = IntakeConfig.from_yaml(config_path)
                logger.info("Loaded intake.yaml configuration")

                # Update study name if specified
                if self.context.preconfig.study and self.context.preconfig.study.name:
                    self.context.study_name = self.context.preconfig.study.name
                    self.study_name = self.context.study_name
                    self.study_dir = self.studies_dir / self.study_name

            except Exception as e:
                self.context.warnings.append(f"Failed to parse intake.yaml: {e}")
                logger.warning(f"Failed to parse intake.yaml: {e}")
        else:
            logger.info("No intake.yaml found, will use interview mode")

    def _extract_context_files(self) -> None:
        """Extract text from context files."""
        context_dir = self.inbox_folder / "context"

        # Read goals.md
        goals_path = context_dir / "goals.md"
        if goals_path.exists():
            self.context.goals_text = goals_path.read_text(encoding="utf-8")
            logger.info("Loaded goals.md")

        # Read constraints.txt
        constraints_path = context_dir / "constraints.txt"
        if constraints_path.exists():
            self.context.constraints_text = constraints_path.read_text(encoding="utf-8")
            logger.info("Loaded constraints.txt")

        # Read any .txt or .md files in context/
        if context_dir.exists():
            for txt_file in context_dir.glob("*.txt"):
                if txt_file.name != "constraints.txt":
                    content = txt_file.read_text(encoding="utf-8")
                    if self.context.notes_text:
                        self.context.notes_text += f"\n\n--- {txt_file.name} ---\n{content}"
                    else:
                        self.context.notes_text = content

        # Extract PDF text (basic implementation)
        # TODO: Add PyMuPDF and Claude Vision integration
        for pdf_path in context_dir.glob("*.pdf") if context_dir.exists() else []:
            try:
                text = self._extract_pdf_text(pdf_path)
                if text:
                    self.context.requirements_text = text
                    logger.info(f"Extracted text from {pdf_path.name}")
            except Exception as e:
                self.context.warnings.append(f"Failed to extract PDF {pdf_path.name}: {e}")

    def _extract_pdf_text(self, pdf_path: Path) -> Optional[str]:
        """Extract text from PDF using PyMuPDF if available."""
        try:
            import fitz  # PyMuPDF

            doc = fitz.open(pdf_path)
            text_parts = []

            for page in doc:
                text_parts.append(page.get_text())

            doc.close()
            return "\n".join(text_parts)

        except ImportError:
            logger.warning("PyMuPDF not installed, skipping PDF extraction")
            return None
        except Exception as e:
            logger.warning(f"PDF extraction failed: {e}")
            return None

    def _copy_model_files(self) -> None:
        """Copy model files to study directory."""

        # Create study directory structure
        model_dir = self.study_dir / "1_model"
        model_dir.mkdir(parents=True, exist_ok=True)

        (self.study_dir / "2_iterations").mkdir(exist_ok=True)
        (self.study_dir / "3_results").mkdir(exist_ok=True)

        # Copy files
        files_to_copy = [
            self.context.sim_file,
            self.context.fem_file,
            self.context.prt_file,
            self.context.idealized_prt_file,
        ]

        for src in files_to_copy:
            if src and src.exists():
                dst = model_dir / src.name
                if not dst.exists():
                    shutil.copy2(src, dst)
                    logger.info(f"Copied: {src.name}")
                else:
                    logger.info(f"Already exists: {src.name}")

        # Update paths to point to copied files
        if self.context.sim_file:
            self.context.sim_file = model_dir / self.context.sim_file.name
        if self.context.fem_file:
            self.context.fem_file = model_dir / self.context.fem_file.name
        if self.context.prt_file:
            self.context.prt_file = model_dir / self.context.prt_file.name
        if self.context.idealized_prt_file:
            self.context.idealized_prt_file = model_dir / self.context.idealized_prt_file.name

    def _run_introspection(self) -> None:
        """Run NX model introspection."""

        if not self.context.sim_file or not self.context.sim_file.exists():
            self.context.warnings.append("Cannot introspect - no sim file")
            return

        introspection = IntrospectionData(timestamp=datetime.now())

        try:
            # Try to use existing introspection modules
            from optimization_engine.extractors.introspect_part import introspect_part_expressions

            # Introspect part for expressions
            if self.context.prt_file and self.context.prt_file.exists():
                expressions = introspect_part_expressions(str(self.context.prt_file))

                for expr in expressions:
                    is_candidate = self._is_design_candidate(expr["name"], expr.get("value"))
                    introspection.expressions.append(
                        ExpressionInfo(
                            name=expr["name"],
                            value=expr.get("value"),
                            units=expr.get("units"),
                            formula=expr.get("formula"),
                            type=expr.get("type", "Number"),
                            is_design_candidate=is_candidate,
                            confidence=ConfidenceLevel.HIGH
                            if is_candidate
                            else ConfidenceLevel.MEDIUM,
                        )
                    )

            introspection.success = True
            logger.info(f"Introspected {len(introspection.expressions)} expressions")

        except ImportError:
            logger.warning("Introspection module not available, using fallback")
            introspection.success = False
            introspection.error = "Introspection module not available"
        except Exception as e:
            logger.error(f"Introspection failed: {e}")
            introspection.success = False
            introspection.error = str(e)

        self.context.introspection = introspection

    def _is_design_candidate(self, name: str, value: Optional[float]) -> bool:
        """Check if an expression looks like a design variable candidate."""

        # Skip if no value or non-numeric
        if value is None:
            return False

        # Skip system/reference expressions
        if name.startswith("p") and name[1:].isdigit():
            return False

        # Skip mass-related outputs (not inputs)
        if "mass" in name.lower() and "input" not in name.lower():
            return False

        # Look for typical design parameter names
        design_keywords = [
            "thickness",
            "width",
            "height",
            "length",
            "radius",
            "diameter",
            "angle",
            "offset",
            "depth",
            "size",
            "span",
            "pitch",
            "gap",
            "rib",
            "flange",
            "web",
            "wall",
            "fillet",
            "chamfer",
        ]

        name_lower = name.lower()
        return any(kw in name_lower for kw in design_keywords)

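The candidate heuristic above can be exercised standalone: auto-generated `p<N>` expressions and mass outputs are rejected, and everything else is accepted only when its name contains a design keyword. The keyword list here is abbreviated from the method and the expression names are invented:

```python
# Self-contained restatement of _is_design_candidate(), with an
# abbreviated keyword list, for a quick sanity check.
design_keywords = ["thickness", "width", "height", "radius", "fillet"]

def is_design_candidate(name, value):
    if value is None:
        return False                 # no numeric value to bound
    if name.startswith("p") and name[1:].isdigit():
        return False                 # auto-generated p0, p1, ... expressions
    if "mass" in name.lower() and "input" not in name.lower():
        return False                 # mass is an output, not an input
    return any(kw in name.lower() for kw in design_keywords)
```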
    def _run_baseline_solve(self) -> None:
        """Run baseline FEA solve to get actual values."""

        if not self.context.introspection:
            self.context.introspection = IntrospectionData(timestamp=datetime.now())

        baseline = BaselineResult()

        try:
            from optimization_engine.nx.solver import NXSolver

            solver = NXSolver()
            model_dir = self.context.sim_file.parent

            result = solver.run_simulation(
                sim_file=self.context.sim_file,
                working_dir=model_dir,
                expression_updates={},  # No updates for baseline
                cleanup=True,
            )

            if result["success"]:
                baseline.success = True
                baseline.solve_time_seconds = result.get("solve_time", 0)

                # Extract results from OP2
                op2_file = result.get("op2_file")
                if op2_file and Path(op2_file).exists():
                    self._extract_baseline_results(baseline, Path(op2_file), model_dir)

                logger.info(f"Baseline solve complete: {baseline.get_summary()}")
            else:
                baseline.success = False
                baseline.error = result.get("error", "Unknown error")
                logger.warning(f"Baseline solve failed: {baseline.error}")

        except ImportError:
            logger.warning("NXSolver not available, skipping baseline")
            baseline.success = False
            baseline.error = "NXSolver not available"
        except Exception as e:
            logger.error(f"Baseline solve failed: {e}")
            baseline.success = False
            baseline.error = str(e)

        self.context.introspection.baseline = baseline

    def _extract_baseline_results(
        self, baseline: BaselineResult, op2_file: Path, model_dir: Path
    ) -> None:
        """Extract results from OP2 file."""

        try:
            # Try to extract displacement
            from optimization_engine.extractors.extract_displacement import extract_displacement

            disp_result = extract_displacement(op2_file, subcase=1)
            baseline.max_displacement_mm = disp_result.get("max_displacement")
        except Exception as e:
            logger.debug(f"Displacement extraction failed: {e}")

        try:
            # Try to extract stress
            from optimization_engine.extractors.extract_von_mises_stress import extract_solid_stress

            stress_result = extract_solid_stress(op2_file, subcase=1)
            baseline.max_stress_mpa = stress_result.get("max_von_mises")
        except Exception as e:
            logger.debug(f"Stress extraction failed: {e}")

        try:
            # Try to extract mass from BDF
            from optimization_engine.extractors.bdf_mass_extractor import extract_mass_from_bdf

            dat_files = list(model_dir.glob("*.dat"))
            if dat_files:
                baseline.mass_kg = extract_mass_from_bdf(str(dat_files[0]))
        except Exception as e:
            logger.debug(f"Mass extraction failed: {e}")

    def _generate_suggestions(self) -> None:
        """Generate intelligent suggestions based on all context."""

        self._generate_dv_suggestions()
        self._generate_objective_suggestions()
        self._generate_constraint_suggestions()
        self._query_lac()

    def _generate_dv_suggestions(self) -> None:
        """Generate design variable suggestions."""

        suggestions: Dict[str, DVSuggestion] = {}

        # From introspection
        if self.context.introspection:
            for expr in self.context.introspection.get_design_candidates():
                if expr.value is not None and isinstance(expr.value, (int, float)):
                    # Calculate suggested bounds (50% to 150% of current value)
                    if expr.value > 0:
                        bounds = (expr.value * 0.5, expr.value * 1.5)
                    else:
                        bounds = (expr.value * 1.5, expr.value * 0.5)

                    suggestions[expr.name] = DVSuggestion(
                        name=expr.name,
                        current_value=expr.value,
                        suggested_bounds=bounds,
                        units=expr.units,
                        confidence=expr.confidence,
                        reason=f"Numeric expression with value {expr.value}",
                        source="introspection",
                    )

        # Override/add from preconfig
        if self.context.preconfig and self.context.preconfig.design_variables:
            for dv in self.context.preconfig.design_variables:
                if dv.name in suggestions:
                    # Update existing suggestion
                    suggestions[dv.name].suggested_bounds = dv.bounds
                    suggestions[dv.name].units = dv.units or suggestions[dv.name].units
                    suggestions[dv.name].source = "preconfig"
                    suggestions[dv.name].confidence = ConfidenceLevel.HIGH
                else:
                    # Add new suggestion
                    suggestions[dv.name] = DVSuggestion(
                        name=dv.name,
                        suggested_bounds=dv.bounds,
                        units=dv.units,
                        confidence=ConfidenceLevel.HIGH,
                        reason="Specified in intake.yaml",
                        source="preconfig",
                    )

        self.context.suggested_dvs = list(suggestions.values())
        logger.info(f"Generated {len(self.context.suggested_dvs)} DV suggestions")

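The 50%-150% bound rule above swaps the factors for negative values so the lower bound stays below the upper bound. Restated on its own (note that a value of exactly zero degenerates to `(0.0, 0.0)`, which the method never hits because zero fails the `> 0` branch with swapped factors still yielding a valid, if empty, range):

```python
# Restates the bound heuristic used in _generate_dv_suggestions().
def suggested_bounds(value):
    if value > 0:
        return (value * 0.5, value * 1.5)
    return (value * 1.5, value * 0.5)  # negative: swap factors to keep low < high
```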
    def _generate_objective_suggestions(self) -> None:
        """Generate objective suggestions from context."""

        suggestions = []

        # From preconfig
        if self.context.preconfig and self.context.preconfig.objectives:
            obj = self.context.preconfig.objectives.primary
            extractor = self._get_extractor_for_target(obj.target)
            suggestions.append(
                ObjectiveSuggestion(
                    name=obj.target,
                    goal=obj.goal,
                    extractor=extractor,
                    confidence=ConfidenceLevel.HIGH,
                    reason="Specified in intake.yaml",
                    source="preconfig",
                )
            )

        # From goals text (simple keyword matching)
        elif self.context.goals_text:
            goals_lower = self.context.goals_text.lower()

            if "minimize" in goals_lower and "mass" in goals_lower:
                suggestions.append(
                    ObjectiveSuggestion(
                        name="mass",
                        goal="minimize",
                        extractor="extract_mass_from_bdf",
                        confidence=ConfidenceLevel.MEDIUM,
                        reason="Found 'minimize mass' in goals",
                        source="goals",
                    )
                )
            elif "minimize" in goals_lower and "weight" in goals_lower:
                suggestions.append(
                    ObjectiveSuggestion(
                        name="mass",
                        goal="minimize",
                        extractor="extract_mass_from_bdf",
                        confidence=ConfidenceLevel.MEDIUM,
                        reason="Found 'minimize weight' in goals",
                        source="goals",
                    )
                )

            if "maximize" in goals_lower and "stiffness" in goals_lower:
                suggestions.append(
                    ObjectiveSuggestion(
                        name="stiffness",
                        goal="maximize",
                        extractor="extract_displacement",  # Inverse of displacement
                        confidence=ConfidenceLevel.MEDIUM,
                        reason="Found 'maximize stiffness' in goals",
                        source="goals",
                    )
                )

        self.context.suggested_objectives = suggestions

    def _generate_constraint_suggestions(self) -> None:
        """Generate constraint suggestions from context."""

        suggestions = []

        # From preconfig
        if self.context.preconfig and self.context.preconfig.constraints:
            for const in self.context.preconfig.constraints:
                suggestions.append(
                    ConstraintSuggestion(
                        name=const.type,
                        type="less_than" if "max" in const.type else "greater_than",
                        suggested_threshold=const.threshold,
                        units=const.units,
                        confidence=ConfidenceLevel.HIGH,
                        reason="Specified in intake.yaml",
                        source="preconfig",
                    )
                )

        # From requirements text
        if self.context.requirements_text:
            # Simple pattern matching for constraints
            text = self.context.requirements_text

            # Look for stress limits
            stress_pattern = r"(?:max(?:imum)?|stress)\s*[:<]?\s*(\d+(?:\.\d+)?)\s*(?:MPa|mpa)"
            matches = re.findall(stress_pattern, text, re.IGNORECASE)
            if matches:
                suggestions.append(
                    ConstraintSuggestion(
                        name="max_stress",
                        type="less_than",
                        suggested_threshold=float(matches[0]),
                        units="MPa",
                        confidence=ConfidenceLevel.MEDIUM,
                        reason=f"Found stress limit in requirements: {matches[0]} MPa",
                        source="requirements",
                    )
                )

            # Look for displacement limits
            disp_pattern = (
                r"(?:max(?:imum)?|displacement|deflection)\s*[:<]?\s*(\d+(?:\.\d+)?)\s*(?:mm|MM)"
            )
            matches = re.findall(disp_pattern, text, re.IGNORECASE)
            if matches:
                suggestions.append(
                    ConstraintSuggestion(
                        name="max_displacement",
                        type="less_than",
                        suggested_threshold=float(matches[0]),
                        units="mm",
                        confidence=ConfidenceLevel.MEDIUM,
                        reason=f"Found displacement limit in requirements: {matches[0]} mm",
                        source="requirements",
                    )
                )

        self.context.suggested_constraints = suggestions

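The two requirement-scanning patterns above are easy to misjudge by eye, so here they are exercised on an invented requirements sentence; the patterns themselves are copied from the method:

```python
import re

# Patterns copied from _generate_constraint_suggestions(); sample text invented.
stress_pattern = r"(?:max(?:imum)?|stress)\s*[:<]?\s*(\d+(?:\.\d+)?)\s*(?:MPa|mpa)"
disp_pattern = r"(?:max(?:imum)?|displacement|deflection)\s*[:<]?\s*(\d+(?:\.\d+)?)\s*(?:mm|MM)"

text = "Allowable stress: 250 MPa. Deflection < 2.5 mm under load case 1."

stress_hits = re.findall(stress_pattern, text, re.IGNORECASE)  # first hit becomes the threshold
disp_hits = re.findall(disp_pattern, text, re.IGNORECASE)
```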
    def _get_extractor_for_target(self, target: str) -> str:
        """Map optimization target to extractor function."""
        extractors = {
            "mass": "extract_mass_from_bdf",
            "displacement": "extract_displacement",
            "stress": "extract_solid_stress",
            "frequency": "extract_frequency",
            "stiffness": "extract_displacement",  # Inverse
            "strain_energy": "extract_strain_energy",
        }
        return extractors.get(target.lower(), f"extract_{target}")

    def _query_lac(self) -> None:
        """Query Learning Atomizer Core for similar studies."""

        try:
            from knowledge_base.lac import get_lac

            lac = get_lac()

            # Build query from context
            query_parts = [self.study_name]
            if self.context.goals_text:
                query_parts.append(self.context.goals_text[:200])

            query = " ".join(query_parts)

            # Get similar studies
            similar = lac.query_similar_optimizations(query)

            # Get method recommendation
            n_objectives = 1
            if self.context.preconfig and self.context.preconfig.objectives:
                n_objectives = len(self.context.preconfig.objectives.all_objectives)

            recommendation = lac.get_best_method_for(
                geometry_type="unknown", n_objectives=n_objectives
            )

            if recommendation:
                self.context.recommended_method = recommendation.get("method")

            logger.info(f"LAC query complete: {len(similar)} similar studies found")

        except ImportError:
            logger.debug("LAC not available")
        except Exception as e:
            logger.debug(f"LAC query failed: {e}")

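The target-to-extractor mapping above falls back to a derived `extract_<target>` name for unknown targets. A minimal sketch of that lookup, with "buckling" as an invented target to show the fallback path:

```python
# Abbreviated restatement of _get_extractor_for_target()'s lookup-with-fallback.
extractors = {
    "mass": "extract_mass_from_bdf",
    "displacement": "extract_displacement",
    "stress": "extract_solid_stress",
}

def get_extractor(target):
    # Known targets resolve via the table; unknown ones derive a name.
    return extractors.get(target.lower(), f"extract_{target}")
```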
    def _save_context(self) -> None:
        """Save assembled context to study directory."""

        # Ensure study directory exists
        self.study_dir.mkdir(parents=True, exist_ok=True)

        # Save context JSON
        context_path = self.study_dir / "0_intake" / "study_context.json"
        context_path.parent.mkdir(exist_ok=True)
        self.context.save(context_path)

        # Save introspection report
        if self.context.introspection:
            introspection_path = self.study_dir / "0_intake" / "introspection.json"
            import json

            with open(introspection_path, "w") as f:
                json.dump(self.context.introspection.to_dict(), f, indent=2)

        # Copy original context files
        intake_dir = self.study_dir / "0_intake" / "original_context"
        intake_dir.mkdir(parents=True, exist_ok=True)

        context_source = self.inbox_folder / "context"
        if context_source.exists():
            for f in context_source.iterdir():
                if f.is_file():
                    shutil.copy2(f, intake_dir / f.name)

        # Copy intake.yaml
        intake_yaml = self.inbox_folder / "intake.yaml"
        if intake_yaml.exists():
            shutil.copy2(intake_yaml, self.study_dir / "0_intake" / "intake.yaml")

        logger.info(f"Saved context to {self.study_dir / '0_intake'}")


def process_intake(
    inbox_folder: Path,
    run_baseline: bool = True,
    progress_callback: Optional[Callable[[str, float], None]] = None,
) -> StudyContext:
    """
    Convenience function to process an intake folder.

    Args:
        inbox_folder: Path to inbox folder
        run_baseline: Run baseline solve
        progress_callback: Optional progress callback

    Returns:
        Complete StudyContext
    """
    processor = IntakeProcessor(inbox_folder, progress_callback=progress_callback)
    return processor.process(run_baseline=run_baseline)

@@ -70,15 +70,15 @@ def extract_part_mass(theSession, part, output_dir):
     import json

     results = {
-        'part_file': part.Name,
-        'mass_kg': 0.0,
-        'mass_g': 0.0,
-        'volume_mm3': 0.0,
-        'surface_area_mm2': 0.0,
-        'center_of_gravity_mm': [0.0, 0.0, 0.0],
-        'num_bodies': 0,
-        'success': False,
-        'error': None
+        "part_file": part.Name,
+        "mass_kg": 0.0,
+        "mass_g": 0.0,
+        "volume_mm3": 0.0,
+        "surface_area_mm2": 0.0,
+        "center_of_gravity_mm": [0.0, 0.0, 0.0],
+        "num_bodies": 0,
+        "success": False,
+        "error": None,
     }

     try:
@@ -88,10 +88,10 @@ def extract_part_mass(theSession, part, output_dir):
         if body.IsSolidBody:
             bodies.append(body)

-    results['num_bodies'] = len(bodies)
+    results["num_bodies"] = len(bodies)

     if not bodies:
-        results['error'] = "No solid bodies found"
+        results["error"] = "No solid bodies found"
         raise ValueError("No solid bodies found in part")

     # Get the measure manager
@@ -104,30 +104,30 @@ def extract_part_mass(theSession, part, output_dir):
         uc.GetBase("Area"),
         uc.GetBase("Volume"),
         uc.GetBase("Mass"),
-        uc.GetBase("Length")
+        uc.GetBase("Length"),
     ]

     # Create mass properties measurement
     measureBodies = measureManager.NewMassProperties(mass_units, 0.99, bodies)

     if measureBodies:
-        results['mass_kg'] = measureBodies.Mass
-        results['mass_g'] = results['mass_kg'] * 1000.0
+        results["mass_kg"] = measureBodies.Mass
+        results["mass_g"] = results["mass_kg"] * 1000.0

         try:
-            results['volume_mm3'] = measureBodies.Volume
+            results["volume_mm3"] = measureBodies.Volume
         except:
             pass

         try:
-            results['surface_area_mm2'] = measureBodies.Area
+            results["surface_area_mm2"] = measureBodies.Area
         except:
             pass

         try:
             cog = measureBodies.Centroid
             if cog:
-                results['center_of_gravity_mm'] = [cog.X, cog.Y, cog.Z]
+                results["center_of_gravity_mm"] = [cog.X, cog.Y, cog.Z]
         except:
             pass

@@ -136,26 +136,26 @@ def extract_part_mass(theSession, part, output_dir):
         except:
             pass

-        results['success'] = True
+        results["success"] = True

     except Exception as e:
-        results['error'] = str(e)
-        results['success'] = False
+        results["error"] = str(e)
+        results["success"] = False

     # Write results to JSON file
     output_file = os.path.join(output_dir, "_temp_part_properties.json")
-    with open(output_file, 'w') as f:
+    with open(output_file, "w") as f:
         json.dump(results, f, indent=2)

     # Write simple mass value for backward compatibility
     mass_file = os.path.join(output_dir, "_temp_mass.txt")
-    with open(mass_file, 'w') as f:
-        f.write(str(results['mass_kg']))
+    with open(mass_file, "w") as f:
+        f.write(str(results["mass_kg"]))

-    if not results['success']:
-        raise ValueError(results['error'])
+    if not results["success"]:
+        raise ValueError(results["error"])

-    return results['mass_kg']
+    return results["mass_kg"]


 def find_or_open_part(theSession, part_path):
@@ -176,7 +176,7 @@ def find_or_open_part(theSession, part_path):
             pass
 
     # Not found, open it
-    markId = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, f'Load {part_name}')
+    markId = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, f"Load {part_name}")
     part, partLoadStatus = theSession.Parts.Open(part_path)
     partLoadStatus.Dispose()
     return part, False
@@ -194,26 +194,28 @@ def main(args):
     """
     if len(args) < 1:
         print("ERROR: No .sim file path provided")
-        print("Usage: run_journal.exe solve_simulation.py <sim_file_path> [solution_name] [expr1=val1] ...")
+        print(
+            "Usage: run_journal.exe solve_simulation.py <sim_file_path> [solution_name] [expr1=val1] ..."
+        )
         return False
 
     sim_file_path = args[0]
-    solution_name = args[1] if len(args) > 1 and args[1] != 'None' else None
+    solution_name = args[1] if len(args) > 1 and args[1] != "None" else None
 
     # Parse expression updates
     expression_updates = {}
     for arg in args[2:]:
-        if '=' in arg:
-            name, value = arg.split('=', 1)
+        if "=" in arg:
+            name, value = arg.split("=", 1)
             expression_updates[name] = float(value)
 
     # Get working directory
     working_dir = os.path.dirname(os.path.abspath(sim_file_path))
     sim_filename = os.path.basename(sim_file_path)
 
-    print(f"[JOURNAL] " + "="*60)
+    print(f"[JOURNAL] " + "=" * 60)
     print(f"[JOURNAL] NX SIMULATION SOLVER (Assembly FEM Workflow)")
-    print(f"[JOURNAL] " + "="*60)
+    print(f"[JOURNAL] " + "=" * 60)
     print(f"[JOURNAL] Simulation: {sim_filename}")
     print(f"[JOURNAL] Working directory: {working_dir}")
     print(f"[JOURNAL] Solution: {solution_name or 'Solution 1'}")
@@ -226,7 +228,9 @@ def main(args):
 
     # Set load options
     theSession.Parts.LoadOptions.LoadLatest = False
-    theSession.Parts.LoadOptions.ComponentLoadMethod = NXOpen.LoadOptions.LoadMethod.FromDirectory
+    theSession.Parts.LoadOptions.ComponentLoadMethod = (
+        NXOpen.LoadOptions.LoadMethod.FromDirectory
+    )
     theSession.Parts.LoadOptions.SetSearchDirectories([working_dir], [True])
     theSession.Parts.LoadOptions.ComponentsToLoad = NXOpen.LoadOptions.LoadComponents.All
     theSession.Parts.LoadOptions.PartLoadOption = NXOpen.LoadOptions.LoadOption.FullyLoad
@@ -240,7 +244,7 @@ def main(args):
         pass
 
     # Check for assembly FEM files
-    afm_files = [f for f in os.listdir(working_dir) if f.endswith('.afm')]
+    afm_files = [f for f in os.listdir(working_dir) if f.endswith(".afm")]
     is_assembly = len(afm_files) > 0
 
     if is_assembly and expression_updates:
@@ -262,11 +266,14 @@ def main(args):
     except Exception as e:
         print(f"[JOURNAL] FATAL ERROR: {e}")
         import traceback
+
         traceback.print_exc()
         return False
 
 
-def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expression_updates, working_dir):
+def solve_assembly_fem_workflow(
+    theSession, sim_file_path, solution_name, expression_updates, working_dir
+):
     """
     Full assembly FEM workflow based on recorded NX journal.
 
@@ -285,8 +292,7 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
     sim_file_full_path = os.path.join(working_dir, sim_filename)
     print(f"[JOURNAL] Opening SIM file: {sim_filename}")
     basePart, partLoadStatus = theSession.Parts.OpenActiveDisplay(
-        sim_file_full_path,
-        NXOpen.DisplayPartOption.AllowAdditional
+        sim_file_full_path, NXOpen.DisplayPartOption.AllowAdditional
     )
     partLoadStatus.Dispose()
 
@@ -330,7 +336,9 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
         print(f"[JOURNAL] WARNING: M1_Blank_fem1_i.prt not found!")
 
     # Load M1_Vertical_Support_Skeleton_fem1_i.prt (CRITICAL: idealized geometry for support)
-    skeleton_idealized_prt_path = os.path.join(working_dir, "M1_Vertical_Support_Skeleton_fem1_i.prt")
+    skeleton_idealized_prt_path = os.path.join(
+        working_dir, "M1_Vertical_Support_Skeleton_fem1_i.prt"
+    )
     if os.path.exists(skeleton_idealized_prt_path):
         print(f"[JOURNAL] Loading M1_Vertical_Support_Skeleton_fem1_i.prt...")
         part3_skel, was_loaded = find_or_open_part(theSession, skeleton_idealized_prt_path)
@@ -347,11 +355,13 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
     # Find and switch to M1_Blank part
     try:
         part3 = theSession.Parts.FindObject("M1_Blank")
-        markId3 = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Change Displayed Part")
+        markId3 = theSession.SetUndoMark(
+            NXOpen.Session.MarkVisibility.Visible, "Change Displayed Part"
+        )
         status1, partLoadStatus3 = theSession.Parts.SetActiveDisplay(
             part3,
             NXOpen.DisplayPartOption.AllowAdditional,
-            NXOpen.PartDisplayPartWorkPartOption.UseLast
+            NXOpen.PartDisplayPartWorkPartOption.UseLast,
         )
         partLoadStatus3.Dispose()
 
@@ -366,10 +376,10 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
 
         # Write expressions to a temp file and import (more reliable than editing one by one)
         exp_file_path = os.path.join(working_dir, "_temp_expressions.exp")
-        with open(exp_file_path, 'w') as f:
+        with open(exp_file_path, "w") as f:
             for expr_name, expr_value in expression_updates.items():
                 # Determine unit
-                if 'angle' in expr_name.lower() or 'vertical' in expr_name.lower():
+                if "angle" in expr_name.lower() or "vertical" in expr_name.lower():
                     unit_str = "Degrees"
                 else:
                     unit_str = "MilliMeter"
@@ -377,12 +387,13 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
                 print(f"[JOURNAL] {expr_name} = {expr_value} ({unit_str})")
 
         print(f"[JOURNAL] Importing expressions from file...")
-        markId_import = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Import Expressions")
+        markId_import = theSession.SetUndoMark(
+            NXOpen.Session.MarkVisibility.Visible, "Import Expressions"
+        )
 
         try:
             expModified, errorMessages = workPart.Expressions.ImportFromFile(
-                exp_file_path,
-                NXOpen.ExpressionCollection.ImportMode.Replace
+                exp_file_path, NXOpen.ExpressionCollection.ImportMode.Replace
             )
             print(f"[JOURNAL] Expressions imported: {expModified} modified")
             if errorMessages:
@@ -390,14 +401,18 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
 
             # Update geometry after import
             print(f"[JOURNAL] Rebuilding M1_Blank geometry...")
-            markId_update = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Invisible, "NX update")
+            markId_update = theSession.SetUndoMark(
+                NXOpen.Session.MarkVisibility.Invisible, "NX update"
+            )
             nErrs = theSession.UpdateManager.DoUpdate(markId_update)
             theSession.DeleteUndoMark(markId_update, "NX update")
             print(f"[JOURNAL] M1_Blank geometry rebuilt ({nErrs} errors)")
 
             # CRITICAL: Save M1_Blank after geometry update so FEM can read updated geometry
             print(f"[JOURNAL] Saving M1_Blank...")
-            partSaveStatus_blank = workPart.Save(NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue)
+            partSaveStatus_blank = workPart.Save(
+                NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue
+            )
             partSaveStatus_blank.Dispose()
             print(f"[JOURNAL] M1_Blank saved")
 
@@ -445,11 +460,13 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
             print(f"[JOURNAL] Updating {part_name}...")
             linked_part = theSession.Parts.FindObject(part_name)
 
-            markId_linked = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, f"Update {part_name}")
+            markId_linked = theSession.SetUndoMark(
+                NXOpen.Session.MarkVisibility.Visible, f"Update {part_name}"
+            )
             status_linked, partLoadStatus_linked = theSession.Parts.SetActiveDisplay(
                 linked_part,
                 NXOpen.DisplayPartOption.AllowAdditional,
-                NXOpen.PartDisplayPartWorkPartOption.UseLast
+                NXOpen.PartDisplayPartWorkPartOption.UseLast,
             )
             partLoadStatus_linked.Dispose()
 
@@ -457,14 +474,18 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
             theSession.ApplicationSwitchImmediate("UG_APP_MODELING")
 
             # Update to propagate linked expression changes
-            markId_linked_update = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Invisible, "NX update")
+            markId_linked_update = theSession.SetUndoMark(
+                NXOpen.Session.MarkVisibility.Invisible, "NX update"
+            )
             nErrs_linked = theSession.UpdateManager.DoUpdate(markId_linked_update)
             theSession.DeleteUndoMark(markId_linked_update, "NX update")
             print(f"[JOURNAL] {part_name} geometry rebuilt ({nErrs_linked} errors)")
 
             # CRITICAL: Save part after geometry update so FEM can read updated geometry
             print(f"[JOURNAL] Saving {part_name}...")
-            partSaveStatus_linked = linked_part.Save(NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue)
+            partSaveStatus_linked = linked_part.Save(
+                NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue
+            )
             partSaveStatus_linked.Dispose()
             print(f"[JOURNAL] {part_name} saved")
 
@@ -482,7 +503,9 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
     sim_part_name = os.path.splitext(sim_filename)[0]  # e.g., "ASSY_M1_assyfem1_sim1"
     print(f"[JOURNAL] Looking for sim part: {sim_part_name}")
 
-    markId_sim = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Change Displayed Part")
+    markId_sim = theSession.SetUndoMark(
+        NXOpen.Session.MarkVisibility.Visible, "Change Displayed Part"
+    )
 
     try:
         # First try to find it among loaded parts (like recorded journal)
@@ -490,7 +513,7 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
         status_sim, partLoadStatus = theSession.Parts.SetActiveDisplay(
             simPart1,
             NXOpen.DisplayPartOption.AllowAdditional,
-            NXOpen.PartDisplayPartWorkPartOption.UseLast
+            NXOpen.PartDisplayPartWorkPartOption.UseLast,
         )
         partLoadStatus.Dispose()
         print(f"[JOURNAL] Found and activated existing sim part")
@@ -498,8 +521,7 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
         # Fallback: Open fresh if not found
         print(f"[JOURNAL] Sim part not found, opening fresh: {sim_filename}")
         basePart, partLoadStatus = theSession.Parts.OpenActiveDisplay(
-            sim_file_path,
-            NXOpen.DisplayPartOption.AllowAdditional
+            sim_file_path, NXOpen.DisplayPartOption.AllowAdditional
         )
         partLoadStatus.Dispose()
 
@@ -517,23 +539,29 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
     print(f"[JOURNAL] Updating M1_Blank_fem1...")
     try:
         component2 = component1.FindObject("COMPONENT M1_Blank_fem1 1")
-        markId_fem1 = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Make Work Part")
+        markId_fem1 = theSession.SetUndoMark(
+            NXOpen.Session.MarkVisibility.Visible, "Make Work Part"
+        )
         partLoadStatus5 = theSession.Parts.SetWorkComponent(
             component2,
             NXOpen.PartCollection.RefsetOption.Entire,
-            NXOpen.PartCollection.WorkComponentOption.Visible
+            NXOpen.PartCollection.WorkComponentOption.Visible,
         )
         workFemPart = theSession.Parts.BaseWork
         partLoadStatus5.Dispose()
 
-        markId_update1 = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Update FE Model")
+        markId_update1 = theSession.SetUndoMark(
+            NXOpen.Session.MarkVisibility.Visible, "Update FE Model"
+        )
         fEModel1 = workFemPart.FindObject("FEModel")
         fEModel1.UpdateFemodel()
         print(f"[JOURNAL] M1_Blank_fem1 updated")
 
         # CRITICAL: Save FEM file after update to persist mesh changes
         print(f"[JOURNAL] Saving M1_Blank_fem1...")
-        partSaveStatus_fem1 = workFemPart.Save(NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue)
+        partSaveStatus_fem1 = workFemPart.Save(
+            NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue
+        )
         partSaveStatus_fem1.Dispose()
         print(f"[JOURNAL] M1_Blank_fem1 saved")
     except Exception as e:
@@ -543,23 +571,29 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
     print(f"[JOURNAL] Updating M1_Vertical_Support_Skeleton_fem1...")
     try:
         component3 = component1.FindObject("COMPONENT M1_Vertical_Support_Skeleton_fem1 3")
-        markId_fem2 = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Make Work Part")
+        markId_fem2 = theSession.SetUndoMark(
+            NXOpen.Session.MarkVisibility.Visible, "Make Work Part"
+        )
         partLoadStatus6 = theSession.Parts.SetWorkComponent(
             component3,
             NXOpen.PartCollection.RefsetOption.Entire,
-            NXOpen.PartCollection.WorkComponentOption.Visible
+            NXOpen.PartCollection.WorkComponentOption.Visible,
        )
         workFemPart = theSession.Parts.BaseWork
         partLoadStatus6.Dispose()
 
-        markId_update2 = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Update FE Model")
+        markId_update2 = theSession.SetUndoMark(
+            NXOpen.Session.MarkVisibility.Visible, "Update FE Model"
+        )
         fEModel2 = workFemPart.FindObject("FEModel")
         fEModel2.UpdateFemodel()
         print(f"[JOURNAL] M1_Vertical_Support_Skeleton_fem1 updated")
 
         # CRITICAL: Save FEM file after update to persist mesh changes
         print(f"[JOURNAL] Saving M1_Vertical_Support_Skeleton_fem1...")
-        partSaveStatus_fem2 = workFemPart.Save(NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue)
+        partSaveStatus_fem2 = workFemPart.Save(
+            NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue
+        )
         partSaveStatus_fem2.Dispose()
         print(f"[JOURNAL] M1_Vertical_Support_Skeleton_fem1 saved")
     except Exception as e:
@@ -578,7 +612,7 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
     partLoadStatus8 = theSession.Parts.SetWorkComponent(
         component1,
         NXOpen.PartCollection.RefsetOption.Entire,
-        NXOpen.PartCollection.WorkComponentOption.Visible
+        NXOpen.PartCollection.WorkComponentOption.Visible,
     )
     workAssyFemPart = theSession.Parts.BaseWork
     displaySimPart = theSession.Parts.BaseDisplay
@@ -643,13 +677,17 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
             elif numMerged == 0:
                 print(f"[JOURNAL] No nodes were merged (0 returned)")
                 if numDuplicates is None:
-                    print(f"[JOURNAL] WARNING: IdentifyDuplicateNodes returned None - mesh may need display refresh")
+                    print(
+                        f"[JOURNAL] WARNING: IdentifyDuplicateNodes returned None - mesh may need display refresh"
+                    )
             else:
                 print(f"[JOURNAL] MergeDuplicateNodes returned None - batch mode limitation")
         except Exception as merge_error:
             print(f"[JOURNAL] MergeDuplicateNodes failed: {merge_error}")
             if numDuplicates is None:
-                print(f"[JOURNAL] This combined with IdentifyDuplicateNodes=None suggests display issue")
+                print(
+                    f"[JOURNAL] This combined with IdentifyDuplicateNodes=None suggests display issue"
+                )
 
         theSession.SetUndoMarkName(markId_merge, "Duplicate Nodes")
         duplicateNodesCheckBuilder1.Destroy()
@@ -658,6 +696,7 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
     except Exception as e:
         print(f"[JOURNAL] WARNING: Node merge: {e}")
         import traceback
+
         traceback.print_exc()
 
     # ==========================================================================
@@ -673,7 +712,9 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
 
     theSession.SetUndoMarkName(markId_labels, "Assembly Label Manager Dialog")
 
-    markId_labels2 = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Invisible, "Assembly Label Manager")
+    markId_labels2 = theSession.SetUndoMark(
+        NXOpen.Session.MarkVisibility.Invisible, "Assembly Label Manager"
+    )
 
     # Set offsets for each FE model occurrence
     # These offsets ensure unique node/element labels across components
@@ -720,7 +761,9 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
     print(f"[JOURNAL] STEP 5b: Saving assembly FEM after all updates...")
     try:
         # Save the assembly FEM to persist all mesh updates and node merges
-        partSaveStatus_afem = workAssyFemPart.Save(NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue)
+        partSaveStatus_afem = workAssyFemPart.Save(
+            NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue
+        )
         partSaveStatus_afem.Dispose()
         print(f"[JOURNAL] Assembly FEM saved: {workAssyFemPart.Name}")
     except Exception as e:
@@ -736,7 +779,7 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
     partLoadStatus9 = theSession.Parts.SetWorkComponent(
         NXOpen.Assemblies.Component.Null,
         NXOpen.PartCollection.RefsetOption.Entire,
-        NXOpen.PartCollection.WorkComponentOption.Visible
+        NXOpen.PartCollection.WorkComponentOption.Visible,
     )
     workSimPart = theSession.Parts.BaseWork
     partLoadStatus9.Dispose()
@@ -760,13 +803,15 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
         psolutions1,
         NXOpen.CAE.SimSolution.SolveOption.Solve,
         NXOpen.CAE.SimSolution.SetupCheckOption.CompleteCheckAndOutputErrors,
-        NXOpen.CAE.SimSolution.SolveMode.Foreground  # Use Foreground to ensure OP2 is complete
+        NXOpen.CAE.SimSolution.SolveMode.Foreground,  # Use Foreground to ensure OP2 is complete
     )
 
     theSession.DeleteUndoMark(markId_solve2, None)
     theSession.SetUndoMarkName(markId_solve, "Solve")
 
-    print(f"[JOURNAL] Solve completed: {numsolved} solved, {numfailed} failed, {numskipped} skipped")
+    print(
+        f"[JOURNAL] Solve completed: {numsolved} solved, {numfailed} failed, {numskipped} skipped"
+    )
 
     # ==========================================================================
     # STEP 7: SAVE ALL - Save all modified parts (FEM, SIM, PRT)
@@ -784,11 +829,14 @@ def solve_assembly_fem_workflow(theSession, sim_file_path, solution_name, expres
     except Exception as e:
         print(f"[JOURNAL] ERROR solving: {e}")
         import traceback
+
         traceback.print_exc()
         return False
 
 
-def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_updates, working_dir):
+def solve_simple_workflow(
+    theSession, sim_file_path, solution_name, expression_updates, working_dir
+):
     """
     Workflow for single-part simulations with optional expression updates.
 
@@ -802,8 +850,7 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
 
     # Open the .sim file
     basePart1, partLoadStatus1 = theSession.Parts.OpenActiveDisplay(
-        sim_file_path,
-        NXOpen.DisplayPartOption.AllowAdditional
+        sim_file_path, NXOpen.DisplayPartOption.AllowAdditional
     )
     partLoadStatus1.Dispose()
 
@@ -830,11 +877,11 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
         part_type = type(part).__name__
 
         # Skip FEM and SIM parts by type
-        if 'fem' in part_type.lower() or 'sim' in part_type.lower():
+        if "fem" in part_type.lower() or "sim" in part_type.lower():
             continue
 
         # Skip parts with _fem or _sim in name
-        if '_fem' in part_name or '_sim' in part_name:
+        if "_fem" in part_name or "_sim" in part_name:
             continue
 
         geom_part = part
@@ -845,25 +892,38 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
    if geom_part is None:
        print(f"[JOURNAL] Geometry part not loaded, searching for .prt file...")
        for filename in os.listdir(working_dir):
-           if filename.endswith('.prt') and '_fem' not in filename.lower() and '_sim' not in filename.lower():
+           # Skip idealized parts (_i.prt), FEM parts, and SIM parts
+           if (
+               filename.endswith(".prt")
+               and "_fem" not in filename.lower()
+               and "_sim" not in filename.lower()
+               and "_i.prt" not in filename.lower()
+           ):
                prt_path = os.path.join(working_dir, filename)
                print(f"[JOURNAL] Loading geometry part: (unknown)")
                try:
-                   geom_part, partLoadStatus = theSession.Parts.Open(prt_path)
+                   loaded_part, partLoadStatus = theSession.Parts.Open(prt_path)
                    partLoadStatus.Dispose()
-                   print(f"[JOURNAL] Geometry part loaded: {geom_part.Name}")
-                   break
+                   # Check if load actually succeeded (Parts.Open can return None)
+                   if loaded_part is not None:
+                       geom_part = loaded_part
+                       print(f"[JOURNAL] Geometry part loaded: {geom_part.Name}")
+                       break
+                   else:
+                       print(f"[JOURNAL] WARNING: Parts.Open returned None for (unknown)")
                except Exception as e:
                    print(f"[JOURNAL] WARNING: Could not load (unknown): {e}")

    if geom_part:
        try:
            # Switch to the geometry part for expression editing
-           markId_expr = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Visible, "Update Expressions")
+           markId_expr = theSession.SetUndoMark(
+               NXOpen.Session.MarkVisibility.Visible, "Update Expressions"
+           )
            status, partLoadStatus = theSession.Parts.SetActiveDisplay(
                geom_part,
                NXOpen.DisplayPartOption.AllowAdditional,
-               NXOpen.PartDisplayPartWorkPartOption.UseLast
+               NXOpen.PartDisplayPartWorkPartOption.UseLast,
            )
            partLoadStatus.Dispose()

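The None-check introduced in this hunk is a general defensive pattern: assign the result of a call that may return None to a temporary, and only promote it to the long-lived variable after the check. A minimal standalone sketch of that loop, with a hypothetical `opener` standing in for `theSession.Parts.Open`:

```python
def load_first_part(candidates, opener):
    """Return the first part that opens successfully, skipping None results."""
    for candidate in candidates:
        loaded = opener(candidate)
        # Promote only after confirming the load actually succeeded
        if loaded is not None:
            return loaded
    return None

# Hypothetical opener standing in for theSession.Parts.Open,
# which can return None instead of raising on failure.
def fake_open(path):
    return {"name": path} if path.endswith(".prt") else None

part = load_first_part(["bracket_fem.fem", "bracket.prt"], fake_open)
```

Without the intermediate `loaded` variable, a failed open would overwrite the target variable with None and mask an earlier successful load.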
@@ -874,10 +934,10 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u

            # Write expressions to temp file and import
            exp_file_path = os.path.join(working_dir, "_temp_expressions.exp")
-           with open(exp_file_path, 'w') as f:
+           with open(exp_file_path, "w") as f:
                for expr_name, expr_value in expression_updates.items():
                    # Determine unit based on name
-                   if 'angle' in expr_name.lower():
+                   if "angle" in expr_name.lower():
                        unit_str = "Degrees"
                    else:
                        unit_str = "MilliMeter"
@@ -886,8 +946,7 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u

            print(f"[JOURNAL] Importing expressions...")
            expModified, errorMessages = workPart.Expressions.ImportFromFile(
-               exp_file_path,
-               NXOpen.ExpressionCollection.ImportMode.Replace
+               exp_file_path, NXOpen.ExpressionCollection.ImportMode.Replace
            )
            print(f"[JOURNAL] Expressions modified: {expModified}")
            if errorMessages:
@@ -895,14 +954,19 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u

            # Update geometry
            print(f"[JOURNAL] Rebuilding geometry...")
-           markId_update = theSession.SetUndoMark(NXOpen.Session.MarkVisibility.Invisible, "NX update")
+           markId_update = theSession.SetUndoMark(
+               NXOpen.Session.MarkVisibility.Invisible, "NX update"
+           )
            nErrs = theSession.UpdateManager.DoUpdate(markId_update)
            theSession.DeleteUndoMark(markId_update, "NX update")
            print(f"[JOURNAL] Geometry rebuilt ({nErrs} errors)")

            # Save geometry part
            print(f"[JOURNAL] Saving geometry part...")
-           partSaveStatus_geom = workPart.Save(NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue)
+           partSaveStatus_geom = workPart.Save(
+               NXOpen.BasePart.SaveComponents.TrueValue,
+               NXOpen.BasePart.CloseAfterSave.FalseValue,
+           )
            partSaveStatus_geom.Dispose()

            # Clean up temp file
@@ -914,6 +978,7 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
        except Exception as e:
            print(f"[JOURNAL] ERROR updating expressions: {e}")
            import traceback
+
            traceback.print_exc()
    else:
        print(f"[JOURNAL] WARNING: Could not find geometry part for expression updates!")
@@ -928,13 +993,18 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
    # The chain is: .prt (geometry) -> _i.prt (idealized) -> .fem (mesh)
    idealized_part = None
    for filename in os.listdir(working_dir):
-       if '_i.prt' in filename.lower():
+       if "_i.prt" in filename.lower():
            idealized_path = os.path.join(working_dir, filename)
            print(f"[JOURNAL] Loading idealized part: (unknown)")
            try:
-               idealized_part, partLoadStatus = theSession.Parts.Open(idealized_path)
+               loaded_part, partLoadStatus = theSession.Parts.Open(idealized_path)
                partLoadStatus.Dispose()
-               print(f"[JOURNAL] Idealized part loaded: {idealized_part.Name}")
+               # Check if load actually succeeded (Parts.Open can return None)
+               if loaded_part is not None:
+                   idealized_part = loaded_part
+                   print(f"[JOURNAL] Idealized part loaded: {idealized_part.Name}")
+               else:
+                   print(f"[JOURNAL] WARNING: Parts.Open returned None for idealized part")
            except Exception as e:
                print(f"[JOURNAL] WARNING: Could not load idealized part: {e}")
            break
@@ -942,7 +1012,7 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
    # Find the FEM part
    fem_part = None
    for part in theSession.Parts:
-       if '_fem' in part.Name.lower() or part.Name.lower().endswith('.fem'):
+       if "_fem" in part.Name.lower() or part.Name.lower().endswith(".fem"):
            fem_part = part
            print(f"[JOURNAL] Found FEM part: {part.Name}")
            break
@@ -956,7 +1026,7 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
        status, partLoadStatus = theSession.Parts.SetActiveDisplay(
            fem_part,
            NXOpen.DisplayPartOption.AllowAdditional,
-           NXOpen.PartDisplayPartWorkPartOption.SameAsDisplay  # Critical fix!
+           NXOpen.PartDisplayPartWorkPartOption.SameAsDisplay,  # Critical fix!
        )
        partLoadStatus.Dispose()

@@ -972,13 +1042,17 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
        print(f"[JOURNAL] FE model updated")

        # Save FEM
-       partSaveStatus_fem = workFemPart.Save(NXOpen.BasePart.SaveComponents.TrueValue, NXOpen.BasePart.CloseAfterSave.FalseValue)
+       partSaveStatus_fem = workFemPart.Save(
+           NXOpen.BasePart.SaveComponents.TrueValue,
+           NXOpen.BasePart.CloseAfterSave.FalseValue,
+       )
        partSaveStatus_fem.Dispose()
        print(f"[JOURNAL] FEM saved")

    except Exception as e:
        print(f"[JOURNAL] ERROR updating FEM: {e}")
        import traceback
+
        traceback.print_exc()

    # =========================================================================
@@ -990,7 +1064,7 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
    status, partLoadStatus = theSession.Parts.SetActiveDisplay(
        workSimPart,
        NXOpen.DisplayPartOption.AllowAdditional,
-       NXOpen.PartDisplayPartWorkPartOption.UseLast
+       NXOpen.PartDisplayPartWorkPartOption.UseLast,
    )
    partLoadStatus.Dispose()

@@ -1016,13 +1090,15 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
        psolutions1,
        NXOpen.CAE.SimSolution.SolveOption.Solve,
        NXOpen.CAE.SimSolution.SetupCheckOption.CompleteCheckAndOutputErrors,
-       NXOpen.CAE.SimSolution.SolveMode.Foreground  # Use Foreground to wait for completion
+       NXOpen.CAE.SimSolution.SolveMode.Foreground,  # Use Foreground to wait for completion
    )

    theSession.DeleteUndoMark(markId_solve2, None)
    theSession.SetUndoMarkName(markId_solve, "Solve")

-   print(f"[JOURNAL] Solve completed: {numsolved} solved, {numfailed} failed, {numskipped} skipped")
+   print(
+       f"[JOURNAL] Solve completed: {numsolved} solved, {numfailed} failed, {numskipped} skipped"
+   )

    # Save all
    try:
@@ -1035,6 +1111,6 @@ def solve_simple_workflow(theSession, sim_file_path, solution_name, expression_u
    return numfailed == 0


-if __name__ == '__main__':
+if __name__ == "__main__":
    success = main(sys.argv[1:])
    sys.exit(0 if success else 1)
optimization_engine/reporting/html_report.py (new file, 1042 lines): diff suppressed because it is too large.
@@ -85,7 +85,7 @@
    "created_by": {
        "type": "string",
        "description": "Who/what created the spec",
-       "enum": ["canvas", "claude", "api", "migration", "manual"]
+       "enum": ["canvas", "claude", "api", "migration", "manual", "dashboard_intake"]
    },
    "modified_by": {
        "type": "string",
@@ -114,6 +114,17 @@
    "engineering_context": {
        "type": "string",
        "description": "Real-world engineering scenario"
+   },
+   "status": {
+       "type": "string",
+       "description": "Study lifecycle status",
+       "enum": ["draft", "introspected", "configured", "validated", "ready", "running", "completed", "failed"],
+       "default": "draft"
+   },
+   "topic": {
+       "type": "string",
+       "description": "Topic folder for grouping related studies",
+       "pattern": "^[A-Za-z0-9_]+$"
    }
}
},
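The new `status` enum and `topic` pattern above constrain what a spec may contain. A small sketch of checking a spec fragment against those two rules (the allowed values are copied from the schema hunk; the example field values are invented):

```python
import re

# Allowed values copied from the schema's "status" enum and "topic" pattern
STATUS_ENUM = {"draft", "introspected", "configured", "validated",
               "ready", "running", "completed", "failed"}
TOPIC_PATTERN = re.compile(r"^[A-Za-z0-9_]+$")

# Hypothetical spec fragment for illustration
spec_fragment = {"status": "introspected", "topic": "bracket_studies"}

status_ok = spec_fragment["status"] in STATUS_ENUM
topic_ok = bool(TOPIC_PATTERN.match(spec_fragment["topic"]))
```

In practice a JSON Schema validator would enforce both at once; this only shows the two constraints in isolation.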
@@ -215,6 +226,124 @@
                "type": "boolean"
            }
        }
+       },
+       "introspection": {
+           "$ref": "#/definitions/introspection_data",
+           "description": "Model introspection results from intake workflow"
+       }
+   }
+   },
+
+   "introspection_data": {
+       "type": "object",
+       "description": "Model introspection results stored in the spec",
+       "properties": {
+           "timestamp": {
+               "type": "string",
+               "format": "date-time",
+               "description": "When introspection was run"
+           },
+           "solver_type": {
+               "type": "string",
+               "description": "Detected solver type"
+           },
+           "mass_kg": {
+               "type": "number",
+               "description": "Mass from expressions or mass properties"
+           },
+           "volume_mm3": {
+               "type": "number",
+               "description": "Volume from mass properties"
+           },
+           "expressions": {
+               "type": "array",
+               "description": "Discovered NX expressions",
+               "items": {
+                   "$ref": "#/definitions/expression_info"
+               }
+           },
+           "baseline": {
+               "$ref": "#/definitions/baseline_data",
+               "description": "Baseline FEA solve results"
+           },
+           "warnings": {
+               "type": "array",
+               "description": "Warnings from introspection",
+               "items": {
+                   "type": "string"
+               }
+           }
+       }
+   },
+
+   "expression_info": {
+       "type": "object",
+       "description": "Information about an NX expression from introspection",
+       "required": ["name"],
+       "properties": {
+           "name": {
+               "type": "string",
+               "description": "Expression name in NX"
+           },
+           "value": {
+               "type": "number",
+               "description": "Current value"
+           },
+           "units": {
+               "type": "string",
+               "description": "Physical units"
+           },
+           "formula": {
+               "type": "string",
+               "description": "Expression formula if any"
+           },
+           "is_candidate": {
+               "type": "boolean",
+               "description": "Whether this is a design variable candidate",
+               "default": false
+           },
+           "confidence": {
+               "type": "number",
+               "description": "Confidence that this is a design variable (0.0 to 1.0)",
+               "minimum": 0,
+               "maximum": 1
+           }
+       }
+   },
+
+   "baseline_data": {
+       "type": "object",
+       "description": "Results from baseline FEA solve",
+       "properties": {
+           "timestamp": {
+               "type": "string",
+               "format": "date-time",
+               "description": "When baseline was run"
+           },
+           "solve_time_seconds": {
+               "type": "number",
+               "description": "How long the solve took"
+           },
+           "mass_kg": {
+               "type": "number",
+               "description": "Computed mass from BDF/FEM"
+           },
+           "max_displacement_mm": {
+               "type": "number",
+               "description": "Max displacement result"
+           },
+           "max_stress_mpa": {
+               "type": "number",
+               "description": "Max von Mises stress"
+           },
+           "success": {
+               "type": "boolean",
+               "description": "Whether baseline solve succeeded",
+               "default": true
+           },
+           "error": {
+               "type": "string",
+               "description": "Error message if failed"
+           }
+       }
+   },
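The `expression_info` definition above carries `is_candidate` and `confidence` fields, which lets a consumer pick out likely design variables from introspection output. A minimal sketch of that filtering (field names follow the schema; the example expression data is invented):

```python
# Fields follow the expression_info definition; the example values are invented.
expressions = [
    {"name": "flange_thickness", "value": 4.0, "is_candidate": True, "confidence": 0.9},
    {"name": "rib_angle", "value": 12.0, "is_candidate": True, "confidence": 0.4},
    {"name": "total_mass", "value": 1.8, "is_candidate": False, "confidence": 0.1},
]

def candidate_names(expressions, min_confidence=0.5):
    """Names of expressions flagged as design-variable candidates with enough confidence."""
    return [e["name"] for e in expressions
            if e.get("is_candidate") and e.get("confidence", 0.0) >= min_confidence]

names = candidate_names(expressions)
```

The 0.5 threshold here is an arbitrary illustration, not a value taken from the codebase.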
optimization_engine/validation/__init__.py (new file, 31 lines)
@@ -0,0 +1,31 @@
+"""
+Atomizer Validation System
+==========================
+
+Validates study configuration before optimization starts.
+
+Components:
+- ValidationGate: Main orchestrator for validation
+- SpecChecker: Validates atomizer_spec.json
+- TestTrialRunner: Runs 2-3 test trials to verify setup
+
+Usage:
+    from optimization_engine.validation import ValidationGate
+
+    gate = ValidationGate(study_dir)
+    result = gate.validate(run_test_trials=True)
+
+    if result.passed:
+        gate.approve()  # Start optimization
+"""
+
+from .gate import ValidationGate, ValidationResult, TestTrialResult
+from .checker import SpecChecker, ValidationIssue
+
+__all__ = [
+    "ValidationGate",
+    "ValidationResult",
+    "TestTrialResult",
+    "SpecChecker",
+    "ValidationIssue",
+]
optimization_engine/validation/checker.py (new file, 454 lines)
@@ -0,0 +1,454 @@
+"""
+Specification Checker
+=====================
+
+Validates atomizer_spec.json (or optimization_config.json) for:
+- Schema compliance
+- Semantic correctness
+- Anti-pattern detection
+- Expression existence
+
+This catches configuration errors BEFORE wasting time on failed trials.
+"""
+
+from __future__ import annotations
+
+import json
+import logging
+from dataclasses import dataclass, field
+from enum import Enum
+from pathlib import Path
+from typing import List, Dict, Any, Optional
+
+logger = logging.getLogger(__name__)
+
+
+class IssueSeverity(str, Enum):
+    """Severity level for validation issues."""
+
+    ERROR = "error"      # Must fix before proceeding
+    WARNING = "warning"  # Should review, but can proceed
+    INFO = "info"        # Informational note
+
+
+@dataclass
+class ValidationIssue:
+    """A single validation issue."""
+
+    severity: IssueSeverity
+    code: str
+    message: str
+    path: Optional[str] = None  # JSON path to the issue
+    suggestion: Optional[str] = None
+
+    def __str__(self) -> str:
+        prefix = {
+            IssueSeverity.ERROR: "[ERROR]",
+            IssueSeverity.WARNING: "[WARN]",
+            IssueSeverity.INFO: "[INFO]",
+        }[self.severity]
+
+        location = f" at {self.path}" if self.path else ""
+        return f"{prefix} {self.message}{location}"
+
+
+@dataclass
+class CheckResult:
+    """Result of running the spec checker."""
+
+    valid: bool
+    issues: List[ValidationIssue] = field(default_factory=list)
+
+    @property
+    def errors(self) -> List[ValidationIssue]:
+        return [i for i in self.issues if i.severity == IssueSeverity.ERROR]
+
+    @property
+    def warnings(self) -> List[ValidationIssue]:
+        return [i for i in self.issues if i.severity == IssueSeverity.WARNING]
+
+    def add_error(self, code: str, message: str, path: str = None, suggestion: str = None):
+        self.issues.append(
+            ValidationIssue(
+                severity=IssueSeverity.ERROR,
+                code=code,
+                message=message,
+                path=path,
+                suggestion=suggestion,
+            )
+        )
+        self.valid = False
+
+    def add_warning(self, code: str, message: str, path: str = None, suggestion: str = None):
+        self.issues.append(
+            ValidationIssue(
+                severity=IssueSeverity.WARNING,
+                code=code,
+                message=message,
+                path=path,
+                suggestion=suggestion,
+            )
+        )
+
+    def add_info(self, code: str, message: str, path: str = None):
+        self.issues.append(
+            ValidationIssue(
+                severity=IssueSeverity.INFO,
+                code=code,
+                message=message,
+                path=path,
+            )
+        )
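The accumulator pattern in `CheckResult` (every `add_error` call flips `valid` to False, while warnings leave it True) can be exercised with a condensed re-implementation; this is an illustration of the behavior, not the full class, which also tracks `path` and `suggestion`:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Issue:
    severity: str
    code: str
    message: str

@dataclass
class Result:
    valid: bool
    issues: List[Issue] = field(default_factory=list)

    def add_error(self, code: str, message: str) -> None:
        self.issues.append(Issue("error", code, message))
        self.valid = False  # any error invalidates the spec

    @property
    def errors(self) -> List[Issue]:
        return [i for i in self.issues if i.severity == "error"]

result = Result(valid=True)
result.add_error("NO_OBJECTIVES", "No objectives defined")
```

Callers can therefore test `result.valid` for a go/no-go decision and iterate `result.errors` for reporting.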
+
+
+class SpecChecker:
+    """
+    Validates study specification files.
+
+    Checks:
+    1. Required fields present
+    2. Design variable bounds valid
+    3. Expressions exist in model (if introspection available)
+    4. Extractors available for objectives/constraints
+    5. Anti-patterns (mass minimization without constraints, etc.)
+    """
+
+    # Known extractors
+    KNOWN_EXTRACTORS = {
+        "extract_mass_from_bdf",
+        "extract_part_mass",
+        "extract_displacement",
+        "extract_solid_stress",
+        "extract_principal_stress",
+        "extract_frequency",
+        "extract_strain_energy",
+        "extract_temperature",
+        "extract_zernike_from_op2",
+    }
+
+    def __init__(
+        self,
+        spec_path: Optional[Path] = None,
+        available_expressions: Optional[List[str]] = None,
+    ):
+        """
+        Initialize the checker.
+
+        Args:
+            spec_path: Path to spec file (atomizer_spec.json or optimization_config.json)
+            available_expressions: List of expression names from introspection
+        """
+        self.spec_path = spec_path
+        self.available_expressions = available_expressions or []
+        self.spec: Dict[str, Any] = {}
+
+    def check(self, spec_data: Optional[Dict[str, Any]] = None) -> CheckResult:
+        """
+        Run all validation checks.
+
+        Args:
+            spec_data: Spec dict (or load from spec_path if not provided)
+
+        Returns:
+            CheckResult with all issues found
+        """
+        result = CheckResult(valid=True)
+
+        # Load spec if not provided
+        if spec_data:
+            self.spec = spec_data
+        elif self.spec_path and self.spec_path.exists():
+            with open(self.spec_path) as f:
+                self.spec = json.load(f)
+        else:
+            result.add_error("SPEC_NOT_FOUND", "No specification file found")
+            return result
+
+        # Run checks
+        self._check_required_fields(result)
+        self._check_design_variables(result)
+        self._check_objectives(result)
+        self._check_constraints(result)
+        self._check_extractors(result)
+        self._check_anti_patterns(result)
+        self._check_files(result)
+
+        return result
+
+    def _check_required_fields(self, result: CheckResult) -> None:
+        """Check that required fields are present."""
+
+        # Check for design variables
+        dvs = self.spec.get("design_variables", [])
+        if not dvs:
+            result.add_error(
+                "NO_DESIGN_VARIABLES",
+                "No design variables defined",
+                suggestion="Add at least one design variable to optimize",
+            )
+
+        # Check for objectives
+        objectives = self.spec.get("objectives", [])
+        if not objectives:
+            result.add_error(
+                "NO_OBJECTIVES",
+                "No objectives defined",
+                suggestion="Define at least one objective (e.g., minimize mass)",
+            )
+
+        # Check for simulation settings
+        sim = self.spec.get("simulation", {})
+        if not sim.get("sim_file"):
+            result.add_warning(
+                "NO_SIM_FILE", "No simulation file specified", path="simulation.sim_file"
+            )
+
+    def _check_design_variables(self, result: CheckResult) -> None:
+        """Check design variable definitions."""
+
+        dvs = self.spec.get("design_variables", [])
+
+        for i, dv in enumerate(dvs):
+            param = dv.get("parameter", dv.get("expression_name", dv.get("name", f"dv_{i}")))
+            bounds = dv.get("bounds", [])
+            path = f"design_variables[{i}]"
+
+            # Handle both formats: [min, max] or {"min": x, "max": y}
+            if isinstance(bounds, dict):
+                min_val = bounds.get("min")
+                max_val = bounds.get("max")
+            elif isinstance(bounds, (list, tuple)) and len(bounds) == 2:
+                min_val, max_val = bounds
+            else:
+                result.add_error(
+                    "INVALID_BOUNDS",
+                    f"Design variable '{param}' has invalid bounds format",
+                    path=path,
+                    suggestion="Bounds must be [min, max] or {min: x, max: y}",
+                )
+                continue
+
+            # Convert to float if strings
+            try:
+                min_val = float(min_val)
+                max_val = float(max_val)
+            except (TypeError, ValueError):
+                result.add_error(
+                    "INVALID_BOUNDS_TYPE",
+                    f"Design variable '{param}' bounds must be numeric",
+                    path=path,
+                )
+                continue
+
+            # Check bounds order
+            if min_val >= max_val:
+                result.add_error(
+                    "BOUNDS_INVERTED",
+                    f"Design variable '{param}': min ({min_val}) >= max ({max_val})",
+                    path=path,
+                    suggestion="Ensure min < max",
+                )
+
+            # Check for very wide bounds
+            if max_val > 0 and min_val > 0:
+                ratio = max_val / min_val
+                if ratio > 100:
+                    result.add_warning(
+                        "BOUNDS_TOO_WIDE",
+                        f"Design variable '{param}' has very wide bounds (ratio: {ratio:.1f}x)",
+                        path=path,
+                        suggestion="Consider narrowing bounds for faster convergence",
+                    )
+
+            # Check for very narrow bounds
+            if max_val > 0 and min_val > 0:
+                ratio = max_val / min_val
+                if ratio < 1.1:
+                    result.add_warning(
+                        "BOUNDS_TOO_NARROW",
+                        f"Design variable '{param}' has very narrow bounds (ratio: {ratio:.2f}x)",
+                        path=path,
+                        suggestion="Consider widening bounds to explore more design space",
+                    )
+
+            # Check expression exists (if introspection available)
+            if self.available_expressions and param not in self.available_expressions:
+                result.add_error(
+                    "EXPRESSION_NOT_FOUND",
+                    f"Expression '{param}' not found in model",
+                    path=path,
+                    suggestion=f"Available expressions: {', '.join(self.available_expressions[:5])}...",
+                )
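The bounds handling in `_check_design_variables` accepts two input shapes and rejects non-numeric or inverted ranges. The same normalization logic can be sketched as a standalone helper (a simplified extraction for illustration, not the checker's actual API):

```python
def normalize_bounds(bounds):
    """Accept [min, max] or {"min": x, "max": y}; return (lo, hi) floats, or None if invalid."""
    if isinstance(bounds, dict):
        lo, hi = bounds.get("min"), bounds.get("max")
    elif isinstance(bounds, (list, tuple)) and len(bounds) == 2:
        lo, hi = bounds
    else:
        return None
    try:
        lo, hi = float(lo), float(hi)
    except (TypeError, ValueError):
        return None
    # Mirror the BOUNDS_INVERTED check: reject min >= max
    return (lo, hi) if lo < hi else None
```

Accepting strings via `float()` matters because specs edited by hand (or round-tripped through a form) often carry numbers as strings.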
+
+    def _check_objectives(self, result: CheckResult) -> None:
+        """Check objective definitions."""
+
+        objectives = self.spec.get("objectives", [])
+
+        for i, obj in enumerate(objectives):
+            name = obj.get("name", f"objective_{i}")
+            # Handle both formats: "goal" or "direction"
+            goal = obj.get("goal", obj.get("direction", "")).lower()
+            path = f"objectives[{i}]"
+
+            # Check goal is valid
+            if goal not in ("minimize", "maximize"):
+                result.add_error(
+                    "INVALID_GOAL",
+                    f"Objective '{name}' has invalid goal: '{goal}'",
+                    path=path,
+                    suggestion="Use 'minimize' or 'maximize'",
+                )
+
+            # Check extraction is defined
+            extraction = obj.get("extraction", {})
+            if not extraction.get("action"):
+                result.add_warning(
+                    "NO_EXTRACTOR",
+                    f"Objective '{name}' has no extractor specified",
+                    path=path,
+                )
+
+    def _check_constraints(self, result: CheckResult) -> None:
+        """Check constraint definitions."""
+
+        constraints = self.spec.get("constraints", [])
+
+        for i, const in enumerate(constraints):
+            name = const.get("name", f"constraint_{i}")
+            const_type = const.get("type", "").lower()
+            threshold = const.get("threshold")
+            path = f"constraints[{i}]"
+
+            # Check type is valid
+            if const_type not in ("less_than", "greater_than", "equal_to"):
+                result.add_warning(
+                    "INVALID_CONSTRAINT_TYPE",
+                    f"Constraint '{name}' has unusual type: '{const_type}'",
+                    path=path,
+                    suggestion="Use 'less_than' or 'greater_than'",
+                )
+
+            # Check threshold is defined
+            if threshold is None:
+                result.add_error(
+                    "NO_THRESHOLD",
+                    f"Constraint '{name}' has no threshold defined",
+                    path=path,
+                )
+
+    def _check_extractors(self, result: CheckResult) -> None:
+        """Check that referenced extractors exist."""
+
+        # Check objective extractors
+        for obj in self.spec.get("objectives", []):
+            extraction = obj.get("extraction", {})
+            action = extraction.get("action", "")
+
+            if action and action not in self.KNOWN_EXTRACTORS:
+                result.add_warning(
+                    "UNKNOWN_EXTRACTOR",
+                    f"Extractor '{action}' is not in the standard library",
+                    suggestion="Ensure custom extractor is available",
+                )
+
+        # Check constraint extractors
+        for const in self.spec.get("constraints", []):
+            extraction = const.get("extraction", {})
+            action = extraction.get("action", "")
+
+            if action and action not in self.KNOWN_EXTRACTORS:
+                result.add_warning(
+                    "UNKNOWN_EXTRACTOR",
+                    f"Extractor '{action}' is not in the standard library",
+                )
+
+    def _check_anti_patterns(self, result: CheckResult) -> None:
+        """Check for common optimization anti-patterns."""
+
+        objectives = self.spec.get("objectives", [])
+        constraints = self.spec.get("constraints", [])
+
+        # Anti-pattern: Mass minimization without stress/displacement constraints
+        has_mass_objective = any(
+            "mass" in obj.get("name", "").lower() and obj.get("goal") == "minimize"
+            for obj in objectives
+        )
+
+        has_structural_constraint = any(
+            any(
+                kw in const.get("name", "").lower()
+                for kw in ["stress", "displacement", "deflection"]
+            )
+            for const in constraints
+        )
+
+        if has_mass_objective and not has_structural_constraint:
+            result.add_warning(
+                "MASS_NO_CONSTRAINT",
+                "Mass minimization without structural constraints",
+                suggestion="Add stress or displacement constraints to prevent over-optimization",
+            )
+
+        # Anti-pattern: Too many design variables for trial count
+        n_dvs = len(self.spec.get("design_variables", []))
+        n_trials = self.spec.get("optimization_settings", {}).get("n_trials", 100)
+
+        if n_dvs > 0 and n_trials / n_dvs < 10:
+            result.add_warning(
+                "LOW_TRIALS_PER_DV",
+                f"Only {n_trials / n_dvs:.1f} trials per design variable",
+                suggestion=f"Consider increasing trials to at least {n_dvs * 20} for better coverage",
+            )
+
+        # Anti-pattern: Too many objectives
+        n_objectives = len(objectives)
+        if n_objectives > 3:
+            result.add_warning(
+                "TOO_MANY_OBJECTIVES",
+                f"{n_objectives} objectives may lead to sparse Pareto front",
+                suggestion="Consider consolidating or using weighted objectives",
+            )
|
def _check_files(self, result: CheckResult) -> None:
|
||||||
|
"""Check that referenced files exist."""
|
||||||
|
|
||||||
|
if not self.spec_path:
|
||||||
|
return
|
||||||
|
|
||||||
|
study_dir = self.spec_path.parent.parent # Assuming spec is in 1_setup/
|
||||||
|
|
||||||
|
sim = self.spec.get("simulation", {})
|
||||||
|
sim_file = sim.get("sim_file")
|
||||||
|
|
||||||
|
if sim_file:
|
||||||
|
# Check multiple possible locations
|
||||||
|
possible_paths = [
|
||||||
|
study_dir / "1_model" / sim_file,
|
||||||
|
study_dir / "1_setup" / "model" / sim_file,
|
||||||
|
study_dir / sim_file,
|
||||||
|
]
|
||||||
|
|
||||||
|
found = any(p.exists() for p in possible_paths)
|
||||||
|
if not found:
|
||||||
|
result.add_error(
|
||||||
|
"SIM_FILE_NOT_FOUND",
|
||||||
|
f"Simulation file not found: {sim_file}",
|
||||||
|
path="simulation.sim_file",
|
||||||
|
suggestion="Ensure model files are copied to study directory",
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def validate_spec(spec_path: Path, expressions: List[str] = None) -> CheckResult:
|
||||||
|
"""
|
||||||
|
Convenience function to validate a spec file.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
spec_path: Path to spec file
|
||||||
|
expressions: List of available expressions (from introspection)
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
CheckResult with validation issues
|
||||||
|
"""
|
||||||
|
checker = SpecChecker(spec_path, expressions)
|
||||||
|
return checker.check()
|
||||||
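The anti-pattern checks above are plain arithmetic over the spec. For instance, the `LOW_TRIALS_PER_DV` heuristic can be sketched standalone (hypothetical helper name; the 10:1 ratio and the `n_dvs * 20` suggestion are taken from the code above):

```python
def low_trials_warning(n_dvs: int, n_trials: int, min_ratio: float = 10.0):
    """Mirror of the LOW_TRIALS_PER_DV heuristic: warn when the trial
    budget gives fewer than `min_ratio` trials per design variable."""
    if n_dvs > 0 and n_trials / n_dvs < min_ratio:
        return (
            f"Only {n_trials / n_dvs:.1f} trials per design variable; "
            f"consider at least {n_dvs * 20}"
        )
    return None  # budget is adequate


# 8 design variables with only 50 trials triggers the warning;
# 2 design variables with 100 trials does not.
print(low_trials_warning(8, 50))
print(low_trials_warning(2, 100))
```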
508  optimization_engine/validation/gate.py  Normal file
@@ -0,0 +1,508 @@
"""
Validation Gate
===============

The final checkpoint before optimization begins.

1. Validates the study specification
2. Runs 2-3 test trials to verify:
   - Parameters actually update the model
   - Mesh regenerates correctly
   - Extractors work
   - Results are different (not stuck)
3. Estimates runtime
4. Gets user approval

This is CRITICAL for catching the "mesh not updating" issue that
wastes hours of optimization time.
"""

from __future__ import annotations

import json
import logging
import random
import time
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path
from typing import Optional, List, Dict, Any, Callable

import numpy as np

from .checker import SpecChecker, CheckResult, IssueSeverity

logger = logging.getLogger(__name__)


@dataclass
class TestTrialResult:
    """Result of a single test trial."""

    trial_number: int
    parameters: Dict[str, float]
    objectives: Dict[str, float]
    constraints: Dict[str, float] = field(default_factory=dict)
    solve_time_seconds: float = 0.0
    success: bool = False
    error: Optional[str] = None

    def to_dict(self) -> Dict[str, Any]:
        return {
            "trial_number": self.trial_number,
            "parameters": self.parameters,
            "objectives": self.objectives,
            "constraints": self.constraints,
            "solve_time_seconds": self.solve_time_seconds,
            "success": self.success,
            "error": self.error,
        }

@dataclass
class ValidationResult:
    """Complete validation result."""

    passed: bool
    timestamp: datetime = field(default_factory=datetime.now)

    # Spec validation
    spec_check: Optional[CheckResult] = None

    # Test trials
    test_trials: List[TestTrialResult] = field(default_factory=list)
    results_vary: bool = False
    variance_by_objective: Dict[str, float] = field(default_factory=dict)

    # Runtime estimates
    avg_solve_time: Optional[float] = None
    estimated_total_runtime: Optional[float] = None

    # Summary
    errors: List[str] = field(default_factory=list)
    warnings: List[str] = field(default_factory=list)

    def add_error(self, message: str):
        self.errors.append(message)
        self.passed = False

    def add_warning(self, message: str):
        self.warnings.append(message)

    def get_summary(self) -> str:
        """Get human-readable summary."""
        lines = []

        if self.passed:
            lines.append("VALIDATION PASSED")
        else:
            lines.append("VALIDATION FAILED")

        lines.append("\nSpec Validation:")
        if self.spec_check:
            lines.append(f"  Errors: {len(self.spec_check.errors)}")
            lines.append(f"  Warnings: {len(self.spec_check.warnings)}")

        lines.append("\nTest Trials:")
        lines.append(
            f"  Completed: {len([t for t in self.test_trials if t.success])}/{len(self.test_trials)}"
        )
        lines.append(f"  Results Vary: {'Yes' if self.results_vary else 'NO - PROBLEM!'}")

        if self.variance_by_objective:
            lines.append("  Variance by Objective:")
            for obj, var in self.variance_by_objective.items():
                lines.append(f"    {obj}: {var:.6f}")

        if self.avg_solve_time:
            lines.append("\nRuntime Estimate:")
            lines.append(f"  Avg solve time: {self.avg_solve_time:.1f}s")
            if self.estimated_total_runtime:
                hours = self.estimated_total_runtime / 3600
                lines.append(f"  Est. total: {hours:.1f} hours")

        return "\n".join(lines)

    def to_dict(self) -> Dict[str, Any]:
        return {
            "passed": self.passed,
            "timestamp": self.timestamp.isoformat(),
            "spec_errors": len(self.spec_check.errors) if self.spec_check else 0,
            "spec_warnings": len(self.spec_check.warnings) if self.spec_check else 0,
            "test_trials": [t.to_dict() for t in self.test_trials],
            "results_vary": self.results_vary,
            "variance_by_objective": self.variance_by_objective,
            "avg_solve_time": self.avg_solve_time,
            "estimated_total_runtime": self.estimated_total_runtime,
            "errors": self.errors,
            "warnings": self.warnings,
        }

class ValidationGate:
    """
    Validates study setup before optimization.

    This is the critical checkpoint that prevents wasted optimization time
    by catching issues like:
    - Missing files
    - Invalid bounds
    - Mesh not updating (all results identical)
    - Broken extractors
    """

    def __init__(
        self,
        study_dir: Path,
        progress_callback: Optional[Callable[[str, float], None]] = None,
    ):
        """
        Initialize the validation gate.

        Args:
            study_dir: Path to the study directory
            progress_callback: Optional callback for progress updates
        """
        self.study_dir = Path(study_dir)
        self.progress_callback = progress_callback or (lambda m, p: None)

        # Find spec file
        self.spec_path = self._find_spec_path()
        self.spec: Dict[str, Any] = {}

        if self.spec_path and self.spec_path.exists():
            with open(self.spec_path) as f:
                self.spec = json.load(f)

    def _find_spec_path(self) -> Optional[Path]:
        """Find the specification file."""
        # Try atomizer_spec.json first (v2.0)
        candidates = [
            self.study_dir / "atomizer_spec.json",
            self.study_dir / "1_setup" / "atomizer_spec.json",
            self.study_dir / "optimization_config.json",
            self.study_dir / "1_setup" / "optimization_config.json",
        ]

        for path in candidates:
            if path.exists():
                return path

        return None

    def validate(
        self,
        run_test_trials: bool = True,
        n_test_trials: int = 3,
        available_expressions: Optional[List[str]] = None,
    ) -> ValidationResult:
        """
        Run full validation.

        Args:
            run_test_trials: Whether to run test FEA solves
            n_test_trials: Number of test trials (2-3 recommended)
            available_expressions: Expression names from introspection

        Returns:
            ValidationResult with all findings
        """
        result = ValidationResult(passed=True)

        logger.info(f"Validating study: {self.study_dir.name}")
        self._progress("Starting validation...", 0.0)

        # Step 1: Check spec file exists
        if not self.spec_path:
            result.add_error("No specification file found")
            return result

        # Step 2: Validate spec
        self._progress("Validating specification...", 0.1)
        checker = SpecChecker(self.spec_path, available_expressions)
        result.spec_check = checker.check(self.spec)

        # Add spec errors to result
        for issue in result.spec_check.errors:
            result.add_error(str(issue))
        for issue in result.spec_check.warnings:
            result.add_warning(str(issue))

        # Stop if spec has errors (unless they're non-critical)
        if result.spec_check.errors:
            self._progress("Validation failed: spec errors", 1.0)
            return result

        # Step 3: Run test trials
        if run_test_trials:
            self._progress("Running test trials...", 0.2)
            self._run_test_trials(result, n_test_trials)

        # Step 4: Calculate estimates
        self._progress("Calculating estimates...", 0.9)
        self._calculate_estimates(result)

        self._progress("Validation complete", 1.0)

        return result

    def _progress(self, message: str, percent: float):
        """Report progress."""
        logger.info(f"[{percent * 100:.0f}%] {message}")
        self.progress_callback(message, percent)

    def _run_test_trials(self, result: ValidationResult, n_trials: int) -> None:
        """Run test trials to verify setup."""

        try:
            from optimization_engine.nx.solver import NXSolver
        except ImportError:
            result.add_warning("NXSolver not available - skipping test trials")
            return

        # Get design variables
        design_vars = self.spec.get("design_variables", [])
        if not design_vars:
            result.add_error("No design variables to test")
            return

        # Get model directory
        model_dir = self._find_model_dir()
        if not model_dir:
            result.add_error("Model directory not found")
            return

        # Get sim file
        sim_file = self._find_sim_file(model_dir)
        if not sim_file:
            result.add_error("Simulation file not found")
            return

        solver = NXSolver()

        for i in range(n_trials):
            self._progress(f"Running test trial {i + 1}/{n_trials}...", 0.2 + (0.6 * i / n_trials))

            trial_result = TestTrialResult(trial_number=i + 1, parameters={}, objectives={})

            # Generate random parameters within bounds
            params = {}
            for dv in design_vars:
                param_name = dv.get("parameter", dv.get("name"))
                bounds = dv.get("bounds", [0, 1])
                # Use random value within bounds
                value = random.uniform(bounds[0], bounds[1])
                params[param_name] = value

            trial_result.parameters = params

            try:
                start_time = time.time()

                # Run simulation
                solve_result = solver.run_simulation(
                    sim_file=sim_file,
                    working_dir=model_dir,
                    expression_updates=params,
                    cleanup=True,
                )

                trial_result.solve_time_seconds = time.time() - start_time

                if solve_result.get("success"):
                    trial_result.success = True

                    # Extract results
                    op2_file = solve_result.get("op2_file")
                    if op2_file:
                        objectives = self._extract_objectives(Path(op2_file), model_dir)
                        trial_result.objectives = objectives
                else:
                    trial_result.success = False
                    trial_result.error = solve_result.get("error", "Unknown error")

            except Exception as e:
                trial_result.success = False
                trial_result.error = str(e)
                logger.error(f"Test trial {i + 1} failed: {e}")

            result.test_trials.append(trial_result)

        # Check if results vary
        self._check_results_variance(result)
    def _find_model_dir(self) -> Optional[Path]:
        """Find the model directory."""
        candidates = [
            self.study_dir / "1_model",
            self.study_dir / "1_setup" / "model",
            self.study_dir,
        ]

        for path in candidates:
            if path.exists() and list(path.glob("*.sim")):
                return path

        return None

    def _find_sim_file(self, model_dir: Path) -> Optional[Path]:
        """Find the simulation file."""
        # From spec
        sim = self.spec.get("simulation", {})
        sim_name = sim.get("sim_file")

        if sim_name:
            sim_path = model_dir / sim_name
            if sim_path.exists():
                return sim_path

        # Search for .sim files
        sim_files = list(model_dir.glob("*.sim"))
        if sim_files:
            return sim_files[0]

        return None

    def _extract_objectives(self, op2_file: Path, model_dir: Path) -> Dict[str, float]:
        """Extract objective values from results."""
        objectives = {}

        # Extract based on configured objectives
        for obj in self.spec.get("objectives", []):
            name = obj.get("name", "objective")
            extraction = obj.get("extraction", {})
            action = extraction.get("action", "")

            try:
                if "mass" in action.lower():
                    from optimization_engine.extractors.bdf_mass_extractor import (
                        extract_mass_from_bdf,
                    )

                    dat_files = list(model_dir.glob("*.dat"))
                    if dat_files:
                        objectives[name] = extract_mass_from_bdf(str(dat_files[0]))

                elif "displacement" in action.lower():
                    from optimization_engine.extractors.extract_displacement import (
                        extract_displacement,
                    )

                    result = extract_displacement(op2_file, subcase=1)
                    objectives[name] = result.get("max_displacement", 0)

                elif "stress" in action.lower():
                    from optimization_engine.extractors.extract_von_mises_stress import (
                        extract_solid_stress,
                    )

                    result = extract_solid_stress(op2_file, subcase=1)
                    objectives[name] = result.get("max_von_mises", 0)

            except Exception as e:
                logger.debug(f"Failed to extract {name}: {e}")

        return objectives

    def _check_results_variance(self, result: ValidationResult) -> None:
        """Check if test trial results vary (indicating mesh is updating)."""

        successful_trials = [t for t in result.test_trials if t.success]

        if len(successful_trials) < 2:
            result.add_warning("Not enough successful trials to check variance")
            return

        # Check variance for each objective
        for obj_name in successful_trials[0].objectives.keys():
            values = [t.objectives.get(obj_name, 0) for t in successful_trials]

            if len(values) > 1:
                variance = np.var(values)
                result.variance_by_objective[obj_name] = variance

                # Check if variance is too low (results are stuck)
                mean_val = np.mean(values)
                if mean_val != 0:
                    cv = np.sqrt(variance) / abs(mean_val)  # Coefficient of variation

                    if cv < 0.001:  # Less than 0.1% variation
                        result.add_error(
                            f"Results for '{obj_name}' are nearly identical (CV={cv:.6f}). "
                            "The mesh may not be updating!"
                        )
                        result.results_vary = False
                    else:
                        result.results_vary = True
                else:
                    # Can't calculate CV if mean is 0
                    if variance < 1e-10:
                        result.add_warning(f"Results for '{obj_name}' show no variation")
                    else:
                        result.results_vary = True

        # Default to True if we couldn't check
        if not result.variance_by_objective:
            result.results_vary = True

    def _calculate_estimates(self, result: ValidationResult) -> None:
        """Calculate runtime estimates."""

        successful_trials = [t for t in result.test_trials if t.success]

        if successful_trials:
            solve_times = [t.solve_time_seconds for t in successful_trials]
            result.avg_solve_time = np.mean(solve_times)

            # Get total trials from spec
            n_trials = self.spec.get("optimization_settings", {}).get("n_trials", 100)
            result.estimated_total_runtime = result.avg_solve_time * n_trials

    def approve(self) -> bool:
        """
        Mark the study as approved for optimization.

        Creates an approval file to indicate validation passed.
        """
        approval_file = self.study_dir / ".validation_approved"

        try:
            approval_file.write_text(datetime.now().isoformat())
            logger.info(f"Study approved: {self.study_dir.name}")
            return True
        except Exception as e:
            logger.error(f"Failed to approve: {e}")
            return False

    def is_approved(self) -> bool:
        """Check if study has been approved."""
        approval_file = self.study_dir / ".validation_approved"
        return approval_file.exists()

    def save_result(self, result: ValidationResult) -> Path:
        """Save validation result to file."""
        output_path = self.study_dir / "validation_result.json"

        with open(output_path, "w") as f:
            json.dump(result.to_dict(), f, indent=2)

        return output_path


def validate_study(
    study_dir: Path,
    run_test_trials: bool = True,
    n_test_trials: int = 3,
) -> ValidationResult:
    """
    Convenience function to validate a study.

    Args:
        study_dir: Path to study directory
        run_test_trials: Whether to run test FEA solves
        n_test_trials: Number of test trials

    Returns:
        ValidationResult
    """
    gate = ValidationGate(study_dir)
    return gate.validate(run_test_trials=run_test_trials, n_test_trials=n_test_trials)
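The gate's mesh-not-updating detection boils down to a coefficient-of-variation test over the trial objectives. A standalone sketch (hypothetical function name; same 0.1% threshold and zero-mean fallback as `_check_results_variance` above):

```python
import numpy as np


def results_stuck(values, cv_threshold=0.001):
    """True when trial results are nearly identical (coefficient of
    variation below threshold), which in this workflow usually means
    the mesh is not regenerating between parameter updates."""
    mean = np.mean(values)
    if mean == 0:
        # CV undefined at zero mean; fall back to a raw variance check
        return bool(np.var(values) < 1e-10)
    cv = np.std(values) / abs(mean)
    return bool(cv < cv_threshold)


# Three solves that barely differ -> suspicious; spread-out solves -> fine.
print(results_stuck([100.0, 100.0001, 99.9999]))
print(results_stuck([95.0, 100.0, 105.0]))
```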
@@ -4,7 +4,7 @@
   "extraction_method": "ZernikeOPD_Annular",
   "inner_radius_mm": 135.75,
   "objectives_note": "Mass NOT in objective - WFE only",
-  "total_trials": 169,
+  "total_trials": 171,
   "feasible_trials": 167,
   "best_trial": {
     "number": 163,
@@ -24,5 +24,5 @@
     },
     "iter_folder": "C:\\Users\\antoi\\Atomizer\\studies\\M1_Mirror\\m1_mirror_cost_reduction_lateral\\2_iterations\\iter164"
   },
-  "timestamp": "2026-01-14T17:59:38.649254"
+  "timestamp": "2026-01-20T16:12:29.817282"
 }
@@ -2,9 +2,9 @@
   "meta": {
     "version": "2.0",
     "created": "2026-01-17T15:35:12.024432Z",
-    "modified": "2026-01-20T20:05:28.197219Z",
+    "modified": "2026-01-22T13:48:14.104039Z",
     "created_by": "migration",
-    "modified_by": "test",
+    "modified_by": "canvas",
     "study_name": "m1_mirror_cost_reduction_lateral",
     "description": "Lateral support optimization with new U-joint expressions (lateral_inner_u, lateral_outer_u) for cost reduction model. Focus on WFE and MFG only - no mass objective.",
     "tags": [
@@ -152,22 +152,6 @@
         "y": 580
       }
     },
-    {
-      "name": "variable_1768938898079",
-      "expression_name": "expr_1768938898079",
-      "type": "continuous",
-      "bounds": {
-        "min": 0,
-        "max": 1
-      },
-      "baseline": 0.5,
-      "enabled": true,
-      "canvas_position": {
-        "x": -185.06035488622524,
-        "y": 91.62521000204346
-      },
-      "id": "dv_008"
-    },
     {
       "name": "test_dv",
       "expression_name": "test_expr",
@@ -304,6 +288,27 @@
         "y": 262.2501934501369
       },
       "id": "ext_004"
+    },
+    {
+      "name": "extractor_1769089672375",
+      "type": "custom_function",
+      "builtin": false,
+      "enabled": true,
+      "function": {
+        "name": "extract",
+        "source_code": "def extract(op2_path: str, config: dict = None) -> dict:\n    \"\"\"\n    Custom extractor function.\n    \n    Args:\n        op2_path: Path to the OP2 results file\n        config: Optional configuration dict\n    \n    Returns:\n        Dictionary with extracted values\n    \"\"\"\n    # TODO: Implement extraction logic\n    return {'value': 0.0}\n"
+      },
+      "outputs": [
+        {
+          "name": "value",
+          "metric": "custom"
+        }
+      ],
+      "canvas_position": {
+        "x": 1114.5479601736847,
+        "y": 880.0345512775555
+      },
+      "id": "ext_006"
+    }
   ],
   "objectives": [
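The hunk above registers a `custom_function` extractor whose body ships as a `source_code` string with an `extract(op2_path, config) -> dict` entry point. The engine's actual loader is not shown in this diff; one plausible mechanism (an assumption, sketched with a toy body) is to exec the stored source into a namespace and call the declared function:

```python
# Hypothetical loader pattern for a spec-stored extractor: the string
# below stands in for the "source_code" field of ext_006.
source_code = (
    "def extract(op2_path: str, config: dict = None) -> dict:\n"
    "    # Toy body standing in for real OP2 parsing\n"
    "    return {'value': 42.0}\n"
)

namespace = {}
exec(source_code, namespace)          # compile the stored source
extract = namespace["extract"]        # look up the declared entry point

# Call it the way the dashboard's "outputs" contract expects:
result = extract("2_iterations/iter164/results.op2")
print(result)
```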
485  tools/devloop_cli.py  Normal file
@@ -0,0 +1,485 @@
#!/usr/bin/env python3
"""
DevLoop CLI - Command-line interface for closed-loop development.

Uses your CLI subscriptions:
- OpenCode CLI (Gemini) for planning and analysis
- Claude Code CLI for implementation

Usage:
    python devloop_cli.py start "Create support_arm study"
    python devloop_cli.py plan "Fix dashboard validation"
    python devloop_cli.py implement plan.json
    python devloop_cli.py test --study support_arm
    python devloop_cli.py analyze test_results.json
    python devloop_cli.py status
"""

import argparse
import asyncio
import json
import sys
from pathlib import Path

# Add project root to path
sys.path.insert(0, str(Path(__file__).parent.parent))


async def start_cycle(objective: str, max_iterations: int = 5):
    """Start a development cycle using CLI tools."""
    from optimization_engine.devloop.cli_bridge import DevLoopCLIOrchestrator

    print(f"Starting DevLoop cycle: {objective}")
    print("=" * 60)
    print("Using: OpenCode (Gemini) for planning, Claude Code for implementation")
    print("=" * 60)

    orchestrator = DevLoopCLIOrchestrator()

    result = await orchestrator.run_cycle(
        objective=objective,
        max_iterations=max_iterations,
    )

    print("\n" + "=" * 60)
    print(f"Cycle complete: {result['status']}")
    print(f"  Iterations: {len(result['iterations'])}")
    print(f"  Duration: {result.get('duration_seconds', 0):.1f}s")

    for i, iter_result in enumerate(result["iterations"], 1):
        impl = iter_result.get("implementation", {})
        tests = iter_result.get("test_results", {}).get("summary", {})
        print(f"\n  Iteration {i}:")
        print(f"    Implementation: {'OK' if impl.get('success') else 'FAILED'}")
        print(f"    Tests: {tests.get('passed', 0)}/{tests.get('total', 0)} passed")

    return result


async def run_plan(objective: str, context_file: str = None):
    """Run only the planning phase with Gemini via OpenCode."""
    from optimization_engine.devloop.cli_bridge import OpenCodeCLI

    print(f"Planning with Gemini (OpenCode): {objective}")
    print("-" * 60)

    workspace = Path("C:/Users/antoi/Atomizer")
    opencode = OpenCodeCLI(workspace)

    context = None
    if context_file:
        with open(context_file) as f:
            context = json.load(f)

    plan = await opencode.plan(objective, context)

    print("\nPlan created:")
    print(json.dumps(plan, indent=2))

    # Save plan to file
    plan_file = workspace / ".devloop" / "current_plan.json"
    plan_file.parent.mkdir(exist_ok=True)
    with open(plan_file, "w") as f:
        json.dump(plan, f, indent=2)
    print(f"\nPlan saved to: {plan_file}")

    return plan


async def run_implement(plan_file: str = None):
    """Run only the implementation phase with Claude Code."""
    from optimization_engine.devloop.cli_bridge import DevLoopCLIOrchestrator

    workspace = Path("C:/Users/antoi/Atomizer")

    # Load plan
    if plan_file:
        plan_path = Path(plan_file)
    else:
        plan_path = workspace / ".devloop" / "current_plan.json"

    if not plan_path.exists():
        print(f"Error: Plan file not found: {plan_path}")
        print("Run 'devloop_cli.py plan <objective>' first")
        return None

    with open(plan_path) as f:
        plan = json.load(f)

    print(f"Implementing plan: {plan.get('objective', 'Unknown')}")
    print("-" * 60)
    print(f"Tasks: {len(plan.get('tasks', []))}")

    orchestrator = DevLoopCLIOrchestrator(workspace)
    result = await orchestrator.step_implement(plan)

    print(f"\nImplementation {'succeeded' if result.success else 'failed'}")
    print(f"  Duration: {result.duration_seconds:.1f}s")
    print(f"  Files modified: {len(result.files_modified)}")
    for f in result.files_modified:
        print(f"    - {f}")

    if result.error:
        print(f"\nError: {result.error}")

    return result

async def run_browser_tests(level: str = "quick", study_name: str = None):
    """Run browser tests using Playwright via DevLoop."""
    from optimization_engine.devloop.test_runner import DashboardTestRunner
    from optimization_engine.devloop.browser_scenarios import get_browser_scenarios

    print(f"Running browser tests (level={level})")
    print("-" * 60)

    runner = DashboardTestRunner()
    scenarios = get_browser_scenarios(level=level, study_name=study_name)

    print(f"Scenarios: {len(scenarios)}")
    for s in scenarios:
        print(f"  - {s['name']}")

    results = await runner.run_test_suite(scenarios)

    summary = results.get("summary", {})
    print(f"\nResults: {summary.get('passed', 0)}/{summary.get('total', 0)} passed")

    for scenario in results.get("scenarios", []):
        status = "PASS" if scenario.get("passed") else "FAIL"
        print(f"  [{status}] {scenario.get('scenario_name')}")
        if not scenario.get("passed") and scenario.get("error"):
            print(f"    Error: {scenario.get('error')}")

    # Save results
    workspace = Path("C:/Users/antoi/Atomizer")
    results_file = workspace / ".devloop" / "browser_test_results.json"
    results_file.parent.mkdir(exist_ok=True)
    with open(results_file, "w") as f:
        json.dump(results, f, indent=2)
    print(f"\nResults saved to: {results_file}")

    return results


async def run_tests(
    study_name: str = None, scenarios_file: str = None, include_browser: bool = False
):
    """Run tests for a specific study or from scenarios file."""
    from optimization_engine.devloop.test_runner import DashboardTestRunner

    runner = DashboardTestRunner()

    if scenarios_file:
        with open(scenarios_file) as f:
            scenarios = json.load(f)
    elif study_name:
        print(f"Running tests for study: {study_name}")
        print("-" * 60)

        # Find the study - check both flat and nested locations
        from pathlib import Path

        studies_root = Path("studies")

        # Check flat structure first (studies/study_name)
        if (studies_root / study_name).exists():
            study_path = f"studies/{study_name}"
        # Then check nested _Other structure
        elif (studies_root / "_Other" / study_name).exists():
            study_path = f"studies/_Other/{study_name}"
        # Check other topic folders
        else:
            study_path = None
            for topic_dir in studies_root.iterdir():
                if topic_dir.is_dir() and (topic_dir / study_name).exists():
                    study_path = f"studies/{topic_dir.name}/{study_name}"
                    break
            if not study_path:
                study_path = f"studies/{study_name}"  # Default, will fail gracefully

        print(f"Study path: {study_path}")

        # Generate test scenarios for the study
        scenarios = [
            {
                "id": "test_study_dir",
                "name": f"Study directory exists: {study_name}",
                "type": "filesystem",
                "steps": [{"action": "check_exists", "path": study_path}],
                "expected_outcome": {"exists": True},
            },
            {
                "id": "test_spec",
                "name": "AtomizerSpec is valid JSON",
                "type": "filesystem",
                "steps": [
                    {
                        "action": "check_json_valid",
                        "path": f"{study_path}/atomizer_spec.json",
                    }
                ],
                "expected_outcome": {"valid_json": True},
            },
            {
                "id": "test_readme",
|
||||||
|
"name": "README exists",
|
||||||
|
"type": "filesystem",
|
||||||
|
"steps": [{"action": "check_exists", "path": f"{study_path}/README.md"}],
|
||||||
|
"expected_outcome": {"exists": True},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "test_run_script",
|
||||||
|
"name": "run_optimization.py exists",
|
||||||
|
"type": "filesystem",
|
||||||
|
"steps": [
|
||||||
|
{
|
||||||
|
"action": "check_exists",
|
||||||
|
"path": f"{study_path}/run_optimization.py",
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"expected_outcome": {"exists": True},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "test_model_dir",
|
||||||
|
"name": "Model directory exists",
|
||||||
|
"type": "filesystem",
|
||||||
|
"steps": [{"action": "check_exists", "path": f"{study_path}/1_setup/model"}],
|
||||||
|
"expected_outcome": {"exists": True},
|
||||||
|
},
|
||||||
|
]
|
||||||
|
else:
|
||||||
|
print("Error: Provide --study or --scenarios")
|
||||||
|
return None
|
||||||
|
|
||||||
|
results = await runner.run_test_suite(scenarios)
|
||||||
|
|
||||||
|
summary = results.get("summary", {})
|
||||||
|
print(f"\nResults: {summary.get('passed', 0)}/{summary.get('total', 0)} passed")
|
||||||
|
|
||||||
|
for scenario in results.get("scenarios", []):
|
||||||
|
status = "PASS" if scenario.get("passed") else "FAIL"
|
||||||
|
print(f" [{status}] {scenario.get('scenario_name')}")
|
||||||
|
if not scenario.get("passed") and scenario.get("error"):
|
||||||
|
print(f" Error: {scenario.get('error')}")
|
||||||
|
|
||||||
|
# Save results
|
||||||
|
workspace = Path("C:/Users/antoi/Atomizer")
|
||||||
|
results_file = workspace / ".devloop" / "test_results.json"
|
||||||
|
results_file.parent.mkdir(exist_ok=True)
|
||||||
|
with open(results_file, "w") as f:
|
||||||
|
json.dump(results, f, indent=2)
|
||||||
|
print(f"\nResults saved to: {results_file}")
|
||||||
|
|
||||||
|
return results
|
||||||
|
|
||||||
|
|
||||||
|
async def run_analyze(results_file: str = None):
|
||||||
|
"""Analyze test results with Gemini via OpenCode."""
|
||||||
|
from optimization_engine.devloop.cli_bridge import OpenCodeCLI
|
||||||
|
|
||||||
|
workspace = Path("C:/Users/antoi/Atomizer")
|
||||||
|
|
||||||
|
# Load results
|
||||||
|
if results_file:
|
||||||
|
results_path = Path(results_file)
|
||||||
|
else:
|
||||||
|
results_path = workspace / ".devloop" / "test_results.json"
|
||||||
|
|
||||||
|
if not results_path.exists():
|
||||||
|
print(f"Error: Results file not found: {results_path}")
|
||||||
|
print("Run 'devloop_cli.py test --study <name>' first")
|
||||||
|
return None
|
||||||
|
|
||||||
|
with open(results_path) as f:
|
||||||
|
test_results = json.load(f)
|
||||||
|
|
||||||
|
print("Analyzing test results with Gemini (OpenCode)...")
|
||||||
|
print("-" * 60)
|
||||||
|
|
||||||
|
opencode = OpenCodeCLI(workspace)
|
||||||
|
analysis = await opencode.analyze(test_results)
|
||||||
|
|
||||||
|
print(f"\nAnalysis complete:")
|
||||||
|
print(f" Issues found: {analysis.get('issues_found', False)}")
|
||||||
|
|
||||||
|
for issue in analysis.get("issues", []):
|
||||||
|
print(f"\n Issue: {issue.get('id')}")
|
||||||
|
print(f" Description: {issue.get('description')}")
|
||||||
|
print(f" Severity: {issue.get('severity')}")
|
||||||
|
print(f" Root cause: {issue.get('root_cause')}")
|
||||||
|
|
||||||
|
for rec in analysis.get("recommendations", []):
|
||||||
|
print(f"\n Recommendation: {rec}")
|
||||||
|
|
||||||
|
# Save analysis
|
||||||
|
analysis_file = workspace / ".devloop" / "analysis.json"
|
||||||
|
with open(analysis_file, "w") as f:
|
||||||
|
json.dump(analysis, f, indent=2)
|
||||||
|
print(f"\nAnalysis saved to: {analysis_file}")
|
||||||
|
|
||||||
|
return analysis
|
||||||
|
|
||||||
|
|
||||||
|
async def show_status():
|
||||||
|
"""Show current DevLoop status."""
|
||||||
|
workspace = Path("C:/Users/antoi/Atomizer")
|
||||||
|
devloop_dir = workspace / ".devloop"
|
||||||
|
|
||||||
|
print("DevLoop Status")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
# Check for existing files
|
||||||
|
plan_file = devloop_dir / "current_plan.json"
|
||||||
|
results_file = devloop_dir / "test_results.json"
|
||||||
|
analysis_file = devloop_dir / "analysis.json"
|
||||||
|
|
||||||
|
if plan_file.exists():
|
||||||
|
with open(plan_file) as f:
|
||||||
|
plan = json.load(f)
|
||||||
|
print(f"\nCurrent Plan: {plan.get('objective', 'Unknown')}")
|
||||||
|
print(f" Tasks: {len(plan.get('tasks', []))}")
|
||||||
|
else:
|
||||||
|
print("\nNo current plan")
|
||||||
|
|
||||||
|
if results_file.exists():
|
||||||
|
with open(results_file) as f:
|
||||||
|
results = json.load(f)
|
||||||
|
summary = results.get("summary", {})
|
||||||
|
print(f"\nLast Test Results:")
|
||||||
|
print(f" Passed: {summary.get('passed', 0)}/{summary.get('total', 0)}")
|
||||||
|
else:
|
||||||
|
print("\nNo test results")
|
||||||
|
|
||||||
|
if analysis_file.exists():
|
||||||
|
with open(analysis_file) as f:
|
||||||
|
analysis = json.load(f)
|
||||||
|
print(f"\nLast Analysis:")
|
||||||
|
print(f" Issues: {len(analysis.get('issues', []))}")
|
||||||
|
else:
|
||||||
|
print("\nNo analysis")
|
||||||
|
|
||||||
|
print("\n" + "=" * 60)
|
||||||
|
print("CLI Tools:")
|
||||||
|
print(" - Claude Code: C:\\Users\\antoi\\.local\\bin\\claude.exe")
|
||||||
|
print(" - OpenCode: C:\\Users\\antoi\\AppData\\Roaming\\npm\\opencode.cmd")
|
||||||
|
|
||||||
|
|
||||||
|
async def quick_support_arm():
|
||||||
|
"""Quick test with support_arm study."""
|
||||||
|
print("Quick DevLoop test with support_arm study")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
# Test the study
|
||||||
|
results = await run_tests(study_name="support_arm")
|
||||||
|
|
||||||
|
if results and results.get("summary", {}).get("failed", 0) == 0:
|
||||||
|
print("\n" + "=" * 60)
|
||||||
|
print("SUCCESS: support_arm study is properly configured!")
|
||||||
|
print("\nNext steps:")
|
||||||
|
print(
|
||||||
|
" 1. Run optimization: cd studies/_Other/support_arm && python run_optimization.py --test"
|
||||||
|
)
|
||||||
|
print(" 2. Start dashboard: cd atomizer-dashboard && npm run dev")
|
||||||
|
print(" 3. View in canvas: http://localhost:3000/canvas/support_arm")
|
||||||
|
else:
|
||||||
|
print("\n" + "=" * 60)
|
||||||
|
print("Some tests failed. Running analysis...")
|
||||||
|
await run_analyze()
|
||||||
|
|
||||||
|
|
||||||
|
def main():
|
||||||
|
parser = argparse.ArgumentParser(
|
||||||
|
description="DevLoop CLI - Closed-loop development using CLI subscriptions",
|
||||||
|
formatter_class=argparse.RawDescriptionHelpFormatter,
|
||||||
|
epilog="""
|
||||||
|
Examples:
|
||||||
|
# Run full development cycle
|
||||||
|
python devloop_cli.py start "Create new bracket study"
|
||||||
|
|
||||||
|
# Step-by-step execution
|
||||||
|
python devloop_cli.py plan "Fix dashboard validation"
|
||||||
|
python devloop_cli.py implement
|
||||||
|
python devloop_cli.py test --study support_arm
|
||||||
|
python devloop_cli.py analyze
|
||||||
|
|
||||||
|
# Browser tests (Playwright)
|
||||||
|
python devloop_cli.py browser # Quick smoke test
|
||||||
|
python devloop_cli.py browser --level full # All UI tests
|
||||||
|
python devloop_cli.py browser --study support_arm # Study-specific
|
||||||
|
|
||||||
|
# Quick test
|
||||||
|
python devloop_cli.py quick
|
||||||
|
|
||||||
|
Tools used:
|
||||||
|
- OpenCode (Gemini): Planning and analysis
|
||||||
|
- Claude Code: Implementation and fixes
|
||||||
|
- Playwright: Browser UI testing
|
||||||
|
""",
|
||||||
|
)
|
||||||
|
|
||||||
|
subparsers = parser.add_subparsers(dest="command", help="Commands")
|
||||||
|
|
||||||
|
# Start command - full cycle
|
||||||
|
start_parser = subparsers.add_parser("start", help="Start a full development cycle")
|
||||||
|
start_parser.add_argument("objective", help="What to achieve")
|
||||||
|
start_parser.add_argument("--max-iterations", type=int, default=5, help="Max fix iterations")
|
||||||
|
|
||||||
|
# Plan command
|
||||||
|
plan_parser = subparsers.add_parser("plan", help="Create plan with Gemini (OpenCode)")
|
||||||
|
plan_parser.add_argument("objective", help="What to plan")
|
||||||
|
plan_parser.add_argument("--context", help="Context JSON file")
|
||||||
|
|
||||||
|
# Implement command
|
||||||
|
impl_parser = subparsers.add_parser("implement", help="Implement plan with Claude Code")
|
||||||
|
impl_parser.add_argument("--plan", help="Plan JSON file (default: .devloop/current_plan.json)")
|
||||||
|
|
||||||
|
# Test command
|
||||||
|
test_parser = subparsers.add_parser("test", help="Run tests")
|
||||||
|
test_parser.add_argument("--study", help="Study name to test")
|
||||||
|
test_parser.add_argument("--scenarios", help="Test scenarios JSON file")
|
||||||
|
|
||||||
|
# Analyze command
|
||||||
|
analyze_parser = subparsers.add_parser("analyze", help="Analyze results with Gemini (OpenCode)")
|
||||||
|
analyze_parser.add_argument("--results", help="Test results JSON file")
|
||||||
|
|
||||||
|
# Status command
|
||||||
|
subparsers.add_parser("status", help="Show current DevLoop status")
|
||||||
|
|
||||||
|
# Quick command
|
||||||
|
subparsers.add_parser("quick", help="Quick test with support_arm study")
|
||||||
|
|
||||||
|
# Browser command
|
||||||
|
browser_parser = subparsers.add_parser("browser", help="Run browser UI tests with Playwright")
|
||||||
|
browser_parser.add_argument(
|
||||||
|
"--level",
|
||||||
|
choices=["quick", "home", "full", "study"],
|
||||||
|
default="quick",
|
||||||
|
help="Test level: quick (smoke), home (home page), full (all), study (study-specific)",
|
||||||
|
)
|
||||||
|
browser_parser.add_argument("--study", help="Study name for study-specific tests")
|
||||||
|
|
||||||
|
args = parser.parse_args()
|
||||||
|
|
||||||
|
if args.command == "start":
|
||||||
|
asyncio.run(start_cycle(args.objective, args.max_iterations))
|
||||||
|
elif args.command == "plan":
|
||||||
|
asyncio.run(run_plan(args.objective, args.context))
|
||||||
|
elif args.command == "implement":
|
||||||
|
asyncio.run(run_implement(args.plan))
|
||||||
|
elif args.command == "test":
|
||||||
|
asyncio.run(run_tests(args.study, args.scenarios))
|
||||||
|
elif args.command == "analyze":
|
||||||
|
asyncio.run(run_analyze(args.results))
|
||||||
|
elif args.command == "status":
|
||||||
|
asyncio.run(show_status())
|
||||||
|
elif args.command == "quick":
|
||||||
|
asyncio.run(quick_support_arm())
|
||||||
|
elif args.command == "browser":
|
||||||
|
asyncio.run(run_browser_tests(args.level, args.study))
|
||||||
|
else:
|
||||||
|
parser.print_help()
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
main()
|
||||||
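The dispatch pattern `main()` uses above (argparse subcommands, each routed to a coroutine through `asyncio.run`) can be reduced to a self-contained sketch. The `demo` subcommand and `run_demo` coroutine here are illustrative stand-ins, not part of the tool:

```python
import argparse
import asyncio


async def run_demo(name: str) -> str:
    # Stand-in for an async task such as run_tests(); purely illustrative.
    await asyncio.sleep(0)
    return f"ran {name}"


def main(argv=None):
    parser = argparse.ArgumentParser(description="Minimal subcommand dispatch sketch")
    subparsers = parser.add_subparsers(dest="command")

    demo = subparsers.add_parser("demo", help="Run the demo coroutine")
    demo.add_argument("name")

    args = parser.parse_args(argv)

    if args.command == "demo":
        # Each subcommand gets its own asyncio.run entry point,
        # mirroring how devloop_cli.py dispatches start/plan/test/...
        return asyncio.run(run_demo(args.name))
    parser.print_help()
    return None


print(main(["demo", "support_arm"]))  # → ran support_arm
```

One event loop per invocation keeps the synchronous argparse layer cleanly separated from the async workers, which is why each branch above calls `asyncio.run` rather than sharing a loop.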
38 tools/test_extraction.py Normal file
@@ -0,0 +1,38 @@
#!/usr/bin/env python3
"""Test extraction pipeline on existing OP2 file."""

import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent.parent))

from optimization_engine.extractors import extract_displacement, extract_solid_stress

op2_path = r"C:\Users\antoi\Atomizer\studies\_Other\Model_for_dev\support_arm_sim1-solution_1.op2"

print("Testing extractors on existing OP2 file...")
print(f"File: {op2_path}")
print(f"Exists: {Path(op2_path).exists()}")
print()

# Test displacement extraction
print("1. Displacement extraction:")
try:
    result = extract_displacement(op2_path)
    print(f"   Max magnitude: {result.get('max_magnitude', 'N/A')} mm")
    print(f"   Full result: {result}")
except Exception as e:
    print(f"   ERROR: {e}")

# Test stress extraction
print()
print("2. Stress extraction (CTETRA):")
try:
    result = extract_solid_stress(op2_path, element_type="ctetra")
    print(f"   Max von Mises: {result.get('max_von_mises', 'N/A')} MPa")
    print(f"   Full result: {result}")
except Exception as e:
    print(f"   ERROR: {e}")

print()
print("Done!")
172 tools/test_j1_vs_mean_per_subcase.py Normal file
@@ -0,0 +1,172 @@
"""
Test: J1 coefficient vs mean(WFE) for each subcase individually.

Hypothesis: At 90 deg (zenith), gravity is axially symmetric, so J1 should
closely match mean(WFE). At other angles (20, 40, 60), lateral gravity
components break symmetry, potentially causing J1 != mean.
"""

import numpy as np
from pathlib import Path
import sys

sys.path.insert(0, str(Path(__file__).parent.parent))

from optimization_engine.extractors.extract_zernike import (
    ZernikeExtractor,
    compute_zernike_coefficients,
    DEFAULT_N_MODES,
)


def test_j1_vs_mean_per_subcase():
    """Test J1 vs mean for each subcase in real data."""
    print("=" * 70)
    print("J1 vs mean(WFE) PER SUBCASE")
    print("=" * 70)

    # Find OP2 files
    studies_dir = Path(__file__).parent.parent / "studies"

    # Look for M1 mirror studies specifically
    op2_files = list(studies_dir.rglob("**/m1_mirror*/**/*.op2"))
    if not op2_files:
        op2_files = list(studies_dir.rglob("*.op2"))

    if not op2_files:
        print("No OP2 files found!")
        return

    # Use the first one
    op2_file = op2_files[0]
    print(f"\nUsing: {op2_file.relative_to(studies_dir.parent)}")

    try:
        extractor = ZernikeExtractor(op2_file)
        subcases = list(extractor.displacements.keys())
        print(f"Available subcases: {subcases}")

        print(f"\n{'Subcase':<10} {'Mean(WFE)':<15} {'J1 Coeff':<15} {'|Diff|':<12} {'Diff %':<10}")
        print("-" * 70)

        results = []
        for sc in sorted(subcases, key=lambda x: int(x) if x.isdigit() else 0):
            try:
                X, Y, WFE = extractor._build_dataframe(sc)

                # Compute Zernike coefficients
                coeffs, R_max = compute_zernike_coefficients(X, Y, WFE, DEFAULT_N_MODES)

                j1 = coeffs[0]
                wfe_mean = np.mean(WFE)
                diff = abs(j1 - wfe_mean)
                pct_diff = 100 * diff / abs(wfe_mean) if abs(wfe_mean) > 1e-6 else 0

                print(f"{sc:<10} {wfe_mean:<15.4f} {j1:<15.4f} {diff:<12.4f} {pct_diff:<10.4f}")

                results.append({
                    'subcase': sc,
                    'mean_wfe': wfe_mean,
                    'j1': j1,
                    'diff': diff,
                    'pct_diff': pct_diff
                })

            except Exception as e:
                print(f"{sc:<10} ERROR: {e}")

        # Also check RELATIVE WFE (e.g., 20 vs 90, 40 vs 90, 60 vs 90)
        print("\n" + "=" * 70)
        print("RELATIVE WFE (vs reference subcase)")
        print("=" * 70)

        # Find 90 or use first subcase as reference
        ref_sc = '90' if '90' in subcases else subcases[0]
        print(f"Reference subcase: {ref_sc}")

        print(f"\n{'Relative':<15} {'Mean(WFE_rel)':<15} {'J1 Coeff':<15} {'|Diff|':<12} {'Diff %':<10}")
        print("-" * 70)

        ref_data = extractor.displacements[ref_sc]
        ref_node_to_idx = {int(nid): i for i, nid in enumerate(ref_data['node_ids'])}

        for sc in sorted(subcases, key=lambda x: int(x) if x.isdigit() else 0):
            if sc == ref_sc:
                continue

            try:
                target_data = extractor.displacements[sc]

                X_rel, Y_rel, WFE_rel = [], [], []

                for i, nid in enumerate(target_data['node_ids']):
                    nid = int(nid)
                    if nid not in ref_node_to_idx:
                        continue
                    ref_idx = ref_node_to_idx[nid]
                    geo = extractor.node_geometry.get(nid)
                    if geo is None:
                        continue

                    X_rel.append(geo[0])
                    Y_rel.append(geo[1])

                    target_wfe = target_data['disp'][i, 2] * extractor.wfe_factor
                    ref_wfe = ref_data['disp'][ref_idx, 2] * extractor.wfe_factor
                    WFE_rel.append(target_wfe - ref_wfe)

                X_rel = np.array(X_rel)
                Y_rel = np.array(Y_rel)
                WFE_rel = np.array(WFE_rel)

                # Compute Zernike on relative WFE
                coeffs_rel, _ = compute_zernike_coefficients(X_rel, Y_rel, WFE_rel, DEFAULT_N_MODES)

                j1_rel = coeffs_rel[0]
                wfe_rel_mean = np.mean(WFE_rel)
                diff_rel = abs(j1_rel - wfe_rel_mean)
                pct_diff_rel = 100 * diff_rel / abs(wfe_rel_mean) if abs(wfe_rel_mean) > 1e-6 else 0

                label = f"{sc} vs {ref_sc}"
                print(f"{label:<15} {wfe_rel_mean:<15.4f} {j1_rel:<15.4f} {diff_rel:<12.4f} {pct_diff_rel:<10.4f}")

            except Exception as e:
                print(f"{sc} vs {ref_sc}: ERROR: {e}")

    except Exception as e:
        print(f"Error: {e}")
        import traceback
        traceback.print_exc()


def test_symmetry_analysis():
    """Analyze why J1 != mean for different subcases."""
    print("\n" + "=" * 70)
    print("SYMMETRY ANALYSIS")
    print("=" * 70)
    print("""
Theory: J1 (piston) should equal mean(WFE) when:
  1. The aperture is circular/annular AND
  2. The sampling is uniform in angle AND
  3. The WFE has no bias correlated with position

At 90 deg (zenith):
  - Gravity acts purely in Z direction
  - Deformation should be axially symmetric
  - J1 should closely match mean(WFE)

At 20/40/60 deg:
  - Gravity has lateral (X,Y) components
  - Deformation may have asymmetric patterns
  - Tip/tilt (J2,J3) will be large
  - But J1 vs mean should still be close IF sampling is uniform

The difference J1-mean comes from:
  - Non-uniform radial sampling (mesh density varies)
  - Correlation between WFE and position (asymmetric loading)
""")


if __name__ == "__main__":
    test_j1_vs_mean_per_subcase()
    test_symmetry_analysis()
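The J1-versus-mean question this script probes can be illustrated without any OP2 data. In a least-squares fit, the fitted constant (piston) term equals the sample mean exactly when the other basis columns average to zero under the actual sampling; non-uniform mesh sampling breaks that. A minimal sketch in plain NumPy, using a simplified piston/tip/tilt basis rather than the project's `compute_zernike_coefficients`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform sampling on the unit disk (r = sqrt(U) gives uniform area density)
n = 20000
r = np.sqrt(rng.uniform(0, 1, n))
t = rng.uniform(0, 2 * np.pi, n)
x, y = r * np.cos(t), r * np.sin(t)

# Wavefront with piston + tilt + a focus-like term outside the fit basis
wfe = 3.0 + 1.5 * x + 0.8 * (2 * r**2 - 1)

# Least-squares fit over piston (constant), tip (~x), tilt (~y)
A = np.column_stack([np.ones(n), x, y])
coeffs, *_ = np.linalg.lstsq(A, wfe, rcond=None)
j1 = coeffs[0]

# With uniform disk sampling, x, y, and (2r^2 - 1) all average to ~0,
# so the fitted piston lands close to mean(wfe). Mesh-density gradients
# correlate the columns and open a gap between J1 and the mean.
print(f"J1 = {j1:.4f}, mean = {wfe.mean():.4f}")
```

Reweighting the same fit by per-node area (or sampling non-uniformly in radius) is the quickest way to reproduce the J1 != mean gaps the table above looks for.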
349 tools/test_zernike_recentering.py Normal file
@@ -0,0 +1,349 @@
"""
Test: Is recentering Z per subcase equivalent to removing piston from relative WFE?

Theory:
- Option A: Recenter each subcase, then subtract
    WFE_rel_A = (dZ_20 - mean(dZ_20)) - (dZ_90 - mean(dZ_90))

- Option B: Subtract raw, then remove piston via Zernike J1
    WFE_rel_B = dZ_20 - dZ_90
    Then filter J1 from Zernike fit

Mathematically:
    WFE_rel_A = dZ_20 - dZ_90 - mean(dZ_20) + mean(dZ_90)
              = (dZ_20 - dZ_90) - (mean(dZ_20) - mean(dZ_90))

If nodes are identical: mean(dZ_20) - mean(dZ_90) = mean(dZ_20 - dZ_90)
So: WFE_rel_A = WFE_rel_B - mean(WFE_rel_B)

This should equal WFE_rel_B with J1 (piston) removed, BUT only if:
  1. Piston Zernike Z1 = 1 (constant)
  2. Sampling is uniform (or Z1 coefficient = mean for non-uniform)

For annular apertures and non-uniform mesh sampling, this might not be exact!
"""

import numpy as np
from pathlib import Path
import sys

# Add project root to path
sys.path.insert(0, str(Path(__file__).parent.parent))

from optimization_engine.extractors.extract_zernike import (
    ZernikeExtractor,
    compute_zernike_coefficients,
    compute_rms_metrics,
    zernike_noll,
    DEFAULT_N_MODES,
    DEFAULT_FILTER_ORDERS,
)


def test_recentering_equivalence_synthetic():
    """Test with synthetic data where we control everything."""
    print("=" * 70)
    print("TEST 1: Synthetic Data")
    print("=" * 70)

    np.random.seed(42)

    # Create annular aperture (like a telescope mirror)
    n_points = 5000
    r_inner = 0.3  # 30% central obscuration
    r_outer = 1.0

    # Generate random points in annulus
    r = np.sqrt(np.random.uniform(r_inner**2, r_outer**2, n_points))
    theta = np.random.uniform(0, 2 * np.pi, n_points)
    X = r * np.cos(theta) * 500  # Scale to 500mm radius
    Y = r * np.sin(theta) * 500

    # Simulate Z-displacements for two subcases (in mm, will convert to nm)
    # Subcase 90: Some aberration pattern + piston
    piston_90 = 0.001  # 1 micron mean displacement
    dZ_90 = piston_90 + 0.0005 * (X**2 + Y**2) / 500**2  # Add some power
    dZ_90 += 0.0002 * np.random.randn(n_points)  # Add noise

    # Subcase 20: Different aberration + different piston
    piston_20 = 0.003  # 3 micron mean displacement
    dZ_20 = piston_20 + 0.0008 * (X**2 + Y**2) / 500**2  # More power
    dZ_20 += 0.0003 * X / 500  # Add some tilt
    dZ_20 += 0.0002 * np.random.randn(n_points)  # Add noise

    # Convert to WFE in nm (2x for reflection, 1e6 for mm->nm)
    wfe_factor = 2.0 * 1e6
    WFE_90 = dZ_90 * wfe_factor
    WFE_20 = dZ_20 * wfe_factor

    print("\nInput data:")
    print(f"  Points: {n_points} (annular, r_inner={r_inner})")
    print(f"  Mean WFE_90: {np.mean(WFE_90):.2f} nm")
    print(f"  Mean WFE_20: {np.mean(WFE_20):.2f} nm")
    print(f"  Mean difference: {np.mean(WFE_20) - np.mean(WFE_90):.2f} nm")

    # =========================================================================
    # Option A: Recenter each subcase BEFORE subtraction
    # =========================================================================
    WFE_90_centered = WFE_90 - np.mean(WFE_90)
    WFE_20_centered = WFE_20 - np.mean(WFE_20)
    WFE_rel_A = WFE_20_centered - WFE_90_centered

    # Fit Zernike to option A
    coeffs_A, R_max_A = compute_zernike_coefficients(X, Y, WFE_rel_A, DEFAULT_N_MODES)

    # Compute RMS metrics for option A
    rms_A = compute_rms_metrics(X, Y, WFE_rel_A, DEFAULT_N_MODES, DEFAULT_FILTER_ORDERS)

    print("\nOption A (recenter before subtraction):")
    print(f"  Mean WFE_rel_A: {np.mean(WFE_rel_A):.6f} nm (should be ~0)")
    print(f"  J1 (piston) coefficient: {coeffs_A[0]:.6f} nm")
    print(f"  Global RMS: {rms_A['global_rms_nm']:.2f} nm")
    print(f"  Filtered RMS (J1-J4 removed): {rms_A['filtered_rms_nm']:.2f} nm")

    # =========================================================================
    # Option B: Subtract raw, then analyze (current implementation)
    # =========================================================================
    WFE_rel_B = WFE_20 - WFE_90  # No recentering

    # Fit Zernike to option B
    coeffs_B, R_max_B = compute_zernike_coefficients(X, Y, WFE_rel_B, DEFAULT_N_MODES)

    # Compute RMS metrics for option B
    rms_B = compute_rms_metrics(X, Y, WFE_rel_B, DEFAULT_N_MODES, DEFAULT_FILTER_ORDERS)

    print("\nOption B (current: subtract raw, filter J1-J4 after):")
    print(f"  Mean WFE_rel_B: {np.mean(WFE_rel_B):.2f} nm")
    print(f"  J1 (piston) coefficient: {coeffs_B[0]:.2f} nm")
    print(f"  Global RMS: {rms_B['global_rms_nm']:.2f} nm")
    print(f"  Filtered RMS (J1-J4 removed): {rms_B['filtered_rms_nm']:.2f} nm")

    # =========================================================================
    # Compare
    # =========================================================================
    print("\n" + "=" * 70)
    print("COMPARISON")
    print("=" * 70)

    # Check if J1 coefficient in B equals the mean
    j1_vs_mean_diff = abs(coeffs_B[0] - np.mean(WFE_rel_B))
    print("\n1. Does J1 = mean(WFE)?")
    print(f"   J1 coefficient: {coeffs_B[0]:.6f} nm")
    print(f"   Mean of WFE: {np.mean(WFE_rel_B):.6f} nm")
    print(f"   Difference: {j1_vs_mean_diff:.6f} nm")
    print(f"   --> {'YES (negligible diff)' if j1_vs_mean_diff < 0.01 else 'NO (significant diff!)'}")

    # Check if filtered RMS is the same
    filtered_rms_diff = abs(rms_A['filtered_rms_nm'] - rms_B['filtered_rms_nm'])
    print("\n2. Is filtered RMS the same?")
    print(f"   Option A filtered RMS: {rms_A['filtered_rms_nm']:.6f} nm")
    print(f"   Option B filtered RMS: {rms_B['filtered_rms_nm']:.6f} nm")
    print(f"   Difference: {filtered_rms_diff:.6f} nm")
    print(f"   --> {'EQUIVALENT' if filtered_rms_diff < 0.01 else 'NOT EQUIVALENT!'}")

    # Check all coefficients (J2 onwards should be identical)
    coeff_diffs = np.abs(coeffs_A[1:] - coeffs_B[1:])  # Skip J1
    max_coeff_diff = np.max(coeff_diffs)
    print("\n3. Are non-piston coefficients (J2+) identical?")
    print(f"   Max difference in J2-J{DEFAULT_N_MODES}: {max_coeff_diff:.6f} nm")
    print(f"   --> {'YES' if max_coeff_diff < 0.01 else 'NO!'}")

    # The key insight: for non-uniform sampling, J1 might not equal mean exactly
    # Let's check how Z1 (piston polynomial) behaves
    x_c = X - np.mean(X)
    y_c = Y - np.mean(Y)
    R_max = np.max(np.hypot(x_c, y_c))
    r_norm = np.hypot(x_c / R_max, y_c / R_max)
    theta_norm = np.arctan2(y_c, x_c)

    Z1 = zernike_noll(1, r_norm, theta_norm)  # Piston polynomial
    print("\n4. Piston polynomial Z1 statistics:")
    print("   Z1 should be constant = 1 for all points")
    print(f"   Mean(Z1): {np.mean(Z1):.6f}")
    print(f"   Std(Z1): {np.std(Z1):.6f}")
    print(f"   Min(Z1): {np.min(Z1):.6f}")
    print(f"   Max(Z1): {np.max(Z1):.6f}")

    return filtered_rms_diff < 0.01


def test_recentering_with_real_data():
    """Test with real OP2 data if available."""
    print("\n" + "=" * 70)
    print("TEST 2: Real Data (if available)")
    print("=" * 70)

    # Look for a real study with OP2 data
    studies_dir = Path(__file__).parent.parent / "studies"
    op2_files = list(studies_dir.rglob("*.op2"))

    if not op2_files:
        print("  No OP2 files found in studies directory. Skipping real data test.")
        return None

    # Use the first one found
    op2_file = op2_files[0]
    print(f"\n  Using: {op2_file.relative_to(studies_dir.parent)}")

    try:
        extractor = ZernikeExtractor(op2_file)
        subcases = list(extractor.displacements.keys())
        print(f"  Available subcases: {subcases}")

        if len(subcases) < 2:
            print("  Need at least 2 subcases for relative comparison. Skipping.")
            return None

        # Pick two subcases
        ref_subcase = subcases[0]
        target_subcase = subcases[1]
        print(f"  Comparing: {target_subcase} vs {ref_subcase}")

        # Get raw data
        X_t, Y_t, WFE_t = extractor._build_dataframe(target_subcase)
        X_r, Y_r, WFE_r = extractor._build_dataframe(ref_subcase)

        # Build node matching (same logic as extract_relative)
        target_data = extractor.displacements[target_subcase]
        ref_data = extractor.displacements[ref_subcase]

        ref_node_to_idx = {
            int(nid): i for i, nid in enumerate(ref_data['node_ids'])
        }

        X_common, Y_common = [], []
        WFE_target_common, WFE_ref_common = [], []

        for i, nid in enumerate(target_data['node_ids']):
            nid = int(nid)
            if nid not in ref_node_to_idx:
                continue
            ref_idx = ref_node_to_idx[nid]
            geo = extractor.node_geometry.get(nid)
            if geo is None:
                continue

            X_common.append(geo[0])
            Y_common.append(geo[1])
            WFE_target_common.append(target_data['disp'][i, 2] * extractor.wfe_factor)
            WFE_ref_common.append(ref_data['disp'][ref_idx, 2] * extractor.wfe_factor)

        X = np.array(X_common)
        Y = np.array(Y_common)
        WFE_target = np.array(WFE_target_common)
        WFE_ref = np.array(WFE_ref_common)

        print(f"  Common nodes: {len(X)}")
        print(f"  Mean WFE target ({target_subcase}): {np.mean(WFE_target):.2f} nm")
        print(f"  Mean WFE ref ({ref_subcase}): {np.mean(WFE_ref):.2f} nm")

        # Option A: Recenter before
        WFE_target_centered = WFE_target - np.mean(WFE_target)
        WFE_ref_centered = WFE_ref - np.mean(WFE_ref)
        WFE_rel_A = WFE_target_centered - WFE_ref_centered

        rms_A = compute_rms_metrics(X, Y, WFE_rel_A, DEFAULT_N_MODES, DEFAULT_FILTER_ORDERS)

        # Option B: Current (no recentering)
        WFE_rel_B = WFE_target - WFE_ref
        rms_B = compute_rms_metrics(X, Y, WFE_rel_B, DEFAULT_N_MODES, DEFAULT_FILTER_ORDERS)

        print("\n  Option A (recenter before):")
        print(f"    Filtered RMS: {rms_A['filtered_rms_nm']:.4f} nm")

        print("\n  Option B (current, filter after):")
        print(f"    Filtered RMS: {rms_B['filtered_rms_nm']:.4f} nm")

        diff = abs(rms_A['filtered_rms_nm'] - rms_B['filtered_rms_nm'])
        pct_diff = 100 * diff / rms_B['filtered_rms_nm'] if rms_B['filtered_rms_nm'] > 0 else 0

        print(f"\n  Difference: {diff:.4f} nm ({pct_diff:.4f}%)")
        print(f"  --> {'EQUIVALENT' if pct_diff < 0.1 else 'NOT EQUIVALENT!'}")

        return pct_diff < 0.1

    except Exception as e:
        print(f"  Error: {e}")
        import traceback
        traceback.print_exc()
        return None


def test_annular_zernike_piston():
    """
    Specifically test whether J1 = mean for annular apertures.

    For a filled circular aperture with uniform sampling, J1 = mean exactly.
    For annular apertures or non-uniform sampling, this may not hold!
    """
    print("\n" + "=" * 70)
    print("TEST 3: Annular Aperture - Does J1 = mean?")
    print("=" * 70)

    np.random.seed(123)

    # Test different inner radii
    for r_inner in [0.0, 0.2, 0.4, 0.6]:
        n_points = 10000
        r_outer = 1.0

        # Generate points in annulus
        if r_inner > 0:
            r = np.sqrt(np.random.uniform(r_inner**2, r_outer**2, n_points))
|
||||||
|
else:
|
||||||
|
r = np.sqrt(np.random.uniform(0, r_outer**2, n_points))
|
||||||
|
theta = np.random.uniform(0, 2*np.pi, n_points)
|
||||||
|
X = r * np.cos(theta) * 500
|
||||||
|
Y = r * np.sin(theta) * 500
|
||||||
|
|
||||||
|
# Create WFE with known piston
|
||||||
|
true_piston = 1000.0 # nm
|
||||||
|
WFE = true_piston + 100 * np.random.randn(n_points)
|
||||||
|
|
||||||
|
# Fit Zernike
|
||||||
|
coeffs, _ = compute_zernike_coefficients(X, Y, WFE, DEFAULT_N_MODES)
|
||||||
|
|
||||||
|
j1_coeff = coeffs[0]
|
||||||
|
wfe_mean = np.mean(WFE)
|
||||||
|
diff = abs(j1_coeff - wfe_mean)
|
||||||
|
|
||||||
|
print(f"\n r_inner = {r_inner}:")
|
||||||
|
print(f" True piston: {true_piston:.2f} nm")
|
||||||
|
print(f" Mean(WFE): {wfe_mean:.2f} nm")
|
||||||
|
print(f" J1 coefficient: {j1_coeff:.2f} nm")
|
||||||
|
print(f" |J1 - mean|: {diff:.4f} nm")
|
||||||
|
print(f" --> {'J1 ≈ mean' if diff < 1.0 else 'J1 ≠ mean!'}")
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
print("\n" + "=" * 70)
|
||||||
|
print("ZERNIKE RECENTERING EQUIVALENCE TEST")
|
||||||
|
print("=" * 70)
|
||||||
|
print("\nQuestion: Is recentering Z per subcase equivalent to")
|
||||||
|
print(" removing piston (J1) from relative WFE?")
|
||||||
|
print("=" * 70)
|
||||||
|
|
||||||
|
# Run tests
|
||||||
|
test1_passed = test_recentering_equivalence_synthetic()
|
||||||
|
test2_passed = test_recentering_with_real_data()
|
||||||
|
test_annular_zernike_piston()
|
||||||
|
|
||||||
|
# Summary
|
||||||
|
print("\n" + "=" * 70)
|
||||||
|
print("SUMMARY")
|
||||||
|
print("=" * 70)
|
||||||
|
print(f"\nSynthetic data test: {'PASSED' if test1_passed else 'FAILED'}")
|
||||||
|
if test2_passed is not None:
|
||||||
|
print(f"Real data test: {'PASSED' if test2_passed else 'FAILED'}")
|
||||||
|
else:
|
||||||
|
print("Real data test: SKIPPED")
|
||||||
|
|
||||||
|
print("\nConclusion:")
|
||||||
|
if test1_passed:
|
||||||
|
print(" For standard Zernike on circular/annular apertures,")
|
||||||
|
print(" recentering Z before subtraction IS equivalent to")
|
||||||
|
print(" filtering J1 (piston) after Zernike fit.")
|
||||||
|
print("\n The current implementation is correct!")
|
||||||
|
else:
|
||||||
|
print(" WARNING: Recentering and J1 filtering are NOT equivalent!")
|
||||||
|
print(" Consider updating the extractor to recenter before subtraction.")
|
||||||
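The Option A / Option B comparison above rests on a simple algebraic identity: because the mean is linear, removing each subcase's mean before differencing gives the same field as differencing first and then removing the mean of the difference. A minimal standalone sketch of that identity (NumPy only; the array names are illustrative stand-ins, not extractor outputs — whether the J1 Zernike coefficient actually equals the mean is the separate question TEST 3 probes):

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(5.0, 1.0, 1000)  # stand-in for target-subcase WFE
ref = rng.normal(2.0, 1.0, 1000)     # stand-in for reference-subcase WFE

# Option A: recenter each field, then subtract
rel_a = (target - target.mean()) - (ref - ref.mean())

# Option B: subtract first, then remove the mean of the difference
rel_b = (target - ref) - (target - ref).mean()

# Identical up to floating-point error, since mean(target - ref)
# equals mean(target) - mean(ref) on a common node set
print(np.allclose(rel_a, rel_b))
```

The identity is exact for mean removal; the script's real-data test is needed only because J1 filtering coincides with mean removal solely when the sampled basis keeps piston orthogonal to the other modes.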