Reorganized simple_beam_optimization study and created templates for future
studies following best practices for clarity, chronology, and self-documentation.
## Study Reorganization (simple_beam_optimization)
**New Directory Structure**:
```
studies/simple_beam_optimization/
├── 1_setup/ # Pre-optimization setup
│ ├── model/ # Reference CAD/FEM model
│ └── benchmarking/ # Baseline validation results
├── 2_substudies/ # Optimization runs (numbered chronologically)
│ ├── 01_initial_exploration/
│ ├── 02_validation_3d_3trials/
│ ├── 03_validation_4d_3trials/
│ └── 04_full_optimization_50trials/
└── 3_reports/ # Study-level analysis
└── COMPREHENSIVE_BENCHMARK_RESULTS.md
```
**Key Changes**:
1. **Numbered Substudies**: 01_, 02_, 03_, 04_ indicate chronological order
2. **Reorganized Setup**: model/ and benchmarking/ moved to 1_setup/
3. **Centralized Reports**: Study-level docs moved to 3_reports/
4. **Substudy Documentation**: Each substudy has README.md explaining purpose/results
## Updated Metadata
**study_metadata.json** (v2.0):
- Tracks all 4 substudies with creation date, status, purpose
- Includes result summaries (best objective, feasible count)
- Documents new organization version
**Substudies Documented**:
- 01_initial_exploration - Initial design space exploration
- 02_validation_3d_3trials - Validate 3D parameter updates
- 03_validation_4d_3trials - Validate 4D updates including hole_count
- 04_full_optimization_50trials - Full 50-trial optimization
## Templates for Future Studies
**templates/study_template/** - Complete study structure:
- README.md template with study overview format
- study_metadata.json template with v2.0 schema
- Pre-created 1_setup/, 2_substudies/, 3_reports/ directories
**templates/substudy_README_template.md** - Standardized substudy documentation:
- Purpose and hypothesis
- Configuration changes from previous run
- Expected vs actual results
- Validation checklist
- Lessons learned
- Next steps
**templates/HOW_TO_CREATE_A_STUDY.md** - Complete guide:
- Quick start (9 steps from template to first run)
- Substudy workflow
- Directory structure reference
- Naming conventions
- Best practices
- Troubleshooting guide
- Examples
## Benefits
**Clarity**:
- Numbered substudies show chronological progression (01 → 02 → 03 → 04)
- Clear separation: setup vs. optimization runs vs. analysis
- Self-documenting via substudy READMEs
**Discoverability**:
- study_metadata.json provides complete substudy registry
- Each substudy README explains what was tested and why
- Easy to find results for specific runs
**Scalability**:
- Works for small studies (3 substudies) or large studies (50+)
- Chronological numbering scales to 99 substudies
- Template system makes new studies quick to set up
**Reproducibility**:
- Each substudy documents configuration changes
- Purpose and results clearly stated
- Lessons learned captured for future reference
## Implementation Details
**reorganize_study.py** - Migration script:
- Handles locked files gracefully
- Moves files to new structure
- Provides clear progress reporting
- Safe to run multiple times
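The re-runnable, lock-tolerant behaviour can be sketched like this; the move table and messages are illustrative, not the actual script:

```python
import shutil
from pathlib import Path

# Old layout -> v2.0 layout; the real script derives these from the study.
MOVES = [
    ("model", "1_setup/model"),
    ("substudies/benchmarking", "1_setup/benchmarking"),
    ("substudies/initial_exploration", "2_substudies/01_initial_exploration"),
]

def migrate(study_dir: Path) -> None:
    for old, new in MOVES:
        src, dst = study_dir / old, study_dir / new
        if not src.exists():
            # Already moved (or never present): skipping makes
            # the script safe to run multiple times.
            print(f"skip   {old} (not found)")
            continue
        dst.parent.mkdir(parents=True, exist_ok=True)
        try:
            shutil.move(str(src), str(dst))
            print(f"moved  {old} -> {new}")
        except PermissionError:
            # Locked file (e.g. open in NX): report and continue.
            print(f"LOCKED {old}: close the file and re-run")
```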
**Organization Version**: 2.0
- Tracked in study_metadata.json
- Future studies will use this structure by default
- Existing studies can migrate or keep current structure
## Files Added
- templates/study_template/ - Complete study template
- templates/substudy_README_template.md - Substudy documentation template
- templates/HOW_TO_CREATE_A_STUDY.md - Comprehensive creation guide
- reorganize_study.py - Migration script for existing studies
## Files Reorganized (simple_beam_optimization)
**Moved to 1_setup/**:
- model/ → 1_setup/model/ (CAD/FEM reference files)
- substudies/benchmarking/ → 1_setup/benchmarking/
- baseline_validation.json → 1_setup/
**Renamed and Moved to 2_substudies/**:
- substudies/initial_exploration/ → 2_substudies/01_initial_exploration/
- substudies/validation_3trials/ → 2_substudies/02_validation_3d_3trials/
- substudies/validation_4d_3trials/ → 2_substudies/03_validation_4d_3trials/
- substudies/full_optimization_50trials/ → 2_substudies/04_full_optimization_50trials/
**Moved to 3_reports/**:
- COMPREHENSIVE_BENCHMARK_RESULTS.md → 3_reports/
**Substudy-Specific Docs** (moved to substudy directories):
- OPTIMIZATION_RESULTS_50TRIALS.md → 2_substudies/04_full_optimization_50trials/OPTIMIZATION_RESULTS.md
## Documentation Created
Each substudy now has README.md documenting:
- **01_initial_exploration**: Initial exploration purpose
- **02_validation_3d_3trials**: 3D parameter update validation
- **03_validation_4d_3trials**: hole_count validation success
- **04_full_optimization_50trials**: Full results, no feasible designs found
## Next Steps
**For Future Studies**:
1. Copy templates/study_template/
2. Follow templates/HOW_TO_CREATE_A_STUDY.md
3. Use numbered substudies (01_, 02_, ...)
4. Document each substudy with README.md
**For Existing Studies**:
- Can migrate using reorganize_study.py
- Or apply organization v2.0 to new substudies only
- See docs/STUDY_ORGANIZATION.md for migration guide
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
# How to Create a New Study

This guide shows you how to set up a new optimization study using Atomizer's standardized directory structure.
## Quick Start

### 1. Copy the Study Template

```
cp -r templates/study_template studies/your_study_name
cd studies/your_study_name
```
### 2. Add Your CAD/FEM Model

Place your reference model files in `1_setup/model/`:

```
1_setup/model/
├── YourPart.prt          # NX CAD model
├── YourPart_sim1.sim     # NX simulation file
└── [baseline results]    # Optional: baseline FEA results
```
### 3. Run Benchmarking

Validate the baseline model before optimization:

```
cd ../..  # Back to Atomizer root
python optimization_engine/benchmarking.py \
    --prt "studies/your_study_name/1_setup/model/YourPart.prt" \
    --sim "studies/your_study_name/1_setup/model/YourPart_sim1.sim" \
    --output "studies/your_study_name/1_setup/benchmarking"
```
This will:
- Extract all NX expressions
- Run baseline FEA
- Extract all results (displacement, stress, etc.)
- Save benchmark data
### 4. Create Configuration File

Copy and modify the beam optimization config as a starting point:

```
cp studies/simple_beam_optimization/beam_optimization_config.json \
   studies/your_study_name/your_config.json
```
Edit `your_config.json`:

```
{
  "study_name": "your_study_name",
  "description": "Describe what you're optimizing",
  "substudy_name": "01_initial_exploration",
  "design_variables": {
    "your_param_1": {
      "type": "continuous",
      "min": 10.0,
      "max": 50.0,
      "baseline": 30.0,
      "units": "mm"
    },
    "your_param_2": {
      "type": "integer",
      "min": 5,
      "max": 15,
      "baseline": 10,
      "units": "unitless"
    }
  },
  "extractors": [
    {
      "name": "max_displacement",
      "action": "extract_displacement",
      "parameters": {"metric": "max"}
    }
  ],
  "objectives": [
    {
      "name": "minimize_displacement",
      "extractor": "max_displacement",
      "goal": "minimize",
      "weight": 1.0
    }
  ],
  "optimization_settings": {
    "n_trials": 10,
    "sampler": "TPE"
  },
  "post_processing": {
    "generate_plots": true,
    "plot_formats": ["png", "pdf"],
    "cleanup_models": false,
    "keep_top_n_models": 10
  }
}
```
### 5. Create Runner Script

Create `run_optimization.py` in the study directory:

```
"""
Runner script for your_study_name optimization.
"""
from pathlib import Path
import sys

# Add optimization_engine to path
sys.path.insert(0, str(Path(__file__).parent.parent.parent))

from optimization_engine.runner import OptimizationRunner

if __name__ == '__main__':
    study_dir = Path(__file__).parent

    # Paths
    config_file = study_dir / "your_config.json"
    prt_file = study_dir / "1_setup" / "model" / "YourPart.prt"
    sim_file = study_dir / "1_setup" / "model" / "YourPart_sim1.sim"
    output_dir = study_dir / "2_substudies" / "01_initial_exploration"

    # Run optimization
    runner = OptimizationRunner(
        config_file=config_file,
        prt_file=prt_file,
        sim_file=sim_file,
        output_dir=output_dir,
    )
    study = runner.run()

    print("\nOptimization complete!")
    print(f"Results saved to: {output_dir}")
```
### 6. Update Study Metadata

Edit `study_metadata.json`:

```
{
  "study_name": "your_study_name",
  "description": "Brief description",
  "created": "2025-11-17T19:00:00",
  "status": "active",
  "design_variables": ["your_param_1", "your_param_2"],
  "objectives": ["minimize_displacement"],
  "constraints": [],
  "substudies": [],
  "organization_version": "2.0"
}
```
### 7. Run First Substudy

```
python studies/your_study_name/run_optimization.py
```

This will:
- Create `2_substudies/01_initial_exploration/`
- Run N trials (as specified in the config)
- Generate plots (if enabled)
- Save results
### 8. Document Your Substudy

Create `2_substudies/01_initial_exploration/README.md` using the template:

```
cp templates/substudy_README_template.md \
   studies/your_study_name/2_substudies/01_initial_exploration/README.md
```
Fill in:
- Purpose
- Configuration
- Expected outcome
- Actual results (after run completes)
### 9. Update Study Metadata

After the substudy completes, add it to `study_metadata.json`:

```
{
  "substudies": [
    {
      "name": "01_initial_exploration",
      "created": "2025-11-17T19:00:00",
      "status": "completed",
      "trials": 10,
      "purpose": "Initial design space exploration",
      "notes": "Completed successfully"
    }
  ]
}
```
## Substudy Workflow

### Creating a New Substudy

When you want to run a new optimization (e.g., with different settings):

1. **Choose a number**: next in sequence (02, 03, 04, etc.)
2. **Choose a name**: descriptive of what changes, e.g. `02_validation_5trials`, `03_refined_search_30trials`
3. **Update the configuration**:
   - Modify `your_config.json` with the new settings
   - Update the `substudy_name` field
4. **Create the substudy README**:
   ```
   cp templates/substudy_README_template.md \
      studies/your_study_name/2_substudies/02_your_substudy/README.md
   ```
5. **Run the optimization**:
   ```
   python studies/your_study_name/run_optimization.py
   ```
6. **Document the results**:
   - Fill in README.md with actual results
   - Update `study_metadata.json`
## Directory Structure Reference

```
studies/your_study_name/
│
├── 1_setup/                       # Pre-optimization
│   ├── model/                     # Reference CAD/FEM
│   │   ├── YourPart.prt
│   │   └── YourPart_sim1.sim
│   └── benchmarking/              # Baseline validation
│       ├── benchmark_results.json
│       └── BENCHMARK_REPORT.md
│
├── 2_substudies/                  # Optimization runs
│   ├── 01_initial_exploration/
│   │   ├── README.md              # Purpose, findings
│   │   ├── trial_000/
│   │   ├── trial_001/
│   │   ├── plots/                 # Auto-generated
│   │   ├── history.json
│   │   └── best_trial.json
│   ├── 02_validation_5trials/
│   └── 03_refined_search_30trials/
│
├── 3_reports/                     # Study-level analysis
│   ├── SUBSTUDY_COMPARISON.md
│   └── FINAL_RECOMMENDATIONS.md
│
├── README.md                      # Study overview
├── study_metadata.json            # Metadata & substudy registry
├── your_config.json               # Main configuration
└── run_optimization.py            # Runner script
```
## Best Practices

### Naming Conventions

**Studies**: `lowercase_with_underscores`
- `simple_beam_optimization`
- `bracket_displacement_maximizing`
- `engine_mount_fatigue`

**Substudies**: `NN_descriptive_name_Ntrials`
- `01_initial_exploration`
- `02_validation_3trials`
- `03_full_optimization_50trials`
- `04_refined_search_promising_region`
### Substudy Numbering
- Start at 01, increment by 1
- Use two digits (01, 02, ..., 99)
- Chronological order = number order
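A small helper can compute the next substudy name from the existing directories; this is a sketch under the numbering convention above, not part of the Atomizer toolkit:

```python
import re
from pathlib import Path

def next_substudy_name(substudies_dir: Path, description: str) -> str:
    """Return the next 'NN_description' name by scanning the
    two-digit prefixes already present under 2_substudies/."""
    numbers = [
        int(m.group(1))
        for p in substudies_dir.iterdir() if p.is_dir()
        if (m := re.match(r"(\d{2})_", p.name))
    ]
    # Start at 01 for an empty directory, otherwise increment the max.
    return f"{max(numbers, default=0) + 1:02d}_{description}"
```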
### Documentation

**Always create**:
- Study README.md (overview, current status)
- Substudy README.md (purpose, results)
- study_metadata.json (registry of substudies)

**Optional**:
- Detailed result analysis (OPTIMIZATION_RESULTS.md)
- Study-level comparisons (in 3_reports/)
- Lessons-learned document
### Configuration Management

- Keep one main config file per study
- Modify `substudy_name` for each new substudy
- Document config changes in the substudy README
- Consider version control for config changes
### Post-Processing

Enable in the config for automatic plots and cleanup:

```
"post_processing": {
  "generate_plots": true,
  "plot_formats": ["png", "pdf"],
  "cleanup_models": true,
  "keep_top_n_models": 10,
  "cleanup_dry_run": false
}
```

Recommended:
- `generate_plots: true`: always generate plots
- `cleanup_models: false` initially, `true` after validation
- `keep_top_n_models: 10` for most studies
- Use `cleanup_dry_run: true` first to preview deletions
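The keep-top-N cleanup with a dry-run preview can be sketched as follows; the function and the `*.prt` glob are illustrative assumptions, not Atomizer's actual post-processing code:

```python
from pathlib import Path

def cleanup_models(ranked_trial_dirs, keep_top_n=10, dry_run=True):
    """Delete model files from all but the top-N trials.
    ranked_trial_dirs: trial directories ordered best-first."""
    affected = []
    for trial_dir in ranked_trial_dirs[keep_top_n:]:
        for model in Path(trial_dir).glob("*.prt"):
            affected.append(model)
            if dry_run:
                # Preview only: report what would be removed.
                print(f"[dry run] would delete {model}")
            else:
                model.unlink()
    return affected
```

Running once with `dry_run=True`, inspecting the printed list, and then re-running with `dry_run=False` mirrors the recommended workflow above.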
## Troubleshooting

### Model Files Not Updating

**Symptom**: Design variables don't change between trials

**Solutions**:
- Check that expression names match the config exactly
- Verify the .exp export works: `NX_updater.get_all_expressions(use_exp_export=True)`
- Check NX version compatibility
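A quick sanity check can catch name mismatches before a run; the helper below is hypothetical (not part of Atomizer) and assumes the config layout shown in step 4:

```python
import json

def find_missing_expressions(config_path, model_expression_names):
    """Report design variables in the config that have no matching
    expression name in the NX model."""
    with open(config_path) as f:
        config = json.load(f)
    wanted = set(config["design_variables"])
    return sorted(wanted - set(model_expression_names))
```

Feed it the names returned by `NX_updater.get_all_expressions(...)`; a non-empty result points at the variables to fix in the config.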
### Optimization Not Converging

**Symptom**: No improvement over many trials

**Solutions**:
- Check objective scaling (are values of similar magnitude?)
- Verify design variable bounds are reasonable
- Try a different sampler (TPE → Random for wide exploration)
- Increase the trial count
### No Feasible Designs Found

**Symptom**: All trials violate constraints

**Solutions**:
- Relax constraints
- Expand design variable bounds
- Adjust objective weights (prioritize meeting constraints)
- Consider multi-stage optimization (feasibility first, then optimize)
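One simple way to prioritize meeting constraints is to add a violation penalty to the objective, so the sampler is pulled toward the feasible region before fine-tuning. A generic sketch (not Atomizer's actual objective code; the weight is an assumption to tune):

```python
def penalized_objective(objective_value, constraint_violations, penalty_weight=1e3):
    """Combine the raw objective with a penalty on constraint violations.
    constraint_violations: g(x) values, where g(x) <= 0 means feasible."""
    # Only positive (violated) amounts contribute to the penalty.
    penalty = sum(max(0.0, g) for g in constraint_violations)
    return objective_value + penalty_weight * penalty
```

With a large `penalty_weight`, early trials are effectively a feasibility search; once trials satisfy all constraints, the penalty term vanishes and the raw objective drives the optimization.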
### Plots Not Generating

**Symptom**: No plots/ directory created

**Solutions**:
- Check the matplotlib installation: `conda install matplotlib pandas "numpy<2"`
- Verify `post_processing.generate_plots: true` in the config
- Check that history.json exists (use generate_history_from_trials.py if needed)
- Look for errors in the post-processing output
## Examples

See existing studies for reference:
- `studies/simple_beam_optimization/`: full 4D optimization with substudies
- `templates/study_template/`: clean template to copy
## Summary

**Study Creation Checklist**:
- Copy study_template
- Add CAD/FEM model to 1_setup/model/
- Run benchmarking
- Create configuration file
- Create runner script
- Update study_metadata.json
- Run first substudy (01_initial_exploration)
- Create substudy README
- Document results in study README
**For Each New Substudy**:
- Choose number and name (02_, 03_, etc.)
- Update configuration
- Create substudy README (from template)
- Run optimization
- Fill in actual results in README
- Update study_metadata.json
- Review plots and best trial
**When Study Complete**:
- Create comparison report in 3_reports/
- Write final recommendations
- Update study README with final status
- Archive or cleanup if needed