# How to Create a New Study

This guide shows you how to set up a new optimization study using Atomizer's standardized directory structure.

---

## Quick Start

### 1. Copy the Study Template

```bash
cp -r templates/study_template studies/your_study_name
cd studies/your_study_name
```

### 2. Add Your CAD/FEM Model

Place your reference model files in `1_setup/model/`:

```
1_setup/model/
├── YourPart.prt          # NX CAD model
├── YourPart_sim1.sim     # NX simulation file
└── [baseline results]    # Optional: baseline FEA results
```

### 3. Run Benchmarking

Validate the baseline model before optimization:

```bash
cd ../..  # Back to Atomizer root
python optimization_engine/benchmarking.py \
    --prt "studies/your_study_name/1_setup/model/YourPart.prt" \
    --sim "studies/your_study_name/1_setup/model/YourPart_sim1.sim" \
    --output "studies/your_study_name/1_setup/benchmarking"
```

This will:
- Extract all NX expressions
- Run baseline FEA
- Extract all results (displacement, stress, etc.)
- Save benchmark data
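The benchmark's baseline values are a natural anchor when you pick design-variable bounds in the next step. A minimal sketch of one common convention, a symmetric window around the baseline (the ±50% default here is purely an illustration, not an Atomizer rule):

```python
def suggest_bounds(baseline: float, spread: float = 0.5) -> dict:
    """Suggest min/max bounds as a symmetric window around a baseline value.

    `spread` is the fractional half-width, e.g. 0.5 -> baseline +/- 50%.
    """
    return {
        "min": round(baseline * (1 - spread), 3),
        "max": round(baseline * (1 + spread), 3),
        "baseline": baseline,
    }

# Example: a 30 mm baseline with a +/-50% window
print(suggest_bounds(30.0))  # {'min': 15.0, 'max': 45.0, 'baseline': 30.0}
```

Tighter windows (e.g. `spread=0.2`) make sense once an initial exploration has narrowed the promising region.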
### 4. Create Configuration File

Copy and modify the beam optimization config as a starting point:

```bash
cp studies/simple_beam_optimization/beam_optimization_config.json \
    studies/your_study_name/your_config.json
```

Edit `your_config.json`:

```json
{
  "study_name": "your_study_name",
  "description": "Describe what you're optimizing",
  "substudy_name": "01_initial_exploration",
  "design_variables": {
    "your_param_1": {
      "type": "continuous",
      "min": 10.0,
      "max": 50.0,
      "baseline": 30.0,
      "units": "mm"
    },
    "your_param_2": {
      "type": "integer",
      "min": 5,
      "max": 15,
      "baseline": 10,
      "units": "unitless"
    }
  },
  "extractors": [
    {
      "name": "max_displacement",
      "action": "extract_displacement",
      "parameters": {"metric": "max"}
    }
  ],
  "objectives": [
    {
      "name": "minimize_displacement",
      "extractor": "max_displacement",
      "goal": "minimize",
      "weight": 1.0
    }
  ],
  "optimization_settings": {
    "n_trials": 10,
    "sampler": "TPE"
  },
  "post_processing": {
    "generate_plots": true,
    "plot_formats": ["png", "pdf"],
    "cleanup_models": false,
    "keep_top_n_models": 10
  }
}
```

### 5. Create Runner Script

Create `run_optimization.py` in the study directory:

```python
"""
Runner script for your_study_name optimization.
"""
from pathlib import Path
import sys

# Add optimization_engine to path
sys.path.insert(0, str(Path(__file__).parent.parent.parent))

from optimization_engine.runner import OptimizationRunner

if __name__ == '__main__':
    study_dir = Path(__file__).parent

    # Paths
    config_file = study_dir / "your_config.json"
    prt_file = study_dir / "1_setup" / "model" / "YourPart.prt"
    sim_file = study_dir / "1_setup" / "model" / "YourPart_sim1.sim"
    output_dir = study_dir / "2_substudies" / "01_initial_exploration"

    # Run optimization
    runner = OptimizationRunner(
        config_file=config_file,
        prt_file=prt_file,
        sim_file=sim_file,
        output_dir=output_dir
    )
    study = runner.run()

    print("\nOptimization complete!")
    print(f"Results saved to: {output_dir}")
```
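Each `objectives` entry in the configuration above pairs an extractor with a `goal` and a `weight`. If you are unsure how multiple entries collapse into one trial score, the usual pattern is a weighted sum with maximization goals negated. A hypothetical sketch (Atomizer's actual aggregation logic may differ):

```python
def scalarize(values: dict, objectives: list) -> float:
    """Combine named metric values into a single score to minimize.

    'maximize' goals are negated so that lower is always better.
    """
    total = 0.0
    for obj in objectives:
        v = values[obj["extractor"]]           # metric produced by the extractor
        sign = 1.0 if obj["goal"] == "minimize" else -1.0
        total += obj["weight"] * sign * v
    return total

# Example matching the single-objective config above
objectives = [
    {"extractor": "max_displacement", "goal": "minimize", "weight": 1.0},
]
print(scalarize({"max_displacement": 0.42}, objectives))  # 0.42
```

With several objectives, mind the scaling note in Troubleshooting below: weights only behave intuitively when the raw metric values have similar magnitudes.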
### 6. Update Study Metadata

Edit `study_metadata.json`:

```json
{
  "study_name": "your_study_name",
  "description": "Brief description",
  "created": "2025-11-17T19:00:00",
  "status": "active",
  "design_variables": ["your_param_1", "your_param_2"],
  "objectives": ["minimize_displacement"],
  "constraints": [],
  "substudies": [],
  "organization_version": "2.0"
}
```

### 7. Run First Substudy

```bash
python studies/your_study_name/run_optimization.py
```

This will:
- Create `2_substudies/01_initial_exploration/`
- Run N trials (as specified in config)
- Generate plots (if enabled)
- Save results

### 8. Document Your Substudy

Create `2_substudies/01_initial_exploration/README.md` using the template:

```bash
cp templates/substudy_README_template.md \
    studies/your_study_name/2_substudies/01_initial_exploration/README.md
```

Fill in:
- Purpose
- Configuration
- Expected outcome
- Actual results (after run completes)

### 9. Update Study Metadata

After the substudy completes, add it to `study_metadata.json`:

```json
{
  "substudies": [
    {
      "name": "01_initial_exploration",
      "created": "2025-11-17T19:00:00",
      "status": "completed",
      "trials": 10,
      "purpose": "Initial design space exploration",
      "notes": "Completed successfully"
    }
  ]
}
```

---

## Substudy Workflow

### Creating a New Substudy

When you want to run a new optimization (e.g., with different settings):

**1. Choose a Number**
- Next in sequence (02, 03, 04, etc.)

**2. Choose a Name**
- Descriptive of what changes: `02_validation_5trials`, `03_refined_search_30trials`

**3. Update Configuration**
- Modify `your_config.json` with new settings
- Update the `substudy_name` field

**4. Create Substudy README**

```bash
cp templates/substudy_README_template.md \
    studies/your_study_name/2_substudies/02_your_substudy/README.md
```

**5. Run Optimization**

```bash
python studies/your_study_name/run_optimization.py
```
**6. Document Results**
- Fill in README.md with actual results
- Update `study_metadata.json`

---

## Directory Structure Reference

```
studies/your_study_name/
│
├── 1_setup/                      # Pre-optimization
│   ├── model/                    # Reference CAD/FEM
│   │   ├── YourPart.prt
│   │   └── YourPart_sim1.sim
│   └── benchmarking/             # Baseline validation
│       ├── benchmark_results.json
│       └── BENCHMARK_REPORT.md
│
├── 2_substudies/                 # Optimization runs
│   ├── 01_initial_exploration/
│   │   ├── README.md             # Purpose, findings
│   │   ├── trial_000/
│   │   ├── trial_001/
│   │   ├── plots/                # Auto-generated
│   │   ├── history.json
│   │   └── best_trial.json
│   ├── 02_validation_5trials/
│   └── 03_refined_search_30trials/
│
├── 3_reports/                    # Study-level analysis
│   ├── SUBSTUDY_COMPARISON.md
│   └── FINAL_RECOMMENDATIONS.md
│
├── README.md                     # Study overview
├── study_metadata.json           # Metadata & substudy registry
├── your_config.json              # Main configuration
└── run_optimization.py           # Runner script
```

---

## Best Practices

### Naming Conventions

**Studies**: `lowercase_with_underscores`
- `simple_beam_optimization`
- `bracket_displacement_maximizing`
- `engine_mount_fatigue`

**Substudies**: `NN_descriptive_name_Ntrials`
- `01_initial_exploration`
- `02_validation_3trials`
- `03_full_optimization_50trials`
- `04_refined_search_promising_region`

### Substudy Numbering

- Start at 01, increment by 1
- Use two digits (01, 02, ..., 99)
- Chronological order = number order

### Documentation

**Always Create**:
- Study README.md (overview, current status)
- Substudy README.md (purpose, results)
- study_metadata.json (registry of substudies)

**Optional**:
- Detailed result analysis (OPTIMIZATION_RESULTS.md)
- Study-level comparisons (in 3_reports/)
- Lessons learned document

### Configuration Management

- Keep one main config file per study
- Modify `substudy_name` for each new substudy
- Document config changes in the substudy README
- Consider version control for config changes

### Post-Processing

Enable in config for automatic plots and cleanup:
```json
"post_processing": {
  "generate_plots": true,
  "plot_formats": ["png", "pdf"],
  "cleanup_models": true,
  "keep_top_n_models": 10,
  "cleanup_dry_run": false
}
```

**Recommended**:
- `generate_plots: true` - Always generate plots
- `cleanup_models: false` initially, `true` after validation
- `keep_top_n_models: 10` for most studies
- Use `cleanup_dry_run: true` first to preview deletions

---

## Troubleshooting

### Model Files Not Updating

**Symptom**: Design variables don't change between trials

**Solutions**:
1. Check that expression names match the config exactly
2. Verify that .exp export works: `NX_updater.get_all_expressions(use_exp_export=True)`
3. Check NX version compatibility

### Optimization Not Converging

**Symptom**: No improvement over many trials

**Solutions**:
1. Check objective scaling (are values of similar magnitude?)
2. Verify that design variable bounds are reasonable
3. Try a different sampler (TPE → Random for wide exploration)
4. Increase the trial count

### No Feasible Designs Found

**Symptom**: All trials violate constraints

**Solutions**:
1. Relax constraints
2. Expand design variable bounds
3. Adjust objective weights (prioritize meeting constraints)
4. Consider multi-stage optimization (feasibility first, then optimize)

### Plots Not Generating

**Symptom**: No plots/ directory created

**Solutions**:
1. Check the matplotlib installation: `conda install matplotlib pandas "numpy<2"`
2. Verify `post_processing.generate_plots: true` in the config
3. Check that history.json exists (use generate_history_from_trials.py if needed)
4. Look for errors in post-processing output
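If the automated plots fail, you can still inspect convergence directly from the trial history. A minimal sketch (the plain list of objective values is a stand-in input; adapt it to however your `history.json` actually stores per-trial results):

```python
import matplotlib
matplotlib.use("Agg")  # render without a display (works on headless machines)
import matplotlib.pyplot as plt

def plot_convergence(values, out_path="convergence.png"):
    """Plot per-trial objective values and the best-so-far curve (minimization)."""
    best = []
    for v in values:
        best.append(v if not best else min(best[-1], v))
    fig, ax = plt.subplots()
    ax.plot(range(len(values)), values, "o", label="trial value")
    ax.step(range(len(best)), best, where="post", label="best so far")
    ax.set_xlabel("trial")
    ax.set_ylabel("objective")
    ax.legend()
    fig.savefig(out_path)
    plt.close(fig)
    return best

# Example with dummy trial values
print(plot_convergence([3.0, 2.5, 2.8, 1.9], out_path="demo.png"))  # [3.0, 2.5, 2.5, 1.9]
```

A flat best-so-far curve over many trials is the quickest visual confirmation of the "Optimization Not Converging" symptom above.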
---

## Examples

See existing studies for reference:

- [studies/simple_beam_optimization/](../studies/simple_beam_optimization/) - Full 4D optimization with substudies
- [templates/study_template/](study_template/) - Clean template to copy

---

## Summary

**Study Creation Checklist**:
- [ ] Copy study_template
- [ ] Add CAD/FEM model to 1_setup/model/
- [ ] Run benchmarking
- [ ] Create configuration file
- [ ] Create runner script
- [ ] Update study_metadata.json
- [ ] Run first substudy (01_initial_exploration)
- [ ] Create substudy README
- [ ] Document results in study README

**For Each New Substudy**:
- [ ] Choose number and name (02_, 03_, etc.)
- [ ] Update configuration
- [ ] Create substudy README (from template)
- [ ] Run optimization
- [ ] Fill in actual results in README
- [ ] Update study_metadata.json
- [ ] Review plots and best trial

**When Study Complete**:
- [ ] Create comparison report in 3_reports/
- [ ] Write final recommendations
- [ ] Update study README with final status
- [ ] Archive or cleanup if needed