| skill_id | version | last_updated | type | code_dependencies | requires_skills |
|---|---|---|---|---|---|
| SKILL_001 | 2.0 | 2025-12-07 | reference | | |
# Atomizer Quick Reference Cheatsheet

**Version:** 2.0 | **Updated:** 2025-12-07 | **Purpose:** Rapid lookup for common operations ("I want X → Use Y")
## Task → Protocol Quick Lookup
| I want to... | Use Protocol | Key Command/Action |
|---|---|---|
| Create a new optimization study | OP_01 | Generate optimization_config.json + run_optimization.py |
| Run an optimization | OP_02 | conda activate atomizer && python run_optimization.py |
| Check optimization progress | OP_03 | Query study.db or check dashboard at localhost:3000 |
| See best results | OP_04 | optuna-dashboard sqlite:///study.db or dashboard |
| Export neural training data | OP_05 | python run_optimization.py --export-training |
| Fix an error | OP_06 | Read error log → follow diagnostic tree |
| Add custom physics extractor | EXT_01 | Create in optimization_engine/extractors/ |
| Add lifecycle hook | EXT_02 | Create in optimization_engine/plugins/ |
## Extractor Quick Reference
| Physics | Extractor | Function Call |
|---|---|---|
| Max displacement | E1 | extract_displacement(op2_file, subcase=1) |
| Natural frequency | E2 | extract_frequency(op2_file, subcase=1, mode_number=1) |
| Von Mises stress | E3 | extract_solid_stress(op2_file, subcase=1, element_type='cquad4') |
| BDF mass | E4 | extract_mass_from_bdf(bdf_file) |
| CAD expression mass | E5 | extract_mass_from_expression(prt_file, expression_name='p173') |
| Field data | E6 | FieldDataExtractor(field_file, result_column, aggregation) |
| Stiffness (k=F/δ) | E7 | StiffnessCalculator(...) |
| Zernike WFE | E8 | extract_zernike_from_op2(op2_file, bdf_file, subcase) |
| Zernike relative | E9 | extract_zernike_relative_rms(op2_file, bdf_file, target, ref) |
| Zernike builder | E10 | ZernikeObjectiveBuilder(op2_finder) |
| Part mass + material | E11 | extract_part_mass_material(prt_file) → mass, volume, material |
Full details: see `SYS_12_EXTRACTOR_LIBRARY.md` or `modules/extractors-catalog.md`
## Protocol Selection Guide

### Single Objective Optimization

Question: Do you have ONE goal to minimize/maximize?

```
├─ Yes, simple problem (smooth, <10 params)
│   └─► Protocol 10 + CMA-ES or GP-BO sampler
│
├─ Yes, complex problem (noisy, many params)
│   └─► Protocol 10 + TPE sampler
│
└─ Not sure about problem characteristics?
    └─► Protocol 10 with adaptive characterization (default)
```
### Multi-Objective Optimization

Question: Do you have 2-3 competing goals?

```
├─ Yes (e.g., minimize mass AND minimize stress)
│   └─► Protocol 11 + NSGA-II sampler
│
└─ Pareto front needed?
    └─► Protocol 11 (returns best_trials, not best_trial)
```
### Neural Network Acceleration

Question: Do you need >50 trials OR a surrogate model?

```
├─ Yes
│   └─► Protocol 14 (configure surrogate_settings in config)
│
└─ Training data export needed?
    └─► OP_05_EXPORT_TRAINING_DATA.md
```
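The three decision trees above can be condensed into a single lookup. A minimal sketch of that logic, where the function name and argument names are illustrative (only the branches and the `>50 trials` threshold come from this cheatsheet, not from Atomizer's code):

```python
def select_protocol(n_objectives, n_params, smooth=None,
                    n_trials=50, need_surrogate=False):
    """Map problem characteristics to a protocol + sampler (sketch of the
    decision trees above; returns (protocol, sampler))."""
    # Neural acceleration: >50 trials OR an explicit surrogate request
    if need_surrogate or n_trials > 50:
        return ("protocol_14_neural", None)
    # 2-3 competing goals -> Pareto optimization with NSGA-II
    if n_objectives >= 2:
        return ("protocol_11_multi_objective", "NSGAIISampler")
    # Unknown problem characteristics -> adaptive characterization (default)
    if smooth is None:
        return ("protocol_10_single_objective", "adaptive")
    # Smooth, low-dimensional -> CMA-ES; otherwise TPE handles noise/many params
    if smooth and n_params < 10:
        return ("protocol_10_single_objective", "CMAESSampler")
    return ("protocol_10_single_objective", "TPESampler")
```

This is only a mnemonic for the tables above; the real protocol dispatch lives in the Atomizer protocols themselves.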
## Configuration Quick Reference

### `optimization_config.json` Structure

```json
{
  "study_name": "my_study",
  "design_variables": [
    {"name": "thickness", "min": 1.0, "max": 10.0, "unit": "mm"}
  ],
  "objectives": [
    {"name": "mass", "goal": "minimize", "unit": "kg"}
  ],
  "constraints": [
    {"name": "max_stress", "type": "<=", "threshold": 250, "unit": "MPa"}
  ],
  "optimization_settings": {
    "protocol": "protocol_10_single_objective",
    "sampler": "TPESampler",
    "n_trials": 50
  },
  "simulation": {
    "model_file": "model.prt",
    "sim_file": "model.sim",
    "solver": "nastran"
  }
}
```
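A config like the one above can be sanity-checked before launching a run. A minimal standard-library sketch; the required keys mirror the structure shown, but the helper name and the exact checks are illustrative, not part of Atomizer:

```python
import json

# Sections shown in the structure above; "constraints" is treated as optional here.
REQUIRED_KEYS = {"study_name", "design_variables", "objectives",
                 "optimization_settings", "simulation"}

def validate_config(path):
    """Load optimization_config.json and check it has the sections shown above."""
    with open(path) as f:
        cfg = json.load(f)
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise ValueError(f"optimization_config.json missing: {sorted(missing)}")
    # Each design variable needs a usable [min, max] range
    for var in cfg["design_variables"]:
        if var["min"] >= var["max"]:
            raise ValueError(f"design variable {var['name']!r}: min >= max")
    return cfg
```

Catching a missing section or an inverted range here is much cheaper than discovering it mid-optimization after several FEA solves.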
### Sampler Quick Selection

| Sampler | Use When | Protocol |
|---|---|---|
| `TPESampler` | Default, robust to noise | P10 |
| `CMAESSampler` | Smooth, unimodal problems | P10 |
| `GPSampler` | Expensive FEA, few trials | P10 |
| `NSGAIISampler` | Multi-objective (2-3 goals) | P11 |
| `RandomSampler` | Characterization phase only | P10 |
## Study File Structure

```
studies/{study_name}/
├── 1_setup/
│   ├── model/                   # NX files (.prt, .sim, .fem)
│   └── optimization_config.json
├── 2_results/
│   ├── study.db                 # Optuna SQLite database
│   ├── optimizer_state.json     # Real-time state (P13)
│   └── trial_logs/
├── README.md                    # MANDATORY: Engineering blueprint
├── STUDY_REPORT.md              # MANDATORY: Results tracking
└── run_optimization.py          # Entrypoint script
```
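New studies can be scaffolded to match the layout above. A small sketch using only `pathlib`; the function name is hypothetical and it only creates empty placeholders (the model files come from NX):

```python
from pathlib import Path

def scaffold_study(root, study_name):
    """Create the studies/{study_name}/ layout shown above (empty placeholders)."""
    study = Path(root) / "studies" / study_name
    for sub in ("1_setup/model", "2_results/trial_logs"):
        (study / sub).mkdir(parents=True, exist_ok=True)
    # README.md and STUDY_REPORT.md are MANDATORY per the layout above
    for name in ("README.md", "STUDY_REPORT.md", "run_optimization.py"):
        (study / name).touch()
    return study
```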
## Common Commands

```bash
# Activate environment (ALWAYS FIRST)
conda activate atomizer

# Run optimization
python run_optimization.py

# Run with specific trial count
python run_optimization.py --n-trials 100

# Resume interrupted optimization
python run_optimization.py --resume

# Export training data for neural network
python run_optimization.py --export-training

# View results in Optuna dashboard
optuna-dashboard sqlite:///2_results/study.db

# Check study status
python -c "import optuna; s=optuna.load_study(study_name='my_study', storage='sqlite:///2_results/study.db'); print(f'Trials: {len(s.trials)}')"
```
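When `optuna` itself isn't importable but `study.db` is on disk, the trial count can also be read with the standard library. A sketch, assuming Optuna's default RDB schema where trials are rows in a `trials` table (the helper name is illustrative):

```python
import sqlite3

def count_trials(db_path):
    """Count rows in Optuna's `trials` table (default SQLite RDB schema)."""
    with sqlite3.connect(db_path) as conn:
        (n,) = conn.execute("SELECT COUNT(*) FROM trials").fetchone()
    return n
```

Useful for quick shell checks on a machine without the `atomizer` environment; for anything beyond counting, load the study through the Optuna API instead of poking at the schema.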
## LAC (Learning Atomizer Core) Commands

```bash
# View LAC statistics
python knowledge_base/lac.py stats

# Generate full LAC report
python knowledge_base/lac.py report

# View pending protocol updates
python knowledge_base/lac.py pending

# Query insights for a context
python knowledge_base/lac.py insights "bracket mass optimization"
```
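The CLI surface above maps naturally onto `argparse` subcommands. An interface-only sketch (this mirrors the commands listed here; it is not `lac.py`'s actual implementation):

```python
import argparse

def build_lac_parser():
    """Build a parser mirroring the lac.py subcommands listed above."""
    parser = argparse.ArgumentParser(prog="lac.py")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("stats", help="View LAC statistics")
    sub.add_parser("report", help="Generate full LAC report")
    sub.add_parser("pending", help="View pending protocol updates")
    insights = sub.add_parser("insights", help="Query insights for a context")
    insights.add_argument("context", help='free-text context, e.g. "bracket mass optimization"')
    return parser
```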
## Python API Quick Reference

```python
from knowledge_base.lac import get_lac

lac = get_lac()

# Query prior knowledge
insights = lac.get_relevant_insights("bracket mass")
similar = lac.query_similar_optimizations("bracket", ["mass"])
rec = lac.get_best_method_for("bracket", n_objectives=1)

# Record learning
lac.record_insight("success_pattern", "context", "insight", confidence=0.8)

# Record optimization outcome
lac.record_optimization_outcome(study_name="...", geometry_type="...", ...)
```
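LAC persists to plain JSONL rather than a database. A minimal self-contained sketch of that storage pattern, with `record_insight` / `get_relevant_insights` shaped like the API above; the class name, field names, and substring matching are illustrative, not LAC's actual schema:

```python
import json
from pathlib import Path

class JsonlInsightStore:
    """Append-only JSONL store, one insight per line (illustrative sketch)."""

    def __init__(self, path):
        self.path = Path(path)

    def record_insight(self, kind, context, insight, confidence=0.8):
        # Appending a line is atomic enough for single-writer use; no DB needed
        entry = {"kind": kind, "context": context,
                 "insight": insight, "confidence": confidence}
        with self.path.open("a") as f:
            f.write(json.dumps(entry) + "\n")

    def get_relevant_insights(self, query):
        # Naive case-insensitive substring match on the recorded context
        if not self.path.exists():
            return []
        with self.path.open() as f:
            entries = [json.loads(line) for line in f]
        return [e for e in entries if query.lower() in e["context"].lower()]
```

The appeal of the JSONL pattern is exactly what the cheatsheet leans on: human-readable, greppable, and appendable with no schema migrations.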
## Error Quick Fixes

| Error | Likely Cause | Quick Fix |
|---|---|---|
| "No module named optuna" | Wrong environment | `conda activate atomizer` |
| "NX session timeout" | Model too complex | Increase timeout in config |
| "OP2 file not found" | Solve failed | Check NX log for errors |
| "No feasible solutions" | Constraints too tight | Relax constraint thresholds |
| "NSGA-II requires >1 objective" | Wrong protocol | Use P10 for single-objective |
| "Expression not found" | Wrong parameter name | Verify expression names in NX |
| All trials identical results | Missing `*_i.prt` | Copy idealized part to study folder! |
Full troubleshooting: see `OP_06_TROUBLESHOOT.md`
## CRITICAL: NX FEM Mesh Update

**If all optimization trials produce identical results, the mesh is NOT updating!**
### Required Files for Mesh Updates

```
studies/{study}/1_setup/model/
├── Model.prt           # Geometry
├── Model_fem1_i.prt    # Idealized part ← MUST EXIST!
├── Model_fem1.fem      # FEM
└── Model_sim1.sim      # Simulation
```
### Why It Matters

The `*_i.prt` (idealized part) MUST be:

- Present in the study folder
- Loaded before `UpdateFemodel()` (already implemented in `solve_simulation.py`)

Without it, `UpdateFemodel()` runs but the mesh doesn't change!
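This failure mode is cheap to catch before launching a study: just verify the four files exist. A preflight sketch using `pathlib`; the helper name and the `base="Model"` stem are illustrative (adapt to your model's actual naming):

```python
from pathlib import Path

def check_mesh_update_files(model_dir, base="Model"):
    """Return the list of missing files needed for mesh updates.

    Warns specifically about the *_i.prt idealized part, since its
    absence silently produces identical results on every trial.
    """
    model_dir = Path(model_dir)
    required = [f"{base}.prt", f"{base}_fem1_i.prt",
                f"{base}_fem1.fem", f"{base}_sim1.sim"]
    missing = [name for name in required if not (model_dir / name).exists()]
    if f"{base}_fem1_i.prt" in missing:
        print("WARNING: idealized part missing -- trials will return identical results")
    return missing
```

Running a check like this at study setup turns a confusing "all trials identical" debugging session into a one-line fix.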
## Privilege Levels
| Level | Can Create Studies | Can Add Extractors | Can Add Protocols |
|---|---|---|---|
| user | ✓ | ✗ | ✗ |
| power_user | ✓ | ✓ | ✗ |
| admin | ✓ | ✓ | ✓ |
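The table above is small enough to encode directly. An illustrative mapping (the dict, function, and action names are not Atomizer's actual API):

```python
# Privilege table from above, as data; action names are illustrative
PRIVILEGES = {
    "user": {"create_study"},
    "power_user": {"create_study", "add_extractor"},
    "admin": {"create_study", "add_extractor", "add_protocol"},
}

def can(level, action):
    """Return True if `level` is allowed to perform `action` per the table above."""
    return action in PRIVILEGES.get(level, set())
```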
## Dashboard URLs

| Service | URL | Purpose |
|---|---|---|
| Atomizer Dashboard | http://localhost:3000 | Real-time optimization monitoring |
| Optuna Dashboard | http://localhost:8080 | Trial history, parameter importance |
| API Backend | http://localhost:5000 | REST API for dashboard |
## Protocol Numbers Reference
| # | Name | Purpose |
|---|---|---|
| 10 | IMSO | Intelligent Multi-Strategy Optimization (adaptive) |
| 11 | Multi-Objective | NSGA-II for Pareto optimization |
| 12 | - | (Reserved) |
| 13 | Dashboard | Real-time tracking and visualization |
| 14 | Neural | Surrogate model acceleration |