---
skill_id: SKILL_001
version: 2.4
last_updated: 2025-12-31
type: reference
code_dependencies:
  - optimization_engine/extractors/__init__.py
  - optimization_engine/core/method_selector.py
  - optimization_engine/utils/trial_manager.py
  - optimization_engine/utils/dashboard_db.py
requires_skills:
  - SKILL_000
---

# Atomizer Quick Reference Cheatsheet

**Version**: 2.4
**Updated**: 2025-12-31
**Purpose**: Rapid lookup for common operations. "I want X → Use Y"

---
## Task → Protocol Quick Lookup

| I want to... | Use Protocol | Key Command/Action |
|--------------|--------------|-------------------|
| Create a new optimization study | OP_01 | Place in `studies/{geometry_type}/`, generate config + runner + **README.md** |
| Run an optimization | OP_02 | `conda activate atomizer && python run_optimization.py --start` |
| Check optimization progress | OP_03 | Query `study.db` or check dashboard at `localhost:3000` |
| See best results | OP_04 | `optuna-dashboard sqlite:///study.db` or dashboard |
| Export neural training data | OP_05 | `python run_optimization.py --export-training` |
| Fix an error | OP_06 | Read error log → follow diagnostic tree |
| **Free disk space** | **OP_07** | `archive_study.bat cleanup <study> --execute` |
| Add custom physics extractor | EXT_01 | Create in `optimization_engine/extractors/` |
| Add lifecycle hook | EXT_02 | Create in `optimization_engine/plugins/` |
| Generate physics insight | SYS_16 | `python -m optimization_engine.insights generate <study>` |
| **Manage knowledge/playbook** | **SYS_17** | `from optimization_engine.context import AtomizerPlaybook` |
## Extractor Quick Reference

| Physics | Extractor | Function Call |
|---------|-----------|---------------|
| Max displacement | E1 | `extract_displacement(op2_file, subcase=1)` |
| Natural frequency | E2 | `extract_frequency(op2_file, subcase=1, mode_number=1)` |
| Von Mises stress | E3 | `extract_solid_stress(op2_file, subcase=1, element_type='cquad4')` |
| BDF mass | E4 | `extract_mass_from_bdf(bdf_file)` |
| CAD expression mass | E5 | `extract_mass_from_expression(prt_file, expression_name='p173')` |
| Field data | E6 | `FieldDataExtractor(field_file, result_column, aggregation)` |
| Stiffness (k=F/δ) | E7 | `StiffnessCalculator(...)` |
| Zernike WFE (standard) | E8 | `extract_zernike_from_op2(op2_file, bdf_file, subcase)` |
| Zernike relative | E9 | `extract_zernike_relative_rms(op2_file, bdf_file, target, ref)` |
| Zernike builder | E10 | `ZernikeObjectiveBuilder(op2_finder)` |
| Part mass + material | E11 | `extract_part_mass_material(prt_file)` → mass, volume, material |
| Zernike Analytic | E20 | `extract_zernike_analytic(op2_file, focal_length=5000.0)` |
| **Zernike OPD** | E22 | `extract_zernike_opd(op2_file)` ← **Most rigorous, RECOMMENDED** |

> **Mass extraction tip**: Always use E11 (geometry `.prt`) over E4 (BDF) for accuracy:
> pyNastran under-reports mass by ~7% on hex-dominant meshes with tet/pyramid fills.

**Full details**: See `SYS_12_EXTRACTOR_LIBRARY.md` or `modules/extractors-catalog.md`
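The tip above can be turned into a quick sanity check. This is only a sketch: `check_mass_discrepancy` is a hypothetical helper (not part of the extractor library), fed with the mass values returned by E4 and E11.

```python
def check_mass_discrepancy(bdf_mass_kg: float, prt_mass_kg: float,
                           warn_above: float = 0.05) -> dict:
    """Compare the E4 (BDF) mass against the E11 (.prt) mass.

    Treats the .prt value as ground truth, per the tip above, and flags
    meshes where pyNastran under-reports by more than `warn_above`.
    """
    under_report = (prt_mass_kg - bdf_mass_kg) / prt_mass_kg
    return {
        "mass_kg": prt_mass_kg,  # always report the E11 value
        "under_report_frac": round(under_report, 4),
        "suspect_mesh": under_report > warn_above,  # ~7% typical on hex-dominant meshes
    }
```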

---

## Protocol Selection Guide

### Single Objective Optimization
```
Question: Do you have ONE goal to minimize/maximize?
├─ Yes, simple problem (smooth, <10 params)
│  └─► Protocol 10 + CMA-ES or GP-BO sampler
│
├─ Yes, complex problem (noisy, many params)
│  └─► Protocol 10 + TPE sampler
│
└─ Not sure about problem characteristics?
   └─► Protocol 10 with adaptive characterization (default)
```

### Multi-Objective Optimization
```
Question: Do you have 2-3 competing goals?
├─ Yes (e.g., minimize mass AND minimize stress)
│  └─► Protocol 11 + NSGA-II sampler
│
└─ Pareto front needed?
   └─► Protocol 11 (returns best_trials, not best_trial)
```

### Neural Network Acceleration
```
Question: Do you need >50 trials OR surrogate model?
├─ Yes, have 500+ historical samples
│  └─► SYS_16 SAT v3 (Self-Aware Turbo) - BEST RESULTS
│
├─ Yes, have 50-500 samples
│  └─► Protocol 14 with ensemble surrogate
│
└─ Training data export needed?
   └─► OP_05_EXPORT_TRAINING_DATA.md
```
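The sample-count thresholds in the tree above can be expressed as a small helper. A sketch only: `choose_acceleration_method` is not part of the engine, just an illustration of the decision logic.

```python
def choose_acceleration_method(n_historical_samples: int,
                               need_surrogate: bool = True) -> str:
    """Map historical FEA sample count to the recommended acceleration route."""
    if not need_surrogate:
        return "protocol_10"            # plain Optuna-driven search
    if n_historical_samples >= 500:
        return "sat_v3"                 # SYS_16 Self-Aware Turbo
    if n_historical_samples >= 50:
        return "protocol_14_ensemble"   # ensemble surrogate
    return "protocol_10"                # too little data to train on
```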

### SAT v3 (Self-Aware Turbo) - NEW BEST METHOD
```
When: Have 500+ historical FEA samples from prior studies
Result: V9 achieved WS=205.58 (12% better than TPE)

Key settings:
├─ n_ensemble_models: 5
├─ adaptive exploration: 15% → 8% → 3%
├─ mass_soft_threshold: 118.0 kg
├─ exploit_near_best_ratio: 0.7
└─ lbfgs_polish_trials: 10

Reference: SYS_16_SELF_AWARE_TURBO.md
```
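The 15% → 8% → 3% exploration schedule above could be sketched as a step function of the trial budget. The equal-thirds phase boundaries are an assumption for illustration; the real schedule is defined in `SYS_16_SELF_AWARE_TURBO.md`.

```python
def exploration_rate(trial_idx: int, n_trials: int) -> float:
    """SAT v3-style adaptive exploration: 15% -> 8% -> 3%.

    Assumes the budget splits into three equal phases (illustrative only).
    """
    frac = trial_idx / max(n_trials - 1, 1)
    if frac < 1 / 3:
        return 0.15   # early: broad exploration
    if frac < 2 / 3:
        return 0.08   # mid: narrowing in
    return 0.03       # late: mostly exploit near the best design
```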

---

## Configuration Quick Reference

### optimization_config.json Structure
```json
{
  "study_name": "my_study",
  "design_variables": [
    {"name": "thickness", "min": 1.0, "max": 10.0, "unit": "mm"}
  ],
  "objectives": [
    {"name": "mass", "goal": "minimize", "unit": "kg"}
  ],
  "constraints": [
    {"name": "max_stress", "type": "<=", "threshold": 250, "unit": "MPa"}
  ],
  "optimization_settings": {
    "protocol": "protocol_10_single_objective",
    "sampler": "TPESampler",
    "n_trials": 50
  },
  "simulation": {
    "model_file": "model.prt",
    "sim_file": "model.sim",
    "solver": "nastran"
  }
}
```
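A constraint entry like the one above can be evaluated with a few lines of Python. This is a sketch of the schema, not the engine's actual feasibility code; only `"<="` appears in the example, so the other comparators are assumed extensions.

```python
def is_feasible(results: dict, constraints: list) -> bool:
    """Check extracted results against a config-style `constraints` list."""
    ops = {
        "<=": lambda v, t: v <= t,
        ">=": lambda v, t: v >= t,
        "<": lambda v, t: v < t,
        ">": lambda v, t: v > t,
    }
    return all(
        ops[c["type"]](results[c["name"]], c["threshold"])
        for c in constraints
    )

# Mirrors the "constraints" block of the JSON above
stress_constraints = [{"name": "max_stress", "type": "<=", "threshold": 250}]
```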

### Sampler Quick Selection

| Sampler | Use When | Protocol |
|---------|----------|----------|
| `TPESampler` | Default, robust to noise | P10 |
| `CMAESSampler` | Smooth, unimodal problems | P10 |
| `GPSampler` | Expensive FEA, few trials | P10 |
| `NSGAIISampler` | Multi-objective (2-3 goals) | P11 |
| `RandomSampler` | Characterization phase only | P10 |
| **L-BFGS** | **Polish phase (after surrogate)** | **P14** |
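The table above reads naturally as a chooser function. A hedged sketch: `choose_sampler` is illustrative only and does not exist in the engine (SYS_15's method selector is the real implementation).

```python
def choose_sampler(n_objectives: int, smooth: bool = False,
                   expensive: bool = False, characterizing: bool = False) -> str:
    """Pick an Optuna sampler name per the selection table above."""
    if characterizing:
        return "RandomSampler"      # characterization phase only
    if n_objectives >= 2:
        return "NSGAIISampler"      # Protocol 11, 2-3 competing goals
    if expensive:
        return "GPSampler"          # expensive FEA, few trials
    if smooth:
        return "CMAESSampler"       # smooth, unimodal problems
    return "TPESampler"             # robust default
```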

### L-BFGS Gradient Optimization (NEW)

Exploits surrogate differentiability for **100-1000x faster** local refinement:

```python
from optimization_engine.core.gradient_optimizer import GradientOptimizer, run_lbfgs_polish

# Quick usage - polish from top FEA candidates
results = run_lbfgs_polish(study_dir, n_starts=20, n_iterations=100)

# Or with more control
optimizer = GradientOptimizer(surrogate, objective_weights=[5.0, 5.0, 1.0])
result = optimizer.optimize(starting_points=top_candidates, method='lbfgs')
```

**CLI usage**:
```bash
python -m optimization_engine.core.gradient_optimizer studies/my_study --n-starts 20

# Or per-study script (if available)
python run_lbfgs_polish.py --n-starts 20 --grid-then-grad
```

**When to use**: After training the surrogate, before final FEA validation.
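The `objective_weights` argument above implies a weighted-sum scalarization of the objectives. A minimal sketch of that idea, assuming simple linear weighting; the real `GradientOptimizer` may normalize or transform objectives differently.

```python
def weighted_sum(objectives: list, weights: list) -> float:
    """Scalarize multiple objective values into one number to minimize.

    Mirrors the `objective_weights=[5.0, 5.0, 1.0]` pattern shown above.
    """
    if len(objectives) != len(weights):
        raise ValueError("objectives and weights must have equal length")
    return sum(w * o for w, o in zip(weights, objectives))
```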

---

## Study File Structure

```
studies/{study_name}/
├── 1_setup/
│   ├── model/                  # NX files (.prt, .sim, .fem)
│   └── optimization_config.json
├── 2_iterations/               # Per-trial FEA folders (trial_0001/, ...)
├── 3_results/
│   ├── study.db                # Optuna SQLite database
│   ├── optimizer_state.json    # Real-time state (P13)
│   └── trial_logs/
├── README.md                   # MANDATORY: Engineering blueprint
├── STUDY_REPORT.md             # MANDATORY: Results tracking
└── run_optimization.py         # Entrypoint script
```

---
## Common Commands

```bash
# Activate environment (ALWAYS FIRST)
conda activate atomizer

# Run optimization
python run_optimization.py --start

# Run with specific trial count
python run_optimization.py --start --trials 50

# Resume interrupted optimization
python run_optimization.py --start --resume

# Export training data for neural network
python run_optimization.py --export-training

# View results in Optuna dashboard
optuna-dashboard sqlite:///3_results/study.db

# Check study status (load_study requires keyword arguments)
python -c "import optuna; s=optuna.load_study(study_name='my_study', storage='sqlite:///3_results/study.db'); print(f'Trials: {len(s.trials)}')"
```

### When to Use --resume

| Scenario | Command |
|----------|---------|
| **First run of NEW study** | `python run_optimization.py --start --trials 50` |
| **First run with SEEDING** (e.g., V15 from V14) | `python run_optimization.py --start --trials 50` |
| **Continue INTERRUPTED run** | `python run_optimization.py --start --resume` |
| **Add MORE trials to completed study** | `python run_optimization.py --start --trials 20 --resume` |

**Key insight**: `--resume` is for continuing an existing `study.db`, NOT for seeding from prior studies. Seeding happens automatically on first run when `source_studies` is configured.

---

## Disk Space Management (OP_07)

FEA studies consume massive disk space. After completion, clean up regenerable files:

### Quick Commands

```bash
# Analyze disk usage
archive_study.bat analyze studies\M1_Mirror

# Cleanup completed study (dry run first!)
archive_study.bat cleanup studies\M1_Mirror\m1_mirror_V12
archive_study.bat cleanup studies\M1_Mirror\m1_mirror_V12 --execute

# Archive to dalidou server
archive_study.bat archive studies\M1_Mirror\m1_mirror_V12 --execute

# List remote archives
archive_study.bat list
```

### What Gets Deleted vs Kept

| KEEP | DELETE |
|------|--------|
| `.op2` (Nastran results) | `.prt`, `.fem`, `.sim` (copies of master) |
| `.json` (params/metadata) | `.dat` (solver input) |
| `1_setup/` (master files) | `.f04`, `.f06`, `.log` (solver logs) |
| `3_results/` (database) | `.afm`, `.diag`, `.bak` (temp files) |
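The DELETE column above can be checked programmatically before running a cleanup. A sketch only, for a single trial folder (never point it at `1_setup/`, whose master files are kept); the `archive_study.bat` script remains the authority on what is actually removed.

```python
from pathlib import Path

# Regenerable solver by-products, per the table above
DELETE_SUFFIXES = {".prt", ".fem", ".sim", ".dat", ".f04", ".f06", ".log",
                   ".afm", ".diag", ".bak"}


def cleanup_candidates(trial_dir: Path) -> list:
    """List files in one trial folder that cleanup would remove."""
    return sorted(
        p for p in trial_dir.rglob("*")
        if p.is_file() and p.suffix.lower() in DELETE_SUFFIXES
    )
```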

### Typical Savings

| Stage | M1_Mirror Example |
|-------|-------------------|
| Full | 194 GB |
| After cleanup | 114 GB (41% saved) |
| Archived to server | 5 GB local (97% saved) |

**Full details**: `docs/protocols/operations/OP_07_DISK_OPTIMIZATION.md`

---
## LAC (Learning Atomizer Core) Commands

```bash
# View LAC statistics
python knowledge_base/lac.py stats

# Generate full LAC report
python knowledge_base/lac.py report

# View pending protocol updates
python knowledge_base/lac.py pending

# Query insights for a context
python knowledge_base/lac.py insights "bracket mass optimization"
```

### Python API Quick Reference
```python
from knowledge_base.lac import get_lac

lac = get_lac()

# Query prior knowledge
insights = lac.get_relevant_insights("bracket mass")
similar = lac.query_similar_optimizations("bracket", ["mass"])
rec = lac.get_best_method_for("bracket", n_objectives=1)

# Record learning
lac.record_insight("success_pattern", "context", "insight", confidence=0.8)

# Record optimization outcome
lac.record_optimization_outcome(study_name="...", geometry_type="...", ...)
```

---

## Error Quick Fixes

| Error | Likely Cause | Quick Fix |
|-------|--------------|-----------|
| "No module named optuna" | Wrong environment | `conda activate atomizer` |
| "NX session timeout" | Model too complex | Increase `timeout` in config |
| "OP2 file not found" | Solve failed | Check NX log for errors |
| "No feasible solutions" | Constraints too tight | Relax constraint thresholds |
| "NSGA-II requires >1 objective" | Wrong protocol | Use P10 for single-objective |
| "Expression not found" | Wrong parameter name | Verify expression names in NX |
| **All trials identical results** | **Missing `*_i.prt`** | **Copy idealized part to study folder!** |

**Full troubleshooting**: See `OP_06_TROUBLESHOOT.md`

---

## CRITICAL: NX FEM Mesh Update

**If all optimization trials produce identical results, the mesh is NOT updating!**

### Required Files for Mesh Updates
```
studies/{study}/1_setup/model/
├── Model.prt          # Geometry
├── Model_fem1_i.prt   # Idealized part ← MUST EXIST!
├── Model_fem1.fem     # FEM
└── Model_sim1.sim     # Simulation
```

### Why It Matters

The `*_i.prt` (idealized part) MUST be:
1. **Present** in the study folder
2. **Loaded** before `UpdateFemodel()` (already implemented in `solve_simulation.py`)

Without it, `UpdateFemodel()` runs but the mesh doesn't change!
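The file listing above lends itself to a preflight check before launching a run. A sketch: `mesh_update_preflight` is hypothetical (not part of the engine) and simply verifies that each required file pattern has at least one match.

```python
from pathlib import Path

# One glob pattern per entry in the listing above
REQUIRED_PATTERNS = ["*.prt", "*_i.prt", "*.fem", "*.sim"]


def mesh_update_preflight(model_dir: Path) -> list:
    """Return required patterns with NO match in 1_setup/model/.

    An empty list means the idealized part (and friends) are in place;
    a non-empty list explains why every trial would look identical.
    """
    return [pat for pat in REQUIRED_PATTERNS if not list(model_dir.glob(pat))]
```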

---

## Privilege Levels

| Level | Can Create Studies | Can Add Extractors | Can Add Protocols |
|-------|-------------------|-------------------|------------------|
| user | ✓ | ✗ | ✗ |
| power_user | ✓ | ✓ | ✗ |
| admin | ✓ | ✓ | ✓ |

---

## Dashboard URLs

| Service | URL | Purpose |
|---------|-----|---------|
| Atomizer Dashboard | `http://localhost:3000` | Real-time optimization monitoring |
| Optuna Dashboard | `http://localhost:8080` | Trial history, parameter importance |
| API Backend | `http://localhost:5000` | REST API for dashboard |

---

## Protocol Numbers Reference

| # | Name | Purpose |
|---|------|---------|
| 10 | IMSO | Intelligent Multi-Strategy Optimization (adaptive) |
| 11 | Multi-Objective | NSGA-II for Pareto optimization |
| 12 | Extractor Library | Physics extraction catalog |
| 13 | Dashboard | Real-time tracking and visualization |
| 14 | Neural | Surrogate model acceleration |
| 15 | Method Selector | Recommends optimization strategy |
| 16 | Study Insights | Physics visualizations (Zernike, stress, modal) |
| 17 | Context Engineering | ACE framework - self-improving knowledge system |

---

## Study Insights Quick Reference (SYS_16)

Generate physics-focused visualizations from FEA results.

### Available Insight Types

| Type | Purpose | Data Required |
|------|---------|---------------|
| `zernike_dashboard` | **RECOMMENDED: Unified WFE dashboard** | OP2 with displacements |
| `zernike_wfe` | WFE with Standard/OPD toggle | OP2 with displacements |
| `zernike_opd_comparison` | Compare Standard vs OPD methods | OP2 with displacements |
| `stress_field` | Von Mises stress contours | OP2 with stresses |
| `modal` | Mode shapes + frequencies | OP2 with eigenvectors |
| `thermal` | Temperature distribution | OP2 with temperatures |
| `design_space` | Parameter-objective landscape | study.db with 5+ trials |

### Zernike Method Comparison

| Method | Use | RMS Difference |
|--------|-----|----------------|
| **Standard (Z-only)** | Quick analysis | Baseline |
| **OPD (X,Y,Z)** ← RECOMMENDED | Any surface with lateral displacement | **+8-11% higher** (more accurate) |

### Commands
```bash
# List available insights for a study
python -m optimization_engine.insights list

# Generate all insights
python -m optimization_engine.insights generate studies/my_study

# Generate specific insight
python -m optimization_engine.insights generate studies/my_study --type zernike_wfe
```

### Python API
```python
from pathlib import Path

from optimization_engine.insights import get_insight, list_available_insights

study_path = Path("studies/my_study")

# Check what's available
available = list_available_insights(study_path)

# Generate Zernike WFE insight
insight = get_insight('zernike_wfe', study_path)
result = insight.generate()
print(result.html_path)  # Path to generated HTML
print(result.summary)    # Key metrics dict
```

**Output**: HTMLs saved to `{study}/3_insights/`

---

## CRITICAL: NXSolver Initialization Pattern

**NEVER pass full config dict to NXSolver. Use named parameters:**

```python
import re

# WRONG - causes TypeError
self.nx_solver = NXSolver(self.config)  # ❌

# CORRECT - use FEARunner pattern from V14/V15
nx_settings = self.config.get('nx_settings', {})
nx_install_dir = nx_settings.get('nx_install_path', 'C:\\Program Files\\Siemens\\NX2506')

# Extract version from path
version_match = re.search(r'NX(\d+)', nx_install_dir)
nastran_version = version_match.group(1) if version_match else "2506"

self.nx_solver = NXSolver(
    master_model_dir=str(self.master_model_dir),  # Path to 1_setup/model
    nx_install_dir=nx_install_dir,
    nastran_version=nastran_version,
    timeout=nx_settings.get('simulation_timeout_s', 600),
    use_iteration_folders=True,
    study_name="my_study_name"
)
```

### FEARunner Class Pattern

Always wrap NXSolver in a `FEARunner` class for:
- Lazy initialization (setup on first use)
- Clean separation of NX setup from optimization logic
- Consistent error handling

```python
from typing import Dict, Optional


class FEARunner:
    def __init__(self, config: Dict):
        self.config = config
        self.nx_solver = None
        self.master_model_dir = SETUP_DIR / "model"

    def setup(self):
        # Initialize NX and solver here
        ...

    def run_fea(self, params: Dict, trial_num: int) -> Optional[Dict]:
        if self.nx_solver is None:
            self.setup()
        # Run simulation...
```

**Reference implementations**:
- `studies/m1_mirror_adaptive_V14/run_optimization.py`
- `studies/m1_mirror_adaptive_V15/run_optimization.py`

---

## Trial Management Utilities

### TrialManager - Unified Trial Folder + DB Management

```python
from optimization_engine.utils.trial_manager import TrialManager

tm = TrialManager(study_dir)

# Start new trial (creates folder, saves params)
trial = tm.new_trial(
    params={'rib_thickness': 10.5, 'mirror_face_thickness': 17.0},
    source="turbo",
    metadata={'turbo_batch': 1, 'predicted_ws': 42.0}
)
# Returns: {'trial_id': 47, 'trial_number': 47, 'folder_path': Path(...)}

# After FEA completes
tm.complete_trial(
    trial_number=trial['trial_number'],
    objectives={'wfe_40_20': 5.63, 'mass_kg': 118.67},
    weighted_sum=42.5,
    is_feasible=True
)

# Mark failed trial
tm.fail_trial(trial_number=47, error="NX solver timeout")
```

### DashboardDB - Optuna-Compatible Database

```python
from optimization_engine.utils.dashboard_db import DashboardDB, convert_custom_to_optuna

# Create new dashboard-compatible database
db = DashboardDB(db_path, study_name="my_study")

# Log a trial
trial_id = db.log_trial(
    params={'rib_thickness': 10.5},
    objectives={'wfe_40_20': 5.63, 'mass_kg': 118.67},
    weighted_sum=42.5,
    is_feasible=True,
    state="COMPLETE"
)

# Mark best trial
db.mark_best(trial_id)

# Get summary
summary = db.get_summary()

# Convert existing custom database to Optuna format
convert_custom_to_optuna(db_path, study_name)
```

### Trial Naming Convention

```
2_iterations/
├── trial_0001/   # Zero-padded, monotonically increasing
├── trial_0002/   # NEVER reset, NEVER overwritten
└── trial_0003/
```

**Key principles**:
- Trial numbers **NEVER reset** across study lifetime
- Surrogate predictions (5K per batch) are NOT logged as trials
- Only FEA-validated results become trials
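The naming convention above implies a simple "next folder" rule. A sketch of that rule (TrialManager's own numbering, which also consults the database, is the authority):

```python
from pathlib import Path


def next_trial_folder(iterations_dir: Path) -> str:
    """Compute the next zero-padded trial folder name.

    Scans existing `trial_NNNN` folders and takes max+1, so numbers
    never reset and existing folders are never overwritten.
    """
    existing = [
        int(p.name.split("_")[1])
        for p in iterations_dir.glob("trial_*")
        if p.is_dir() and p.name.split("_")[1].isdigit()
    ]
    return f"trial_{max(existing, default=0) + 1:04d}"
```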

---

## Context Engineering Quick Reference (SYS_17)

The ACE (Agentic Context Engineering) framework enables self-improving optimization through structured knowledge capture.

### Core Components

| Component | Purpose | Key Function |
|-----------|---------|--------------|
| **AtomizerPlaybook** | Structured knowledge store | `playbook.add_insight()`, `playbook.get_context_for_task()` |
| **AtomizerReflector** | Extracts insights from outcomes | `reflector.analyze_outcome()` |
| **AtomizerSessionState** | Context isolation (exposed/isolated) | `session.get_llm_context()` |
| **FeedbackLoop** | Automated learning | `feedback.process_trial_result()` |
| **CompactionManager** | Long-session handling | `compactor.maybe_compact()` |
| **CacheMonitor** | KV-cache optimization | `optimizer.track_completion()` |

### Python API Quick Reference

```python
from pathlib import Path

from optimization_engine.context import (
    AtomizerPlaybook, AtomizerReflector, get_session,
    InsightCategory, TaskType, FeedbackLoop
)

# Load playbook
playbook = AtomizerPlaybook.load(Path("knowledge_base/playbook.json"))

# Add an insight
playbook.add_insight(
    category=InsightCategory.STRATEGY,  # str, mis, tool, cal, dom, wf
    content="CMA-ES converges faster on smooth mirror surfaces",
    tags=["mirror", "sampler", "convergence"]
)
playbook.save(Path("knowledge_base/playbook.json"))

# Get context for LLM
context = playbook.get_context_for_task(
    task_type="optimization",
    max_items=15,
    min_confidence=0.5
)

# Record feedback
playbook.record_outcome(item_id="str_001", helpful=True)

# Session state
session = get_session()
session.exposed.task_type = TaskType.RUN_OPTIMIZATION
session.add_action("Started optimization run")
llm_context = session.get_llm_context()

# Feedback loop (automated learning)
feedback = FeedbackLoop(playbook_path)
feedback.process_trial_result(
    trial_number=42,
    params={'thickness': 10.5},
    objectives={'mass': 5.2},
    is_feasible=True
)
```

### Insight Categories

| Category | Code | Use For |
|----------|------|---------|
| Strategy | `str` | Optimization approaches that work |
| Mistake | `mis` | Common errors to avoid |
| Tool | `tool` | Tool usage patterns |
| Calculation | `cal` | Formulas and calculations |
| Domain | `dom` | FEA/NX domain knowledge |
| Workflow | `wf` | Process patterns |

### Playbook Item Format

```
[str_001] helpful=5 harmful=0 :: CMA-ES converges faster on smooth surfaces
```

- `net_score = helpful - harmful`
- `confidence = helpful / (helpful + harmful)`
- Items with `net_score < -3` are pruned
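The three formulas above fit in one small function. A sketch only: `item_stats` is not part of the context module, and the 0.0 confidence for zero-feedback items is an assumption to avoid division by zero.

```python
def item_stats(helpful: int, harmful: int) -> dict:
    """Score a playbook item per the formulas above."""
    total = helpful + harmful
    net_score = helpful - harmful
    return {
        "net_score": net_score,
        "confidence": helpful / total if total else 0.0,  # assumed default
        "prune": net_score < -3,
    }
```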

### REST API Endpoints

| Endpoint | Method | Purpose |
|----------|--------|---------|
| `/api/context/playbook` | GET | Playbook summary stats |
| `/api/context/playbook/items` | GET | List items with filters |
| `/api/context/playbook/feedback` | POST | Record helpful/harmful |
| `/api/context/playbook/insights` | POST | Add new insight |
| `/api/context/playbook/prune` | POST | Remove harmful items |
| `/api/context/session` | GET | Current session state |
| `/api/context/learning/report` | GET | Comprehensive learning report |

### Dashboard URL

| Service | URL | Purpose |
|---------|-----|---------|
| Context API | `http://localhost:5000/api/context` | Playbook management |

**Full documentation**: `docs/protocols/system/SYS_17_CONTEXT_ENGINEERING.md`