Context Engineering API Reference
Version: 1.0
Updated: 2025-12-29
Module: optimization_engine.context
This document provides complete API documentation for the Atomizer Context Engineering (ACE) framework.
Table of Contents
- Module Overview
- Core Classes
- Session Management
- Analysis & Learning
- Optimization
- Integration
- REST API
Module Overview
Import Patterns
# Full import
from optimization_engine.context import (
# Core playbook
AtomizerPlaybook,
PlaybookItem,
InsightCategory,
# Session management
AtomizerSessionState,
ExposedState,
IsolatedState,
TaskType,
get_session,
# Analysis
AtomizerReflector,
OptimizationOutcome,
InsightCandidate,
# Learning
FeedbackLoop,
FeedbackLoopFactory,
# Optimization
CompactionManager,
ContextEvent,
EventType,
ContextBudgetManager,
ContextCacheOptimizer,
CacheStats,
StablePrefixBuilder,
# Integration
ContextEngineeringMixin,
ContextAwareRunner,
)
# Convenience imports
from optimization_engine.context import AtomizerPlaybook, get_session
Core Classes
AtomizerPlaybook
The central knowledge store for persistent learning across sessions.
Constructor
AtomizerPlaybook(
items: Dict[str, PlaybookItem] = None,
version: int = 1,
created_at: str = None,
last_updated: str = None
)
Class Methods
load(path: Path) -> AtomizerPlaybook
Load playbook from JSON file.
playbook = AtomizerPlaybook.load(Path("knowledge_base/playbook.json"))
Parameters:
path: Path to JSON file
Returns: AtomizerPlaybook instance
Note: if the file does not exist, a new empty playbook is created instead of raising FileNotFoundError
Instance Methods
save(path: Path) -> None
Save playbook to JSON file.
playbook.save(Path("knowledge_base/playbook.json"))
add_insight(category, content, source_trial=None, tags=None) -> PlaybookItem
Add a new insight to the playbook.
item = playbook.add_insight(
category=InsightCategory.STRATEGY,
content="CMA-ES converges faster on smooth surfaces",
source_trial=42,
tags=["sampler", "convergence", "mirror"]
)
Parameters:
- category (InsightCategory): Category of the insight
- content (str): The insight content
- source_trial (int, optional): Trial number that generated this insight
- tags (List[str], optional): Tags for filtering
Returns: The created PlaybookItem
record_outcome(item_id: str, helpful: bool) -> None
Record whether an insight was helpful or harmful.
playbook.record_outcome("str_001", helpful=True)
playbook.record_outcome("mis_003", helpful=False)
Parameters:
- item_id (str): ID of the playbook item
- helpful (bool): True if helpful, False if harmful
get_context_for_task(task_type, max_items=15, min_confidence=0.5) -> str
Get formatted context string for LLM consumption.
context = playbook.get_context_for_task(
task_type="optimization",
max_items=15,
min_confidence=0.5
)
Parameters:
- task_type (str): Type of task for filtering
- max_items (int): Maximum items to include
- min_confidence (float): Minimum confidence threshold (0.0-1.0)
Returns: Formatted string suitable for LLM context
get_by_category(category, min_score=0) -> List[PlaybookItem]
Get items filtered by category.
mistakes = playbook.get_by_category(InsightCategory.MISTAKE, min_score=-2)
Parameters:
- category (InsightCategory): Category to filter by
- min_score (int): Minimum net score
Returns: List of matching PlaybookItems
get_stats() -> Dict
Get playbook statistics.
stats = playbook.get_stats()
# Returns:
# {
# "total_items": 45,
# "by_category": {"STRATEGY": 12, "MISTAKE": 8, ...},
# "version": 3,
# "last_updated": "2025-12-29T10:30:00",
# "avg_score": 2.4,
# "max_score": 15,
# "min_score": -3
# }
prune_harmful(threshold=-3) -> int
Remove items with net score below threshold.
removed_count = playbook.prune_harmful(threshold=-3)
Parameters:
- threshold (int): Items with net_score <= threshold are removed
Returns: Number of items removed
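The pruning rule above can be sketched in a few lines (a minimal illustration, assuming the rule stated here — items at or below the net-score threshold are dropped; the `_Item` stand-in and `prune_harmful` helper are hypothetical, not the module's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class _Item:
    # Illustrative stand-in for PlaybookItem; only the fields pruning needs.
    id: str
    helpful_count: int
    harmful_count: int

    @property
    def net_score(self) -> int:
        return self.helpful_count - self.harmful_count

def prune_harmful(items: dict, threshold: int = -3) -> int:
    """Drop items whose net_score is at or below the threshold; return count removed."""
    doomed = [key for key, item in items.items() if item.net_score <= threshold]
    for key in doomed:
        del items[key]
    return len(doomed)

items = {
    "str_001": _Item("str_001", helpful_count=5, harmful_count=0),  # net +5, kept
    "mis_003": _Item("mis_003", helpful_count=0, harmful_count=4),  # net -4, pruned
}
removed = prune_harmful(items, threshold=-3)  # removes mis_003 only
```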
PlaybookItem
Dataclass representing a single playbook entry.
@dataclass
class PlaybookItem:
id: str # e.g., "str_001", "mis_003"
category: InsightCategory # Category enum
content: str # The insight text
helpful_count: int = 0 # Times marked helpful
harmful_count: int = 0 # Times marked harmful
tags: List[str] = field(default_factory=list)
source_trial: Optional[int] = None
created_at: str = "" # ISO timestamp
last_used: Optional[str] = None # ISO timestamp
Properties
item.net_score # helpful_count - harmful_count
item.confidence # helpful / (helpful + harmful), or 0.5 if no feedback
Methods
# Convert to context string for LLM
context_str = item.to_context_string()
# "[str_001] helpful=5 harmful=0 :: CMA-ES converges faster..."
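The two derived properties above follow directly from the helpful/harmful counters. As a sketch of the arithmetic (standalone functions here, rather than the actual dataclass properties):

```python
def net_score(helpful: int, harmful: int) -> int:
    # net_score = helpful_count - harmful_count
    return helpful - harmful

def confidence(helpful: int, harmful: int) -> float:
    # confidence = helpful / (helpful + harmful), defaulting to 0.5 with no feedback
    total = helpful + harmful
    return helpful / total if total else 0.5
```

For the item shown above (helpful=5, harmful=0), this yields net_score 5 and confidence 1.0; an item with no feedback yet sits at the neutral 0.5.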
InsightCategory
Enum for categorizing insights.
class InsightCategory(Enum):
STRATEGY = "str" # Optimization strategies that work
CALCULATION = "cal" # Formulas and calculations
MISTAKE = "mis" # Common mistakes to avoid
TOOL = "tool" # Tool usage patterns
DOMAIN = "dom" # Domain-specific knowledge (FEA, NX)
WORKFLOW = "wf" # Workflow patterns
Usage:
# Create with enum
category = InsightCategory.STRATEGY
# Create from string
category = InsightCategory("str")
# Get string value
value = InsightCategory.STRATEGY.value # "str"
Session Management
AtomizerSessionState
Manages session context with exposed/isolated separation.
Constructor
AtomizerSessionState(
session_id: str = None  # Auto-generated UUID if not provided
)
Attributes
session.session_id # Unique session identifier
session.exposed # ExposedState - always in LLM context
session.isolated # IsolatedState - on-demand access only
session.last_updated # ISO timestamp of last update
Methods
get_llm_context() -> str
Get exposed state formatted for LLM context.
context = session.get_llm_context()
# Returns formatted string with task type, study info, progress, etc.
add_action(action: str) -> None
Record an action (keeps last 20).
session.add_action("Started optimization with TPE sampler")
add_error(error: str, error_type: str = None) -> None
Record an error (keeps last 10).
session.add_error("NX solver timeout after 600s", error_type="solver")
to_dict() / from_dict(data) -> AtomizerSessionState
Serialize/deserialize session state.
# Save
data = session.to_dict()
# Restore
session = AtomizerSessionState.from_dict(data)
ExposedState
State that's always included in LLM context.
@dataclass
class ExposedState:
task_type: Optional[TaskType] = None
study_name: Optional[str] = None
study_status: str = "idle"
trials_completed: int = 0
trials_total: int = 0
best_value: Optional[float] = None
recent_actions: List[str] = field(default_factory=list) # Last 20
recent_errors: List[str] = field(default_factory=list) # Last 10
IsolatedState
State available on-demand but not in default context.
@dataclass
class IsolatedState:
full_trial_history: List[Dict] = field(default_factory=list)
detailed_errors: List[Dict] = field(default_factory=list)
performance_metrics: Dict = field(default_factory=dict)
debug_info: Dict = field(default_factory=dict)
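The exposed/isolated split can be illustrated with a minimal sketch (the `Mini*` dataclasses and `llm_context` helper are hypothetical stand-ins for the real classes above): only exposed fields are rendered into the LLM context, while isolated data stays out until it is explicitly requested.

```python
from dataclasses import dataclass, field

@dataclass
class MiniExposed:
    study_name: str = ""
    trials_completed: int = 0
    recent_errors: list = field(default_factory=list)

@dataclass
class MiniIsolated:
    full_trial_history: list = field(default_factory=list)  # never auto-injected

def llm_context(exposed: MiniExposed) -> str:
    # Only the exposed half is rendered; isolated state needs an explicit lookup.
    lines = [
        f"Study: {exposed.study_name}",
        f"Trials completed: {exposed.trials_completed}",
    ]
    if exposed.recent_errors:
        lines.append("Recent errors: " + "; ".join(exposed.recent_errors))
    return "\n".join(lines)

exposed = MiniExposed(study_name="bracket_v3", trials_completed=42)
isolated = MiniIsolated(full_trial_history=[{"trial": i} for i in range(42)])
ctx = llm_context(exposed)  # isolated.full_trial_history never enters ctx
```

The payoff is a stable, small prompt: 42 trials of history sit in the isolated half without inflating the context string.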
TaskType
Enum for session task classification.
class TaskType(Enum):
CREATE_STUDY = "create_study"
RUN_OPTIMIZATION = "run_optimization"
MONITOR_PROGRESS = "monitor_progress"
ANALYZE_RESULTS = "analyze_results"
DEBUG_ERROR = "debug_error"
CONFIGURE_SETTINGS = "configure_settings"
NEURAL_ACCELERATION = "neural_acceleration"
get_session()
Get or create the global session instance.
from optimization_engine.context import get_session
session = get_session()
session.exposed.task_type = TaskType.RUN_OPTIMIZATION
Analysis & Learning
AtomizerReflector
Analyzes optimization outcomes and extracts insights.
Constructor
AtomizerReflector(playbook: AtomizerPlaybook)
Methods
analyze_outcome(outcome: OptimizationOutcome) -> List[InsightCandidate]
Analyze an optimization outcome for insights.
outcome = OptimizationOutcome(
study_name="bracket_v3",
trial_number=42,
params={'thickness': 10.5},
objectives={'mass': 5.2},
constraints_satisfied=True,
error_message=None,
solve_time=45.2
)
insights = reflector.analyze_outcome(outcome)
for insight in insights:
print(f"{insight.category}: {insight.content}")
extract_error_insights(error_message: str) -> List[InsightCandidate]
Extract insights from error messages.
insights = reflector.extract_error_insights("Solution did not converge within tolerance")
# Returns insights about convergence failures
OptimizationOutcome
Dataclass for optimization trial outcomes.
@dataclass
class OptimizationOutcome:
study_name: str
trial_number: int
params: Dict[str, Any]
objectives: Dict[str, float]
constraints_satisfied: bool
error_message: Optional[str] = None
solve_time: Optional[float] = None
FeedbackLoop
Automated learning from optimization execution.
Constructor
FeedbackLoop(playbook_path: Path)
Methods
process_trial_result(trial_number, params, objectives, is_feasible, error=None)
Process a trial result for learning opportunities.
feedback.process_trial_result(
trial_number=42,
params={'thickness': 10.5, 'width': 25.0},
objectives={'mass': 5.2, 'stress': 180.0},
is_feasible=True,
error=None
)
finalize_study(study_summary: Dict) -> Dict
Finalize learning at end of optimization study.
result = feedback.finalize_study({
"name": "bracket_v3",
"total_trials": 100,
"best_value": 4.8,
"convergence_rate": 0.95
})
# Returns: {"insights_added": 3, "patterns_identified": ["fast_convergence"]}
Optimization
CompactionManager
Handles context compaction for long-running sessions.
Constructor
CompactionManager(
max_events: int = 100,
preserve_errors: bool = True,
preserve_milestones: bool = True
)
Methods
add_event(event: ContextEvent) -> None
Add an event to the session history.
from datetime import datetime
from optimization_engine.context import ContextEvent, EventType
event = ContextEvent(
event_type=EventType.TRIAL_COMPLETE,
content="Trial 42 completed: mass=5.2kg",
timestamp=datetime.now().isoformat(),
is_error=False,
is_milestone=False
)
compactor.add_event(event)
maybe_compact() -> Optional[str]
Compact events if over threshold.
summary = compactor.maybe_compact()
if summary:
print(f"Compacted: {summary}")
get_context() -> str
Get current context string.
context = compactor.get_context()
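A sketch of the compaction policy implied by the constructor options above: once the event count exceeds the budget, errors and milestones survive while routine events collapse into a one-line summary. The dict-based events and `compact` function here are illustrative assumptions, not the module's actual internals.

```python
def compact(events: list, max_events: int = 100):
    """If over budget, keep errors/milestones and summarize the rest."""
    if len(events) <= max_events:
        return events, None  # under budget: nothing to do
    kept = [e for e in events if e.get("is_error") or e.get("is_milestone")]
    dropped = len(events) - len(kept)
    summary = f"[compacted {dropped} routine events]"
    # Lead with the summary so the timeline still reads chronologically.
    return [{"content": summary, "is_milestone": True}] + kept, summary

events = (
    [{"content": f"trial {i} ok"} for i in range(5)]
    + [{"content": "solver timeout", "is_error": True}]
)
kept, summary = compact(events, max_events=3)
# Five routine trial events collapse; the error event is preserved verbatim.
```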
ContextCacheOptimizer
Monitors and optimizes KV-cache efficiency.
Constructor
optimizer = ContextCacheOptimizer()
Methods
track_request(prefix_tokens: int, total_tokens: int)
Track a request for cache analysis.
optimizer.track_request(prefix_tokens=5000, total_tokens=15000)
track_completion(success: bool, response_tokens: int)
Track completion for performance analysis.
optimizer.track_completion(success=True, response_tokens=500)
get_stats_dict() -> Dict
Get cache statistics.
stats = optimizer.get_stats_dict()
# Returns:
# {
# "total_requests": 150,
# "cache_hits": 120,
# "cache_hit_rate": 0.8,
# "avg_prefix_ratio": 0.33,
# ...
# }
get_report() -> str
Get human-readable report.
report = optimizer.get_report()
print(report)
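The statistics returned by get_stats_dict() can be derived from the tracked (prefix_tokens, total_tokens) pairs roughly as follows. This is a sketch: in particular, the hit heuristic used here (a request counts as a cache hit when it carries a non-empty stable prefix) is an assumption, not necessarily the module's actual rule.

```python
def cache_stats(requests: list) -> dict:
    """requests: (prefix_tokens, total_tokens) pairs, as passed to track_request()."""
    total = len(requests)
    hits = sum(1 for prefix, _ in requests if prefix > 0)  # assumed hit heuristic
    ratios = [prefix / tokens for prefix, tokens in requests if tokens]
    return {
        "total_requests": total,
        "cache_hits": hits,
        "cache_hit_rate": hits / total if total else 0.0,
        "avg_prefix_ratio": sum(ratios) / len(ratios) if ratios else 0.0,
    }

stats = cache_stats([(5000, 15000), (5000, 10000), (0, 8000)])
```

A high avg_prefix_ratio means most of each request is a stable, cacheable prefix, which is exactly what StablePrefixBuilder is meant to maximize.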
Integration
ContextEngineeringMixin
Mixin class for adding context engineering to optimization runners.
class ContextEngineeringMixin:
def init_context_engineering(self, playbook_path: Path):
"""Initialize context engineering components."""
def record_trial_outcome(self, trial_number, params, objectives,
is_feasible, error=None):
"""Record trial outcome for learning."""
def get_context_for_llm(self) -> str:
"""Get combined context for LLM consumption."""
def finalize_context_engineering(self, study_summary: Dict):
"""Finalize learning at study completion."""
ContextAwareRunner
Pre-built runner with context engineering enabled.
from optimization_engine.context import ContextAwareRunner
runner = ContextAwareRunner(
config=config_dict,
playbook_path=Path("knowledge_base/playbook.json")
)
# Run optimization with automatic learning
runner.run()
REST API
The Context Engineering module exposes REST endpoints via FastAPI.
Base URL
http://localhost:5000/api/context
Endpoints
GET /playbook
Get playbook summary statistics.
Response:
{
"total_items": 45,
"by_category": {"STRATEGY": 12, "MISTAKE": 8},
"version": 3,
"last_updated": "2025-12-29T10:30:00",
"avg_score": 2.4,
"top_score": 15,
"lowest_score": -3
}
GET /playbook/items
List playbook items with optional filters.
Query Parameters:
- category (str): Filter by category (str, mis, tool, cal, dom, wf)
- min_score (int): Minimum net score (default: 0)
- min_confidence (float): Minimum confidence (default: 0.0)
- limit (int): Max items (default: 50)
- offset (int): Pagination offset (default: 0)
Response:
[
{
"id": "str_001",
"category": "str",
"content": "CMA-ES converges faster on smooth surfaces",
"helpful_count": 5,
"harmful_count": 0,
"net_score": 5,
"confidence": 1.0,
"tags": ["sampler", "convergence"],
"created_at": "2025-12-29T10:00:00",
"last_used": "2025-12-29T10:30:00"
}
]
GET /playbook/items/{item_id}
Get a specific playbook item.
Response: Single PlaybookItemResponse object
POST /playbook/feedback
Record feedback on a playbook item.
Request Body:
{
"item_id": "str_001",
"helpful": true
}
Response:
{
"item_id": "str_001",
"new_score": 6,
"new_confidence": 1.0,
"helpful_count": 6,
"harmful_count": 0
}
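As a sketch, the feedback endpoint above can be called from Python with only the standard library. The base URL is the one given earlier in this document; the helper names are illustrative, and `post_feedback` assumes the dashboard server is actually running on localhost:

```python
import json
from urllib import request

BASE = "http://localhost:5000/api/context"  # base URL from this document

def build_feedback(item_id: str, helpful: bool) -> bytes:
    """Serialize the request body for POST /playbook/feedback."""
    return json.dumps({"item_id": item_id, "helpful": helpful}).encode("utf-8")

def post_feedback(item_id: str, helpful: bool) -> dict:
    """Send feedback; requires the dashboard server to be running."""
    req = request.Request(
        f"{BASE}/playbook/feedback",
        data=build_feedback(item_id, helpful),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```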
POST /playbook/insights
Add a new insight.
Request Body:
{
"category": "str",
"content": "New insight content",
"tags": ["tag1", "tag2"],
"source_trial": 42
}
Response:
{
"item_id": "str_015",
"category": "str",
"content": "New insight content",
"message": "Insight added successfully"
}
DELETE /playbook/items/{item_id}
Delete a playbook item.
Response:
{
"deleted": "str_001",
"content_preview": "CMA-ES converges faster..."
}
POST /playbook/prune
Remove harmful items.
Query Parameters:
- threshold (int): Net score threshold (default: -3)
Response:
{
"items_pruned": 3,
"threshold_used": -3,
"remaining_items": 42
}
GET /playbook/context
Get playbook context for LLM consumption.
Query Parameters:
- task_type (str): Task type (default: "optimization")
- max_items (int): Maximum items (default: 15)
- min_confidence (float): Minimum confidence (default: 0.5)
Response:
{
"context": "## Atomizer Knowledge Base\n...",
"items_included": 15,
"task_type": "optimization"
}
GET /session
Get current session state.
Response:
{
"session_id": "abc123",
"task_type": "run_optimization",
"study_name": "bracket_v3",
"study_status": "running",
"trials_completed": 42,
"trials_total": 100,
"best_value": 5.2,
"recent_actions": ["Started optimization", "Trial 42 complete"],
"recent_errors": []
}
GET /session/context
Get session context for LLM consumption.
Response:
{
"context": "## Current Session\nTask: run_optimization\n...",
"session_id": "abc123",
"last_updated": "2025-12-29T10:30:00"
}
GET /cache/stats
Get KV-cache statistics.
Response:
{
"stats": {
"total_requests": 150,
"cache_hits": 120,
"cache_hit_rate": 0.8
},
"report": "Cache Performance Report\n..."
}
GET /learning/report
Get comprehensive learning report.
Response:
{
"generated_at": "2025-12-29T10:30:00",
"playbook_stats": {...},
"top_performers": [
{"id": "str_001", "content": "...", "score": 15}
],
"worst_performers": [
{"id": "mis_003", "content": "...", "score": -2}
],
"recommendations": [
"Consider pruning 3 harmful items (net_score < -3)"
]
}
Error Handling
All API endpoints return appropriate HTTP status codes:
| Code | Meaning |
|---|---|
| 200 | Success |
| 400 | Bad request (invalid parameters) |
| 404 | Not found (item doesn't exist) |
| 500 | Server error (module not available) |
Error response format:
{
"detail": "Error description"
}
See Also
- Context Engineering Report - Full implementation report
- SYS_17 Protocol - System protocol
- Cheatsheet - Quick reference