# Context Engineering API Reference

**Version**: 1.0
**Updated**: 2025-12-29
**Module**: `optimization_engine.context`

This document provides complete API documentation for the Atomizer Context Engineering (ACE) framework.

---

## Table of Contents

1. [Module Overview](#module-overview)
2. [Core Classes](#core-classes)
   - [AtomizerPlaybook](#atomizerplaybook)
   - [PlaybookItem](#playbookitem)
   - [InsightCategory](#insightcategory)
3. [Session Management](#session-management)
   - [AtomizerSessionState](#atomizersessionstate)
   - [ExposedState](#exposedstate)
   - [IsolatedState](#isolatedstate)
   - [TaskType](#tasktype)
4. [Analysis & Learning](#analysis--learning)
   - [AtomizerReflector](#atomizerreflector)
   - [FeedbackLoop](#feedbackloop)
5. [Optimization](#optimization)
   - [CompactionManager](#compactionmanager)
   - [ContextCacheOptimizer](#contextcacheoptimizer)
6. [Integration](#integration)
   - [ContextEngineeringMixin](#contextengineeringmixin)
   - [ContextAwareRunner](#contextawarerunner)
7. [REST API](#rest-api)

---

## Module Overview

### Import Patterns

```python
# Full import
from optimization_engine.context import (
    # Core playbook
    AtomizerPlaybook, PlaybookItem, InsightCategory,
    # Session management
    AtomizerSessionState, ExposedState, IsolatedState, TaskType, get_session,
    # Analysis
    AtomizerReflector, OptimizationOutcome, InsightCandidate,
    # Learning
    FeedbackLoop, FeedbackLoopFactory,
    # Optimization
    CompactionManager, ContextEvent, EventType, ContextBudgetManager,
    ContextCacheOptimizer, CacheStats, StablePrefixBuilder,
    # Integration
    ContextEngineeringMixin, ContextAwareRunner,
)

# Convenience imports
from optimization_engine.context import AtomizerPlaybook, get_session
```

---

## Core Classes

### AtomizerPlaybook

The central knowledge store for persistent learning across sessions.
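Before the method-by-method reference, here is a minimal, self-contained sketch of the feedback-scoring model the playbook is built on (the `net_score` and `confidence` formulas documented under PlaybookItem). The `ScoredItem` class below is an illustrative stand-in, not the shipped implementation:

```python
from dataclasses import dataclass


@dataclass
class ScoredItem:
    """Illustrative stand-in for PlaybookItem, reduced to its scoring fields."""
    helpful_count: int = 0
    harmful_count: int = 0

    @property
    def net_score(self) -> int:
        # net_score = helpful_count - harmful_count
        return self.helpful_count - self.harmful_count

    @property
    def confidence(self) -> float:
        # confidence = helpful / (helpful + harmful); 0.5 with no feedback yet
        total = self.helpful_count + self.harmful_count
        return self.helpful_count / total if total else 0.5


item = ScoredItem(helpful_count=5, harmful_count=1)
print(item.net_score)   # 4
print(ScoredItem().confidence)  # 0.5 (no feedback recorded)
```

This is the model behind `record_outcome()`, `get_by_category(min_score=...)`, and `prune_harmful()`: helpful/harmful votes accumulate per item, and items with a sufficiently negative net score become candidates for pruning.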
#### Constructor

```python
AtomizerPlaybook(
    items: Dict[str, PlaybookItem] = None,
    version: int = 1,
    created_at: str = None,
    last_updated: str = None
)
```

#### Class Methods

##### `load(path: Path) -> AtomizerPlaybook`

Load playbook from JSON file.

```python
playbook = AtomizerPlaybook.load(Path("knowledge_base/playbook.json"))
```

**Parameters:**
- `path`: Path to JSON file

**Returns:** AtomizerPlaybook instance. If the file does not exist, a new empty playbook is created and returned rather than raising `FileNotFoundError`.

---

#### Instance Methods

##### `save(path: Path) -> None`

Save playbook to JSON file.

```python
playbook.save(Path("knowledge_base/playbook.json"))
```

---

##### `add_insight(category, content, source_trial=None, tags=None) -> PlaybookItem`

Add a new insight to the playbook.

```python
item = playbook.add_insight(
    category=InsightCategory.STRATEGY,
    content="CMA-ES converges faster on smooth surfaces",
    source_trial=42,
    tags=["sampler", "convergence", "mirror"]
)
```

**Parameters:**
- `category` (InsightCategory): Category of the insight
- `content` (str): The insight content
- `source_trial` (int, optional): Trial number that generated this insight
- `tags` (List[str], optional): Tags for filtering

**Returns:** The created PlaybookItem

---

##### `record_outcome(item_id: str, helpful: bool) -> None`

Record whether an insight was helpful or harmful.

```python
playbook.record_outcome("str_001", helpful=True)
playbook.record_outcome("mis_003", helpful=False)
```

**Parameters:**
- `item_id` (str): ID of the playbook item
- `helpful` (bool): True if helpful, False if harmful

---

##### `get_context_for_task(task_type, max_items=15, min_confidence=0.5) -> str`

Get formatted context string for LLM consumption.
```python
context = playbook.get_context_for_task(
    task_type="optimization",
    max_items=15,
    min_confidence=0.5
)
```

**Parameters:**
- `task_type` (str): Type of task for filtering
- `max_items` (int): Maximum items to include
- `min_confidence` (float): Minimum confidence threshold (0.0-1.0)

**Returns:** Formatted string suitable for LLM context

---

##### `get_by_category(category, min_score=0) -> List[PlaybookItem]`

Get items filtered by category.

```python
mistakes = playbook.get_by_category(InsightCategory.MISTAKE, min_score=-2)
```

**Parameters:**
- `category` (InsightCategory): Category to filter by
- `min_score` (int): Minimum net score

**Returns:** List of matching PlaybookItems

---

##### `get_stats() -> Dict`

Get playbook statistics.

```python
stats = playbook.get_stats()
# Returns:
# {
#     "total_items": 45,
#     "by_category": {"STRATEGY": 12, "MISTAKE": 8, ...},
#     "version": 3,
#     "last_updated": "2025-12-29T10:30:00",
#     "avg_score": 2.4,
#     "max_score": 15,
#     "min_score": -3
# }
```

---

##### `prune_harmful(threshold=-3) -> int`

Remove items with net score below threshold.

```python
removed_count = playbook.prune_harmful(threshold=-3)
```

**Parameters:**
- `threshold` (int): Items with net_score <= threshold are removed

**Returns:** Number of items removed

---

### PlaybookItem

Dataclass representing a single playbook entry.
```python
@dataclass
class PlaybookItem:
    id: str                          # e.g., "str_001", "mis_003"
    category: InsightCategory        # Category enum
    content: str                     # The insight text
    helpful_count: int = 0           # Times marked helpful
    harmful_count: int = 0           # Times marked harmful
    tags: List[str] = field(default_factory=list)
    source_trial: Optional[int] = None
    created_at: str = ""             # ISO timestamp
    last_used: Optional[str] = None  # ISO timestamp
```

#### Properties

```python
item.net_score   # helpful_count - harmful_count
item.confidence  # helpful / (helpful + harmful), or 0.5 if no feedback
```

#### Methods

```python
# Convert to context string for LLM
context_str = item.to_context_string()
# "[str_001] helpful=5 harmful=0 :: CMA-ES converges faster..."
```

---

### InsightCategory

Enum for categorizing insights.

```python
class InsightCategory(Enum):
    STRATEGY = "str"     # Optimization strategies that work
    CALCULATION = "cal"  # Formulas and calculations
    MISTAKE = "mis"      # Common mistakes to avoid
    TOOL = "tool"        # Tool usage patterns
    DOMAIN = "dom"       # Domain-specific knowledge (FEA, NX)
    WORKFLOW = "wf"      # Workflow patterns
```

**Usage:**

```python
# Create with enum
category = InsightCategory.STRATEGY

# Create from string
category = InsightCategory("str")

# Get string value
value = InsightCategory.STRATEGY.value  # "str"
```

---

## Session Management

### AtomizerSessionState

Manages session context with exposed/isolated separation.

#### Constructor

```python
session = AtomizerSessionState(
    session_id: str = None  # Auto-generated UUID if not provided
)
```

#### Attributes

```python
session.session_id    # Unique session identifier
session.exposed       # ExposedState - always in LLM context
session.isolated      # IsolatedState - on-demand access only
session.last_updated  # ISO timestamp of last update
```

#### Methods

##### `get_llm_context() -> str`

Get exposed state formatted for LLM context.

```python
context = session.get_llm_context()
# Returns formatted string with task type, study info, progress, etc.
```

---

##### `add_action(action: str) -> None`

Record an action (keeps last 20).

```python
session.add_action("Started optimization with TPE sampler")
```

---

##### `add_error(error: str, error_type: str = None) -> None`

Record an error (keeps last 10).

```python
session.add_error("NX solver timeout after 600s", error_type="solver")
```

---

##### `to_dict()` / `from_dict(data) -> AtomizerSessionState`

Serialize/deserialize session state.

```python
# Save
data = session.to_dict()

# Restore
session = AtomizerSessionState.from_dict(data)
```

---

### ExposedState

State that's always included in LLM context.

```python
@dataclass
class ExposedState:
    task_type: Optional[TaskType] = None
    study_name: Optional[str] = None
    study_status: str = "idle"
    trials_completed: int = 0
    trials_total: int = 0
    best_value: Optional[float] = None
    recent_actions: List[str] = field(default_factory=list)  # Last 20
    recent_errors: List[str] = field(default_factory=list)   # Last 10
```

---

### IsolatedState

State available on-demand but not in default context.

```python
@dataclass
class IsolatedState:
    full_trial_history: List[Dict] = field(default_factory=list)
    detailed_errors: List[Dict] = field(default_factory=list)
    performance_metrics: Dict = field(default_factory=dict)
    debug_info: Dict = field(default_factory=dict)
```

---

### TaskType

Enum for session task classification.

```python
class TaskType(Enum):
    CREATE_STUDY = "create_study"
    RUN_OPTIMIZATION = "run_optimization"
    MONITOR_PROGRESS = "monitor_progress"
    ANALYZE_RESULTS = "analyze_results"
    DEBUG_ERROR = "debug_error"
    CONFIGURE_SETTINGS = "configure_settings"
    NEURAL_ACCELERATION = "neural_acceleration"
```

---

### get_session()

Get or create the global session instance.

```python
from optimization_engine.context import get_session

session = get_session()
session.exposed.task_type = TaskType.RUN_OPTIMIZATION
```

---

## Analysis & Learning

### AtomizerReflector

Analyzes optimization outcomes and extracts insights.
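To ground the idea before the constructor and method reference: the reflector turns raw trial outcomes and error messages into candidate insights. A self-contained toy version of the error-analysis step might look like the sketch below; the keyword rules and the `ToyCandidate` type are illustrative assumptions, not the shipped heuristics:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ToyCandidate:
    """Illustrative stand-in for InsightCandidate."""
    category: str  # "mis" in InsightCategory terms
    content: str


# Hypothetical keyword-to-advice rules for demonstration only.
ERROR_RULES = {
    "converge": "Convergence failure: consider relaxing tolerance or tightening bounds.",
    "timeout": "Solver timeout: consider a coarser mesh or a longer time limit.",
}


def toy_extract_error_insights(error_message: str) -> List[ToyCandidate]:
    """Map error-message keywords to mistake-category insight candidates."""
    msg = error_message.lower()
    return [
        ToyCandidate(category="mis", content=advice)
        for keyword, advice in ERROR_RULES.items()
        if keyword in msg
    ]


candidates = toy_extract_error_insights("Solution did not converge within tolerance")
for cand in candidates:
    print(f"{cand.category}: {cand.content}")
```

In practice the candidates produced by `analyze_outcome()` / `extract_error_insights()` would then be fed into `AtomizerPlaybook.add_insight()` and scored over time via `record_outcome()`, closing the learning loop.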
#### Constructor

```python
reflector = AtomizerReflector(playbook: AtomizerPlaybook)
```

#### Methods

##### `analyze_outcome(outcome: OptimizationOutcome) -> List[InsightCandidate]`

Analyze an optimization outcome for insights.

```python
outcome = OptimizationOutcome(
    study_name="bracket_v3",
    trial_number=42,
    params={'thickness': 10.5},
    objectives={'mass': 5.2},
    constraints_satisfied=True,
    error_message=None,
    solve_time=45.2
)

insights = reflector.analyze_outcome(outcome)
for insight in insights:
    print(f"{insight.category}: {insight.content}")
```

---

##### `extract_error_insights(error_message: str) -> List[InsightCandidate]`

Extract insights from error messages.

```python
insights = reflector.extract_error_insights("Solution did not converge within tolerance")
# Returns insights about convergence failures
```

---

### OptimizationOutcome

Dataclass for optimization trial outcomes.

```python
@dataclass
class OptimizationOutcome:
    study_name: str
    trial_number: int
    params: Dict[str, Any]
    objectives: Dict[str, float]
    constraints_satisfied: bool
    error_message: Optional[str] = None
    solve_time: Optional[float] = None
```

---

### FeedbackLoop

Automated learning from optimization execution.

#### Constructor

```python
feedback = FeedbackLoop(playbook_path: Path)
```

#### Methods

##### `process_trial_result(trial_number, params, objectives, is_feasible, error=None)`

Process a trial result for learning opportunities.

```python
feedback.process_trial_result(
    trial_number=42,
    params={'thickness': 10.5, 'width': 25.0},
    objectives={'mass': 5.2, 'stress': 180.0},
    is_feasible=True,
    error=None
)
```

---

##### `finalize_study(study_summary: Dict) -> Dict`

Finalize learning at end of optimization study.
```python
result = feedback.finalize_study({
    "name": "bracket_v3",
    "total_trials": 100,
    "best_value": 4.8,
    "convergence_rate": 0.95
})
# Returns: {"insights_added": 3, "patterns_identified": ["fast_convergence"]}
```

---

## Optimization

### CompactionManager

Handles context compaction for long-running sessions.

#### Constructor

```python
compactor = CompactionManager(
    max_events: int = 100,
    preserve_errors: bool = True,
    preserve_milestones: bool = True
)
```

#### Methods

##### `add_event(event: ContextEvent) -> None`

Add an event to the session history.

```python
from datetime import datetime

from optimization_engine.context import ContextEvent, EventType

event = ContextEvent(
    event_type=EventType.TRIAL_COMPLETE,
    content="Trial 42 completed: mass=5.2kg",
    timestamp=datetime.now().isoformat(),
    is_error=False,
    is_milestone=False
)
compactor.add_event(event)
```

---

##### `maybe_compact() -> Optional[str]`

Compact events if over threshold.

```python
summary = compactor.maybe_compact()
if summary:
    print(f"Compacted: {summary}")
```

---

##### `get_context() -> str`

Get current context string.

```python
context = compactor.get_context()
```

---

### ContextCacheOptimizer

Monitors and optimizes KV-cache efficiency.

#### Constructor

```python
optimizer = ContextCacheOptimizer()
```

#### Methods

##### `track_request(prefix_tokens: int, total_tokens: int)`

Track a request for cache analysis.

```python
optimizer.track_request(prefix_tokens=5000, total_tokens=15000)
```

---

##### `track_completion(success: bool, response_tokens: int)`

Track completion for performance analysis.

```python
optimizer.track_completion(success=True, response_tokens=500)
```

---

##### `get_stats_dict() -> Dict`

Get cache statistics.

```python
stats = optimizer.get_stats_dict()
# Returns:
# {
#     "total_requests": 150,
#     "cache_hits": 120,
#     "cache_hit_rate": 0.8,
#     "avg_prefix_ratio": 0.33,
#     ...
# }
```

---

##### `get_report() -> str`

Get human-readable report.
```python
report = optimizer.get_report()
print(report)
```

---

## Integration

### ContextEngineeringMixin

Mixin class for adding context engineering to optimization runners.

```python
class ContextEngineeringMixin:
    def init_context_engineering(self, playbook_path: Path):
        """Initialize context engineering components."""

    def record_trial_outcome(self, trial_number, params, objectives,
                             is_feasible, error=None):
        """Record trial outcome for learning."""

    def get_context_for_llm(self) -> str:
        """Get combined context for LLM consumption."""

    def finalize_context_engineering(self, study_summary: Dict):
        """Finalize learning at study completion."""
```

---

### ContextAwareRunner

Pre-built runner with context engineering enabled.

```python
from optimization_engine.context import ContextAwareRunner

runner = ContextAwareRunner(
    config=config_dict,
    playbook_path=Path("knowledge_base/playbook.json")
)

# Run optimization with automatic learning
runner.run()
```

---

## REST API

The Context Engineering module exposes REST endpoints via FastAPI.

### Base URL

```
http://localhost:5000/api/context
```

### Endpoints

#### GET `/playbook`

Get playbook summary statistics.

**Response:**
```json
{
  "total_items": 45,
  "by_category": {"STRATEGY": 12, "MISTAKE": 8},
  "version": 3,
  "last_updated": "2025-12-29T10:30:00",
  "avg_score": 2.4,
  "top_score": 15,
  "lowest_score": -3
}
```

---

#### GET `/playbook/items`

List playbook items with optional filters.
**Query Parameters:**
- `category` (str): Filter by category (str, mis, tool, cal, dom, wf)
- `min_score` (int): Minimum net score (default: 0)
- `min_confidence` (float): Minimum confidence (default: 0.0)
- `limit` (int): Max items (default: 50)
- `offset` (int): Pagination offset (default: 0)

**Response:**
```json
[
  {
    "id": "str_001",
    "category": "str",
    "content": "CMA-ES converges faster on smooth surfaces",
    "helpful_count": 5,
    "harmful_count": 0,
    "net_score": 5,
    "confidence": 1.0,
    "tags": ["sampler", "convergence"],
    "created_at": "2025-12-29T10:00:00",
    "last_used": "2025-12-29T10:30:00"
  }
]
```

---

#### GET `/playbook/items/{item_id}`

Get a specific playbook item.

**Response:** Single PlaybookItemResponse object

---

#### POST `/playbook/feedback`

Record feedback on a playbook item.

**Request Body:**
```json
{
  "item_id": "str_001",
  "helpful": true
}
```

**Response:**
```json
{
  "item_id": "str_001",
  "new_score": 6,
  "new_confidence": 1.0,
  "helpful_count": 6,
  "harmful_count": 0
}
```

---

#### POST `/playbook/insights`

Add a new insight.

**Request Body:**
```json
{
  "category": "str",
  "content": "New insight content",
  "tags": ["tag1", "tag2"],
  "source_trial": 42
}
```

**Response:**
```json
{
  "item_id": "str_015",
  "category": "str",
  "content": "New insight content",
  "message": "Insight added successfully"
}
```

---

#### DELETE `/playbook/items/{item_id}`

Delete a playbook item.

**Response:**
```json
{
  "deleted": "str_001",
  "content_preview": "CMA-ES converges faster..."
}
```

---

#### POST `/playbook/prune`

Remove harmful items.

**Query Parameters:**
- `threshold` (int): Net score threshold (default: -3)

**Response:**
```json
{
  "items_pruned": 3,
  "threshold_used": -3,
  "remaining_items": 42
}
```

---

#### GET `/playbook/context`

Get playbook context for LLM consumption.
**Query Parameters:**
- `task_type` (str): Task type (default: "optimization")
- `max_items` (int): Maximum items (default: 15)
- `min_confidence` (float): Minimum confidence (default: 0.5)

**Response:**
```json
{
  "context": "## Atomizer Knowledge Base\n...",
  "items_included": 15,
  "task_type": "optimization"
}
```

---

#### GET `/session`

Get current session state.

**Response:**
```json
{
  "session_id": "abc123",
  "task_type": "run_optimization",
  "study_name": "bracket_v3",
  "study_status": "running",
  "trials_completed": 42,
  "trials_total": 100,
  "best_value": 5.2,
  "recent_actions": ["Started optimization", "Trial 42 complete"],
  "recent_errors": []
}
```

---

#### GET `/session/context`

Get session context for LLM consumption.

**Response:**
```json
{
  "context": "## Current Session\nTask: run_optimization\n...",
  "session_id": "abc123",
  "last_updated": "2025-12-29T10:30:00"
}
```

---

#### GET `/cache/stats`

Get KV-cache statistics.

**Response:**
```json
{
  "stats": {
    "total_requests": 150,
    "cache_hits": 120,
    "cache_hit_rate": 0.8
  },
  "report": "Cache Performance Report\n..."
}
```

---

#### GET `/learning/report`

Get comprehensive learning report.
**Response:**
```json
{
  "generated_at": "2025-12-29T10:30:00",
  "playbook_stats": {...},
  "top_performers": [
    {"id": "str_001", "content": "...", "score": 15}
  ],
  "worst_performers": [
    {"id": "mis_003", "content": "...", "score": -2}
  ],
  "recommendations": [
    "Consider pruning 3 harmful items (net_score < -3)"
  ]
}
```

---

## Error Handling

All API endpoints return appropriate HTTP status codes:

| Code | Meaning |
|------|---------|
| 200 | Success |
| 400 | Bad request (invalid parameters) |
| 404 | Not found (item doesn't exist) |
| 500 | Server error (module not available) |

Error response format:

```json
{
  "detail": "Error description"
}
```

---

## See Also

- [Context Engineering Report](../CONTEXT_ENGINEERING_REPORT.md) - Full implementation report
- [SYS_18 Protocol](../protocols/system/SYS_18_CONTEXT_ENGINEERING.md) - System protocol
- [Cheatsheet](../../.claude/skills/01_CHEATSHEET.md) - Quick reference