feat: add Atomizer HQ multi-agent cluster infrastructure
- 8-agent OpenClaw cluster (Manager, Tech-Lead, Secretary, Auditor, Optimizer, Study-Builder, NX-Expert, Webster)
- Orchestration engine: orchestrate.py (sync delegation + handoffs)
- Workflow engine: YAML-defined multi-step pipelines
- Agent workspaces: SOUL.md, AGENTS.md, MEMORY.md per agent
- Shared skills: delegate, orchestrate, atomizer-protocols
- Capability registry (AGENTS_REGISTRY.json)
- Cluster management: cluster.sh, systemd template
- All secrets replaced with env var references
hq/workspaces/optimizer/MEMORY.md (new file, 20 lines)
@@ -0,0 +1,20 @@
# MEMORY.md — Optimizer Long-Term Memory
## LAC Critical Lessons (NEVER forget)
1. **CMA-ES x0:** CMA-ES does not evaluate x0 itself → always enqueue the baseline trial manually
2. **Surrogate danger:** L-BFGS on a surrogate is gradient descent on an approximate surface → it finds fake optima; re-check candidates with true evaluations
3. **Relative WFE:** Use `extract_relative()`, not `abs(RMS_a - RMS_b)`
4. **NX process management:** Never kill NX processes directly → `NXSessionManager.close_nx_if_allowed()`
5. **Copy, don't rewrite:** Always copy a working study as the starting point
6. **Convergence ≠ optimality:** A converged search may still sit at a local minimum — verify before accepting
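
Lesson 1 can be sketched with a plain stdlib loop (no CMA-ES dependency; the objective, bounds, and trial count are all illustrative stand-ins): the baseline x0 is evaluated explicitly before any sampled trial, so the incumbent can never end up worse than the known design.

```python
import random

def objective(x):
    # Toy stand-in for an expensive FEA evaluation (hypothetical).
    return (x - 1.7) ** 2

def optimize_with_baseline(x0, n_trials=50, seed=0):
    """Evaluate the known baseline x0 before sampling, since samplers
    like CMA-ES do not evaluate x0 itself (Lesson 1)."""
    rng = random.Random(seed)
    # Enqueue the baseline manually: it seeds the incumbent.
    best_x, best_val = x0, objective(x0)
    for _ in range(n_trials):
        x = rng.uniform(-5.0, 5.0)
        val = objective(x)
        if val < best_val:
            best_x, best_val = x, val
    # The result can never be worse than the baseline we evaluated.
    assert best_val <= objective(x0)
    return best_x, best_val

best_x, best_val = optimize_with_baseline(x0=0.0)
```

The same pattern applies with a real sampler: seed the study with the baseline configuration before the optimizer proposes anything.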
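
Lesson 2 in miniature (all functions and values are illustrative): a biased surrogate has its minimum in the wrong place, so its "optimum" must be re-evaluated on the true objective and compared against the incumbent before acceptance.

```python
def true_objective(x):
    # Toy stand-in for an expensive true evaluation (hypothetical).
    return (x - 2.0) ** 2

def surrogate(x):
    # Deliberately biased approximation of the true objective.
    return (x - 2.5) ** 2

x_surr = 2.5  # exact minimizer of the surrogate, a "fake optimum"

# Never trust a surrogate optimum: re-evaluate on the true objective
# and only accept it if it beats the current incumbent.
incumbent_x, incumbent_val = 1.8, true_objective(1.8)  # 0.04
candidate_val = true_objective(x_surr)                 # 0.25
accepted = candidate_val < incumbent_val               # False here
```

Here the surrogate's minimizer loses to the incumbent once the true objective is consulted, which is exactly the failure mode the lesson warns about.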
## Algorithm Performance History
*(Track which algorithms worked well/poorly on which problems)*
## Active Studies
*(Track current optimization campaigns)*
## Company Context
- Atomizer Engineering Co. — AI-powered FEA optimization
- Phase 1 agent — core optimization team member
- Works with Technical Lead (problem analysis) → Study Builder (code implementation)