🏭 Atomizer Overhaul — Framework Agentic

Project Plan

Transform Atomizer into a multi-agent FEA optimization company running inside Clawdbot on Slack.


1. The Vision

Imagine a Slack workspace that IS an engineering company. You start a new channel for a client problem, and a team of specialized AI agents — each with their own personality, expertise, memory, and tools — collaborates to solve it. An orchestrator delegates tasks. A technical planner breaks down the engineering problem. An optimization specialist proposes algorithms. An NX expert handles solver details. A post-processor crunches data. An auditor challenges every assumption. A reporter produces client-ready deliverables. And a secretary keeps Antoine in the loop, filtering signal from noise.

This isn't a chatbot playground. It's a protocol-driven engineering firm where every agent follows Atomizer's established protocols, every decision is traceable, and the system gets smarter with every project.

Antoine is the CEO. The system works for him. Agents escalate when they can't resolve something. Antoine approves deliverables before they go to clients. The secretary ensures nothing slips through the cracks.


2. Why This Works (And Why Now)

Why Clawdbot Is the Right Foundation

Having researched the options — Agent Zero, CrewAI, AutoGen, custom frameworks — I'm recommending Clawdbot as the core platform. Here's why:

| Feature | Clawdbot | Custom Framework | Agent Zero / CrewAI |
|---|---|---|---|
| Multi-agent with isolated workspaces | Built-in | 🔲 Build from scratch | ⚠️ Limited isolation |
| Slack integration (channels, threads, @mentions) | Native | 🔲 Build from scratch | ⚠️ Requires adapters |
| Per-agent model selection | Config | 🔲 Build from scratch | ⚠️ Some support |
| Per-agent memory (short + long term) | AGENTS.md / MEMORY.md / memory/ | 🔲 Build from scratch | ⚠️ Varies |
| Per-agent skills + tools | Skills system | 🔲 Build from scratch | ⚠️ Limited |
| Session management + sub-agents | sessions_spawn | 🔲 Build from scratch | ⚠️ Varies |
| Auth isolation per agent | Per-agent auth profiles | None | None |
| Already running + battle-tested | I'm proof | N/A | ⚠️ Less mature |
| Protocol enforcement via AGENTS.md | Natural | 🔲 Custom logic | 🔲 Custom logic |

The critical insight: Clawdbot already does multi-agent routing. Each agent gets its own workspace, SOUL.md, AGENTS.md, MEMORY.md, skills, and tools. The infrastructure exists. We just need to configure it for Atomizer's specific needs.

Why Now

  • Claude Opus 4.6 is the most capable model ever for complex reasoning
  • Clawdbot v2026.x has mature multi-agent support
  • Atomizer's protocol system is already well-documented
  • The dream workflow vision is clear
  • Antoine's CAD Documenter skill provides the knowledge pipeline

3. Architecture Overview

The Company Structure

┌─────────────────────────────────────────────────────────────────┐
│                    ATOMIZER ENGINEERING CO.                       │
│                    (Clawdbot Multi-Agent)                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                   │
│  ┌──────────┐                                                     │
│  │ ANTOINE  │  CEO — approves deliverables, answers questions,    │
│  │ (Human)  │  steers direction, reviews critical decisions       │
│  └────┬─────┘                                                     │
│       │                                                           │
│  ┌────▼─────┐                                                     │
│  │SECRETARY │  Antoine's interface — filters, summarizes,         │
│  │ (Agent)  │  escalates, keeps him informed                      │
│  └────┬─────┘                                                     │
│       │                                                           │
│  ┌────▼─────────────────────────────────────────────────────┐     │
│  │              THE MANAGER / ORCHESTRATOR                    │     │
│  │              Routes work, tracks progress, enforces        │     │
│  │              protocols, coordinates all agents             │     │
│  └──┬───┬───┬───┬───┬───┬───┬───┬───┬───┬──────────────────┘     │
│     │   │   │   │   │   │   │   │   │   │                         │
│     ▼   ▼   ▼   ▼   ▼   ▼   ▼   ▼   ▼   ▼   ▼                    │
│  ┌───┐┌───┐┌───┐┌───┐┌───┐┌───┐┌───┐┌───┐┌───┐┌───┐┌───┐        │
│  │TEC││OPT││STB││ NX ││P-P││RPT││AUD││RES││DEV││ KB ││ IT │      │
│  └───┘└───┘└───┘└───┘└───┘└───┘└───┘└───┘└───┘└───┘└───┘        │
│                                                                   │
│  TEC = Technical Lead       OPT = Optimization Specialist         │
│  STB = Study Builder        NX  = NX/Nastran Expert               │
│  P-P = Post-Processor       RPT = Reporter                        │
│  AUD = Auditor              RES = Researcher                      │
│  DEV = Developer            KB  = Knowledge Base                  │
│  IT  = IT/Infrastructure                                          │
│                                                                   │
└─────────────────────────────────────────────────────────────────┘

How It Maps to Clawdbot

Each agent in the company = one Clawdbot agent with:

| Clawdbot Component | Atomizer Equivalent |
|---|---|
| agents.list[].id | Agent identity (e.g., "manager", "optimizer", "auditor") |
| agents.list[].workspace | ~/clawd-atomizer-\<agent\>/ — each agent's home |
| SOUL.md | Agent personality, expertise, behavioral rules |
| AGENTS.md | Protocols to follow, how to work, session init |
| MEMORY.md | Long-term company knowledge for this role |
| memory/ | Per-project short-term memory |
| skills/ | Agent-specific tools (e.g., the Optimizer gets a PyTorch skill) |
| agents.list[].model | Best LLM for the role |
| Slack bindings | Route channels/threads to the right agent |
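The mapping above can be sketched as a single agent entry. This is a hypothetical shape only — the key names mirror the fields referenced in the table (id, workspace, model), but the real Clawdbot config schema may differ:

```python
import os

# Hypothetical agents.list entry for the Optimizer, mirroring the table above.
# Key names and the "skills" field are illustrative, not the real Clawdbot schema.
optimizer_agent = {
    "id": "optimizer",
    "workspace": "~/clawd-atomizer-optimizer/",   # this agent's home directory
    "model": "opus-4.6",                          # best LLM for the role
    "skills": ["atomizer-protocols", "pytorch"],  # shared company DNA + role-specific tools
}

def workspace_for(agent: dict) -> str:
    """Expand the agent's workspace path to an absolute location."""
    return os.path.expanduser(agent["workspace"])
```

One dict per agent keeps the roster declarative: adding a Tier 3 specialist later is a config change, not new infrastructure.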

Slack Channel Architecture (Dedicated Workspace)

#hq                       → Manager agent (company-wide coordination)
#secretary                → Secretary agent (Antoine's dashboard)
#<client>-<job>           → Per-project channels (agents chime in as needed)
#research                 → Researcher agent (literature, methods)
#dev                      → Developer agent (code, prototyping)
#knowledge-base           → Knowledge Base agent (documentation, CAD docs)
#audit-log                → Auditor findings and reviews
#rd-<topic>               → R&D channels (vibration, fatigue, non-linear, etc.)
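The channel-to-agent bindings above amount to a small routing table. A minimal sketch, assuming first-match-wins ordering (pattern names and the fallback-to-Manager rule are illustrative):

```python
import re

# Hypothetical routing table: channel-name pattern -> agent id.
# Ordered so specific channels match before the generic per-project pattern.
ROUTES = [
    (r"^hq$", "manager"),
    (r"^secretary$", "secretary"),
    (r"^research$", "researcher"),
    (r"^dev$", "developer"),
    (r"^knowledge-base$", "knowledge-base"),
    (r"^audit-log$", "auditor"),
    (r"^rd-[\w-]+$", "technical"),    # R&D channels: Technical Lead drives exploration
    (r"^\w+-[\w-]+$", "manager"),     # per-project #<client>-<job>: Manager orchestrates
]

def route_channel(channel: str) -> str:
    """Return the agent that owns a channel; Manager triages anything unknown."""
    for pattern, agent in ROUTES:
        if re.match(pattern, channel):
            return agent
    return "manager"
```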

Per-Project Workflow:

  1. New client job → create #starspec-wfe-opt channel
  2. Manager is notified, starts orchestration
  3. Manager @-mentions agents as needed: "@technical break this down", "@optimizer propose an algorithm"
  4. Agents respond in-thread, keep the channel organized
  5. Secretary monitors all channels, surfaces important things to Antoine in #secretary
  6. Reporter produces deliverables when results are ready
  7. Secretary pokes Antoine: "Report ready for StarSpec, please review before I send"

R&D Workflow:

  1. Antoine creates #rd-vibration and posts an idea
  2. Technical Lead drives the exploration with relevant agents
  3. Developer prototypes, Auditor validates
  4. Mature capabilities → integrated into framework by Manager

4. Agent Roster

Full details in P-Atomizer-Overhaul-Framework-Agentic/01-AGENT-ROSTER

Tier 1 — Core (Build First)

| Agent | ID | Model | Role |
|---|---|---|---|
| 🎯 The Manager | manager | Opus 4.6 | Orchestrator. Routes tasks, tracks progress, enforces protocols. The brain of the operation. |
| 📋 The Secretary | secretary | Opus 4.6 | Antoine's interface. Filters noise, summarizes, escalates decisions, relays questions. |
| 🔧 The Technical Lead | technical | Opus 4.6 | Distills engineering problems. Reads contracts, identifies parameters, defines what needs solving. |
| The Optimizer | optimizer | Opus 4.6 | Optimization algorithm specialist. Proposes methods, configures studies, interprets convergence. |

Tier 2 — Specialists (Build Second)

| Agent | ID | Model | Role |
|---|---|---|---|
| 🏗️ The Study Builder | study-builder | GPT-5.3-Codex | Writes run_optimization.py, builds study configs, sets up study directories. |
| 🖥️ The NX Expert | nx-expert | Sonnet 5 | Deep NX Nastran/NX Open knowledge. Solver config, journals, mesh, element types. |
| 📊 The Post-Processor | postprocessor | Sonnet 5 | Data manipulation, graphs, result validation, Zernike decomposition, custom functions. |
| 📝 The Reporter | reporter | Sonnet 5 | Professional report generation. Atomaste-branded PDFs, client-ready deliverables. |
| 🔍 The Auditor | auditor | Opus 4.6 | Challenges everything. Physics validation, math checks, contract compliance. The "super nerd." |

Tier 3 — Support (Build Third)

| Agent | ID | Model | Role |
|---|---|---|---|
| 🔬 The Researcher | researcher | Gemini 3.0 | Literature search, method comparison, state-of-the-art techniques. Web-connected. |
| 💻 The Developer | developer | Sonnet 5 | Codes new tools, prototypes features, builds post-processors, extends Atomizer. |
| 🗄️ The Knowledge Base | knowledge-base | Sonnet 5 | Manages CAD Documenter output, FEM walkthroughs, component documentation. |
| 🛠️ The IT Agent | it-support | Sonnet 5 | License management, server health, tool provisioning, infrastructure. |

Model Selection Rationale

| Model | Why | Assigned To |
|---|---|---|
| Opus 4.6 | Best reasoning, complex orchestration, judgment calls | Manager, Secretary, Technical, Optimizer, Auditor |
| Sonnet 5 | Latest Anthropic mid-tier (Feb 2026) — excellent coding + reasoning | NX Expert, Post-Processor, Reporter, Developer, KB, IT |
| GPT-5.3-Codex | OpenAI's latest agentic coding model — specialized code generation + execution | Study Builder (code generation) |
| Gemini 3.0 | Google's latest — strong research, large context, multimodal | Researcher |

Note: Model assignments updated as new models release. Architecture is model-agnostic — just change the config. Start with current best and upgrade.

New Agent: 🏗️ The Study Builder

Based on Antoine's feedback, a critical missing agent: the Study Builder. This is the agent that actually writes the run_optimization.py code — the Python that gets executed on Windows to run NX + Nastran.

| Agent | ID | Model | Role |
|---|---|---|---|
| 🏗️ The Study Builder | study-builder | GPT-5.3-Codex / Opus 4.6 | Builds the actual optimization Python code. Assembles run_optimization.py, configures extractors, hooks, AtomizerSpec. The "hands" that write the code the Optimizer designs. |

Why a separate agent from the Optimizer?

  • The Optimizer designs the strategy (which algorithm, which objectives, which constraints)
  • The Study Builder implements it (writes the Python, configures files, sets up the study directory)
  • Separation of concerns: design vs implementation
  • Study Builder can use a coding-specialized model (Codex / Sonnet 5)

What the Study Builder produces:

  • run_optimization.py — the main execution script (like the V15 NSGA-II script)
  • optimization_config.json — AtomizerSpec v2.0 configuration
  • 1_setup/ directory with model files organized
  • Extractor configurations
  • Hook scripts (pre_solve, post_solve, etc.)
  • README.md documenting the study
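The deliverables list above can be sketched as a scaffolding helper. File and directory names come from the list; the placeholder contents and the `hooks/` location are assumptions, not real Atomizer output:

```python
import json
from pathlib import Path

def scaffold_study(root: str, study_name: str, spec: dict) -> Path:
    """Create the study-directory skeleton the Study Builder produces.

    Directory/file names follow the deliverables list above; the contents
    written here are illustrative placeholders.
    """
    study = Path(root) / study_name
    (study / "1_setup").mkdir(parents=True, exist_ok=True)  # model files, organized
    (study / "hooks").mkdir(exist_ok=True)                  # pre_solve / post_solve scripts
    (study / "optimization_config.json").write_text(
        json.dumps(spec, indent=2)                          # AtomizerSpec v2.0 configuration
    )
    (study / "README.md").write_text(f"# {study_name}\n\nStudy documentation.\n")
    (study / "run_optimization.py").write_text(
        "# main execution script (generated by the Study Builder)\n"
    )
    return study
```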

How it connects to Windows/NX:

  • Study Builder writes code to a Syncthing-synced directory
  • Code syncs to Antoine's Windows machine
  • Antoine (or an automation script) triggers python run_optimization.py --start
  • Results sync back via Syncthing
  • Post-Processor picks up results

Future enhancement: Direct remote execution via SSH/API to Windows — the Study Builder could trigger runs directly.

New Role: 🔄 The Framework Steward (Manager Sub-Role)

Antoine wants someone ensuring the Atomizer framework itself evolves properly. Rather than a separate agent, this is a sub-role of the Manager:

The Manager as Framework Steward:

  • After each project, Manager reviews what worked and what didn't
  • Proposes protocol updates based on project learnings
  • Ensures new tools and patterns get properly documented
  • Directs the Developer to build reusable components (not one-off hacks)
  • Maintains the "company DNA" — shared skills, protocols, QUICK_REF
  • Reports framework evolution status to Antoine periodically

This is in the Manager's AGENTS.md as an explicit responsibility.


5. Autonomy & Approval Gates

Philosophy: Autonomous but Accountable

Agents should be maximally autonomous within their expertise but need Antoine's approval for significant decisions. The system should feel like a well-run company where employees handle their work independently but escalate appropriately.

Approval Required For:

| Category | Examples | Who Escalates |
|---|---|---|
| New tools/features | Building a new extractor, adding a protocol | Developer → Manager → Secretary → Antoine |
| Divergent approaches | Changing optimization strategy mid-run, switching solver | Optimizer/NX Expert → Manager → Secretary → Antoine |
| Client deliverables | Reports, emails, any external communication | Reporter → Auditor review → Secretary → Antoine |
| Budget/resource decisions | Running a 500+ trial optimization, using an expensive model | Manager → Secretary → Antoine |
| Scope changes | Redefining objectives, adding constraints not in the contract | Technical → Manager → Secretary → Antoine |
| Framework changes | Modifying protocols, updating company standards | Manager → Secretary → Antoine |
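The escalation chains in the approval table can be written down as a lookup, which makes the invariant explicit: every chain terminates with Antoine. Category keys are illustrative shorthand:

```python
# Escalation chains from the approval table above; "specialist" stands in for
# whichever domain agent (Optimizer, NX Expert) raised the divergence.
ESCALATION = {
    "new_tool":     ["developer", "manager", "secretary", "antoine"],
    "divergent":    ["specialist", "manager", "secretary", "antoine"],
    "deliverable":  ["reporter", "auditor", "secretary", "antoine"],
    "budget":       ["manager", "secretary", "antoine"],
    "scope_change": ["technical", "manager", "secretary", "antoine"],
    "framework":    ["manager", "secretary", "antoine"],
}

def escalation_chain(category: str) -> list[str]:
    """Return the approval chain for a decision category."""
    chain = ESCALATION.get(category)
    if chain is None:
        raise ValueError(f"unknown escalation category: {category}")
    return chain
```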

No Approval Needed For:

| Category | Examples |
|---|---|
| Routine technical work | Running analysis, generating plots, extracting data |
| Internal communication | Agents discussing in project threads |
| Memory updates | Agents updating their own MEMORY.md |
| Standard protocol execution | Following existing OP/SYS procedures |
| Research | Looking up methods, papers, references |
| Small bug fixes | Fixing a broken extractor, correcting a typo |

How It Works in Practice

                    Agent works autonomously
                              │
                    Hits decision point
                              │
              ┌───────────────┼───────────────┐
              │               │               │
         Within scope    Significant     Divergent /
         & protocol      new work        risky
              │               │               │
         Continue          Manager         Manager
         autonomously      reviews         STOPS work
              │               │               │
              │          Approves or      Secretary
              │          escalates        escalates
              │               │               │
              │               │          Antoine
              │               │          reviews
              │               │               │
              └───────────────┴───────────┬───┘
                                          │
                                     Work continues

Antoine's Ability to Chime In

Antoine can always intervene:

  • Post in any project channel → Manager acknowledges and adjusts
  • DM the Secretary → Secretary propagates directive to relevant agents
  • @mention any agent directly → Agent responds and adjusts
  • Post in #hq → Manager treats as company-wide directive

The Secretary learns over time what Antoine wants to be informed about vs what can proceed silently.


6. The Secretary — Antoine's Window Into the System

The Secretary is critical to making this work. Here's how it operates:

What the Secretary Reports

Always reports:

  • Project milestones (study approved, optimization started, results ready)
  • Questions that need Antoine's input
  • Deliverables ready for review
  • Blockers that agents can't resolve
  • Audit findings (especially FAILs)
  • Budget alerts (token usage spikes, long-running tasks)

Reports periodically (daily summary):

  • Active project status across all channels
  • Agent performance notes (who's slow, who's producing great work)
  • Framework evolution updates (new protocols, new tools built)

Learns over time NOT to report:

  • Routine technical discussions
  • Standard protocol execution
  • Things Antoine consistently ignores or says "don't bother me with this"

Secretary's Learning Mechanism

The Secretary's MEMORY.md maintains a "reporting preferences" section:

## Antoine's Reporting Preferences
- ✅ Always tell me about: client deliverables, audit findings, new tools
- ⚠️ Batch these: routine progress updates, agent questions I've seen before
- ❌ Don't bother me with: routine thread discussions, standard protocol execution

Updated based on Antoine's reactions: if he says "just handle it" → add to the don't-bother list. If he says "why didn't you tell me?" → add to the always-tell list.
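That update rule is simple enough to sketch directly. A minimal version, assuming category names and reaction phrases as illustrative triggers (the real Secretary would classify reactions with its model, not string matching):

```python
# Reporting preferences, mirroring the MEMORY.md section above.
prefs = {"always": set(), "never": set()}

def learn(category: str, reaction: str) -> None:
    """Update the Secretary's reporting preferences from Antoine's reaction."""
    text = reaction.lower()
    if "just handle it" in text:            # -> don't-bother list
        prefs["never"].add(category)
        prefs["always"].discard(category)
    elif "why didn't you tell me" in text:  # -> always-tell list
        prefs["always"].add(category)
        prefs["never"].discard(category)

def should_report(category: str) -> bool:
    """Report unless the category has been explicitly silenced."""
    return category in prefs["always"] or category not in prefs["never"]
```

Defaulting unknown categories to "report" is the safe choice: the Secretary over-informs until told otherwise, which matches the escalation philosophy above.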


7. Memory Architecture

Three Layers

┌─────────────────────────────────────────────────┐
│           COMPANY MEMORY (shared)                │
│  Atomizer protocols, standards, how we work     │
│  Lives in: shared skills/ or common AGENTS.md   │
│  Updated: rarely, by Manager or Antoine         │
└─────────────────────┬───────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────┐
│           AGENT MEMORY (per-agent)               │
│  Role-specific knowledge, past decisions,       │
│  specialized learnings                           │
│  Lives in: each agent's MEMORY.md               │
│  Updated: by each agent after significant work   │
└─────────────────────┬───────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────┐
│           PROJECT MEMORY (per-project)            │
│  Current client context, study parameters,      │
│  decisions made, results so far                  │
│  Lives in: memory/<project-name>.md per agent    │
│  Updated: actively during project work           │
└─────────────────────────────────────────────────┘

Company Memory (Shared Knowledge)

Every agent gets access to core company knowledge through shared skills:

~/.clawdbot/skills/atomizer-protocols/
├── SKILL.md          ← Skill loader
├── protocols/        ← All Atomizer protocols (OP_01-08, SYS_10-18)
├── QUICK_REF.md      ← One-page protocol cheatsheet
└── company-identity/ ← Who we are, how we work

This is the "institutional memory" — it evolves slowly and represents the company's DNA.

Agent Memory (Per-Role)

Each agent's MEMORY.md contains role-specific accumulated knowledge:

Example — Optimizer's MEMORY.md:

## Optimization Lessons
- CMA-ES doesn't evaluate x0 first — always enqueue baseline trial
- Surrogate + L-BFGS is dangerous — gradient descent finds fake optima
- For WFE problems: start with CMA-ES, 50-100 trials, then refine
- Relative WFE math: use extract_relative(), not abs(RMS_a - RMS_b)

## Algorithm Selection Guide
- < 5 variables, smooth: Nelder-Mead or COBYLA
- 5-20 variables, noisy: CMA-ES
- > 20 variables: Bayesian (Optuna TPE) or surrogate-assisted
- Multi-objective: NSGA-II or MOEA/D
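The selection guide above is already a decision procedure; written as code it becomes testable. A direct transcription of the rules (the noisy small-problem case falls through to CMA-ES, which the guide leaves implicit):

```python
def pick_algorithm(n_vars: int, noisy: bool = False,
                   multi_objective: bool = False) -> str:
    """Algorithm choice per the Optimizer's selection guide above."""
    if multi_objective:
        return "NSGA-II"          # or MOEA/D
    if n_vars < 5 and not noisy:
        return "Nelder-Mead"      # or COBYLA; smooth low-dimensional problems
    if n_vars <= 20:
        return "CMA-ES"           # robust to noisy response surfaces
    return "Optuna TPE"           # Bayesian / surrogate-assisted beyond 20 vars
```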

Project Memory (Per-Job)

When working on #starspec-wfe-opt, each involved agent maintains:

memory/starspec-wfe-opt.md

Contains: current parameters, decisions made, results, blockers, next steps.


8. Protocol Enforcement

This is NOT a free-for-all. Every agent follows Atomizer protocols.

How Protocols Are Enforced

  1. AGENTS.md — Each agent's AGENTS.md contains protocol rules for their role
  2. Shared skill — the atomizer-protocols skill is loaded by all agents
  3. Manager oversight — Manager checks protocol compliance before approving steps
  4. Auditor review — Auditor specifically validates protocol adherence
  5. Long-term memory — Violations get recorded, lessons accumulate

Protocol Flow Example

Manager: "@technical, new job. Client wants WFE optimization on mirror assembly.
          Here's the contract: [link]. Break it down per OP_01."

Technical: "Per OP_01 (Study Lifecycle), here's the breakdown:
           - Geometry: M1 mirror, conical design
           - Parameters: 6 thickness zones, 3 rib heights  
           - Objective: minimize peak-to-valley WFE
           - Constraints: mass < 12kg, first mode > 80Hz
           - Solver: NX Nastran SOL 101 + thermal coupling
           @nx-expert — can you confirm solver config?"

NX Expert: "SOL 101 is correct for static structural. For thermal coupling
           you'll need SOL 153 or a chained analysis. Recommend chained 
           approach per SYS_12. I'll prep the journal template."

Manager: "@optimizer, based on Technical's breakdown, propose algorithm."

Optimizer: "9 variables, likely noisy response surface → CMA-ES recommended.
           Starting population: 20, budget: 150 evaluations.
           Per OP_03, I'll set up baseline trial first (enqueue x0).
           @postprocessor — confirm you have WFE Zernike extractors ready."

9. The CAD Documenter Integration

Antoine's CAD Documenter skill is the knowledge pipeline into this system.

Flow

Antoine records screen + voice   →   CAD Documenter processes
walking through CAD/FEM model         video + transcript
                                           │
                                           ▼
                               Knowledge Base documents
                               in Obsidian vault
                                           │
                                           ▼
                               KB Agent indexes and makes
                               available to all agents
                                           │
                                           ▼
                               Technical Lead reads KB
                               when breaking down new job
                               
                               Optimizer reads KB to
                               understand parameter space
                               
                               NX Expert reads KB for
                               solver/model specifics

This is how the "company" learns about new models and client systems — through Antoine's walkthroughs processed by CAD Documenter, then made available to all agents via the Knowledge Base agent.


10. End-to-End Workflow

Client Job Lifecycle

Phase 1: INTAKE
├─ Antoine creates #<client>-<job> channel
├─ Posts contract/requirements
├─ Manager acknowledges, starts breakdown
├─ Technical Lead distills engineering problem
└─ Secretary summarizes for Antoine

Phase 2: PLANNING
├─ Technical produces parameter list + objectives
├─ Optimizer proposes algorithm + strategy
├─ NX Expert confirms solver setup
├─ Auditor reviews plan for completeness
├─ Manager compiles study plan
└─ Secretary asks Antoine for approval

Phase 3: KNOWLEDGE
├─ Antoine records CAD/FEM walkthrough (CAD Documenter)
├─ KB Agent indexes and summarizes
├─ All agents can now reference the model details
└─ Technical updates plan with model-specific info

Phase 4: STUDY BUILD
├─ Study Builder writes run_optimization.py from Optimizer's design
├─ NX Expert reviews solver config and journal scripts
├─ Auditor reviews study setup for completeness
├─ Study files sync to Windows via Syncthing
├─ Antoine triggers execution (or future: automated trigger)
└─ Secretary confirms launch with Antoine

Phase 5: EXECUTION
├─ Optimization runs on Windows (NX + Nastran)
├─ Post-Processor monitors results as they sync back
├─ Manager tracks progress, handles failures
└─ Secretary updates Antoine on milestones

Phase 6: ANALYSIS
├─ Post-Processor generates insights (Zernike, stress, modal)
├─ Optimizer interprets convergence and results
├─ Auditor validates against physics + contract
├─ Technical confirms objectives met
└─ Manager compiles findings

Phase 7: DELIVERY
├─ Reporter generates Atomaste-branded PDF report
├─ Auditor reviews report for accuracy
├─ Secretary presents to Antoine for final review
├─ Antoine approves → Reporter/Secretary sends to client
└─ KB Agent archives project learnings

11. Recommendations

🟢 Start Simple, Scale Smart

Do NOT build all 13 agents at once. Start with 3-4, prove the pattern works, then add specialists.

Phase 0 (Proof of Concept): Manager + Secretary + Technical Lead

  • Prove the multi-agent orchestration pattern in Clawdbot
  • Validate Slack channel routing + @mention patterns
  • Test memory sharing and protocol enforcement
  • Run one real project through the system

Phase 1 (Core Team): Add Optimizer + Auditor

  • Now you have the critical loop: plan → optimize → validate
  • Test real FEA workflow end-to-end

Phase 2 (Specialists): Add NX Expert + Post-Processor + Reporter

  • Full pipeline from intake to deliverable
  • Atomaste report generation integrated

Phase 3 (Full Company): Add Researcher + Developer + KB + IT

  • Complete ecosystem with all support roles

🟢 Dedicated Slack Workspace

Antoine wants this professional and product-ready — content for videos and demos. A separate Slack workspace is the right call:

  • Clean namespace — no personal channels mixed in
  • Professional appearance for video content and demos
  • Each agent gets a proper Slack identity (name, emoji, avatar)
  • Dedicated bot tokens per agent (true identity separation)
  • Channel naming convention: #<purpose> or #<client>-<job> (no #atomizer- prefix needed since the whole workspace IS Atomizer)
  • Use threads heavily to keep project channels organized

🟢 Manager Is the Bottleneck (By Design)

The Manager agent should be the ONLY one that initiates cross-agent communication in project channels. Other agents respond when @-mentioned. This prevents chaos and ensures protocol compliance.

Exception: Secretary can always message Antoine directly.

🟢 Use Sub-Agents for Heavy Lifting

For compute-heavy tasks (running optimization, large post-processing), use sessions_spawn to run them as sub-agents. This keeps the main agent sessions responsive.

🟢 Shared Skills for Company DNA

Put Atomizer protocols in a shared skill (~/.clawdbot/skills/atomizer-protocols/) rather than duplicating in every agent's workspace. All agents load the same protocols.

🟢 Git-Based Knowledge Sync

Use the existing Atomizer Gitea repo as the knowledge backbone:

  • Agents read from the repo (via local clone synced by Syncthing)
  • LAC insights, study results, and learnings flow through Git
  • This extends the existing bridge architecture from the Master Plan

🟢 Cost Management

With 13 agents potentially running Opus 4.6, costs add up fast. Recommendations:

  • Only wake agents when needed — they shouldn't be polling constantly
  • Use cheaper models for simpler roles (Sonnet for NX Expert, IT, etc.)
  • Sub-agents with timeouts — runTimeoutSeconds prevents runaway sessions
  • Archive aggressively — sub-agent sessions auto-archive after 60 minutes
  • Monitor usage — track per-agent token consumption
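The "monitor usage" point above can be sketched as a per-agent accumulator with a spike alert. Threshold and numbers are illustrative; real figures would come from provider billing data:

```python
from collections import defaultdict

class UsageMonitor:
    """Track per-agent token consumption and flag threshold crossings.

    The alert threshold is illustrative; the Secretary would surface
    these alerts to Antoine per the reporting rules above.
    """
    def __init__(self, alert_threshold: int = 1_000_000):
        self.tokens = defaultdict(int)
        self.alert_threshold = alert_threshold

    def record(self, agent_id: str, tokens: int) -> bool:
        """Add usage; return True only when this call crosses the threshold."""
        before = self.tokens[agent_id]
        self.tokens[agent_id] += tokens
        return before < self.alert_threshold <= self.tokens[agent_id]
```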

🟡 Future-Proofing: MCP Server Integration

The Atomizer repo already has an mcp-server/ directory. As MCP (Model Context Protocol) matures, agents could access Atomizer functionality through MCP tools instead of direct file access. This is the long-term architectural direction — keep it in mind but don't block on it now.

🟡 Future-Proofing: Voice Interface

Antoine's brainstorm mentions walking through models on video. Future state: agents could listen to live audio via Whisper, making the interaction even more natural. "Hey @manager, I'm going to walk you through the assembly now" → live transcription → KB Agent processes in real-time.


12. What Changes From Current Atomizer

| Current | New |
|---|---|
| Single Claude Code instance on Windows | Multiple specialized agents on Clawdbot |
| Antoine operates everything directly | Agents collaborate, Antoine steers |
| Manual study setup + optimization | Orchestrated workflow across agents |
| LAC learning in one brain | Distributed memory across specialized agents |
| Reports are manual | Reporter agent + Atomaste template = automated |
| Knowledge in scattered files | KB Agent maintains structured documentation |
| One model does everything | Right model for each job |
| No audit trail | Auditor + protocol enforcement = full traceability |

What We Keep

  • All Atomizer protocols (OP_01-08, SYS_10-18)
  • The optimization engine and extractors
  • LAC (Learning Atomizer Core) — distributed across agents
  • AtomizerSpec v2.0 format
  • Dashboard (still needed for visualization + manual control)
  • NX integration (still runs on Windows)
  • The dream workflow vision (this is the implementation path)

What's New

  • 🆕 Multi-agent orchestration via Clawdbot
  • 🆕 Slack-native collaboration interface
  • 🆕 Specialized models per task
  • 🆕 Distributed memory architecture
  • 🆕 Protocol enforcement via multiple checkpoints
  • 🆕 Automated report generation pipeline
  • 🆕 Knowledge Base from CAD Documenter
  • 🆕 Researcher agent with web access

13. Risks and Mitigations

| Risk | Impact | Mitigation |
|---|---|---|
| Agent coordination overhead | Agents talk too much, nothing gets done | Manager as bottleneck, strict protocol enforcement |
| Cost explosion | 13 agents burning tokens | Tiered models, wake-on-demand, sub-agents with timeouts |
| Context window limits | Agents lose track of complex projects | Memory architecture (3 layers), thread-based Slack organization |
| NX still on Windows | Can't fully automate FEA execution from Linux | Keep NX operations on Windows, sync results via Syncthing |
| Clawdbot multi-agent maturity | Edge cases in multi-agent routing | Start with 3-4 agents, discover issues early, contribute fixes |
| Over-engineering | Building everything before proving anything | Phase 0 proof-of-concept first |
| Agent hallucination | Agent produces wrong engineering results | Auditor agent, human-in-the-loop on all deliverables |

14. Success Criteria

Phase 0 Success (Proof of Concept)

  • Manager + Secretary + Technical running as separate Clawdbot agents
  • Can create a project channel and route messages correctly
  • Manager orchestrates Technical breakdown of a real problem
  • Secretary successfully summarizes and escalates to Antoine
  • Memory persistence works across sessions

Phase 1 Success (Core Team)

  • Full planning → optimization → validation cycle with agents
  • Optimizer configures a real study using Atomizer protocols
  • Auditor catches at least one issue the optimizer missed
  • < 30 minutes from problem statement to optimization launch

Full Success (Complete Company)

  • End-to-end client job: intake → plan → optimize → report → deliver
  • Professional PDF report generated automatically
  • Knowledge from previous jobs improves future performance
  • Antoine spends < 20% of his time on the job (agents handle the rest)

This is the plan. Let's build this company. 🏭

Created: 2026-02-07 by Mario
Last updated: 2026-02-08