Antoine c1f2634636 docs: Add user guide for proper Atomizer usage and evolution
Comprehensive guide teaching users how to interact with Atomizer so that
the learning system evolves correctly. Covers:

- The right mindset (colleague, not tool)
- Starting sessions with proper context
- Communicating goals, constraints, preferences
- Creating and running optimization studies
- Analyzing and validating results
- Reporting errors effectively
- Contributing to LAC (recording insights, outcomes, workarounds)
- Ending sessions properly to capture learnings

Includes:
- Mermaid diagrams for learning loop and flows
- Good vs bad examples for every interaction type
- Complete example session transcript
- Quick reference card for common patterns
- Golden rules summary

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Atomizer User Guide

How to Use Atomizer So It Evolves the Right Way

Version: 1.0
Last Updated: 2025-12-11


Introduction

Atomizer is not just an optimization tool - it's a learning system. Every session you have with Claude contributes to making future sessions better. This guide teaches you how to use Atomizer properly so that:

  1. You get the best results from your optimizations
  2. The system learns and improves over time
  3. Knowledge is preserved and shared

Table of Contents

  1. The Right Mindset
  2. Starting a Session
  3. Communicating with Atomizer Claude
  4. Creating Optimization Studies
  5. Running Optimizations
  6. Analyzing Results
  7. When Things Go Wrong
  8. Contributing to Learning
  9. Ending a Session
  10. Best Practices Summary

1. The Right Mindset

Think of Atomizer as a Knowledgeable Colleague

❌ Wrong: "Atomizer is a tool I use"
✅ Right: "Atomizer is a colleague who learns from our conversations"

When you work with Atomizer:

  • Explain your goals - not just what you want, but why
  • Share context - what constraints matter? What tradeoffs are acceptable?
  • Report outcomes - did it work? What surprised you?
  • Mention discoveries - found something unexpected? Say so!

The Learning Loop

graph LR
    A[You describe problem] --> B[Claude suggests approach]
    B --> C[Optimization runs]
    C --> D[Results analyzed]
    D --> E[Learnings recorded]
    E --> F[Next session is smarter]
    F --> A

Your job: Keep Claude informed so the loop works.


2. Starting a Session

What Happens Behind the Scenes

When you start a new Claude Code session in the Atomizer project:

  1. Claude reads CLAUDE.md (system instructions)
  2. Claude checks for active studies
  3. Claude queries LAC for relevant prior knowledge
  4. Claude is ready to help

Good Session Starters

✅ "I need to optimize my bracket for minimum mass while keeping
   stress below 250 MPa. The model is in studies/bracket_v4/"

✅ "Continue working on the mirror optimization from yesterday.
   I think we were at 50 trials."

✅ "I'm having trouble with the beam study - the solver keeps
   timing out."

Bad Session Starters

❌ "Optimize this" (no context)

❌ "Run the thing" (what thing?)

❌ "It's not working" (what isn't working?)

Providing Context Helps Learning

When you provide good context, Claude can:

  • Find similar past optimizations in LAC
  • Apply learnings from previous sessions
  • Make better method recommendations

sequenceDiagram
    participant You
    participant Claude
    participant LAC

    You->>Claude: "Optimize bracket for mass, stress < 250 MPa"
    Claude->>LAC: Query similar: "bracket mass stress"
    LAC-->>Claude: Found: TPE worked 85% for brackets
    LAC-->>Claude: Insight: "20 startup trials helps"
    Claude->>You: "Based on 5 similar studies, I recommend TPE with 20 startup trials..."
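The "startup trials" in that recommendation are an initial random-exploration phase before the sampler starts exploiting what it has learned. A toy sketch of the idea in plain Python (this is not the real TPE algorithm, and `toy_optimize` is a made-up helper, only an illustration of explore-then-exploit):

```python
import random

def toy_optimize(objective, bounds, n_trials=100, n_startup=20, seed=0):
    """Toy illustration of 'startup trials': the first n_startup trials
    explore the bounds uniformly at random; later trials sample near the
    best point found so far. (Real TPE builds probabilistic models
    instead -- this only sketches the explore-then-exploit idea.)"""
    rng = random.Random(seed)
    lo, hi = bounds
    best_x, best_y = None, float("inf")
    for trial in range(n_trials):
        if trial < n_startup or best_x is None:
            x = rng.uniform(lo, hi)                  # pure exploration
        else:
            width = (hi - lo) * 0.1                  # exploit near the best
            x = min(hi, max(lo, best_x + rng.uniform(-width, width)))
        y = objective(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Example: minimize (x - 3)^2 on [0, 10]
x_opt, y_opt = toy_optimize(lambda x: (x - 3) ** 2, (0.0, 10.0))
```

Too few startup trials and the sampler exploits a poor early guess; too many and the budget is wasted on random search. That tradeoff is exactly the kind of knowledge LAC accumulates per problem type.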

3. Communicating with Atomizer Claude

Be Specific About Goals

❌ Vague: "Make it lighter"

✅ Specific: "Minimize mass while keeping maximum displacement < 2mm
   and first natural frequency > 100 Hz"

Mention Constraints and Preferences

✅ "I need results by Friday, so limit to 50 trials"

✅ "This is a preliminary study - rough results are fine"

✅ "This is for production - I need high confidence in the optimum"

✅ "I prefer TPE over CMA-ES based on past experience"

Ask Questions

Atomizer Claude is an expert. Use that expertise:

✅ "What method do you recommend for this problem?"

✅ "How many trials should I run?"

✅ "Is this the right extractor for von Mises stress?"

✅ "Why did convergence slow down after trial 30?"

Report What You Observe

This is critical for learning:

✅ "The optimization converged faster than expected - maybe because
   the design space is simple?"

✅ "I noticed the solver is slow when thickness < 2mm"

✅ "The Pareto front has a sharp knee around mass = 3kg"

✅ "This result doesn't make physical sense - stress should increase
   with thinner walls"

4. Creating Optimization Studies

The Creation Flow

flowchart TD
    A[Describe your optimization goal] --> B{Claude analyzes model}
    B --> C[Claude suggests config]
    C --> D{You approve?}
    D -->|Yes| E[Files generated]
    D -->|Adjust| F[Discuss changes]
    F --> C
    E --> G[Ready to run]

What to Provide

| Information | Why It Matters |
|---|---|
| Model files (`.prt`, `.sim`, `.fem`) | Claude needs to analyze them |
| Optimization goal | "Minimize mass", "Maximize stiffness" |
| Constraints | "Stress < 250 MPa", "Frequency > 100 Hz" |
| Design variables | Which parameters to vary (or let Claude discover) |
| Trial budget | How many evaluations you can afford |

Example: Good Study Creation Request

"Create an optimization study for my UAV arm:

Goal: Minimize mass while maximizing stiffness (multi-objective)

Constraints:
- Maximum stress < 200 MPa
- First frequency > 50 Hz

Design variables:
- Wall thickness (1-5 mm)
- Rib spacing (10-50 mm)
- Material (Al6061 or Al7075)

Budget: About 100 trials, I have time for a thorough study.

The model is in studies/uav_arm_v2/1_setup/model/"

Review the Generated Config

Claude will generate optimization_config.json. Review it:

✅ Check that objectives match your goals
✅ Verify constraints are correct
✅ Confirm design variable bounds make sense
✅ Ensure extractors are appropriate

If something's wrong, say so! This helps Claude learn what works.
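To make that checklist concrete, here is a hypothetical sketch of what optimization_config.json might contain for the UAV arm example above. The field names are illustrative assumptions, not Atomizer's actual schema - review the generated file for the real structure:

```json
{
  "objectives": [
    { "name": "mass", "direction": "minimize" },
    { "name": "stiffness", "direction": "maximize" }
  ],
  "constraints": [
    { "name": "max_stress", "max": 200, "unit": "MPa" },
    { "name": "first_frequency", "min": 50, "unit": "Hz" }
  ],
  "design_variables": [
    { "name": "wall_thickness", "low": 1.0, "high": 5.0, "unit": "mm" },
    { "name": "rib_spacing", "low": 10.0, "high": 50.0, "unit": "mm" },
    { "name": "material", "choices": ["Al6061", "Al7075"] }
  ],
  "sampler": { "method": "TPE", "n_startup_trials": 20 },
  "n_trials": 100
}
```

Whatever the actual schema, the same questions apply: do the directions, bounds, units, and limits match what you asked for?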


5. Running Optimizations

Before Running

Ask Claude to validate:

"Please validate the config before we run"

This catches errors early.

During Running

You can:

  • Check progress: "How many trials completed?"
  • See best so far: "What's the current best design?"
  • Monitor convergence: "Is it converging?"

If You Need to Stop

"Pause the optimization - I need to check something"

Claude will help you resume later.

Long-Running Optimizations

For studies with many trials:

"Start the optimization and let it run overnight.
I'll check results tomorrow."

6. Analyzing Results

What to Ask For

✅ "Show me the best design"

✅ "Plot the convergence history"

✅ "Show the Pareto front" (for multi-objective)

✅ "Compare the top 5 designs"

✅ "What parameters are most important?"

✅ "Generate a study report"
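"Convergence history" simply means the best objective value seen so far after each trial. A minimal sketch of how you could compute it yourself from a list of trial results (`running_best` is an illustrative helper, not an Atomizer API):

```python
def running_best(values, direction="minimize"):
    """Convergence history: the best objective value seen so far after
    each trial. A long flat tail suggests the search has converged."""
    better = min if direction == "minimize" else max
    best, history = None, []
    for v in values:
        best = v if best is None else better(best, v)
        history.append(best)
    return history

# Trial-by-trial mass results from a hypothetical run
masses = [3.1, 2.9, 3.4, 2.6, 2.7, 2.6, 2.5, 2.5]
print(running_best(masses))  # monotone non-increasing for minimization
```

If the curve is still dropping steeply at your trial budget, that is worth reporting: it may mean more trials are needed, and it is exactly the kind of observation LAC can learn from.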

Validate Results Physically

This is important for learning! Tell Claude if results make sense:

✅ "This looks right - thinner walls do reduce mass"

✅ "This is surprising - I expected more sensitivity to rib spacing"

✅ "This can't be right - stress should increase

When results don't make sense, investigate together:

  • Check extractor configuration
  • Verify solver completed correctly
  • Look for constraint violations
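The last check can be scripted. A small sketch of a constraint sanity check before trusting an "optimal" design (the function and field names here are hypothetical, not part of Atomizer):

```python
def check_constraints(results, limits):
    """Return a list of violated constraints.

    limits maps a result name to ("max", bound) or ("min", bound);
    results maps the same names to reported values. Names are
    illustrative only."""
    violations = []
    for name, (kind, bound) in limits.items():
        value = results.get(name)
        if value is None:
            violations.append(f"{name}: missing from results")
        elif kind == "max" and value > bound:
            violations.append(f"{name}: {value} exceeds limit {bound}")
        elif kind == "min" and value < bound:
            violations.append(f"{name}: {value} below limit {bound}")
    return violations

limits = {"max_stress_MPa": ("max", 250), "first_freq_Hz": ("min", 100)}
print(check_constraints({"max_stress_MPa": 237, "first_freq_Hz": 92}, limits))
```

A design that violates a constraint should never be reported as the optimum; if it is, the extractor or the config is the first place to look.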

Record Insights

If you discover something interesting:

"Record this insight: For UAV arms with thin walls (<2mm),
the frequency constraint becomes dominant before stress."

7. When Things Go Wrong

How to Report Errors

Good error report:

"The optimization failed at trial 23.
Error message: 'OP2 file not found'
The NX log shows 'Singular stiffness matrix'"

Bad error report:

"It's broken"

Common Issues and What to Say

| Issue | How to Report |
|---|---|
| Solver timeout | "Solver timed out after X minutes on trial Y" |
| Missing file | "Can't find [filename] - should it be in [location]?" |
| Unexpected results | "Results don't match physics - [explain why]" |
| Slow convergence | "Still not converged after X trials - should I continue?" |

Help Claude Help You

When troubleshooting:

✅ "Here's what I already tried: [list attempts]"

✅ "This worked for a similar study last week"

✅ "The model works fine when I run it manually in NX"

Workarounds Should Be Recorded

If you find a workaround:

"We found that loading the _i.prt file first fixes the mesh update issue.
Please record this as a workaround."

This helps future sessions avoid the same problem.


8. Contributing to Learning

The Learning Atomizer Core (LAC)

LAC stores three types of knowledge:

graph TB
    subgraph LAC["What LAC Learns"]
        A[Optimization Memory<br/>What methods work for what]
        B[Session Insights<br/>Failures, successes, workarounds]
        C[Skill Evolution<br/>Protocol improvements]
    end

How You Contribute

1. Report Outcomes

At the end of a successful optimization:

"The optimization completed successfully. TPE worked well for this
bracket problem - converged at trial 67 out of 100."

Claude records this to LAC automatically.

2. Share Discoveries

When you learn something:

"I discovered that CMA-ES struggles with this type of problem
because of the discrete frequency target. TPE handled it better."

Claude will record this insight.

3. Report Preferences

Your preferences help personalize future sessions:

"I prefer seeing actual values in plots rather than normalized values"

"I like concise summaries - you don't need to explain basic FEA to me"

4. Suggest Improvements

If documentation was unclear:

"The protocol didn't explain how to handle assemblies -
you should add that."

Claude will suggest a protocol update.

What Gets Recorded

| Type | Example | Used For |
|---|---|---|
| Success | "TPE converged in 67 trials for bracket" | Method recommendations |
| Failure | "CMA-ES failed on discrete targets" | Avoiding bad choices |
| Workaround | "Load `_i.prt` before `UpdateFemodel()`" | Fixing known issues |
| Preference | "User prefers concise output" | Personalization |
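One way to picture a record like the rows above is as a small structured entry. This sketch uses assumed field names, not the actual LAC schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LACRecord:
    """Sketch of one LAC knowledge entry, mirroring the table above.
    Field names are illustrative assumptions, not the real schema."""
    kind: str          # "success" | "failure" | "workaround" | "preference"
    text: str          # the learning itself, in plain language
    tags: list = field(default_factory=list)   # for similarity queries
    recorded: date = field(default_factory=date.today)

rec = LACRecord(
    kind="success",
    text="TPE converged in 67 trials for bracket",
    tags=["bracket", "TPE"],
)
```

The tags are what let a future session's "bracket mass stress" query surface this record, which is why naming your geometry and goal explicitly pays off.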

9. Ending a Session

Before You Go

Take 30 seconds to wrap up properly:

"Let's wrap up this session."

Claude will:

  1. Summarize what was accomplished
  2. Record any learnings to LAC
  3. Note the current state of any studies
  4. Suggest next steps

Good Session Endings

✅ "We're done for today. The optimization is at trial 50,
   continuing overnight. I'll check results tomorrow."

✅ "Session complete. Please record that TPE worked well
   for this beam optimization."

✅ "Ending session. Next time I want to analyze the Pareto
   front in more detail."

Bad Session Endings

❌ [Just closing the window without wrapping up]

❌ [Stopping mid-task without noting the state]

Session Summary

Ask for a summary:

"Summarize this session"

You'll get:

  • What was accomplished
  • Current state of studies
  • Learnings recorded
  • Recommended next steps

10. Best Practices Summary

Do These Things

| Practice | Why |
|---|---|
| Provide context | Enables LAC queries and better recommendations |
| Explain your goals | Claude can suggest better approaches |
| Report outcomes | Builds optimization memory |
| Share discoveries | Prevents repeat mistakes |
| Validate results | Catches errors, improves extractors |
| Wrap up sessions | Records learnings properly |

Avoid These Things

| Anti-Pattern | Why It's Bad |
|---|---|
| Vague requests | Claude can't help effectively |
| Ignoring results | Missed learning opportunities |
| Not reporting errors | Same errors repeat |
| Abandoning sessions | Learnings not recorded |
| Skipping validation | Errors propagate |

The Golden Rules

┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│  1. CONTEXT IS KING                                                 │
│     The more context you provide, the better Claude can help.       │
│                                                                     │
│  2. OUTCOMES MATTER                                                 │
│     Always report whether things worked or didn't.                  │
│                                                                     │
│  3. SHARE DISCOVERIES                                               │
│     If you learn something, say it. It helps everyone.              │
│                                                                     │
│  4. CLOSE THE LOOP                                                  │
│     End sessions properly so learnings are captured.                │
│                                                                     │
│  5. TRUST BUT VERIFY                                                │
│     Review generated configs. Validate results physically.          │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘

Quick Reference Card

Session Start

"I need to [goal] for my [geometry type].
The model is in [location].
Constraints: [list].
Budget: [trials]."

During Session

"What do you recommend for [decision]?"
"The results show [observation] - does this make sense?"
"I discovered that [insight]."

Session End

"Let's wrap up. Please record that [outcome/learning]."
"Summarize this session."

Reporting Issues

"[Action] failed with error: [message]
I already tried: [attempts]
The expected behavior was: [expectation]"

Example: A Complete Session

Here's what a good session looks like:

YOU: I need to optimize my bracket for minimum mass.
     Stress must stay below 250 MPa. The model is in
     studies/bracket_v5/1_setup/model/. I have time
     for about 100 trials.

CLAUDE: I'll analyze your model and check LAC for similar studies.
        [Analyzes model, queries LAC]

        Based on 3 similar bracket optimizations, I recommend TPE
        with 20 startup trials. This converged in ~70 trials for
        similar problems.

        Found 4 expressions: thickness_1, thickness_2, fillet_r, width
        I suggest varying all of them.

        Here's the proposed config: [shows config]

        Want me to proceed?

YOU: Looks good, but I want to fix fillet_r at 5mm -
     it's a manufacturing constraint.

CLAUDE: Updated config with fillet_r fixed at 5mm.
        Validated - ready to run.

        Start the optimization?

YOU: Yes, run it.

[Optimization runs...]

YOU: How's progress?

CLAUDE: Trial 45/100 complete.
        Current best: 2.34 kg at trial 32
        Convergence: Good - variance dropping steadily

        On track to complete in ~30 minutes.

[Optimization completes]

CLAUDE: Optimization complete.
        Best design: mass = 2.18 kg (trial 78)
        - thickness_1: 3.2 mm
        - thickness_2: 2.8 mm
        - width: 45 mm

        Stress constraint satisfied: max stress = 237 MPa

        Want me to generate a detailed report?

YOU: Yes. Also, I noticed convergence was faster than the similar
     studies you mentioned. Maybe because I fixed the fillet radius?

CLAUDE: Good observation. That reduces the design space from 4 to 3
        variables, which typically speeds convergence.

        I'll record this insight: "Fixing fillet radius in bracket
        optimization reduced design space and improved convergence
        by ~20%."

        [Generates report]

        Report saved. Anything else?

YOU: No, let's wrap up.

CLAUDE: Session Summary:

        ## Accomplished
        - Created and ran bracket_v5 optimization
        - Found optimum: 2.18 kg at trial 78
        - Generated study report

        ## Learnings Recorded
        - TPE effective for 3-variable bracket optimization
        - Fixing fillet radius improved convergence ~20%

        ## Next Steps
        1. Review detailed report
        2. Validate design in NX
        3. Consider manufacturing feasibility

Getting Help

If you're stuck:

"I'm not sure how to [task]. Can you help?"

If something seems wrong:

"This doesn't seem right because [reason]. Can we investigate?"

If you want to learn:

"Explain how [concept] works in Atomizer"

Remember: Every session makes Atomizer smarter. Your contributions matter.

Atomizer: Where engineers talk, AI optimizes, and learning never stops.