Commit Graph

4 Commits

73a7b9d9f1 (2026-01-13 15:53:55 -05:00)
feat: Add dashboard chat integration and MCP server
Major changes:
- Dashboard: WebSocket-based chat with session management (see the first sketch below)
- Dashboard: New chat components (ChatPane, ChatInput, ModeToggle)
- Dashboard: Enhanced UI with parallel coordinates chart
- MCP Server: New atomizer-tools server for Claude integration (see the second sketch below)
- Extractors: Enhanced Zernike OPD extractor
- Reports: Improved report generator
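
A minimal sketch of the WebSocket chat with per-session history, assuming a FastAPI stack; the route, session scheme, and echo reply are illustrative assumptions, not the dashboard's actual code:

```python
# Hypothetical sketch: WebSocket chat with session management (FastAPI assumed).
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()
sessions: dict[str, list[str]] = {}  # session id -> message history

@app.websocket("/ws/chat/{session_id}")
async def chat(ws: WebSocket, session_id: str):
    await ws.accept()
    history = sessions.setdefault(session_id, [])
    try:
        while True:
            msg = await ws.receive_text()
            history.append(msg)                    # keep per-session context
            await ws.send_text(f"ack {len(history)}: {msg}")
    except WebSocketDisconnect:
        pass  # history survives for reconnects within this process
```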

New studies (configs and scripts only):
- M1 Mirror: Cost reduction campaign studies
- Simple Beam, Simple Bracket, UAV Arm studies

Note: Large iteration data (2_iterations/, best_design_archive/)
excluded via .gitignore - kept on local Gitea only.
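
For the MCP server, a minimal sketch using FastMCP from the official `mcp` Python SDK; the tool name and return payload are hypothetical, and only the server name atomizer-tools comes from this commit:

```python
# Hypothetical sketch of an MCP tool server in the style of atomizer-tools.
# Assumes the official Python SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("atomizer-tools")

@mcp.tool()
def study_summary(study_dir: str) -> dict:
    """Return headline metrics for a study directory (illustrative stub)."""
    # A real implementation would parse the study's result files.
    return {"study": study_dir, "status": "ok"}

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which Claude clients expect
```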

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

b1ffc64407 (2025-12-31 16:06:33 -05:00)
feat: Implement SAT v3 achieving WS=205.58 (new campaign record)
Self-Aware Turbo v3 optimization validated on M1 Mirror flat back:
- Best WS: 205.58 (12.68 points, ~5.8% better than previous best 218.26)
- 100% feasibility rate, 100% unique designs
- Uses 556 training samples from V5-V8 campaign data

Key innovations in V9:
- Adaptive exploration schedule (15% → 8% → 3%; sketched after this list)
- Mass threshold at 118 kg (optimal sweet spot)
- 70% exploitation near best design
- Seeded with best known design from V7
- Ensemble surrogate with R²=0.99
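
A minimal sketch of the adaptive schedule and exploit/explore split; the 15% → 8% → 3% rates and the 70% exploit-near-best figure come from this commit, while the phase boundaries and design encoding are illustrative assumptions:

```python
# Hypothetical sketch of SAT v3's adaptive exploration schedule.
import random

def exploration_rate(progress: float) -> float:
    """Step the exploration fraction down as the campaign progresses."""
    if progress < 1/3:
        return 0.15   # early: broad global search
    if progress < 2/3:
        return 0.08   # mid: tighten around promising regions
    return 0.03       # late: almost pure exploitation

def propose(step, n_steps, best, lo=0.0, hi=1.0, sigma=0.05):
    """Pick the next candidate design (a list of floats in [lo, hi])."""
    if random.random() < exploration_rate(step / n_steps):
        return [random.uniform(lo, hi) for _ in best]        # explore globally
    if random.random() < 0.70:
        return [min(hi, max(lo, x + random.gauss(0, sigma)))
                for x in best]                               # exploit near best
    return [min(hi, max(lo, x + random.gauss(0, 4 * sigma)))
            for x in best]                                   # wider local move
```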

Updated documentation:
- SYS_16: SAT protocol updated to v3.0 VALIDATED
- Cheatsheet: Added SAT v3 as recommended method
- Context: Updated protocol overview

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

f0e594570a (2025-12-30 09:36:40 -05:00)
docs: Add comprehensive podcast briefing document
- Add ATOMIZER_PODCAST_BRIEFING.md with complete technical overview
- Covers all 12 sections, including architecture, optimization, and neural acceleration
- Includes impressive statistics and metrics for podcast generation
- Update LAC failure insights from recent sessions
- Add M1_Mirror studies README

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

faa7779a43 (2025-12-28 16:36:18 -05:00)
feat: Add L-BFGS gradient optimizer for surrogate polish phase
Implements gradient-based optimization exploiting MLP surrogate differentiability.
Achieves 100-1000x faster convergence than derivative-free methods (TPE, CMA-ES).
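
A minimal sketch of the polish idea, assuming a PyTorch MLP surrogate; the function name, bounds handling, and multi-start loop are illustrative assumptions, not the actual GradientOptimizer API:

```python
# Hypothetical sketch of an L-BFGS "polish" over a differentiable surrogate.
import torch

def lbfgs_polish(surrogate: torch.nn.Module, x0: torch.Tensor,
                 lo: float = 0.0, hi: float = 1.0, steps: int = 50):
    """Refine a design vector by descending the surrogate's prediction."""
    x = x0.clone().requires_grad_(True)
    opt = torch.optim.LBFGS([x], max_iter=steps, line_search_fn="strong_wolfe")

    def closure():
        opt.zero_grad()
        loss = surrogate(x).sum()   # lower predicted objective is better
        loss.backward()             # exact gradients via autograd, no FD
        return loss

    opt.step(closure)
    return x.detach().clamp(lo, hi)  # project back into the design bounds

# Multi-start usage: polish several random seeds and keep the best result.
# mlp = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.Tanh(),
#                           torch.nn.Linear(64, 1))
# starts = [lbfgs_polish(mlp, torch.rand(8)) for _ in range(20)]
```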

New files:
- optimization_engine/gradient_optimizer.py: GradientOptimizer class with L-BFGS/Adam/SGD
- studies/M1_Mirror/m1_mirror_adaptive_V14/run_lbfgs_polish.py: Per-study runner

Updated docs:
- SYS_14_NEURAL_ACCELERATION.md: Full L-BFGS section (v2.4)
- 01_CHEATSHEET.md: Quick reference for L-BFGS usage
- atomizer_fast_solver_technologies.md: Architecture context

Usage: python -m optimization_engine.gradient_optimizer studies/my_study --n-starts 20

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>