Architecture

Simple components, strong lineage.

SwarmMind is intentionally modular. You can swap models, change evaluators, and add plugins without rewriting the system.

System map

Modular architecture from agents → evolution → archive → codex.

SwarmMind/
├── agents/           # Agent definitions (PhilosopherAgent)
├── engine/           # Core systems
│   ├── writer.py     # Narrative generation
│   ├── editor.py     # Collaborative refinement
│   ├── judge.py      # Multi-model evaluation
│   └── reflection.py # Meta-learning from narratives
├── evolution/        # GA operators
│   ├── selector.py   # Fitness-based selection
│   ├── mutator.py    # Genome mutations
│   ├── pairing.py    # Writer-editor collaboration
│   └── profile_tracker.py  # Achievement tracking
├── plugins/          # Hook-based extensibility
│   ├── base.py       # Plugin interface
│   ├── registry.py   # Plugin loader
│   └── examples/     # LoggingPlugin, MetricsPlugin
├── archive/          # Long-term memory (JSONL)
├── codex/            # Curated anthology
│   └── manager.py    # Chapter organization, cross-refs
├── gui/              # 4 Streamlit interfaces
│   ├── app.py               # Evolution dashboard
│   ├── profile_viewer.py    # Agent achievements
│   ├── reflection_viewer.py # Archive browser
│   └── codex_promotion.py   # Curation interface
└── tests/            # 111+ comprehensive tests
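The agents → evolution → archive flow in the map above can be pictured as one generation loop. This is a hypothetical sketch, not SwarmMind's actual API: the `Agent` fields, the `evaluate`/`mutate` stand-ins for `judge.py` and `mutator.py`, and the archive filename are all illustrative.

```python
# Illustrative sketch of one generation; names are assumptions, not the real API.
import json
import random
from dataclasses import dataclass, asdict

@dataclass
class Agent:
    archetype: str
    lens: str
    fitness: float = 0.0

def evaluate(agent: Agent) -> float:
    # Stand-in for judge.py's multi-model evaluation.
    return random.random()

def mutate(agent: Agent) -> Agent:
    # Stand-in for mutator.py: perturb one genome field.
    lenses = ["stoic", "absurdist", "pragmatist"]
    return Agent(agent.archetype, random.choice(lenses))

def run_generation(population, archive_path="runs.jsonl"):
    for agent in population:
        agent.fitness = evaluate(agent)
    # Fitness-based selection: keep the top half, refill via mutation.
    survivors = sorted(population, key=lambda a: a.fitness, reverse=True)
    survivors = survivors[: max(1, len(population) // 2)]
    children = [mutate(a) for a in survivors]
    # Long-term memory: append one JSON record per survivor (JSONL archive).
    with open(archive_path, "a") as f:
        for a in survivors:
            f.write(json.dumps(asdict(a)) + "\n")
    return survivors + children
```

The key property is that every generation both selects and records: the archive grows as a side effect of evolution, which is what makes lineage queries possible later.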

Core building blocks

Agents: Philosophical writers with archetypes, lenses, and styles.

Evolution: Genetic algorithm with mutation, selection, and pairing.

Plugins: 8 lifecycle hooks for extensibility.

Profiles: Automatic specialty detection and lineage tracking.

Codex: Thematic anthology with 6 curated chapters.
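The plugin layer above can be read as a registry dispatching named lifecycle hooks to subscribers. The sketch below is an assumed shape, not the real `base.py`/`registry.py` interface: the two hook names shown are hypothetical stand-ins for the eight real ones.

```python
# Hook-based plugin sketch; hook names and classes are illustrative assumptions.
class Plugin:
    """Minimal plugin interface: override the hooks you care about."""
    def on_generation_start(self, generation: int) -> None: ...
    def on_narrative_scored(self, agent_id: str, score: float) -> None: ...

class LoggingPlugin(Plugin):
    """Records every event it sees, in the spirit of plugins/examples/."""
    def __init__(self):
        self.events = []
    def on_generation_start(self, generation):
        self.events.append(("start", generation))
    def on_narrative_scored(self, agent_id, score):
        self.events.append(("scored", agent_id, score))

class PluginRegistry:
    """Dispatches a named hook, with its arguments, to every registered plugin."""
    def __init__(self):
        self.plugins = []
    def register(self, plugin: Plugin) -> None:
        self.plugins.append(plugin)
    def emit(self, hook: str, *args) -> None:
        for p in self.plugins:
            getattr(p, hook)(*args)
```

Because the engine only ever calls `emit`, new behaviors (metrics, logging, custom curation) attach without touching core code.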

The goal: fast, measurable iteration with deep traceability.

What “fitness” can be

Fitness is a contract you define: rubric scoring, pairwise comparisons, unit tests, retrieval accuracy, cost/latency constraints, or domain outcomes (e.g., “did this explanation match the scoreboard?”).

Common choices: LLM-as-judge, heuristics, tests, and outcome constraints.
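One way to read "fitness is a contract": a single function that blends a quality signal with hard constraints. Everything below is a hedged sketch — the judge score is assumed to arrive from elsewhere, and the cost cap, length heuristic, and weights are invented for illustration.

```python
# Hypothetical fitness contract: quality score gated by hard constraints.
def fitness(narrative: str, judge_score: float, cost_usd: float,
            max_cost: float = 0.05) -> float:
    """Combine an LLM-as-judge score with heuristic and outcome constraints."""
    if cost_usd > max_cost:
        # Hard cost constraint: over-budget runs are disqualified outright.
        return 0.0
    # Crude heuristic: penalize narratives too short to carry an argument.
    length_penalty = 0.1 if len(narrative) < 50 else 0.0
    return max(0.0, judge_score - length_penalty)
```

Swapping in unit-test pass rates or retrieval accuracy for `judge_score` changes the domain without changing the contract.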

Traceability

Each run produces artifacts you can inspect: prompts, tool calls, outputs, scores, and reflections. This makes the system debuggable — and makes “progress” auditable instead of mystical.
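With artifacts stored as JSONL, that audit is a filter over records. The field names below (`prompt`, `output`, `score`) are an assumed record shape for illustration, not SwarmMind's actual archive schema.

```python
# Illustrative artifact inspection over a JSONL archive; schema is assumed.
import json

def load_runs(path: str):
    """Stream run artifacts (one JSON object per line) from a JSONL file."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)

def trace_low_scores(path: str, threshold: float = 0.5):
    """Return the full artifact for every run scoring below threshold."""
    return [r for r in load_runs(path) if r.get("score", 0.0) < threshold]
```

Debugging a bad generation then means pulling its complete record — prompt, output, score — rather than guessing from aggregate metrics.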

Debugging is a first-class feature.