Core

Overview

MailSubsystem Core is a local-first email intelligence application written in Rust. It connects to an IMAP mailbox, stores runtime state in PostgreSQL, and runs a fault-tolerant agent pipeline for classification, summarization, threat detection, folder recommendation, and filing.

Who it is for

Developers and technical operators who want serious inbox automation they can audit. It is intentionally different from managed AI inbox products: the Core runs on your hardware, with your database, your choice of model providers, and logs you can inspect.

Status

The Core is being prepared for open-source release under AGPL v3. The repository is in pre-release hardening, with Phase 5 gap closure and production-readiness work in progress. Treat current builds as developer-focused pre-release software until the release criteria are met.
| Area | Current direction |
| --- | --- |
| License | AGPL v3, intended for OSS publication. |
| Repository | github.com/iamcobolt/MailSubsystem.git |
| Runtime | Rust application with a PostgreSQL source of truth and IMAP integration. |
| Hardening | Phase 5 focuses on production readiness, sandbox safety, observability, and reliable autonomous operation. |

Architecture

The current implementation is a single CLI binary. The intended product direction is a long-running daemon with this expanded flow:

IMAP IDLE → Ingest → Analyze → Embed → Action Dispatch → Location → File

The shorter README workflow remains useful for day-one operation:

Sync → Analyze → Locate → File
[Diagram] Simulated architecture: IMAP, PostgreSQL, agents, and model providers
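For orientation, here is a minimal Rust sketch of the expanded flow as an explicit stage progression. The stage names mirror the flow above, but the types and the checkpointing notes are illustrative assumptions, not the Core's actual code.

```rust
// Hypothetical sketch of the expanded pipeline as a stage enum; the real
// stage names, ordering, and checkpointing live in the Core and
// PostgreSQL, so treat this as illustration only.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Stage {
    Ingest,
    Analyze,
    Embed,
    ActionDispatch,
    Locate,
    File,
    Done,
}

impl Stage {
    /// Next stage in the IMAP IDLE-driven flow. In a durable design, a
    /// checkpoint would be persisted to PostgreSQL between transitions
    /// so a crash resumes mid-pipeline instead of restarting the message.
    fn next(self) -> Stage {
        use Stage::*;
        match self {
            Ingest => Analyze,
            Analyze => Embed,
            Embed => ActionDispatch,
            ActionDispatch => Locate,
            Locate => File,
            File | Done => Done,
        }
    }
}

fn main() {
    let mut stage = Stage::Ingest;
    while stage != Stage::Done {
        println!("processing stage: {:?}", stage);
        stage = stage.next();
    }
}
```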

Core owns intelligence

LLM workflows, RAG, classification, summarization, threat detection, action dispatch, folder recommendation, and filing logic.

PostgreSQL owns state

Email records, folders, durable queues, agent checkpoints, analysis output, embeddings, sender behavior, and audit-friendly run history.
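As one illustration of the durable-queue role, here is a hedged sketch of a claimable work queue in PostgreSQL; the agent_queue table, its columns, and the claim query are assumptions for illustration, not the Core's actual schema.

```rust
// Hypothetical schema and claim query for a durable work queue in
// PostgreSQL; the Core's real table and column names may differ.

/// One row per queued email awaiting agent work. The status column plus
/// a claimed_at timestamp keep run history auditable after the fact.
const CREATE_QUEUE: &str = "
CREATE TABLE IF NOT EXISTS agent_queue (
    id          BIGSERIAL PRIMARY KEY,
    email_id    BIGINT      NOT NULL,
    stage       TEXT        NOT NULL,
    status      TEXT        NOT NULL DEFAULT 'pending',
    claimed_at  TIMESTAMPTZ,
    created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
);";

/// Claim one pending item. FOR UPDATE SKIP LOCKED lets several workers
/// pull from the same queue without blocking on each other's rows.
const CLAIM_NEXT: &str = "
UPDATE agent_queue
SET status = 'claimed', claimed_at = now()
WHERE id = (
    SELECT id FROM agent_queue
    WHERE status = 'pending'
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, email_id, stage;";

fn main() {
    // These statements would run through a PostgreSQL client such as
    // tokio-postgres or sqlx; printed here to keep the sketch
    // dependency-free.
    println!("{CREATE_QUEUE}\n{CLAIM_NEXT}");
}
```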

Agents are declarative

Specs use Markdown conventions and share common runtime behavior, so prompt and policy changes don't require rebuilding the binary.
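To make the convention concrete, here is a small, dependency-free sketch of parsing a hypothetical spec file with front-matter metadata and a Markdown prompt body; the field names and delimiter are assumed, not taken from the Core's documented format.

```rust
// Hypothetical agent spec format: YAML-style front matter followed by a
// Markdown prompt body. Field names and the "---" delimiter are
// assumptions about the convention, not the Core's documented format.

#[derive(Debug, Default)]
struct AgentSpec {
    name: String,
    model: String,
    prompt: String,
}

/// Split a spec file into front-matter metadata and prompt body, so
/// editing the Markdown changes agent behavior without a rebuild.
fn parse_spec(src: &str) -> AgentSpec {
    let mut spec = AgentSpec::default();
    let mut parts = src.splitn(3, "---\n");
    parts.next(); // text before the first delimiter (usually empty)
    let header = parts.next().unwrap_or("");
    spec.prompt = parts.next().unwrap_or("").trim().to_string();
    for line in header.lines() {
        if let Some((key, value)) = line.split_once(':') {
            match key.trim() {
                "name" => spec.name = value.trim().to_string(),
                "model" => spec.model = value.trim().to_string(),
                _ => {} // unknown keys are ignored in this sketch
            }
        }
    }
    spec
}

fn main() {
    let src = "---\nname: classifier\nmodel: local/llama\n---\nClassify the email by intent.";
    println!("{:?}", parse_spec(src));
}
```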

Models are pluggable

Local models through LM Studio or Ollama are the preferred posture; hosted providers can be configured for selected escalation paths.
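A pluggable posture like this is often expressed as a trait boundary. The sketch below assumes hypothetical ModelProvider, LocalProvider, and HostedProvider types; only the routing idea, local by default with hosted escalation, comes from the text above.

```rust
// Hypothetical provider abstraction: trait and type names here are
// illustrative, not the Core's API. Local endpoints (LM Studio, Ollama)
// are the default; a hosted provider is only selected for escalation.

trait ModelProvider {
    fn name(&self) -> &str;
    /// Send a prompt and return the completion. A real implementation
    /// would call the provider's HTTP API instead of echoing.
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

struct LocalProvider { endpoint: String }
struct HostedProvider { endpoint: String }

impl ModelProvider for LocalProvider {
    fn name(&self) -> &str { "local" }
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("[{}] would answer: {prompt}", self.endpoint))
    }
}

impl ModelProvider for HostedProvider {
    fn name(&self) -> &str { "hosted" }
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("[{}] would answer: {prompt}", self.endpoint))
    }
}

/// Route to local by default; escalate only when the task demands it.
fn pick<'a>(
    escalate: bool,
    local: &'a dyn ModelProvider,
    hosted: &'a dyn ModelProvider,
) -> &'a dyn ModelProvider {
    if escalate { hosted } else { local }
}

fn main() {
    // Ollama's default local port is used as an example endpoint.
    let local = LocalProvider { endpoint: "http://localhost:11434".into() };
    let hosted = HostedProvider { endpoint: "https://api.example.com".into() };
    let provider = pick(false, &local, &hosted);
    println!("{}: {:?}", provider.name(), provider.complete("Summarize this email."));
}
```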

Next

Continue to Quick start to build the binary and run your first commands.