Dream Processing
Dream Processing is a memory consolidation system inspired by how the human brain processes and consolidates memories during sleep. It discovers clusters of semantically similar memories using vector embeddings, then consolidates each cluster into a single unified memory via LLM, reducing token consumption while preserving all key insights.

Dream processing is integrated into the existing decay engine cron; there is no separate job to configure. It runs automatically alongside the memory decay pipeline.
How It Works
Dream processing operates in three phases:

Phase 1: Cluster Discovery (No LLM)
The cluster discovery phase uses vector embeddings to find groups of related memories.

Select Candidate Memories

Candidates are active memories from the memory ring that have not been forgotten, are not in the essence decay stage, and have not already been assigned to a dream cluster. Each run is capped at 200 candidates.

Build Adjacency Graph
For each candidate memory, generate an embedding from its current content and search for nearest neighbors in the vector store. Neighbors with cosine similarity >= 0.75 (configurable) are connected via bidirectional edges.
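The adjacency test can be sketched as follows. The function names are illustrative, not the actual implementation, and the real system delegates the neighbor search to the vector store; only the cosine formula and the 0.75 threshold come from the description above.

```typescript
// Sketch of the Phase 1 adjacency test: two memories are connected
// when the cosine similarity of their embeddings clears the threshold.

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const SIMILARITY_THRESHOLD = 0.75; // DREAM_SIMILARITY_THRESHOLD

// Memories that pass this check are linked with a bidirectional edge.
function areNeighbors(embA: number[], embB: number[]): boolean {
  return cosineSimilarity(embA, embB) >= SIMILARITY_THRESHOLD;
}
```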
Union-Find Clustering
A Union-Find (disjoint set) algorithm groups connected memories into clusters. Only clusters with 3+ members (configurable) proceed to consolidation.
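The grouping step above can be illustrated with a minimal Union-Find over the similarity edges. This is a sketch, not the production code: memory IDs are array indices here, whereas the real system works with memory UUIDs.

```typescript
// Minimal disjoint-set (Union-Find) sketch of the Phase 1 grouping step.

class UnionFind {
  private parent: number[];
  constructor(n: number) {
    this.parent = Array.from({ length: n }, (_, i) => i);
  }
  find(x: number): number {
    if (this.parent[x] !== x) this.parent[x] = this.find(this.parent[x]); // path compression
    return this.parent[x];
  }
  union(a: number, b: number): void {
    this.parent[this.find(a)] = this.find(b);
  }
}

// Group memories connected by similarity edges, then keep only
// clusters meeting the minimum size (DREAM_MIN_CLUSTER_SIZE = 3).
function clusters(n: number, edges: [number, number][], minSize = 3): number[][] {
  const uf = new UnionFind(n);
  for (const [a, b] of edges) uf.union(a, b);
  const groups = new Map<number, number[]>();
  for (let i = 0; i < n; i++) {
    const root = uf.find(i);
    const g = groups.get(root);
    if (g) g.push(i); else groups.set(root, [i]);
  }
  return [...groups.values()].filter((g) => g.length >= minSize);
}
```

Edges come from the adjacency graph of the previous step; transitively connected memories end up in the same cluster even when they are not direct neighbors.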
Phase 2: LLM Consolidation
Each cluster is fed to a compression-tuned LLM that merges redundant content into one coherent memory. Input format:

Phase 3: Transactional Storage
All storage operations happen in a single database transaction to ensure atomicity:

Create Consolidated Memory
A new memory_ring entry is created with the consolidated text, aggregate scores (average success, total reinforcement, max relevance), and metadata linking back to the source cluster.

Record Dream Consolidation
An entry in dream_consolidations records the cluster similarity, token savings, source count, and result memory UUID.

Mark Source Memories
Source memories are marked with the cluster ID (dream_cluster_id) so they are not included in future dream runs. They are not deleted; they continue to decay naturally through the existing decay engine.

Token Savings
The primary benefit of dream processing is reduced token consumption. A typical consolidation looks like:

| Metric | Before | After | Savings |
|---|---|---|---|
| Source memories | 4 | 1 | -3 entries |
| Total tokens | ~2,000 | ~250 | ~87% |
| Key insights preserved | 100% | 100% | — |
The dream_consolidations table tracks cumulative token_savings for reporting.
Source memories are not deleted immediately. They continue through the natural decay lifecycle (FULL -> COMPRESSED -> SUMMARY -> ESSENCE -> FORGOTTEN), so no information is lost even if the LLM consolidation misses something.
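The savings figure in the table above is simple arithmetic; as a sketch:

```typescript
// Token savings for the example consolidation above: 4 source
// memories totalling ~2,000 tokens collapse into one ~250-token
// consolidated memory.
const beforeTokens = 2000;
const afterTokens = 250;
const tokenSavings = beforeTokens - afterTokens; // recorded in dream_consolidations
const savingsPct = Math.floor((tokenSavings / beforeTokens) * 100); // ~87%
```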
API Reference
Trigger Dream Processing
| Field | Description |
|---|---|
| clustersFound | Number of semantic clusters discovered |
| consolidated | Number of clusters successfully merged |
| totalTokenSavings | Aggregate tokens saved across all consolidations |
| errors | Non-fatal errors encountered during processing |
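Based on the field table above, a trigger response might be typed as follows. The field names come from the table; the type name and the example values are illustrative, not an official SDK type.

```typescript
// Illustrative response type for the trigger endpoint, inferred from
// the field table above (not an official SDK type).
interface DreamRunResult {
  clustersFound: number;      // semantic clusters discovered
  consolidated: number;       // clusters successfully merged
  totalTokenSavings: number;  // aggregate tokens saved
  errors: string[];           // non-fatal errors during processing
}

// Hypothetical example values.
const example: DreamRunResult = {
  clustersFound: 3,
  consolidated: 2,
  totalTokenSavings: 3100,
  errors: [],
};
```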
Get Dream History
SDK Usage
Consolidated Memory Metadata
Consolidated memories carry metadata that identifies them as dream outputs, including the tag list ["dream_consolidated"] for easy filtering in search results.
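Filtering search results down to dream outputs can then be a one-liner. This is a sketch: only the "dream_consolidated" tag comes from the doc; the result shape is an assumption.

```typescript
// Hypothetical search-result shape; only the `tags` value
// "dream_consolidated" is taken from the doc.
interface SearchResult {
  content: string;
  tags: string[];
}

// Keep only dream-consolidated memories.
function dreamOutputsOnly(results: SearchResult[]): SearchResult[] {
  return results.filter((r) => r.tags.includes("dream_consolidated"));
}
```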
Concurrency Protection
Dream processing uses PostgreSQL advisory locks (key 738204) to prevent concurrent runs. If a dream processing job is already running, subsequent requests return immediately without processing.
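The lock semantics mirror PostgreSQL's pg_try_advisory_lock: the first caller wins and later callers return immediately rather than waiting. An in-process sketch of the same try-lock pattern (the real system takes the lock in the database, not in memory):

```typescript
// In-process sketch of the advisory-lock pattern described above.
// The real system issues SELECT pg_try_advisory_lock(738204) against
// PostgreSQL; this stand-in only illustrates the non-blocking semantics.
const DREAM_LOCK_KEY = 738204;
const heldLocks = new Set<number>();

function tryAdvisoryLock(key: number): boolean {
  if (heldLocks.has(key)) return false; // a run is in progress: skip
  heldLocks.add(key);
  return true;
}

function releaseAdvisoryLock(key: number): void {
  heldLocks.delete(key);
}
```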
Configuration
| Variable | Default | Description |
|---|---|---|
| DREAM_ENABLED | true | Enable/disable dream processing |
| DREAM_MIN_CLUSTER_SIZE | 3 | Minimum memories to form a cluster |
| DREAM_SIMILARITY_THRESHOLD | 0.75 | Cosine similarity threshold for adjacency |
| DREAM_MAX_CLUSTERS_PER_RUN | 10 | Maximum clusters processed per run |
| DREAM_CONSOLIDATION_MAX_INPUT_TOKENS | 1500 | Max input tokens per LLM call |
| DREAM_CONSOLIDATION_MAX_OUTPUT_TOKENS | 300 | Max output tokens for consolidated memory |
| DREAM_COOLDOWN_DAYS | 7 | Minimum days between dream runs |
| DREAM_TOP_K_NEIGHBORS | 3 | Number of nearest neighbors per memory |
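A config loader for these variables might look like the sketch below. The variable names and defaults come from the table; the loader itself is illustrative and not part of the actual codebase.

```typescript
// Illustrative loader for the dream-processing environment variables.
// Pass process.env (or any string map) in; table defaults apply when
// a variable is unset or not numeric.
function envNumber(env: Record<string, string | undefined>, name: string, fallback: number): number {
  const raw = env[name];
  const parsed = raw === undefined ? NaN : Number(raw);
  return Number.isFinite(parsed) ? parsed : fallback;
}

function loadDreamConfig(env: Record<string, string | undefined>) {
  return {
    enabled: (env.DREAM_ENABLED ?? "true") === "true",
    minClusterSize: envNumber(env, "DREAM_MIN_CLUSTER_SIZE", 3),
    similarityThreshold: envNumber(env, "DREAM_SIMILARITY_THRESHOLD", 0.75),
    maxClustersPerRun: envNumber(env, "DREAM_MAX_CLUSTERS_PER_RUN", 10),
    maxInputTokens: envNumber(env, "DREAM_CONSOLIDATION_MAX_INPUT_TOKENS", 1500),
    maxOutputTokens: envNumber(env, "DREAM_CONSOLIDATION_MAX_OUTPUT_TOKENS", 300),
    cooldownDays: envNumber(env, "DREAM_COOLDOWN_DAYS", 7),
    topKNeighbors: envNumber(env, "DREAM_TOP_K_NEIGHBORS", 3),
  };
}
```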

