From Forgetful to Brilliant.
Watch how AutoMem transforms your AI from a blank slate into a context-aware assistant.
Universal I/O
Compatible with Claude Code, Cursor, ChatGPT, OpenClaw, and any MCP client. Local or remote — works everywhere via MCP and Streamable HTTP.
Graph Engine
FalkorDB + Qdrant hybrid core. Maps relationships between memories, not just vector similarity.
Instant Recall
Sub-50ms latency. Your agent knows what you know before you finish typing the prompt.
Dreams while you dream.
AutoMem runs quietly in the background, stitching every thought into a retrieval graph. It notices patterns, links context, and keeps the right edges warm—without you ever thinking about it.
0.15.0 • Relationship Engine & Benchmarks
Optimized relationship taxonomy for cleaner graph traversal. LoCoMo cat5 multi-hop judge for automated recall quality scoring. priority_ids now boosts relevance instead of hard-filtering. Embedding dimension validation catches provider mismatches at startup.
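As a rough illustration of the `priority_ids` change, a recall request can now nominate specific memories for a relevance boost without excluding everything else. The sketch below is a minimal assumption-laden example: the `/recall` path, payload shape, port, and field names other than `priority_ids` are guesses, not documented API.

```python
import json
from urllib import request as urlrequest

AUTOMEM_URL = "http://localhost:8001"  # hypothetical local deployment


def build_recall_payload(query, priority_ids=None, limit=10):
    """Assemble a recall request body. Per 0.15.0, priority_ids boosts
    the relevance of the listed memories instead of hard-filtering the
    rest out of the result set."""
    payload = {"query": query, "limit": limit}
    if priority_ids:
        payload["priority_ids"] = priority_ids
    return payload


payload = build_recall_payload("deploy checklist", priority_ids=["mem-123"])
# To actually send it (commented so the sketch runs offline):
# req = urlrequest.Request(f"{AUTOMEM_URL}/recall",
#                          data=json.dumps(payload).encode(),
#                          headers={"Content-Type": "application/json"})
```

Because the boost is soft, memories outside `priority_ids` can still appear in results when they score well on their own.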
MCP 0.13.0 • OpenClaw Overhaul & Cursor Rules
Complete OpenClaw plugin rewrite — native MCP integration and new skill setup. Split global vs. project rules for Cursor. Hook stdin parsing with truncation guards and dedup checks.
0.14.0 • Docker & Hardened Deployments
Official Docker build workflow. QDRANT_HOST + QDRANT_PORT for flexible Qdrant config. Stateless MCP bridge transport for resilient connections. Consolidation overhaul: reduced decay rate, importance floor, archived memory filtering. Voyage AI set as recommended embedding default.
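A minimal configuration sketch: the `QDRANT_HOST` and `QDRANT_PORT` variable names come from this release, while the values below are illustrative (6333 is Qdrant's default port, and the hostname is invented).

```shell
# .env — point AutoMem at an external Qdrant instance
QDRANT_HOST=qdrant.internal   # illustrative hostname, not a real default
QDRANT_PORT=6333              # Qdrant's standard port
```

Splitting host and port into separate variables makes it easy to swap between a local container and a managed Qdrant without touching the image.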
0.13.0 • Adaptive Recall & Graph Viewer
min_score threshold with adaptive floor filtering — low-relevance results get cut automatically. LoCoMo benchmark harness for rapid experiment iteration. Graph viewer externalized as standalone module.
0.12.0 • Recall Quality Lab & JIT Enrichment
Just-in-time enrichment on recall for higher-quality results. Recall Quality Lab for data-driven recall optimization. LongMemEval benchmark harness (ICLR 2025) for measuring memory quality. relevance_score now synced to Qdrant payload for accurate vector search.
0.11.0 • Embedding Providers & Memory Lookup
Voyage AI embedding provider for high-accuracy recall. OpenAI-compatible providers via OPENAI_BASE_URL (LM Studio, Ollama, xAI, etc.). GET /memory/<id> endpoint for direct memory lookup. Hardened time parser for numeric timestamps.
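The `GET /memory/<id>` path above is the only part of the sketch below taken from the release notes; the base URL and port are assumptions for a hypothetical local deployment.

```python
AUTOMEM_URL = "http://localhost:8001"  # hypothetical local deployment


def memory_url(base, memory_id):
    """Build the direct-lookup URL for the GET /memory/<id> endpoint."""
    return f"{base.rstrip('/')}/memory/{memory_id}"


url = memory_url(AUTOMEM_URL, "9f1c2a")
# A client would then issue a plain GET against `url`, e.g. with
# urllib.request.urlopen(url) or any HTTP library.
```

Direct lookup by ID complements recall: once a memory surfaces in search results, it can be fetched verbatim without re-running a query.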
0.10.1 • Stability Fixes
Temporal queries bounded to a 7-day window with a timeout to prevent runaway queries. Temperature param skipped for o-series and gpt-5 models.
0.10.0 • Visualization & Observability
Interactive graph visualization API. automem-watch real-time observability with SSE streaming and TUI. Ollama embedding provider for fully local deployments. Memory content size governance with auto-summarization. Streamable HTTP transport for MCP connections.
MCP 0.12.0 • OpenClaw Integration
Native OpenClaw integration replacing mcporter dependency chain. Direct HTTP API calls to AutoMem with new SKILL.md template. Simplified setup and troubleshooting documentation.
MCP 0.11.0 • Claude Code Plugin
Claude Code plugin with automatic memory-capture hooks: recall on session start, summary on session end. Cursor hooks integration for automatic capture. Integration tests and CLI smoke tests.
Open Source & Portable
Your memory infrastructure should be as portable as your code. Run it on your laptop, your private cloud, or our managed service.
Docker
The standard container. Run it locally or on any VPS.
automem/server:latest
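A deployment sketch using the image tag above; the container name and port mapping are assumptions (the page does not document which port the server listens on), so adjust to match your setup.

```shell
# Run the standard container in the background.
# -p 8001:8001 is a guess at the service port, not a documented default.
docker run -d --name automem -p 8001:8001 automem/server:latest
```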
Railway
One-click production deployment. Handles DBs & updates automatically.
> Success (24s)
Source
Fork the repo. Hack the graph logic. It's your code.
npm install