Transform any AI assistant into a memory-enabled powerhouse. 16 MCP tools for storing conversations, documents, and knowledge with semantic search and cross-session persistence.
Complete toolkit for managing memories, conversations, and knowledge bases with semantic search. A sample tool call follows the list below.
Store new memories with content, embeddings, tags, and metadata for semantic search
Retrieve specific memories by ID with full content and metadata
Update existing memories with new content, tags, or embeddings
Remove memories from the database permanently
Browse all memories with pagination and filtering options
Hybrid search combining full-text and vector similarity
Find semantically similar memories using cosine similarity
Full-text search with relevance scoring and field filtering
Get all tags used across memories with usage counts
Filter memories by specific tags or tag combinations
Save entire conversation threads with automatic embedding generation
Retrieve complete conversation history by ID
Store documents with chunking and automatic embedding generation
Search across document chunks with semantic matching
Database statistics including memory counts and storage usage
Export memories to JSON format for backup or migration
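To make the tool list concrete, here is a minimal sketch of driving the server from a script with the official MCP TypeScript SDK. The launch command and the tool names ("store_memory", "semantic_search") are assumptions for illustration; list the server's tools to get the exact names it exposes.

```typescript
// Sketch: calling the memory tools via the MCP TypeScript SDK.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "columnist-db-mcp-server"], // hypothetical package name
});
const client = new Client({ name: "memory-demo", version: "1.0.0" });
await client.connect(transport);

// Enumerate the 16 tools the server exposes.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Store a memory, then recall it by meaning rather than keywords.
await client.callTool({
  name: "store_memory", // assumed tool name
  arguments: {
    content: "The quarterly report is due the first Friday of April.",
    tags: ["work", "deadlines"],
  },
});
const hits = await client.callTool({
  name: "semantic_search", // assumed tool name
  arguments: { query: "when is that report due?", limit: 5 },
});
console.log(hits);
```

In normal use the AI assistant issues these calls itself; the script form is only to show the request shapes.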
Columnist-DB MCP Server provides a universal memory layer for any AI assistant that supports the Model Context Protocol. Once configured, AI assistants can automatically store and recall information across sessions.
Add the MCP server to your AI assistant's configuration file. The server runs locally and stores all data in IndexedDB.
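For Claude Desktop, that means an entry under the mcpServers key in claude_desktop_config.json, roughly like the sketch below. The package name is an assumption; substitute the launch command from the project's install instructions.

```json
{
  "mcpServers": {
    "columnist-db": {
      "command": "npx",
      "args": ["-y", "columnist-db-mcp-server"]
    }
  }
}
```

Other MCP clients such as Cline accept an equivalent entry in their own settings files.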
Once configured, the AI assistant has access to all 16 memory-management tools; no additional setup is required.
The AI can store conversations, documents, and knowledge that persist across sessions. Search uses vector embeddings for semantic matching.
Works with Claude Desktop, Cline, and any other MCP-compatible AI assistant. No vendor lock-in.
All data stays on your machine; nothing is sent to a cloud service. Full offline capability with IndexedDB.
Vector embeddings enable semantic matching. Find relevant memories even when exact keywords don't match.
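As a toy illustration of the mechanism (not the server's actual code), relevance between a query and a memory is typically the cosine similarity of their embedding vectors, so paraphrases score high even with zero keyword overlap. The three-dimensional vectors below are made up for illustration; real embeddings have hundreds of dimensions.

```typescript
// Cosine similarity: the cosine of the angle between two vectors,
// so direction (meaning) matters and magnitude does not.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy embeddings: a paraphrase points in nearly the same direction as
// the query even though the two share no keywords.
const query = [0.9, 0.1, 0.3];
const memoryA = [0.85, 0.15, 0.35]; // paraphrase of the query
const memoryB = [0.1, 0.9, 0.2];    // unrelated topic

console.log(cosineSimilarity(query, memoryA)); // ~0.99 -> strong match
console.log(cosineSimilarity(query, memoryB)); // ~0.27 -> weak match
```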