BotWire + LangChain Quickstart (2 Minutes)
Free · Open source (MIT) · Works with LangChain, CrewAI, AutoGen · No signup
You're building a LangChain app and need persistent memory that survives restarts. Your chatbot forgets everything when your process dies, or you're manually juggling JSON files and database connections. You need a 2-minute setup that just works.
The Problem: LangChain Memory Dies With Your Process
LangChain's built-in memory classes store conversation history in RAM. The moment your Python process restarts — deployment, crash, or simple code change — all context vanishes. Your chatbot becomes a goldfish.
The typical workaround involves writing custom memory classes that persist to Redis, PostgreSQL, or files. You'll spend hours debugging connection strings, serialization formats, and cleanup logic. Even LangChain's community memory implementations require database setup, API keys, or cloud accounts.
Here's what breaks:
```python
from langchain.memory import ConversationBufferMemory

# This dies when your process restarts
memory = ConversationBufferMemory()

# User context: gone
# Conversation flow: broken
# Time wasted: hours
```
You need persistent memory without the database headache.
The Fix: BotWire + LangChain in 30 Seconds
Install BotWire and replace your memory class:
```bash
pip install botwire
```
```python
from botwire import BotWireChatHistory
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain

# Persistent memory that survives restarts
chat_history = BotWireChatHistory(session_id="user-123")
memory = ConversationBufferMemory(
    chat_memory=chat_history,
    return_messages=True,
)

# Your existing LangChain code works unchanged
llm = ChatOpenAI()
conversation = ConversationChain(llm=llm, memory=memory)

# This conversation persists across restarts
response = conversation.predict(input="Remember my name is Alice")
```
That's it. Your LangChain memory now survives restarts, redeploys, and crashes.
How It Works: Persistent Key-Value Storage
BotWire provides a dead-simple HTTP-backed key-value store. The BotWireChatHistory adapter implements LangChain's BaseChatMessageHistory interface, automatically syncing messages to persistent storage.
Each session_id creates an isolated conversation namespace. Messages are stored as structured data on BotWire's servers, accessible from any process or machine:
```python
from botwire import BotWireChatHistory

# Create isolated conversation spaces
user_chat = BotWireChatHistory(session_id="user-alice")
admin_chat = BotWireChatHistory(session_id="admin-bob")

# Messages persist independently
user_chat.add_user_message("I need help with pricing")
admin_chat.add_user_message("Deploy the new model")

# Restart your process...
# Messages are still there
print(user_chat.messages)  # Loads from persistent storage
```
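Under the hood, this is a straightforward adapter pattern: each write goes through the chat-history object to the key-value store, and reads rehydrate from it. Here is a minimal illustrative sketch of such an adapter, with a plain dict standing in for BotWire's HTTP backend (the class and method internals here are hypothetical, not BotWire's actual implementation):

```python
import json

# Stand-in for BotWire's HTTP-backed store; a real adapter would
# issue GET/PUT requests here instead of dict operations.
STORE: dict[str, str] = {}

class KVChatHistory:
    """Illustrative adapter: persists all messages under one session key."""

    def __init__(self, session_id: str):
        self.key = f"chat:{session_id}"

    @property
    def messages(self) -> list[dict]:
        raw = STORE.get(self.key)
        return json.loads(raw) if raw else []

    def _append(self, role: str, content: str) -> None:
        msgs = self.messages
        msgs.append({"role": role, "content": content})
        STORE[self.key] = json.dumps(msgs)  # sync back to the store

    def add_user_message(self, content: str) -> None:
        self._append("user", content)

    def add_ai_message(self, content: str) -> None:
        self._append("ai", content)

history = KVChatHistory("user-alice")
history.add_user_message("I need help with pricing")

# A "new process" constructing the same session_id sees the same messages
rehydrated = KVChatHistory("user-alice")
print(rehydrated.messages)
```

Because all state lives in the backing store rather than on the object, any process that constructs the adapter with the same `session_id` picks up the conversation where it left off.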
For direct memory operations, use the core BotWire API:
```python
from botwire import Memory

# Store arbitrary conversation metadata
memory = Memory("conversation-metadata")
memory.set("user-alice-preferences", {"language": "es", "tone": "formal"})
memory.set("user-alice-context", {"last_topic": "billing", "priority": "high"})

# Retrieve across processes
prefs = memory.get("user-alice-preferences")  # Returns the dict
context = memory.get("user-alice-context")

# List all keys in this namespace
all_keys = memory.list()  # ["user-alice-preferences", "user-alice-context"]
```
Cross-process persistence means your development server, production deployment, and background workers all share the same conversation state. No Redis clusters or database migrations required.
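To see why a shared backing store gives you cross-process state, here is a minimal sketch of the same set/get/list interface backed by a local JSON file instead of BotWire's hosted API (the `FileMemory` class is illustrative, not part of BotWire):

```python
import json
import tempfile
from pathlib import Path

class FileMemory:
    """Illustrative Memory-style namespace backed by one JSON file.
    Any process pointed at the same file sees the same keys."""

    def __init__(self, namespace: str, root: str):
        self.path = Path(root) / f"{namespace}.json"

    def _load(self) -> dict:
        if self.path.exists():
            return json.loads(self.path.read_text())
        return {}

    def set(self, key: str, value) -> None:
        data = self._load()
        data[key] = value
        self.path.write_text(json.dumps(data))

    def get(self, key: str, default=None):
        return self._load().get(key, default)

    def list(self) -> list[str]:
        return list(self._load())

root = tempfile.mkdtemp()
m1 = FileMemory("conversation-metadata", root=root)
m1.set("user-alice-preferences", {"language": "es", "tone": "formal"})

# A second handle (or a second process) reads the same state
m2 = FileMemory("conversation-metadata", root=root)
print(m2.get("user-alice-preferences"))
```

The trade-off is the backend: a local file only shares state between processes on one machine, while an HTTP-backed store like BotWire's shares it across machines too.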
Advanced LangChain Integration
For multi-agent or complex conversation workflows, combine BotWire with LangChain's advanced memory patterns:
```python
from botwire import BotWireChatHistory, Memory
from langchain.chains import ConversationChain
from langchain.memory import ConversationSummaryBufferMemory
from langchain_openai import ChatOpenAI

# Persistent summarizing memory for long conversations
chat_history = BotWireChatHistory(session_id="long-conversation-user-456")
summary_memory = ConversationSummaryBufferMemory(
    chat_memory=chat_history,
    llm=ChatOpenAI(),
    max_token_limit=2000,
    return_messages=True,
)

# Store conversation metadata separately
metadata = Memory("user-sessions")
metadata.set("user-456-started", "2024-01-15T10:30:00Z")
metadata.set("user-456-topic", "technical-support")

# Both conversation state and metadata persist
conversation = ConversationChain(llm=ChatOpenAI(), memory=summary_memory)
```
This pattern works with any LangChain memory class that accepts a chat_memory parameter. Your existing memory strategies — summarization, token limits, sliding windows — now persist across deployments.
When NOT to Use BotWire
BotWire isn't right for every use case:
• Vector search or semantic memory — BotWire stores key-value pairs, not embeddings. Use Pinecone or Weaviate for similarity search.
• High-throughput applications — The free tier caps at 1,000 writes/day per namespace. For heavy workloads, consider self-hosting or a dedicated database.
• Sub-millisecond latency requirements — HTTP calls add roughly 50–200 ms of overhead. Use in-memory caching for real-time applications.
FAQ
Q: Why not just use Redis? A: Redis requires server setup, connection management, and serialization logic. BotWire works instantly without infrastructure. For production scale, self-host BotWire or use Redis with custom LangChain memory classes.
Q: Is this actually free? A: Yes. 1000 writes/day per namespace, 50MB storage per namespace, unlimited reads, forever. No credit card, no signup wall. Open source MIT license if you want to self-host.
Q: What about data privacy? A: Data flows through BotWire's servers by default. For sensitive applications, self-host the open source version (single FastAPI + SQLite service) or use local memory solutions.
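If you self-host, the storage layer underneath a service like this is tiny. As a rough sketch of what a SQLite-backed key-value layer boils down to (table name, schema, and function names here are illustrative, not BotWire's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a real service would use a file on disk
conn.execute(
    "CREATE TABLE IF NOT EXISTS kv ("
    "  namespace TEXT, key TEXT, value TEXT,"
    "  PRIMARY KEY (namespace, key))"
)

def kv_set(ns: str, key: str, value: str) -> None:
    # Upsert: last write wins within a namespace
    conn.execute(
        "INSERT INTO kv VALUES (?, ?, ?) "
        "ON CONFLICT(namespace, key) DO UPDATE SET value = excluded.value",
        (ns, key, value),
    )

def kv_get(ns: str, key: str):
    row = conn.execute(
        "SELECT value FROM kv WHERE namespace = ? AND key = ?", (ns, key)
    ).fetchone()
    return row[0] if row else None

kv_set("user-sessions", "user-456-topic", "technical-support")
print(kv_get("user-sessions", "user-456-topic"))
```

Wrapping functions like these in a small HTTP service gives you the same set/get interface the hosted version exposes, with data staying on your own infrastructure.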
Get Started Now
BotWire eliminates LangChain memory-persistence headaches with a single memory-class swap.
```bash
pip install botwire
```
Full documentation and self-hosting instructions: https://botwire.dev