Telegram AI Bot With Persistent Memory Across Sessions
Free · Open source (MIT) · Works with LangChain, CrewAI, AutoGen · No signup
Your Telegram AI bot works perfectly until your server restarts — then it forgets every conversation like it has digital amnesia. The bot loses context, asks users to repeat themselves, and feels broken. This happens because most bot implementations store chat history in memory, which vanishes when the process dies.
Why Telegram Bots Lose Their Memory
When you build an AI Telegram bot, the conversation state lives in your Python process — usually in variables, lists, or dictionaries. The moment your server restarts (deployments, crashes, scaling), all that context evaporates.
# This breaks on restart
chat_histories = {}  # Gone when the process dies

@bot.message_handler()
def handle_message(message):
    user_id = message.from_user.id
    if user_id not in chat_histories:
        chat_histories[user_id] = []
    chat_histories[user_id].append(message.text)
    # This history disappears on restart
Your users notice immediately. They're mid-conversation about their project requirements, the bot restarts, and suddenly it asks "How can I help you?" like they never met. For AI agents that build context over time, this memory loss kills the user experience.
The Fix: Persistent Memory
Install BotWire to give your Telegram bot AI memory that survives restarts:
pip install botwire
Replace your in-memory storage with persistent key-value storage:
from botwire import Memory
import telebot

bot = telebot.TeleBot("YOUR_BOT_TOKEN")
memory = Memory("telegram-bot")

@bot.message_handler(func=lambda message: True)
def handle_message(message):
    user_id = str(message.from_user.id)

    # Get existing history
    history = memory.get(f"chat_{user_id}") or []

    # Add new message
    history.append({
        "user": message.text,
        "timestamp": message.date
    })

    # Save back to persistent storage
    memory.set(f"chat_{user_id}", history)

    # Your AI logic here, with full context
    response = generate_ai_response(history)
    bot.reply_to(message, response)
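The generate_ai_response helper is left to you. A minimal sketch, assuming your LLM client accepts a flat text prompt — build_prompt is a hypothetical helper, and the stub below echoes the last message instead of calling a real model:

```python
def build_prompt(history, max_turns=10):
    """Flatten stored history into a plain-text prompt for an LLM call."""
    recent = history[-max_turns:]  # keep only recent turns to fit the context window
    lines = [f"User: {turn['user']}" for turn in recent]
    lines.append("Assistant:")
    return "\n".join(lines)

def generate_ai_response(history):
    prompt = build_prompt(history)
    # Swap in your real LLM client here (OpenAI, Anthropic, a local model, ...);
    # this stub just echoes the last user message so the example is runnable.
    return f"(echo: {history[-1]['user']})"
```

Because the full history is loaded from persistent storage on every message, the prompt keeps its context even if this code runs on a freshly restarted process.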
How Persistent Bot Memory Works
The Memory("telegram-bot") call creates a namespace for your bot's data. Each memory.set() call saves to a persistent backend that survives process restarts, and when your bot starts up, memory.get() retrieves the exact conversation state it left off with.
You can store any JSON-serializable data — conversation history, user preferences, or complex state objects:
from datetime import datetime

# Store structured conversation data
memory.set(f"user_{user_id}_context", {
    "conversation_history": messages,
    "user_preferences": {"language": "en", "tone": "professional"},
    "current_task": "code_review",
    "session_metadata": {"started": datetime.now().isoformat()}
})

# Clean up old conversations
memory.delete(f"chat_{inactive_user_id}")

# List all active users
all_keys = memory.list_keys()
active_users = [k for k in all_keys if k.startswith("user_")]
The memory persists across server restarts, deployments, and even moves to different machines. Your bot maintains context whether it's running on your laptop or on production servers.
Old conversations don't have to pile up forever, either. You can implement TTL-style cleanup by checking timestamps and deleting stale sessions, or let the 50MB namespace limit naturally cap your storage.
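A TTL-style cleanup pass might look like the sketch below. The memory argument stands in for the Memory object from the earlier examples, and each history entry is assumed to carry the Unix timestamp field stored by the handler above:

```python
import time

SESSION_TTL = 7 * 24 * 3600  # expire sessions idle for a week

def cleanup_stale_sessions(memory, now=None):
    """Delete chat histories whose last message is older than SESSION_TTL."""
    now = now if now is not None else time.time()
    removed = []
    for key in memory.list_keys():
        if not key.startswith("chat_"):
            continue  # skip non-chat keys (preferences, context, ...)
        history = memory.get(key) or []
        last_seen = history[-1]["timestamp"] if history else 0
        if now - last_seen > SESSION_TTL:
            memory.delete(key)
            removed.append(key)
    return removed
```

Run this periodically (a cron job or a background thread) to keep the namespace well under the storage cap.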
Integration with LangChain Agents
If you're using LangChain for your AI logic, BotWire provides a drop-in chat history adapter:
from botwire import BotWireChatHistory
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

@bot.message_handler(func=lambda message: True)
def handle_langchain_message(message):
    user_id = str(message.from_user.id)

    # Persistent chat history for LangChain
    chat_history = BotWireChatHistory(session_id=f"telegram_{user_id}")
    memory = ConversationBufferMemory(
        chat_memory=chat_history,
        return_messages=True
    )

    conversation = ConversationChain(
        llm=your_llm_model,
        memory=memory,
        verbose=False
    )

    response = conversation.predict(input=message.text)
    bot.reply_to(message, response)
This gives your LangChain-powered Telegram bot memory that persists across restarts while using familiar LangChain patterns.
When NOT to Use BotWire
- Vector search: BotWire is key-value storage, not a vector database. Use Pinecone/Weaviate for semantic search over conversation embeddings.
- High-throughput bots: The 1000 writes/day free limit won't work for viral bots with thousands of concurrent users.
- Sub-millisecond latency: Network calls to the persistence layer add ~50-200ms. Use Redis for ultra-low latency caching.
FAQ
Q: Why not just use Redis or a database? A: You could, but then you need to run Redis, handle connections, implement serialization, and manage infrastructure. BotWire works out of the box with zero setup.
Q: Is this actually free? A: Yes, 1000 writes/day per namespace is free forever. Most personal/small business bots stay well under this limit. You can also self-host the open-source version.
Q: What about data privacy? A: Data is stored on BotWire's servers for the hosted version. For sensitive use cases, self-host the MIT-licensed FastAPI service on your own infrastructure.
Stop rebuilding memory systems for every bot project. Give your Telegram AI agents persistent memory that just works: pip install botwire and deploy with confidence. Check out more examples at botwire.dev.