WhatsApp AI Assistant With Persistent Memory

Free · Open source (MIT) · Works with LangChain, CrewAI, AutoGen · No signup

Your WhatsApp AI assistant forgets everything after a restart. Every conversation starts from scratch, users get frustrated repeating context, and your bot feels broken. Here's how to add persistent memory to WhatsApp bots built with Twilio or Meta's API, using a simple key-value store that survives restarts.

The Problem: WhatsApp Bots Have Amnesia

WhatsApp AI assistants are stateless by design. When your Flask/FastAPI server restarts, crashes, or scales, all chat history vanishes. Your bot can't remember:

- the user's name or stated preferences
- earlier messages in the conversation
- where a multi-turn dialog left off

This breaks user experience. Imagine telling your AI assistant your name, preferences, and context every single conversation. Users abandon bots that can't maintain basic continuity.

The core issue: WhatsApp webhooks are HTTP requests. Your server processes the message, sends a response, and forgets everything. No built-in persistence exists between webhook calls. You need external storage that persists across process restarts.
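To make the failure concrete, here's the naive approach many bots start with: a module-level dict. This is an illustrative sketch, not BotWire code; it works only until the process dies, and then every user's context is gone.

```python
# The naive approach: in-process state. Looks fine in testing...
user_context = {}  # lives only as long as this Python process

def handle_message(user_id, message):
    history = user_context.setdefault(user_id, [])
    history.append(message)
    return len(history)  # "messages seen" for this user

handle_message("alice", "hi")
count = handle_message("alice", "my name is Alice")
# After a redeploy or crash, user_context is empty again and count
# starts over from 1: the amnesia described above.
```

Every restart, deploy, or extra worker process gets its own empty `user_context`, which is exactly why the state has to live outside the web server.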

The Fix: Persistent Memory in 3 Lines

Install BotWire Memory for instant persistent storage:

pip install botwire

Add memory to your WhatsApp webhook handler:

from botwire import Memory
from flask import Flask, request

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def whatsapp_webhook():
    data = request.json
    user_id = data['entry'][0]['changes'][0]['value']['messages'][0]['from']
    message = data['entry'][0]['changes'][0]['value']['messages'][0]['text']['body']
    
    # Persistent memory - survives restarts
    memory = Memory(f"whatsapp-{user_id}")
    
    # Store conversation context
    memory.set("last_message", message)
    memory.set("message_count", memory.get("message_count", 0) + 1)
    
    # Your AI logic here
    response = generate_ai_response(message, memory)
    send_whatsapp_message(user_id, response)
    
    return "OK"
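One caveat with the deep indexing above: Meta's webhook also delivers status updates (delivered/read receipts) that carry no messages array, so `data['entry'][0]...['messages'][0]` can raise KeyError. A defensive extraction helper, sketched here as an assumption about how you might structure your payload handling:

```python
def extract_message(data):
    """Safely pull (sender, text) from a Meta Cloud API webhook payload.

    Returns None for status callbacks and non-text messages instead of
    raising KeyError or IndexError.
    """
    try:
        value = data["entry"][0]["changes"][0]["value"]
    except (KeyError, IndexError):
        return None
    messages = value.get("messages")
    if not messages:
        return None  # delivery/read status update, nothing to answer
    msg = messages[0]
    text = msg.get("text", {}).get("body")
    if text is None:
        return None  # image, audio, etc.: handle separately if needed
    return msg["from"], text

# Example payload shaped like Meta's documented webhook format:
payload = {"entry": [{"changes": [{"value": {"messages": [
    {"from": "15551234567", "text": {"body": "hello"}}]}}]}]}
result = extract_message(payload)
```

Call it at the top of the webhook and return "OK" early when it yields None, so status callbacks don't crash the handler.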

How It Works

The Memory class creates a namespaced key-value store per user. Each WhatsApp user gets isolated storage that persists across server restarts, deployments, and crashes.

Key patterns for WhatsApp chatbot state:

import time

from botwire import Memory

# extract_name and detect_intent are application-specific helpers
def handle_whatsapp_message(user_id, message):
    memory = Memory(f"user-{user_id}")
    
    # Store user context
    if "my name is" in message.lower():
        name = extract_name(message)
        memory.set("user_name", name)
    
    # Conversation history (last 10 messages)
    history = memory.get("history", [])
    history.append({"user": message, "timestamp": time.time()})
    history = history[-10:]  # Keep last 10
    memory.set("history", history)
    
    # User preferences (simplified: a real bot would parse what was preferred)
    if "i prefer" in message.lower():
        prefs = memory.get("preferences", {})
        prefs["communication_style"] = "formal"
        memory.set("preferences", prefs)
        
    # Multi-turn dialog state
    memory.set("last_intent", detect_intent(message))
    memory.set("conversation_stage", "collecting_info")
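Once context like this is stored, the payoff comes on the next turn, when you fold it back into the model prompt. A hypothetical prompt builder (`build_prompt` and its key names are illustrative, not part of BotWire):

```python
def build_prompt(memory_snapshot, new_message):
    """Assemble an LLM prompt from stored context.

    memory_snapshot is a plain dict standing in for values read back
    via memory.get(); the keys match those written in the handler above.
    """
    parts = ["You are a helpful WhatsApp assistant."]
    name = memory_snapshot.get("user_name")
    if name:
        parts.append(f"The user's name is {name}.")
    for turn in memory_snapshot.get("history", [])[-3:]:
        parts.append(f"Earlier the user said: {turn['user']}")
    parts.append(f"User: {new_message}")
    return "\n".join(parts)

snapshot = {"user_name": "Alice",
            "history": [{"user": "my name is Alice", "timestamp": 0}]}
prompt = build_prompt(snapshot, "what's my name?")
```

Keeping the prompt assembly separate from storage makes it easy to test and to cap how much stored history you spend on each request.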

Memory operations are synchronous HTTP calls to botwire.dev. Each namespace gets 1000 writes/day free, unlimited reads. Data persists indefinitely unless you delete it. TTL and key listing work as expected:

# Set expiring data
memory.set("temp_token", "abc123", ttl=3600)  # 1 hour

# List all keys
keys = memory.list_keys()

# Delete specific data
memory.delete("old_conversation")

Cross-process access works automatically. Multiple server instances share the same user memory.

Twilio WhatsApp Integration

For Twilio WhatsApp bots, integrate memory into your webhook handler:

from twilio.twiml.messaging_response import MessagingResponse
from botwire import Memory
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.route('/twilio-webhook', methods=['POST'])
def twilio_whatsapp():
    user_phone = request.form['From']  # whatsapp:+1234567890
    message_body = request.form['Body']
    
    # Persistent memory per phone number
    memory = Memory(f"twilio-{user_phone}")
    
    # Build conversation context
    chat_history = memory.get("messages", [])
    chat_history.append({"role": "user", "content": message_body})
    
    # Call OpenAI with the stored history (openai>=1.0 client API)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=chat_history,
        max_tokens=150,
    )
    
    ai_message = response.choices[0].message.content
    chat_history.append({"role": "assistant", "content": ai_message})
    
    # Persist updated history (keep last 20 messages)
    memory.set("messages", chat_history[-20:])
    
    # Send Twilio response
    twiml = MessagingResponse()
    twiml.message(ai_message)
    return str(twiml)
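One subtlety with `chat_history[-20:]`: if you later add a system message at position 0, a blind tail slice will eventually drop it. A small trimming helper that always preserves a leading system message (illustrative, not part of any SDK):

```python
def trim_history(messages, keep=20):
    """Keep the last `keep` turns, but never drop a leading system message."""
    if messages and messages[0].get("role") == "system":
        system, rest = messages[:1], messages[1:]
        return system + rest[-keep:]
    return messages[-keep:]

history = [{"role": "system", "content": "Be concise."}]
history += [{"role": "user", "content": f"msg {i}"} for i in range(30)]
trimmed = trim_history(history, keep=20)
```

Swap this in for the slice before `memory.set("messages", ...)` and the bot's standing instructions survive long conversations.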

When NOT to Use BotWire

BotWire trades control for convenience, so skip it when:

- You need more than 1000 writes/day per namespace and don't want a paid plan.
- Memory operations sit on a latency-critical path: each one is a synchronous HTTP call to botwire.dev.
- You handle sensitive data that can't leave your infrastructure (or self-host instead; see the FAQ).
- You already run Redis or Postgres: another storage service buys you little.

FAQ

Why not Redis? Redis requires hosting, configuration, and scaling. BotWire Memory works instantly with zero setup. Perfect for prototypes and small-scale bots.

Is this actually free? Yes, 1000 writes/day per namespace forever. No credit card, no expiration. Paid plans exist for higher limits.

What about privacy? Data is stored on botwire.dev servers. For sensitive data, self-host the open-source version (single FastAPI + SQLite file).
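If you do go the self-hosted route, the storage layer itself is small. Here's a minimal SQLite sketch of the same set/get/delete interface with TTL support; it illustrates the idea and is not the actual BotWire implementation:

```python
import json
import sqlite3
import time

class LocalMemory:
    """File-backed key-value store mimicking the Memory interface above."""

    def __init__(self, namespace, path="memory.db"):
        # ":memory:" works for demos; use a file path for real persistence
        self.ns = namespace
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS kv "
            "(ns TEXT, key TEXT, value TEXT, expires REAL, "
            "PRIMARY KEY (ns, key))")

    def set(self, key, value, ttl=None):
        expires = time.time() + ttl if ttl else None
        self.db.execute(
            "INSERT OR REPLACE INTO kv VALUES (?, ?, ?, ?)",
            (self.ns, key, json.dumps(value), expires))
        self.db.commit()

    def get(self, key, default=None):
        row = self.db.execute(
            "SELECT value, expires FROM kv WHERE ns=? AND key=?",
            (self.ns, key)).fetchone()
        if row is None or (row[1] and row[1] < time.time()):
            return default  # missing or expired
        return json.loads(row[0])

    def delete(self, key):
        self.db.execute(
            "DELETE FROM kv WHERE ns=? AND key=?", (self.ns, key))
        self.db.commit()

m = LocalMemory("user-demo", path=":memory:")
m.set("message_count", 3)
```

Because values are JSON-serialized, the same class handles the strings, lists, and dicts used throughout the examples above.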

Give your WhatsApp AI assistant the memory it deserves. Users will notice the difference immediately: no more repeating context or starting conversations from scratch.

pip install botwire

Get started at https://botwire.dev