Long-Running AI Agents with Temporal + Memory
Free · Open source (MIT) · Works with LangChain, CrewAI, AutoGen · No signup
Your AI agents crash, restart, or get redeployed, and suddenly they can't remember what they were doing. Long-running Temporal workflows for AI agents need memory that persists across restarts, but standard in-memory storage dies with the process. You need something that survives infrastructure changes without a complex database setup.
The Memory Loss Problem
When you build AI agents with Temporal workflows that run for hours or days, memory becomes critical. Your agent might be halfway through a complex task — analyzing documents, waiting for user input, or coordinating with external APIs — when your container restarts, Kubernetes reschedules your pod, or you deploy new code.
Without persistent memory, your agent loses context:
- Conversation history disappears
- Multi-step workflows restart from scratch
- User preferences and learned behaviors vanish
- Cross-workflow data sharing breaks
You could build Redis clusters or manage databases, but for simple agent memory, that's overkill. You need something that works immediately and survives process restarts without infrastructure complexity.
The Fix: BotWire Memory
Install BotWire and add persistent memory to your Temporal AI workflows in minutes:
```shell
pip install botwire
```
Here's a working Temporal workflow with persistent agent memory:
```python
from datetime import timedelta

from botwire import Memory
from temporalio import workflow, activity


@workflow.defn
class AIAgentWorkflow:
    @workflow.run
    async def run(self, user_id: str, task: str) -> str:
        # Agent memory survives workflow restarts
        memory = Memory(f"agent-{user_id}")

        # Remember the current task
        memory.set("current_task", task)
        memory.set("status", "started")

        # Execute long-running activities
        result = await workflow.execute_activity(
            analyze_task,
            args=[user_id, task],
            start_to_close_timeout=timedelta(hours=2),
        )

        # Update memory with results
        memory.set("status", "completed")
        memory.set("result", result)
        return result


@activity.defn
async def analyze_task(user_id: str, task: str) -> str:
    # Access the same memory namespace from any activity
    memory = Memory(f"agent-{user_id}")
    current_status = memory.get("status", "unknown")
    print(f"Agent status: {current_status}")
    # Your AI logic here
    return f"Analysis complete for: {task}"
```
How It Works
BotWire Memory is a persistent key-value store designed for agent memory. Each Memory("namespace") creates an isolated storage space that survives process restarts, container crashes, and deployments.
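To make the semantics concrete, here is a stand-in built on the standard library's `sqlite3` module. This is not BotWire's implementation, just an illustrative sketch of the same `set`/`get` surface, showing how a namespaced key-value store keeps data across process restarts (the `DurableMemory` class and file path are invented for this example):

```python
import json
import sqlite3


class DurableMemory:
    """Illustrative stand-in for a namespaced persistent key-value store."""

    def __init__(self, namespace: str, path: str = "memory.db"):
        self.namespace = namespace
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS kv "
            "(namespace TEXT, key TEXT, value TEXT, PRIMARY KEY (namespace, key))"
        )

    def set(self, key: str, value) -> None:
        # Upsert so repeated writes overwrite the previous value
        self.db.execute(
            "INSERT INTO kv VALUES (?, ?, ?) "
            "ON CONFLICT(namespace, key) DO UPDATE SET value = excluded.value",
            (self.namespace, key, json.dumps(value)),
        )
        self.db.commit()  # flushed to disk, so it survives a crash

    def get(self, key: str, default=None):
        row = self.db.execute(
            "SELECT value FROM kv WHERE namespace = ? AND key = ?",
            (self.namespace, key),
        ).fetchone()
        return json.loads(row[0]) if row else default
```

Two instances opened with the same namespace, even from different processes, see the same rows, while a different namespace sees none of them. That isolation-plus-durability combination is what `Memory("namespace")` gives you without running any storage yourself.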
The memory persists across Temporal workflow executions. If your workflow gets interrupted and replayed, your agent picks up exactly where it left off:
```python
@workflow.defn
class PersistentChatWorkflow:
    @workflow.run
    async def run(self, session_id: str, message: str) -> str:
        memory = Memory(f"chat-{session_id}")

        # Check whether this is a returning conversation
        conversation_count = memory.get("message_count", 0)
        user_name = memory.get("user_name", "Unknown")

        # Update conversation state
        memory.set("message_count", conversation_count + 1)
        memory.set("last_message", message)

        if conversation_count == 0:
            return "Hello! I'm starting our conversation."
        return f"Welcome back {user_name}! This is message #{conversation_count + 1}"
```
You can list all stored keys, delete specific entries, or clear entire namespaces:
```python
memory = Memory("agent-123")

# List all stored keys
all_keys = memory.list_keys()
print(f"Stored keys: {all_keys}")

# Delete a specific entry
memory.delete("temporary_data")

# Memory handles concurrent access across processes automatically
```
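Why does it matter that the store handles concurrency, rather than the caller? BotWire's internals aren't shown here, but the general principle is that each write must be applied atomically by the store, so interleaved writers can't clobber each other the way a naive get-then-set would. A standard-library `sqlite3` sketch of that idea, with two connections standing in for two worker processes (all names and the counter key are invented for this example):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "kv.db")
init = sqlite3.connect(path)
init.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value INTEGER)")
init.execute("INSERT INTO kv VALUES ('count', 0)")
init.commit()

# Two independent connections, as if from two worker processes
a = sqlite3.connect(path)
b = sqlite3.connect(path)

# Each increment is a single atomic statement applied by the store,
# so interleaved writers cannot lose updates the way a read-modify-
# write sequence in the callers could.
for conn in (a, b, a, b):
    conn.execute("UPDATE kv SET value = value + 1 WHERE key = 'count'")
    conn.commit()

final = init.execute("SELECT value FROM kv WHERE key = 'count'").fetchone()[0]
print(final)
```

All four increments survive because the store, not the caller, serializes each write.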
Temporal-Specific Patterns
For complex Temporal AI workflows, use memory to coordinate between activities and handle workflow continuations:
```python
@workflow.defn
class MultiStepAIWorkflow:
    @workflow.run
    async def run(self, agent_id: str, steps: list[str]) -> dict:
        memory = Memory(f"workflow-{agent_id}")
        results = {}

        # Track progress through workflow steps
        completed_steps = memory.get("completed_steps", [])

        for step in steps:
            if step in completed_steps:
                # Skip already completed steps on replay
                results[step] = memory.get(f"result_{step}")
                continue

            # Execute the step
            result = await workflow.execute_activity(
                process_step,
                args=[agent_id, step],
                start_to_close_timeout=timedelta(minutes=30),
            )

            # Persist the result before continuing
            memory.set(f"result_{step}", result)
            completed_steps.append(step)
            memory.set("completed_steps", completed_steps)
            results[step] = result

        return results
```
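The skip-completed-steps pattern isn't specific to BotWire or Temporal; it's ordinary checkpointing. A minimal standard-library sketch of the same idea, persisting a JSON checkpoint after each step so a crash loses at most the step in flight (`run_steps` and the flaky executor are invented for this example):

```python
import json
from pathlib import Path


def run_steps(steps, execute, checkpoint: Path):
    """Run steps in order, skipping any already recorded in the checkpoint."""
    state = json.loads(checkpoint.read_text()) if checkpoint.exists() else {}
    for step in steps:
        if step in state:
            continue  # already done in a previous run
        state[step] = execute(step)
        # Persist after every step so a crash loses at most one step
        checkpoint.write_text(json.dumps(state))
    return state
```

If the process dies mid-run, calling `run_steps` again with the same checkpoint file resumes from the first incomplete step instead of starting over, which is exactly what the workflow above does with `completed_steps`.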
When NOT to Use BotWire
- Vector search: BotWire isn't a vector database. Use Pinecone, Weaviate, or Chroma for embedding-based semantic search
- High-throughput workloads: 1000 writes/day limit on free tier. For millions of operations, use Redis or dedicated databases
- Sub-millisecond latency: HTTP-based storage adds network overhead. Use in-memory caches for ultra-low latency requirements
FAQ
Why not just use Redis? Redis requires setup, clustering for HA, and memory management. BotWire works instantly with zero configuration and has a free tier that covers most AI agent use cases.
Is this actually free? Yes, 1000 writes/day per namespace forever. That covers most agent workflows. Unlimited reads, 50MB storage per namespace. You can also self-host the open source version.
What about data privacy? All data is encrypted in transit. For sensitive workloads, self-host using the open source version — it's a single FastAPI service with SQLite.
Get Started
Add persistent memory to your Temporal AI agents in under 5 minutes. No signups, no API keys, just working agent memory that survives any restart.
```shell
pip install botwire
```
Try it now at https://botwire.dev — your agents will thank you for remembering.