Stateful AI Workflows with Inngest + BotWire
Free · Open source (MIT) · Works with LangChain, CrewAI, AutoGen · No signup
Your AI workflows keep losing their memory between Inngest function runs. Agent conversations reset, context disappears, and your LLM starts every interaction from scratch. You need durable AI workflows that remember past decisions, user preferences, and conversation history across restarts and deployments.
The Memory Problem in Durable Workflows
Inngest functions are stateless by design. Each execution starts fresh with no memory of previous runs. This works great for simple event processing, but breaks down for AI agents that need persistent state.
Consider an AI customer service agent built with Inngest. A user asks about their order, the agent looks it up and responds. Five minutes later, the same user asks a follow-up question. Without persistent memory, your agent has zero context about the previous conversation.
The obvious solutions don't work well:
- Function arguments become unwieldy for large conversation histories
- External databases require schema design, migrations, and connection management
- Redis/cache layers add infrastructure complexity and still lose data on restarts
What you need is persistent key-value memory that survives process restarts, deployments, and infrastructure changes while integrating seamlessly with your Inngest AI workflows.
The Fix: BotWire Memory
BotWire provides persistent agent memory with a simple key-value API. Install it and your Inngest functions can remember anything across executions:
```shell
pip install botwire
```
```python
import inngest
from botwire import Memory

inngest_client = inngest.Inngest(app_id="ai-agent")

@inngest_client.create_function(
    fn_id="chat-agent",
    trigger=inngest.TriggerEvent(event="api/chat"),
)
async def chat_agent(ctx: inngest.Context, step: inngest.Step):
    user_id = ctx.event.data["user_id"]
    message = ctx.event.data["message"]

    # Persistent memory per user
    memory = Memory(f"user-{user_id}")

    # Get conversation history
    history = memory.get("conversation") or []

    # Add new message
    history.append({"role": "user", "content": message})

    # Process with your LLM (call_llm is your own OpenAI/Anthropic wrapper)
    response = await step.run("llm-call", lambda: call_llm(history))

    # Save updated conversation
    history.append({"role": "assistant", "content": response})
    memory.set("conversation", history)

    return {"response": response}
```
How BotWire Memory Works
Calling Memory("namespace") creates an isolated memory space. All data for that namespace persists on the hosted service at https://botwire.dev with no setup required: no API keys, no database configuration, no infrastructure.
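The namespace semantics are easy to picture with a tiny in-memory stand-in (a sketch for illustration only; the real client talks to the hosted service, and the `InMemoryMemory` class below is hypothetical, not BotWire's implementation):

```python
class InMemoryMemory:
    """Illustrative stand-in for BotWire's Memory API (not the real client)."""

    _store: dict = {}  # shared backing store, keyed by (namespace, key)

    def __init__(self, namespace: str):
        self.namespace = namespace

    def set(self, key, value):
        InMemoryMemory._store[(self.namespace, key)] = value

    def get(self, key, default=None):
        return InMemoryMemory._store.get((self.namespace, key), default)

# Namespaces are isolated: user-123 never sees user-456's data
a = InMemoryMemory("user-123")
b = InMemoryMemory("user-456")
a.set("conversation", [{"role": "user", "content": "hi"}])
print(b.get("conversation"))  # → None
```

The real service applies the same rule server-side: a key only resolves within the namespace that wrote it.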
Real-World Patterns
User Preferences and Context:
```python
from botwire import Memory

memory = Memory("agent-assistant")

# Remember user preferences
memory.set("user_timezone", "America/New_York")
memory.set("preferred_language", "Spanish")
memory.set("conversation_style", "formal")

# Store agent state between workflows
memory.set("current_task", "processing_invoice_batch_47")
memory.set("last_processed_id", "inv_1234")

# Cross-process coordination
memory.set("worker_status", {"active": True, "since": "2024-01-15T10:30:00Z"})
```
Managing Memory Lifecycle:
```python
# Check what's stored
all_keys = memory.list_keys()

# Clean up old data
memory.delete("old_conversation")

# Conditional updates
current_count = memory.get("retry_count", 0)
if current_count < 3:
    memory.set("retry_count", current_count + 1)
```
Memory persists across Inngest function executions, server restarts, and deployments. Each namespace is isolated—user-123 can't access user-456 data. The free tier gives you 1000 writes per day per namespace, which covers most AI agent use cases.
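With a 1000-writes/day budget, one practical habit is to write the conversation once per run and cap its length so each write stays small. A pure-Python sketch (the `trim_history` helper is hypothetical, not part of BotWire):

```python
def trim_history(history: list, max_turns: int = 20) -> list:
    """Keep only the most recent messages, preserving a leading system prompt."""
    if not history:
        return history
    system = [m for m in history[:1] if m.get("role") == "system"]
    rest = history[len(system):]
    return system + rest[-max_turns:]

history = [{"role": "system", "content": "You are helpful."}]
history += [{"role": "user", "content": f"msg {i}"} for i in range(50)]

trimmed = trim_history(history, max_turns=20)
print(len(trimmed))  # → 21: system prompt + 20 most recent messages
```

Trimming before `memory.set("conversation", trimmed)` keeps you to a single bounded write per function run.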
LangChain Integration with Inngest
For LangChain-based agents in Inngest functions, use the chat history adapter:
```python
import inngest
from botwire import BotWireChatHistory
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

inngest_client = inngest.Inngest(app_id="ai-agent")

@inngest_client.create_function(
    fn_id="langchain-agent",
    trigger=inngest.TriggerEvent(event="api/langchain-chat"),
)
async def langchain_agent(ctx: inngest.Context, step: inngest.Step):
    session_id = ctx.event.data["session_id"]
    message = ctx.event.data["message"]

    # Persistent chat history
    history = BotWireChatHistory(session_id=session_id)

    # Add user message
    history.add_message(HumanMessage(content=message))

    # Get LLM response with full history context
    llm = ChatOpenAI()
    response = await step.run(
        "llm-response",
        lambda: llm.invoke(history.messages),
    )

    # Persist the AI response in the chat history
    history.add_message(response)

    return {"response": response.content}
```
When NOT to Use BotWire
BotWire isn't the right tool if you need:
- Vector search or embeddings — it's key-value storage, not a semantic database like Pinecone or Weaviate
- High-throughput writes — the free tier is 1000 writes/day per namespace; beyond that you need Redis or a proper database
- Sub-millisecond latency — the HTTP API adds ~50-200ms; use an in-memory cache for real-time applications
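If a workflow reads the same keys repeatedly, a small read-through cache in front of the memory client can absorb most of that HTTP latency. A sketch (`CachedMemory` and `DictBackend` are illustrative, not BotWire APIs; the backend stands in for any object with `get`/`set`):

```python
class CachedMemory:
    """Read-through, write-through cache in front of a slow key-value backend."""

    def __init__(self, backend):
        self.backend = backend  # anything with get(key, default) / set(key, value)
        self._cache = {}

    def get(self, key, default=None):
        if key not in self._cache:
            self._cache[key] = self.backend.get(key, default)
        return self._cache[key]

    def set(self, key, value):
        self.backend.set(key, value)  # write through to the backend
        self._cache[key] = value      # keep the cache consistent

# Demo with a dict-backed stand-in that counts remote reads
class DictBackend:
    def __init__(self):
        self.data = {}
        self.reads = 0

    def get(self, key, default=None):
        self.reads += 1
        return self.data.get(key, default)

    def set(self, key, value):
        self.data[key] = value

backend = DictBackend()
mem = CachedMemory(backend)
mem.set("conversation_style", "formal")
for _ in range(100):
    mem.get("conversation_style")
print(backend.reads)  # → 0: the write primed the cache, so no backend reads
```

This only helps within a single process, so it complements (rather than replaces) the durable store.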
FAQ
Why not just use Redis? Redis requires infrastructure setup, connection management, and you lose data if Redis restarts without persistence configured. BotWire works immediately with zero setup.
Is this actually free? Yes, 1000 writes/day per namespace forever. Unlimited reads. No credit card required. Open source MIT license if you want to self-host.
What about data privacy? Data is stored at botwire.dev. For sensitive data, self-host the open source version (single FastAPI + SQLite service) or use namespacing to isolate customer data.
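A self-hosted persistence layer can indeed be very small. A sketch of what the SQLite side of such a service might look like (schema and function names are assumptions for illustration, not BotWire's actual implementation):

```python
import json
import sqlite3

def open_store(path: str = ":memory:") -> sqlite3.Connection:
    """Open the store and create the namespaced key-value table if needed."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS kv (
               namespace TEXT NOT NULL,
               key       TEXT NOT NULL,
               value     TEXT NOT NULL,  -- JSON-encoded
               PRIMARY KEY (namespace, key)
           )"""
    )
    return conn

def kv_set(conn, namespace, key, value):
    conn.execute(
        "INSERT OR REPLACE INTO kv (namespace, key, value) VALUES (?, ?, ?)",
        (namespace, key, json.dumps(value)),
    )
    conn.commit()

def kv_get(conn, namespace, key, default=None):
    row = conn.execute(
        "SELECT value FROM kv WHERE namespace = ? AND key = ?",
        (namespace, key),
    ).fetchone()
    return json.loads(row[0]) if row else default

conn = open_store()
kv_set(conn, "user-123", "retry_count", 2)
print(kv_get(conn, "user-123", "retry_count"))  # → 2
print(kv_get(conn, "user-456", "retry_count"))  # → None (isolated namespace)
```

The composite primary key `(namespace, key)` is what enforces the isolation described above; the FastAPI layer would just map HTTP routes onto these two functions.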
Get Started
BotWire Memory solves the Inngest AI workflow state problem with persistent key-value storage that just works.
```shell
pip install botwire
```
Full documentation and examples at https://botwire.dev