Browserbase Agents With Persistent Memory
Free · Open source (MIT) · Works with LangChain, CrewAI, AutoGen · No signup
Your Browserbase or Stagehand browser automation agents work perfectly—until they restart. Every new session, they've forgotten everything: scraped data, authentication tokens, user preferences, even basic context about their previous runs. You're manually rebuilding state or storing it in files and databases. Here's how to give your browser agents persistent memory that survives restarts, process crashes, and machine changes.
The Problem: Browser Agents Lose Everything
Browser automation frameworks like Browserbase and Stagehand excel at controlling browsers, but they're stateless by design. When your agent finishes a run, everything in memory vanishes. Your agent might spend 10 minutes logging into a complex web app, scraping user data, and learning preferences—then forget it all.
The pain is real: authentication flows repeat every run, scraped data gets re-fetched unnecessarily, and context about user interactions disappears. You end up writing a custom persistence layer with files, SQLite, or external databases. Each agent needs its own storage logic, error handling, and cleanup.
This becomes especially painful with long-running browser agents that need to maintain state across multiple sessions, remember user preferences, or cache expensive API responses. Your agent should remember what it learned, not start from scratch every time.
The Fix: Add Persistent Memory in 3 Lines
BotWire Memory gives your browser agents persistent key-value storage that survives restarts and works across processes.
pip install botwire
from botwire import Memory
# Your browser agent with persistent memory
memory = Memory("my-browser-agent")
# Store state that survives restarts
memory.set("last_login_token", "abc123")
memory.set("user_preferences", {"theme": "dark", "language": "en"})
memory.set("scraped_urls", ["https://example.com/page1", "https://example.com/page2"])
# Retrieve state on next run
token = memory.get("last_login_token") # "abc123"
prefs = memory.get("user_preferences") # {"theme": "dark", "language": "en"}
Your agent now remembers everything between runs. No databases, no file handling, no complex setup.
How It Works: Persistent State Patterns
The Memory class stores data in a namespace (like "my-browser-agent"). Each namespace is isolated, so multiple agents won't interfere with each other. Data persists across restarts, process crashes, and even different machines.
from botwire import Memory
# Different agents use different namespaces
login_agent = Memory("login-bot")
scraper_agent = Memory("data-scraper")
# Store complex data structures
scraper_agent.set("pagination_state", {
    "current_page": 5,
    "total_pages": 20,
    "last_item_id": "item_12345"
})
# Set TTL for temporary data (30 minutes)
login_agent.set("session_cookie", "temp_session_123", ttl=1800)
# List all stored keys
all_keys = scraper_agent.list_keys()
print(f"Stored data: {all_keys}")
# Delete when done
login_agent.delete("old_session_token")
Common patterns for browser agent state:
- Authentication tokens: Store login cookies, JWTs, or session data
- Scraping progress: Track pagination, last processed items, or queue state
- User context: Remember preferences, settings, or interaction history
- Rate limiting: Store API call timestamps to avoid hitting limits
- Error recovery: Save checkpoint data to resume after failures
The memory persists across processes, so you can stop your agent, deploy updates, restart on different machines, and pick up exactly where you left off.
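To make the rate-limiting pattern from the list above concrete, here's a minimal sketch. It uses a dict-backed `FakeMemory` stand-in with the same `get`/`set` shape shown in this article (so it runs anywhere), not the real botwire client: swap in `Memory("rate-limit-demo")` for real persistence.

```python
import time

# Stand-in for botwire Memory: a dict with the same get/set shape used in
# this article. Illustrative only -- replace with Memory(...) in real use.
class FakeMemory:
    def __init__(self, namespace):
        self.namespace = namespace
        self._data = {}

    def get(self, key, default=None):
        return self._data.get(key, default)

    def set(self, key, value, ttl=None):
        self._data[key] = value

# Rate-limiting pattern: keep request timestamps in memory so the limit
# survives restarts and is shared across runs of the same agent.
def allowed_to_call(memory, limit=10, window=60.0, now=None):
    now = time.time() if now is None else now
    # Drop timestamps that have aged out of the window
    stamps = [t for t in memory.get("api_calls", []) if now - t < window]
    if len(stamps) >= limit:
        memory.set("api_calls", stamps)
        return False
    stamps.append(now)
    memory.set("api_calls", stamps)
    return True

memory = FakeMemory("rate-limit-demo")
results = [allowed_to_call(memory, limit=3, window=60.0, now=float(i)) for i in range(5)]
print(results)  # first 3 calls pass, the rest are throttled
```

Because the timestamps live in the store rather than in process memory, a restarted agent keeps honoring the same limit.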
Browserbase Integration Example
Here's how to add persistent memory to a Browserbase agent that maintains login state and scraping progress:
from botwire import Memory
import browserbase  # example integration; adapt calls to your SDK

class PersistentBrowserAgent:
    def __init__(self, agent_id):
        self.memory = Memory(f"browserbase-agent-{agent_id}")
        self.browser = browserbase.Browser()

    def run_with_memory(self):
        # Check if we're already logged in
        auth_token = self.memory.get("auth_token")
        if auth_token:
            print("Resuming with saved auth token")
            self.browser.set_cookie("auth", auth_token)
        else:
            print("First run - need to login")
            self.login_and_save_token()

        # Resume scraping from last position
        last_page = self.memory.get("last_scraped_page", 1)
        results = self.scrape_from_page(last_page)

        # Save progress for next run
        self.memory.set("last_scraped_page", last_page + len(results))
        self.memory.set(
            "total_scraped",
            self.memory.get("total_scraped", 0) + len(results),
        )

    def login_and_save_token(self):
        # Your login logic here
        token = self.browser.login("user", "pass")
        self.memory.set("auth_token", token, ttl=86400)  # 24-hour TTL
This pattern works with any browser automation framework—the memory layer is independent of your browser control logic.
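To see that independence in isolation, here's the same resume-and-checkpoint logic run against a dict-backed stand-in instead of a live browser. `FakeMemory` and `scrape_run` are illustrative names, not part of botwire; the point is that the persistence logic never touches the browser layer.

```python
# Dict-backed stand-in with the same get/set shape as botwire Memory.
# Illustrative only -- in real use, pass a Memory(...) instance instead.
class FakeMemory:
    def __init__(self):
        self._data = {}

    def get(self, key, default=None):
        return self._data.get(key, default)

    def set(self, key, value, ttl=None):
        self._data[key] = value

def scrape_run(memory, pages_this_run):
    # Resume from the last checkpoint, "scrape", then save progress.
    start = memory.get("last_scraped_page", 1)
    scraped = list(range(start, start + pages_this_run))
    memory.set("last_scraped_page", start + pages_this_run)
    memory.set("total_scraped", memory.get("total_scraped", 0) + len(scraped))
    return scraped

memory = FakeMemory()
scrape_run(memory, 3)           # first run: pages 1-3
second = scrape_run(memory, 2)  # "restart": resumes at page 4
print(second)  # [4, 5]
```

Because `scrape_run` only depends on the `get`/`set` interface, the same function works whether the browser underneath is Browserbase, Stagehand, or anything else.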
When NOT to Use BotWire Memory
BotWire Memory isn't the right choice for every use case:
- Vector search or semantic memory: This is key-value storage, not a vector database. Use Pinecone, Weaviate, or Chroma for embedding-based retrieval.
- High-throughput applications: Free tier is 1000 writes/day per namespace. For heavy workloads, consider Redis or a dedicated database.
- Sub-millisecond latency requirements: Network calls add ~50-200ms. Use in-memory caches for ultra-low latency needs.
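If latency is your only concern, one middle ground is a local read-through cache in front of the persistent store: reads hit the network once per key, writes go through to the backend. This is a hedged sketch; `SlowStore` is a hypothetical stand-in for a remote store and is not part of botwire.

```python
# Hypothetical stand-in for a remote store; counts reads so we can see
# how many round-trips the cache actually saves.
class SlowStore:
    def __init__(self):
        self._data = {}
        self.reads = 0

    def get(self, key, default=None):
        self.reads += 1  # each call here would be a network round-trip
        return self._data.get(key, default)

    def set(self, key, value):
        self._data[key] = value

class CachedMemory:
    """Read-through cache: first read per key hits the backend, later
    reads are local; writes go through so the backend stays current."""
    def __init__(self, backend):
        self.backend = backend
        self._cache = {}

    def get(self, key, default=None):
        if key not in self._cache:
            self._cache[key] = self.backend.get(key, default)
        return self._cache[key]

    def set(self, key, value):
        self._cache[key] = value
        self.backend.set(key, value)

store = SlowStore()
store.set("prefs", {"theme": "dark"})
mem = CachedMemory(store)
mem.get("prefs")
mem.get("prefs")  # served from the local cache; no second remote read
print(store.reads)  # 1
```

Note the trade-off: a local cache can go stale if another process writes to the same namespace, so this fits single-writer agents best.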
FAQ
Q: Why not just use Redis or a database? A: BotWire Memory requires zero setup—no servers, no connection strings, no auth. It's designed for agents that need simple persistence without infrastructure complexity.
Q: Is this actually free? A: Yes. 1000 writes/day per namespace, 50MB storage per namespace, unlimited reads. No signup required, no credit card, no API keys.
Q: What about data privacy? A: It's open source (MIT license) and you can self-host the entire service. The hosted version at botwire.dev doesn't require accounts, so no personal data collection.
Get Started
Add persistent memory to your browser agents in one command. Your agents will thank you for remembering their hard work.
pip install botwire
Full documentation and self-hosting instructions at botwire.dev.