Scheduled AI Agent Tasks (Cron) with Persistent State

Free · Open source (MIT) · Works with LangChain, CrewAI, AutoGen · No signup

Your cron-scheduled AI agent crashes overnight, loses track of processed items, and duplicates work when it restarts. You need persistent state that survives process restarts, remembers what your periodic AI agent was doing, and picks up exactly where it left off. Here's how to fix it with memory that persists between cron runs.

The Problem with Stateless Scheduled AI Agents

When you run AI agents on cron schedules, they're stateless by default. Each execution starts fresh with no memory of previous runs, which breaks any workflow that needs continuity:

# This breaks - no memory between cron runs
def process_queue():
    items = get_all_items()  # Always processes everything
    for item in items:
        expensive_llm_call(item)  # Duplicates work

Your agent has no idea what it processed 10 minutes ago. It either skips work or duplicates it. Both are expensive mistakes when you're paying per LLM token.

The Fix: Persistent Agent Memory

Install BotWire for persistent key-value storage that survives restarts:

pip install botwire

Now your scheduled AI agent remembers state between cron runs:

from botwire import Memory
from datetime import datetime

def scheduled_agent():
    memory = Memory("daily-processor")
    
    # Check where we left off
    last_processed = memory.get("last_item_id") or 0
    
    # Process only new items
    new_items = get_items_after(last_processed)
    
    for item in new_items:
        result = process_with_llm(item)
        
        # Save progress after each item
        memory.set("last_item_id", item.id)
        memory.set(f"result_{item.id}", result)
    
    # Track last run
    memory.set("last_run", datetime.now().isoformat())
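To actually run this on a schedule, a standard crontab entry works. The script path, interpreter path, and log location below are assumptions for illustration; they assume the code above is saved as a standalone script with a `__main__` guard:

```shell
# Run the agent every 10 minutes, appending stdout/stderr to a log file
*/10 * * * * /usr/bin/python3 /opt/agents/daily_processor.py >> /var/log/daily_processor.log 2>&1
```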

How Persistent Agent State Works

The Memory class gives your cron AI agent a persistent brain. Each namespace is isolated, so multiple agents don't interfere with each other.
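BotWire handles the storage itself, but the namespace-isolation idea can be sketched with a minimal file-backed stand-in. FileMemory below is hypothetical, not part of botwire; it just shows why keys in one namespace can't clobber keys in another:

```python
import json
import tempfile
from pathlib import Path

class FileMemory:
    """Illustrative stand-in for botwire.Memory: one JSON file per namespace."""

    def __init__(self, namespace, root="."):
        self.path = Path(root) / f"{namespace}.json"

    def _load(self):
        return json.loads(self.path.read_text()) if self.path.exists() else {}

    def get(self, key):
        return self._load().get(key)

    def set(self, key, value):
        data = self._load()
        data[key] = value
        self.path.write_text(json.dumps(data))

root = tempfile.mkdtemp()
a = FileMemory("agent-a", root)
b = FileMemory("agent-b", root)
a.set("last_item_id", 41)
b.set("last_item_id", 7)   # same key, different namespace: no collision
print(a.get("last_item_id"), b.get("last_item_id"))  # → 41 7
```

Because each namespace maps to its own store, two agents writing the same key never see each other's values.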

Tracking Multi-Step Workflows

def multi_step_agent():
    memory = Memory("workflow-agent")
    
    # Resume from last step
    current_step = memory.get("current_step") or "analyze"
    data = memory.get("workflow_data") or {}
    
    if current_step == "analyze":
        analysis = analyze_with_llm(data)
        memory.set("analysis_result", analysis)
        memory.set("current_step", "summarize")
        
    elif current_step == "summarize":
        analysis = memory.get("analysis_result")
        summary = summarize_with_llm(analysis)
        memory.set("final_summary", summary)
        memory.set("current_step", "complete")

Handling Failures and Restarts

Your periodic AI agent can recover from crashes by checking its last known state:

def resilient_agent():
    memory = Memory("resilient-processor")
    
    # Check if we were interrupted
    processing = memory.get("currently_processing")
    if processing:
        print(f"Resuming interrupted task: {processing}")
        # Retry or skip the interrupted item
        memory.set("currently_processing", None)
    
    for task in get_pending_tasks():
        # Mark as processing before starting
        memory.set("currently_processing", task.id)
        
        result = expensive_llm_task(task)
        
        # Mark as complete
        memory.set(f"completed_{task.id}", result)
        memory.set("currently_processing", None)

Memory operations are atomic HTTP calls, so your agent state stays consistent even if the process dies mid-execution.
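The cost savings come from checking completion markers before doing work. A minimal sketch with a dict-backed stand-in (DictMemory is hypothetical, not the botwire client) shows why a second run after a restart does no duplicate LLM work:

```python
calls = []  # records which tasks actually triggered "LLM" work

class DictMemory:
    """Illustrative stand-in for botwire.Memory."""
    def __init__(self):
        self.data = {}
    def get(self, key):
        return self.data.get(key)
    def set(self, key, value):
        self.data[key] = value

def expensive_llm_task(task_id):
    calls.append(task_id)            # side effect: track real work done
    return f"result-{task_id}"

def run_agent(memory, tasks):
    for task_id in tasks:
        if memory.get(f"completed_{task_id}") is not None:
            continue                 # finished in an earlier run: skip
        memory.set("currently_processing", task_id)
        result = expensive_llm_task(task_id)
        memory.set(f"completed_{task_id}", result)
        memory.set("currently_processing", None)

memory = DictMemory()
run_agent(memory, [1, 2, 3])
run_agent(memory, [1, 2, 3])         # second run finds the markers and skips
print(calls)  # → [1, 2, 3]
```

The second run pays nothing: every task already has a completed_ marker, so the expensive call is never repeated.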

Integration with AI Frameworks

For agents built with CrewAI, use the memory tools that let your scheduled LLM task remember and recall information:

from crewai import Agent, Task, Crew
from botwire.memory import memory_tools

# Create memory-enabled agent
analyst = Agent(
    role="Data Analyst",
    goal="Process daily reports with continuity",
    tools=memory_tools("daily-analyst"),  # remember, recall, list_memory tools
    backstory="I remember what I processed yesterday"
)

# Task that uses memory
analysis_task = Task(
    description="""
    1. Recall what data you processed yesterday using recall tool
    2. Process only new data since then  
    3. Remember your findings using remember tool
    """,
    agent=analyst
)

crew = Crew(agents=[analyst], tasks=[analysis_task])

The memory_tools function returns three tools your agent can use: remember to store information, recall to retrieve it, and list_memory to see what it has stored.
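Assuming the tool behavior described above, the semantics of the three tools can be sketched as plain functions over a key-value store. These stand-ins are illustrative only, not the actual botwire implementations:

```python
store = {}  # stand-in for the agent's persistent namespace

def remember(key, value):
    """Store a fact under a key (sketch of the 'remember' tool)."""
    store[key] = value
    return f"stored {key}"

def recall(key):
    """Retrieve a previously stored fact (sketch of the 'recall' tool)."""
    return store.get(key)

def list_memory():
    """List all stored keys (sketch of the 'list_memory' tool)."""
    return sorted(store)

remember("yesterday_batch", "reports 1-40")
print(recall("yesterday_batch"))  # → reports 1-40
print(list_memory())              # → ['yesterday_batch']
```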

When NOT to Use BotWire

BotWire isn't the right fit for every use case. Based on its free-tier limits and hosting model, look elsewhere when you need more than 1000 writes per day per namespace or more than 50MB of storage, or when sensitive data can't leave your infrastructure (unless you self-host the open-source version). If you already operate Redis or a database, reusing that may beat adding another dependency.

FAQ

Why not just use Redis or a database? You could, but then you need infrastructure, auth, connection handling, and error recovery. BotWire works immediately with zero setup.

Is this actually free? Yes. 1000 writes per day per namespace, 50MB storage, unlimited reads. No signup, no API key required.

What about data privacy? Data is stored on BotWire's servers by default. For sensitive data, self-host the open-source version (it's just FastAPI + SQLite).

Stop losing agent state between cron runs. Your scheduled AI agents should remember what they were doing and pick up where they left off. pip install botwire and add persistent memory in 3 lines of code. Try it at https://botwire.dev.
