How to Build Self-Planning AI Agents with LangChain DeepAgents and LangGraph in Python (2025 Guide)
Building AI agents that can break down complex tasks, manage their own filesystem context, and spawn sub-agents has traditionally required extensive scaffolding. You'd need to wire up prompts, implement context management, create tool interfaces, and handle state transitions manually. LangChain's DeepAgents framework eliminates this boilerplate by providing a batteries-included agent harness built on LangGraph.
This guide walks through building production-ready AI agents with planning capabilities, filesystem access, and hierarchical task delegation using DeepAgents.
What Makes DeepAgents Different from Basic LangChain Agents
Traditional LangChain agents require you to manually configure:
- Tool definitions and execution logic
- Context window management and summarization
- Prompt engineering for effective tool usage
- State persistence between agent calls
- Sub-agent spawning and isolation
DeepAgents ships with opinionated defaults for all of these. It's an agent harness rather than an agent framework—you get a working agent immediately and customize only what you need.
Core Features Out of the Box
Planning System: The write_todos tool enables agents to break down complex requests into trackable subtasks. The agent maintains a todo list and updates it as work progresses.
Filesystem Backend: Built-in tools (read_file, write_file, edit_file, ls, glob, grep) let agents persist context across turns. Large outputs automatically save to files instead of cluttering the conversation history.
Shell Access: The execute tool provides sandboxed command execution for running scripts, installing packages, or interacting with external systems.
Sub-Agent Delegation: The task tool spawns isolated sub-agents with their own context windows, perfect for delegating independent work streams.
Automatic Context Management: When conversations exceed token limits, DeepAgents auto-summarizes earlier messages while preserving critical information.
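To make the context-management behavior concrete, here is a deliberately simplified sketch of the underlying idea — when the estimated token count of the history exceeds a budget, older messages collapse into a single summary message. This is an illustration only, not DeepAgents' actual implementation (which uses an LLM to produce the summary rather than a placeholder string):

```python
# Illustrative sketch of automatic context management -- NOT DeepAgents'
# real code. When estimated tokens exceed a budget, older messages are
# collapsed into one summary message and only recent turns are kept.

def estimate_tokens(messages):
    # Rough heuristic: ~4 characters per token
    return sum(len(m["content"]) for m in messages) // 4

def compact(messages, budget=1000, keep_recent=2):
    """Collapse older messages into one summary when over budget."""
    if estimate_tokens(messages) <= budget:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # A real implementation would summarize `old` with an LLM call here.
    summary = {
        "role": "system",
        "content": f"[Summary of {len(old)} earlier messages]",
    }
    return [summary] + recent

history = [
    {"role": "user", "content": "x" * 4000},
    {"role": "assistant", "content": "y" * 4000},
    {"role": "user", "content": "What next?"},
]
compacted = compact(history, budget=1000)
```

The important property is that recent turns survive verbatim while older context degrades gracefully into a summary, so the agent never hits a hard context-window wall mid-task.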
Installation and Basic Setup
Install DeepAgents via pip or uv:
```bash
pip install deepagents

# or with uv for faster installs
uv add deepagents
```
Create your first agent with default configuration:
```python
from deepagents import create_deep_agent

# Initialize with defaults; make sure the API key for the
# default model provider is set in your environment
agent = create_deep_agent()

# Run a complex task
result = agent.invoke({
    "messages": [{
        "role": "user",
        "content": "Research LangGraph documentation and write a 500-word summary to langgraph_summary.md"
    }]
})

print(result["messages"][-1].content)
```
The agent will:
- Use write_todos to plan the research and writing tasks
- Execute web searches or file reads to gather information
- Write the summary using write_file
- Update the todo list as it completes each step
Customizing the Agent with Different Models
Swap the default model using LangChain's init_chat_model:
```python
from langchain.chat_models import init_chat_model
from deepagents import create_deep_agent

# Use Claude 3.5 Sonnet
agent = create_deep_agent(
    model=init_chat_model("anthropic:claude-3-5-sonnet-20241022")
)

# Or use a local model via Ollama
local_agent = create_deep_agent(
    model=init_chat_model("ollama:llama3.1")
)

# Or spell out the provider explicitly instead of prefixing it
openai_agent = create_deep_agent(
    model=init_chat_model("gpt-4o", model_provider="openai")
)
```
Different models have varying strengths:
- GPT-4o: Best for complex reasoning and tool use
- Claude 3.5 Sonnet: Excellent for code generation and following instructions
- Llama 3.1: Cost-effective for simpler tasks when running locally
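One lightweight way to act on those trade-offs is a small routing table that maps a task type to a model string before building the agent. The mapping below is illustrative (it reuses the model identifiers from the snippet above), not a recommendation baked into DeepAgents:

```python
# Illustrative model routing: pick a model string per task type, then
# hand it to init_chat_model / create_deep_agent. The mapping is an
# example, not part of the DeepAgents API.

MODEL_BY_TASK = {
    "reasoning": "openai:gpt-4o",
    "coding": "anthropic:claude-3-5-sonnet-20241022",
    "simple": "ollama:llama3.1",
}

def pick_model(task_type: str) -> str:
    """Fall back to the reasoning model for unknown task types."""
    return MODEL_BY_TASK.get(task_type, MODEL_BY_TASK["reasoning"])

# Usage (requires the relevant provider key / local server):
# agent = create_deep_agent(model=init_chat_model(pick_model("coding")))
```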
Adding Custom Tools to the Agent
Extend the agent with domain-specific tools:
```python
from langchain_core.tools import tool
from deepagents import create_deep_agent

@tool
def query_database(query: str) -> str:
    """Execute SQL query against the production database.

    Args:
        query: SQL query string to execute

    Returns:
        JSON string of query results
    """
    # Your database logic here
    return '{"results": [...]}'

@tool
def send_slack_message(channel: str, message: str) -> str:
    """Send a message to a Slack channel.

    Args:
        channel: Slack channel name (e.g., '#engineering')
        message: Message content to send

    Returns:
        Confirmation message
    """
    # Your Slack integration here
    return f"Message sent to {channel}"

agent = create_deep_agent(
    tools=[query_database, send_slack_message],
    system_prompt="You are a data analyst assistant with database and Slack access."
)
```
The agent automatically learns to use your custom tools alongside built-in ones. The system prompt guides behavior and sets expectations.
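The name, description, and argument schema the model sees come from the function itself: its name, docstring, and type hints. A rough stdlib-only approximation of that extraction (the real @tool decorator builds a full Pydantic schema, so treat this as a sketch of the idea):

```python
# Stdlib-only approximation of the metadata a @tool-style decorator
# derives from a function. The real decorator builds a richer schema;
# this just shows where each piece comes from.
import inspect
from typing import get_type_hints

def describe_tool(fn) -> dict:
    """Extract a tool spec from a function's name, docstring, and hints."""
    doc = inspect.getdoc(fn) or ""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    return {
        "name": fn.__name__,
        "description": doc.splitlines()[0] if doc else "",
        "args": {name: hint.__name__ for name, hint in hints.items()},
    }

def query_database(query: str) -> str:
    """Execute SQL query against the production database."""
    return '{"results": []}'

spec = describe_tool(query_database)
```

This is why descriptive docstrings and precise type hints matter so much for tool-using agents: they are the only documentation the model gets.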
Implementing Sub-Agent Workflows
For complex multi-step tasks, delegate work to isolated sub-agents:
```python
from deepagents import create_deep_agent

parent_agent = create_deep_agent(
    system_prompt="""You are a research coordinator.
    When given a research topic, break it into subtopics and delegate
    each subtopic to a sub-agent using the 'task' tool."""
)

result = parent_agent.invoke({
    "messages": [{
        "role": "user",
        "content": """Research the following topics and compile findings:
        1. LangGraph architecture and state management
        2. DeepAgents tool implementation patterns
        3. Production deployment considerations for AI agents
        Write a consolidated report to research_report.md"""
    }]
})
```
The parent agent will:
- Use write_todos to create a task list
- Spawn three sub-agents via the task tool (one per subtopic)
- Let each sub-agent work in isolation with its own context
- Collect sub-agent outputs and synthesize the final report
Sub-agents inherit the parent's configuration but maintain separate conversation histories, preventing context pollution.
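Beyond ad-hoc delegation, DeepAgents lets you declare specialized sub-agents up front via a subagents parameter. The field names below follow the sub-agent shape described in the DeepAgents docs (name, description, prompt), but verify them against your installed version before relying on them:

```python
# Hypothetical sub-agent spec. Field names follow the shape described
# in the DeepAgents docs (name/description/prompt) -- check your
# installed version before relying on them.
research_subagent = {
    "name": "researcher",
    "description": "Handles focused research on a single narrow topic.",
    "prompt": "You research exactly one topic and reply with a bullet summary.",
}

# Passing it in (requires deepagents and a provider API key):
# from deepagents import create_deep_agent
# agent = create_deep_agent(subagents=[research_subagent])
```

The description field matters most: it is what the parent agent reads when deciding which sub-agent to hand a task to.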
Configuring Filesystem Sandboxing
By default, DeepAgents provides filesystem access. For production deployments, configure sandboxing:
```python
from deepagents import create_deep_agent

agent = create_deep_agent(
    # Restrict filesystem access to a specific directory
    working_directory="/app/agent_workspace",
    # Prevent access to parent directories
    allow_parent_directory_access=False
)
```
For even stricter isolation, deploy agents in containers or use remote execution backends.
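Whatever the configuration surface, the heart of filesystem sandboxing is a single check: resolve every requested path against the workspace root and refuse anything that escapes it. A minimal stdlib sketch of that check (not DeepAgents' internal code):

```python
# Minimal path-confinement check -- the core idea behind filesystem
# sandboxing. Not DeepAgents' internal implementation.
from pathlib import Path

def resolve_in_workspace(root: str, requested: str) -> Path:
    """Resolve `requested` under `root`; reject anything that escapes."""
    root_path = Path(root).resolve()
    candidate = (root_path / requested).resolve()
    # Symlinks and ../ segments are normalized by resolve(), so a simple
    # prefix check on the resolved path is sufficient here.
    if not candidate.is_relative_to(root_path):
        raise PermissionError(f"{requested!r} escapes the workspace")
    return candidate

safe = resolve_in_workspace("/tmp/agent_workspace", "notes/summary.md")
# resolve_in_workspace("/tmp/agent_workspace", "../etc/passwd")
# would raise PermissionError
```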
Integrating MCP Servers for Extended Capabilities
The Model Context Protocol (MCP) allows agents to connect to external data sources and APIs. Use langchain-mcp-adapters for integration:
The snippet below uses the MultiServerMCPClient from langchain-mcp-adapters; the GitHub server command shown is one common configuration, so adjust it to whichever MCP servers you actually run:

```python
import asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient
from deepagents import create_deep_agent

async def build_agent():
    # Connect to an MCP server (here: the GitHub server over stdio)
    client = MultiServerMCPClient({
        "github": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-github"],
            "transport": "stdio",
        }
    })
    mcp_tools = await client.get_tools()
    return create_deep_agent(
        tools=mcp_tools,
        system_prompt="You have access to GitHub repositories via MCP.",
    )

agent = asyncio.run(build_agent())
```
This enables agents to interact with third-party services without custom tool implementations.
Comparing Agent Framework Approaches
| Feature | DeepAgents | Raw LangGraph | AutoGPT | CrewAI |
|---------|-----------|---------------|---------|--------|
| Setup complexity | Single function call | Manual graph construction | Configuration files | Class-based definitions |
| Built-in planning | ✅ write_todos | ❌ Manual | ✅ Built-in | ✅ Built-in |
| Filesystem tools | ✅ 6 tools included | ❌ Manual | ✅ Included | ❌ Manual |
| Sub-agent spawning | ✅ task tool | ⚠️ Manual subgraphs | ✅ Supported | ✅ Crew concept |
| Context management | ✅ Auto-summarization | ❌ Manual | ⚠️ Basic | ⚠️ Basic |
| LangSmith integration | ✅ Native | ✅ Native | ❌ | ❌ |
DeepAgents sits between high-level frameworks like AutoGPT and low-level primitives like raw LangGraph. You get opinionated defaults without sacrificing customization.
Debugging and Monitoring with LangSmith
DeepAgents integrates natively with LangSmith for observability:
```python
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-api-key"

from deepagents import create_deep_agent

agent = create_deep_agent()
result = agent.invoke({
    "messages": [{"role": "user", "content": "Debug this error log"}]
})
```
View the complete execution trace at https://smith.langchain.com, including:
- Token usage per LLM call
- Tool invocations and results
- Sub-agent spawning hierarchy
- Context summarization triggers
Using the DeepAgents CLI for Local Development
The CLI provides a pre-built coding agent similar to Claude Code or Cursor:
```bash
# Install the CLI
pip install deepagents-cli

# Run interactively
deepagents

# Run in headless mode for scripting
deepagents --headless "Refactor src/utils.py to use async/await"
```
The CLI includes all SDK features plus:
- Interactive TUI with streaming responses
- Web search grounding
- Persistent memory across sessions
- Human-in-the-loop approval for destructive operations
Use it during development to prototype agent behaviors before embedding them in applications.
Production Deployment Patterns
Deploy DeepAgents in production using these patterns:
Serverless Functions
Deploy on Vercel or AWS Lambda for request-based agents:
```python
# vercel_function.py
import os

from deepagents import create_deep_agent

agent = create_deep_agent(
    model=os.environ.get("MODEL", "openai:gpt-4o")
)

def handler(request):
    result = agent.invoke({"messages": request.json["messages"]})
    return {"response": result["messages"][-1].content}
```
Long-Running Background Workers
For tasks that exceed serverless timeouts, deploy on DigitalOcean App Platform or Render:
```python
# worker.py
import json
import os

import redis

from deepagents import create_deep_agent

agent = create_deep_agent()
queue = redis.from_url(os.environ["REDIS_URL"])

while True:
    _, raw = queue.blpop("agent_tasks")
    job = json.loads(raw)
    result = agent.invoke({"messages": job["messages"]})
    queue.set(f"result:{job['id']}", result["messages"][-1].content)
```
Kubernetes Deployments
Scale horizontally using Kubernetes with persistent state:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deepagents-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deepagents-worker
  template:
    metadata:
      labels:
        app: deepagents-worker
    spec:
      containers:
        - name: agent
          image: your-registry/deepagents:latest
          env:
            - name: LANGCHAIN_API_KEY
              valueFrom:
                secretKeyRef:
                  name: langchain-secret
                  key: api-key
```
Next Steps and Further Reading
You now have a production-ready AI agent with planning, filesystem access, and sub-agent delegation. Key takeaways:
- Start simple: use create_deep_agent() with defaults, customize only when needed
- Add tools incrementally: begin with built-in tools, add custom ones as requirements emerge
- Monitor with LangSmith: enable tracing from day one to understand agent behavior
- Test sub-agent workflows: complex tasks benefit from hierarchical delegation
- Deploy deliberately: choose deployment patterns based on task duration and scale
Explore the official DeepAgents documentation for advanced patterns including custom memory backends, approval workflows, and multi-modal agents.
For TypeScript implementations, check out deepagents.js which provides feature parity with the Python library.