How to Set Up Screenpipe with Claude MCP for Automated Agent Workflows on macOS
If you're a developer looking to build AI agents that understand and react to your actual work patterns, screenpipe combined with Claude MCP offers a powerful local-first solution. Unlike cloud-based activity monitoring, everything runs on your machine with zero data leaving your system.
This guide walks you through installing screenpipe, configuring the Claude MCP integration, and building your first agent pipe.
Why Screenpipe + Claude MCP for Developer Workflows
Developers face a common problem: AI agents lack context about what you're actually doing. Claude MCP (Model Context Protocol) bridges this gap by giving Claude real-time access to your screen activity, application context, and audio transcription.
With screenpipe, you get:
- Local processing: 100% on-device, no external API calls for recording
- Context awareness: Claude sees your actual work, not just chat history
- Privacy by default: passwords and PII are filtered from captures; optional encryption at rest
- Agent automation: Trigger custom "pipes" (agents) based on specific work activities
- Accessibility-first: Uses your OS accessibility tree instead of fragile image processing
Prerequisites
Before starting, verify your environment:
- macOS 11+ (Intel or Apple Silicon)
- Node.js 16+ installed
- Claude desktop app with MCP support enabled
- 4GB+ RAM available (screenpipe uses 0.5-3GB)
- 20GB+ free storage (roughly 20GB/month of recordings)
Step 1: Install Screenpipe Desktop App
The easiest approach is downloading the native macOS desktop app:
- Visit screenpi.pe/onboarding
- Download the macOS installer (Universal binary for Intel/Apple Silicon)
- Drag the app to Applications folder
- Open Screenpipe from Applications
- Grant system permissions when prompted (Screen Recording, Microphone, Accessibility)
The desktop app includes auto-updates and all core features in a one-time purchase model.
Step 2: Initialize Screenpipe Recording
Once the app is running, start the recording daemon via CLI:
npx screenpipe@latest record
This command:
- Starts continuous screen capture (~5-10% CPU usage)
- Begins audio transcription with speaker identification
- Creates local SQLite database for searchable recordings
- Monitors app switches and keyboard input (optional)
- Applies PII filters to sensitive data
You should see output indicating recording is active. Leave this running in the background—screenpipe is designed for 24/7 operation.
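Under the hood, each capture becomes a row in that local SQLite database, which is what makes your history searchable. The sketch below illustrates the idea with a simplified, hypothetical `ocr_text` table; screenpipe's actual schema is more elaborate, so treat the table and column names as assumptions:

```python
import sqlite3

# Sketch only: we assume a simplified "ocr_text" table to show how captured
# frames become searchable rows. The real screenpipe schema differs.
def recent_ocr(conn, limit=5):
    """Return the most recent OCR captures as (app_name, text) tuples."""
    cur = conn.execute(
        "SELECT app_name, text FROM ocr_text ORDER BY timestamp DESC LIMIT ?",
        (limit,),
    )
    return cur.fetchall()

# Demo against an in-memory database standing in for the on-disk one
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ocr_text (timestamp INTEGER, app_name TEXT, text TEXT)")
conn.executemany(
    "INSERT INTO ocr_text VALUES (?, ?, ?)",
    [(1, "VS Code", "def main():"), (2, "Terminal", "npm test passed")],
)
print(recent_ocr(conn))  # most recent capture first
```

Because everything lives in one local database file, "search my work history" reduces to ordinary SQL over timestamped rows.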
Step 3: Add Screenpipe MCP to Claude Desktop
Now connect screenpipe to Claude so the AI can query your activity:
claude mcp add screenpipe -- npx -y screenpipe-mcp
This command:
- Registers screenpipe as an MCP server with Claude
- Allows Claude to call screenpipe functions directly
- Enables natural language queries about your work history
Restart Claude desktop after running this command.
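If you prefer to register the server by hand, Claude Desktop reads MCP servers from its config file (on macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`). The `claude mcp add` command above produces an entry equivalent to this:

```json
{
  "mcpServers": {
    "screenpipe": {
      "command": "npx",
      "args": ["-y", "screenpipe-mcp"]
    }
  }
}
```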
Step 4: Test Claude Integration
Open Claude and try these queries to verify integration:
User: What did I see in the last 5 minutes?
User: Summarize today's conversations
User: What files did I work on between 2pm and 3pm?
Claude should respond with specific details from your recorded screen and audio. If you get errors, check that:
- The screenpipe record process is still running
- The Claude desktop app is updated to the latest version
- You granted Screenpipe the necessary system permissions
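You can also query the recorder directly, bypassing Claude, to isolate where a failure lies. This sketch assumes screenpipe's default local REST API on port 3030 with a `/search` endpoint; if your install uses a different port or route, adjust accordingly:

```python
from urllib.parse import urlencode

# Assumption: the recorder serves a local REST API on port 3030 with /search.
BASE = "http://localhost:3030"

def search_url(query, content_type="ocr", limit=10):
    """Build a search URL against the local screenpipe API."""
    return f"{BASE}/search?" + urlencode(
        {"q": query, "content_type": content_type, "limit": limit}
    )

url = search_url("linear", content_type="audio", limit=5)
print(url)
# While the recorder is running, fetch this with urllib.request.urlopen(url);
# an HTTP 200 with JSON results means the data side works and any remaining
# problem is in the Claude/MCP connection.
```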
Step 5: Create Your First Agent Pipe
Pipes are agents triggered by your work activity. Here's an example pipe that updates Linear when you work on specific tasks:
claude mcp ask screenpipe "create a pipe that updates linear every time i work on task X"
Claude will help you define:
- Trigger condition: What activity activates this agent (e.g., "when user opens specific file")
- Action: What the agent does (e.g., "POST to Linear API with summary")
- Filters: Which apps/windows to monitor
Alternatively, create a custom pipe by writing to screenpipe's pipe directory:
{
  "name": "linear-updater",
  "trigger": "app_focus",
  "target_apps": ["Xcode", "VS Code"],
  "action": "http_post",
  "endpoint": "https://api.linear.app/graphql",
  "condition": "contains_keyword(['completed', 'finished', 'merged'])"
}
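To make the trigger semantics concrete, here is a hypothetical sketch of how a pipe runtime might evaluate that config against an incoming event. The function names mirror the example above rather than a documented screenpipe API (real pipes are typically written as plugins, not evaluated like this):

```python
# Hypothetical evaluation of the pipe config above; names are illustrative.
def contains_keyword(keywords, window_text):
    """True if any keyword appears in the captured window text."""
    text = window_text.lower()
    return any(k.lower() in text for k in keywords)

def should_fire(event, pipe):
    """Check an app-focus event against the pipe's trigger, apps, and condition."""
    return (
        event["type"] == pipe["trigger"]
        and event["app"] in pipe["target_apps"]
        and contains_keyword(["completed", "finished", "merged"], event["text"])
    )

pipe = {"trigger": "app_focus", "target_apps": ["Xcode", "VS Code"]}
event = {"type": "app_focus", "app": "VS Code", "text": "PR merged into main"}
print(should_fire(event, pipe))  # True: right app, keyword "merged" present
```

Only when all three checks pass does the action (the HTTP POST to Linear) run, which keeps noisy activity from spamming your issue tracker.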
Comparison: Screenpipe vs. Other Local AI Recording Tools
| Feature | Screenpipe | Cursor AI | GitHub Copilot | Claude Browser |
|---------|-----------|-----------|----------------|----------------|
| Local Recording | ✅ Full screen+audio | ✅ Code only | ❌ Cloud only | ❌ Cloud only |
| Privacy (On-Device) | ✅ 100% local | ✅ File context | ❌ Sent to cloud | ❌ Sent to cloud |
| Agent Automation | ✅ Custom pipes | ⚠️ Limited | ❌ No | ❌ No |
| MCP Integration | ✅ Native | ⚠️ Partial | ❌ No | ✅ Native |
| Storage Requirement | ~20GB/month | <1GB | N/A | N/A |
| Cost Model | One-time purchase | Subscription | Subscription | Subscription |
Configuring Filters and Privacy
Screenpipe automatically filters passwords and common PII patterns, but you can customize filters:
- Open Screenpipe settings (desktop app)
- Navigate to Privacy → Filters
- Add window names to exclude (e.g., banking apps)
- Enable "Encryption at Rest" for extra security
- Configure which apps trigger recording pause
Example filter configuration:
# ~/.screenpipe/config.json
{
  "filters": {
    "blocked_apps": ["1Password", "LastPass", "Banking"],
    "blocked_windows": ["Gmail - Private"],
    "pii_detection": true,
    "encryption_enabled": true
  }
}
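The effect of those filters is a gate in front of every capture. This sketch shows the implied logic; the keys (`blocked_apps`, `blocked_windows`) mirror the example config above rather than a documented schema:

```python
# Sketch of the filtering implied by the config above (keys are assumptions).
def should_capture(app_name, window_title, filters):
    """Decide whether a frame from (app, window) may be recorded."""
    if app_name in filters["blocked_apps"]:
        return False  # entire app excluded, e.g. password managers
    if any(blocked in window_title for blocked in filters["blocked_windows"]):
        return False  # specific window titles excluded by substring match
    return True

filters = {
    "blocked_apps": ["1Password", "LastPass", "Banking"],
    "blocked_windows": ["Gmail - Private"],
}
print(should_capture("1Password", "Vault", filters))               # False
print(should_capture("Safari", "Gmail - Private - Inbox", filters))  # False
print(should_capture("VS Code", "main.py", filters))               # True
```

Note the window filter matches substrings, so a single entry can cover many tab or window titles.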
Troubleshooting Common Setup Issues
"Permission denied" on screen recording
macOS requires explicit permission. Go to System Preferences → Security & Privacy → Screen Recording, find Screenpipe, and enable it.
"MCP connection failed" with Claude
Ensure screenpipe record process is running:
ps aux | grep screenpipe
If missing, restart with npx screenpipe@latest record.
High CPU usage (>15%)
Screenpipe should use 5-10% normally. If higher:
- Reduce recording resolution in settings
- Exclude resource-heavy apps (video players, browsers with many tabs)
- Check for memory leaks:
top | grep screenpipe
No audio transcription
Verify microphone permission in System Preferences → Security & Privacy → Microphone and make sure Screenpipe is enabled. You can confirm macOS detects an input device at all with:
system_profiler SPAudioDataType
After granting permission, restart the screenpipe record process so transcription picks up the device.
Advanced: Building Multi-Step Agent Workflows
Once basic setup works, chain multiple pipes for complex automation:
1. Screenpipe detects: "user opened PR on GitHub"
2. Agent pipe 1: "extract PR description and create Linear issue"
3. Agent pipe 2: "summarize PR changes and post to Slack"
4. Agent pipe 3: "log time spent on code review"
Define workflow order in screenpipe config to prevent race conditions.
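The four steps above can be sketched as a linear chain where each pipe consumes the previous pipe's output, which is exactly why running them in declared order matters. This is illustrative pseudocode of the pattern, not screenpipe's actual pipe runtime:

```python
# Illustrative chain: run steps in declared order, threading a shared
# context dict through so later pipes can consume earlier outputs.
def run_workflow(event, steps):
    """Run each step in order; each step receives and returns the context."""
    ctx = {"event": event}
    for step in steps:
        ctx = step(ctx)
    return ctx

def extract_pr(ctx):
    # Pipe 1: turn the detected activity into a Linear issue description
    ctx["issue"] = f"Linear issue for: {ctx['event']}"
    return ctx

def summarize(ctx):
    # Pipe 2: derive a Slack message from pipe 1's output
    ctx["slack_message"] = ctx["issue"].upper()
    return ctx

result = run_workflow("user opened PR on GitHub", [extract_pr, summarize])
print(result["slack_message"])
```

If the steps ran concurrently instead, `summarize` could fire before `extract_pr` had populated `ctx["issue"]`, which is the race condition the ordered config prevents.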
Performance Optimization Tips
- Storage: Screenpipe uses ~20GB/month. Enable compression or archive old recordings
- RAM: Runs efficiently on 0.5-3GB; close unnecessary apps if you're near limit
- CPU: 5-10% baseline; disable accessibility tree capture if you only need OCR
- Offline mode: Screenpipe works completely offline; MCP queries are fastest when local
Next Steps
- Explore prebuilt pipes on screenpipe's Discord community
- Read the full documentation at docs.screenpi.pe
- Build your first custom pipe using Claude's interactive guidance
- Share your workflow with the open-source community on GitHub
Screenpipe + Claude MCP gives you a private, powerful AI assistant that understands your actual work—no cloud dependency, no data leaks, completely under your control.