
Data engineers engage in great toil. In fact, toiling and later automating toil may be a defining feature of data engineering.
"The Foo report looks stale, can you check the pipeline?"
As data engineers well know, such requests often lead to extended bouts of yak shaving and/or deep rabbit-hole exploration.
One does not simply "check the pipeline"
Data engineers use lots of tools. Tools are helpful, but they are also a source of cognitive overhead: you must learn to use them.
Data engineers use data warehouses, data connectors, reverse data connectors, QA tools, dbt, parallel compute libraries, etc. All of these APIs are a lot to keep in your head or in internal wikis.
Data engineers almost always use an "orchestrator" (either a dedicated one like Prefect, or one they rolled themselves) to manage the batch and sometimes streaming workloads that compose these tools and support their organization's data needs.
Therefore, orchestrators often end up being the closet that all the complexity gets shoved into. "Checking a pipeline" is often an exercise in knowingly and begrudgingly stepping on the rake of your entire organization's data practice (or malpractice).
When you are "checking the pipeline", it's not clear whether the issue is:
- your fault for not reviewing ChatGPT's slop code
- one of your tools' fault for releasing breaking changes
- ex-employee Jim's fault from 3 years ago
- Daylight savings time
- DNS
So data engineers have plied their trade by rolling up their sleeves, breaking out their terminals, reading third-party documentation, and troubleshooting the problematic pipeline for hours on end.
Remove some toil from the investigation
Enter the Model Context Protocol (MCP) - a standard that defines how AI applications connect to external systems. Cool ideas/protocols/abstractions are fun to banter about, but here MCP actually offers us something: a way to connect our assistants to third-party silos, in a compounding way. That is, dbt can build one MCP server, and then everyone can hook it up to their AI assistant and ask "hey Claude, what revenue metrics are available?" without ever writing integration code or knowing how dbt works at all.
In the same vein, we built an MCP server for Prefect that gives MCP clients like Claude Code, ChatGPT, and Cursor read-only access to your Prefect control plane: ask questions about your deployments, find logs and events from fuzzy English descriptions, and prompt assistants like Claude to solve your toil-rich problems!
Agents can investigate your Prefect world via the MCP server, then take action within it via traditional avenues like:
- writing code
- using the prefect CLI
- using other MCP servers
Note that MCP clients are not created equal. Claude Code can edit your deployment code, commit it, and run uv run prefect deploy --all for you, but ChatGPT cannot normally access your filesystem.
How it works in practice
Say you're using Claude Code:
You ask:
"Why is X flow failing?"
Claude will:
- Use tools to fetch the flow run details and execution logs
- Identify the issue (e.g., a missing environment variable, an API timeout, etc.)
- Suggest a fix and, after you approve it, update your pipeline code and redeploy using the prefect CLI
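A hands-on version of that loop, sketched as Prefect 3 CLI commands (the flow run ID is a placeholder, and exact flags may vary by version - check each command's --help):

```shell
# Inspect the failing run's details and final state (ID is a placeholder)
prefect flow-run inspect <flow-run-id>

# Pull the run's execution logs to find the error
prefect flow-run logs <flow-run-id>

# After fixing the code, redeploy the deployments defined in your project
uv run prefect deploy --all
```

These are the same reads and actions the assistant performs, minus the copy-pasting of IDs between terminal and browser.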
Another common thing you might ask:
"What's causing these deployment delays?"
Claude will:
- Check work pool and work queue status
- Review concurrency limits (global, deployment-level, work pool, work queue)
- Analyze recent flow run patterns
It can see if you're hitting rate limits, concurrency constraints, or worker availability issues - and explain what's happening in plain English.
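The same checks can be done by hand; a sketch of the relevant Prefect CLI reads (the pool name is a placeholder, and flags may differ by version):

```shell
# Work pool health and configuration (pool name is a placeholder)
prefect work-pool inspect "my-pool"

# Work queues and their concurrency settings
prefect work-queue ls

# Tag-based concurrency limits and how many slots are in use
prefect concurrency-limit ls
```

The assistant's advantage is running all of these, cross-referencing the results with recent flow runs, and summarizing the bottleneck for you.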
"Cancel all the late runs for deployment X"
Claude will:
- use tools to query which flow runs are late for that deployment
- run prefect flow-run cancel <id> commands via terminal
- use tools again to verify all runs are now cancelled
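The steps above roughly correspond to this manual CLI sequence (a sketch - the state filter flag and ID are assumptions to verify against `prefect flow-run ls --help`):

```shell
# 1. Find late runs (filter flag assumed; confirm with --help)
prefect flow-run ls --state Late

# 2. Cancel each late run by ID (placeholder ID)
prefect flow-run cancel <flow-run-id>

# 3. Verify nothing is still marked Late
prefect flow-run ls --state Late
```
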
You don’t have to context switch or copy IDs around.
You should expect that any "read" you can perform with the Prefect API, a reasonable MCP client should be able to accomplish via the MCP server. There will be edges to smooth here, and the current state of MCP clients has issues in general, but this is why we have capability-focused evals that do not depend on specific client implementations!
Get started
The Prefect MCP is published in the MCP registry and on GitHub.
You can easily add the MCP server to your favorite MCP client with one command, for example:
claude mcp add prefect -- uvx --from prefect-mcp prefect-mcp-server
Find installation commands for other MCP clients in the docs.
Want to share this with your team? Deploy to FastMCP Cloud (optionally use a service account to define the Prefect API access the MCP server should have) to get a hosted endpoint your team can add to their MCP clients. Each member of your team can run:
claude mcp add prefect --transport http https://your-server-name.fastmcp.app/mcp
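Equivalently, teams using Claude Code can check a project-scoped .mcp.json into the repo so everyone picks up the server automatically; a sketch, assuming the hosted endpoint from above:

```json
{
  "mcpServers": {
    "prefect": {
      "type": "http",
      "url": "https://your-server-name.fastmcp.app/mcp"
    }
  }
}
```
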
Want to see everything it enables? Check out the evals - they include scenarios like canceling late runs, debugging failures, and more.
This is a beta release - the tools offered and other exact details may change, and we'd love your feedback.
Check out the repo and feel free to:
- open an issue
- contribute a PR
- suggest an eval
or share what you build with it!
Questions?
Join the Prefect Community Slack or open a discussion.