OPEN-SOURCE DATA WORKFLOW ORCHESTRATION
Dataflows as easy as .py
Prefect 1.0 makes it easy to build, test, and run dataflows right from your Python code. With an intuitive API and 50+ integrations, quickly turn even the most complex data pipelines into managed workflows.
Build 2x faster
Get started quickly with our Python framework and tons of examples, right from your local machine.
Easily add automatic failure detection and handling to your pipelines.
Supercharge Your Pipelines
Make pipelines smarter with granular scheduling, dependencies, parameters and more.
Connect It All
Connect your entire data stack with a broad library of pre-built integrations.
Fast out of the gate
Prefect is just Python, so you can quickly turn code-based data processes into intelligent tasks and flows. Code stays readable and maintainable, so even non-data engineers can build their own flows.
Check Out The Tutorial →
Fewer failures, greater peace of mind
Use features such as retries, restarts, and timeouts to handle any error or failure automatically and elegantly. Know when problems occur with real-time notifications, and easily identify root causes to get back up and running quickly.
Learn More →
Your workflows, supercharged
If you can think it, you can build it in Prefect. Run tasks in parallel, pass data between them, and create inter-flow dependencies. And that's just the start.
Learn More →
Take us to your data stack
Prefect has dozens of integrations with the most popular tools across the modern data stack, or you can build your own. With extensive examples and community support, there's no task you won't be able to tackle.
View our integrations →
Plus other features
Secure by Design
With our Hybrid Execution Model, Prefect never sees your code or data, so it can run in even the most sensitive environments.
Anywhere you can run Python, you can run Prefect. Instantly deploy your flows and monitor runs from Prefect's UI, no Docker required.
Add parameters to any flow for easy runtime templating and reuse.
Pass data between tasks for complex processing and advanced analytics.
Powerful map/reduce operators generate dynamic tasks for each element of an input. Mapped tasks can be linked to create parallel pipelines.
Prefect task outputs can be cached or updated at different intervals, even within the same workflow.
Serialize data in and out of your tasks with customizable result handlers, including local filesystems, S3, and GCS.
Loop tasks with arbitrary control logic.
Race through mapped pipelines by allowing tasks to start before all tasks of the previous stage have finished.
Fire off flow runs in response to external events of any frequency.