An Engine for Innovation
from prefect import task, Flow

@task
def say_hello():
    print("Hello, world!")

with Flow("My First Flow") as flow:
    say_hello()

flow.run()  # "Hello, world!"
Practice Makes Prefect
Developed in partnership with hundreds of data scientists and engineers to ensure it reflects best practices, Prefect Core has been successfully deployed everywhere from data-science bootcamps to MLB teams to Fortune-100 companies.
Realtime UI
The Prefect UI updates in real time, so you're never behind.
Universal Deploy
Anywhere you can run Python, you can run Prefect. Instantly deploy your flows and monitor runs from Prefect's UI, no Docker required.
Flow code
Prefect flows are plain old Python, so you can build and modify them however you like.
Parameters
Add parameters to any flow for easy runtime templating and reuse.
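As a rough sketch (using the Prefect 1.x Parameter API; the flow name and default value are just illustrative):

from prefect import task, Flow, Parameter

@task
def say_hello(name):
    print(f"Hello, {name}!")

with Flow("Parameterized Flow") as flow:
    # Parameters are resolved at runtime, with an optional default
    name = Parameter("name", default="world")
    say_hello(name)

flow.run()                               # "Hello, world!"
flow.run(parameters={"name": "Marvin"})  # "Hello, Marvin!"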
Robust states
Prefect handles every error, whether expected or not. Some tasks might run only if upstream tasks fail.
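For instance, a task can be given a trigger so it fires only when something upstream fails. A minimal sketch, assuming the 1.x trigger API; the task names are illustrative:

from prefect import task, Flow
from prefect.triggers import any_failed

@task
def extract():
    raise ValueError("something went wrong")

# This task runs only when at least one upstream task has failed
@task(trigger=any_failed)
def alert_on_failure():
    print("Upstream task failed; sending an alert...")

with Flow("Trigger Example") as flow:
    e = extract()
    alert_on_failure(upstream_tasks=[e])

flow.run()  # extract fails, but alert_on_failure still runs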
Dataflow
Pass data between tasks for complex processing and advanced analytics.
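A small ETL-style sketch, with illustrative task names, showing return values flowing from one task to the next:

from prefect import task, Flow

@task
def extract():
    return [1, 2, 3]

@task
def transform(data):
    return [x * 10 for x in data]

@task
def load(data):
    print(f"Loaded: {data}")

with Flow("ETL") as flow:
    # Each return value becomes the input of the downstream task
    load(transform(extract()))

flow.run()  # "Loaded: [10, 20, 30]"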
Mapping
Powerful map/reduce operators generate dynamic tasks for each element of an input. Mapped tasks can be linked to create parallel pipelines.
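A minimal map/reduce sketch (task names are illustrative): map spawns one task per element, and a downstream task gathers the mapped results back into a list:

from prefect import task, Flow

@task
def generate_numbers():
    return [1, 2, 3, 4]

@task
def square(x):
    return x ** 2

@task
def total(values):
    print(sum(values))

with Flow("Map Reduce") as flow:
    numbers = generate_numbers()
    squared = square.map(numbers)  # one dynamic task per element
    total(squared)                 # reduce: receives the gathered results

flow.run()  # prints 30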
Environments
A flexible environment model means flows can be deployed anywhere from a laptop to multi-cloud clusters.
Realtime
When paired with Dask, Prefect's event-driven scheduler can execute tasks with millisecond latency.
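A sketch of handing a flow to Dask, assuming the 1.x DaskExecutor (the import path and scheduler address vary by Prefect version and setup):

from prefect import task, Flow
from prefect.engine.executors import DaskExecutor

@task
def inc(x):
    return x + 1

with Flow("Dask Flow") as flow:
    inc.map(list(range(100)))

# Point the executor at an existing Dask scheduler (address is illustrative),
# or construct DaskExecutor() with no address to spin up a temporary local cluster.
flow.run(executor=DaskExecutor(address="tcp://127.0.0.1:8786"))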
Time Travel
Prefect task outputs can be cached or updated at different intervals, even within the same workflow.
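A sketch using the 1.x cache_for keyword; the one-hour window and task names are illustrative:

import datetime
from prefect import task, Flow

# Cache this task's output; runs inside the caching window
# reuse the cached value instead of recomputing it.
@task(cache_for=datetime.timedelta(hours=1))
def expensive_lookup():
    print("computing...")
    return 42

@task
def report(value):
    print(f"value = {value}")

with Flow("Cached Flow") as flow:
    report(expensive_lookup())

flow.run()  # computes the value
flow.run()  # within the hour, reuses the cached output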
Result Handlers
Serialize data in and out of your tasks with customizable result handlers, including local filesystems, S3, and GCS.
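A sketch attaching a LocalResultHandler from the 1.x result-handler API; the output directory is illustrative, and checkpointing may also need to be enabled in Prefect's configuration:

from prefect import task, Flow
from prefect.engine.result_handlers import LocalResultHandler

# Persist this task's return value to the local filesystem; S3ResultHandler
# and GCSResultHandler follow the same pattern.
@task(checkpoint=True, result_handler=LocalResultHandler(dir="/tmp/prefect-results"))
def compute():
    return {"answer": 42}

with Flow("Result Handler Flow") as flow:
    compute()

flow.run()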
Custom Schedules
Specify custom schedule logic including business days, offsets, and blackout windows, or fall back on good old cron.
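A sketch of the cron fallback, assuming the 1.x CronSchedule; the expression (9am on weekdays) is illustrative:

from prefect import task, Flow
from prefect.schedules import CronSchedule

# Run at 9am Monday through Friday; any cron expression works here
weekday_mornings = CronSchedule("0 9 * * 1-5")

@task
def pull_report():
    print("pulling the daily report")

with Flow("Scheduled Flow", schedule=weekday_mornings) as flow:
    pull_report()

flow.run()  # waits for each scheduled tick and kicks off a run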
Looping
Loop tasks with arbitrary control logic.
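A sketch using the 1.x LOOP signal, which re-runs a task while threading state through Prefect's context (the counting logic is illustrative):

import prefect
from prefect import task, Flow
from prefect.engine.signals import LOOP

@task
def count_to(n):
    # task_loop_count starts at 1 and increments on every LOOP iteration
    loop_count = prefect.context.get("task_loop_count", 1)
    if loop_count < n:
        raise LOOP(message=f"Iteration {loop_count}", result=loop_count)
    return loop_count

with Flow("Looping Flow") as flow:
    count_to(5)

flow.run()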
Event-Driven Flows
Fire off flow runs in response to external events of any frequency.
Task Affinity
Run each of a flow's tasks in a completely different environment, including new dependencies or platforms.
Depth-First Execution
Race through mapped pipelines by allowing tasks to start before all tasks of the previous stage have finished.