Introducing Prefect 2.0
Mar. 15, 2022

Love your workflows again.

Jeremiah Lowin, CEO

Today we’re launching the first beta release of Prefect 2.0, a major step forward in data engineering tooling. This brand-new, completely open-source orchestration platform represents a huge effort across the Prefect team, our Club42 community ambassadors (more on them tomorrow!), our Lighthouse Partners, and countless passionate users.

Prefect 2.0 is loaded with new features and built on top of our second-generation orchestration engine, Orion. It has benefitted from literally thousands of user requests, suggestions, and contributions. It addresses new use cases and deployment patterns, includes features that used to be available only in our Cloud platform, and was designed with the unique requirements of the modern data stack in mind. Most importantly, Prefect 2.0 takes our product design philosophy of being incredibly simple without compromising on power and elevates it to new heights.

Prefect 2.0 is graduating to beta status because it has reached use-case parity with Prefect 1.0 and is already our team’s preferred orchestration tool. While the product remains an active construction zone, with a number of major features set to ship in the coming months, early adopters should consider its tires primed and ready for kicking.

Please join us as we ready this flagship for launch. Read on to learn what’s new, or get started now!


gif of Farnsworth from Futurama saying "Good news, everyone!"

One of the most exciting new features of Prefect 2.0 is actually its license: Apache 2.0 from top to bottom. We want Prefect to be a productive part of every company’s data infrastructure, and ensuring it has a gold-standard permissive license is critical to that objective. This license applies to the entire open-source Prefect platform, including the Orion server and UI.

While some Prefect 1.0 components are Apache 2.0 licensed, others are available only under a proprietary Prefect Community License. Though the PCL placed few practical restrictions on our users, we learned that it hindered adoption nonetheless. Some of our largest enterprise users needed special permission before deploying Prefect internally, and some of our academic and non-profit users depended on the portability guarantees of a more permissive license. We are excited to affirmatively remove those barriers.

We will continue to build our commercial product, Prefect Cloud, that extends Prefect 2.0 as a fully managed, enterprise-ready orchestration-as-a-service offering. However, we are completely committed to our self-managed product and will dedicate resources to ensure that Prefect 2.0 is always a production-grade open-source orchestration platform.

Prefect 2.0 is built on top of our second-generation orchestration engine, Orion. Orion is the first orchestration API for the modern data stack and is the result of years of experience working intimately with Prefect users and customers. Its name, a contraction of ORchestratION, nods to its critical role. It introduces a variety of innovations. Some are directly user-facing, like running native code next to orchestrated code, and others enable novel use cases, like streaming workflows or a completely new interactive user experience. Orion was designed with performance and transparency in mind, and forms the technical foundation of Prefect 2.0.


illustration of a flow run (run task; failed; pause; retry; run task; success)

Prefect’s company mission is to eliminate negative engineering, the tedious and frustrating exercise of ensuring code runs correctly. However, we all know that most orchestration tools actually end up introducing new complexity by requiring that users learn a new vocabulary: the orchestrator’s own data structures and API. The last-generation motto of “workflows as code” belies the fact that the resulting code bears little resemblance to what an engineer would have written without the orchestrator’s constraints: scripts full of DAG declarations, convoluted keyword arguments, CLI commands, tortured conditional branching, and unclear dependency management. And don’t even get us started on “xcoms.”

Prefect 2.0 embraces a new philosophy: “code as workflows.” We acknowledge that whenever an engineer writes code, that code is already the best possible representation of their workflow objectives. It necessarily describes ordering and dependencies, parameterization, data passing, control flow, and conditions. Any further modifications required by the orchestrator are a tax on that engineer’s time and energy. Therefore, we strive to introduce as few changes as possible. Adding a single @flow decorator to a function is all it takes to give it superpowers, including the ability to be monitored, scheduled, and controlled from the Prefect UI. Best of all, that function still behaves like a function: you call it to run it and you can examine its result. You can even use breakpoints to debug it.

We can go further by optionally extending orchestration inside the flow function. By adding @task decorators to smaller units of business logic, you can supercharge those functions with observability, retries, dependency management, and more. Because Orion is a fully dynamic orchestration engine, you can mix and match fully-orchestrated tasks with native code. That means you can continue using if, for, and while loops to control execution, even in response to runtime data such as the output of other tasks. You can share in-memory objects like database connections between tasks with no special configuration. With full asyncio support, you can write native async tasks. Orion is clever enough to pick the optimal execution strategy for your tasks, so you can combine async and synchronous operations in a single workflow or automatically parallelize your synchronous Python script with one of our concurrent executors (enabled by default). Prefect 2.0 also supports running flows on Dask, Ray, and as Kubernetes jobs, and we plan to introduce more execution integrations and optimization strategies in the future.

In this way, Prefect 2.0 represents the most advanced realization of an objective we described two years ago when we first open-sourced the Prefect platform:

Prefect’s insurance-like design goal [is] to be minimally invasive when things run smoothly and maximally helpful when things go wrong.


highly flexible, asynchronous workflow pattern

Prefect 2.0’s dynamic engine means the days of building DAG artifacts are thankfully a thing of the past. Users can write their workflows however they prefer, almost as if they weren’t using Prefect at all. By observing function calls and arguments, Prefect automatically builds up a representation of the workflow while it runs, allowing the enforcement of orchestration rules on even completely runtime-generated code. And because no DAG artifact representing all computation has to be produced ahead of time, users are free to mix and match native code with fully-orchestrated tasks.

The core insight that enables this behavior is the Orion server’s dedicated orchestration API. Instead of putting the burden of enforcing orchestration on the client, which requires not only knowledge of the orchestrated graph but also a local copy of the orchestration engine, Orion exposes an API that enables any client to request orchestration in a well-structured way. The server becomes the source of truth for tracking the graph and enforcing orchestration behavior; the client only needs to be capable of sending state updates and following any subsequent instructions. Yes, this means that multi-language support is just a REST call away!

The result is a just-in-time (JIT) orchestration engine that can handle simple cases like classic ETL as well as more advanced scenarios like dynamically generating entire workflows in response to items coming off a Kafka queue. Both examples would use the exact same API. Furthermore, adjusting the workflow in response to runtime information no longer requires special conditional operators; an if statement will do just fine. You don’t have to learn any new concepts, whether you’re generating adaptive steps for hyperparameter optimization, running training loops until an error converges, mapping over complex data, or generating completely novel subflows. Just write your code, apply the @task decorator appropriately, and let Orion figure out the rest.

Over the past few years, there have been countless requests for a streaming version of Prefect, but we’ve learned that “streaming” means wildly different things to different users. As we worked with partners including large banks monitoring credit card transactions for fraud, AI companies applying machine learning to video, and delivery companies with realtime updates coming from drivers and packages, we developed a definition of streaming that met their universal requirements: a data pipeline in which triggering events arrive at unpredictable times, workflow logic is expected to run in realtime, and infrastructure is shared among workflow invocations. Prefect 1.0’s API enabled event-driven workflows, but the latency of provisioning new infrastructure for each flow made it difficult to handle streaming except in relatively slow-moving cases.

With no DAG to compile and no pre-registration required, Prefect 2.0 workflows are well-suited to streaming orchestration. Tasks, conditions, and even completely novel subflows can be dynamically generated in response to any new event and executed immediately on shared infrastructure. This introduces new possibilities, like infinite-running flows that reuse stateful objects across dynamic tasks and subflows. This functionality will mature significantly this summer and we can’t wait to see how users push the envelope.

Task State Transitions

Orion's task states and governed transitions

At the heart of Orion is a transparent rules engine that operates on States: small messages that can describe any process or object. These states are the basis for orchestration and anything can be assigned a state, with Prefect flows and tasks representing two of the most common implementations of this paradigm. Orion's state engine is powerful enough to represent the execution of Python code in a notebook, the constant ping of a microservice’s healthcheck, or the staleness of a table in a data warehouse, all with a single API.

Orion relies on states to expose both its observability and orchestration capabilities. Observability is the collection of the states themselves: a single source of truth for understanding the behavior of the entire system. How much work is being performed, what its failure rates have been, and whether data is stale: these are all examples of questions that can be answered through the observation of state data. Prefect clients are responsible for transmitting state data and proposed state transitions to the server. If they do nothing more than inform the server of state changes in the world, then Orion is operating as an observability layer only.

Orchestration goes a step further and is defined as the enforcement of rules on transitions between states. For example, a process that was in a Running state might report that it has entered a Failed state. If Orion is operating in observability mode, then it would simply record the new state. However, if Orion is enforcing orchestration rules, it could note that this task was configured to retry up to three times. In this case, Orion might reject the Failed state and put the process into a Retrying state instead, communicating that instruction back to the remote client. The enforcement of this behavior—and any other business logic triggered by a state change—is what we mean by orchestration. Other examples that meet this definition include scheduling work to start at a specific time, enforcing concurrency limits, retrying on failure, passing quality checks, updating lineage, sending notifications, caching and reusing output, tearing down infrastructure on cancellation, and monitoring SLAs.
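The retry example above can be sketched in a few lines. This is a toy illustration of the idea, not Prefect's actual implementation: the client proposes a state transition, and a server-side rule either accepts it or substitutes another state.

```python
# Toy sketch of a server-side orchestration rule: "retry up to
# three times" (not Prefect's implementation, just the concept).
MAX_RETRIES = 3

def orchestrate(proposed_state: str, run: dict) -> str:
    """Return the state the server actually records for this run."""
    if proposed_state == "Failed" and run["retry_count"] < MAX_RETRIES:
        run["retry_count"] += 1
        return "Retrying"      # reject Failed; instruct the client to retry
    return proposed_state      # otherwise, record the state as proposed

run = {"retry_count": 0}
states = [orchestrate("Failed", run) for _ in range(4)]
# three Retrying responses, then the Failed state is finally accepted
```

With the rule removed, the same function simply records whatever the client reports, which is exactly the observability-only mode described above.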

The above graphic shows the states and transitions that Orion seeks to enforce by default. Users can see exactly what observability and orchestration rules will be applied to any state. Because the rules engine lives on the Orion server, it can be customized without redeploying code. This might include configuration of retries, timeouts, or concurrency limits. It could also include completely custom states for bookkeeping — for example, perhaps your team wants to introduce an “Expected Failure” state and write tasks that use it. Because the Orion engine deals with states as its primary concern, and doesn’t place constraints on client implementations of state-setting objects, it can be extended to new use cases merely by ensuring that its vocabulary is sufficient for describing them. As Orion matures, we will continue to add new states, transitions, and rules to better capture the rich diversity of orchestration use cases and behaviors.

After kicking the tires on Prefect 1.0, one of the most common questions was, “Now how do I deploy it?” With Prefect 2.0, the answer is: you already have. Prefect 2.0 introduces an ephemeral API that lets it deliver a full orchestration suite even when you haven’t spun up an Orion server, database, agent, or UI. You only need to deploy stateful infrastructure when it’s truly beneficial to do so. The two most common reasons to graduate from the ephemeral API are 1) automatic scheduling and remote execution, which require an active API that monitors for work, and 2) the performance of a dedicated Postgres instance as you scale. Nonetheless, for most interactive applications, the ephemeral API can deliver the exact same experience with no additional overhead.

This innovation resulted in one of the first true “a-hah!” moments for our early users, who would run a few interactive workflows, perhaps following a tutorial, and only then spin up the server and UI with prefect orion start. Instead of the empty dashboard they expected, the UI would show them the history of their prior interactive flows! That first time you take advantage of the ephemeral API, it feels like magic. The second time, you wonder how you managed without it.

These “ad-hoc” interactive runs have become a staple of our own use of Prefect 2.0 as they lead to incredibly tight feedback cycles during workflow development: call the function iteratively while you work, then spin up a UI to investigate its behavior across all of its invocations. This works whether you’re in a repl, a notebook, or even a CI process and doesn’t require an agent or any special infrastructure.

In Prefect 1.0, we proudly declared “everything is a task!” to demonstrate Prefect’s broad applicability. With Prefect 2.0, we’re modifying that to say “everything can be a task.” Prefect 2.0 is designed for incremental adoption, never requiring users to take on more functionality than they need. The framework’s goal is to provide the best orchestration primitives possible, so if you only want to take advantage of Prefect’s scheduler for non-Prefect functions, go right ahead! If you love cron or some other scheduling service, but want to use Prefect solely for implementing retries or logs, that’s great too! And if you only want to use Prefect’s observability API to collect state updates from other applications, other languages, or even other orchestrators, that’s fair game as well!

We believe our product is more powerful when you use more of it, but we recognize that the negative engineering problem we solve often emerges first as a highly targeted and sometimes even trivial frustration. Scheduling a script for 9am or retrying the one flaky function in your pipeline doesn’t feel like it should be a huge endeavor; thanks to Prefect 2.0, it doesn’t have to be. You can choose whatever features you want from Prefect’s buffet of orchestration functionality, and we guarantee that no matter how much you take, it all works beautifully together.

Thanks to “code as workflows,” testing and debugging couldn’t be simpler. One of the major design principles we followed in building Orion was to ensure that interactively executing a workflow followed the exact same codepath as workflows deployed by a Prefect Agent. This means that debugging your deployed workflows is as easy as importing your workflow function and calling it manually, using your favorite print or breakpoint debugging tools along the way. Assuming you’ve set the appropriate configuration, this debug run can even be orchestrated against the remote API (including Prefect Cloud)! If you’ve ever been perplexed by “No heartbeat detected” errors or Zombie flows in Cloud 1.0, this was designed especially for you.

In addition to a unified debugging story, the ability to unit test your tasks remains as simple as inspecting the wrapped function via task.fn. We still have some work to do to unlock a simple user story for unit testing flows, but don’t panic—you’ll have all the hooks and guides you could imagine by the time Prefect 2.0 leaves beta.

Prefect 2.0 introduces two new concepts to address user requests for more fine-tuned control over how work is done: deployments and work queues.

There are two common phases in workflow construction: the building phase and the deployment phase. In the building phase, users typically run the workflow interactively to test it out and examine its results. Thanks to its dynamic engine and ephemeral API, Prefect 2.0 makes this extremely easy.

Once the workflow is in good shape, it becomes time to hand it over to Prefect for automatic scheduling and remote execution. This involves the creation of a deployment. Each deployment contains metadata such as where the flow code is stored, what parameters the flow accepts, and what infrastructure to run the code on. A single flow can have multiple deployments, perhaps corresponding to different versions of code or different execution environments (e.g. your ETL flow could have dev, staging, and prod deployments with different flow runners or parameter values). Each deployment can be given its own schedule or get kicked off at any time from the Prefect UI or API.

Deployments also contribute to a highly-requested new Prefect feature: the ability to modify workflow code and see it reflected immediately, without needing to re-register the flow at all. This is already the default behavior for Prefect 2.0's interactive runs, but deployments extend the capability to the API itself. For example, a deployment that points at a git repo could be configured to point at the HEAD of a specific branch. In this case, pushing new code would instantly be reflected in all future runs of that deployment, with no modification required on the Prefect server. Each run’s version field would point at the specific commit it ran against, to ensure that users could track performance across code changes (note that version tracking can be customized by users to support any use case).

Work queues bridge the orchestration API to the user’s execution environment. Work queues are created on the server and collect work that matches their filter criteria. For example, you could have a work queue that corresponds only to a specific deployment, or all flows tagged with “ETL,” or any flows that should be run on Kubernetes. Prefect agents are deployed to poll specific work queues, running any work that the queue delivers. Queues support new features like concurrency limits (e.g. the queue should only release new work if fewer than, say, 5 of its flows are currently running) and can be paused entirely. This means that agent behavior can now be controlled from the UI by modifying work queue parameters. Future work will add more powerful queue behaviors, such as priority and more granular selection criteria.
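The queue behavior described above can be sketched as follows. This is a toy illustration of the concept, not Prefect's implementation: a queue collects runs matching its filter criteria and only releases work to a polling agent while it is under its concurrency limit.

```python
# Toy sketch of a work queue: tag-based filtering plus a
# concurrency limit (conceptual only, not Prefect's code).
class WorkQueue:
    def __init__(self, tags, concurrency_limit=5):
        self.tags = set(tags)
        self.concurrency_limit = concurrency_limit
        self.running = 0  # flows from this queue currently running

    def matches(self, flow_run):
        return self.tags <= set(flow_run["tags"])

    def get_work(self, pending):
        """Release matching runs to a polling agent, up to the limit."""
        released = []
        for run in pending:
            if self.running >= self.concurrency_limit:
                break
            if self.matches(run):
                released.append(run)
                self.running += 1
        return released

queue = WorkQueue(tags=["ETL"], concurrency_limit=2)
pending = [{"name": f"run-{i}", "tags": ["ETL"]} for i in range(5)]
released = queue.get_work(pending)  # only two runs are released
```

Pausing a queue or adjusting its limit changes what agents receive without touching the agents themselves, which is why agent behavior can be controlled entirely from the UI.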


The Prefect 2.0 dashboard gives you instant insight into the state of your flow and task runs

Prefect 2.0 introduces a completely new UI. One of the biggest successes—and challenges—of Prefect 1.0 was the incredible diversity of use cases to which it was applied. Some Prefect users deploy a few hundred mission-critical tasks every month; others run millions of relatively lower-consequence ones. Designing a UI that could deliver a positive experience to both types of users was a major undertaking.

Our solution was to anchor the UI to a central dashboard that was designed to be both customizable and scale-invariant. Users with five runs or five million runs should find it equally informative and useful for drilling into precise behaviors.

We’ve learned that the most important role of an orchestration UI is to give users a way to set up expectations and then observe and explore any deviations from them. Context matters: a failure in one workflow might represent a critical emergency, while a failure in another might be unimportant or even expected. Blindly reporting all failures would risk overpowering the first workflow’s signal with the second one’s noise; but not reporting failures at all simply isn’t an option.

To solve this, the Prefect 2.0 dashboard is designed as a canvas for users to explore (and for our UI team to continue adding functionality to!). Using the command line at the top (or its associated query builder), users can define exactly what they’re interested in seeing and save it for future reference. Only nightly ETL flows that ran in the last week? ML flows that correspond to a specific git commit? Only flows that contain tasks that interact with a database? For any interesting question, users can set up screens to ensure that they only get the most actionable data possible.


Prefect 2.0 includes several color themes, including color-blind friendly options

We have also focused on ensuring our UI is as accessible as possible. One important enhancement is support for a large range of accessible color themes, out of the box (you can change this in settings!). We’re also working to ensure that we rely more heavily on iconography and texture to communicate different states.

The UI is the youngest part of the Prefect 2.0 product and will undergo rapid development in the next few months. A small number of features, such as creating deployments and setting default storage, are currently only configurable from the Prefect CLI, but will be introduced in the UI soon.


grid of logos

The thing that makes the modern data stack truly “modern” is the encapsulation of business logic behind APIs. The principal role of a modern orchestrator is to coordinate communication (and frequently data) among those APIs. The dynamic nature of the Orion engine makes it incredibly well-suited to this task.

Increasingly, we see users adding Prefect to their stack to “finalize” it: with all the pieces in place, Prefect begins to coordinate their activities and ensure healthy behavior. We’ve been known to describe Prefect as the toothpick holding the modern data sandwich together. In order to make this even easier, we will be introducing a new product this summer that allows Prefect users to more easily provision and manage the building blocks of their modern data stacks.

In addition, over the last year we’ve seen a new prototypical Prefect user emerge. More than half of Prefect users at large enterprises are not the technical workflow authors you might expect, but stakeholders like analysts and analytics engineers who depend on Prefect-governed workflows for their respective roles. While still embracing “code as workflows,” Prefect 2.0 has been designed to maximize the experience of these users as well, primarily through extensions to our UI and ecosystem integrations. Whether modifying connections to external services, viewing the graphical output of a workflow, changing parameters on the fly, subscribing to critical information, or even composing workflows directly from the UI, Prefect 2.0 can support an enormous variety of code-adjacent work. We’ll have many announcements about these features soon.

  • For the first time, Prefect 2.0 includes features that formerly required Prefect Cloud. The first two features we’ve migrated are both extremely popular: concurrency limits for both flows and tasks. These are commonly used to limit simultaneous requests to a resource, such as a database, that can’t operate at the same scale as the orchestrator.

  • Prefect 2.0 introduces a new model of result storage that maintains client-side data privacy but enables server-side configuration. In this initial release, you can set global defaults for storage including filesystems and all major public clouds; in the near future, we intend to extend this to per-work queue and per-deployment storage defaults.

  • The Prefect ecosystem is growing so large that shipping the entire task library has become prohibitive. That’s why we’re introducing Prefect Collections, a new form of templated repo that delivers functionality in the form of recipes, flows, tasks, documentation, and best practices. We’ll have more on Collections later this week.

  • We plan to embrace serverless execution for Prefect 2.0, meaning the entire platform—from the server to agents to individual tasks—can be run extremely efficiently.


orchestrate happiness

The most important feature of Prefect has always been how much people love to use it. Whether you look at our Slack’s #introductions channel or Cloud’s NPS score, we feel deeply privileged to work every day on a tool that has delivered positive value to so many people. Receiving and responding to your feedback is the reason that we pride ourselves on shipping early and shipping often — thank you for being part of our community.

If your current workflow tool doesn’t spark joy, take Prefect 2.0 for a spin. You might just love your workflows again.

Happy engineering!
