An Introduction to Workflow Orchestration
Managing complex workflows is no small feat. Enter: workflow orchestrators, which make it easier to create, deploy, and monitor tasks. With the help of an orchestrator, teams can quickly build repeatable processes that are easy to run remotely, observe, and troubleshoot if they fail.
Can't figure out which data models depend on which ETL jobs and why that analytics-alerts Slack channel is blowing up? It’s all tied to workflow orchestration. Let’s dive in.
The need for workflow orchestration
Your first question should be, if it isn’t already: why should I care? Orchestration isn’t a new concept, nor are workflows. What’s new is the world around them. It’s not just data that’s being moved from one place to another; data is being transformed, infrastructure is being spun up, AI models are being retrained, and so much more.
We’ve previously written about the original orchestrator: cron. Cron notoriously lacks the features that make orchestration scalable: logging, infrastructure scaling, diverse flow triggers, and smart retries, to name a few. Without them, workflows inevitably break: infrastructure fails, an external service returns an unexpected result, a process crashes. Any of these leaves a broken workflow, usually with downstream impacts.
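To make the gap concrete, here is a minimal sketch (plain Python, stdlib only; the function names are illustrative, not any particular orchestrator's API) of the retry-and-logging plumbing cron leaves you to build yourself:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orchestrator")

def run_with_retries(task, retries=3, delay_seconds=1):
    """Run a task, logging each attempt and retrying on failure --
    exactly the kind of plumbing cron offers no help with."""
    for attempt in range(1, retries + 1):
        try:
            result = task()
            log.info("task %s succeeded on attempt %d", task.__name__, attempt)
            return result
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, retries, exc)
            if attempt == retries:
                raise
            time.sleep(delay_seconds)

# A task that fails twice before succeeding, simulating a flaky external service.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("external service returned an unexpected result")
    return "rows loaded"
```

A real orchestrator layers backoff, persistence, and alerting on top of this; the point is that none of it comes for free with cron.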
As a data engineer, you should be able to easily run any script or task at scale in a repeatable (read: not broken and easily debuggable) way.
What is workflow orchestration?
Workflow orchestration is the practice of automating and managing complex processes. It enables teams to quickly create, deploy, and monitor tasks that are too complex or inefficient for engineers to run ad hoc.
First, the obligatory definition of a workflow in this context.
Definition: A workflow is a process that contains two or more steps.
In the context of data engineering, a workflow could mean anything from simply moving data to spinning up infrastructure to triggering an operational process via API. In any of these cases, there are steps that need to happen in a certain order. A modern workflow orchestrator allows you to build repeatable processes with many steps that are observable and thus easily debuggable.
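As a toy illustration (plain Python; the step names are made up), a two-step workflow is just functions that must run in a fixed order, with one step's output feeding the next:

```python
def extract():
    """Step 1: pull raw records from a source (hard-coded here for illustration)."""
    return [{"amount": "10"}, {"amount": "32"}]

def transform(rows):
    """Step 2: depends on extract's output, so it must run second."""
    return sum(int(r["amount"]) for r in rows)

def workflow():
    rows = extract()        # step 1
    return transform(rows)  # step 2 -- order matters
```

An orchestrator wraps each step so that the same ordering is preserved even when the steps run on remote infrastructure, and so each step's success or failure is observable.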
Definition: Workflow orchestration is the coordination of tasks across infrastructure, systems, triggers, and teams.
By automating processes using a scalable orchestrator, teams can achieve greater efficiency while reducing manual effort. This means that engineers can spend more time building features and/or writing code, and less time maintaining the control panel that runs it.
🗓️ Scheduling with code. Any engineer can run a script manually. It goes without saying that relying purely on human oversight makes this approach error-prone. Scheduling means tasks no longer depend on a team member’s PTO, or on someone simply remembering to run them. Furthermore, scheduling a task should be as simple as adding one line of code, not building a whole new system internally. Orchestrators enable scalability so your organization can keep up with demand without compromising quality or accuracy.
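Under the hood, time-based scheduling is date math the orchestrator handles for you. A minimal sketch (stdlib only, illustrative) of computing the next run of a daily 9am job:

```python
from datetime import datetime, time, timedelta

def next_daily_run(now, run_at=time(9, 0)):
    """Next occurrence of run_at: later today if it hasn't passed yet, else tomorrow."""
    candidate = datetime.combine(now.date(), run_at)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate
```

A real scheduler also handles time zones, daylight saving transitions, and missed runs after downtime, which is why this logic belongs in the orchestrator rather than in every team's scripts.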
⌕ Order of operations. By definition, running a workflow means running multiple steps. In most cases, steps have an order, and the process will fail if that order isn’t followed. An orchestrator must scale as the number of steps grows and dependencies become complex. Ideally, the orchestrator should help the user deduce order, so as not to introduce human error when assuming dependencies. The developer experience in maintaining this order is critical to an orchestrator’s ability to adapt to various use cases.
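Deducing order from declared dependencies is the classic topological-sort problem, and Python's standard library can do it directly; this is roughly what an orchestrator does internally (the task names here are illustrative):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
dependencies = {
    "load_warehouse": {"transform"},
    "transform": {"extract_orders", "extract_refunds"},
    "extract_orders": set(),
    "extract_refunds": set(),
}

# static_order() yields tasks so every dependency runs before its dependents.
order = list(TopologicalSorter(dependencies).static_order())
```

`TopologicalSorter` also raises `CycleError` when declared dependencies are circular, which is exactly the kind of human error an orchestrator should catch before anything runs.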
👁️ High observability. “Everything fails, all the time” according to Amazon’s CTO, Werner Vogels. Of course nothing is meant to fail, but it always does. A workflow orchestrator should provide visibility into all tasks within a workflow—from start to finish—and enable users to react and retry as needed. This increased ability to observe all work not only reduces workflow maintenance time but also helps reduce any miscommunication between teams especially when data is moving across team boundaries. Workflow logs should help engineers stay on top of any new developments or changes in dependent systems and adjust accordingly.
📚 Versatility in triggers and infrastructure. This one’s a doozy, because it can easily get complex. No two workflows are the same, no two teams, no two organizations. Workflow orchestrators should adapt easily to where code should run and when it should run. It shouldn’t matter whether a workflow needs a one-off virtual machine or a long-running Kubernetes pod. Similarly, one team could rely on time-based scheduling (9am every day, review refunds from yesterday) while another is purely event-based (whenever refunds are reviewed, run the data model). The orchestrator should handle all types of workflow triggers easily.
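The event-based half can be sketched as a tiny in-process event bus (all names here are illustrative; real orchestrators subscribe to webhooks, queues, or object-store events rather than in-memory calls):

```python
from collections import defaultdict

_subscribers = defaultdict(list)

def on_event(event_name):
    """Decorator: register a workflow to run whenever event_name fires."""
    def register(workflow_fn):
        _subscribers[event_name].append(workflow_fn)
        return workflow_fn
    return register

def emit(event_name, payload):
    """Fire an event, triggering every workflow subscribed to it."""
    return [fn(payload) for fn in _subscribers[event_name]]

triggered = []

@on_event("refunds.reviewed")
def run_refund_model(payload):
    triggered.append(f"modeling {payload['count']} refunds")
    return "done"
```

The design point: the team emitting the event doesn't need to know which workflows react to it, which is what lets event-based and time-based teams share one orchestrator.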
Overall, workflow orchestration is a must for creating efficient data engineering practices that deliver features and save time and money in the long run.
Workflow vs. data orchestration
Data orchestration is a subset of workflow orchestration as it applies to moving and transforming data. However, consider these examples:
- Triggering an order to be moved into fulfillment via API
- Creating a new EC2 instance for an ad-hoc task
- Sending a Slack notification as new customer reviews are created
These examples don’t necessarily involve data; they involve infrastructure or events. Workflow orchestration encompasses the wide variety of behavior that can happen programmatically in a batched fashion.
Data orchestration centers on analyzing and transforming large amounts of data into usable information, in a scalable and repeatable way. Workflow orchestration, on the other hand, expands to cover automating business processes, such as order fulfillment or customer service operations. Both usually require engineering support, whether that’s processing orders, hitting APIs to trigger processes, or more. This involves creating a set of tasks to be executed, the conditions that must be met for them to run, the order in which they should run, and the triggers that launch the initial ones. All of this will likely involve data movement or transformation, but goes far beyond that as well.
By using both types together effectively, companies can create highly efficient workflows while also gaining powerful analytics capabilities from their data to accompany that work. Workflows involve data, yes, but also APIs, AI models, infrastructure, operational processes; I could go on.
Choosing a workflow orchestrator
The right tool can make all the difference. And while it’s not a Nobel Prize-worthy revelation, it must be said: tool choice is wildly use-case dependent. Below are several potential use cases that could influence your decision when evaluating workflow orchestrators.
➾ Task size and complexity. Scalability is usually priority number one, considering most organizations hope to grow as time progresses. You'll want a tool that supports highly scalable practices, whether that means scaling infrastructure, expanding the code base, or adding use cases. Look for a workflow orchestrator that supports external and heterogeneous infrastructure, offers an easy learning curve for new engineers, and covers many workflow use cases across departments.
🤖 Highly integrated workflows. No tool operates in a vacuum. Consider how well the orchestrator integrates with existing systems or services. This will save you time and money since it eliminates the need for additional training or setup when integrating a new system. Prefer orchestrators that integrate with adjacent tools natively without requiring extra boilerplate code.
🥵 Monitoring failures. Failure is so important that it’s mentioned twice in this article. The time it takes to debug an issue usually correlates directly with lost money. Monitoring features should be included so teams have visibility into every step of the workflow and can easily pinpoint the reason for a failure. Self-healing, smart retries, and detailed logging should all be native to the workflow orchestrator and work for all your team’s integrated and adjacent tools.
Lastly, when selecting an orchestration tool, make sure it has an easy-to-use UI, intuitive developer experience, and responsive support so users won't have difficulty learning how to use it effectively. By ensuring the workflow orchestrator you choose meets all these requirements, teams will be able to streamline their operations faster and more efficiently than ever before.
Getting started with Prefect Cloud
We believe Prefect is the best workflow orchestrator to natively meet these requirements, with a short time to value. We don’t believe users should have to torture their code to deploy workflows across a variety of environments and use cases.
Getting started with Prefect Cloud is easy and intuitive. Prefect Cloud is a cloud-based workflow orchestrator designed to make orchestration accessible to experienced developers and those just starting out. With Prefect Cloud, users can quickly code and deploy repeatable processes that are observable and easily debuggable.
The key advantages of using Prefect Cloud include its intuitive UI, minimal setup requirements, secure approach to infrastructure, scalability, versatile time-based or event-based scheduling, and observability features.
Prefect makes complex workflows simpler, not harder. Try Prefect Cloud for free for yourself, download our open source package, join our Slack community, or talk to one of our engineers to learn more.