© Copyright 2026 Prefect Technologies, Inc. All rights reserved.

Don't replay. Resume.

Run background tasks that recover instantly. Prefect guarantees exactly-once execution for any Python code—without the constraints of deterministic replay.

Start Building · Read the Docs

Flexible Execution

Run background tasks your way

Whether you need a simple background worker in your web server or a distributed queue for heavy ML jobs, Prefect fits your architecture.

Embedded Workers

Start simple. Run a worker directly inside your FastAPI or Django process to handle background tasks on the same server. No extra infrastructure required.

  • Perfect for lightweight tasks
  • Zero deployment complexity
app.py

from fastapi import FastAPI
from prefect import task

app = FastAPI()

@task
def process_file(id: str):
    print(f"Processing {id}")

@app.post("/upload")
async def upload(id: str):
    # Run in background on this server
    process_file.submit(id)
    return {"status": "processing"}

Built-in Durability

Durable Execution, Perfected

Most tools just retry. Prefect uses distributed caching built on object storage and locking to give you exact control over what runs, when, and how many times.
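The core idea can be sketched in plain Python: persist each step's result under a key, and skip any step whose result is already stored. This is a conceptual illustration only (a temp directory standing in for object storage, JSON standing in for Prefect's result serialization):

```python
import json
import tempfile
from pathlib import Path

# Durable result store: a temp dir here; object storage in production.
STORE = Path(tempfile.mkdtemp())

def run_step(name: str, fn, *args):
    """Run fn only if no stored result exists; otherwise load and return it."""
    path = STORE / f"{name}.json"
    if path.exists():
        return json.loads(path.read_text())  # resume: reuse the saved result
    result = fn(*args)
    path.write_text(json.dumps(result))      # checkpoint for future runs
    return result
```

On a second run after a failure, every step that already checkpointed a result is loaded instead of re-executed.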

Results, not Replay

Don't restart from scratch. Prefect persists the result of every task. If a workflow fails, it resumes instantly by loading successful results from storage.

  • Time-Based: "Reuse if < 1 hour old"
  • Code-Aware: "Re-run if I change the logic"
pipeline.py

from datetime import timedelta
from prefect import task
from prefect.cache_policies import INPUTS, TASK_SOURCE

@task(cache_policy=INPUTS, cache_expiration=timedelta(hours=1))
def extract():
    # "Reuse if < 1 hour old"
    return big_query.fetch()

@task(cache_policy=INPUTS + TASK_SOURCE)
def transform(data):
    # "Re-run if I change the logic"
    # Resumes from here if 'extract' succeeded
    return clean(data)
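The "code-aware" policy works by folding a fingerprint of the task's code into the cache key alongside its inputs. A rough stdlib-only sketch of the idea (Prefect's TASK_SOURCE hashes the task's source; hashing the compiled bytecode and constants here keeps the sketch self-contained):

```python
import hashlib

def cache_key(fn, *args) -> str:
    """Key changes when either the inputs or the function's logic change."""
    code = fn.__code__
    payload = (
        repr(args).encode()                 # the inputs
        + code.co_code                      # the compiled logic
        + repr(code.co_consts).encode()     # constants used by the logic
    )
    return hashlib.sha256(payload).hexdigest()

# Two hypothetical tasks with different logic:
def double(x):
    return x * 2

def triple(x):
    return x * 3
```

Editing a function changes its fingerprint, so cached results from the old logic are never reused.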

Strict Exactly-Once

For payments and critical side effects, "eventually" isn't safe. Prefect uses distributed locking to ensure code runs exactly once, even with high concurrency.

  • Redis/Postgres distributed locking
  • Idempotency keys from inputs
payments.py

from prefect import task
from prefect.cache_policies import INPUTS
from prefect_redis import RedisLockManager  # from the prefect-redis integration

@task(
    # Guarantee one execution across all workers
    cache_policy=INPUTS.configure(
        lock_manager=RedisLockManager(host="redis")
    )
)
def process_payment(user_id, amount):
    # Safe to retry 100 times
    # Only one charge will ever happen
    return stripe.charge(user_id, amount)
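The mechanics can be illustrated in plain Python: derive an idempotency key from the function and its inputs, and record completed keys under a lock so retries become no-ops. A conceptual sketch only, with an in-process dict and threading lock standing in for the Redis/Postgres distributed versions:

```python
import hashlib
import threading

_completed: dict[str, object] = {}  # in production: Redis/Postgres, not a dict
_lock = threading.Lock()            # in production: a distributed lock

def run_once(fn, *args):
    """Execute fn(*args) at most once per unique set of inputs."""
    key = hashlib.sha256(repr((fn.__name__, args)).encode()).hexdigest()
    with _lock:                     # only one worker may check-and-run a key
        if key in _completed:
            return _completed[key]  # retry: return the recorded result
        result = fn(*args)
        _completed[key] = result
        return result
```

Because the check and the execution happen under one lock, concurrent retries of the same payment all observe a single charge.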

Trusted by engineering teams

Snorkel AI (case study)

We improved throughput by 20x with Prefect. It's our workhorse for asynchronous processing—a Swiss Army knife. We run about a thousand flows an hour and we're perfectly fine since most of these are network bound.

Smit Shah
Director of Engineering
G2 Review

Before Prefect, we had long-running analytics tasks that would sometimes stop running for hours... Prefect eliminated this problem and allowed us to automate new use cases with ease.

Michael U.
Washington Nationals (case study)

Tasks, dependencies, retries, and mapping make robust pipelines easy to write.

Lee Mendelowitz
Lead Data Engineer

Build workflows that never restart from scratch

Join thousands of engineers building resilient applications with Prefect.

Start Building · Read the Docs