
product
May 12, 2026

Introducing Infrastructure Decorators: Just @ your infrastructure

Radhika Gulati
Sr. PMM

Your pipeline needs a GPU for five minutes of training. The other 115 minutes are CPU work. Today you have two options: pay for that GPU for the entire run, or split the work into separate pipelines and glue the results back together. Both are bad. You picked one anyway.

Software engineers worked this out for CI/CD a long time ago. Linters run on tiny containers, test suites get the big box, GPU jobs get... well... GPUs. Data engineering is, somehow, still picking one machine for the whole pipeline and hoping for the best.

Today we're shipping Infrastructure Decorators. Bind a flow to the compute it needs with a decorator.

From Deployments to Decorators

Infrastructure Decorators tie compute requirements directly to your Python code. Put @kubernetes (or one of five other decorators) on a flow. Calling that flow runs it on Kubernetes.

They turn this:

flow_run = run_deployment(
    name="train-fraud-model/train-fraud-model-gpu",
    parameters={"transactions": transactions},
)
model = flow_run.state.result()

into this:

@kubernetes(work_pool="gpu-pool")
@flow
def train_fraud_model(transactions):
    ...  # training elided
    return model
 
# called from the pipeline:
model = train_fraud_model(transactions)

The hardware becomes an attribute of the function. Anyone reading the file knows what runs where.

The Homogeneous Route for Heterogeneous Compute

The real power of infrastructure decorators is the ability to compose a pipeline that spans entirely different compute types in a single Python file.

In MLOps, your pipeline is rarely homogeneous. You have three distinct "legs":

  • The Scrape: Lightweight, CPU-bound.
  • The Train: Heavyweight, GPU-bound.
  • The Test: High-memory CPU for validation.

With Infrastructure Decorators, your hardware choices move at the speed of your functions:

@docker(work_pool="small-cpu-pool")
@flow
def scrape_data():
    ...
 
@kubernetes(work_pool="h100-gpu-pool")
@flow
def train_model(data):
    ...
 
@ecs(work_pool="high-mem-pool")
@flow
def validate_model(model):
    ...
 
@flow
def ml_pipeline():
    raw_data = scrape_data()        # Cheap CPU
    model = train_model(raw_data)   # High-end GPU
    validate_model(model)           # High-memory Fargate

Right-size at runtime

The decorator lives in your code, and you can override its job variables at the call site. The pattern this unlocks is right-sizing infrastructure at runtime: a parent flow reads the actual input, calculates the resources needed, and submits each call with the matching job variables.

@kubernetes(work_pool="data-processing")
@flow
def process_file(path: str):
    return analyze(path)
 
@flow
def orchestrator(path: str):
    file_size_gb = get_size_from_s3(path)
    memory_mb = max(2048, int(file_size_gb * 2 * 1024))
    # Right-size the pod at the moment of impact
    process_file.submit(
        path=path,
        job_variables={"memory": f"{memory_mb}Mi"},
    )

The orchestrator reads the file size from S3 and right-sizes the pod for the call: a 200MB file gets a 2GB pod and an 8GB file gets 16GB. If your Monday job processes a weekend's backlog and the rest of the week runs on a fraction of the data, Monday's spike stays on Monday, and every other day pays for what it uses.
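The sizing rule above is easy to sanity-check in isolation. A minimal sketch of the same arithmetic, with the 2 GB floor and the 2x-file-size heuristic from the example (the function name here is illustrative, not part of Prefect's API):

```python
def size_memory_mb(file_size_gb: float) -> int:
    """Twice the file size in MB, with a 2 GB floor."""
    return max(2048, int(file_size_gb * 2 * 1024))

# A 200 MB file stays on the 2 GB floor; an 8 GB file gets a 16 GB pod.
print(size_memory_mb(0.2))  # 2048
print(size_memory_mb(8))    # 16384
```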

Bring your config

The decorator also bundles your function with whatever local files it needs. If train_fraud_model reads from a config.yaml or imports a helper script, those come along when Prefect ships the bundle to object storage. A typo fix in a config file ships with the next bundle, with no image rebuild required.
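In practice that means flow code can read its config with a plain relative path and trust the file is present remotely. A sketch, assuming a JSON config file next to the flow code (the name `load_config` and the JSON format are illustrative; a YAML config works the same way with a third-party parser):

```python
import json
from pathlib import Path

def load_config(path: str = "config.json") -> dict:
    # The config sits next to the flow code, so it travels in the
    # bundle and the same relative path resolves on remote infrastructure.
    return json.loads(Path(path).read_text())
```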

Operators stay in control

Decorators route to the blessed work pools you have already set up, respecting the templates and restrictions you have put in place. Every invocation still shows up in the Prefect UI alongside your deployed runs.

Production use at Ramp

The ML platform team at Ramp uses Infrastructure Decorators to bridge the gap between local development and production.

In a builder-first culture, asking every contributor to also be a DevOps expert taxes the team. Infrastructure decorators allow the Ramp team to set the "rails" (the work pools and permissions) while letting the builders define the hardware they need directly in their code.

This is especially powerful for pipelines that require different hardware for different stages. Instead of being forced into one static machine type for an entire run, you can mix and match compute in a single script.

There are two fronts where infrastructure decorators help, and they're both during development. It bridges the gap between development and production by running code remotely, with permissions and services that are closer to production than just running on your local laptop. The other part is that it unlocks extra compute, or bigger machines than your laptop has.

Ryne Carbone, Staff Machine Learning Engineer at Ramp

By using the @ecs decorator, Ramp gives engineers production-like permissions and hardware access during development without the traditional deployment overhead.

Return of the @

We love the decorator. It gives your Python functions superpowers. It also makes for a good logo on swag (like on a h@).

Getting started

Infrastructure Decorators are available today on every Prefect tier.

If you have suggestions or requests for what you'd like to see, check out our GitHub discussions. Our Community Slack is also a great place to connect with others and get help with your workflows.

  • Check out the documentation →
  • Read how Ramp migrated to Prefect from Metaflow →
  • Watch the Infrastructure Decorators demo →