
Customer Stories
April 23, 2026

Building an ML Platform for the Agent Era with Prefect

Ryne Carbone
Staff Machine Learning Engineer

Industry: Financial Technology (Corporate Cards and Spend Management)

About Ramp: Ramp is a finance automation platform that helps businesses manage corporate cards, expenses, bill pay, procurement, and accounting. The company is known for being one of the most technical, fast-moving engineering orgs in fintech.

Use Case: Orchestration for Ramp's machine learning platform and the broader set of agent-driven, LLM-powered, and ETL workflows now being built by both technical and non-technical contributors across the company.

Key Outcomes:

  • Migrated 200 flows off Metaflow in one quarter, ending with 350 deployed Prefect flows as new workflows were added in parallel.
  • New users can go from idea to production in 30 minutes or less, with the ML platform team now supporting more than 70 active contributors across the company.
  • More commits to the ML platform repo in the last six months than in its entire history before that.

The New Definition of "Builder"

Ramp is one of the most talent-dense, fast-moving engineering orgs in fintech. Its culture is builder-first, and the tools underneath its engineers have to keep up. The ML platform team's previous orchestration framework had started working against them. It forced users into opinionated models that didn't match how they worked, pulled the team into constant extension work to cover developer experience gaps, and made debugging harder as workflows grew. When the team went to rebuild its orchestration layer, the brief was simple: find a partner that could move at Ramp's pace.

"Ramp moves fast. We need partners who do the same," says Ryne Carbone, Staff Machine Learning Engineer. "We're making a bet on not just the tool as it is today, but also where it's going."

The platform they needed to build wasn't the same platform they had two years ago. The definition of a "builder" at Ramp had shifted. Production workflows were once largely the domain of Machine Learning Engineers (MLEs). Now the platform is a shared space for product managers, salespeople, and risk analysts. Two things changed this. Ramp scaled dramatically across customers and employees, and the capabilities of LLMs and coding agents exploded at the same time. By the time coding agents and Ramp's internal agent tooling had matured, the definition of a contributor became "pretty much anyone who has access to those tools."

"The type of person who's building on our platform is quite different," Carbone says.

An AI-native orchestration layer needed to fit that reality. Most of the code at Ramp was being written by coding agents, which meant the orchestration framework had to work alongside them. The path from prototype to production needed to stay short so engineers could ship fast. And the platform team wanted room to define its own patterns rather than inherit whatever the framework came with.

Picking an Orchestrator for the New Mix of Users

When the team evaluated new orchestrators, Ramp applied its usual filter: Is the problem unique enough to warrant building in-house? Does the developer experience fit the existing stack? Does the vendor move fast enough to keep up? "It shouldn't get in the way of our builders," Carbone says.

Carbone had worked with Prefect before, and two things had stuck with him: speed and clarity.

"Prefect felt mostly like writing Python code," he says. "Going from prototype to production didn't really involve much refactoring. It didn't really get in the way of testing or running workflows." Users can add @flow or @task to Python code they're already writing, and it's ready to deploy.

Runtime customization mattered too. Ramp often writes general, configurable flows with drastically different compute requirements at runtime. "Having the ability to simply go into the UI and easily adjust parameters or resources makes the whole experience smoother," Carbone says. Running Prefect self-hosted and open-source gave them visibility into the platform and room to customize it around their stack, and the pace of development underneath it kept pulling ahead of their questions. "There were a few times when we asked if a feature was on the roadmap, and we just discovered it had already been released and we just hadn't updated yet," Carbone says.

Once the must-haves checked out, the team got excited when they saw how Prefect interfaced with their agent tooling. "Connecting agents to the Prefect CLI and API has made debugging both infrastructure and flow failures pretty painless," Carbone says.

A One-Quarter Migration

Ramp migrated the entire platform in a single quarter.

"During our migration, which was about one quarter, we started with 200 Metaflow flows," Carbone says. "Three months later, we had 350 deployed Prefect flows. It's a combination of all that migration plus users adding many flows during that time."

The quarter ran in three phases. The first month was spent hardening the Prefect infrastructure, building rails for users, and cataloguing flows to migrate. The second month was pressure testing the platform and migrating critical workflows. The third was the mass user cutover.

After the team stopped creating new Metaflow flows, they used Claude Code to translate and migrate the rest of the user flows into Prefect in batches. The speed and AI-native workflow that made that possible were themselves a preview of how the finished platform would operate.

"During migration, before we officially announced we were starting it, people had already started running Prefect flows," Carbone says. "There was already an appetite, and we didn't have to teach them how to do anything."

How Ramp Uses Prefect Today

Machine learning is still used broadly across Ramp, but LLMs and agents have opened up work the team couldn't staff before. "Whereas previously we might not have had the resources to dedicate an MLE to build a traditional ML solution, nearly all users can leverage LLMs to provide real business value in their domain," Carbone says. "We want to make sure the people who have the context to solve problems are enabled to build the right solutions, and that's increasingly not just an MLE."

One workflow shows what that looks like in practice. Ramp's daily ML batch predict workflow leans on two Prefect features the team uses heavily: event-based workflows and templating. The team maintains a single generalized predict flow, defined in Python with Prefect decorators. Users configure their own input parameters, event triggers or schedules, and flow resources, then deploy each variant separately. Users control how and when their flow runs and what runs after it, while the core code stays centralized for the platform team to maintain.

Building Rails on Top of Flexibility

"The great thing about Prefect is that it is very flexible," Carbone says, but agents will "figure out every possible way to do something." The platform team's answer was to build rails. Agent skills coupled with Claude and Codex provide suggested patterns that get users from idea to deployed code in minutes. Templated flows like the daily batch predict let users deploy their own variants while the platform team maintains the core. Prefect's Infrastructure Decorators like @ecs do two things during development: they run code remotely with production-like permissions to close the dev/prod gap, and they unlock compute beyond a developer's laptop, useful for verifying big workflows before production or running ephemeral analysis that doesn't need code review.

"Prefect is very open-ended, which is great," Carbone says. "Being opinionated matters, and it should be the company or your ML platform team doing that. Not the framework."

One example of what this setup enables is an Automated Debugger: a Prefect flow that watches Ramp's alerts channel. When a failure is detected, it kicks off an investigation through the internal agent harness. Using access to the Prefect CLI, API, and skills, the agent tests the flow and, if it finds something, drafts a PR for review. "We were very happy when we figured out we could do that," Carbone says.

Scaling Expertise

"For a very simple flow, a user can get something into production in 30 minutes or less," Carbone says, "especially if they're using Claude or Codex with the skills we created for them." Today, the Ramp ML platform team supports more than 70 active contributors across the company: the core data team plus a growing group of contributors from outside it.

"In our ML platform repo, we've had more commits in the past six months than our entire history before that," Carbone says. Two years ago, shipping a workflow meant finding an MLE with the bandwidth to build it. Today, it means finding whoever has the context to solve the problem and giving them rails to do it safely.