Debounce bursts of related events so that a single flow run handles them, using the `within` parameter of the trigger and the `schedule_after` parameter of the run-deployment action.
Why debounce events?
Automations fire in response to a single event and can only pass that event’s context to the triggered deployment. This creates a challenge when multiple events arrive rapidly:
- Each event would trigger a separate flow run
- Each run would only have context from one event
- You’d have multiple runs processing related work simultaneously
Debouncing addresses this by:
- Preventing multiple flow runs from being created for rapid events
- Scheduling a single run after a time window
- Enabling a single run to process the work from all events in the burst
Key limitation: Automations can only pass the context from the triggering event to your deployment. Design your flows to query the source system directly (like listing S3 objects) rather than relying on individual event data.
Use case: Processing S3 file uploads
Consider a scenario where you have a webhook configured to receive S3ObjectCreated events. When users upload five files in quick succession:
Without debouncing: Five separate flow runs are triggered, one for each file.
With debouncing: One flow run is triggered after all uploads complete, processing all five files together.
Implementing debouncing
Use a reactive trigger with matching `within` and `schedule_after` values:
Define in prefect.yaml
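A minimal sketch of the trigger in `prefect.yaml`. The deployment name, entrypoint, work pool, bucket resource, and the two-minute values are placeholders, and the exact placement of `schedule_after` in the trigger definition is an assumption to verify against your Prefect version:

```yaml
deployments:
  - name: process-s3-uploads            # placeholder deployment
    entrypoint: flows/process_uploads.py:process_uploads
    work_pool:
      name: my-work-pool
    triggers:
      - type: event
        enabled: true
        posture: Reactive
        expect:
          - "S3ObjectCreated"           # event name emitted by your webhook
        match:
          prefect.resource.id: "my-bucket.*"
        threshold: 1
        within: 120                     # debounce window, in seconds
        schedule_after: 120             # assumed field: delay the run until the burst settles
```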
Define in Python with .serve
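An equivalent sketch with `.serve`, assuming the `DeploymentEventTrigger` shape from `prefect.events`; the flow name, bucket resource, and the `schedule_after` keyword are placeholders or assumptions to confirm against the current Prefect API:

```python
from datetime import timedelta

from prefect import flow
from prefect.events import DeploymentEventTrigger


@flow
def process_uploads():
    # Query the bucket directly and process everything found there,
    # rather than relying on the single triggering event's payload.
    ...


if __name__ == "__main__":
    process_uploads.serve(
        name="process-s3-uploads",
        triggers=[
            DeploymentEventTrigger(
                expect={"S3ObjectCreated"},              # event name from your webhook
                match={"prefect.resource.id": "my-bucket.*"},
                posture="Reactive",
                threshold=1,
                within=timedelta(minutes=2),             # debounce window
                # Assumption: where the run delay is configured; verify the
                # exact field name against your Prefect version.
                schedule_after=timedelta(minutes=2),
            )
        ],
    )
```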
How it works
When you configure a reactive trigger with both `within` and `schedule_after`:
- First event arrives: The automation fires and schedules a deployment run
- Additional events within the window: These events are recorded but don’t trigger additional runs
- Deployment runs after the delay: By the time the run starts (after `schedule_after`), all events from the burst have occurred
- Flow processes everything: Your flow queries the source system and processes all available items
The `within` parameter implements eager debouncing: it fires immediately on the first event, then ignores subsequent events for the specified duration.
The `schedule_after` parameter delays the actual flow run, ensuring all events in the burst have completed before processing begins. This implements late debouncing.
Using both parameters together prevents duplicate runs while ensuring your flow has access to all events from the burst.
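As an illustration, suppose both values are set to two minutes: an upload at t=0 fires the automation and schedules a run for t=2m; uploads at t=20s and t=70s fall inside the `within` window and are recorded without firing again; when the run starts at t=2m, the flow lists the bucket and processes all three files together.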
Choosing the right time window
The appropriate time window depends on your use case:
- Rapid API events: 30-60 seconds
- Batch file uploads: 2-5 minutes
- Large file transfers: 15-30 minutes
Design flows for batch processing
Since automations can only pass one event’s context, design your flows to discover and process all available work (see the sketch after this list):
- Query the source system directly rather than relying on event data
- Process all available items, not just one
- Use idempotent operations that can safely handle re-processing
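A minimal sketch of a batch-oriented flow, assuming `boto3` and a placeholder bucket and prefix; re-listing the bucket on every run keeps the work discoverable and safe to repeat:

```python
import boto3

from prefect import flow, task


@task
def process_object(bucket: str, key: str) -> None:
    # Placeholder processing step; keep it idempotent so re-processing
    # the same key on a later run is harmless.
    print(f"processing s3://{bucket}/{key}")


@flow
def process_uploads(bucket: str = "my-bucket", prefix: str = "incoming/") -> None:
    # Discover all pending work directly from S3 instead of trusting the
    # single event that triggered this run.
    s3 = boto3.client("s3")
    response = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    for obj in response.get("Contents", []):
        process_object(bucket, obj["Key"])
```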
Combining with concurrency limits
For additional control, combine debouncing with deployment concurrency limits to prevent overlapping runs (see the example after this list):
- Only one run executes at a time
- New runs are cancelled if one is already running
- Events are debounced to prevent excessive run creation
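One way to express this combination in `prefect.yaml`, assuming your Prefect version supports a deployment-level `concurrency_limit` with a cancel-on-collision strategy (the field names here are assumptions to check against your version):

```yaml
deployments:
  - name: process-s3-uploads
    entrypoint: flows/process_uploads.py:process_uploads
    # Assumed fields: allow only one concurrent run and cancel any run
    # that would start while another is still executing.
    concurrency_limit:
      limit: 1
      collision_strategy: CANCEL_NEW
```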
What happens to subsequent events?
Events that arrive during the `within` window are still recorded in Prefect’s event system:
- You can view them in the Event Feed
- They can be queried at the start of the flow run
- They’re tracked for audit and debugging purposes
- They don’t trigger additional automation actions
Further reading
- To learn more about reactive triggers, see the Events documentation
- For details on deployment triggers, see the Creating deployment triggers guide
- For webhook configuration, see the Webhooks guide