Attending to Cloud Performance
Apr. 12, 2022

How I improved Prefect's Cloud UI performance.

Evan Sutherland, Senior Software Engineer

At Prefect we are constantly striving to eliminate negative engineering and make our users' lives better. We want our users to love our product and we also encourage them to tell us when they don’t.

One piece of feedback we heard was that the UI was not as performant as it should be at the scale of data our users regularly handle in production workflows.

In response to this feedback, my first objective as Prefect's new Senior Front-end Developer is to discover and alleviate these issues in the Cloud UI. This objective is a moving target that's unlikely to ever conclude, but we're excited to share the progress that's been made.

A simple place to start is evaluating dependencies to reduce the amount of data the browser has to download to use the site. With help from Webpack Visualizer by @chrisbateman, I found that 25% of our webpack bundle could be cut with some simple tree shaking of moment.js, highlight.js, and lodash alone.
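For lodash in particular, most of the savings come from importing individual function modules instead of the whole package, so the bundler can leave the rest out. A hedged illustration (the specific function is an example, not necessarily one Prefect uses):

```js
// Before: pulls the entire lodash library into the bundle
import _ from 'lodash'
const onScroll = _.debounce(handleScroll, 300)

// After: only the debounce module (and its internals) ends up in the bundle
import debounce from 'lodash/debounce'
const onScroll = debounce(handleScroll, 300)
```

For moment.js, a common equivalent is dropping the unused locale files with webpack's ContextReplacementPlugin, since the locales account for most of moment's size.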

A pretty significant reduction came from font-awesome, which I was able to shrink from 11.4MB to 386KB. I did this through font-awesome's Pro Subsetter tool, which lets us include only the icons we actually use in the application instead of loading all of their nearly 8,000 icons. This is great for getting users into the app as quickly as possible: a smaller bundle means less your browser has to download before the site can run.


Graph output from Webpack Visualizer showing the relative size of each npm package in the node_modules folder

One of the most overlooked ways performance takes a hit is failing to virtualize large collections of components. We noticed the dashboard suffered significant performance issues when 200+ scheduled flow runs backed up in the upcoming-runs tile, so we used Vuetify's v-virtual-scroll to virtualize the lists in the dashboard tiles.

By replacing the list with a virtual scroller, we went from rendering 200+ DOM nodes to 3-5, depending on the tile. Browsers can only maintain so many nodes; too many nodes, or too deep a node tree, increases memory consumption.
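As a sketch of what that looks like with Vuetify's v-virtual-scroll (the item component and property names here are hypothetical, not Prefect's actual markup), only the rows that fit inside the scroller's viewport are ever mounted:

```html
<!-- Renders roughly height / item-height rows at a time,
     recycling DOM nodes as the user scrolls -->
<v-virtual-scroll :items="upcomingRuns" :item-height="64" height="320">
  <template v-slot:default="{ item }">
    <flow-run-list-item :key="item.id" :run="item" />
  </template>
</v-virtual-scroll>
```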


Late runs dashboard widget with 1,000+ late runs in the list

This investigation of the upcoming runs tile also led me to the duration-span component used here to calculate relative date time strings like "6 days, 23 hours, 59 minutes."


Snippet of our time span component showing "6 days, 23 hours, 59 minutes behind schedule"

In order to keep these numbers accurate as time elapses, we had a 1000ms interval running on each instance. That makes sense when seconds are displayed, but often the duration is only precise down to the hour or minute, which means a lot of wasted computation.

Instead, I set an initial timeout that lands us at the start of the next smallestBlock, then use setInterval to update only at that block's interval.
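The idea can be sketched in plain JavaScript (the function names and unit thresholds here are my assumptions, not Prefect's actual component code):

```js
// Pick the refresh interval from the smallest unit the string actually displays.
function smallestUnitMs (durationMs) {
  if (durationMs < 60 * 1000) return 1000             // showing seconds
  if (durationMs < 60 * 60 * 1000) return 60 * 1000   // showing minutes
  return 60 * 60 * 1000                               // showing hours and up
}

// Delay until the displayed value next changes, so the first repaint
// lands exactly on a unit boundary instead of drifting.
function msUntilNextTick (elapsedMs, unitMs) {
  return unitMs - (elapsedMs % unitMs)
}

function scheduleUpdates (elapsedMs, render) {
  const unitMs = smallestUnitMs(elapsedMs)
  // One timeout to reach the next boundary, then a steady interval.
  return setTimeout(() => {
    render()
    setInterval(render, unitMs)
  }, msUntilNextTick(elapsedMs, unitMs))
}
```

A duration shown as "6 days, 23 hours, 59 minutes" only needs one repaint per minute instead of one per second, which is 60x fewer renders per instance.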

So the dashboard now renders a fraction of the nodes, and each node is more performant than before. However, the tiles were still polling for that large set of data, and many components do the same: I discovered 60+ components with individual Apollo GraphQL queries polling at intervals between 1 second and 2 minutes, regardless of whether the component was visible. This was easy to fix with a Vue directive that wraps the Intersection Observer API. Now these components only poll when they're actually visible.
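A minimal sketch of such a directive, using Vue 2 directive hooks (the directive name and callback shape are my assumptions, not Prefect's actual implementation):

```js
// Registered as e.g. Vue.directive('poll-visible', pollVisible)
const pollVisible = {
  inserted (el, binding) {
    const { onVisible, onHidden } = binding.value
    // Fire onVisible when the element enters the viewport, onHidden when it leaves.
    el._observer = new IntersectionObserver(([entry]) => {
      entry.isIntersecting ? onVisible() : onHidden()
    })
    el._observer.observe(el)
  },
  unbind (el) {
    // Disconnect so removed elements aren't kept alive by the observer.
    if (el._observer) el._observer.disconnect()
  }
}
```

With vue-apollo, the callbacks could call the smart query's `startPolling(interval)` and `stopPolling()` methods, so an off-screen tile costs nothing.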

Similarly, our Vuex state stores some root-level data that was set up to poll continually. We store a list of tenants, agents, projects, and flows in Vuex, and we populated those stores with global Apollo queries registered in App.vue and refreshed on a regular cadence even if no components were actually reading the data. Now we have a Vuex module that tracks component subscriptions to these data sources and polls only when at least one component is subscribed.
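The core of that subscription tracking is reference counting: start the timer on the first subscriber, stop it on the last unsubscribe. A framework-free sketch under those assumptions (the factory name is hypothetical; in the real app this state would live in a Vuex module):

```js
// Each data source polls only while at least one component is subscribed.
function createPollingSource (fetch, intervalMs) {
  let subscribers = 0
  let timer = null
  return {
    subscribe () {
      subscribers++
      if (subscribers === 1) {
        fetch()                                 // refresh immediately
        timer = setInterval(fetch, intervalMs)  // then poll on a cadence
      }
    },
    unsubscribe () {
      subscribers = Math.max(0, subscribers - 1)
      if (subscribers === 0 && timer) {
        clearInterval(timer)                    // stop polling when unused
        timer = null
      }
    },
    get polling () { return timer !== null }
  }
}
```

Components would then subscribe in a lifecycle hook like `created` and unsubscribe in `beforeDestroy`, so the count always reflects what is actually mounted.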

When investigating performance issues within a Vue app, it's important to look at the use of watchers. There were a handful of watchers that were deep and didn't need to be. For example, a watcher on $route only really needed to trigger side effects when $route.params.tenant changed; watching that specific keypath directly avoids re-running the handler on every route change.
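In Vue 2's options API, watch keys accept dot paths, so the narrower watcher can be written as follows (loadTenant is a hypothetical method, for illustration only):

```js
export default {
  watch: {
    // Before: '$route': { handler, deep: true } fired on every route change.
    // After: fires only when the tenant segment actually changes.
    '$route.params.tenant' (tenant, previousTenant) {
      this.loadTenant(tenant)
    }
  }
}
```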

Lastly, there were watchers that only really needed to know when data was loaded. For example, our team-side-nav watches flows and projects to keep its tree-style items current. That seems innocuous, except those watchers keep executing even when the side nav is closed. I could just check isOpen inside the watcher, but even better is to use Vue's $watch API so the watcher only exists while the nav is open, and not when it's closed.
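A sketch of that pattern with vm.$watch, which returns an unwatch function (isOpen and rebuildTree are illustrative names, not necessarily the component's real ones):

```js
export default {
  watch: {
    isOpen (open) {
      if (open) {
        // Create the watcher only while the nav is visible;
        // immediate: true refreshes the tree on open.
        this.unwatchFlows = this.$watch('flows', this.rebuildTree, { immediate: true })
      } else if (this.unwatchFlows) {
        this.unwatchFlows()   // tear the watcher down while closed
        this.unwatchFlows = null
      }
    }
  }
}
```

This way a closed nav pays zero cost when flows or projects churn, instead of recomputing a tree nobody can see.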

These changes have accumulated into something worth sharing, but the quest is not over. It's been exciting to contribute to such a fast-moving project where we can bring new features and improvements to users quickly. I'm especially excited to see the focus on bringing delight to the user's experience. I hope these changes make a difference and bring some delight to your experience. We welcome you to tell us how we're doing by joining our community Slack!

Happy engineering!
