How We Gave a Redux-powered React App a Speed Boost
A client of ours has a long-lived React application using an architecture that was common at the time of its creation: Redux with thunks and sagas, with Redux Form powering the majority of forms throughout the UI, which is rich with input fields. If you’re familiar with this combination of technologies, you might already suspect why they reached out to us to help fix their app’s performance issues.
At Lab Zero, we love React for its simplicity and performance. It’s still our go-to tool even in the face of newer UI libraries that steal thunder every now and then. That said, any app built with any architecture can start to feel sluggish if its functionality continues to grow with few performance considerations in mind.
This was the case with our client’s app, where any interaction would trigger a series of Redux state updates, causing much of the UI to re-render itself unnecessarily. Many customers would simply be unable to use the app, as just typing at a regular pace slowed the app to a crawl.
Our aim was to get the app to a state where typing and clicking through the app would not cause any perceptible performance issues for the majority of customers.
What’s taking so long?
React apps need to retrieve and store data so they can present it to the user. One of the more common methods of doing this is by using the Redux architecture, which persists all of this data in a central store. React Redux then provides this store to any React component in the app that requests it, via mapper functions that pick necessary parts of the store’s state and provide them to the underlying component.
A major issue with this architecture is that every connected component has to run its mapper function each time the store updates, regardless of whether the update concerns that component.
Let’s say we have an app that displays detailed info about a list of names. The app has a store whose state consists of these names in their original form:
['Bob', 'Enzo', 'Dot']
Being an older app, it uses Redux’s connect function instead of hooks. For each value in the state, there is a component instance connected to it using a mapStateToProps function, which looks something like this:
const mapStateToProps = (state, ownProps) => {
  const value = state[ownProps.index];
  return {
    value,
    metadata: {
      index: ownProps.index,
      length: value.length,
      reversed: value.split('').reverse().join(''),
      uppercase: value.toLocaleUpperCase(),
    },
  };
};

connect(mapStateToProps)(NameComponent);
The function receives the full state, finds the value, and returns an object containing it, as well as some metadata: the value’s index, its length, the value reversed, and the value in uppercase. This object is then passed as props to the underlying component, which then displays the information.
Each value is also editable. Clicking the name reveals an input field, where each keystroke results in a change event that immediately dispatches an action that updates the value in the store.
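Here’s roughly what that update path looks like (a minimal sketch; NameField, the UPDATE_NAME action, and the reducer are illustrative, not the demo’s exact code):

// Inside the connected name component: each keystroke dispatches a store update immediately
const NameField = ({ dispatch, index, value }) => (
  <input
    value={value}
    onChange={(event) =>
      dispatch({ type: 'UPDATE_NAME', index, value: event.target.value })
    }
  />
);

// The reducer replaces only the name that changed
const namesReducer = (state = ['Bob', 'Enzo', 'Dot'], action) => {
  switch (action.type) {
    case 'UPDATE_NAME':
      return state.map((name, i) => (i === action.index ? action.value : name));
    default:
      return state;
  }
};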
Putting aside the fact that a Redux store is wholly unnecessary for such a simple use case—these string calculations could be done at the component level instead—there are other architectural concerns.
In particular, all three components’ mapStateToProps functions run every time the state updates, but the user’s interaction only changes one state value at a time, meaning two of these functions will be doing unnecessary work.
How much unnecessary work? If we click on “Enzo” and type “Matrix”, that means at least 6 store updates—one for each character typed. 6 store updates times 3 connected components equals 18 mapper function calls. Each function takes a string, splits it into an array, reverses it, joins it back into a string (thanks to the lack of a string reverse method in JavaScript), and also creates an uppercase variant. 18 mapStateToProps calls times 4 string/array operations equals 72 additional function calls. If not properly cached, these 18 mapper calls could also result in 18 component re-renders.
Keep in mind, this is just a simple example. Our client’s real app had hundreds of connected components that were performing array operations on lists thousands of records long.
There are several ways of mitigating this sort of issue.
Memoization
For starters, React Redux has a built-in check for whether the result of each mapper call is the same as the last time it ran. Since each mapper function returns an object, React Redux performs a shallow equality check, meaning it determines whether each value in the object is referentially equal to the corresponding value in the previous result. If every value matches, the underlying component will not re-render. This caching technique is referred to as memoization.
Deep equality
In the case of our demo app, though, the shallow equality check will fail each time: mapStateToProps returns a new metadata sub-object every time it runs. Since JavaScript compares objects by reference, a newly created object is never equal to a previous one, so React Redux will assume a new result with every call.
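To see why, compare two back-to-back results for the same name (a standalone sketch, not React Redux’s internal code):

const prevProps = mapStateToProps(state, { index: 1 });
const nextProps = mapStateToProps(state, { index: 1 }); // same state, same props

prevProps.value === nextProps.value;       // true: the same string from the store
prevProps.metadata === nextProps.metadata; // false: a brand-new object each call,
                                           // so the shallow check reports a change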
We can pass an additional options parameter to connect, specifying the particular type of equality check to use. If we use a deep equality function, such as Lodash’s isEqual:
connect(mapStateToProps, null, null, {
  areStatePropsEqual: isEqual,
})(NameComponent);
…then we don’t have to worry about whether we’re returning new objects, as this equality check will recurse through nested objects (and arrays) and determine equality of each sub-value instead.
This is the solution that our client had gravitated towards in most scenarios. In the interest of preventing component re-renders, they allowed mapper functions to do their work, then ran deep equality checks on their results. While this prevented some computation-intensive components from re-rendering, the added deep equality check introduced some intensive computation of its own.
We found that we could speed up the app by checking for deep equality only when truly necessary, or by writing custom areStatePropsEqual functions that compare only specific values.
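For example, in our demo app, such a custom check might compare just the raw value and skip the derived metadata entirely (a sketch, not the client’s actual code):

connect(mapStateToProps, null, null, {
  // Only compare the primitive value; the metadata is derived entirely from it,
  // so a deep check of that object would be redundant work.
  areStatePropsEqual: (nextProps, prevProps) => nextProps.value === prevProps.value,
})(NameComponent);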
Memoized selectors
Did you notice, though, that the above solution to customize areStatePropsEqual didn’t fully solve our unnecessary-work problem? This check might have prevented components from re-rendering, but in this case, the component is quite straightforward, with little to no logic of its own. We still have the problem of performing a large number of string/array operations upon every state update, which can become a problem at scale.
This is because we focused on memoizing the mapStateToProps output, but not its input. If the input, which in this case is a particular state value, remains the same as the last time the function ran, we shouldn’t have to determine its length, or its reversed or uppercase value, or shove these values into a new object at all.
This is where we have historically made use of libraries like Reselect, and its companion Re-reselect, to create selector functions that cache both their inputs and outputs. For example, if we write a Re-reselect selector that first finds the value and then computes the derived values:
import createCachedSelector from 're-reselect';

const getMetadata = createCachedSelector(
  (state, index) => state[index],
  (state, index) => index,
  (value, index) => ({
    index,
    length: value.length,
    reversed: value.split('').reverse().join(''),
    uppercase: value.toLocaleUpperCase(),
  })
)((state, index) => index);
…then call it within mapStateToProps:
connect((state, ownProps) => {
  const value = state[ownProps.index];
  return {
    value,
    metadata: getMetadata(state, ownProps.index),
  };
})(NameComponent);
…then we can be sure that the value for metadata will not be recalculated, and will therefore remain referentially equal, as long as state[index] remains the same.
This is usually the most performant approach to deriving data from the store. Each set of values that requires additional processing gets its own memoized selector, which means we can reuse the same selectors across all kinds of connected components.
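For instance, another connected component elsewhere in the app could read from the same cache (NameSummary here is a hypothetical component, purely for illustration):

const mapSummaryStateToProps = (state, ownProps) => ({
  // Reuses the cached result; nothing is recomputed unless state[ownProps.index] changes
  reversed: getMetadata(state, ownProps.index).reversed,
});

connect(mapSummaryStateToProps)(NameSummary);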
Memoized input
That said, for this particular case where 1) we’re only returning the value and its metadata and 2) we don’t plan on using this metadata in other components, we can likely avoid using Reselect entirely by telling React Redux how to determine whether the necessary part of the store’s state is the same as the last time mapStateToProps ran, by providing a custom areStatesEqual function:
connect(mapStateToProps, null, null, {
  areStatesEqual: (nextState, prevState, nextOwnProps, prevOwnProps) =>
    prevState[prevOwnProps.index] === nextState[nextOwnProps.index],
})(NameComponent);
This is different from areStatePropsEqual, which determines whether the output of mapStateToProps is the same. Here, if the relevant inputs are equal, mapStateToProps will not run at all.
Now we have more performant mapper functions which do a fraction of the work. But must these functions really run on every keystroke?
Reducing updates
As we mentioned before, our client’s app relies on Redux Form to manage the state of its forms—and there are a lot of them. In fact, the app mostly serves as a document authoring tool where the user can click to type anywhere within a large body of text, and each line is its own form field. In practice, this means there could be several hundred fields present at any given time.
Our example app above behaves much like Redux Form. For every field on the page (that appears when you click on a name), each keystroke emits a change action that immediately updates the global store. But as the author of Redux Form will tell you, this is poor architectural design. How often is it that a change to an in-progress form would affect other parts of the application? Would the app’s header, sidebar, or footer need to update while the user is typing into an unrelated form field?
This is why most other form libraries keep form state local rather than global. The closer the form values live to the actual form, the fewer parent components need to re-render. And no changes to the global state means no connect calls at all.
But sometimes it’s not that simple. For example, in our client’s app, entering anything into its multitudinous fields resulted in an auto-save. Programmatically this meant that Redux would dispatch some thunks or sagas that would initiate a fetch to the server every time the user pressed a key.
Server fetches comprise at least two asynchronous steps: the request and response. The response handler is often split into a success or failure handler, followed by a final fulfillment handler that cleans up for either case. In order to inform the user that an auto-save is happening (via loading indicators in the UI), the global state needs to be updated at least twice to show then hide these indicators.
Add that on top of the global state already updating when the field’s value changes, and now you have at least four action dispatches (change, request, response, fulfill), resulting in possibly thousands of component re-renders for each keystroke. A global state update could take dozens or even hundreds of milliseconds, resulting in the browser’s main thread constantly freezing, creating a poor typing experience for the user.
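Put together, a single keystroke triggered a chain roughly like this (a schematic redux-saga sketch; saveToServer and the SAVE_* action types are illustrative, not the client’s actual code):

import { call, put, takeLatest } from 'redux-saga/effects';

function* autoSaveSaga(action) {
  // 2. Announce the request; reducers flip loading flags on (another store update)
  yield put({ type: 'SAVE_REQUEST' });
  try {
    const savedRecord = yield call(saveToServer, action.payload);
    // 3. Commit the saved data to the store
    yield put({ type: 'SAVE_SUCCESS', payload: savedRecord });
  } finally {
    // 4. Clean up; loading flags flip off
    yield put({ type: 'SAVE_FULFILLED' });
  }
}

// 1. Redux Form's change action (one store update) also kicks off the auto-save
function* watchChanges() {
  yield takeLatest('@@redux-form/CHANGE', autoSaveSaga);
}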
Regardless of how efficient we could have made the myriad connect calls, it was clear that we needed to avoid Redux updates as much as possible. But removing Redux Form would have amounted to a full rewrite of the application. With all the time and resources in the world, this would have been the right approach—and our client knows this, having introduced a yearslong plan to overhaul their app’s architecture. But we knew we could find some wins in the shorter term.
Local state with debouncing
Our client’s app contains a number of custom input components (e.g. Input, Textarea, Select) that wrap a popular UI library’s components with some default props and behavior. These components are then passed into the Redux Form Field component, which wraps them with value and onChange props, allowing them to be controlled by Redux Form.
But it’s not required for each component to immediately call its onChange prop. We added a few features to each input component:
- local state containing the input’s value, which stays up to date when the Redux Form value changes
- a function that wraps and debounces Redux Form’s onChange listener, ensuring it runs at most once every 200 milliseconds
- a custom handleChange function that updates the local state and calls the debounced onChange
import { useCallback, useEffect, useMemo, useState } from 'react';
import debounce from 'lodash/debounce';

const CustomTextarea = ({ onChange, value }) => {
  // Internal state value starts with the initial Redux Form value
  const [internalValue, setInternalValue] = useState(value);

  // Create the debounced onChange callback
  const debouncedOnChange = useMemo(() => debounce(onChange, 200), [onChange]);

  // Create the change handler that sets both the internal and external value
  const handleChange = useCallback((event) => {
    setInternalValue(event.target.value);
    debouncedOnChange(event);
  }, [debouncedOnChange, setInternalValue]);

  // Update the internal value if the external value changes
  useEffect(() => {
    if (value !== internalValue) {
      setInternalValue(value);
    }
  }, [internalValue, value]);

  return <UiTextarea onChange={handleChange} value={internalValue} />;
};
Simply put, this causes the component to update immediately, but delays the update to the global store (necessary to trigger the auto-save and other side-effects) until the user has stopped typing for 200 milliseconds.
This gave us the immediate feedback we were looking for. The UI responded to our keypresses in under 16 milliseconds, i.e. within a single frame, since browsers visually update at 60 frames per second. The rest of the work was deferred until the user was no longer interacting with the page.
Batching updates
Having been around since before React 18, our client’s app had not been updated to support new features like automatic batching. Given the app would often dispatch multiple actions synchronously, such as Redux Form’s change action immediately followed by a request to auto-save, we observed a full Redux update cycle, followed by an app-wide React re-render, followed by another set of both.
We put in the work to move to the newer createRoot API, which enabled automatic batching. This was mostly straightforward, aside from a few places where components expected to receive props from one state update, immediately followed by different props from another state update. After ironing out the kinks, we immediately saw the fruits of our labor: one React re-render when changing/requesting, and one React re-render when receiving the successful response (committing new data to the store) and fulfilling it (turning off loading flags).
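The switch itself is a small change at the app’s entry point (a sketch; App and the root element ID stand in for the client’s actual setup):

import { createRoot } from 'react-dom/client';
import App from './App'; // wherever the root component lives

// Before React 18: ReactDOM.render(<App />, container);
// With createRoot, React batches state updates automatically
const container = document.getElementById('root');
const root = createRoot(container);
root.render(<App />);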
While this was a long time coming and also unlocked other helpful features like Suspense and useTransition, we realized we could take one more step toward reducing re-renders.
Skipping the reducers
When a Redux action is dispatched, it eventually hits the store’s reducers, then React Redux receives the new state and calls its connect callbacks scattered throughout the app.
But we found that there were some actions for which there were no reducers at all. In particular, some form fields in the app intentionally didn’t display any loading indicators while saving. These actions were still important, as saga code initiated fetches when they were dispatched, but even though the state was not being updated, React Redux went through the connect step regardless.
We found that we could avoid this last step by introducing some custom Redux middleware that prevented the actions from reaching the reducers:
const middlewares = [thunkMiddleware, sagaMiddleware];

// prevent REQUEST_SAVE from reaching the store, as it is only used
// within sagas.
const skipStoreMiddleware = () => (next) => (action) => {
  if (action.type !== 'REQUEST_SAVE') {
    next(action);
  }
};

middlewares.push(skipStoreMiddleware);

applyMiddleware(...middlewares);
Of course, we did our due diligence and wrote comments in the reducers where this action might otherwise be expected to have an effect, so if the state does need to be updated in the future, developers will know how to let these actions reach the reducers again.
Read-only mode
While our client’s app acts as a big notebook where the user can enter text into most places within the body of the page, we found that some fields were in a disabled state because their notes had been “finalized” and made read-only. But these disabled fields were still Redux Form Fields: connected to the store, receiving updates unnecessarily, and included in form validation when saving.
Checking further up the tree whether these notes had been finalized allowed us to avoid rendering form fields entirely: instead of rendering a Field, we could quickly render a simple div, cutting down on both connect calls and third-party UI component renders.
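In component terms, the branch looks something like this (a sketch; NoteLine and the finalized flag are illustrative names, not the client’s code):

import { Field } from 'redux-form';

const NoteLine = ({ finalized, name, value }) =>
  finalized ? (
    // Read-only: a plain element with no store subscription and no form validation
    <div className="note-line">{value}</div>
  ) : (
    // Editable: the usual Redux Form field wrapping our custom input
    <Field component={CustomTextarea} name={name} />
  );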
What’d we win?
After applying these various strategies to our client’s app, we used Google Chrome’s Performance profiler and took some measurements of the lengths of CPU tasks related to common interactions. Some tasks were either merged or split, so not all measurements line up perfectly.
When typing a single character into a textarea:
| | keystroke | redux change | save request | save response | total |
| --- | --- | --- | --- | --- | --- |
| Before | 46ms | 31ms | | 110ms | 187ms |
| After | 6ms | 28ms | | 43ms | 77ms |
| Savings | 40ms | | | 67ms | 110ms |
When typing into a subform that also saved its parent resource (explaining why the parent saves twice is outside the scope of this post):
| | keystroke | parent save request | parent save | child save request | child save | parent load | parent save request | parent save | total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Before | 72ms | 126ms | | 84ms | 822ms | | 178ms | 131ms | 1413ms |
| After | 17ms | 30ms | 42ms | 38ms | 177ms | 118ms | 55ms | 36ms | 513ms |
| Savings | 55ms | 96ms | | 46ms | 645ms | | 123ms | 95ms | 900ms |
We were able to increase the app’s responsiveness to keystrokes to the point where almost no frames were skipped while typing, and we decreased total CPU task duration by roughly 60%, but it’s apparent there’s even more work to do.
In particular, we recommended that our client establish a cross-functional team to monitor and address performance issues going forward. We knew that our improvements—memoization, debouncing, and skipping updates—could be applied further, as we only had time to address the most egregious performance issues. We also recommended that they bolster their analytics to provide more insight into what causes especially long CPU tasks, as it’s often hard to replicate production-level issues in a development environment. A performance engineer’s work is never done!
Hero photograph by Jason Chen on Unsplash
Lab Zero is a San Francisco-based product team helping startups and Fortune 100 companies build flexible, modern, and secure solutions.