Optimizing React Native (+ Expo)

React Native and Expo are a powerful duo. Instead of needing to write Java/Kotlin and/or Swift—not to mention not being able to create an iOS app without a Mac—it’s now easy to write and test native apps using a framework that’s familiar to many web developers. And while React Native is pretty easy to dive into for someone who’s worked with React before, performance can feel like more of a pain point when developing and testing mobile apps.

For one thing, running via Expo in development mode (i.e., expo start) will be slower than a production build (expo publish) because production mode minifies your code. This isn’t a huge cause for concern, since speeds on end users’ devices will align more closely with those of a published Expo build. That said, the disparity can highlight potential performance issues and make the development/testing process more frustrating.

If we ever needed a reminder to follow React (Native) best practices, here it is.

Measuring Performance

Before looking into actual optimization techniques, we first need to know how to measure an Expo React Native app’s performance. There are two main tools I’ve found helpful:

  • Expo Performance Monitor

    • Check out the Expo docs to see how to open up this menu on your device/emulator. It displays information about the app’s memory (RAM) usage, JavaScript heap, number of views, and frame rate for both the UI and JS threads.
  • React Profiler

    • This React component allows you to execute a callback when any component inside it finishes rendering. It’s easy to log out the component’s render time, giving insight into which areas of the app are performance bottlenecks (a minimal usage sketch follows this list).
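Here’s a minimal sketch of wiring up the Profiler. MyListScreen and the "ListScreen" id are placeholders for whatever part of your app you want to measure.

import { Profiler } from "react";

// Logs how long each render of the wrapped subtree took.
// `phase` is "mount" for the first render and "update" for re-renders.
const onRenderCallback = (id: string, phase: string, actualDuration: number) => {
  console.log(`${id} (${phase}): ${actualDuration.toFixed(1)}ms`);
};

// `MyListScreen` stands in for your own screen component.
const App = () => (
  <Profiler id="ListScreen" onRender={onRenderCallback}>
    <MyListScreen />
  </Profiler>
);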

Optimizations

One nice thing about React Native is that a noticeable boost in performance often doesn’t require a major overhaul of the code’s underlying logic. It can be as simple as moving code around, switching to built-in performant components, and relying on memoization. Techniques like lazy-loading and eliminating unnecessary re-renders are often the most efficient way to trim load times and create a more seamless UX.

I’ll walk through a few best-practice approaches that I’ve seen grant noticeable improvements to an app’s performance. For a deeper dive, check out the official React Native performance docs.

FlatList

Why use it?

Coming from writing React code, I instinctively reach for map when rendering lists. This can be fine in React Native, but if you’re rendering a lot of items (or if the items themselves are complex), it can slow things down to a crawl. Enter FlatList, a React Native component built with performance in mind. It has a few significant optimizations under the hood—such as its near-constant memory usage, regardless of the number of rows in the list—along with configuration options that can help you optimize for your specific use case.

// the old way
<ScrollView>
  {myListData.map((item) => (
    <ChildComponent
      key={item.id}
      // ...props
    />
  ))}
</ScrollView>

// the new way
<FlatList
  data={myListData}
  keyExtractor={(item) => item.id}
  initialNumToRender={10}
  renderItem={({ item }) => (
    <ChildComponent
      // ...props
    />
  )}
  onEndReached={fetchMoreData}
/>

The Metrics

I created a small demo app in order to get some benchmarks for comparing FlatList to a regular old ScrollView with mapped elements. The main screen of this app is a simple list with 100 items—each of which is a child component with some text, a button, and a couple other minor UI elements. Right off the bat, switching from the ScrollView list to a FlatList cut the app’s memory usage by more than 17%: an average of ~116 MB while using the ScrollView list, compared to about 96 MB with FlatList.

Next, to see how FlatList compares to a mapped ScrollView in terms of render speed, I plugged a React Profiler into the demo app and logged the render time for the parent component (which contains the list). I found that using a FlatList significantly sped up the initial render: an average of 54ms, compared to 320ms for the ScrollView list. For re-renders, the difference was almost as large: 62ms for the FlatList and 194ms for the ScrollView list. FlatList was the clear winner in this metric, slashing initial render times by 83% and re-render times by 68%. Even after scrolling down to the middle of the list—so more items would be present in the FlatList—it was never less than 25% faster than the mapped list.

The only caveat here is that FlatList does some additional rendering as you scroll through the list, since not all of its elements are mounted and rendered at the start. A mapped list does all of that work up front, so it won’t need to render more items as you scroll. Still, FlatList’s distributed approach saves a huge amount of time on the initial render—on top of additional time saved whenever a state change forces a re-render of the list.

               Memory Usage   Mount Time   Re-Render Time   Additional Renders During Scroll?
Mapped List    116 MB         320 ms       194 ms           No
FlatList       96 MB          54 ms        62 ms            Yes

The Why

Impressed by FlatList’s performance, I was curious about how else it differs from a mapped list. Crucially, it turns out that FlatList limits unnecessary list item re-renders. In React and React Native, if a state variable in a parent component changes, all child components will be re-rendered even if their props haven’t changed. If your parent component includes a list of many child components (especially if those child components are complex), this can result in a lot of overhead. However, FlatList saves time by not re-rendering every item, since many of them could be off-screen and thus don’t need to be updated.

In the demo app mentioned above, when using a mapped ScrollView list, changing a state variable in the parent component resulted in all 100 list items being re-rendered every time. With FlatList, only the list elements that were either visible or within a short scroll of visibility were re-rendered; I found this to be around 28-40 items, depending on the FlatList’s windowSize and the scroll position in the list.

This explains some of the improved render times mentioned above: FlatList is faster because it’s doing the minimum amount of work possible. A mapped list naively renders everything, taking the performance hit even if the user never scrolls through the entire list. FlatList’s approach is more of an “I’ll cross that bridge when I get there” mentality. Hence the much faster initial render (and limited re-renders), at the small tradeoff of needing to do the rest of the rendering as the user scrolls.
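If you want to check this behavior in your own app, a quick-and-dirty way to count re-renders is to log from inside the list item component. Here’s a minimal sketch; the item props are made up for illustration.

import { useRef } from "react";
import { Text, View } from "react-native";

// A simplified list item that logs every time it renders, so you can count
// how many items a parent state change actually touches.
const ChildComponent = ({ id, label }: { id: string; label: string }) => {
  const renderCount = useRef(0);
  renderCount.current += 1;
  console.log(`item ${id}: render #${renderCount.current}`);

  return (
    <View>
      <Text>{label}</Text>
    </View>
  );
};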

In terms of low-hanging fruit, it doesn’t get much easier than simply swapping in a FlatList component for a mapped ScrollView.

And finally, on top of these optimizations, the FlatList component makes it easy to set up lazy loading. In a real-world app, we obviously wouldn’t want to load an entire 100-item list all at once, especially since the list is probably dependent on data fetched from an API. By passing a load-more-items callback to FlatList’s onEndReached prop, we can ensure we never load (many) more list items than we need.
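Here’s a rough sketch of what that can look like. fetchPage, the Item type, and the page size of 20 are all made up for illustration.

import { useCallback, useState } from "react";
import { FlatList } from "react-native";

type Item = { id: string; title: string };

const PAGE_SIZE = 20;

const LazyList = () => {
  const [items, setItems] = useState<Item[]>([]);
  const [page, setPage] = useState(0);

  // Append the next page of results; `fetchPage` stands in for your real API call.
  const loadMore = useCallback(async () => {
    const nextItems = await fetchPage(page + 1, PAGE_SIZE);
    setItems((prev) => [...prev, ...nextItems]);
    setPage((prev) => prev + 1);
  }, [page]);

  return (
    <FlatList
      data={items}
      keyExtractor={(item) => item.id}
      renderItem={({ item }) => <ChildComponent {...item} />}
      onEndReached={loadMore}
      // Start fetching when the user is within half a viewport of the end.
      onEndReachedThreshold={0.5}
    />
  );
};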

Options for Advanced Usage

If you’re already using FlatLists and are looking to optimize further, there are a number of props you can tinker with. A couple that might be useful starting places (sketched just after this list) are:

  • initialNumToRender sets how many list items are rendered in the initial batch. As long as you’re sure that number will cover the visible list area on every device, this prop can greatly speed up the initial render.
  • windowSize controls how large a region of the list (measured in multiples of the visible viewport) is kept rendered around the current scroll position. Shrinking the window means fewer list items are rendered concurrently, which can free up memory. The tradeoff is an increased chance of seeing blank space while scrolling quickly, along with slightly less smooth scrolling.
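As a rough sketch of how those two props might be tuned together (the numbers here are illustrative, not recommendations):

<FlatList
  data={myListData}
  keyExtractor={(item) => item.id}
  renderItem={({ item }) => <ChildComponent {...item} />}
  // Roughly one screenful of items: enough to fill the viewport on the
  // largest device you support, but not much more.
  initialNumToRender={8}
  // Default is 21 (the visible viewport plus up to 10 viewports above and below).
  // Smaller values save memory but increase the chance of blank space while
  // scrolling quickly.
  windowSize={11}
/>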

The React Native FlatList docs offer more information for those looking to explore their options when it comes to list optimization. It may take some trial and error to find the best configuration for a specific use case, but if performance is a concern, it’s almost certainly worth your time.

Memoization

Those familiar with React will know that it ships with a variety of hooks, each with a specific purpose. For example, useState is a way to create and maintain state variables over the component’s lifetime, while useEffect allows the developer to execute side-effects when a given variable’s value changes. Other hooks, like useMemo and useCallback, allow the developer to leverage memoization—that is, caching a value so it isn’t recalculated unless it really needs to be.

useMemo

Take useMemo, for instance. This hook accepts a function and a list of dependencies. Whenever the dependencies change, the function will be re-executed, and the return value will be cached. This can save a lot of unnecessary work; instead of re-executing what could be an expensive function on every render, it’ll only be run when one of the variables it relies on (and thus, potentially its return value) has changed.

In a (very contrived) example, suppose we want to store a hash of some value. We need the hash to be updated whenever the original value changes, but the hashing operation itself is pretty resource-intensive and slow. Here’s what the code might look like initially.

import { useState, useMemo } from "react";

const MyFuncComponent = () => {
  // BAD: this function doesn't need to be re-declared on every render!
  const hashSomething = (value: string) => {
    // some time-intensive operations here
    // ...
    return hashedValue;
  };

  const [valueToHash, setValueToHash] = useState("");

  // BAD: this runs on every render, even if `valueToHash` hasn't changed
  const hashResult = hashSomething(valueToHash);

  // ...
};

We can fix this up with two small changes.

  • First, let’s move the hashSomething function declaration outside of our functional component. Right now, it’ll be re-declared on every render, and there’s no need for that. Moving it outside MyFuncComponent will make the code better organized and (slightly) faster.
  • Next, we can add a useMemo hook around the call to hashSomething, using this to memoize our hashResult variable.

Let’s break this second part down: if we call hashSomething() with the same argument 100 times, its return value will never change. Since the result is cached every time the function runs, those other 99 re-executions would be a huge waste of time. However, if the argument to hashSomething() does change, the return value likely will too, and we’ll want to re-execute the function to keep our state up to date.

Wrapping hashSomething(valueToHash) in a useMemo hook and putting valueToHash in the dependency array effectively tells React: “Hey, only re-run this code if valueToHash changes. Otherwise, there’s no need.”

So here’s what the above example might look like once it’s cleaned up:

import { useState, useMemo } from "react";

const MyFuncComponent = () => {
  const [valueToHash, setValueToHash] = useState("");

  // GOOD: `hashSomething` will only be re-executed when `valueToHash` changes
  const hashResult = useMemo(() => hashSomething(valueToHash), [valueToHash]);

  // ...
};

// GOOD: this won't be re-declared during a re-render
const hashSomething = (value: string) => {
  // some time-intensive operations here
  // ...
  return hashedValue;
};

Relying on useMemo, we can cut down on unnecessary operations. Performing a SHA-256 hash of a file might reasonably take one-tenth to one-fifth of a second (for a small file) or significantly longer for a large file. With a simple useMemo hook, we can avoid doing that work every time a re-render is triggered, saving us a noticeable amount of time on the aggregate.

useMemo isn’t just useful for when you’re doing difficult computations, though. Instead of filtering a list in your render function, for example, memoize the filtered list to avoid redoing the same operation. Generally, when there’s work being done in the render function—e.g., array manipulations like filter, map, reduce, etc.—and it can be memoized and moved outside the render function, do that. You’ll improve performance by running those methods only when necessary, and you’ll see the difference when interacting with your app.
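For instance, a memoized filter might look something like this sketch; the Item type and the items prop are made up for illustration.

import { useMemo, useState } from "react";

type Item = { id: string; name: string };

const SearchableList = ({ items }: { items: Item[] }) => {
  const [query, setQuery] = useState("");

  // Recomputed only when `items` or `query` changes, not on every render.
  const filteredItems = useMemo(
    () =>
      items.filter((item) =>
        item.name.toLowerCase().includes(query.toLowerCase())
      ),
    [items, query]
  );

  // ... render a FlatList of `filteredItems` plus a search input bound to `query`
  return null;
};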

useCallback

In the example above, we moved the hashSomething function definition outside of the functional component. I know what you’re thinking: what if your hashSomething function relies on state variables for its calculations? Moving the definition is no longer an option.

In this case, instead of moving the function definition, you can keep it where it is and wrap it in a useCallback hook. This will accomplish the same goal: the function will only be re-declared when a value in its dependency array changes.

For example, say this is what we’re starting with:

import { useState } from "react";

const MyFuncComponent = () => {
  const [valueToHash, setValueToHash] = useState("");
  const [valForCalculation, setValForCalculation] = useState(null);
  const [anotherStateVar, setAnotherStateVar] = useState(null);

  const hashSomething = (value: string) => {
    if (!anotherStateVar || !valForCalculation) {
      // ... do something
    } else {
      // ... do something else with those state variables
    }

    // some time-intensive operations here
    // ...
    return hashedValue;
  };

  // ...
};

hashSomething is being re-declared on every render, which we don’t want. The state variables that hashSomething depends on are valForCalculation and anotherStateVar (pardon the terrible variable names). Just like we did for the useMemo hook above, we’ll put those variables in the dependency array. The end result would look like this:

import { useState, useCallback } from "react";

const MyFuncComponent = () => {
  const [valueToHash, setValueToHash] = useState("");
  const [valForCalculation, setValForCalculation] = useState(null);
  const [anotherStateVar, setAnotherStateVar] = useState(null);

  const hashSomething = useCallback(
    (value: string) => {
      if (!anotherStateVar || !valForCalculation) {
        // ... do something
      } else {
        // ... do something else with those state variables
      }

      // some time-intensive operations here
      // ...
      return hashedValue;
    },
    [valForCalculation, anotherStateVar]
  );

  // ...
};

Great, now hashSomething will only be re-declared when either valForCalculation or anotherStateVar changes. Wrapping a function definition in useCallback isn’t generally necessary for every function you write, but there’s one case where it really matters: when the function is being passed as a prop to a child component.

Every time the function is recreated, even if its body looks exactly the same, the child component receives what React considers a new prop. This is due to referential equality: functions are compared by reference (their location in memory). When hashSomething is recreated, its reference is different since it’s a distinct function, and a child component that skips renders based on its props (for example, one wrapped in React.memo) takes that changed reference as a sign that it needs to re-render. By wrapping hashSomething in useCallback, we ensure its reference will not change unless a value in its dependency array changes, which cuts down on re-renders for any child component using it.

That’s a lot of words. Let’s demonstrate this in one more small example.

import { useState, useCallback } from "react";
import { View } from "react-native";

const MyFuncComponent = () => {
  const [valueToHash, setValueToHash] = useState("");
  const [valForCalculation, setValForCalculation] = useState(null);
  const [anotherStateVar, setAnotherStateVar] = useState(null);

  const hashSomething = useCallback(
    (value: string) => {
      // calculations involving state variables
      // ...
      return hashedValue;
    },
    [valForCalculation, anotherStateVar]
  );

  // other state variables used in the render function below
  // ...

  return (
    <View>
      {/* ... */}
      <MyChildComponent hashFunction={hashSomething} />
    </View>
  );
};

Before we memoized hashSomething, its reference would change across renders of MyFuncComponent, triggering re-renders of MyChildComponent. Now, though, hashSomething’s reference only changes when either valForCalculation or anotherStateVar changes, so MyChildComponent isn’t re-rendered unnecessarily. (The one assumption here is that MyChildComponent skips re-renders when its props haven’t changed, i.e., it’s wrapped in React.memo.)
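For completeness, here’s a minimal sketch of what that memoized child might look like; the button and its press handler are made up for illustration.

import { memo } from "react";
import { Pressable, Text } from "react-native";

type MyChildComponentProps = {
  hashFunction: (value: string) => string;
};

// memo() skips re-rendering when props are shallowly equal, which is what
// lets the stable `useCallback` reference above actually pay off.
const MyChildComponent = memo(({ hashFunction }: MyChildComponentProps) => (
  <Pressable onPress={() => console.log(hashFunction("example"))}>
    <Text>Hash something</Text>
  </Pressable>
));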

This can be confusing to wrap your head around at first, but it’s an important concept when it comes to performance. Cutting down on unwanted re-renders is often the quickest way to improve app speeds.

Memoizing Styles

For those who use Tailwind CSS, the tailwind-rn project allows you to continue using Tailwind’s class-based styles in React Native. Doing so looks like this:

import { View } from "react-native";
import { useTailwind } from "tailwind-rn";

const MyFuncComponent = (props: MyFuncComponentProps) => {
  const tailwind = useTailwind();
  // ...

  return (
    <View style={tailwind('max-w-xl mx-auto')}>
      {/* ... */}
    </View>
  );
};

However, if you have a lot of styles—maybe this is a large component with a lot of styled elements, or maybe it’s a component that’s rendered many times in a list—these tailwind() calls can eat up some extra time during re-renders.

One way to work around this is to wrap your styles in a useMemo hook so they aren’t recomputed on every render. Instead of the inline call above, we’d have something like:

// ...
const containerStyle = useMemo(() => tailwind('max-w-xl mx-auto'), [tailwind]);

return (
  <View style={containerStyle}>
    {/* ... */}
  </View>
);

This is perhaps not the prettiest code you could write, but it can have a small effect—0.1s of render time saved in local testing—on performance when you’re really in a crunch.

Recap

To run through the topics we touched on here:

  1. use FlatList instead of map, especially for lists that are long or involve complex components
  2. for variables whose values are calculated, especially when the calculations are resource-intensive, wrap them in a useMemo so they’re only recalculated when necessary
  3. when you’re declaring a function that’s passed as a prop to a child component, wrap the signature in a useCallback so you don’t force unwanted re-renders of the child component
  4. memoize Tailwind styles when you’re really worried about performance
  5. and finally, keep in mind that production builds are way faster than dev builds. If you’re using Expo, you can get a more realistic sense of your app’s performance by running a build