
Why you should consider building static sites

Published: 2023-06-02
Updated: 2023-10-31

If you’re building web pages in production, consider building and hosting them statically. This can simplify your life while benefiting your customers and your business. The benefits come from three properties of static pages:

  1. Generally good speed and performance
  2. Amenability to graceful degradation
  3. Simple, reliable, managed hosting options

Speed: good for the user and the business

This isn’t always true (and check out my article on when server-side rendering is preferable to static for performance reasons), but generally static pages are going to be faster to fetch and load than server-side rendering (SSR) or client-side rendering (CSR). Shorter load times create a nice experience for your user, and they have also been shown to improve conversion rates, which leads to better business outcomes.

The shorter load times are due to the rendering—or really the lack thereof. For a static site, the HTML is all rendered in advance, so all your server has to do is serve a plain HTML file to your user. There may be some hydration required in the user’s browser, but as long as you aren’t blocking your whole page waiting on that hydration, you can most likely give your user a contentful paint right away. The perceived load time is short.
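To make that concrete, here’s a minimal sketch of non-blocking hydration. The signup form and the `/api/signup` endpoint are hypothetical: the markup already exists in the pre-rendered HTML, and a small module script simply attaches behavior after the document has been parsed.

```typescript
// hydrate.ts - included as <script type="module" src="/hydrate.js"></script>.
// Module scripts are deferred by default, so the browser paints the
// pre-rendered HTML before any of this runs.

function hydrateSignupForm(): void {
  // The form markup is already in the static HTML; we only attach behavior.
  const form = document.querySelector<HTMLFormElement>('#signup-form');
  if (!form) return;

  form.addEventListener('submit', async (event) => {
    event.preventDefault();
    const email = new FormData(form).get('email');
    // Hypothetical endpoint; swap in whatever your backend exposes.
    await fetch('/api/signup', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ email }),
    });
  });
}

hydrateSignupForm();
```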

With SSR though, the HTML needs to be rendered when a request comes to the server. Depending on how much HTML is being rendered, and what data is being fetched before rendering, there can be a lengthy response time. Users won’t see any content during this time, since there isn’t any HTML to start processing until your server responds.
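For contrast, here’s a rough sketch of that per-request work using a bare-bones Express handler. The framework, route, and upstream API are all placeholders; the point is that every request pays for the data fetch and the render before any HTML goes out.

```typescript
// A minimal SSR sketch. Every request waits on the upstream fetch and the
// render before the user receives a single byte of HTML.
import express from 'express';

const app = express();

app.get('/products', async (_req, res) => {
  // Hypothetical upstream service; this round trip happens on every request.
  const products: { name: string; price: number }[] =
    await fetch('https://api.example.com/products').then((r) => r.json());

  // Rendering also happens on every request, even if the data rarely changes.
  const html = `<!doctype html>
<html><body>
  <ul>${products.map((p) => `<li>${p.name}: $${p.price}</li>`).join('')}</ul>
</body></html>`;

  res.send(html);
});

app.listen(3000);
```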

Further, fetching data and rendering HTML can be demanding on the server, causing it to slow down under load. If the data being fetched is unique for each incoming request, this may be unavoidable. But I’ve often seen engineers use SSR (or CSR) to fetch data that actually isn’t changing from request to request, meaning that there’s a lot of time being wasted making calls and rendering HTML when it could’ve been done once.
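If the data really is the same for every request, that work can move to build time. A minimal sketch, assuming the same hypothetical upstream API as above:

```typescript
// build.ts - run once at build time (e.g. with ts-node), not per request.
// The product list is fetched a single time and baked into a static artifact.
import { writeFileSync, mkdirSync } from 'node:fs';

async function build(): Promise<void> {
  // Same upstream call as the SSR example, but paid once per build.
  const products: { name: string; price: number }[] =
    await fetch('https://api.example.com/products').then((r) => r.json());

  const html = `<!doctype html>
<html><body>
  <ul>${products.map((p) => `<li>${p.name}: $${p.price}</li>`).join('')}</ul>
</body></html>`;

  mkdirSync('dist', { recursive: true });
  writeFileSync('dist/products.html', html);
}

build();
```

The fetch and the render now happen once per deploy instead of once per request, and the output is just a file to serve.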

Lastly, CSR needs to render HTML as well, but it all happens in the user’s browser. A small bit of HTML is retrieved from a server, which then instructs the user’s browser to pull some JavaScript, which then pulls everything else and renders a page. This is typically slower than SSR, since all that data fetching now needs to come from a geographically-distant user, rather than from a server that’s probably located in the same data center. The user still won’t see anything in their browser until all the fetching and rendering is complete. The perceived load time is much longer.
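A stripped-down sketch of that flow (element ID and API URL are placeholders) looks something like this:

```typescript
// app.ts - the CSR entry point. The server returned only a near-empty shell
// (e.g. <div id="root"></div>), so nothing is visible until this finishes.
async function renderProducts(): Promise<void> {
  const root = document.getElementById('root');
  if (!root) return;

  // The data fetch now originates from the user's browser, across the public
  // internet, instead of from a server sitting near the data source.
  const products: { name: string; price: number }[] =
    await fetch('https://api.example.com/products').then((r) => r.json());

  root.innerHTML = `<ul>${products
    .map((p) => `<li>${p.name}: $${p.price}</li>`)
    .join('')}</ul>`;
}

renderProducts();
```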

Even with a static page, there are some good rules to follow for optimal performance. For one, try to keep your time to interactive short; generally you’ll want to keep it under 3 seconds. Users will leave your page if it takes a long time to load, and the ones who stay will be less likely to convert. The most practical way to achieve a low time to interactive is to keep a strict page weight budget: the less data your user needs to fetch and process, the faster their load times will be. To hit that 3 second mark, a good page weight budget is 800 kB for everything: HTML, CSS, JS, images, fonts, and so on. This takes discipline, but I can say from my own testing that taking a page from an 8 second load time down to a 3 second load time increased conversion rate on that page by 7 percentage points. It’s worth being disciplined.
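One way to enforce that budget is to fail the build when the output gets too heavy. Here’s a rough sketch that sums everything under a `dist/` directory against the 800 kB figure; a real check would count only the assets a given page actually loads, so treat this as a starting point.

```typescript
// check-budget.ts - a rough build-time guard for the 800 kB page weight budget.
import { readdirSync, statSync } from 'node:fs';
import { join } from 'node:path';

const BUDGET_BYTES = 800 * 1024;

// Recursively sum the size of every file under a directory.
function totalSize(dir: string): number {
  return readdirSync(dir).reduce((sum, entry) => {
    const path = join(dir, entry);
    const stats = statSync(path);
    return sum + (stats.isDirectory() ? totalSize(path) : stats.size);
  }, 0);
}

const total = totalSize('dist');
if (total > BUDGET_BYTES) {
  console.error(`Page weight ${total} bytes exceeds the ${BUDGET_BYTES} byte budget`);
  process.exit(1); // fail the build so the regression never ships
}
```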

Graceful degradation

We’ve seen that static pages are intrinsically performant because they’re pre-rendered. This same property tends to make it easier to gracefully degrade a static page as well.

Graceful degradation is all about providing as much functionality and information to a user as possible despite errors. For instance, when a page can’t fetch data, it should present the user with sensible defaults, pull some fallback data, or show a nice, user-friendly error. Or if a page has trouble making an outbound request, it should retry and then show a user-friendly error message with resolution steps if applicable.
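As a sketch of that retry-then-fallback pattern (the endpoint, retry count, and fallback data are all illustrative):

```typescript
// Fetch reviews with a couple of retries, then degrade gracefully.
interface Review { author: string; text: string }

// Sensible default: render the page without reviews rather than breaking it.
const FALLBACK_REVIEWS: Review[] = [];

async function fetchReviews(retries = 2): Promise<Review[]> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const res = await fetch('/api/reviews');
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return (await res.json()) as Review[];
    } catch (err) {
      if (attempt === retries) {
        // Out of retries: log for ourselves, show defaults to the user.
        console.error('Reviews unavailable, using fallback', err);
        return FALLBACK_REVIEWS;
      }
    }
  }
  return FALLBACK_REVIEWS; // unreachable, but keeps the compiler happy
}
```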

In theory, any web page can gracefully degrade. It simply requires disciplined error handling. As a developer, disciplined error handling comes down to two things:

  1. Make it easy for yourself to identify and handle failure modes.
  2. Reduce the number of possible failure modes as much as you can.

This is where static webpages really shine: they make it easy to identify your failure modes, and they reduce some of the potential failures your page might encounter.

First, static pages make finding errors easy. This is because, much like compiling code, static pages have to be built before they can be served. (This assumes you’re using a framework rather than writing vanilla HTML/CSS/JS.) If the build fails for any reason, there won’t be any artifacts to deploy. As such, you’re forced to resolve any errors that might prevent your page from rendering before the page makes it to production. A server-side rendered or client-side rendered page, on the other hand, is rendering at runtime. If you don’t have sufficient test coverage, some rendering issues might not come up until a user clicks into a particular page. By then it’s too late. The user has seen your bug.

Second, static pages reduce the number of failure modes you might encounter. Since the build process verifies that the pages will render, we can take rendering issues off of our list of concerns. Now our runtime issues mostly revolve around requests: either the requests made to hydrate the page, or the outbound requests triggered by user interactions. That’s a much smaller set of errors that we need to write error handling code for!

But the benefits don’t stop there: static pages can also prevent a whole host of issues that crop up around serving the page. Particularly for server-side rendered pages, any issue with the server becomes an issue with your page. If you’ve got a bug in some of your data fetching, an exception might bubble up and prevent your page from rendering. Unless you’ve handled that error case, your client might simply receive a 500 response. Or, let’s say your server has been receiving a lot of traffic recently. Since server-side rendering can be taxing on your servers, it’s possible that you’d start to drop requests. If that happens, your users won’t get a gracefully degraded page—they’ll likely get a white page with 504 Gateway Timeout plastered across the top instead!

By switching to a static page, you reduce the amount of work your servers have to do, which reduces the likelihood of errors or of overloading them. In fact, you set yourself up to take advantage of some no-code solutions that are fully managed and extremely reliable, like serving your page with Amazon S3. You can front it with a CloudFront CDN to cache assets geographically close to your users, thereby improving performance and reliability even more!
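As one possible shape for that setup, here’s a sketch using the AWS CDK. The bucket, construct IDs, and `./dist` path are placeholders, and plenty of other tools (the AWS console, Terraform, another host entirely) get you to the same place.

```typescript
// static-site.ts - a CDK sketch: private S3 bucket, CloudFront in front,
// and a deployment step that uploads the build output.
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import { Construct } from 'constructs';

class StaticSiteStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Private bucket holding the built artifacts.
    const bucket = new s3.Bucket(this, 'SiteBucket');

    // CDN in front of the bucket: caching close to users, managed TLS, no servers.
    const distribution = new cloudfront.Distribution(this, 'SiteDistribution', {
      defaultBehavior: { origin: new origins.S3Origin(bucket) },
      defaultRootObject: 'index.html',
    });

    // Upload the build output and invalidate stale cached copies on deploy.
    new s3deploy.BucketDeployment(this, 'DeploySite', {
      sources: [s3deploy.Source.asset('./dist')],
      destinationBucket: bucket,
      distribution, // triggers a CloudFront invalidation after upload
    });
  }
}

const app = new App();
new StaticSiteStack(app, 'StaticSiteStack');
```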

Set it and forget it

That segues nicely into this last benefit, which is that static sites are dead-simple to host and manage.

With SSR, you need to be quite careful with your servers. Since rendering can be a pretty demanding operation, there’s a chance you’ll overload them. That means users will see a blank page while waiting for a response, or worse, they’ll get timeout errors that aren’t user-friendly at all. To avoid that scenario, you need to carefully allocate CPU and memory, configure autoscaling, and set up monitors and alerts.

However, if you’ve got some static artifacts, and you upload them to S3 and front them with CloudFront, then you can skip all of that. (You could also use the Google Cloud or Microsoft Azure equivalents, or other hosting or CDN services as you see fit.) There’s no CPU, memory, or autoscaling configuration to do, and your monitoring and alerting is simplified. Because S3 and CloudFront are both managed, you don’t need to care about the server details. Now you can simply monitor your 4XX and 5XX rate, your number of requests, and your cache hit/miss rate. Together, those will help you figure out if you’ve accidentally released a broken artifact, or if there are performance opportunities you’re missing out on (like caching certain file paths, or caching for a long enough period of time). But any of the issues you might’ve had with serving your page are handled for you with this approach.
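As a sketch of what that monitoring might look like, continuing the CDK example from earlier: the threshold below is a placeholder to tune to your traffic, and the cache hit rate metric requires the distribution’s additional metrics to be enabled.

```typescript
// monitoring.ts - alarm on the CloudFront 5xx error rate. 4xxErrorRate,
// Requests, and CacheHitRate can be wired up the same way.
import { Duration } from 'aws-cdk-lib';
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import { Construct } from 'constructs';

// Call this from the stack above with distribution.distributionId.
function addErrorRateAlarm(scope: Construct, distributionId: string): cloudwatch.Alarm {
  const fiveXxRate = new cloudwatch.Metric({
    namespace: 'AWS/CloudFront',
    metricName: '5xxErrorRate',
    dimensionsMap: { DistributionId: distributionId, Region: 'Global' },
    statistic: 'Average',
    period: Duration.minutes(5),
  });

  return new cloudwatch.Alarm(scope, 'High5xxErrorRate', {
    metric: fiveXxRate,
    threshold: 1, // percent; placeholder, tune to your traffic
    evaluationPeriods: 3,
  });
}
```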

Conclusion

Overall, if you’re using SSR or CSR to render pages that aren’t changing much, or that only require a small amount of hydration, do yourself a favor and serve them statically instead. You will reduce load times for your users, increase your conversion rates, catch rendering bugs before they reach production, and strip away much of the complexity around serving your pages. Your users, your business, and your coworkers who are on-call will all thank you!