
Angular Production Builds

In this article I will discuss some of the alternatives for preparing an Angular application for production. I will look at several different approaches, and highlight the pros and cons.

I hope you will read the rest of the article, but in case you are a TL;DR person, I will give you the results of this experiment right away.

The following table shows load times and bundle sizes for the different bundling approaches I will discuss in this article.

Bundler                          Bundle size (kb)   Load time (seconds)
JiT SystemJS                     N/A                6
JiT SystemJS-builder             260                2
AoT Rollup                       147                1.3
AoT Webpack (no lazy loading)    151                1.3
AoT Closure Compiler             97.4               1
AoT Webpack (with lazy loading)  104                1

When looking at static assets, the most important performance factors in Angular applications are compilation and bundling.


Compilation

Compilation in Angular refers to converting Angular-specific syntax to plain JavaScript that the browser can understand. Think conversion of template syntax like ngFor, ngIf, etc. to plain JavaScript.

Compilation comes in two flavors: JiT (Just-in-Time) and AoT (Ahead-of-Time).

The difference between JiT and AoT is not related to what happens, but when it happens.

JiT compilation takes place in the browser, after the application has been downloaded. This means we have to do extra processing before the application can render. It also means we have to ship a compiler with our application runtime.

AoT addresses these issues by doing compilation at build-time. Not only does this remove runtime compilation, but equally important, we no longer have to include the compiler with the Angular runtime. This significantly reduces the size of the bundle since the compiler is a big dependency.
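To make the difference concrete, here is roughly what the two bootstrap entry points look like in an Angular application of this era. This is a sketch; the file names are assumptions for this demo, and the `.ngfactory` file is generated by the Angular compiler (ngc) at build time.

```typescript
// main.ts – JiT bootstrap: the Angular compiler ships to the browser
// and compiles templates there, before the app can render
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';

platformBrowserDynamic().bootstrapModule(AppModule);

// main-aot.ts – AoT bootstrap: templates were already compiled by ngc,
// so we bootstrap the generated factory and skip the runtime compiler
import { platformBrowser } from '@angular/platform-browser';
import { AppModuleNgFactory } from './app/app.module.ngfactory';

platformBrowser().bootstrapModuleFactory(AppModuleNgFactory);
```

Because the AoT entry point never touches `platform-browser-dynamic`, the compiler can be excluded from the bundle entirely.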

JiT is not really an option for production applications, but I am including a few JiT examples since they give us a low-end baseline to compare against.

The demo application we will use is a medium-sized application, consisting of a collection of some of my Angular samples.

Angular performance is mainly an issue on mobile devices, so for this demo, I will be using throttled “Good 3G” in Chrome’s DevTools to simulate a slow connection.

To keep it simple, all reported load times are from the “Finish” value on Chrome’s network tab.

However, I have deployed versions of all samples, so feel free to try out different metrics as well.


JiT

As a low-end baseline I have deployed the application as a standard JiT application.

The application is deployed here.

As you can tell, there is a very noticeable lag before we see the fully rendered application. If you open the browser’s network tab, you will see why.

The application makes 163 requests and takes about 6 seconds to fully load. This is way too slow by any standard.

JiT With Bundling

As I noted above, one of several problems with the JiT build is that it makes 163 individual requests to just load the application.

Let’s remove this issue by adding bundling with minification to the JiT build.

I have deployed this version here.

I am using a tool called SystemJS-Builder to do the bundling. SystemJS-Builder is a tool in the SystemJS family, but it’s a separate tool from the SystemJS module loader.
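A build script for this step might look like the following sketch. The config file name and entry path (`systemjs.config.js`, `app/main.js`) are assumptions for this demo; `buildStatic` produces a single self-executing bundle that no longer needs the SystemJS loader at runtime.

```javascript
// bundle.js – build script using systemjs-builder
var Builder = require('systemjs-builder');

// point the builder at the same config the SystemJS loader uses
var builder = new Builder('./', 'systemjs.config.js');

builder
  .buildStatic('app/main.js', 'dist/bundle.min.js', { minify: true })
  .then(function () { console.log('bundle written to dist/bundle.min.js'); })
  .catch(function (err) { console.error(err); });
```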

As you can tell, this results in a noticeable improvement. We are no longer making 163 requests, and minification reduces the payload significantly (260kb). Still, the total load time is more than 2 seconds.

Much better, but still a very noticeable rendering lag.


We have now taken the JiT build pretty much as far as we can take it. Unfortunately, performance is still not acceptable.

Luckily, we still have a few aces up our sleeve. We can improve performance further by switching to AoT and more optimized JavaScript bundling.

Next we will look at more realistic production alternatives and see how they improve performance.


AoT

AoT offers clear performance benefits, but there are important bundling considerations as well.


Rollup

In the JiT build I was transpiling to CommonJS modules. CommonJS is flexible, but it’s not a format optimized for bundling. Instead we should be using ES2015 modules.

ES2015 modules are much better suited for a technique called Tree shaking.

Tree shaking is the process of walking the trail of import and export statements in the application code, so that anything never imported can be dropped from the bundle.
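The idea can be sketched with a toy module graph. The module names here are hypothetical, and real bundlers also track individual exports within each module, but the core mechanism is the same reachability walk:

```javascript
// Toy illustration of tree shaking: starting from the entry module,
// walk the import statements and keep only what is reachable.
const modules = {
  'main.js':     { imports: ['cube.js'] },
  'cube.js':     { imports: ['multiply.js'] },
  'square.js':   { imports: ['multiply.js'] }, // never imported from main.js
  'multiply.js': { imports: [] }
};

function treeShake(entry) {
  const kept = new Set();
  (function walk(name) {
    if (kept.has(name)) return;
    kept.add(name);
    modules[name].imports.forEach(walk);
  })(entry);
  return kept;
}

console.log([...treeShake('main.js')]); // square.js never makes it into the bundle
```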

Rollup offers a great implementation of Tree shaking, so it has become a very popular choice for application bundling.
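A minimal `rollup.config.js` for an AoT build might look like the following sketch. The entry path and plugin selection are assumptions for this demo; the commonjs plugin is included because RxJS shipped as CommonJS at the time.

```javascript
// rollup.config.js – single minified bundle from the AoT (ngc) output
import nodeResolve from 'rollup-plugin-node-resolve';
import commonjs from 'rollup-plugin-commonjs';
import uglify from 'rollup-plugin-uglify';

export default {
  entry: 'aot/app/main-aot.js',    // AoT-compiled entry point
  dest: 'dist/build.js',
  format: 'iife',                  // one self-executing bundle
  plugins: [
    nodeResolve({ module: true }), // prefer ES2015 module builds of packages
    commonjs({ include: 'node_modules/rxjs/**' }), // RxJS is CommonJS
    uglify()                       // minify on top of tree shaking
  ]
};
```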

I have deployed the Rollup version of the application here.

As you can tell, the Rollup version is much faster than the JiT builds. The load time is reduced to roughly 1.3 seconds and the bundle size is 147kb. This is a 43% reduction from the bundled JiT demo. The decrease comes from a combination of not including the compiler and enabling Tree shaking.

Webpack (no lazy loading)

Next we will repeat the experiment using Webpack as the bundler. Webpack is another popular bundler, but the approach is slightly different from Rollup.
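A minimal `webpack.config.js` for this setup might look like the following sketch, assuming the AoT-compiled output from ngc is used as the entry point (the paths are assumptions for this demo):

```javascript
// webpack.config.js – single bundle, no code splitting
const path = require('path');
const webpack = require('webpack');

module.exports = {
  entry: './aot/app/main-aot.js',   // ngc output, plain JavaScript
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'build.js'            // one bundle for the whole app
  },
  plugins: [
    new webpack.optimize.UglifyJsPlugin({ compress: true }) // minification
  ]
};
```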

Webpack will always produce bigger bundles than Rollup. This is a result of how Webpack wraps included modules in an internal module system. This means more overhead in the bundle from extra function wrappers.

Webpack does not support Tree shaking, so you will miss out on any opportunities to Tree shake as well. There is some confusion about Webpack and Tree shaking, but I have an article here that might help clear it up.

I have deployed the Webpack demo here.

As you can tell, the performance of the Webpack bundle is comparable to Rollup. We see a slightly bigger bundle (151kb, a ~2.7% increase).

A difference of 4kb is not really noteworthy, but Webpack bundles will always be bigger due to the extra overhead.

It’s also worth noting that the NgModule architecture in Angular is a hurdle to Tree shaking. This is likely one of the reasons we don’t see a bigger difference between Rollup and Webpack here.

I have more information about this here, but basically the configuration arrays (declarations, providers, etc.) in NgModules squander some Tree shaking opportunities.

You will also see an increase in Webpack bundle size if your code exports multiple classes from the same file. This is the worst use case for Webpack since it can’t Tree shake out unused exported classes.

Closure Compiler

The results so far have been pretty good, but there is still room for improvement.

Webpack and Rollup are both traditional bundlers. They offer very little in the way of code optimization beyond Tree shaking (Rollup) and minification.

For more aggressive optimizations we can add the Closure compiler.

The main difference is that the Closure compiler will run a much deeper analysis of the application. It can drastically improve the bundle size through function inlining, function flattening, and other code removal approaches. This is much more effective than minification alone, as we will see from the Closure compiled bundle.

You can check out the application here.

As you can see, the Closure bundle is only 97.4kb. This is 35% smaller than Webpack, so pretty impressive! The total load time is about 1 second.

Closure compiler is amazing, but the aggressive optimizations come at a cost. The compiler makes several assumptions about your code. Unless you make sure your code is compatible with Closure, your application will likely break.

This has been made a little easier with Angular and its custom TypeScript compiler though. Via the Angular TypeScript compiler, certain conventions are ensured to make the code more compatible with Closure.
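For example, the Angular compiler can emit Closure-style JSDoc annotations when you enable the `annotateForClosureCompiler` option in `tsconfig.json`. The fragment below shows only that option; the rest of the config is trimmed:

```json
{
  "angularCompilerOptions": {
    "annotateForClosureCompiler": true
  }
}
```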

Webpack (with lazy loading)

So far we have only discussed single bundle applications. As the application grows, it may be impractical to serve the entire application as a single JavaScript bundle.

This is where Webpack offers more flexibility than most other alternatives. Webpack supports splitting the application into multiple files. This is perfect in cases where you use a router since you can create a bundle per route. This gives you lazy loading in its true sense.
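With the Angular router of this era, a bundle per route is configured via `loadChildren` with the `path#ModuleName` string syntax; Webpack then turns each referenced module into its own chunk. The module paths below are assumptions for this demo:

```typescript
// app.routing.ts – one lazily loaded bundle per route
import { Routes } from '@angular/router';

export const routes: Routes = [
  { path: '', loadChildren: './home/home.module#HomeModule' },
  { path: 'admin', loadChildren: './admin/admin.module#AdminModule' }
];
```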

In my final example I have converted the demo application to separate bundles per route.

You can see the result here.

As you can tell, separate bundles are loaded on demand as you navigate through the application. In addition to route specific bundles there is a “shared” bundle. The shared bundle is 101kb. The default route adds a 3.1kb bundle to the initial payload.

Given that the total is only 104kb for the initial load, we are very close to the results from the Closure compiler sample. The load time is pretty similar as well – just over one second.

The key here is to spread the load across multiple requests instead of a single mother lode. It doesn’t really matter that the sum of all the bundles exceeds the size of the single Closure bundle. By loading small, fast bundles on demand, the application will be perceived as very fast.

I should mention that lazy loading is in theory possible with the Closure compiler as well, but it’s non-trivial to set up at the time of writing. Once lazy loading is added to a Closure build, we can expect even better results though.

All the samples for this article can be found here.

(c) Torgeir Helgevold