One of the advantages of using a third-party service like Netlify is that it helps you think about the problem and provides an example of how to solve it. It starts you down the journey to becoming an expert on the topic. Looking at Netlify’s solution and considering our own needs, we saw opportunities for optimization. There were things we simply didn’t need, or could achieve more simply if we just had more control.
While microapps got us going with React and solved our bundle.js size problem, we still didn’t have a solution for the slow Webpack build.
Can we solve all our problems at the same time? Let’s go; clear the whiteboards; let’s build some prototypes!
After a few false starts, we landed on a simple idea: if Webpack build speed is the problem, split the microapps into independent Webpack builds and only build what we need.
There was already no code dependency between the microapps. They didn’t need to be built together.
But how can we even do that?
We could put each microapp in its own folder and run the Webpack builds in parallel with a shell script. This would be fine for the production build, but what about development? It wouldn’t work for webpack-dev-server; we can’t just run multiple webpack-dev-servers on the same port.
To make this work in development, we needed a single webpack-dev-server. Fortunately, webpack-dev-server has a feature that allows you to proxy requests under a context path to another server. Context paths save the day again!
We can now move each microapp into its own repository. This finally allows us to specify npm dependencies and have an independent Webpack configuration for each microapp.
In local development, we just spun up our main webpack-dev-server, which proxied to all the microapps, then started only the microapps we were actually working on, each on a different port. This reduced the Webpack build time drastically!
As for production, we decided to use AWS CloudFront, but we hit another snag. Each microapp is a single-page application and needed its context path rewritten. There is a trick with AWS CloudFront to emulate rewrites: configure the 404 document to be /index.html. This effectively rewrites /* to /index.html, which is fine if you only have one single-page application, but we had many.
Defeated. We were about to give up, but then we wondered…if you can only have one rewrite per CloudFront, what if we just added another layer of CloudFronts? What if we spun up a CloudFront for each microapp and then configured our root CloudFront to send each context path’s traffic to it? It worked, but we were sad. This was a monstrosity.
But time was up, we had a solution. We needed to get off of Netlify before our contract ended, let’s ship i-, WAIT. We forgot about prerendering.
When we moved to Netlify, we punched in our prerender.io key to keep link unfurling working, but we needed a solution for CloudFront. After yet another investigation, we learned that we can integrate prerender.io with CloudFront Lambda@Edge.
We added two lambdas, viewer-request (before caching) and origin-request (after cache miss). The viewer-request lambda simply checks the user-agent to see if it’s a link bot and sets a header. The origin-request lambda only acts if it sees the bot header and then calls prerender.io.
This was all very complicated just to fetch three pieces of information. On top of that, we realized we hadn’t implemented link unfurling for our microapps at all.
Can we do better? Can we speed up link unfurling and have it work for the microapps as well?
This project also spared us a return to Jenkins. We sprinkled on AWS CodePipeline along with some AWS CodeBuild, and then we realized we had a lot of infrastructure for each microapp. We needed to automate this, so we reached for Terraform.
We automated the creation of microapps by forking our template repository and generating a Terraform script for it. The script spun up the entire build pipeline and CloudFront distribution, and finally updated our root CloudFront distribution to add the microapp.
It all worked. It was complex, but it worked. A small change to a single microapp now triggered only that microapp’s small Webpack build and safely deployed only the affected microapp.
Fast builds and isolated deploys achievement unlocked! But it feels like we’re forgetting something…I’m sure it’s not important.
What we’ve eliminated thus far
The honeymoon period with our new frontend architecture didn’t last very long. The way Terraform worked simply didn’t gel with how our team operated. We had different teams all trying to maintain various Terraform states, but it became a tragedy of the commons. The rigour required to use Terraform well got in the way of teams just trying to get things done. The Terraform rules were violated; manual changes were made to environments; the Terraform state no longer reflected reality.
What didn’t help was the unreadable diff when adding a new microapp to our main CloudFront distribution. Even though there is a well-defined order for path behaviours in CloudFront, the Terraform module didn’t sort by it when showing the plan diff. This was the first broken window, and we lost respect for Terraform.
And there was still the niggling issue of the second layer of CloudFront distributions for each microapp. That just didn’t seem right. This extra layer compounded the Terraform issues, as it takes several minutes to create, update, or delete a CloudFront distribution.
When we eventually ran into our first bad deployment, we realized we were missing a lever. In our rush to move off of Netlify, we forgot to retain one of the most useful features, instant rollback.
If that wasn’t enough, CodePipeline/CodeBuild was fairly slow. This could easily be fixed by throwing more money at it, but given our Terraform woes, we’d rather just get rid of it.
Remember that link unfurling solution we came up with? Well, it turned out to be rather consequential, as we had added the origin-request lambda after we decided to add the second layer of CloudFronts.
Why does that matter? The origin-request lambda happens to be the place where one can do rewrites. We didn’t know back then whether Lambda@Edge was a good way to implement rewrites, but since we were already using it for link unfurling, there was now no extra cost.
Lovely, we can simplify. We can modify the shell script to stop creating the extra CloudFronts and instead modify the origin-request lambda. But why not take this one step further? We want to get rid of Terraform anyway, and to do that we need a replacement for AWS CodePipeline/CodeBuild first.
What is the build even doing? It’s really not that complicated: `npm run build`.
Instead of pushing the source to AWS to have it built there, we can just build with GitHub Actions and push the assets to S3.
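A minimal sketch of such a workflow, assuming a bucket and secret names that are placeholders, not our actual setup:

```yaml
# .github/workflows/deploy.yml (a sketch; the bucket name and secret names
# are placeholders)
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run build
      - run: aws s3 sync build/ s3://my-microapp-bucket/
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```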
OK! We have all the pieces to replace the existing shell script, and since we no longer rely on Terraform, the automation can now just be implemented behind a single endpoint. It all boils down to various GitHub API calls.
One last thing needs to be restored: let’s bring back instant rollbacks. In order to get the “instant” part of instant rollback to work, we need to make sure we are caching everything correctly. Notice I didn’t say “never cache anything”, as that would require our users to redownload the bundle.js on every page load.
I’ve been using bundle.js as shorthand, but really, it looks more like bundle.main.6483891676e878fb219e.js. Since the bundle.js has a unique hash per build, we can cache hashed files forever; there is never a reason why the content would be different for the same filename. The index.html refers to the exact bundle.js, so as long as we never cache index.html, instant rollback is simply restoring the index.html from a previous build. It’s fine to have the browser always redownload index.html, as it’s a very small text file, but you could optimize even this by putting a really short 1 second cache on it. This would vastly improve performance on busy CloudFront edge nodes.
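The whole caching rule boils down to one tiny decision per file. The header values below are our choices for illustration, not anything CloudFront mandates:

```javascript
// Sketch of the caching rule (header values are choices, not requirements).
// Hashed bundles never change for a given filename, so cache them forever;
// index.html is the rollback lever, so it must stay fresh.
const HASHED = /\.[0-9a-f]{20}\.(js|css)$/i; // e.g. bundle.main.6483891676e878fb219e.js

const cacheControlFor = (filename) =>
  HASHED.test(filename)
    ? 'public, max-age=31536000, immutable' // one year; the filename is the version
    : 'public, max-age=1';                  // index.html: the short 1 second cache
```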
We’ve finally caught up to what we do today at Battlefy. Here’s what it looks like to add a microapp.
The final tally of everything we’ve removed in this journey
There are many things we learned over this journey and I want to just highlight two of them.
Terraform was just a really bad fit for our team, and it wasn’t something that was easy to see ahead of time. For any future tool, we need to consider whether it is the kind of tool designed for centralized command and control, or whether it can actually be used in a distributed manner.
Microapps are an example of a pattern I love to use, the switch pattern. If you squint, the context path each microapp is mounted to is like a case in a switch statement. The key to applying this pattern is, first, to figure out the domain of the thing you are switching over; for microapps, it’s the context path. Second, figure out the architectural mechanism to do the actual switching. For microapps, to switch over the context path, we used proxies in webpack-dev-server and the origin-request lambda in CloudFront. If you have these two things, you can apply the switch pattern.
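Made literal, with hypothetical context paths, the pattern is just this:

```javascript
// The switch pattern, spelled out (context paths and app names hypothetical).
// The domain is the context path; the switching mechanism here is a plain
// switch, the same role the dev-server proxy and the edge lambda play for us.
const route = (contextPath) => {
  switch (contextPath) {
    case '/tournaments': return 'tournaments-microapp';
    case '/teams':       return 'teams-microapp';
    default:             return 'root-app';
  }
};
```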
While our current frontend architecture is working great, we’re never satisfied. There are still many improvements we can make; we will just keep them on the backlog until we see a business need or an opportunity to realize them.
If that sounds interesting to you and you want to learn from me how to build simple, effective systems, you’re in luck: Battlefy is hiring! Check out our open positions.
If you have any questions or comments, feel free to tweet me at pyrolistical