
Fast builds, isolated deploys and instant rollback — Battlefy’s frontend architecture

Ronald Chen, May 18th 2021

At Battlefy we deploy when things are done and don’t have a fixed release schedule. As we power esports globally, we don’t have the luxury of deploying during a maintenance window. This means we must ensure that deploying a new feature or bug fix minimizes the impact on the whole site. Ideally, we want to isolate each change down to the part of the site that is actually being changed.

But given that Battlefy is a single-page application, how does one do that? With a Webpack build, doesn’t everything end up in one giant bundle.js?

There are ways to split bundle.js, and given enough time we could have refactored our application into stable, isolated chunks. But that approach isn’t very reliable and is hard to maintain.
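For illustration, this is roughly what that kind of chunk splitting looks like in a Webpack config. It's a sketch only; the paths and settings are made up and aren't anything we shipped:

```js
// webpack.config.js — illustrative sketch of chunk splitting, not Battlefy's actual config
const path = require('path');

module.exports = {
  entry: './src/index.js',
  output: {
    // content hashes let unchanged chunks stay cached between deploys
    filename: '[name].[contenthash].js',
    path: path.resolve(__dirname, 'dist'),
  },
  optimization: {
    splitChunks: {
      // pull vendor and shared modules out of the main bundle.js
      chunks: 'all',
    },
  },
};
```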

Another problem with one giant Webpack project is that the build slows down over time as the site grows. Usually a change is relatively small compared to the entire website. Why are we being punished by waiting for the entire site to rebuild just for our tiny change?

Again, with more Webpack hackery and caching we could speed up the build, but there must be a simpler way.

Regardless of how fast your Webpack build is, it’s still not instantaneous. Wouldn’t it be nice to undo that oopsie deploy by going back to the last live build, instead of reverting the pull request, getting an emergency code review, waiting for your linter and tests, waiting for the rebuild, and finally deploying?

This article will show you how we have solved these problems at Battlefy with our current frontend architecture. Our builds are fast, deploys are isolated, and we can roll back deploys instantly.

But first, let’s walk down memory lane to gain a better appreciation of where we ended up.

Express is an HTTP server, right?

I don’t know the full history of our frontend architecture and the truth has been lost to time, but my earliest recollection is that we served our single-page application with an Express server inside a Docker container hosted on AWS Elastic Beanstalk in a single AWS region. We ran our own Express server to intercept link bots and do link unfurling with prerender.io. The build was a Jenkins job that ran the Grunt script inside the Docker container, which became the Docker image we uploaded to Dockerhub. To kick off the build we had to create a GitHub release and manually run the Jenkins job against a tag.
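For context, the serving layer boiled down to something like this sketch. The token and paths are placeholders, not our actual server code:

```js
// server.js — minimal sketch of an Express server that serves an SPA
// and hands bot traffic to prerender.io (token and paths are placeholders)
const express = require('express');
const path = require('path');

const app = express();

// prerender-node intercepts crawler/link-unfurling bots and proxies them to prerender.io
app.use(require('prerender-node').set('prerenderToken', 'PLACEHOLDER_TOKEN'));

// serve the built static assets
app.use(express.static(path.join(__dirname, 'dist')));

// every other route falls through to the single-page app shell
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'dist', 'index.html'));
});

app.listen(process.env.PORT || 3000);
```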

It worked, but it wasn’t amazing. There were far too many unnecessary parts. Let’s get to simplifying.

Unversion the frontend

One of my first contributions to Battlefy was to streamline the frontend deploy process.

When you release npm libraries, you need to carefully manage the version. The version allows people to properly report bugs and loosely communicates the impact of version upgrades.

But websites are different. There is only ever one version running live at any given time, and it already has a unique identifier: the commit rev.

By eliminating the version, we can also remove another manual step: kicking off the Jenkins job. Now Jenkins can be configured to run the job automatically whenever a commit is pushed to master. One process change within the team is needed to make this work: the master branch can no longer be used as a staging area for deployments. Staging work has to move into its own branch, which forces people to name it, hurray!

What we’ve eliminated thus far

  • the manual process to rev the version and cut a release
  • the manual process to pass the version number into the Jenkins job and start it

Sus Docker usage

Now the Jenkins job that created the Docker image started right away when we pushed to master, but it still took a long time. Looking at the build log, I saw that most of the time was spent uploading the final image to Dockerhub.

Then, on the other side, our AWS Elastic Beanstalk application downloaded the image from Dockerhub. In theory this is necessary, as all the Elastic Beanstalk nodes need the same image in order to serve the same static assets.

Sus.

We bake environment variables into each build; it’s not like we share a common image and run it with different values. Nor did we even run Docker locally in our development environments. We used grunt-serve locally.
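For the curious, “baking in” a value just means substituting it at build time rather than reading it at runtime. Here’s a rough sketch with Webpack’s DefinePlugin; we were still on Grunt at this point, so this is purely illustrative, and API_URL is a made-up variable:

```js
// webpack.config.js — illustrative only; shows environment values being inlined at build time
const webpack = require('webpack');

module.exports = {
  // ...entry/output omitted
  plugins: [
    new webpack.DefinePlugin({
      // the literal string replaces process.env.API_URL inside the bundle,
      // so every built bundle (and image) is environment-specific
      'process.env.API_URL': JSON.stringify(process.env.API_URL),
    }),
  ],
};
```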

What was even the point of Docker? The only thing Docker provided was a stable environment for our grunt script to run in. It made the build more portable, but that wasn’t a problem we had. We only built from one server.

That’s when I started ripping it all out. Instead of doing a Docker build in Jenkins, I just zipped the source and pushed it to Elastic Beanstalk. Elastic Beanstalk has a feature where you can give it a Dockerfile and it builds the image just in time for you. This sped up the build and reduced cost. We were no longer uploading hundreds of megabytes (our static assets included images) and then redownloading them. Dockerhub is outside of the AWS network, which means we were paying for the bandwidth both ways.
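To give a rough idea of what “zip and push” means in practice, here’s a sketch using the AWS SDK for JavaScript. The bucket, application, and environment names are placeholders rather than our real setup, and the version label is just the commit rev from earlier:

```js
// deploy.js — rough sketch of pushing a zipped source bundle to Elastic Beanstalk
// (all names are placeholders; error handling omitted)
const fs = require('fs');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const eb = new AWS.ElasticBeanstalk();

async function deploy(versionLabel) {
  // upload the zipped source bundle (which includes the Dockerfile) to S3
  await s3.upload({
    Bucket: 'example-deploy-bucket',
    Key: `frontend/${versionLabel}.zip`,
    Body: fs.createReadStream('build.zip'),
  }).promise();

  // register the bundle as a new application version
  await eb.createApplicationVersion({
    ApplicationName: 'frontend',
    VersionLabel: versionLabel,
    SourceBundle: { S3Bucket: 'example-deploy-bucket', S3Key: `frontend/${versionLabel}.zip` },
  }).promise();

  // point the environment at the new version; each node then runs the Dockerfile build itself
  await eb.updateEnvironment({
    EnvironmentName: 'frontend-production',
    VersionLabel: versionLabel,
  }).promise();
}

deploy(process.env.GIT_COMMIT || 'local-test');
```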

Dropping Dockerhub from the pipeline also allowed us to close our Dockerhub account, which we had been paying for to host private repositories.

Holup, doesn’t that mean each of our Elastic Beanstalk nodes is doing its own Docker build? Yep, and since our build is deterministic, it’s totally fine. Each node produced the same result and served it.

What we’ve eliminated thus far

  • the manual process to rev the version and cut a release
  • the manual process to pass the version number into the Jenkins job and start it
  • time waiting for the Docker image to upload to Dockerhub
  • time waiting for the Docker image to download from Dockerhub
  • Dockerhub cost
  • AWS bandwidth costs for uploading to and downloading from Dockerhub

At some point, we moved the images out of our frontend repository and hosted them on AWS CloudFront. This sped up the build significantly, as the Jenkins job now only needed to zip and upload text files. We had also been using grunt to optimize the images during the build, but we didn’t need to do this every build. We just took the optimized images from one build, shoved them up into AWS CloudFront, and never thought about them again.

With the images out of the way, our grunt script kept getting smaller and smaller. This allowed us to see what the whole script was really trying to do, and made it possible for our CTO to just migrate the whole thing over to Webpack.

Read part two of this series as we continue our journey to Netlify.

If you want to learn from me how to build simple, effective systems, you’re in luck: Battlefy is hiring! Check out our open positions.

If you have any questions or comments, feel free to tweet me at pyrolistical.
