
Parallel programming in Node.js is so slick

Ronald Chen, March 14th 2022

Naysayers will say Node.js isn't for serious work because it's single-threaded, and while it is true the JavaScript event loop is indeed single-threaded, this is a performance advantage. Node.js solved the C10k problem by spending fewer resources per network socket, whereas the previous paradigm allocated a thread per socket, which didn't scale.

Naysayers will then pivot to say Node.js still isn't for serious work as it doesn't fully utilize multicore CPUs. This is true, but not just of Node.js; it is true of all the popular programming languages. Automatic parallelization isn't a thing yet, so parallel programming needs deliberate design. However, having done parallel programming in traditional languages, I assert Node.js' parallel programming model is so much easier and safer.

Worker threads

Worker threads are the easiest to use and are conceptually the same as Web Workers. In fact the web-worker package smooths out the differences and offers a single API that works in both the browser and Node.js.
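For instance, here is a minimal sketch using Node.js' built-in worker_threads module directly (the file names and the squaring task are made up for illustration):

    // main.js - spawns a worker and waits for its single result message
    const { Worker } = require('worker_threads');

    const worker = new Worker('./worker.js', { workerData: 42 });
    worker.on('message', (result) => {
      console.log('result from worker:', result); // 1764
    });
    worker.on('error', (err) => console.error(err));

    // worker.js - receives workerData, does the work, posts the result back
    const { parentPort, workerData } = require('worker_threads');

    parentPort.postMessage(workerData * workerData);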

The primary means of communication between threads is message passing. Message passing is already used extensively in JavaScript APIs; window.postMessage is used to communicate between <iframe>s and popup windows. Message passing is safer, as the sender does not force any action upon the receiver. The receiver handles the message at its own discretion. In traditional parallel programming models, by contrast, communication is done with interrupts and synchronization, both of which make assumptions and demands upon the receiver. When these assumptions are broken, applications are incorrect and, worst of all, may deadlock.
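A sketch of that fire-and-forget style with worker_threads; the message shapes and file names are illustrative:

    // main.js - the sender posts messages and moves on, demanding nothing
    const { Worker } = require('worker_threads');

    const worker = new Worker('./logger-worker.js');
    worker.postMessage({ type: 'log', line: 'user signed in' });
    worker.postMessage({ type: 'log', line: 'match started' });
    worker.postMessage({ type: 'shutdown' });

    // logger-worker.js - the receiver handles each message at its own discretion
    const { parentPort } = require('worker_threads');

    parentPort.on('message', (msg) => {
      if (msg.type === 'log') {
        console.log(`[worker] ${msg.line}`);
      } else if (msg.type === 'shutdown') {
        process.exit(0); // in a worker thread this only stops this thread
      }
    });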

In addition to message passing, worker threads can share memory using SharedArrayBuffer. Shared memory can improve performance over message passing as it avoids the need to serialize/deserialize messages.
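A minimal sketch of sharing memory with a worker; the counter and file names are illustrative:

    // main.js - share one buffer with a worker instead of copying messages
    const { Worker } = require('worker_threads');

    const shared = new SharedArrayBuffer(4); // room for one 32-bit integer
    const counter = new Int32Array(shared);

    const worker = new Worker('./increment-worker.js', { workerData: shared });
    worker.on('exit', () => {
      console.log('final count:', Atomics.load(counter, 0)); // 1000
    });

    // increment-worker.js - writes to the same memory, no serialization involved
    const { workerData } = require('worker_threads');

    const counter = new Int32Array(workerData);
    for (let i = 0; i < 1000; i++) {
      Atomics.add(counter, 0, 1);
    }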

SharedArrayBuffer is modified using Atomics, which offers safe load, store, compare-and-swap, wait, and notify operations. But wait, isn't this just the same traditional synchronization model and hence the same hazards? This is where JavaScript is so slick. The main thread is protected from deadlock: in the browser it is not allowed to call Atomics.wait at all, and in Node.js blocking waits belong in worker threads. By making it so only child threads block themselves, forward progress is always made on the main thread. But isn't it limiting if we cannot suspend the main thread? Well, there is a TC39 proposal for Atomics.waitAsync, which would allow the main thread to get a promise instead of blocking.
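A sketch of the wait/notify pattern under these rules: the worker parks itself with Atomics.wait while the main thread stays responsive and only stores and notifies (file names and timing are illustrative):

    // main.js - the main thread never blocks; it only stores and notifies
    const { Worker } = require('worker_threads');

    const shared = new SharedArrayBuffer(4);
    const flag = new Int32Array(shared);

    const worker = new Worker('./waiter-worker.js', { workerData: shared });

    setTimeout(() => {
      Atomics.store(flag, 0, 1); // publish the new value...
      Atomics.notify(flag, 0);   // ...then wake the sleeping worker
    }, 1000);

    // waiter-worker.js - only a child thread blocks itself like this
    const { workerData } = require('worker_threads');

    const flag = new Int32Array(workerData);
    Atomics.wait(flag, 0, 0); // sleeps while flag[0] is still 0
    console.log('woken up, flag is now', Atomics.load(flag, 0));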

Cluster and child process fork

There are other APIs for parallel programming in Node.js, each with its own tradeoffs.

Cluster is great for backend server entry points, as it makes it easy to distribute incoming network connections over multiple processes. However, cluster can be awkward to use because it requires a single source file to act as both the main process and the child processes. This is why cluster is best reserved for server entry points or script files.
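A minimal sketch of such an entry point; the port and the one-worker-per-core choice are arbitrary:

    // server.js - the same file runs as both the primary process and its workers
    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isPrimary) { // cluster.isMaster on older versions of Node.js
      // primary: fork one worker per CPU core and let them share the port
      for (let i = 0; i < os.cpus().length; i++) {
        cluster.fork();
      }
    } else {
      // worker: incoming connections are distributed across these servers
      http.createServer((req, res) => {
        res.end(`handled by pid ${process.pid}\n`);
      }).listen(3000);
    }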

Under the hood, cluster uses child process fork, which is an even lower-level construct. It is more flexible than cluster, as child processes can be created from any source file.
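A sketch of fork with a dedicated child script; the Fibonacci task and file names are illustrative:

    // parent.js - fork any script as a child Node.js process
    const { fork } = require('child_process');

    const child = fork('./fib-child.js');
    child.send(30);
    child.on('message', (result) => {
      console.log('fib(30) =', result); // 832040
      child.kill();
    });

    // fib-child.js - a plain script, no cluster wiring required
    process.on('message', (n) => {
      const fib = (x) => (x < 2 ? x : fib(x - 1) + fib(x - 2));
      process.send(fib(n));
    });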

Both cluster and child process fork create child processes, not threads. This has the advantage of process isolation and independent memory, but it also means memory cannot be shared. The only built-in means of communication is message passing, though as with any process, arbitrary custom inter-process communication can be implemented. Processes also consume more resources than threads, as an entire independent V8 JavaScript engine is spawned for each child process.

Do you salivate at the thought of being able to use all those CPU cores? You're in luck, Battlefy is hiring.
