Why do we use Node.js at Bolt?

Mar 4, 2024

Written by Denys Pysmennyi, Engineering Manager at Bolt.

During interviews, the question candidates ask most often is, “What is Bolt’s tech stack?”

It may come as a surprise, but all of Bolt’s 1,000+ microservices are built with Node.js, and all 600 developers at Bolt work with it.

To answer why, let me dive into our engineering philosophy.

At Bolt, we believe technology is a tool for solving business problems. While this may not sound glamorous to engineers, operational efficiency has been critical to our success, so it’s important to consider business needs when tackling technical issues. In fact, some problems can be solved manually, without any technology at all.

Our team follows the “one way of doing things” principle: wherever feasible, we use the same tool for the same class of problems. For example, the entire backend is written in Node.js with TypeScript. We make exceptions only for specialised, mostly computation-intensive tasks.

Without further ado, I’ll explain why this approach is helping us achieve high efficiency.

It’s lightweight

Bolt uses a microservices approach, with over 1,000 distinct services in operation at the time of writing. Each service runs multiple instances across several AWS availability zones, which lets us operate at significant scale.

The services mostly handle API requests, process events, and run scheduled tasks.

The business logic itself is usually simple: evaluating “if” statements, calling other services, and reading data from the database. Tasks like these don’t require a bulky JVM with an excessive amount of RAM.
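To make this concrete, here’s a minimal TypeScript sketch of what such a handler typically looks like: a condition check, a call to another service, and a database read. The service name, endpoint, and schema are hypothetical, not Bolt’s actual code.

```typescript
import { Pool } from "pg";

// Hypothetical Postgres pool; the connection string comes from the environment.
const db = new Pool({ connectionString: process.env.DATABASE_URL });

// A typical "business logic" handler: an "if" check, a call to another
// (hypothetical) internal service, and a single database read.
export async function getRideQuote(userId: string, city: string) {
  if (!userId || !city) {
    throw new Error("userId and city are required");
  }

  // Call another internal service over HTTP (Node 18+ global fetch).
  const pricingRes = await fetch(`http://pricing-service.internal/quote?city=${city}`);
  const pricing = (await pricingRes.json()) as { basePrice: number };

  // Fetch the user's discount from the database.
  const { rows } = await db.query(
    "SELECT discount FROM user_discounts WHERE user_id = $1",
    [userId]
  );
  const discount = rows[0]?.discount ?? 0;

  return { basePrice: pricing.basePrice, discount };
}
```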

One of the primary benefits of Node.js is how lightweight it is. A single EC2 machine can host many Node.js processes; some of our machines currently run up to 40 different Node.js microservices.

Being lightweight also keeps our AWS bill down. That may not sound exciting from an engineering standpoint, but it’s fundamental to business profitability.

It’s single-threaded

Concurrency is one of the biggest challenges in building high-load systems. Node.js executes JavaScript on a single thread, which eliminates a whole class of concurrency problems: threads, monitors, and fork-join pools simply aren’t needed and can be forgotten.

Traditionally, each request was handled by a separate thread or process, which consumed a lot of resources. In Node.js, requests are handled on the event loop and suspended during async calls, so a request waiting on I/O doesn’t take up any processor time.
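As an illustration, here’s a minimal sketch of how this looks with a typical Express handler; the endpoint and the stand-in DB call are hypothetical. While one request awaits an async call, the event loop is free to start handling other requests instead of holding a thread per request.

```typescript
import express from "express";

const app = express();

app.get("/orders/:id", async (req, res) => {
  // The await suspends only this request; the event loop keeps serving others
  // while the (simulated) I/O is in flight.
  const order = await loadOrderFromDb(req.params.id);
  res.json(order);
});

// Stand-in for a real database query: resolves after simulated I/O latency.
function loadOrderFromDb(id: string): Promise<{ id: string; status: string }> {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ id, status: "completed" }), 50)
  );
}

app.listen(3000);
```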

Concurrency still has to be managed when talking to the database and external services, but the event loop keeps the code streamlined and readable.
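Here’s one example of the kind of concurrency that still needs care: even on a single thread, two requests can interleave between await points, so a naive read-modify-write against the database can lose updates. The table and functions below are hypothetical; one simple remedy, sketched here, is to push the update into a single atomic statement (transactions or locking are alternatives).

```typescript
import { Pool } from "pg";

const db = new Pool({ connectionString: process.env.DATABASE_URL });

// Race-prone: another request can run between the SELECT and the UPDATE,
// so one of two concurrent increments may be lost.
export async function addCreditsNaive(userId: string, amount: number) {
  const { rows } = await db.query(
    "SELECT credits FROM accounts WHERE user_id = $1",
    [userId]
  );
  await db.query("UPDATE accounts SET credits = $1 WHERE user_id = $2", [
    rows[0].credits + amount,
    userId,
  ]);
}

// Safer: a single atomic statement lets the database handle the concurrency.
export async function addCredits(userId: string, amount: number) {
  await db.query("UPDATE accounts SET credits = credits + $1 WHERE user_id = $2", [
    amount,
    userId,
  ]);
}
```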

It may seem that a service executing code on a single thread couldn’t cope with a large number of requests. But most business services aren’t resource-intensive, and the database typically carries most of the workload. Even at our current level of hundreds of requests per second, most services could handle the load on a single instance (though relying on a single running process is never advisable).

There is a drawback to consider: with only one execution thread, you have to be careful with the event loop. Whenever a function takes a long time to run synchronously, every other request is put on hold, and this can quickly snowball.
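A small sketch of the pitfall and one common way around it (the endpoints and worker script are hypothetical): a long synchronous loop blocks the single event-loop thread and stalls every queued request, whereas offloading CPU-heavy work to a worker thread keeps the loop free.

```typescript
import express from "express";
import { Worker } from "node:worker_threads";

const app = express();

// Bad: a long synchronous computation holds the event loop, so every other
// request queued behind it has to wait.
app.get("/report-sync", (_req, res) => {
  let total = 0;
  for (let i = 0; i < 5_000_000_000; i++) total += i; // CPU-bound, blocks everything
  res.json({ total });
});

// Better: offload the CPU-heavy work to a worker thread and keep the loop free.
app.get("/report-async", (_req, res) => {
  const worker = new Worker("./report-worker.js"); // hypothetical worker script
  worker.once("message", (total) => res.json({ total }));
  worker.once("error", () => res.status(500).end());
});

app.listen(3000);
```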


Want to read the full article? Head to our Bolt Labs blog on Medium.
