Let Your Job Do Your Job – A Simple Architecture to Increase Performance

By Tomer S.

Nov 16, 2017

We live in a busy world.

We pay for many services, such as cleaning and babysitting, in order to free up time for ourselves and use it to pursue the goals we care about. When we decide to use one of these services, we expect a start time and an estimated end time, and, at the end, results that match the kind of work we chose.

These principles are the basics of the architecture I will describe in this article. The architecture, which we implemented in one of our apps, uses jobs and client-side polling intervals to build an asynchronous mechanism and increase system performance.

Delayed jobs

“Delayed::Job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background.” (https://github.com/collectiveidea/delayed_job). Delayed Job is a great Ruby gem, commonly used with the Ruby on Rails framework.

This gem helps reduce network load and the amount of time the client waits for a response.
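For context, here is a minimal setup sketch based on the gem's README, assuming the ActiveRecord backend:

    # Gemfile
    gem 'delayed_job_active_record'

    # Then, from the shell:
    #   rails generate delayed_job:active_record   # creates the delayed_jobs table migration
    #   rake db:migrate
    #   rake jobs:work                             # start a worker process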

A delayed job mechanism backed by a database is much more flexible to scale: during a spike you can apply back-pressure and add more workers. This is in contrast to the VM event loop, where requests may have the side effect of putting more work onto the event loop, which can lead to thrashing as requests pour in during a spike.

This happens because the event loop cannot keep running while a blocking operation is in progress, and in Ruby synchronous method calls are the most common blocking operations. DJs therefore provide a useful, non-blocking alternative to traditional blocking implementations.
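As a small illustration of the difference (ReportBuilder and build are hypothetical names; the .delay proxy is delayed_job's documented API):

    # Blocking: the request thread waits until the slow call finishes.
    ReportBuilder.new(user).build

    # Non-blocking: delayed_job's .delay proxy turns the same call into a
    # background job, so the request can return immediately.
    ReportBuilder.new(user).delay.build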

In our system, the client sends basic information the first time it logs in, and the server triggers a job that gathers the client's additional information from different API queries. The server also initializes a dedicated table in the DB that tracks each job by its status (in_process, success, failed).
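A sketch of that flow is below; the job, model, and API names (ProfileEnrichmentJob, ClientJobStatus, ExternalApi) are assumptions made for illustration, while Delayed::Job.enqueue and the failure hook are part of the gem's documented API.

    # Tracking table: assumes columns client_id, status (integer), result.
    class ClientJobStatus < ApplicationRecord
      enum status: { in_process: 0, success: 1, failed: 2 }
    end

    # Payload object: anything that responds to #perform can be enqueued.
    ProfileEnrichmentJob = Struct.new(:client_id) do
      def perform
        data = ExternalApi.fetch_profile(client_id)   # hypothetical external API wrapper
        ClientJobStatus.find_by(client_id: client_id)
                       .update(status: :success, result: data.to_json)
      end

      # delayed_job calls this hook when the job permanently fails.
      def failure(job)
        ClientJobStatus.find_by(client_id: client_id).update(status: :failed)
      end
    end

    # Triggered on the client's first login:
    ClientJobStatus.create!(client_id: client.id, status: :in_process)
    Delayed::Job.enqueue(ProfileEnrichmentJob.new(client.id))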

‘There is nothing to regret about a job well done.’ (Joe Garcia)

Point Nemo, which many consider “the middle of nowhere,” is the spot in the South Pacific Ocean farthest from any land, an area where floating garbage tends to accumulate. When our system triggers jobs, it is critical to know where their “Point Nemo” is. For our system, the solution is to cancel the client's previous jobs when a new one is triggered: every client has one unique result per request, so newer results simply override older ones. Even if your results don't override each other, my tip is to estimate where your “Point Nemo” is and make sure your job destroys itself there.
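One way to implement that cleanup, continuing the sketch above and assuming the tracking table also stores the id of the Delayed::Job row it belongs to (a hypothetical delayed_job_id column):

    # Sketch: cancel the client's previous job before enqueuing a new one.
    def enqueue_enrichment(client)
      tracker = ClientJobStatus.find_or_initialize_by(client_id: client.id)

      # The old result would be overridden anyway, so drop the pending job if one exists.
      Delayed::Job.where(id: tracker.delayed_job_id).destroy_all if tracker.delayed_job_id

      job = Delayed::Job.enqueue(ProfileEnrichmentJob.new(client.id))
      tracker.update(status: :in_process, delayed_job_id: job.id)
    end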

Donkey: Are we there yet?

In the opening scene of the movie Shrek 2, Donkey repeatedly asks Shrek and Fiona, “Are we there yet?” Luckily, our server has more patience than Shrek for answering the same question over and over. After the job has been initialized, and while it is running, the client queries the server at a fixed interval for the job's results and its status (success or failure). This polling interval supports the non-blocking methodology.
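A sketch of the endpoint such a poll might hit, reusing the hypothetical ClientJobStatus model from above; the controller name and JSON shape are assumptions:

    class JobStatusesController < ApplicationController
      # The client calls this endpoint every few seconds until the job finishes.
      def show
        tracker = ClientJobStatus.find_by!(client_id: params[:client_id])

        if tracker.in_process?
          render json: { status: 'in_process' }   # "not there yet": keep polling
        else
          render json: { status: tracker.status, result: tracker.result }
        end
      end
    end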

On one hand, systems that keep an open socket between client and server can increase performance by pushing data to the client. On the other hand, holding a socket open while the client waits for an answer can reduce the server's responsiveness to other requests.

After the client finishes collecting all the results, it raises a flag to signal the end of the polling interval, or, as John Bytheway writes, “In the city, we work until quitting time. On the farm, we work until the job is finished.”

Link to Architecture Diagram

Top to Bottom

In this article, I described a simple architecture that increases performance and handles asynchronous calls to external APIs. By initializing jobs on the server side with the data received from the client, and having clients poll for results at intervals, our system stays free to handle requests from other clients while long-running work is pushed into delayed jobs, freeing up resources.

So, if you are not a slacker, let your job do your job.
