Let your job do your job

Nov 16 2017

We live in a busy world.

We pay for services such as cleaning and babysitting to buy back free time for ourselves,
and we use that time to pursue the goals we care about. When we hire one of these services,
we expect a start time, an estimated end time, and, at the end, results that match the
type of work we asked for.
These principles are the basics of the architecture I will describe in this article, the architecture
we implemented in one of our apps. It uses jobs and client-side polling intervals to
build an asynchronous mechanism and increase system performance.

Delayed jobs

"Delayed::Job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background." (https://github.com/collectiveidea/delayed_job). Delayed jobs is a great Ruby gem, used in Ruby On Rails framework.
This gem helps reduce the use of the network, and the amount of time the client waits for an answer.
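To give a sense of what this looks like in code, here is a minimal sketch: with delayed_job, prefixing a method call with .delay serializes the call into the jobs table instead of running it inline. The NewsletterMailer class and the user variable are hypothetical examples, not part of our system.

    # A minimal sketch of how delayed_job moves slow work off the request cycle.
    # NewsletterMailer and user are hypothetical examples.
    class NewsletterMailer
      def deliver_to(user_id)
        # ... slow work: external API calls, email delivery, etc.
      end
    end

    # Synchronous call: the request waits until the work is done.
    NewsletterMailer.new.deliver_to(user.id)

    # With delayed_job, the call is stored in the delayed_jobs table and
    # picked up later by a background worker, so the request returns fast.
    NewsletterMailer.new.delay.deliver_to(user.id)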

A delayed-job mechanism backed by a database adds a lot of flexibility to scale: you can apply
back-pressure during a spike and add more workers as load grows. This is in contrast to a VM event
loop, where handling a request may have the side effect of putting more work on the event loop,
which can lead to thrashing as requests pour in during a spike.
This happens because the event loop cannot keep running while a blocking operation is in progress, and synchronous Ruby methods are the most common blocking operations. Because of that, Delayed Jobs provide a useful alternative to traditional blocking implementations and follow a non-blocking methodology.
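Because the queue lives in the database, scaling mostly comes down to tuning the workers and starting more worker processes (for example with rake jobs:work). A sketch of a Rails initializer using delayed_job's standard settings; the values here are illustrative, not our production configuration.

    # config/initializers/delayed_job_config.rb
    # Illustrative values only.
    Delayed::Worker.max_attempts = 3         # retries before a job is marked as failed
    Delayed::Worker.max_run_time = 10.minutes
    Delayed::Worker.sleep_delay  = 5         # seconds a worker sleeps when the queue is empty
    Delayed::Worker.read_ahead   = 5         # rows a worker reserves per query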

In our system, the client sends basic information the first time it logs in, and the server triggers a job that collects the additional information for that client from different API queries. The server initializes a specific table in the DB that tracks each job by its status (in_process, success, failed).
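A sketch of that flow, under the assumption of hypothetical names: ClientInfoJob, ClientJobRecord and ExternalApi stand in for our actual classes and are not part of delayed_job itself.

    # A custom job using delayed_job's Struct pattern. Names are hypothetical.
    class ClientInfoJob < Struct.new(:client_id)
      def perform
        record = ClientJobRecord.find_by!(client_id: client_id)
        # Collect the additional information from the external APIs.
        details = ExternalApi.fetch_details(client_id)
        record.update!(status: :success, result: details.to_json)
      rescue StandardError
        record&.update(status: :failed)
        raise # let delayed_job handle retries / mark the job as failed
      end
    end

    # Triggered by the server the first time a client logs in:
    ClientJobRecord.create!(client_id: client.id, status: :in_process)
    Delayed::Job.enqueue(ClientInfoJob.new(client.id))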

"There is nothing to regret with a job well done." (Joe Garcia)

Point Nemo is a point in the South Pacific Ocean where decommissioned spacecraft, the garbage of space, are brought down to rest. When our system triggers jobs, it is critical to know where their "Point Nemo" is. For our system, the solution is to cancel a client's previous jobs whenever a new one is triggered, because every client has one unique result per request, so the results override each other anyway. Even if your results don't override each other, my tip is to decide where your 'Point Nemo' is and make sure your job destroys itself there.
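One way to sketch this "Point Nemo" rule, reusing the hypothetical ClientInfoJob from above: give each client its own queue name and drop any jobs that are still waiting in it before enqueuing a new one. The per-client queue name is an assumption of this sketch, not a requirement of delayed_job.

    def enqueue_client_info_job(client)
      queue_name = "client_info_#{client.id}"   # hypothetical per-client queue

      # Drop previous, not-yet-locked jobs for this client; their results
      # would only be overridden by the new run anyway.
      Delayed::Job.where(queue: queue_name, locked_at: nil).delete_all

      Delayed::Job.enqueue(ClientInfoJob.new(client.id), queue: queue_name)
    end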

Donkey: Are we there yet?

In the opening scene of the movie 'Shrek 2', Donkey asks Shrek and Fiona every couple of seconds, 'Are we there yet?' Our luck is that the server has more patience than Shrek for answering the same question over and over. While the job is running, after it has been initialized, the client sends a query every unit of time. The client asks the server for the job's results and waits for a result with a status of success or failed. This polling interval supports the non-blocking methodology.
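On the server side, the endpoint the client polls can be as small as the sketch below. The controller, the route and the ClientJobRecord model are hypothetical names standing in for our actual code.

    class ClientJobsController < ApplicationController
      # GET /client_jobs/:client_id  (hypothetical route)
      def show
        record = ClientJobRecord.find_by!(client_id: params[:client_id])

        case record.status.to_sym
        when :success
          render json: { status: 'success', result: JSON.parse(record.result) }
        when :failed
          render json: { status: 'failed' }
        else
          # Still running: the client keeps asking "are we there yet?"
          render json: { status: 'in_process' }
        end
      end
    end

Once the client sees success or failed, it stops the interval.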

On one hand, systems that keep an open socket between client and server can increase performance by pushing the data to the client. On the other hand, holding an open socket can lower the server's responsiveness to other requests while the client waits for the answer.
After the client finishes collecting all the results, it raises a flag to mark the end of the interval, or as John Bytheway wrote, "In the city, we work until quitting time. On the farm, we work until the job is finished".

Link to Architecture Diagram

Top to Bottom

In this article I described a simple architecture for increasing performance and handling asynchronous API calls to external services. By initializing jobs on the server side using the data sent from the client, our system stays free to handle requests from different clients, deferring the heavy work to background jobs and larger polling intervals while freeing up resources.

So, if you are not a slacker, let your job do your job.

Tomer S.
Software Developer