Asynchronous Non-Blocking Backends – Node.js is cool

I am beginning to take a liking to the new thing everyone is buzzing about: Node.js, the server-side JavaScript framework built on V8.

When I first heard about it at a hackers' get-together, I thought it was another one of those cool-buzzword frameworks that make you hip just for using them. After playing around with it for a bit, though, it seems pretty nice.

The idea is simple. If you are in the business of writing high-performance web servers or backends, you know how hard it is to get the concurrency right in, say, Java. Concurrency is one of the hardest things to do correctly in large systems, and it is easy to get wrong. You either have to write the multi-threaded code yourself, or you have to reach for frameworks like Akka with Scala or Java.

What if you could write a backend that handles all the concurrency by itself? Your code never blocks, and everything is based on event-driven callbacks.

I am sure that if you are in the business of writing an HTTP or TCP server you'd love to have that, and it should sound familiar: if you have ever programmed with Ajax or jQuery, it is the same concept. The only difference is that it happens on the server side, it is written in JavaScript, and it runs on Google's phenomenally fast JavaScript engine, V8.

Now, I know some of you will say JavaScript on the server side is a terrible choice. Maybe you are right. But if you are a web programmer who codes in JavaScript for a living, learning C++ or Java and then trying to write a high-performance backend with some weird-looking multi-threaded API would be a nightmare. Node.js lets you write a high-performance backend quickly, with no need to worry about concurrency; it is all taken care of.

Let's say in Java your code goes something like this:

List data = dataService.getAllData(pending); // blocks until all the data arrives
data.doSomething();
// Something else happens

The Node.js equivalent looks like this:

dataService.getAllData(pending, function (data) {
  data.doSomething(); // runs only once the data has arrived
});
// Something else happens

In the Java example, "Something else" always executes *after* data.doSomething(), so your thread blocks. In Node, execution continues: "Something else" will normally run before data.doSomething(), and data.doSomething() runs only once fetching all the data is done.
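To make that ordering concrete, here is a small runnable sketch (my own, not the author's code) where setImmediate stands in for the asynchronous data-service call:

```javascript
var order = [];

// getAllData is a stand-in for an asynchronous data-service call;
// setImmediate defers the callback to a later turn of the event loop.
function getAllData(filter, callback) {
  setImmediate(function () {
    callback(['row1', 'row2']);
  });
}

getAllData('pending', function (data) {
  order.push('doSomething'); // runs after the fake "I/O" completes
});

order.push('somethingElse'); // runs immediately; nothing blocked

setImmediate(function () {
  console.log(order); // [ 'somethingElse', 'doSomething' ]
});
```

Running it with node prints the callback's work *after* "somethingElse", exactly the inversion of the Java version.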

Now let's look at how concurrent this can be. Let's steal a simple example from the Node.js website and put it through ApacheBench.

Since Node.js comes with built-in HTTP, TCP, DNS and other servers, we'll use the built-in HTTP server for a basic benchmark.

Here is the code, example.js:

usama-dars-macbook-pro-4:~ usm$ cat example.js
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(9090, '127.0.0.1');
console.log('Server running at http://127.0.0.1:9090/');

Running it with node starts the server on port 9090. This is just a simple HTTP server that outputs "Hello World", and the example comes from http://www.nodejs.org

usama-dars-macbook-pro-4:~ usm$ node example.js 
Server running at http://127.0.0.1:9090/

Now let's ApacheBench it with 100 requests and a concurrency of 10:

usama-dars-macbook-pro-4:bin usm$ ./ab -r -n 100 -c 10 http://127.0.0.1:9090/
This is ApacheBench, Version 2.3 <$Revision: 1178079 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient).....done

Server Software:
Server Hostname:        127.0.0.1
Server Port:            9090

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      10
Time taken for tests:   0.018 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      7600 bytes
HTML transferred:       1200 bytes
Requests per second:    5585.97 [#/sec] (mean)
Time per request:       1.790 [ms] (mean)
Time per request:       0.179 [ms] (mean, across all concurrent requests)
Transfer rate:          414.58 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       1
Processing:     0    1   0.8      1       4
Waiting:        0    1   0.7      1       4
Total:          0    2   0.8      2       4

Percentage of the requests served within a certain time (ms)
  50%      2
  66%      2
  75%      2
  80%      2
  90%      3
  95%      3
  98%      4
  99%      4
 100%      4 (longest request)

Now let's increase the requests and the concurrency:

usama-dars-macbook-pro-4:bin usm$ ./ab -r -n 1000 -c 100 http://127.0.0.1:9090/
This is ApacheBench, Version 2.3 <$Revision: 1178079 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Server Software:
Server Hostname:        127.0.0.1
Server Port:            9090

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      100
Time taken for tests:   0.209 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      76000 bytes
HTML transferred:       12000 bytes
Requests per second:    4786.96 [#/sec] (mean)
Time per request:       20.890 [ms] (mean)
Time per request:       0.209 [ms] (mean, across all concurrent requests)
Transfer rate:          355.28 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    3   2.9      2      13
Processing:     1   17   8.4     16      40
Waiting:        1   16   8.5     14      40
Total:          2   20   8.5     20      41

Percentage of the requests served within a certain time (ms)
  50%     20
  66%     24
  75%     26
  80%     28
  90%     32
  95%     34
  98%     38
  99%     39
 100%     41 (longest request)

You can see that even after increasing the load on the server tenfold, it keeps serving concurrent requests nicely: the response times don't blow up, and the requests per second my server can handle barely drop. And all of that without writing any specific code using threads or workers or Actors or whatever the hell you use to write concurrent code.

You can learn more on the Node.js website, and Ryan Dahl, the creator of Node.js, has an excellent talk about it on YouTube.
