Consider how a typical web server, such as Apache, works. Apache takes two different approaches to
handling incoming requests. The first is to assign each request to a separate process until the request is
satisfied; the second is to spawn a separate thread for each request.
The first approach (known as the prefork Multi-Processing Module, or MPM) can create as many child
processes as specified in an Apache configuration file. The advantage to creating a separate process is that
applications accessed via the request, such as a PHP application, don't have to be thread safe. The
disadvantage is that each process is memory intensive and doesn't scale very well.
The second approach (known as the worker MPM) implements a hybrid process-thread approach. Each
incoming request is handled by a new thread. This is more efficient from a memory perspective, but it also
requires that all applications be thread safe. Though the popular web language PHP is now thread safe,
there's no guarantee that the many different libraries used with it are, too.
Regardless of the approach used, both types respond to requests in parallel. If five people access a web
application at the exact same time, and the server is set up accordingly, the web server handles all five
requests simultaneously.
Node does things differently. When you start a Node application, it's created on a single thread of
execution. It sits there, content, waiting for someone to come along and make a request. When it gets a
request, no other request can be processed until it's finished running the code for the current one.
You might be thinking that this doesn't sound very efficient, and it wouldn't be except for one thing: Node
operates asynchronously, via an event loop and callback functions. An event loop is nothing more than a bit
of functionality that polls for specific events and invokes event handlers at the proper time. In Node, a
callback function is that event handler.
Unlike other single-threaded applications, when you make a request to a Node application and it must, in
turn, request some resource (such as a database query or file access), Node initiates that request but
doesn't wait around for the response. Instead, it attaches a callback function to the request. When whatever
was requested is ready (or finished), an event is emitted to that effect, triggering the associated callback
function to do something with either the results of the requested action or the resources requested.
If five people access an application at the exact same time, and the application needs to access a resource
from a file, Node attaches a callback function to a response event for each request. As the resource
becomes available for each, in turn, the callback function is called, and each person's request is satisfied. In
the meantime, the Node application can be processing other requests, either for the same people, or
different people.
Though the application doesn't process the requests in parallel, depending on how busy it is and how it's
designed, most people won't perceive any delay in the response. Best of all, the application is very frugal
with memory and other limited resources.
Reading a File Asynchronously
To demonstrate the asynchronous nature of Node, Example 1-2 is a modification of the Hello World
application from earlier in the chapter. Instead of just typing out "Hello, World!", it opens the
previously created helloworld.js and outputs the file's contents to the client.
Example 1-2. Asynchronously opening and writing out contents of a file
// load the http and file system modules
var http = require('http');
var fs = require('fs');

// create http server
http.createServer(function (req, res) {
   // open and read in helloworld.js
   fs.readFile('helloworld.js', 'utf8', function (err, data) {
      res.writeHead(200, {'Content-Type': 'text/plain'});

      // on error, write out a message; otherwise, write out the file contents
      res.end(err ? 'Error reading helloworld.js\n' : data);
   });
}).listen(8124);

console.log('Server running at 8124');