AJAX applications, Web sites that communicate with a server in the background to update the displayed page on the fly, present a growing challenge for Web servers that were not designed with AJAX in mind. Jetty, a lightweight Java Web server, offers a new AJAX-friendly architecture that lets such applications be supported without overwhelming the server with long-lived requests.

Most Web servers, and the standards they implement, were designed around a simple request-response cycle: the browser sends a request for content, and the server returns a response as quickly as possible. AJAX applications break this mold, often requiring the server to push notifications of events as they occur, without the browser issuing a specific request for each one.

To do this within the constraints of today's Web, AJAX applications often send a request that the server deliberately leaves unanswered until it has an event to report (for example, when a chat application receives a new message). Because browsers stop waiting for a response after a while, the AJAX application simply repeats the request whenever it expires.
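
For illustration, here is a minimal sketch of that polling loop. It is written in Java for consistency with the server-side examples that follow (a browser would run the same loop with XMLHttpRequest), and the URL and timeout are hypothetical:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.SocketTimeoutException;
    import java.net.URL;

    public class LongPollClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical event URL; a real AJAX page would poll its own server endpoint.
            URL url = new URL("http://localhost:8080/chat/events");
            while (true) {
                HttpURLConnection connection = (HttpURLConnection) url.openConnection();
                connection.setReadTimeout(60000); // stop waiting after a minute, then retry
                try {
                    // The server holds this request open until it has an event to report.
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(connection.getInputStream()));
                    String event = in.readLine();
                    System.out.println("event: " + event);
                    in.close();
                } catch (SocketTimeoutException expired) {
                    // No event arrived before the timeout: simply repeat the request.
                } finally {
                    connection.disconnect();
                }
            }
        }
    }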

On the browser side, this is an elegant solution: every user on the system keeps a "persistent request" open with the server and is notified of events through it. On the server side, however, it can be a big problem.

Until now, Web servers have typically been built on a "one thread per request" model, in which the server allocates a thread, and the resources that go with it, to each active request until a response has been generated. In a Web application without AJAX, this model can support a virtually unlimited number of active users, as long as only a manageable number of them are sending requests at any given moment.
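
To make the cost of this model concrete, here is a hedged sketch of how a chat-style servlet typically waits for an event today. The chat details are invented for illustration; the point is that the thread the container assigns to the request simply blocks until something happens:

    import java.io.IOException;

    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class BlockingChatServlet extends HttpServlet {
        private final Object lock = new Object();
        private String latestMessage;

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            synchronized (lock) {
                try {
                    // The thread assigned to this request parks here until an event
                    // arrives or a minute passes: one waiting user, one busy thread.
                    lock.wait(60000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                response.setContentType("text/plain");
                response.getWriter().println(latestMessage == null ? "" : latestMessage);
            }
        }

        // Called by the application when a new chat message arrives.
        public void publish(String message) {
            synchronized (lock) {
                latestMessage = message;
                lock.notifyAll(); // wake every thread blocked in doGet()
            }
        }
    }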

With the AJAX model of every active user holding a "persistent request" open, this architecture falls apart. A thousand logged-in users means a thousand active requests, and the server must hold a thousand times the resources needed to handle a single one. Such a load can quickly crush even a powerful Web server.

An innovative solution to this problem is being tested in the current Alpha 3 release of Jetty 6.0. A new feature called Continuations allows a Web application to "hold" a request so that it consumes almost no resources until the application is ready to respond to it.
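
A minimal sketch of what that looks like to application code, assuming the org.mortbay.util.ajax classes found in early Jetty 6 builds (exact names may differ in the Alpha 3 release):

    import java.io.IOException;

    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.mortbay.util.ajax.Continuation;
    import org.mortbay.util.ajax.ContinuationSupport;

    public class HoldingServlet extends HttpServlet {
        private final Object lock = new Object();

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            synchronized (lock) {
                Continuation continuation =
                        ContinuationSupport.getContinuation(request, lock);

                // "Hold" the request for up to a minute. On Jetty 6 the thread serving
                // the request is freed while the request waits; elsewhere in the
                // application, continuation.resume() wakes it early.
                continuation.suspend(60000);

                // Reached once the continuation is resumed or the minute is up.
                response.setContentType("text/plain");
                response.getWriter().println("resumed or timed out");
            }
        }
    }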

As it turns out, the Jetty team deserves more credit than I have given them so far. The Continuation API is designed so that on any standards-compliant Java Web server (that is, all of them), a held request simply keeps its thread, which stops and waits for the event as usual. On Jetty 6.0, however, suspending a Continuation actually causes the request to fail with an exception that Jetty catches internally, placing the request in a queue. Failing the request releases the thread that was handling it (thus solving the request-loading problem), but keeps the request on file; when the Web application wants to notify clients of an event, the queued "persistent requests" are brought back to life and processed from scratch, as if they had just been received.
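
Here is a slightly fuller, still hypothetical, sketch showing how a chat-style servlet can be written with that retry behavior in mind: it checks for waiting events before suspending, and the event side resumes any queued requests. Class and method names again assume the early Jetty 6 org.mortbay.util.ajax API, and the retry semantics described in the comments reflect my reading of its documentation:

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.LinkedList;
    import java.util.Queue;
    import java.util.Set;

    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.mortbay.util.ajax.Continuation;
    import org.mortbay.util.ajax.ContinuationSupport;

    public class ContinuationChatServlet extends HttpServlet {
        private final Object lock = new Object();
        // One shared queue keeps the sketch short; a real chat would track messages per user.
        private final Queue<String> messages = new LinkedList<String>();
        private final Set<Continuation> waiting = new HashSet<Continuation>();

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            synchronized (lock) {
                Continuation continuation =
                        ContinuationSupport.getContinuation(request, lock);

                // A revived request runs through doGet() again from the top, so the
                // servlet checks for waiting events before deciding to suspend.
                if (messages.isEmpty()) {
                    waiting.add(continuation);
                    // First pass on Jetty: this call exits doGet() at once, freeing the
                    // thread while the request stays queued inside the server. On the
                    // retried pass (after resume() or a timeout) it returns immediately.
                    continuation.suspend(60000);
                }

                // Reached on the retried pass, or after a plain blocking wait on any
                // other servlet container.
                waiting.remove(continuation);
                String message = messages.poll();
                response.setContentType("text/plain");
                response.getWriter().println(message == null ? "" : message);
            }
        }

        // Called by the application when a new chat message arrives.
        public void publish(String message) {
            synchronized (lock) {
                messages.add(message);
                for (Continuation continuation : waiting) {
                    continuation.resume(); // bring the parked "persistent requests" back to life
                }
            }
        }
    }

The important design point is that doGet() must be safe to run twice: nothing before the suspend call should have side effects that cannot be repeated when the request is processed again from scratch.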

From the browser’s perspective, all of this is transparent: a "persistent request" is sent, and a response arrives whenever an event occurs on the server.