Push data to client
A traditional web application is based on the request/response model: information is delivered as a single payload and the connection to the client is closed immediately afterwards. To keep the client in sync, we normally poll the server periodically, but this approach may generate unacceptable load on the server. To solve this problem, we want a push mechanism from server to client. This is why Comet was defined. Comet is a generic term describing various approaches for sending data asynchronously from a web server to a client without the client explicitly requesting it. It is an essential technique for any real-time, event-driven web application where the majority of events occur on the server and data must be “pushed” frequently to the client. To achieve this, Comet servers must maintain a continuous connection to each client for the duration of the session.
OK. So how do we maintain a continuous connection to each client for the duration of the session?
If you try to adapt a traditional server to the Comet methodology, it may not scale and often fails after a few thousand simultaneously open connections. A true Comet implementation requires a very different kind of server architecture to be efficient and scalable. An example is Liberator, a solid Comet server used in the financial industry; however, it is written in C and is not open source, although a free edition is distributed.
To understand this statement a little better, we need to know how traditional web containers handle requests. They follow a one-thread-per-request model:
- The client, typically a browser, sends a request for a resource to a web server.
- The server has a listening thread that keeps track of incoming connections.
- When a request arrives, the server uses one process or thread to process it.
- The resource is returned to the client and the connection is closed.
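The steps above can be sketched in plain Java. This is a toy sketch, not a real HTTP server; the class name and the line-based protocol are made up for illustration:

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

// Minimal sketch of the traditional one-thread-per-request model:
// a listening thread accepts connections and hands each one to a
// worker from a fixed pool; each request is served and then closed.
public class ThreadPerRequestServer {
    private final ExecutorService workers;
    private final ServerSocket listener;

    public ThreadPerRequestServer(int port, int threads) throws IOException {
        this.workers = Executors.newFixedThreadPool(threads);
        this.listener = new ServerSocket(port);
    }

    public int port() { return listener.getLocalPort(); }

    // The listening thread: accept connections, dispatch to workers.
    public void serve() {
        Thread acceptor = new Thread(() -> {
            try {
                while (!listener.isClosed()) {
                    Socket client = listener.accept();
                    workers.submit(() -> handle(client));
                }
            } catch (IOException ignored) { /* socket closed on shutdown */ }
        });
        acceptor.setDaemon(true);
        acceptor.start();
    }

    // One request, one response, then the connection is closed.
    static void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(
                 new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String request = in.readLine();
            out.println("served: " + request);   // single payload, then close
        } catch (IOException ignored) { }
    }

    public void shutdown() throws IOException {
        listener.close();
        workers.shutdownNow();
    }
}
```

Because each worker returns to the pool as soon as its response is flushed, a small pool can churn through many short-lived requests; the trouble starts only when handlers stop returning.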
In this model, the number of requests that can be served per second depends on two things:
- How many threads are available to handle client requests
- How long it takes to serve one request
If all server threads are busy, incoming requests are put in a queue, and the server returns to the queued requests as threads become free. The number of requests handled per second can be far greater than the number of simultaneous connections allowed, because the time required to process a single request is very short. In other words, you can serve more requests in a second than you have threads.
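A quick back-of-the-envelope calculation makes this concrete. The numbers below are assumptions for illustration, not measurements:

```java
// Illustrative throughput estimate for the one-thread-per-request model:
// 100 worker threads, each request fully served in 10 ms.
public class ThroughputEstimate {
    public static void main(String[] args) {
        int threads = 100;
        int msPerRequest = 10;
        int perThreadPerSecond = 1000 / msPerRequest;      // 100 requests/s per thread
        int totalPerSecond = threads * perThreadPerSecond; // 10,000 requests/s
        System.out.println(totalPerSecond);                // prints 10000
    }
}
```

With only 100 threads the server handles 10,000 requests per second, because each thread is reused a hundred times a second. The math collapses the moment requests stop completing in milliseconds.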
However, there is one breed of applications that needs to hold onto the connection. Think of applications that deliver real-time data to clients (stock tickers), or applications where low latency is required. In the traditional web model above, the browser has to re-connect to get new data (polling). If updates can happen with high frequency (e.g. a chat application), then the polling frequency also has to increase.

An alternative to high-frequency polling is a push-based application. Once the browser connects to the server, the server maintains the connection until the browser times out (the server response stream is not closed) and keeps flushing data down the connection as it becomes available. In a servlet container, to hold the connection, your thread cannot exit the service method; otherwise the response stream will be closed. So what you do is block the thread on some condition within the service method. When push data becomes available, the thread writes it to the response stream and re-enters the blocked state. As long as you hold onto the connection, you cannot return this thread to the thread pool, and as more and more “push” connections are established, you run out of threads! The possible remedies are:
- Increase the number of server threads (which only raises the ceiling)
- Use asynchronous, non-blocking I/O, so that a held connection does not consume a thread (more on this below)
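The blocking push pattern described above can be sketched in plain Java. This is not actual servlet or BlazeDS code; a `BlockingQueue` stands in for the condition the service thread blocks on, and a `Writer` stands in for the response stream:

```java
import java.io.*;
import java.util.concurrent.*;

// Sketch of the blocking push pattern: the thread that entered the
// "service method" never returns; it blocks on a condition (here a
// queue) and writes each message to the still-open response stream.
public class BlockingPushConnection {
    private final BlockingQueue<String> updates = new LinkedBlockingQueue<>();

    // Server side: publish a new piece of data (e.g. a stock tick).
    public void push(String message) {
        updates.add(message);
    }

    // Stand-in for the servlet service() method. As long as this loop
    // runs, one server thread is tied up serving this single client.
    public void service(Writer responseStream, int messagesToDeliver)
            throws IOException, InterruptedException {
        for (int i = 0; i < messagesToDeliver; i++) {
            String message = updates.take();   // the thread blocks here
            responseStream.write(message + "\n");
            responseStream.flush();            // flush down the open connection
        }
        // Only when we return does the thread go back to the pool
        // and the response stream get closed.
    }
}
```

Every connected client pins one thread inside `service()` for the whole session, which is exactly why a few thousand push clients exhaust a thread-per-connection server.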
There is some confusion about whether BlazeDS supports real-time messaging. Yes, it does. In fact, BlazeDS has a full spectrum of channel types, ranging from simple polling to near-real-time polling to real-time streaming:
- Simple polling – ping the server from the Flex client using the traditional request/response model.
- Near-real-time polling (long polling) – instead of acknowledging right away, the server holds the polling request until there is a message for the client. This ensures messages are delivered to the client as soon as they become available. The caveat of long polling is the thread limitation in most application servers: at the moment, BlazeDS cannot support more than a few hundred long-polling clients on most application servers. However, this problem can be resolved once servers like Tomcat start to support asynchronous, non-blocking connection threads. Update: Tomcat 6 now supports NIO.
- Real-time streaming – BlazeDS supports real-time message streaming over AMF and HTTP. Unlike long polling, which closes and reopens the connection upon receiving a message, streaming keeps the connection open at all times. Streaming suffers from the same thread-blocking issue as long polling, so a cap must be set to keep idle connections from tying up all of the server's threads.
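The difference between simple polling and long polling can be boiled down to one line of plain Java. This is an illustration, not BlazeDS internals; a shared `BlockingQueue` stands in for the server-side message store:

```java
import java.util.concurrent.*;

// Sketch of simple polling vs. long polling over a shared message queue.
public class LongPollSketch {
    private final BlockingQueue<String> messages = new LinkedBlockingQueue<>();

    public void publish(String msg) { messages.add(msg); }

    // Simple polling: answer immediately, often with nothing.
    public String simplePoll() {
        return messages.poll();                 // null if no message yet
    }

    // Long polling: hold the request open until a message arrives or the
    // timeout expires. While we wait, one server thread stays occupied
    // for this one client -- the thread limitation mentioned above.
    public String longPoll(long timeoutMs) throws InterruptedException {
        return messages.poll(timeoutMs, TimeUnit.MILLISECONDS);
    }
}
```

Simple polling burns requests returning "nothing yet"; long polling answers the instant data arrives but parks a thread for each waiting client, which is where non-blocking I/O comes in.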
The reason people are confused is that Adobe did not release its proprietary push solution, RTMP, to BlazeDS, so RTMP is not available as a channel in the BlazeDS configuration files. BlazeDS lives in a servlet container and is hence constrained by the one-thread-per-connection limit, whereas LCDS has NIO-based channels that can scale to thousands of connections. On the other hand, BlazeDS has the advantage that it works over port 80/443, whereas LCDS uses a separate port for persistent connections, which requires firewall configuration. Once the servlet that implements BlazeDS is updated to support Comet events under Tomcat 6, and then Jetty Continuations, the long-polling technique will be fine.
UPDATE: We are waiting for a solution that supports Comet events under Tomcat 6. Then BlazeDS can be coupled to the Tomcat NIO HTTP listener and scale as well as any NIO-based server software.
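To see why NIO changes the scaling story, here is a minimal sketch of a selector-based event loop in plain Java. This is an illustration of the general technique, not Tomcat or BlazeDS internals: one thread multiplexes every open connection instead of parking one blocked thread per client.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

// One thread services every connection: accept new clients, push a
// message to each writable one, and keep all connections open.
public class NioEventLoop {
    private final Selector selector;
    private final ServerSocketChannel server;

    public NioEventLoop() throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));   // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    public int port() { return server.socket().getLocalPort(); }

    public void start() {
        Thread loop = new Thread(() -> {
            ByteBuffer msg = ByteBuffer.wrap("pushed\n".getBytes());
            try {
                while (selector.isOpen()) {
                    selector.select();           // wait for any ready channel
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isAcceptable()) {
                            SocketChannel c = server.accept();
                            c.configureBlocking(false);
                            c.register(selector, SelectionKey.OP_WRITE);
                        } else if (key.isWritable()) {
                            ((SocketChannel) key.channel()).write(msg.duplicate());
                            key.interestOps(0);  // connection stays open; thread moves on
                        }
                    }
                }
            } catch (IOException ignored) { }
        });
        loop.setDaemon(true);
        loop.start();
    }
}
```

The crucial line is `key.interestOps(0)`: after pushing data, the loop simply stops watching that connection until there is something new to send, so thousands of idle push connections cost no threads at all.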
I have learned from this article that you can create a channel set on the client side, so Flex can fail over to other channels until it gets connected or the list is exhausted.
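For illustration, a channel set like the one the article describes is commonly driven by the `<default-channels>` order in BlazeDS's `services-config.xml`: the Flex client tries each channel in order until one connects. The channel ids below are assumptions for this sketch, not values from the article:

```xml
<!-- Hypothetical excerpt from WEB-INF/flex/services-config.xml.
     The client attempts the channels in this order and fails over
     to the next one if a connection cannot be established. -->
<services>
    <default-channels>
        <channel ref="my-streaming-amf"/>  <!-- try streaming first -->
        <channel ref="my-longpoll-amf"/>   <!-- fall back to long polling -->
        <channel ref="my-polling-amf"/>    <!-- last resort: simple polling -->
    </default-channels>
</services>
```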
Marc has put in an effort to build a better, spreadsheet-like data grid in Flex (check this out).
Here are the references I used for this article:
- Tuning Apache and Tomcat for Web 2.0 Comet applications
- Performance of Grids for Streaming Data – this shows performance numbers for various front-end technologies. Again, Flex shows a good result.
- Are raining comets and threads? – Comet Daily
- Comet & Java: Threaded Vs Nonblocking IO
- JDK 1.6 uses epoll to implement NIO
- BlazeDS developer guide
- Achieve a performance breakthrough using BlazeDS – Farata Systems put in the effort to write an NIO channel that runs on Jetty 7, with promising results.