To handle web requests, there are two competing web architectures: the thread-based one and the event-driven one.
The most intuitive way to implement a multi-threaded server is to follow the process/thread-per-connection approach.
It is appropriate for sites that need to avoid threading for compatibility with non-thread-safe libraries.
It is also the best Multi-Processing Module for isolating requests, so that a problem with a single request will not affect any other.
Processes, however, are heavyweight: context switching is slower and each process consumes more memory. The thread-per-connection approach therefore came into being for better scalability, though programming with threads is error-prone and hard to debug.
In order to tune the number of threads for the best overall performance and to avoid thread creation/destruction overhead, it is common practice to put a single dispatcher thread in front of a bounded blocking queue and a thread pool. The dispatcher blocks on the socket for new connections and offers them to the bounded blocking queue. Connections exceeding the queue's limit are dropped, but latencies for accepted connections become predictable. A pool of threads polls the queue for incoming requests, then processes them and responds.
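The dispatcher/bounded-queue/worker-pool arrangement can be sketched in a few lines. This is a minimal illustration rather than a real socket server: the queue capacity, the worker count, and the `dispatch` helper are all invented for the example, and "connections" are just strings.

```python
import queue
import threading

QUEUE_LIMIT = 4    # bounded queue capacity (illustrative value)
WORKER_COUNT = 2   # size of the worker thread pool (illustrative value)

task_queue = queue.Queue(maxsize=QUEUE_LIMIT)
results = []
results_lock = threading.Lock()

def worker():
    """Worker thread: polls the queue, processes a request, responds."""
    while True:
        request = task_queue.get()
        if request is None:          # sentinel: shut this worker down
            task_queue.task_done()
            break
        with results_lock:
            results.append(f"handled {request}")
        task_queue.task_done()

def dispatch(request):
    """Dispatcher: offer the connection; drop it if the queue is full."""
    try:
        task_queue.put_nowait(request)
        return True
    except queue.Full:
        return False                 # connection dropped, latency stays bounded

workers = [threading.Thread(target=worker) for _ in range(WORKER_COUNT)]
for w in workers:
    w.start()

# The dispatcher offers ten "connections"; some may be dropped when the
# bounded queue is full.
accepted = [dispatch(f"conn-{i}") for i in range(10)]

task_queue.join()                    # wait until accepted requests are handled
for _ in workers:
    task_queue.put(None)             # stop the pool
for w in workers:
    w.join()
```

Every accepted connection is eventually handled, and at least the first `QUEUE_LIMIT` offers always succeed; anything beyond that depends on how fast the workers drain the queue, which is exactly the predictable-latency trade-off described above.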
Unfortunately, there is always a one-to-one relationship between connections and threads. Long-lived connections such as Keep-Alive connections give rise to a large number of worker threads sitting idle, waiting on whatever is slow, e.g. file-system access, the network, etc. In addition, hundreds or even thousands of concurrent threads can waste a great deal of memory on stack space.
The event-driven approach decouples threads from connections: it uses threads only to run callbacks/handlers for specific events.
An event-driven architecture consists of event creators and event consumers. The creator, which is the source of the event, only knows that the event has occurred. Consumers are entities that need to know the event has occurred. They may be involved in processing the event or they may simply be affected by the event.
The Reactor Pattern
The reactor pattern is one implementation technique of the event-driven architecture. In simple words, it uses a single-threaded event loop that blocks on resources emitting events and dispatches them to the corresponding handlers/callbacks.
There is no need to block on I/O, as long as handlers/callbacks are registered to take care of the events. Events are things like a new incoming connection, ready-for-read, ready-for-write, etc.
Those handlers/callbacks may utilize a thread pool in multi-core environments.
This pattern decouples modular application-level code from reusable reactor implementation.
There are two important participants in the architecture of Reactor Pattern:
A Reactor runs in a separate thread and its job is to react to I/O events by dispatching the work to the appropriate handler. It's like a telephone operator in a company who answers calls from clients and transfers the line to the appropriate receiver.
A Handler performs the actual work to be done for an I/O event, much like the actual officer in the company whom the calling client wants to speak to.
Reactor responds to I/O events by dispatching the appropriate handler. Handlers perform non-blocking actions.
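These two participants can be made concrete with a toy reactor built on Python's `selectors` module: the loop blocks on the demultiplexer (`sel.select`) and dispatches each ready event to the handler registered for it. The echo behavior and the helper names (`accept_handler`, `read_handler`) are invented for this sketch; a production reactor such as Netty's is far more elaborate.

```python
import selectors
import socket
import threading

sel = selectors.DefaultSelector()

def accept_handler(server_sock):
    # Handler for "new connection" events: register the client for reads.
    client, _addr = server_sock.accept()
    client.setblocking(False)
    sel.register(client, selectors.EVENT_READ, read_handler)

def read_handler(client):
    # Handler for "ready for read" events: echo the data back, non-blocking.
    data = client.recv(1024)
    if data:
        client.sendall(data)
    else:                            # peer closed the connection
        sel.unregister(client)
        client.close()

# Listening socket: its "ready for read" event means a new connection.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept_handler)
port = server.getsockname()[1]

# A client in another thread, playing the role of "one of many clients".
reply = {}
done = threading.Event()
def client():
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"ping")
        reply["echo"] = c.recv(1024)
    done.set()
t = threading.Thread(target=client)
t.start()

# The reactor (telephone operator): block on events, dispatch to handlers.
while not done.is_set():
    for key, _mask in sel.select(timeout=0.1):
        key.data(key.fileobj)        # key.data holds the registered handler
t.join()
sel.close()
server.close()
```

Note that one thread serves both the accept and the read: the reactor never blocks on a particular connection, only on the demultiplexer.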
The intent of the Reactor pattern is:
The Reactor architectural pattern allows event-driven applications to demultiplex and dispatch service requests that are delivered to an application from one or more clients.
The Reactor keeps looking for events and, once an event is triggered, informs the corresponding event handler to handle it.
The Reactor Pattern is a design pattern for synchronous demultiplexing and ordering of events as they arrive.
It receives messages/requests/connections coming from multiple concurrent clients and processes them sequentially using event handlers.
The purpose of the Reactor design pattern is to avoid the common problem of creating a thread for each message/request/connection.
In summary: when servers have to handle more than 10,000 concurrent clients, a thread-per-connection stack such as Tomcat/Glassfish/JBoss/HttpClient cannot scale the connections.
The Reactor then receives events from a set of handles and distributes them sequentially to the corresponding event handlers.
So an application using the reactor only needs a single thread to handle simultaneously arriving events.
Basically, the standard Reactor allows an application to deal with simultaneous events while maintaining the simplicity of single threading.
A demultiplexer is a circuit that has one input and more than one output; it is used when you want to send a signal to one of several devices.
This description sounds similar to that of a decoder, but a decoder is used to select among many devices, while a demultiplexer is used to send a signal to one device among many.
A Reactor allows multiple blocking tasks to be processed efficiently using a single thread.
Reactor manages a set of event handlers and executes a cycle.
When called to perform a task, it connects the task with an available handler and marks that handler as active.
The cycle of events:
1 - Finds all handlers that are active and unblocked, or delegates this to a dispatcher implementation.
2 - Executes each of the handlers found, sequentially, until they complete or reach a point where they block. Completed handlers become inactive (or available for reuse), allowing the event cycle to continue.
3 - Repeats from Step One (1)
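The three-step cycle above can be mimicked with plain Python generators: each `yield` stands for "a point where the handler blocks", and the loop requeues handlers that are still active while dropping the completed ones. The `EventLoop` class and the handler names are invented for illustration.

```python
from collections import deque

class EventLoop:
    """Minimal sketch of the cycle: run active handlers until they
    complete or yield (i.e. reach a point where they would block)."""

    def __init__(self):
        self.active = deque()

    def register(self, handler_gen):
        self.active.append(handler_gen)

    def run(self):
        while self.active:                   # step 1: active, unblocked handlers
            handler = self.active.popleft()
            try:
                next(handler)                # step 2: run until it blocks (yield)
                self.active.append(handler)  # still active: requeue
            except StopIteration:
                pass                         # completed: becomes inactive
            # step 3: repeat from step 1

log = []

def handler(name, steps):
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield                                # simulated blocking point

loop = EventLoop()
loop.register(handler("a", 2))
loop.register(handler("b", 3))
loop.run()
# log is now ["a:0", "b:0", "a:1", "b:1", "b:2"]
```

The interleaved log shows the point of the pattern: the two "blocking" tasks make progress concurrently even though only one thread ever runs.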
Why does it matter nowadays?
Because the Reactor pattern is used by Node.js, Vert.x, Reactive Extensions, Netty, Nginx and others. So if you like identifying patterns to understand how things work behind the scenes, it is important to pay attention to this one.