Medusa: A High-Performance Internet Server Architecture

What is Medusa?

Medusa is an architecture for high-performance, robust, long-running TCP/IP servers (like HTTP, FTP, and NNTP). Medusa differs from most other server architectures in that it runs as a single process, multiplexing I/O to all of its client and server connections within that one process/thread.

Medusa is written in Python, a high-level object-oriented language that is particularly well suited to building powerful, extensible servers. Medusa can be extended and modified at run-time, even by the end-user. User 'scripts' can be used to completely change the behavior of the server, and even add in completely new server types.

How Does it Work?

Most Internet servers are built on a 'forking' model. ('Fork' is a Unix term for starting a new process.) Such servers actually create an entire new process for every single client connection. This approach is simple to implement, but does not scale very well to high-load situations. Lots of clients mean lots of processes, which gobble up large quantities of virtual memory and other system resources. A high-load server thus needs a lot of memory. Many popular Internet servers are running with hundreds of megabytes of memory.
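
For the curious, here is a minimal sketch (in Python, Unix only) of what the forking model looks like. The host, port, and canned response are made up for illustration; this is not code from any particular server.

    # Forking model: one brand-new process per client connection.
    import os
    import signal
    import socket

    def serve_forking(host='127.0.0.1', port=8080):
        signal.signal(signal.SIGCHLD, signal.SIG_IGN)   # let the kernel reap finished children
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind((host, port))
        listener.listen(5)
        while True:
            conn, addr = listener.accept()
            if os.fork() == 0:          # child: handles exactly one client, then exits
                listener.close()
                conn.sendall(b'HTTP/1.0 200 OK\r\n\r\nhello\r\n')
                conn.close()
                os._exit(0)
            else:                       # parent: goes straight back to accepting
                conn.close()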

The I/O bottleneck.

The vast majority of Internet servers are I/O bound - for any one process, the CPU sits idle 99.9% of the time, usually waiting for input from an external device (in the case of an Internet server, waiting for input from the network). This problem is exacerbated by the imbalance between server and client bandwidth: most clients connect at relatively low bandwidths (28.8 kbits/sec or less; with network delays and inefficiencies the effective rate can be far lower). To a typical server CPU, the time between bytes from such a client seems like an eternity! (Consider that a 200 MHz CPU can perform roughly 50,000 operations for each byte received from such a client.)
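
To spell out the arithmetic behind that parenthetical claim (the assumption of roughly one operation per clock cycle is mine, just to keep the numbers round):

    # a 28.8 kbit/s client feeding a 200 MHz server CPU
    client_bytes_per_sec = 28800 / 8               # = 3,600 bytes/sec
    cpu_ops_per_sec = 200 * 1000 * 1000            # ~1 operation per cycle (rough assumption)
    print(cpu_ops_per_sec / client_bytes_per_sec)  # ~55,000 operations per byte received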

A simple metaphor for a 'forking' server is that of a supermarket cashier: for every 'customer' being processed [at a cash register], another 'person' must be created to handle the session. But what if your checkout clerks were so fast they could each handle hundreds of customers per second? Since these clerks spend almost all of their time waiting for a customer to come through their line, you have a very large staff sitting around idle 99.9% of the time! Why not replace this staff with a single super-clerk, flitting from aisle to aisle?

This is exactly how Medusa works! It multiplexes all its I/O through a single select() loop - this loop can handle hundreds, even thousands of simultaneous connections; the actual number is limited only by your operating system. For a more technical overview, see Asynchronous Socket Programming.
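
Here is a stripped-down sketch of that select() pattern in Python. It is not Medusa's actual code, and a real server would also buffer its writes and parse real requests; it simply shows one process serving many sockets with no forking and no threads.

    import select
    import socket

    def serve_select(host='127.0.0.1', port=8080):
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind((host, port))
        listener.listen(50)
        clients = []                              # every connection lives in this one process
        while True:
            readable, _, _ = select.select([listener] + clients, [], [])
            for sock in readable:
                if sock is listener:              # a new connection: just add it to the set
                    conn, addr = listener.accept()
                    clients.append(conn)
                else:                             # an existing client has data ready
                    data = sock.recv(4096)
                    if data:
                        sock.sendall(b'HTTP/1.0 200 OK\r\n\r\nhello\r\n')
                    clients.remove(sock)
                    sock.close()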

Why is it Better?

Performance

The most obvious advantage of a single long-running server process is a dramatic improvement in performance. The forking model involves two major types of overhead: the cost of creating (and later destroying) a new process for every client connection, and the virtual memory consumed by each of those processes.

Medusa eliminates both types of overhead. Because it runs as a single process, there is no per-client creation/destruction overhead, so each client request is answered very quickly, and virtual memory requirements are lowered dramatically. Memory requirements can even be controlled with more precision in order to gain the highest performance possible for a particular machine configuration.

Persistence

Another major advantage of the single-process model is persistence. Often it is necessary to maintain some sort of state information that is available to each and every client, e.g., a database connection or file pointer. Forking-model servers that need such shared state must arrange some method of sharing it - usually via an IPC (inter-process communication) mechanism such as sockets or named pipes. IPC itself adds yet another significant and needless overhead; single-process servers can simply share such information within a single address space.

Implementing persistence in Medusa is easy - the address space of its process (and thus its open database handles, variables, etc...) is available to each and every client.
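
As a sketch of what this looks like in practice (the names shared_cache, hit_count, and handle_request are illustrative, not part of Medusa's API):

    # Ordinary module-level objects are visible to every client connection,
    # because every client is handled inside the same process.
    shared_cache = {}       # one cache, shared by all clients - no IPC required
    hit_count = 0

    def expensive_lookup(path):
        # stand-in for a database query or file read
        return 'contents of %s' % path

    def handle_request(path):
        global hit_count
        hit_count = hit_count + 1
        if path not in shared_cache:
            shared_cache[path] = expensive_lookup(path)
        return shared_cache[path]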

Not a Strawman

All right, at this point many of my readers will say I'm beating up on a strawman. In fact, they will say, such server architectures are already available - like Microsoft's Internet Information Server. IIS avoids the above-named problems by using threads. Threads are 'lightweight processes' - they represent multiple concurrent execution paths within a single address space. Threads solve many of the problems mentioned above, but also create new ones: each thread still carries its own overhead, and access to shared data must be coordinated (usually with locks), which adds complexity and opportunities for subtle bugs.

Threads are required in only a limited number of situations. In many cases where threads seem appropriate, an asynchronous solution can actually be written with less work, and will perform better. Avoiding the use of threads also makes access to shared resources (like database connections) easier to manage, since multi-user locking is not necessary.

Note: In the rare case where threads are actually necessary, Medusa can of course use them, if the host operating system supports them. For example, an image-conversion or fractal-generating server might be CPU-intensive, rather than I/O-bound, and thus a good candidate for running in a separate thread.
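
As an illustration (not Medusa code), a CPU-bound job can be handed to a worker thread while the single select() loop keeps servicing I/O; compute() below is a placeholder for image conversion, fractal generation, and the like.

    import threading
    import queue

    results = queue.Queue()     # the main loop can poll this between select() calls

    def compute(job_id, data):
        # stand-in for the CPU-intensive work
        results.put((job_id, sum(data)))

    def start_background_job(job_id, data):
        worker = threading.Thread(target=compute, args=(job_id, data), daemon=True)
        worker.start()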

Another solution (used by many current HTTP servers on Unix) is to 'pre-spawn' a large number of processes - incoming clients are handed to each server process in turn. Although this alleviates the performance problem up to that number of simultaneous users, it still does not scale well: to reliably and efficiently handle [n] users, [n] processes are still necessary.
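
For comparison, here is a minimal sketch of that pre-spawn model (Unix only, with made-up names): the cost of fork() is paid once at startup instead of per connection, but there is still one process per simultaneous client.

    import os
    import socket

    def serve_prefork(host='127.0.0.1', port=8080, workers=8):
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind((host, port))
        listener.listen(50)
        for _ in range(workers):
            if os.fork() == 0:                      # child: serve clients forever
                while True:
                    conn, addr = listener.accept()  # the kernel hands each connection to one worker
                    conn.sendall(b'HTTP/1.0 200 OK\r\n\r\nhello\r\n')
                    conn.close()
        os.wait()                                   # parent just sits; the workers do all the work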

Other Advantages

Problems like memory corruption and stray-pointer bugs, which plague long-running servers written in lower-level languages, are virtually non-existent when working in a high-level language like Python, where, for example, all access to variables and their components is checked at run-time for valid range operations. Even unforeseen errors and operating system bugs can be caught - Python includes a full exception-handling system which promotes the construction of 'highly available' servers. Rather than crashing the entire server, Medusa will usually inform the user, log the error, and keep right on running.
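
As a sketch of the idea (handle_one_request and safe_dispatch are illustrative names, not Medusa functions):

    import traceback

    def handle_one_request(conn, request):
        # stand-in for the real request handler, which might contain a bug
        conn.sendall(b'HTTP/1.0 200 OK\r\n\r\nhello\r\n')

    def safe_dispatch(conn, request):
        try:
            handle_one_request(conn, request)
        except Exception:
            traceback.print_exc()       # log the error instead of dying
            try:
                conn.sendall(b'HTTP/1.0 500 Internal Server Error\r\n\r\n')
            except Exception:
                pass                    # the client may already be gone
        # ...and the server keeps right on running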

Current Features

Where Can I Get It?

Medusa is available from http://www.nightmare.com/medusa

Feedback, both positive and negative, is much appreciated; please send email to rushing@nightmare.com.