Jupiter Broadcasting

NGINX vs Apache | TechSNAP 39

How NGINX stacks up to Apache, and which server is right for the job!

PLUS: The EFF has raised a red flag over the new version of AOL’s instant messenger. We’ll share the details on how it’s logging your conversations and pre-loading your links.

All that and more, in this week’s episode of TechSNAP!

Thanks to:

GoDaddy.com: Use our code TechSNAP10 to save 10% at checkout, or TechSNAP20 to save 20% on hosting!

Pick your code and save:
techsnap7: $7.49 .com
techsnap10: 10% off
techsnap20: 20% off 1, 2, 3 year hosting plans
techsnap40: $10 off $40
techsnap25: 25% off new Virtual DataCenter plans
techsnapx: 20% off .xxx domains

Direct Download Links:

HD Video | Large Video | Mobile Video | MP3 Audio | OGG Audio | YouTube

Subscribe via RSS and iTunes:

Show Notes:

StratFor database full of incredibly weak passwords


EFF warns users about privacy issues with the new AIM chat client


Lilupophilupop SQL Injection attack spreading rapidly


Nginx overtakes Microsoft as No. 2 Web server


Feedback:

Q: Apache vs. nginx?
A: NGINX and Apache both have their strengths and weaknesses, so each has its place depending on what your requirements and goals are.

NGINX is fast and light, designed to serve static content as quickly as possible. Out of the box, it lacks the ability to do any kind of interpretation or CGI. NGINX is, however, a great load balancer, with the ability to handle requirements such as ‘sticky’ backends, last-resort backends, and unequal (weighted) load balancing. NGINX is event driven, so it uses a small number of single-threaded workers, which allows it to easily meet the C10K requirement (10,000 concurrent clients) using only about 10 MB of RAM.
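As a rough sketch of those load-balancing features (the hostnames, ports, and weights here are invented; the directives are from the stock nginx distribution), an upstream block might look like:

```nginx
# Weighted ('unfair') balancing with a last-resort backend.
upstream app_pool {
    server app1.example.com:8080 weight=3;  # receives roughly 3x the traffic
    server app2.example.com:8080;           # default weight=1
    server spare.example.com:8080 backup;   # only used when the others are down
}

# 'Sticky' balancing: ip_hash pins each client IP to the same backend.
# (Note: backup servers cannot be combined with ip_hash.)
upstream sticky_pool {
    ip_hash;
    server app1.example.com:8080;
    server app2.example.com:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_pool;
    }
}
```

The two upstream blocks are shown separately because nginx does not allow the `backup` parameter together with the `ip_hash` balancing method.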

Apache is far more powerful and versatile. Apache has a number of different MPMs (Multi-Processing Modules). The most common is prefork, where Apache starts a number of worker processes that then wait for incoming client connections. When the number of idle workers gets too low, Apache starts more, in an attempt to ensure that there is always a worker ready to handle the next request rather than making the user wait while a worker starts up.

The issue with this approach is that each worker must load all of the capabilities of the web server, for example things like PHP and WebDAV. This means that even a worker which is only going to serve a simple image requires the memory and resources of a worker that is processing a much more complex request. There is a limit to how many workers can be running at once, due to limited resources on the machine such as RAM. If the Apache MPM is not tuned with a proper MaxClients setting to limit the number of workers that are started, the server can quickly enter ‘swap death’: it is constantly paging memory in and out of swap trying to service the requests, slowing down the rate at which requests can be served, which further increases the number of pending requests.

Also, an Apache worker is not free to start work on the next request until the client has received the response and closed the connection. This means that ‘keep alive’ connections, which are a great performance improvement, can also reduce the available capacity of the server, as many workers are tied up simply waiting to see if there will be an additional request.
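A minimal sketch of that prefork tuning, assuming Apache 2.2-era directive names (the numbers are illustrative only): the key idea is to size MaxClients so that all workers fit in physical RAM, roughly available RAM divided by the per-worker footprint (e.g. ~4000 MB free / ~25 MB per worker ≈ 150 workers).

```apache
# Prefork MPM tuning -- values are examples, not recommendations.
<IfModule mpm_prefork_module>
    StartServers          5     # workers launched at startup
    MinSpareServers       5     # spawn more when idle workers drop below this
    MaxSpareServers      10     # kill workers when idle count exceeds this
    MaxClients          150     # hard cap: ~free RAM / per-worker memory use
    MaxRequestsPerChild 1000    # recycle workers to contain memory leaks
</IfModule>
```

Capping MaxClients means excess requests queue briefly instead of driving the box into swap, which is almost always the better trade.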

NGINX is, however, not incapable of dealing with things like PHP. NGINX is designed as a reverse proxy, allowing it to pass off requests that it cannot handle itself to the appropriate server that can handle them. For most setups there are two major options. The first is FastCGI, which works much like the Apache mechanism described above: a number of PHP, Perl, or other processes are preforked and wait to answer requests. A major difference is that these workers never receive simple requests for things such as images; NGINX handles those internally. The other option is to proxy the requests to another server, such as an Apache server, which will then handle the more complex requests. An advantage of this solution is that NGINX receives the response from Apache (usually over localhost or an internal LAN) very quickly, freeing that Apache worker for the next request, while NGINX handles returning the response to the client at little to no cost due to its event-driven nature.
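Both options can be sketched in one server block (paths, ports, and the php-fpm backend are assumptions for illustration):

```nginx
server {
    listen 80;
    root /var/www/example;              # hypothetical docroot

    # nginx serves static assets itself -- these never reach a backend worker
    location ~* \.(jpg|png|gif|css|js)$ {
        expires 30d;
    }

    # Option 1: hand dynamic requests to a FastCGI pool (e.g. php-fpm on :9000)
    location ~ \.php$ {
        fastcgi_pass  127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include       fastcgi_params;
    }

    # Option 2 (alternative): proxy everything else to a backend Apache
    # listening on a local port, which handles the complex requests.
    # location / { proxy_pass http://127.0.0.1:8080; }
}
```

Either way, the cheap requests stay in nginx and only genuinely dynamic work ties up a heavyweight worker.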

Some notable shortcomings of NGINX: for performance and security reasons, NGINX does not support .htaccess files; all configuration must be done in the server config file. Extensive rewrite rules are possible, but they use a very different format from standard Apache mod_rewrite rules. There are currently no web hosting control panels that support NGINX.
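To illustrate the difference in rewrite syntax (the URL pattern here is invented), the same rule in each server might look like:

```nginx
# Apache mod_rewrite (in .htaccess or the vhost config):
#   RewriteRule ^article/([0-9]+)$ /article.php?id=$1 [L]
#
# Roughly equivalent rule in the nginx server config:
server {
    listen 80;
    rewrite ^/article/([0-9]+)$ /article.php?id=$1 last;
}
```

The regex itself carries over almost unchanged; what differs is the surrounding syntax, the flag names (`last` vs. `[L]`), and the fact that the nginx rule can only live in the central config, not in a per-directory file.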

While both servers are very useful, if you need versatility or a generalized solution, value ease of use, or have to support many customers, Apache is likely the better solution. If you have a very busy site and need to get the most out of your hardware, NGINX is quite likely the right solution for you. Even just placing NGINX in front of your Apache server can greatly increase performance.

Q: Common Questions!

We would love to answer common sysadmin questions; in fact, that is what I am doing right now :p. Just send them in to techsnap@jupiterbroadcasting.com and we’ll try to keep throwing knowledge at you. Developer questions are a bit more complicated, as neither Chris nor I are developers, although we can answer a lot of DevOps questions. Send them in anyway, and we’ll see if we can come up with an answer for you.

‘Server too busy’ pages, such as the failwhale, are static, and so require little to no resources to return to the user. If you are using a server like NGINX, you can serve thousands of failwhale pages per second from a laptop without issue. Most sites big enough to need an ‘overloaded’ page have a dedicated set of web servers or load balancers in front of the actual application servers that run the site, and it is these front-end servers that return the overloaded page when they cannot find a backend server available to serve the user’s request.
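A minimal sketch of that front-end pattern in nginx (backend addresses and file paths are invented): when no backend answers, nginx generates a 502/503/504 itself, and `error_page` swaps in a static overload page that costs almost nothing to serve.

```nginx
upstream app {
    server 10.0.0.10:8080;   # hypothetical application servers
    server 10.0.0.11:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://app;
        # If every backend is down or timing out, show the static page instead.
        error_page 502 503 504 /overloaded.html;
    }

    location = /overloaded.html {
        root /var/www/static;   # the 'failwhale' lives here, served from disk
    }
}
```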

For your second question, you’ll need to be more specific. Email us back with a use case, and I’ll try to walk you through some potential solutions.

Roundup: