Gunicorn

Gunicorn is an alternative to uWSGI that is popular in some circles. It is an all-Python solution with a much more limited feature set. The mechanics of running an application under Gunicorn are not much different from those of uWSGI, as the side-by-side comparisons below show.
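
The examples below load a hello.py module exposing a WSGI callable named application, which is what both configurations reference. The application itself is not shown here; a minimal sketch consistent with the configs below might look like this:

def application(environ, start_response):
    # Minimal WSGI app: answer every request with a plain-text greeting.
    body = b'Hello world!\n'
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]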

HTTP protocol, TCP socket

uWSGI

[uwsgi]
pidfile = /tmp/hello-http-tcp.pid
daemonize = /tmp/uwsgi.log
http-socket = 127.0.0.1:2000
wsgi-file = hello.py
master = true
processes = 4
threads = 2
uid = daemon
gid = daemon

With this config file, the application can be started with the following command from the directory containing hello.py:

# uwsgi <config-file>

The application can be stopped with the following command:

# uwsgi --stop <pid-file>

Gunicorn

pidfile = '/tmp/hello-http-tcp.pid'
errorlog = '/tmp/gunicorn.log'
loglevel = 'warning'
bind = '127.0.0.1:2000'
daemon = True
workers = 4
worker_class = 'sync'
threads = 2
user = 'daemon'
group = 'daemon'

Note the Python syntax.
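
Because the configuration file is executed as ordinary Python, values can be computed rather than hard-coded. For example, a common pattern (an illustration only, not part of the configuration above) is to derive the worker count from the number of CPUs:

import multiprocessing

# Scale workers to the machine instead of pinning them to 4.
workers = multiprocessing.cpu_count() * 2 + 1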

With this config file, the application can be started with the following command from the directory containing hello.py:

# gunicorn -c <config-file> hello:application

The application can be stopped by sending SIGTERM to the process id stored in the configured pid file.
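
For example, assuming the pid file from the configuration above:

# kill -TERM $(cat /tmp/hello-http-tcp.pid)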

Gunicorn's approach to virtualenv is different than that of uWSGI. Refer to Using Virtualenv in the Gunicorn documentation for more information.

HTTP protocol, Unix socket

uWSGI

[uwsgi]
pidfile = /tmp/hello-http-unix.pid
daemonize = /tmp/uwsgi.log
http-socket = /tmp/helloHTTP.s
wsgi-file = hello.py
master = true
processes = 4
threads = 2
uid = daemon
gid = daemon

Gunicorn

pidfile = '/tmp/hello-http-unix.pid'
errorlog = '/tmp/gunicorn.log'
loglevel = 'warning'
bind = 'unix:/tmp/helloHTTP.s'
daemon = True
workers = 4
worker_class = 'sync'
threads = 2
user = 'daemon'
group = 'daemon'
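
Either setup can be spot-checked over the Unix socket with a curl build new enough to support --unix-socket (a convenience for verification only, not part of the benchmarks below):

# curl --unix-socket /tmp/helloHTTP.s http://localhost/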

Performance

In my testing, uWSGI performs significantly better than Gunicorn across the board. Here are the results I saw with the hello example, which simply returns Hello world, and with a readbody example, which accepts a 70K POST body and echoes it back to the client.

Software           hello with TCP    hello with Unix    readbody with TCP    readbody with Unix
httpd, uWSGI       19930 req/sec     21182 req/sec      9063 req/sec         9669 req/sec
httpd, Gunicorn     9139 req/sec     10325 req/sec      3157 req/sec         3301 req/sec
nginx, uWSGI       13577 req/sec     18516 req/sec      3801 req/sec         4311 req/sec
nginx, Gunicorn    10950 req/sec     12879 req/sec      3181 req/sec         3247 req/sec
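
The readbody sample is likewise not shown in the original. A minimal sketch, assuming it does nothing more than read the POST body and echo it back, might be:

def application(environ, start_response):
    # Read the request body (the benchmark posts roughly 70K) and echo it back.
    length = int(environ.get('CONTENT_LENGTH') or 0)
    body = environ['wsgi.input'].read(length)
    start_response('200 OK', [('Content-Type', 'application/octet-stream'),
                              ('Content-Length', str(len(body)))])
    return [body]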

A few notes about how these numbers were obtained:

  • six runs with ab over loopback with 100000 requests and concurrency 100, taking the average requests per second after dropping the high and low runs (a sample ab invocation is sketched after this list)
  • Linux x86_64 3.11.0-23-generic (Ubuntu 13.10) with this TCP tuning:

    # echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range
    # echo "3" > /proc/sys/net/ipv4/tcp_fin_timeout
    # echo "1" > /proc/sys/net/ipv4/tcp_tw_recycle
    # echo "0" > /proc/sys/net/ipv4/tcp_syncookies

Be careful how you use this information. The test isn't any particular application (certainly not yours), the client is on the same machine as the web server and application, the configurations of the web servers and middleware are very basic, and so on. You should expect that investing time in profiling or otherwise investigating any particular combination would lead to improvements in the web server or middleware configuration that might change its performance relative to another combination.

An interesting result for this simple test is that httpd + uWSGI beats nginx + uWSGI across the board (generally by a very significant amount), while nginx + Gunicorn beats httpd + Gunicorn almost across the board. I wouldn't be surprised if this is a hint that some default configuration of one of the four pieces of software is especially bad for these samples. In general, I would claim that data which doesn't fit the overall pattern (uWSGI faster than Gunicorn, Unix sockets faster than TCP, etc.) is an indication of an implementation or configuration oddity which warrants further investigation.