With both httpd and nginx, for all protocols tested, Unix sockets are noticeably faster than TCP.
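The difference is easy to observe yourself. The following is a minimal, illustrative micro-benchmark (all names and message sizes are my own choices, not from any particular server) that echoes small messages over loopback TCP and over a Unix socket and times each:

```python
# Illustrative micro-benchmark: round-trip time over loopback TCP vs. a
# Unix socket. Message size, round count, and names are arbitrary choices.
import os
import socket
import tempfile
import threading
import time

def serve(server_sock):
    """Accept one connection and echo everything back until EOF."""
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)

def bench(sock, rounds=1000):
    """Time `rounds` small request/response round trips."""
    start = time.perf_counter()
    for _ in range(rounds):
        sock.sendall(b"x" * 64)
        sock.recv(4096)
    return time.perf_counter() - start

# TCP echo server on loopback, kernel-assigned port.
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))
tcp_srv.listen(1)
threading.Thread(target=serve, args=(tcp_srv,), daemon=True).start()
tcp = socket.create_connection(tcp_srv.getsockname())

# Unix-socket echo server on a temporary path.
path = os.path.join(tempfile.mkdtemp(), "bench.sock")
unix_srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
unix_srv.bind(path)
unix_srv.listen(1)
threading.Thread(target=serve, args=(unix_srv,), daemon=True).start()
unix = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
unix.connect(path)

print("tcp :", bench(tcp))
print("unix:", bench(unix))
```

On a typical Linux machine the Unix-socket loop finishes faster, though the exact ratio varies with kernel, hardware, and load.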
Using any of these protocols over TCP requires the server side to implement lingering close, giving the client time to read the entire response before a TCP RST discards the unread remainder.
HTTP servers like nginx and httpd, and FastCGI server libraries like flup and the oft-used Open Market library, all implement this. Unfortunately, uWSGI currently does not, resulting in unavoidable, intermittent errors over TCP that come and go with client/server timing and the ordering of I/O operations.
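The lingering-close idea itself is simple. Here is a sketch of what the server side does after writing its response, assuming a plain stream socket (the function name and timeout are my own; real servers implement the same pattern in C):

```python
# Sketch of a lingering close on the application/server side. After the
# response is written, shut down only the write half, then drain any
# remaining input from the peer before fully closing. Closing with unread
# input pending would trigger a TCP RST, which can destroy response data
# still buffered in the kernel before the client reads it.
import socket

def lingering_close(conn: socket.socket, timeout: float = 2.0) -> None:
    conn.shutdown(socket.SHUT_WR)       # send FIN; peer can still read response
    conn.settimeout(timeout)            # don't wait forever for a slow peer
    try:
        while conn.recv(4096):          # discard input until EOF...
            pass
    except (socket.timeout, OSError):   # ...or give up after the timeout
        pass
    conn.close()                        # now a normal close, no RST
```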
For HTTP, TCP support is much more common than Unix socket support. Thus, when your application listens on a Unix socket, you probably can't point your favorite client program at the application for debugging, and you may not be able to switch out httpd or nginx for some other proxy either.
For protocols other than HTTP, it is largely a moot point because there are no clients in the traditional sense.
Only TCP supports this, of course.
Binding to low (privileged) ports requires starting as root or other system-specific configuration. But applications behind the web server usually run on high (unrestricted) ports, so this is rarely a consideration. Client code, such as the web server connecting to the application's port, can connect to any port without privileges.
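A small sketch of the unprivileged case described above (port and addresses are illustrative): the listener binds port 0 so the kernel assigns a free high port, and the connecting side needs no privileges at all.

```python
# Illustrative: an unprivileged process can bind a high port and any
# process can connect to it. Port 0 asks the kernel for a free high port.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # no root needed for high ports
srv.listen(1)
port = srv.getsockname()[1]
assert port >= 1024                 # kernel-assigned ephemeral ports are high

# The "web server" side of the connection: no privileges required either.
cli = socket.create_connection(("127.0.0.1", port))
conn, _ = srv.accept()
```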
Unix sockets bring more complex filesystem permission considerations. In a production environment running more than a bare minimum of services, the effort of thinking these permissions through pays off: they ensure that unrelated services can't communicate with your application's sockets, whether through defects or malicious exploitation of vulnerabilities.
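As a concrete sketch of that restriction (path and mode are my own illustrative choices), the socket file can be chmod'ed so only its owner and one trusted group may connect:

```python
# Sketch: restrict a Unix socket so only the owning user and group can
# connect. Path and mode (0o660) are illustrative, not prescriptive.
import os
import socket
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)                  # creates the socket file
os.chmod(path, 0o660)           # rw for owner and group; nothing for others
srv.listen(1)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                # prints 0o660
```

On Linux, connect() on a Unix socket requires write permission on the socket file, so this mode shuts out every account outside the owner and group.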
TCP nuances include:
Closed connections stay visible (e.g. in netstat output) while their sockets remain in the TIME_WAIT state.
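TIME_WAIT also matters when a server restarts: without SO_REUSEADDR, rebinding the port can fail while old sockets linger. A minimal sketch (helper name is my own):

```python
# Sketch: SO_REUSEADDR lets a listener re-bind its port even while earlier
# connections on that port linger in TIME_WAIT (visible via netstat/ss).
import socket

def make_listener(port: int) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    s.listen(1)
    return s

srv = make_listener(0)                      # kernel picks a free port
port = srv.getsockname()[1]
cli = socket.create_connection(("127.0.0.1", port))
conn, _ = srv.accept()
conn.close()                                # closing first => TIME_WAIT here
cli.close()
srv.close()

srv2 = make_listener(port)                  # re-bind succeeds immediately
```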
Unix sockets present some very rare difficulties, such as lack of support on certain non-standard filesystems.
At one point, HP-UX didn't respect a half-close from the peer on Unix sockets.