Suexec behaviour with nginx

Posted on September 5, 2010

This week, I had to set up and configure an nginx server for the first time. If there is one thing I consider essential for a web server, it's clearly separating the environment of each website running on it. In particular, when you execute PHP (or other) scripts on your website, security is something you have to pay attention to.

With Apache, I used to use suexec, which executes a CGI or FastCGI script (PHP in my case) with the rights of the script's owner. Pretty cool, when you consider that by default all PHP scripts run as the same Apache user (www-data most of the time). But nginx doesn't have such a feature. There are actually a few things to pay attention to when you switch from Apache to nginx:

  • Apache has a mod_php module, which integrates PHP directly into the Apache process. nginx doesn't, so you always have to use CGI or FastCGI to execute PHP scripts
  • Apache has modules such as suexec or suPHP, which allow running scripts with the script owner's rights. nginx doesn't

Alright, so how do I protect my websites from each other's scripts? It's true that there is no suexec-like feature in nginx, but fortunately you can still get the same behaviour 🙂 Here is the code for an nginx server block (the equivalent of a VirtualHost in Apache):

location ~ \.php$ {
  # address of the PHP-CGI instance for this site (example value)
  fastcgi_pass 127.0.0.1:9000;
  fastcgi_index index.php;
  fastcgi_param SCRIPT_FILENAME /var/www/$fastcgi_script_name;
  include fastcgi_params;
}

Without going into the details too deeply, this makes nginx call your PHP-CGI instance on the address given in fastcgi_pass each time a .php file is requested. PHP-CGI then executes the script and returns the result, which is passed back to nginx, which passes it on to the client's browser…

The thing is, you can run as many instances of PHP-CGI as you like, and you can configure as many nginx servers (VirtualHosts) as you like. So I came up with the following architecture: one PHP-CGI instance per website (typically, in my case, each website folder belongs to a different user), with each nginx server calling its corresponding instance. Let's say we have 3 websites on our server:
                          site A      site B      site C
  root folder             /home/      /home/      /home/
  files owner             usera       userb       userc
  instance of PHP-CGI     one per site, each listening on its own port

You get the idea. To run an instance of PHP-CGI, use a command like this:

/usr/bin/php-cgi -c /etc/php/php90001.ini

NB: the details depend on which flavour of PHP you use (php5-cgi, php5-fpm…), but you will usually need one php.ini file per instance, because you have to specify at least the port it listens on and the user that runs it.
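As a sketch of how the three instances could be started: the ports (9001–9003), the per-user .ini file names, and the use of spawn-fcgi are my own illustrative assumptions, not part of the original setup.

```shell
# Hypothetical example: start one PHP-CGI instance per site, each running
# as that site's user and listening on its own local port.
# spawn-fcgi drops privileges to the given user/group before exec'ing php-cgi.
spawn-fcgi -u usera -g usera -a 127.0.0.1 -p 9001 -- /usr/bin/php-cgi -c /etc/php/usera.ini
spawn-fcgi -u userb -g userb -a 127.0.0.1 -p 9002 -- /usr/bin/php-cgi -c /etc/php/userb.ini
spawn-fcgi -u userc -g userc -a 127.0.0.1 -p 9003 -- /usr/bin/php-cgi -c /etc/php/userc.ini
```

Because each process runs as its own user, a script in usera's site cannot read or write userb's files (assuming sensible filesystem permissions), which is exactly the isolation suexec gave us under Apache.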

To declare a new website, create a new file in /etc/nginx/sites-enabled/ and put the right piece of configuration in it. Don't forget to specify the right fastcgi_pass address for each website.
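For example, a site file for usera's website might look like the sketch below; the server name, document root, and port 9001 are illustrative assumptions, not values from the original setup.

```nginx
server {
  listen 80;
  server_name sitea.example.com;   # hypothetical domain
  root /home/usera/www;            # hypothetical document root

  location ~ \.php$ {
    # point this site at its own PHP-CGI instance
    fastcgi_pass 127.0.0.1:9001;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
  }
}
```

Each additional site gets the same block with its own server_name, root, and fastcgi_pass port.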

Test your configuration with a test page:

<?php phpinfo(); ?>

and check that the user it reports is indeed the one you were expecting.

And voilà, it should work properly. This setup was done on a Debian (Lenny) server. There are many other things to do to secure a web server, and you shouldn't think that this procedure alone is enough, of course 😉

Nevertheless, I definitely prefer the Apache way of doing it; it requires far less work… How do you achieve this? Do you have a better way to do it, or some automation techniques or tips?

About the author

Cyril Mazur is a serial web entrepreneur with experience in various fields: online dating, forex & finance, blogging, online advertising... who enjoys building things that people like to use.