This applies to Debian 6 "Squeeze", using PHP 5.4.11 from Dotdeb, and Froxlor 0.9.27 from the Debian archives.

Setup instructions

  1. We assume that your Apache2 server is installed and working, and so is your PHP 5.4.x installation.
  2. Install PHP-FPM:
    apt-get install libapache2-mod-fastcgi php5-fpm
  3. Enable PHP-FPM in Froxlor. (After saving, you will see an additional "configuration" link in the line for PHP-FPM.)
  4. In the PHP-FPM configuration in Froxlor, change "Path to php-fpm configurations" to "/etc/php5/fpm/pool.d/", because that's the path where the Debian package expects these .conf files by default. (Alternatively, you could adapt that behavior by editing the include directive in /etc/php5/fpm/php-fpm.conf, at the very bottom).
  5. In the PHP-FPM configuration in Froxlor, change "Command to restart php-fpm" to "/etc/init.d/php5-fpm restart".
  6. Let Froxlor create the new configs:
    php /var/www/froxlor/scripts/froxlor_master_cronjob.php --force
  7. Exchange the php5 handler with the fastcgi one (and other stuff needed by PHP-FPM):
    a2enmod fastcgi actions alias
    a2dismod php5
  8. Fix the Apache error "Invalid command 'SuexecUserGroup'" that is triggered by the vhost configs generated by Froxlor [source]:
    1. apt-get install apache2-suexec
    2. a2enmod suexec
  9. Fix php-fpm failing to start because Froxlor did not create the system users and groups for the customers it refers to by name in the php-fpm config files (a scripted sketch follows below this list).
    1. cd /var/customers/webs/
    2. For every customer directory in there, run the equivalent of the following, using that customer's actual name and UID/GID values:
      addgroup --gid 10006 customername
      adduser --uid 10006 --gid 10006 customername
  10. Restart PHP-FPM:
    service php5-fpm restart
  11. Restart Apache2:
    service apache2 restart
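
For step 9, a scripted sketch could look like the following. It assumes that the directory names under /var/customers/webs/ match the customer names used in the pool configs, and that each customer directory is already owned by the numeric UID/GID that Froxlor assigned; check both assumptions before running it, since the adduser options shown are just one non-interactive way to create a matching account:

    # Create a matching system group and user for every customer directory
    cd /var/customers/webs/
    for customer in *; do
      uid=$(stat -c %u "$customer")   # numeric owner of the customer directory (assumption: set by Froxlor)
      gid=$(stat -c %g "$customer")
      addgroup --gid "$gid" "$customer"
      adduser --uid "$uid" --gid "$gid" --no-create-home --disabled-password --gecos "" "$customer"
    done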

Should work now. Verify by testing as shown below.

How to test your setup

  1. When testing your setup, test with a domain or subdomain site, not with the "IP and port" site. For the latter, Froxlor fails to create a proper pool configuration file in /etc/php5/fpm/pool.d/ (while generating the VirtualHost config file properly), so it will always fail with error messages like this in /var/log/apache2/error.log, using your FQDN server name:

    [Wed Feb 20 19:57:13 2013] [error] [client 91.15.26.18] (2)No such file or directory: FastCGI: failed to connect to server "/var/www/hostname.example.com.fpm.external": connect() failed
    [Wed Feb 20 19:57:13 2013] [error] [client 91.15.26.18] FastCGI: incomplete headers (0 bytes) received from server "/var/www/hostname.example.com.de.fpm.external"

    Note that the file /var/www/hostname.example.com.fpm.external is indeed missing, but that is not the problem: the equivalent file is missing for working websites as well (the docs say "The filename does not have to exist in the local filesystem.").

  2. The first, simplest test is to choose a website and place a small script (called userinfo.php or similar) in it with just this content: <?php system('id'); ?>. When you call it in your web browser, it should generate output that points to the user and group used in the SuexecUserGroup directive of that site's VirtualHost config (see the sketch after this list). Note that php-fpm, as configured by Froxlor, does not execute a script as its file owner, unlike mod_suphp.

  3. Then proceed to test a full website (keeping all other sites temporarily disabled by moving the configs out of /etc/apache2/sites-enabled/). Do not choose a phpMyAdmin or WordPress site for your first testing site however, as there can be special problems to be dealt with.
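
For the test in item 2, a quick way to create the script and read its output looks like this (the customer and domain names here are hypothetical placeholders):

    # Drop the test script into a (hypothetical) customer docroot
    echo '<?php system("id"); ?>' > /var/customers/webs/customername/example.com/userinfo.php
    # Browsing to http://example.com/userinfo.php should then print something like
    #   uid=10006(customername) gid=10006(customername) groups=10006(customername)
    # i.e. the user/group from the site's SuexecUserGroup directive, not the file's owner.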

Fixing other issues

  • "There is no fastcgi wrapper set." When restarting Apache2, you might see messages like "[warn] FastCGI: there is no fastcgi wrapper set, user/group options are ignored". These can be ignored because Froxlor uses suexec to adapt the user and group of the server process, not the php-fpm internal mechanisms. See the system('id'); test above which proves this.
  • Adding directories to open_basedir. When using Froxlor with Apache and mod_php5, you could add site-specific values to open_basedir. With PHP-FPM this is no longer possible, because site-specific values are now stored in the /etc/php5/fpm/pool.d/*.conf files, which are overwritten whenever Froxlor regenerates its config files, and there is seemingly no option to extend them from within Froxlor. One might edit the affected .conf files and make them non-writable for the Froxlor user, but that will create hard-to-track problems later. It is cleaner to add all directories required by any single site globally, via "Server -> Settings -> Web Server Settings -> Configuration", where you will find an option to append paths to the open_basedir setting of all your virtual hosts (see the sketch after this list for what the generated pool files look like).
  • Installing Roundcube from the Debian package. You will have to add some paths to the global open_basedir setting as described just above. This includes /etc/roundcube. However, it seems that Froxlor 0.9.27 silently discards any directory in /etc/ that you try to add to open_basedir via "Server -> Settings -> Web Server Settings -> Configuration". Seems to be an undocumented "security feature" 😀 Normally you could work around that bug by overriding open_basedir per vhost, but PHP-FPM does not interpret that, which is why the global open_basedir setting has to be modified. The best solution I found was the following (with some luck, Debian package management will not complain, because we only swap which path is the symlink and which is the real directory):
    rm /var/lib/roundcube/config/*
    mv /etc/roundcube/* /var/lib/roundcube/config/
    rmdir /etc/roundcube
    cd /etc
    ln -s /var/lib/roundcube/config/ roundcube
  • Restarting PHP-FPM. This can be required after manual changes to its config files in /etc/php5/fpm/pool.d/. The simplest way is: service php5-fpm restart.
  • Enabling the IP-and-port site. By default, Froxlor will not generate a /etc/php5/fpm/pool.d/*.conf file for the "IP and port" website, so it will not be served by php-fpm, resulting in "Server Error 500". This behavior is controlled by the Froxlor option to use PHP-FPM "for Froxlor itself" (quite a misnomer, but accurate: the "IP and port" site configuration makes the same assumption that Froxlor is normally served via that site, where it says "User defined docroot (empty = point to froxlor)"). So the solution goes like this:
    1. Enable that option to use PHP-FPM for Froxlor itself.
    2. Note that the additional options on the same page, to change the user and group names to be used for the Froxlor vhost with PHP-FPM, have no effect on the generated VirtualHost config (which seems to be a Froxlor bug). So better leave them as they are, at "froxlorlocal".
    3. Let Froxlor recreate the config files:
      php /var/www/froxlor/scripts/froxlor_master_cronjob.php --force
    4. Ensure that there is now a config file in /etc/php5/fpm/pool.d/ named by the FQDN of your host.
    5. Restart Apache2: service apache2 restart
    6. Restart PHP-FPM: service php5-fpm restart
    7. Call up your IP address in your browser and see if it works.
  • Fixing WordPress sites that use URL rewriting. When re-enabling these sites, they will probably fail with this error message in the log: "Request exceeded the limit of 10 internal redirects due to probable configuration error." The solution is to adapt the WordPress .htaccess file so that the rewrite rules look like this [source]:
    # BEGIN WordPress
    <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteCond %{REQUEST_URI} !^/fastcgiphp/*
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . index.php [L]
    </IfModule>
    # END WordPress
  • Fixing Indefero sites that use URL rewriting. Indefero is a simple, Google Code-like, open source code and project hosting application that uses git. When trying to serve it via PHP-FPM, it will show "Server Error 500", and the log will contain "Request exceeded the limit of 10 internal redirects due to probable configuration error." Analogous to the WordPress solution above, simply add a RewriteCond %{REQUEST_URI} !^/fastcgiphp/* to its .htaccess file to prevent the redirect loop. The .htaccess will then look like this:
    Options +FollowSymLinks
    <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteCond %{REQUEST_URI} !^/fastcgiphp/*
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*) /index.php/$1
    </IfModule>
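
For reference regarding the open_basedir handling mentioned above: the globally appended paths end up in the per-site pool files that Froxlor generates. A rough way to inspect this (file name and contents shown here are hypothetical; the exact directives Froxlor writes may differ):

    # Inspect the pool file Froxlor generated for a site
    cat /etc/php5/fpm/pool.d/example.com.conf
    # Expect, among other directives, lines roughly like:
    #   [example.com]
    #   user = customername
    #   group = customername
    #   php_admin_value[open_basedir] = /var/customers/webs/customername/:/tmp/:/usr/share/php/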

Problem. Installing a PHP 5.4 package and a MySQL 5.5 package, both from Dotdeb for Debian 6.0 "Squeeze", results in PHP interfacing with MySQL via a client library from the MySQL 5.1 series (here, version 5.1.66). That in turn causes phpMyAdmin to complain after login: "Your PHP MySQL library version 5.1.66 differs from your MySQL server version 5.5.29. This may cause unpredictable behavior." Is this a problem, and how can it be fixed?

Analysis and solutions. This is not a real problem, just a cosmetic one. A lower client library version remains compatible when talking to a higher server version; only the reverse combination would break things. At least that is the reasoning behind the packager's choice of this dependency [source].

Specifically, the package php5-mysql depends on libmysqlclient16 (>= 5.1.21-1), which depends on mysql-common (>= 5.1.66-0+squeeze1). Taken together, this means that the mysqli.so library for PHP5, as provided by Dotdeb, claims to work fine with a MySQL 5.5 server.

This was determined from the output of dpkg -s php5-mysql and dpkg -s libmysqlclient16. For an overview of all your installed MySQL-related packages and their versions, use dpkg -l "*mysql*". Finally, to find out the MySQL client library version from within PHP, see here.
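
One quick way to do that check from the command line is via the mysqli extension (a minimal sketch; the article linked above may describe a different method):

    # Print the client library version that PHP's mysqli extension reports
    php -r 'echo mysqli_get_client_info(), PHP_EOL;'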

So it is only PHP software (phpMyAdmin) that complains about this. The only clean way to remove the warning would be to install the MySQL 5.1 server instead. But like every downgrade, that is not trivial (you have to back up your databases and may need to re-import them from SQL files [source]). And as said, it is seemingly just a cosmetic problem; all software has run fine so far.

The other option to fix this would be to use php5-mysqlnd instead of php5-mysql. However, that is seemingly not possible when using phpMyAdmin from the Debian Squeeze packages (as we do), since that package depends only on php5-mysql and apparently lacks an alternative dependency on php5-mysqlnd [source].

This is not an easy question, and there is a lot of discussion about it on the web. I will just summarize what I found, give my personal evaluation of it, and name some sources.

The selection criteria here are primarily security and ease of use, within the performance and memory limits of a medium-sized VPS used for shared webhosting.

The different alternatives

php-fpm sounds like the best overall solution for both speed and memory usage (except on systems with very limited RAM). It is a FastCGI implementation with improvements over the older mod_fastcgi and mod_fcgi implementations that let it adapt better to servers with limited RAM (by using its "dynamic mode", for example). See the php-fpm website. For installation, there are installation instructions for Apache (more of these) on Debian, which also explain how the Froxlor server admin panel accesses php-fpm (as it generates the VirtualHost configs accordingly). There are also slightly more advanced installation instructions for Apache and lighttpd.

php-fpm does not allow php_admin_value open_basedir directives in the per-site VirtualHost configs (Froxlor will not generate these directives in VirtualHost sections when PHP-FPM is enabled, and adding them there manually prevents Apache from starting). However, it allows open_basedir directives in its own per-site configuration files, to which globally valid paths can be appended. The instructions for using this feature are in the article about PHP-FPM with Froxlor.

mod_suphp seems to be the best alternative for a shared webhost with very limited RAM (1 GiB) and many, but rather low-traffic, sites. Its disadvantage is the slow speed of CGI (a separate process has to be started for every PHP request), but that only becomes a problem when CPU resources no longer suffice for the process-startup overhead. On the plus side, it is memory efficient. See the installation instructions. If it does not work out well (too slow, too high CPU load), you can try php-fpm or mod_fcgi instead, with memory-limiting tweaks such as reducing the idle process runtime.

However, the problem with mod_suphp is that it does not allow php_admin_value directives per VirtualHost section. This can be worked around by using per-host php.ini files instead [source], except when using a server admin panel that does not support them, like Froxlor, regardless of which webserver you use with it. (This feature is only provided by a bleeding-edge patch for Froxlor 0.9.28-svn5, which even generates these per-site php.ini files with the proper open_basedir settings [source]. Using that is not a good idea, as the future development of Froxlor is unclear at best as of 2013-02, meaning you might get permanently stuck on an untested development version of your server panel.)

So the only option is to configure open_basedir globally in the corresponding php.ini config file. But then all website paths (or their common root path) would be listed there, which almost completely annihilates the benefit of open_basedir for shared hosting, namely the protection against cross-infection with worms between sites of different customers. This means that currently, Froxlor prohibits a meaningful use of mod_suphp and mod_fcgi, unless you want to put in your own development effort. Using one of these handlers alone, without open_basedir, provides no proper protection, as users will set mode 0666 on their files by mistake or out of inexperience [source]; open_basedir alone is then even better protection than one of the uid-changing handlers alone. So currently, with Froxlor 0.9.27, we are prevented from using mod_suphp for security reasons. Use php-fpm instead, or switch to mod_ruid2 or mpm-itk after hardening the kernel with grsec.

An added problem specific to mod_suphp and Froxlor is that, since Froxlor does not support mod_suphp natively, it will still assume that mod_php5 is in use and allow users to enable their open_basedir sections (unlike when enabling php-fpm in Froxlor, for example). This will then cause Apache to fail on the next restart, because the generated php_admin_value directives end up outside proper <IfModule> sections … quite a nightmare scenario.

mod_fcgi and mod_fastcgi (implementing FastCGI), on the other hand, also have the "execute with the user ID of the script owner" mechanism, but they need a lot of RAM once there are more than a few websites (50 MiB permanently per process, and each process can handle only one request at a time for one website).

mod_php with mod_ruid2 is nice (the security of mod_suphp plus the speed of mod_php) but, should somebody find certain apache2 vulnerabilities, it is a security problem itself, as it would allow an attacker to setuid to root from apache2 and the scripts running inside it (as with mod_php5). So it is only recommended with a hardened kernel using grsecurity or similar [source; this is even recommended in the official project documentation], and thus nothing for simple and quick setups.

mpm-itk. This provides the same "execute as the file's user" security as the above alternatives (except mod_php alone of course). But it's not a module, instead part of the Apache binary. It also does not use the CGI model like mod_suphp, making it much faster than that. And also faster than FastCGI [benchmarks]. Indeed, it should be nearly as fast as mod_php alone. Also, it's very simple to configure with just three directives, and in contrast to mod_suphp it allows the php_admin_value directives. See some installation instructions. However, the big caveat is the same as with mod_ruid2: Apache2 runs as root until after header processing, when it can switch user IDs, so a potential exploit happening before might give root access to the system immediately [source, at "Quirks and Warnings"]. For that reason, only use it with a hardened kernel, just as with mod_ruid2 (see there).

mod_php alone (or rather mod_php5 nowadays) provides insufficient isolation of customer sites from each other: all files have to be readable (and upload directories even writable) by the webserver user, which in practice means world-readable and world-writable because of the way Froxlor does its user account management.

More alternatives. There is a nice Apache wiki article on Privilege Separation with some good ideas and background information. However, it provided no additional practical solution in this context.

Results (esp. together with Froxlor)

For a setup that is secure but also simple to create and maintain on a shared webhost with a moderate number of medium-traffic sites, I would use php-fpm. The same goes if there are just one or a few high-traffic sites. If there is instead a really large number of low-traffic sites, I would use mod_suphp.

If the solution has to be deployed together with the server management panel Froxlor (version 0.9.27 currently), php-fpm is also a good solution (instructions). mod_suphp however is not (see the discussion about open_basedir problems in the mod_suphp section above).

Sources

In addition to the individual sources already linked to, a number of further documents (mostly forum discussions) were consulted when writing this article.

Problem

This description applies to Froxlor 0.9.27. I reported the issue as Froxlor issue #1159 ("Absolute document root path interpreted as relative to customer folder"), but since their issue reporting system is down as of 2013-02-18, I am publishing it here together with the workaround, as well as I can remember it.

How to reproduce this problem:

  1. Create domain with absolute docroot. As Froxlor admin, create a new domain entry in Froxlor as a "main domain". Adjust the document root setting of that new domain by using an absolute path rather than one relative to the customer directory. For example, I had to use /usr/share/phpmyadmin/ for an installation of phpMyAdmin.
  2. Re-create configs. For that, do one of these:
    1. Execute this via SSH on the server:
      php /var/www/froxlor/scripts/froxlor_master_cronjob.php --force
    2. Wait until the next Froxlor cron job runs, which will also rewrite all vhost configs that need changes. (Clicking "Server -> Re-create configs" in Froxlor won't speed that up, it just queues additional config files for re-creation.)
  3. Restart Apache. The config changes will not always be picked up automatically, so:
    service apache2 restart
  4. Test. The domain should have been created correctly, using your provided document root directory. Froxlor should not have created directories below the customer's directory.
  5. Edit as customer. Edit the settings of the domain as the customer who owns it. (To switch to the customer's account as Froxlor admin, go to "Resources -> Domains" and click the appropriate link in the "Customer" column. A new window with a concurrent login opens. There, navigate to "Domains -> Settings" and click the "Edit" icon for your domain.) Just change something unimportant and save the settings. Do not change the document root path!
  6. Re-create configs, restart Apache. See steps 2 and 3.

As a result of this process, the domain you edited will no longer work, showing only the standard Froxlor "under construction" page. Froxlor has interpreted the absolute document root directory as relative to the customer's directory and has created a corresponding directory hierarchy there, including the "under construction" page. This happens only when editing a domain configuration as a customer, not when doing so as the Froxlor admin. It only has a negative effect for domains that require an absolute document root path, as the relative interpretation works just fine for the standard directory names generated by Froxlor.

Fix

Edit your domain settings as Froxlor admin and remove the customer directory path from the front of your document root path. Then re-create the configs and restart Apache as shown above. Finally remove the nonsense directories created by Froxlor because of this bug.
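
Removing those leftover directories could look like this, using the phpMyAdmin example from above (the customer name and path are hypothetical; double-check the contents before deleting anything):

    # The bug re-creates the absolute docroot below the customer directory,
    # e.g. /var/customers/webs/customername/usr/share/phpmyadmin/ in the example above.
    # Delete only that mistakenly created copy, interactively to be safe:
    rm -ri /var/customers/webs/customername/usr/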

This fix only works for Froxlor main domains, as subdomains cannot be edited by Froxlor admins.

Workaround for prevention

Simply do not edit domains that require absolute document root paths with a Froxlor customer account; use a Froxlor admin account instead. Of course this implies that, with the current Froxlor 0.9.27, you cannot let your customers have domains with absolute document root paths, only ones with paths inside the customer directories.

For so-called "Froxlor main domains" (which can also be subdomains!), editing as admin is possible without restrictions. (The only setting not visible when editing them as admin is "Redirect code", and that one makes no sense together with an absolute document root path as it requires the document root field to contain a URL.)

For Froxlor subdomains, however, editing as admin is not possible at all; they do not appear under "Resources -> Domains" when logged in as Froxlor admin. This means that subdomains requiring absolute document root paths must be created as Froxlor main domains so they can be edited as admin. Otherwise you are forced to change the Apache config files manually, and your changes will be lost whenever Froxlor re-generates these config files.

In this post, I want to show a solution that helps to quickly install your set of desired open source Android apps from F-Droid by installing them with adb. It also works with Google Play, but you have to download the apps as .apk files first. This is not possible on Google Play directly, but it works for example with third-party services like downloader-apk.com. Be aware of the potential security implications, though.

So it would be possible to have a single script running on your computer that bulk-installs all your Android apps on your phone (see the sketch after the steps below). However, once you have installed your desired apps in any way, it is faster and more comfortable to use App2zip, App2zip Pro or ZIPme to create a .zip file with your apps that you can then install in recovery mode on any phone you want them on.

The process for unattended install of Android apps via adb works as follows:

  1. Enable USB debugging on the Android phone. This is needed for adb to work. [instructions]
  2. Install Google Play. We install a minimized version of Google Apps here that contains just Google Play and required libraries. You can install everything else from Google Apps via the Google Play Store later. [TODO: Minimize this further by installing just the three essential apps, saving 70 more MiB].
    1. Download the minimized Google Apps package from  "[APP][MINIMALISM] Google Play 3.10.10 | Market ONLY Gapps for GB/ICS/JB4.1/JB4.2".
    2. Push it to the phone's SD card:
      adb push jb42-signed.zip /sdcard/jb42-signed.zip
    3. Reboot into your favorite recovery:
      adb reboot recovery
    4. Install the ZIP file from the SD card with your recovery software.
  3. Install APKs. For every APK, simply call [see adb command line arguments]:
    adb install filename.apk
    Due to permissions issues, on some Ubuntu host systems you will have to do:
    sudo adb install filename.apk
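
Putting step 3 into the single script mentioned at the beginning, a minimal bulk-install sketch could look like this (it assumes all desired .apk files have been collected into a local directory named apks/, which is an arbitrary choice):

    # Install every APK in the apks/ directory onto the connected phone
    for apk in apks/*.apk; do
      adb install "$apk"   # prefix with sudo if your host needs it, see step 3
    done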

This task refers to the FolderSync app version 2.5.4 (the paid version), though it may work with the lite version too.

It seems that this is not an intended feature currently, but it is possible. Instructions:

  1. Enable USB tethering on your Android phone.
  2. Connect the computer to the USB network connection (while disconnecting it from wifi to make sure that the USB connection is indeed used in this test).
  3. Look up the computer's IP address for this connection (see the sketch after this list), and set up an account (here, SFTP) in FolderSync accordingly.
  4. Create a folderpair in FolderSync, and make sure that you check "Use Wifi" for the connection to use.
  5. Make sure wifi is enabled on the Android phone and connected to some network. (Sadly, the folder-syncing-via-USB hack only works when the phone is indeed connected to a wireless network, even though the actual data goes via USB of course. I guess "connected" means "wlan0 has an IP address", not that an actual Internet connection is needed. So if you can find a way to set up a static wifi connection to a base station with an invented name (one that is not in range, of course), that should be sufficient to make FolderSync work.)
  6. Open the folderpair entry in FolderSync and click the "arrows in circle" button to trigger immediate syncing. Should succeed now.
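
For step 3, on a typical Linux desktop the USB tethering connection shows up as an additional network interface; the interface name varies (usb0 below is just a common example), so check with ip link first:

    # List all interfaces, then show the address of the tethering one
    ip link
    ip addr show usb0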

With wifi disabled (or enabled but not connected to any wifi network) on the Android phone, the following error message will appear: "Folderpair not synced because syncing is not configured for current network type or network is not available".

The idea is that files saved in some folder on Android are automatically transferred to a folder on your Linux-based desktop computer, and vice versa. This should happen locally, without a "cloud storage" somewhere on the Internet.

For now, the best solution for automatic syncing is the commercial, closed-source app FolderSync as recommended by LinuxJournal.

I tried a bunch of alternative solutions, but they would not work as intended:

SparkleShare Android app

The SparkleShare for Android app [source code here] only allows downloading files from a SparkleShare repository, not uploading them. Also, the downloading has to be triggered manually.

Installation:

  1. Install git-daemon on the computer that should run the SparkleShare server. (This is maybe not needed, as a SparkleShare server is nothing else than a git server, and it is set up below.)
  2. Install the SparkleShare server on your PC or on a web server where you have sufficient rights, according to these instructions.
  3. Install sparkleshare-dashboard according to these instructions, on the computer that also runs the SparkleShare server. (In the step where the redis server is started, use this command on Ubuntu: redis-server /etc/redis/redis.conf.)
  4. Install SparkleShare for Android on your phone.
  5. Install SparkleShare itself, see Install SparkleShare 1.0 In Ubuntu (Dropbox-Like File Synchronization Tool). Even if you installed the SparkleShare server on your own local PC, you cannot directly put the files into the server's directory to be synced: it only syncs files in directories that are watched, and the watching is done by the SparkleShare client.
  6. Configure this all.
    • When trying to start sparkleshare-dashboard, it may complain about not finding some nodejs module. In its directory, call "npm link <modulename>" for every module it complains about, to solve this [source].
    • Note that the SparkleShare client only works with its own generated SSH key. It will give you the SSH pubkey as a "unique link code". Put it into the ~/.ssh/authorized_keys file of the user running the SparkleShare server, on the SparkleShare server's host (see the sketch after this list). This gives the SparkleShare desktop client access, while the Android client gets access via the "device pairing" function of sparkleshare-dashboard.
    • Note that the link code required by SparkleShare for Android is NOT the SSH pubkey handed out by the SparkleShare desktop client as "unique link code", but instead the 10-letter or so code that appears as text and QR code when using the "device pairing" function of sparkleshare-dashboard.
    • When you can see folders in the sparkleshare-dashboard web frontend, you should be able to see them on a paired Android device, too. But I was only able to get to this point by making the SparkleShare repos "public" in config.js; this is of course not the correct way if you want private data syncing, but a good first step when configuring it all:
      exports.folders = [
        { type: 'git', name: 'Private GIT folder', path: '/home/storage/sparkleshare-data-local', pub: true }
      ];
    • As said, I was not yet able to get the access permissions of the sparkleshare-dashboard app right for accessing SparkleShare repositories. I do not even know whether the user created for sparkleshare-dashboard needs to correspond to any other user (the SparkleShare server user maybe? probably not, as that one was set up without a password).
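
For the authorized_keys step above, appending the client's pubkey could look like this (the user name "storage" matches the path used in the config.js example, and the key string is a placeholder for the actual "unique link code" shown by the SparkleShare client):

    # Append the SparkleShare client's public key to the server user's authorized_keys
    echo 'ssh-rsa AAAA... SparkleShare' >> /home/storage/.ssh/authorized_keys
    chmod 600 /home/storage/.ssh/authorized_keys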

Gidder to host a SparkleShare repo

Gidder is a full-fledged Android git server. In principle, any git server could be used to host a SparkleShare repository, which would make this a very lightweight, nice solution without the sparkleshare-dashboard, nodejs, redis database etc. needed for the SparkleShare for Android app (most of which have to be installed outside the package system …). However, Gidder cannot be used with SparkleShare, as it supports only password authentication and no SSH key authentication [source], while SparkleShare supports only SSH key authentication and no password authentication.

And in any case, this solution would still need "git commit" and "git push" actions on the Android device to get file changes to the desktop computer, while we want something automatic here.

dvcs-autosync or git-auto-sync

See the dvcs-autosync website and git-auto-sync website. These run on the Linux host and can auto-push to a git repo, probably including one hosted by the Gidder Android-based git server (I did not test this). However, the problem remains that on the Android side, no auto-syncing to a repository seems possible.