Problem. Installing a PHP 5.4 package and a MySQL 5.5 package, both from Dotdeb for Debian 6.0 "Squeeze", results in PHP interfacing with MySQL via a client library of version 5.1.66 (or another one from the MySQL 5.1 series). That in turn causes phpMyAdmin to complain after logging in: "Your PHP MySQL library version 5.1.66 differs from your MySQL server version 5.5.29. This may cause unpredictable behavior." Is this a problem, and how can it be fixed?

Analysis and solutions. This is not a real problem, just a cosmetic one. The client library version is lower than the server version, and a lower client remains compatible when calling a higher server version; only the other way round would break things. At least that seems to be the reasoning with which the author packaged it with this dependency [source].

Specifically, package php5-mysql depends on libmysqlclient16 (>= 5.1.21-1), which depends on mysql-common (>= 5.1.66-0+squeeze1). In total this means that the mysqli.so library for PHP5, as provided by Dotdeb, claims to be OK for working with a MySQL 5.5 server.

This was found out from the output of dpkg -s php5-mysql and dpkg -s libmysqlclient16. For an overview of all your installed MySQL-related packages and their versions, use dpkg -l "*mysql*". Finally, to find out the MySQL client library version from within PHP, see here.
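For reference, a minimal sketch of how to query the client library version from within PHP (assuming the mysqli extension is loaded) looks like this:

  <?php
    // Prints the MySQL client library version PHP is linked against,
    // e.g. "5.1.66", or a "mysqlnd ..." string when mysqlnd is used.
    echo mysqli_get_client_info() . "\n";
  ?>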

So it is only PHP software (phpMyAdmin) which complains about this. The only clean way to remove this warning would be to install the MySQL 5.1 server instead. But like every downgrade, that is not too easy (you have to back up your databases and may need to re-import them from SQL files [source]). And as said, it seems to be just a cosmetic problem; all software runs fine so far.

The other option to fix this would be to use php5-mysqlnd instead of php5-mysql. However, that is seemingly not possible when using phpMyAdmin from Debian Squeeze packages (like we do), which depends on php5-mysql only and seemingly forgot to include an alternative dependency on php5-mysqlnd [source].

Not an easy question, and there is a lot of discussion about it on the web. I will just summarize what I found, give my personal evaluation of it, and name some sources.

The selection criteria here are especially security and ease of use, within the boundaries of performance and memory usage (here, of a medium-sized VPS host for shared webhosting).

The different alternatives

php-fpm sounds like the best overall solution both for speed and memory usage (except on systems with very limited RAM). It is a FastCGI implementation with improvements over the older mod_fastcgi and mod_fcgid modules that let it adapt better to servers with limited RAM (by using its "dynamic mode", for example). See the php-fpm website. For installation, there are installation instructions for Apache (more of these) on Debian, which also explain how the Froxlor server admin panel accesses php-fpm (since Froxlor generates the VirtualHost configs accordingly). But there are also slightly more advanced installation instructions for Apache and lighttpd.

php-fpm will not allow php_admin_value open_basedir directives in the per-site VirtualHost configs (Froxlor will not generate these directives in VirtualHost sections when PHP-FPM is enabled, and when adding them manually there, Apache will not start). However, it allows open_basedir directives in its own per-site configuration files, plus globally valid additions per user. The instructions for using this feature are in the article about PHP-FPM with Froxlor.
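To illustrate, a per-site pool definition could look roughly as follows (a sketch only; the pool name, socket path and directory layout are made-up examples, not Froxlor's actual output):

  ; /etc/php5/fpm/pool.d/example.com.conf (illustrative)
  [example.com]
  user = web1
  group = web1
  listen = /var/run/php5-fpm-example.com.sock

  ; "dynamic" mode adapts the number of worker processes to the load,
  ; which is what helps on hosts with limited RAM.
  pm = dynamic
  pm.max_children = 5
  pm.start_servers = 2
  pm.min_spare_servers = 1
  pm.max_spare_servers = 3

  ; open_basedir goes here instead of into the Apache VirtualHost section.
  php_admin_value[open_basedir] = /var/customers/webs/web1/:/tmp/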

mod_suphp seems to be the best alternative for a shared webhost with very limited RAM (1 GiB) and many but rather low-traffic sites. Its disadvantage is the slow speed of CGI (because a separate process has to be started for every PHP request), but that only becomes a problem when CPU resources are no longer sufficient for the overhead of process starting. On the plus side, it is memory efficient. See the installation instructions. If it does not work out well (too slow / too high CPU load), you can try php-fpm or mod_fcgid instead, with memory-limiting tweaks such as reducing idle process runtime.

However, the problem with mod_suphp is that it does not allow php_admin_value directives per VirtualHost section. This can be worked around by using per-host php.ini files instead [source], except when using a server admin panel that does not support them, like Froxlor, independent of the webserver you use with it. (This feature is only provided by a bleeding-edge patch for Froxlor 0.9.28-svn5, which even generates these per-site php.ini files and the proper open_basedir settings [source]. Using that is not a good idea, as the future development of Froxlor is unsure at best as of 2013-02, meaning one might get permanently stuck using an untested development version of one's server panel (!).)
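For reference, the per-host php.ini workaround means pointing mod_suphp to a per-site php.ini directory in each VirtualHost via its suPHP_ConfigPath directive (a sketch; the paths are made-up examples):

  <VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/customers/webs/web1/example.com/

    # mod_suphp reads php.ini from this directory, so open_basedir can be
    # set per site in /etc/php5/suphp/example.com/php.ini
    suPHP_ConfigPath /etc/php5/suphp/example.com/
  </VirtualHost>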

So the only option is to configure open_basedir globally in the corresponding php.ini config file. But then, all website paths (or their common root path) would be listed there, which nearly completely annihilates the benefit of open_basedir for shared hosting, namely the protection against cross-infection with worms between sites of different customers. This means that currently, Froxlor prohibits a meaningful use of mod_suphp and mod_fcgid, unless you want to put in your own development effort, of course. Because: using one of these handlers alone, without open_basedir, provides no proper protection, as users will use mode 0666 for their files by mistake or out of inexperience [source]. And open_basedir alone is even better protection than one of the uid-changing handlers alone. Which means that currently, with Froxlor 0.9.27, we are prohibited security-wise from using mod_suphp. Use php-fpm instead, or switch to mod_ruid2 or mpm-itk after hardening the kernel with grsec.

An added problem specific to mod_suphp and Froxlor is that, as Froxlor does not support mod_suphp natively, it will still assume that mod_php5 is in use and allow users to enable their open_basedir sections (unlike when enabling php-fpm in Froxlor, for example). This will then cause Apache to fail on the next restart because of php_admin_value directives generated outside of proper <IfModule> sections … quite a nightmare scenario.

mod_fcgid and mod_fastcgi (implementing FastCGI) on the other hand also have the "execute with user ID of script owner" mechanism, but need a lot of RAM when having more than a few websites (50 MiB permanently for a single process, which can handle only one request in parallel for one website).

mod_php with mod_ruid2 is nice (security of mod_suphp plus speed of mod_php) but, in case somebody finds certain apache2 vulnerabilities, is a security problem itself, as it would allow people to setuid to root with apache2 and the scripts running inside it (just as with mod_php5). So it is only recommended with a kernel hardened by grsecurity or similar [source, even recommended in the official project documentation]; nothing for simple and quick setups.

mpm-itk. This provides the same "execute as the file's user" security as the above alternatives (except mod_php alone of course). But it's not a module, instead part of the Apache binary. It also does not use the CGI model like mod_suphp, making it much faster than that. And also faster than FastCGI [benchmarks]. Indeed, it should be nearly as fast as mod_php alone. Also, it's very simple to configure with just three directives, and in contrast to mod_suphp it allows the php_admin_value directives. See some installation instructions. However, the big caveat is the same as with mod_ruid2: Apache2 runs as root until after header processing, when it can switch user IDs, so a potential exploit happening before might give root access to the system immediately [source, at "Quirks and Warnings"]. For that reason, only use it with a hardened kernel, just as with mod_ruid2 (see there).
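A minimal sketch of the per-VirtualHost part of that configuration (user and group names are examples, not prescribed by mpm-itk):

  <VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/customers/webs/web1/example.com/

    # mpm-itk: serve all requests of this vhost as the site owner's user and group
    AssignUserID web1 web1
  </VirtualHost>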

mod_php alone (or rather mod_php5 nowadays) provides insufficient isolation of customer sites against each other, as all files are readable (and upload directories even writable) by the webserver user, which in practice means world-readable and world-writable because of the way Froxlor does its user account management.

More alternatives. There is a nice Apache wiki article on Privilege Separation with some good ideas and background info. However, it provided no additional practical solution in this context.

Results (esp. together with Froxlor)

For a secure but also simple-to-set-up and simple-to-maintain configuration on a shared webhost with not too many medium-traffic sites, I would use php-fpm. The same if there are just one or a few high-traffic sites. If there is instead a really large number of low-traffic sites, I would use mod_suphp instead.

If the solution has to be deployed together with the server management panel Froxlor (version 0.9.27 currently), php-fpm is also a good solution (instructions). mod_suphp however is not (see the discussion about open_basedir problems in the mod_suphp section above).

Sources

In addition to the individual sources already linked to, the following documents (mostly forum discussions) were consulted when writing this article:

Problem

This description applies to Froxlor 0.9.27. The issue was reported by me as Froxlor issue #1159 ("Absolute document root path interpreted as relative to customer folder"), but since their issue reporting system is down as of 2013-02-18, I publish it here with the workaround, as well as I can remember it.

How to reproduce this problem:

  1. Create domain with absolute docroot. As Froxlor admin, create a new domain entry in Froxlor as a "main domain". Adjust the document root setting of that new domain by using an absolute path rather than one relative to the customer directory. For example, I had to use /usr/share/phpmyadmin/ for an installation of phpMyAdmin.
  2. Re-create configs. For that, do one of these:
    1. Execute this via SSH on the server:
      php /var/www/froxlor/scripts/froxlor_master_cronjob.php --force
    2. Wait until the next Froxlor cron job runs, which will also rewrite all vhost configs that need changes. (Clicking "Server -> Re-create configs" in Froxlor won't speed that up, it just queues additional config files for re-creation.)
  3. Restart Apache, because the config changes will not be picked up automatically in all cases. So:
    service apache2 restart
  4. Test. The domain should have been created correctly, using your provided document root directory. Froxlor should not have created directories below the customer's directory.
  5. Edit as customer. Edit the settings of the domain as the customer who owns this domain. (To switch to the customer's account as Froxlor admin, go to "Resources -> Domains" and click on the appropriate link in the "Customer" column. A new window with a concurrent login opens. There, navigate to "Domains -> Settings" and click the "Edit" icon for your domain.) Just change something unimportant and save the settings. Do not change the document root path!
  6. Re-create configs, restart Apache. See steps 2 and 3.

As a result of this process, the domain you edited will no longer work, just showing the standard Froxlor "under construction" page. Froxlor has interpreted the absolute document root directory as relative to the customer's directory, and has created a corresponding directory hierarchy there, including the "under construction" page. This happens only when editing a domain configuration as a customer, not when doing so as admin. It only has a negative effect for domains that require an absolute document root path, as the relative interpretation works just fine for the standard directory names generated by Froxlor.

Fix

Edit your domain settings as Froxlor admin and remove the customer directory path from the front of your document root path. Then re-create the configs and restart Apache as shown above. Finally remove the nonsense directories created by Froxlor because of this bug.

This fix only works for Froxlor main domains, as subdomains cannot be edited by Froxlor admins.

Workaround for prevention

Simply do not edit domains that require absolute document root paths with a Froxlor customer account. Use a Froxlor admin account. Of course this implies that with the current Froxlor 0.9.27 you cannot let your customers have domains with absolute document root paths, only with paths inside the customer directories.

For so-called "Froxlor main domains" (which can also be subdomains!), editing as admin is possible without restrictions. (The only setting not visible when editing them as admin is "Redirect code", and that one makes no sense together with an absolute document root path as it requires the document root field to contain a URL.)

For Froxlor subdomains however, editing as admin is not possible at all. They do not appear in "Resources -> Domains" when logged in as Froxlor admin. Which means that subdomains requiring absolute document root paths must be created as Froxlor main domains in order to edit them as admin. If not, you are forced to change the Apache config files manually, and your changes will be lost whenever Froxlor re-generates these config files.

In this post, I want to show a solution that helps to quickly install your set of desired open source Android apps from F-Droid by installing them with adb. It also works with apps from Google Play, but you have to download them as .apk files first. This is not possible on Google Play directly, but works for example with third-party services like downloader-apk.com. Be aware of the potential security implications, though.

So it would be possible to have a single script running on your computer that bulk-installs all your Android apps on your phone (see the sketch after the step list below). However, once you have installed your desired apps in any way, it is faster and more comfortable to use App2zip, App2zip Pro or ZIPme to create a .zip file with your apps that you can then install in recovery mode on any phone you want them on.

The process for unattended install of Android apps via adb works as follows:

  1. Enable USB debugging on the Android phone. This is needed for adb to work. [instructions]
  2. Install Google Play. We install a minimized version of Google Apps here that contains just Google Play and required libraries. You can install everything else from Google Apps via the Google Play Store later. [TODO: Minimize this further by installing just the three essential apps, saving 70 more MiB].
    1. Download the minimized Google Apps package from  "[APP][MINIMALISM] Google Play 3.10.10 | Market ONLY Gapps for GB/ICS/JB4.1/JB4.2".
    2. Push it to the phone's SD card:
      adb push jb42-signed.zip /sdcard/jb42-signed.zip
    3. Reboot into your favorite recovery:
      adb reboot recovery
    4. Install the ZIP file from the SD card with your recovery software.
  3. Install APKs. For every APK, simply call [see adb command line arguments]:
    adb install filename.apk
    Due to permissions issues, on some Ubuntu host systems you will have to do:
    sudo adb install filename.apk
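
Building on step 3, a minimal sketch of a bulk-install run over all .apk files in the current directory could look like this (prepend sudo to the adb call if your system needs it):

  for apk in *.apk; do
    adb install "$apk"
  done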

This task refers to the FolderSync app version 2.5.4 – the paid version, though it may work with the lite version too.

It seems that this is not an intended feature currently, but it is possible. Instructions:

  1. Enable USB tethering on your Android phone.
  2. Connect the computer to the USB network connection (while disconnecting it from wifi, to make sure that the USB connection is indeed used in this test).
  3. Look up the computer's IP for this connection (see the command sketch after this list), and set up an account (here, SFTP) in FolderSync accordingly.
  4. Create a folderpair in FolderSync, and make sure that you check "Use Wifi" for the connection to use.
  5. Make sure wifi is enabled on the Android phone and connected to some network. (Sadly, the folder-syncing-via-USB hack only works when the phone is indeed connected to a wireless network, even though the actual data goes via USB of course. I guess "connection" means "wlan0 has an IP address", not that an actual Internet connection would be needed. So if you can find a way to set up a static wifi connection to a base station with an invented name (that is not in range, of course), it should be sufficient to make FolderSync work.)
  6. Open the folderpair entry in FolderSync and click the "arrows in circle" button to trigger immediate syncing. Should succeed now.
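
For step 3, on a typical Linux computer the IP address of the USB network interface can be looked up like this (the interface name varies; usb0 is just a common example):

  ip addr show usb0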

With wifi disabled (or enabled but not connected to any wifi network) on the Android phone, the following error message will appear: "Folderpair not synced because syncing is not configured for current network type or network is not available".

The idea is that files saved in some folder on Android are automatically transferred to a folder on your Linux-based desktop computer, and vice versa. This should happen locally, without a "cloud storage" somewhere on the Internet.

For now, the best solution for automatic syncing is the commercial, closed-source app FolderSync as recommended by LinuxJournal.

 

I tried a bunch of alternative solutions, but they would not work as intended:

SparkleShare Android app

The SparkleShare for Android app [source code here] allows only downloading files from a SparkleShare repository, not uploading them. Also, the downloading has to happen manually.

Installation:

  1. Install git-daemon on the computer that should run the SparkleShare server. (This is maybe not needed, as a SparkleShare server is nothing else than a git server, and it is set up below.)
  2. Install the SparkleShare server on your PC or a web server where you have sufficient rights, according to these instructions.
  3. Install sparkleshare-dashboard according to these instructions, on the computer that also runs the SparkleShare server. (In the step to start the redis server, use this command on Ubuntu: redis-server /etc/redis/redis.conf.)
  4. Install SparkleShare for Android on your phone.
  5. Install SparkleShare, see Install SparkleShare 1.0 In Ubuntu (Dropbox-Like File Synchronization Tool). Because even if you installed the SparkleShare server on your own local PC, you cannot directly put the files to be synced into the server's directory. It will only sync files in directories that are watched, and that is done by the SparkleShare client.
  6. Configure this all.
    • When trying to start sparkleshare-dashboard, it may complain about not finding some nodejs module. In its directory, call "npm link <modulename>" for every module it complains about, to solve this [source].
    • Note that the SparkleShare client only works with its own generated SSH key. It will give you this SSH pubkey as the "unique link code". Put it into the ~/.ssh/authorized_keys file of the user running the SparkleShare server, on the SparkleShare server's host. This gives the SparkleShare desktop client access, while the Android client gets access via the "device pairing" function of sparkleshare-dashboard.
    • Note that the link code required by SparkleShare for Android is NOT the SSH pubkey handed out by the SparkleShare desktop client as "unique link code", but instead the 10-letter or so code that appears as text and QR code when using the "device pairing" function of sparkleshare-dashboard.
    • When you can see folders in the sparkleshare-dashboard web frontend, you should be able to see them in a paired Android device, too. But I only was able to get to this point by making the SparkleShare repos "public" in config.js; this is not the correct way of course if you want private data syncing, but a good first step when configuring it all:
      exports.folders = [
        { type: 'git', name: 'Private GIT folder', path: '/home/storage/sparkleshare-data-local', pub: true }
      ];
    • As said, I was not yet able to get the access permissions of the sparkleshare-dashboard app right for accessing SparkleShare repositories. I do not even know whether the user created for sparkleshare-dashboard needs to correspond to any other user (the SparkleShare server user maybe? probably not, as that one was set up without a password).

Gidder to host a SparkleShare repo

Gidder is a full-fledged Android git server. Normally, any git server could be used to host a SparkleShare repository, and this would make for a very lightweight, nice solution without the sparkleshare-dashboard, nodejs, redis database etc. needed by the SparkleShare for Android app (and requiring installation outside the package system …). However, Gidder cannot be used with SparkleShare, as it does not support SSH key authentication (only password authentication) [source], while SparkleShare supports only key authentication (and no password authentication).

And in any case, this solution would still need a "git commit" and "git push" action on the Android device to get file changes to the desktop computer. While here, we want something automatic.

dvcs-autosync or git-auto-sync

See the dvcs-autosync website and git-auto-sync website. Either runs on the Linux host and can auto-push to a git repo, probably including one hosted by the Gidder Android-based git server (I did not test this). However, the problem remains that on the Android side, no auto-syncing to a repository seems possible.

The problem: During Drupal admin work, for example after importing a bulk of posts from another source, you might want to assign many of them to a Drupal group (provided by the Organic Groups module). In the admin backend, there is no other way than editing every node individually and entering the group's name into the "Groups: Your groups" or "Groups: Other groups" fields. There has to be a faster way, though.

My solution with SQL

I'm going to solve this on the database level, as all the other options do not (yet) work for me, see below.

  1. Create a backup of your Drupal database.
  2. Save this little script into a new PHP file on your server:

    <?php
      // PHP Script to generate a set of SQL statements that assign a set of posts
      // to a group in Drupal 7.

      // Node ID of the group to assign the nodes below to.
      $group_id = "366";

      // Node IDs of the nodes to assign to group with node ID $group_id.
      // (One way to get the right set relatively fast is by copying the "System" column
      // from the "URL aliases" Drupal view at http://example.com/admin/config/search/path .)
      $node_ids = array(183, 357, 358, 360, 361);
     
      foreach ($node_ids as $node_id) {
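        // Note: the hardcoded "created" value below is a Unix timestamp (here from late
        // January 2013); you could also fill it in with PHP's time() when generating the SQL.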
        echo("INSERT
          INTO og_membership (type, etid, entity_type, gid, group_type, state, created, field_name, language)
          VALUES ('og_membership_type_default', $node_id, 'node', $group_id, 'node', 1, 1359419168, 'og_group_ref', 'en');\n\n"
        );
      }
    ?>

  3. Adapt the script to contain the proper group node ID and the proper node IDs for nodes you want to assign to this group.
  4. Execute the script with php generate-bulkassign-sql.php to generate the SQL statements.
  5. Run the generated SQL statements against your Drupal database, for example via phpMyAdmin or by piping them into the mysql client (see the sketch after this list).
  6. Clear the Drupal cache by calling drush cc in your website's document root directory. Otherwise you might not see some changes, as Drupal would read from the cache where possible.
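
If you prefer the command line over phpMyAdmin for step 5, the generated statements can be piped straight into the mysql client (a sketch; user name and database name are placeholders for your Drupal database credentials):

  php generate-bulkassign-sql.php | mysql -u drupal_user -p drupal_db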

Non-Working or Should-Be-Working Alternatives

Views Bulk Operations and og_actions

Views Bulk Operations is a Drupal module that can execute a configurable action on a selected set of content nodes. The node-to-group mass assignment should be possible together with the og_actions module, which provides relevant VBO actions for groups and whose functionality is said to be integrated into the core og (Organic Groups) module as of Drupal 6 [source]. However, the relevant "Modify Group Content" action (resp. the "Add the node to the specified group…" action) did not show up for me. So there seems to be no pure UI way (for me) to mass-assign content to a group in Drupal 7.

Views Bulk Operations and "Execute arbitrary PHP script"

There is another way to use Views Bulk Operations: together with own PHP code that performs the action on every entity (here: node) it is called on. An introduction to this technique is at "Copying data between fields with VBO".

In "Administration -> Configuration -> System -> Actions" (http://example.com/admin/config/system/actions/manage) there is a way to create an advanced action "Execute arbitrary PHP script", specifying your own PHP code to be executed on every node you call this action on. However any action I would create this way would not show up in the list of actions on top of the content items list in the admin backend, where I would normally select some content items, select that action from the dropdown and click "Update". Other advanced actions that I created did however show up, for example those created via the convert module.

Once the action shows up correctly, one would use this PHP code to get the action done (note: untested, maybe still buggy):

<?php
  // Script to assign one node to a pre-defined group via "Execute arbitrary PHP script" action.
  //
  // Intended to be used with the VBO feature, see: http://drupal.org/node/570220
  // PHP code to be copied into the "Execute arbitrary PHP script" field
  // without the <?php ? > code wrapper!

  $gid = 6; // Node ID of the group to assign content to.

  // Source and explanations: http://drupal.org/node/1249396#comment-6778598
  // API docs: http://api.drupalize.me/api/drupal/function/og_group/7
  og_group($gid, array('entity' => $entity));

  node_save($entity);
?>

PHP script to assign all nodes to a group at once

The most comfortable way to run a short script of your own inside the Drupal environment is drush php-script. It will bootstrap the Drupal context for you, so you have access to Drupal classes and functions just as from inside a custom module. See also this introduction to drush php-script.
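
Running such a script is then a single command from the Drupal root directory (the script file name here is just an example):

  drush php-script assign-nodes-to-group.php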

I tried to run this PHP code with the drush php-script technique, but so far it does not work ("Error: Class name must be a valid object or a string in <drupal-dir>/includes/common.inc line 7752"):

<?php
  // Script to assign a set of nodes to a pre-defined group.
  //
  // Intended to be used with the "drush php-script" command.

  $group_id = 1; // Node ID of the group to assign content to.
  $group_type = "node"; // Entity type of this group, for example "node", "comment".

  $node_ids = array(
    3, 238, 346, 123
  );

  foreach ($node_ids as $node_id) {
    $entity = node_load($node_id);

    // Source and explanations: http://drupal.org/node/1249396#comment-6778598
    // API docs: http://api.drupalize.me/api/drupal/function/og_group/7
    og_group($group_id, array('entity' => $entity));

    node_save($entity);

    // $is_member_after = og_is_member($group_type, $group_id, $entity_type = "node", $entity, $states = array(OG_STATE_ACTIVE));
    // echo("Assigned node $node_id to group $group_id. Member state now: $is_member_after.");
  }

?>

Automating the manual form editing with drupal_form_submit()

This is a new idea that I have not yet tried. Like above, one would use drush php-script to execute a little PHP script, but the script would fill in and submit the "node edit" form instead of dealing with the Organic Groups API directly. There is a working example for creating groups this way (though not for assigning nodes to groups).
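
As a starting point, an untested sketch of that idea could look as follows (assumptions: Drupal 7, a node type carrying the og_group_ref field, run via drush php-script; node and group IDs are placeholders):

<?php
  // Untested sketch: re-submit the node edit form programmatically so the
  // group assignment passes through the same form handling as a manual edit.

  module_load_include('inc', 'node', 'node.pages'); // provides the node edit form

  $node = node_load(357);  // placeholder node ID
  $form_state = array();
  $form_state['values'] = (array) $node;
  // Field structure assumed from a default og_group_ref field instance.
  $form_state['values']['og_group_ref'][LANGUAGE_NONE][0]['target_id'] = 366; // placeholder group node ID
  $form_state['values']['op'] = t('Save');

  drupal_form_submit($node->type . '_node_form', $form_state, $node);

  if ($errors = form_get_errors()) {
    print_r($errors);
  }
?>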