The WordPress Polylang plugin is a nice system to make your WordPress blog multilingual. However, if you had a different solution in use before and have a lot of blog posts, migrating to Polylang is an effort that cannot reasonably be done manually. So I developed a little Ruby script, polyglot2polylang.rb, for that. It is meant to migrate your blog content from the (no longer maintained) Polyglot plugin. However, you can easily adapt it to work with any multilingual plugin that stores the different language versions inside one WordPress page / post / comment etc. by using special tags, for example <lang_en>…</lang_en>.
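To illustrate that tag convention, here is a minimal sketch (not taken from the actual script; the helper name is made up) of how such tagged content can be split into per-language versions:

```ruby
# Split Polyglot-style multilingual content into one string per language.
# The <lang_xx>…</lang_xx> tag convention is the one described above;
# the backreference \1 makes sure opening and closing tags match.
def split_languages(content)
  content.scan(%r{<lang_([a-z]{2})>(.*?)</lang_\1>}m).to_h
end

post = "<lang_en>Hello world</lang_en><lang_de>Hallo Welt</lang_de>"
split_languages(post)  # => {"en"=>"Hello world", "de"=>"Hallo Welt"}
```

Adapting the script to another plugin's markup is then mostly a matter of changing this one regular expression.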

Installation

  1. Download the Ruby scripts and Gemfile from the polylang-migrate repository at Github, and save them all into one directory. Or just git clone it, of course.
  2. In that directory, execute bundle install to install necessary gems (basically just nokogiri).

Usage

  1. Export your WordPress blog content as WordPress WXR file using the "Tools -> Export" menu item.
  2. Syntax-check your exported WXR file. (Otherwise, unexpected behavior can occur. For example, an unexpected closing tag with no corresponding opening tag will cause nokogiri to consider the current item the last one, without any hint or warning, which means that not all posts of the blog will be processed.) To check the syntax, use for example xmllint --noout, which prints parser errors on stderr and prints nothing if all is right.
  3. Run the polyglot2polylang.rb script on the exported WXR file (see the instructions in the script).
  4. Make a backup of your complete WordPress database.
  5. Make a backup of all your media files: cd httpdocs/wp-content/; cp -a uploads/ uploads.orig/;. This is because when deleting the last media library entry that refers to a specific file, that file will be deleted, too. And we will have to delete all media library entries later!
  6. Install the WordPress Polylang plugin.
  7. Delete all posts, pages, comments, categories, tags and media library entries in your WordPress blog, using the WordPress backend interface. You can use the Bulk Delete plugin to speed up that task a bit.
  8. Make sure the files for the media library entries are available at the URLs mentioned in their <item> tags in the WXR file. Adapt the polyglot2polylang.rb script to modify these URLs accordingly, if necessary.
  9. Import the modified WordPress WXR file. When asked, check "Download and import file attachments."
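If you prefer to do the well-formedness check of step 2 from Ruby instead of with xmllint, a quick sketch using the stdlib REXML parser (which raises on malformed XML) could look like this:

```ruby
require 'rexml/document'

# Returns true if the XML string is well-formed, false otherwise.
# A stdlib alternative to `xmllint --noout` for a quick sanity check.
def well_formed?(xml_string)
  REXML::Document.new(xml_string)
  true
rescue REXML::ParseException => e
  warn e.message
  false
end

well_formed?("<a><b/></a>")   # => true
well_formed?("<a></b></a>")   # => false (mismatched closing tag)
```

Read the exported file with File.read and pass its contents to this helper before running the migration script.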

The last step will fail for WXR files with many attachments to download and import (incl. 500 Internal Server Error), unless your web server is configured to allow very long script execution times. You can either configure it to extend these execution times [instructions, of which step 1 is irrelevant now] or use the following, rather clumsy, workaround (in analogy to what other people found):

  1. Use wxr-separate-attachments.rb (also in the polylang-migrate repository) to split your WXR file into attachments and content (posts and pages).
  2. Create a post (media file, page or post) with an ID that is higher than every ID that will be used by your imports. This can be done by creating some media file, then modifying its "ID" field in table wp_posts via phpMyAdmin. This step is needed because WordPress creates a media file for every WXR file you upload for import; if its ID is one that should be available to an imported post, the imported post's ID will get shifted instead, without any error message, and the IDs of all following imported posts likewise, producing total confusion with respect to ID references in the WXR file.
  3. Import the WXR file with attachments into your WordPress blog repeatedly, until no error occurs. Every import will successfully download and import some more media files. Of course, check "Download and import file attachments." every time.
  4. Import the WXR file with posts and pages.
  5. You will now have all content imported, but media library entries will appear as not attached to posts and pages [issue report]. This also happens when importing attachments after posts and pages. The solution is to create the attachment relations directly in the database, using phpMyAdmin to execute the SQL file <outfile>.attach.sql that was also created when you ran polyglot2polylang.rb.
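The idea behind wxr-separate-attachments.rb from step 1 can be sketched as follows: every <item> in a WXR file carries a <wp:post_type> child, and attachments are simply the items where that type is "attachment". This is an illustrative sketch with the stdlib REXML parser, not the actual script:

```ruby
require 'rexml/document'

# Split the <item> elements of a WXR export into [attachments, content],
# based on each item's <wp:post_type> child element.
def partition_items(wxr_xml)
  doc = REXML::Document.new(wxr_xml)
  items = doc.get_elements('//item')
  items.partition do |item|
    type = item.elements.to_a.find { |e| e.expanded_name == 'wp:post_type' }
    type && type.text == 'attachment'
  end
end
```

Writing the two item sets back out as two complete WXR documents is then mostly a matter of copying the surrounding <rss><channel> skeleton around each set.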

Limitations

  • Workaround for title translations getting lost. As documented in the script, the WXR export will not contain Polyglot markup tags in the post and page titles, so these translations get lost during this process. You can however get the original titles as an export from your database (for example by exporting a one-column result set to CSV in phpMyAdmin) and then use an adapted version of depolyglot.rb to create SQL statements that will convert your artificially-unique titles back to the correct translations that did get lost.
  • Changing language-specific slugs. After this process, one language version will have the original post slugs, and the other language versions will have slugs with an appendix ("-italiano" in the unmodified scripts). The polyglot2polylang.rb script generates another SQL file <outfile>.attach.sql that you can edit and execute to adapt these slugs more to your liking, by doing proper translations. The original slugs are better left unchanged this way, to keep URL compatibility with existing links; but you can edit them in your WordPress backend – WordPress then creates a 302 forwarder for the original one, and this also gets saved when backing up to a WXR file.
  • Better: interface with the database directly. The whole transition process is quite complex, and further complicated by title translations getting lost in a WXR export and by timeouts of the WXR import. For that reason, if I had to re-do this task, I would write a script that directly operates on the WordPress SQL database. The database has a clear structure, and I'd rather deal with that than with all these hacks. If there's no Ruby installed on the server, the script could also run locally and access the database over a remote connection. And, when enabled with some option, it could even ask the user to interactively correct slug names etc.
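The depolyglot.rb adaptation mentioned in the first point essentially emits one SQL UPDATE per title. A hypothetical sketch (the table name and the mapping are assumptions; the mapping would come from your pre-migration database export, and you should adapt quoting to your data):

```ruby
# Given a mapping from the artificially-unique titles in the imported posts
# to their correct translated titles, emit SQL UPDATE statements for wp_posts.
def title_fix_sql(title_map)
  title_map.map do |unique_title, translated_title|
    "UPDATE wp_posts SET post_title = '%s' WHERE post_title = '%s';" %
      [translated_title.gsub("'", "''"), unique_title.gsub("'", "''")]
  end
end

title_fix_sql({ "My Post -italiano" => "Il mio post" })
# => ["UPDATE wp_posts SET post_title = 'Il mio post' WHERE post_title = 'My Post -italiano';"]
```

Review the generated statements before executing them in phpMyAdmin.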

Further Information

WordPress offers different interfaces, and Ruby can access all of them. There are several scripts out there for these, but no "perfect one" yet. They are all in different stages of maturity and age, and none of them seems actively maintained. So, choose according to your personal needs and preferences:

WordPress WXR interface

I would propose you look through this list, ordered by my evaluation of the script's general quality, and choose the first that fits your needs:

  • translatour. A tool dedicated to making WordPress WXR files accessible in Ruby.
  • wp-import-dsl. A Ruby domain-specific language (DSL) to import WordPress WXR files to anything else. However, this seems not to be the best choice if you just want to do a few little changes to the WXR files, as you have to write the complete output rendering yourself. Last update 2011-06.
  • wordpress_import. A minimalist parser for WordPress WXR files, done in Ruby. Filters out junk tags and can be used as a library.
  • wordpress_to_word_to_ebook. x-ian's tiny script to convert a WXR file into a single HTML page. From there, you can follow the instructions on his blog to create an e-book from your WordPress posts, if desired. Uses nokogiri.
  • WordPress WXR to Postmarkdown. Snippets to create a script that can convert a WordPress WXR file into posts for the Postmarkdown gem, using Markdown markup. Quite an elegant approach, using the Nokogiri and Upmark gems.
  • wxr_export.rb. A script to output WXR files, using the XML::Builder Ruby gem. It's a bit special because this WXR file is meant to only contain comments and be imported into the Disqus cloud-based comment system.
  • Typo Export to WordPress. Short Ruby on Rails script that can convert content from the Typo blogging system to WordPress.
  • wordpress2blogger.rb. Short Ruby on Rails script that can convert a WordPress WXR file to the XML file format expected for importing into the blogger.com platform. From 2008-04.
  • wordpress_importer. evan's script that can import a "homegrown blog" and convert its content into a WordPress WXR file. Last commit 2010-12.
  • refinerycms-wordpress-import. Script to convert a WordPress WXR file into content for the Refinery CMS. Contains a nice gem for that task, that could be converted to do a different conversion task as well.
  • hackWXR.rb. A tiny script to enclose some tag contents in WordPress WXR files into CDATA tags to prevent them from (slow) XML parsing when handling large WXR files with Ruby code. See also the author's related blog post.
  • splitWXR.rb. A Ruby tool to split a big WXR file into smaller pieces, to avoid errors when importing into WordPress. See also the author's related blog post. There's also a version including a graphical interface: SplitWXR.

WordPress XMLRPC interface

You can also talk to WordPress via its XMLRPC interface. Alternatives, the most recommendable first:

  • WordPressto. A Ruby gem to access the WordPress XMLRPC interface. (This is an actively developed fork of the original johnl/Wordpressto, which was last updated 2010-09.)
  • wp_rpc. Appears like another fork of WordPressto, last updated 2012-10 (as of 2013-03).

WordPress JSON interface

WordPress can provide a JSON interface by using the JSON-API plugin for WordPress. Then, you can interface with this from Ruby by using:

WordPress REST API

You can use these Ruby tools to interface with the wordpress.com REST API:

WordPress database access

Still another way is to interface directly with the MySQL database of WordPress. Alternatives:

  • WP-Ruby. Tools to map Ruby objects to the WordPress database structure. Last updated 2010-07.

At DIY Days Gothenburg, I showed the EarthOS project in the experience hall. With it, I collect and integrate open source alternatives for just about everything in life, and I showcased things like MakerBot, Bitcoin and Open Source Ecology.

What totally surprised me was how DIY Days contributed back: it made me understand that EarthOS is more story than engineering project, because that's how DIY Days participants intuitively understood it ("I appreciate it as a work of art."). It's a novel, disguised as a manual. A utopian exploration of how open source will transform the world.

Some thoughts from the accidental storyteller that I am:

  • Share Do Learn. Share your half-ready project – DIY Days fits great! Being forced to explain it, you're forced to learn what you're actually doing. That's what happened to me.
  • Mingle with people you'd never meet otherwise. As a tech hacker, getting involved in the artful DIY Days community enabled me to see my story. A story is what the audience says is a story!
  • Tell an open story, not just an open end. EarthOS has no plot, just a framework, tools, inspirations. People imagine their own plot and role from this vague vision of the future.
  • Let's call it a plan. Your ambitious story is most inspiring when you call it a "project" and disguise it as reality. It made me sad that people never knew if EarthOS is real, vision or outright fiction. Now it's a story that covers all these, and I want to play with this confusion, challenging the audience towards action.
  • Live out your stories. It makes them much harder to ignore, and it makes you a synthesis of the arts. Everything has its place: code is poetry, sewing is costume design, makeup is art, home hacking is set design and demonstrations are stages. Let's care to tell world-changing, deeply meaningful stories with our lives. Unfinished stories which others desire to continue when we're dead.

Awesome life stories to all! (And, would love to hear how you enact yours.)

Matthias (@matjahu)

This is a somewhat extended version of an article by me that appeared in the Learn Do Share Book number 3 (from DIY Days Gothenburg in 2013-02 – available in full for download). The article was inspired by my experience at DIY Days Gothenburg 2013.

This was one of my project proposals for the Interactivos'13 open-source projects workshop in Madrid. It didn't get selected in the end, but if you feel inspired by this or want to implement it … feel free to do so. This is open content licensed under CC BY 3.0 or, at your option, any later version.

Project Summary

For many professions, there's a home for collaboration on the Net: programmers have SourceForge and Github (and many more). Electronics engineers have Open Design Engine (opendesignengine.net) and Upverter (upverter.com). Writers have tools like EtherPad Lite and Google Docs. But artists and designers? There is none I'm aware of.

Sure, there's deviantart.com and flickr.com. Huge platforms, but not collaborative at all: the only things to do there are presenting your work and commenting on others'. The "Fork Me on Art Hub" project wants to fill exactly that gap. It wants to place artists and designers into an open content "rhizome of graphical knowledge", where it feels like everybody is collaborating with everybody else, without needing any special invitation.

Here's how: A web-based platform for social collaboration in artwork and design, grown around version management for artwork files with git, the promotion of open content licenses (like CC-BY-SA), and "uninvited contributions" by fellow designers and those previously known as "art consumers". This kind of "uninvited contributions" is well-known in the software world, for example called "forking" and "pull requests" on Github.

(Note: For the version management part, this project is complemented by the "Git for GIMP" project proposal. That project makes the workflow much more likable for designers, but initially the platform can also work without it, using SparkleShare for example.)

Project Description

The portal's functionality is best explained by assuming a "social network" type portal, plus the following features (listed by importance and explained in detail below):

  1. Free git project hosting. Artists and designers can register for free and host art projects for free, as long as they assign an open content license to them. All art projects are automatically versioned with git, and the different versions are also accessible via a web interface (gitorious is a nice base software for that). Not just the artwork goes into the git repo, but also utility files and everything else needed to collaborate on an artwork, like scripts for generative art.

  2. "Fork Me" function to create derivatives. Like on Github, there will be a prominent "Fork" feature. Clicking it initializes a new git repo of your own with the artwork in question, letting you add your own versions on top of the original ones. Once you have something you want to contribute back to the original author, you can create a "pull request" for that version.

  3. One-click accepting of contributions. Ideally, it will be possible to include others' updates back into your own work by just clicking "accept" on the corresponding pull-request notification that pops up on the website. It would of course be possible to preview the changes before accepting them.

  4. Embeddable widget with "Fork Me" function. This is one of the most innovative aspects here. For embedding an artwork into a website, whether the artist's own or anybody else's, the Art Hub platform provides auto-generated "embed code". That HTML snippet not only shows the artwork, but also a "Fork Me" button that takes the reader to the Art Hub platform and shows some easy steps to create a derivative artwork. All derivatives are then shown in a slideshow that is also accessible from within the embedded HTML. This means that creating derivative works results in immediate publicity in all publications showcasing the original work – and the "consumer" is no longer a consumer at all, but a co-producer. Especially for art-related publications it will be a lot of fun for the artists and art-enthusiast readers to see the derivative works produced by their fellow readers.

  5. Social commit messages. To make the Art Hub system more enjoyable in spite of the quite technical version management, the git commit message for each new version should be split into a technical and a social part. Giving thanks, making a funny comment etc. goes into the social part, and update notifications and pull requests on the web platform should show these social parts of the commit message as well, alongside the picture of the author, similar to the update notification feed found in social networks like Facebook.

  6. Derivative graph. With artwork, it's not like with software: given a set of derivatives, people will hardly ever agree on a best version, while in software all improvements are regularly merged into the main version. So with artwork, there will be many forks that do not get merged back into the original, and these should be shown as a tree-like graph of derivative works (incl. preview images) on a project's page.
    This would even be the main feature of this invention: allowing not just one version of a graphic to exist, but a lot of interdependent versions. (They can all be incorporated within one git repo, as branches that branch into even more branches.) Those who search for a work to incorporate can then look through all the variants. And it would be the job of the main graphic project's authors to provide a systematic collection of the derivatives that are, in their view, the most relevant.

  7. "Getting derivatives" as reputation. Collaboration is also about culture: on Github, you can estimate the popularity of a project by looking at the number of followers and forks. And similarly, people creating derivative works should be considered a good thing on the Art Hub platform and their number would be shown prominently, to encourage the culture of sharing.

  8. Embedding option with automatic attribution. When generating the HTML snippet with the embed code, the platform also automatically includes proper attribution for all base works, in accordance with the artworks' licenses. This automatic attribution removes a major practical hassle when dealing with open content photography and images: keeping track of sources and attributing correctly.

  9. Art Hub integration into FLOSS graphics apps. There would be plugins for major FLOSS apps (GIMP, Inkscape, MyPaint) to open and fork Art Hub art repositories directly from the Internet. (Note that these app plugins would manage a local git repo invisibly; no need to care about that.) When saving back to the repo (or a new forked repo) from the graphics application, a "new fork / derivative / pull request" notice will appear on the Art Hub platform. This feature is for workflow improvement only, and not needed for a first working version.

  10. Federation. Some artists may want to fork the platform itself and create their own self-hosted artist community. As the platform software will be free and open source, this is clearly possible. However there should be an actively promoted "federation" feature that allows a global search on all platforms that have it enabled, plus cross-platform forking of artwork projects.

  11. CC licence registry. The platform can also take over the role of a "copyright licence registry", here for open content licenses, as another way to promote collaboration in the arts. It's a platform to record the fact of people licensing their work, to avoid potential legal hassle later.

  12. Automatic pingbacks for derivatives. Of course the platform informs the authors about derivatives created on the platform, but additionally it can search the web (with image similarity search etc.) for other derivatives and likewise create notifications for these.

  13. New collaboration option for large graphics. This software would allow new types of collaboration on large infographics etc., by creating placeholders at first, putting them together into the master graphic, then letting everybody work on fleshing out one placeholder each and feeding the changes automatically into the master graphic.
    Similarly, this kind of distributed, versioned graphics creation system should also work for multi-page DTP documents with lots of illustrations, like by integrating it with Scribus. So a lot of authors (including the general public) can work on creating a complex document, both the text and graphics.
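To make the derivative graph of feature 6 a bit more concrete, here is a hypothetical sketch (all names invented, no relation to any existing implementation) of how fork relationships could be modeled and rendered as a simple tree:

```ruby
# A minimal model of artwork forks: each work holds its derivatives,
# so the derivative graph is a simple tree that can be walked recursively.
Artwork = Struct.new(:title, :forks) do
  # Render the tree as indented lines, one artwork per line.
  def to_tree(indent = 0)
    lines = ['  ' * indent + title]
    forks.each { |f| lines.concat(f.to_tree(indent + 1)) }
    lines
  end
end

root = Artwork.new('Sunset', [Artwork.new('Sunset (cropped)', []),
                              Artwork.new('Sunset (blue variant)', [])])
puts root.to_tree.join("\n")
# Sunset
#   Sunset (cropped)
#   Sunset (blue variant)
```

On the actual platform, each node would of course carry a preview image and a link to its git branch rather than just a title.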

This was one of my project proposals for the Interactivos'13 open-source projects workshop in Madrid. It didn't get selected in the end, but if you feel inspired by this or want to implement it … feel free to do so. This piece is open content licensed under CC BY 3.0 or, at your option, any later version.

Project Summary

Version control software like git makes collaboration between programmers quite seamless: it can merge together their changes and lets them revert unwanted changes. Not so for artists and designers, where collaboration still can mean mailing files around with timestamps in filenames. That's slow and error prone, not the fun of simultaneous collaboration.

Projects like SparkleShare improve on that, bringing git to designers (and designers love it). But git was originally made for source code and not images, so it's always a manual editing effort to merge changes from two designers who did parallel changes to the same version of an artwork. Resolving all these conflicts manually is also no fun, and effectively blocks designers from experiencing git's true power: branching, for example. You could do some experimental changes to some artwork, exploring your own path or paths, while your collaborators proceed on the main version, fixing little flaws for example. Once you agree what experimental changes to include in the main version, git should do so for you. For source code, git can do so automatically. For GIMP images (or maybe MyPaint, Inkscape or Scribus files instead), this project will extend git with that ability.

An additional aspect of this project is that it complements the "Fork Me on Art Hub" project proposal, which is a git based art sharing platform with novel features that encourage collaboration between artists and those still considered "consumers of art". However, this project can also function without that special platform, as it can work with every git repository (like from GitHub, Gitorious, Bitbucket, or self-hosted).

Finally, here's the main technical innovation of this "Git for GIMP" project: "change instructions" for raster images. For now, when SparkleShare stores a new image version into a git repository, it does so as a binary file. Git can compare it to the earlier version and store only the binary diff to save space (see "git gc"), but it does not understand the file's inner structure, so it does not know how to merge parallel changes. After this project, git will instead store an image version as aggregated "change instructions" for a base image. Informal examples of change instructions would be:

  • move layer "person 1" by 14 px to right and by 30 px to top
  • change transparency of layer "flare" to 30%
  • change image data of layer "shadow" by combining it with the attached overlay layer (which has RGBA enabled)

The last type even allows git to merge changes to the actual image data of the same layer, as long as they don't conflict (don't affect the same pixels). Note that working with image files is no different with this extended git: when checking out a specific version, git will apply the relevant change instructions to the base version and provide the requested version in your file system.
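To make the informal examples above a bit more tangible, here is a hypothetical encoding of such change instructions and of applying them to an image state. The data format is entirely made up for illustration; a real implementation would operate on OpenRaster layers:

```ruby
# Apply a list of "change instructions" to a base image state (a hash of
# layer names to layer properties), returning the new state. The base
# state is left untouched, mirroring how git keeps the base version.
def apply_instructions(layers, instructions)
  instructions.each_with_object(layers.dup) do |ins, state|
    layer = state[ins[:layer]].dup
    case ins[:op]
    when :move
      layer[:x] += ins[:dx]
      layer[:y] += ins[:dy]
    when :opacity
      layer[:opacity] = ins[:value]
    end
    state[ins[:layer]] = layer
  end
end

base = { 'person 1' => { x: 0, y: 0, opacity: 1.0 },
         'flare'    => { x: 5, y: 5, opacity: 1.0 } }
apply_instructions(base, [
  { layer: 'person 1', op: :move, dx: 14, dy: -30 },
  { layer: 'flare',    op: :opacity, value: 0.3 }
])
# => person 1 at (14, -30), flare opacity 0.3
```

Merging two parallel edits then amounts to concatenating their instruction lists, provided the instructions touch different layers or different pixels.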

Project Description

The "Project Summary" already contains all the major points about this project, so here are just some more details about the idea and a possible implementation, in no particular order, one detail per paragraph:

The current situation of images in git / svn. There are several options to add handling of binary data to git [examples]. It seems that changes only create small increments in repo size (at least when using "git gc" garbage collection). This would be the same as in SVN then, as discussed with a Pixelnovel Timeline developer. However, in all these cases git and svn do not yet understand the inner structure of the image files, so they cannot automatically merge non-conflicting changes.

The user's experience. From a user's perspective, the software should act mostly like SparkleShare (and will probably indeed be based on it!). So, a designer's work is synced to a central git repository and from there to teammates automatically whenever a change is saved. However, to enable advanced versioning like git branching, there will be a little git plugin for the chosen graphics application, probably GIMP, to enter the git commit message, choose or create a branch, revert to a prior version (ideally with thumbnail preview) and so on.

GIMP or Inkscape? The proposal so far is to build a tool for putting OpenRaster images (from GIMP or MyPaint) into git repositories. This requires a completely new tool to extract the "change instructions" mentioned above, and to build new OpenRaster images by applying them. If that's too complex for a two-week workshop, a similar approach can be taken for Inkscape's SVG files, with the advantage that they are XML text already, so it will require little effort to teach git how to merge parallel changes. The main effort would then be to develop a user-friendly git plugin for Inkscape that designers will love to use. (It should show incoming "pull request" notifications when others have made changes to an open file, and the designer would accept them with a single click.)

OpenRaster, not XCF. If a pixel-based graphics application is chosen for this project (like GIMP, which is the current proposal), it is advisable to use the OpenRaster format for storing the images – not GIMP's native XCF format, which is not recommended as a data interchange format and mostly represents GIMP's internal data structures [source]. OpenRaster support is included in GIMP since version 2.7.1 or 2.8 [source]. An additional advantage of OpenRaster is that it benefits multiple applications (like MyPaint) and allows collaboration between them. A disadvantage is that it is still quite a new, not widely adopted file format – but nonetheless the proposed open standard format for raster images. Apart from OpenRaster and XCF, TIFF would be the only other format that could be used; however, the ways of saving layer metadata etc. in TIFF are normally proprietary, as TIFF is basically just a container format.

Deriving instructions from GIMP history? In GIMP's case, these "change instructions" might be derived from GIMP's history feature. But maybe a better alternative is to derive them by comparing two versions of a saved file directly – as done in the world of source code by "diff".
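As an illustration of that diff-based approach: an OpenRaster file contains a stack.xml describing the layers with attributes such as name, x, y and opacity, so a first cut of a "change instruction" extractor could simply compare those attributes between two saved versions. A sketch, assuming this simplified stack.xml structure and using the stdlib REXML parser:

```ruby
require 'rexml/document'

# Compare the <layer> elements of two stack.xml versions by layer name
# and report which attributes changed, a crude diff-based way to derive
# "change instructions" instead of storing opaque binary deltas.
def layer_changes(old_xml, new_xml)
  read = lambda do |xml|
    layers = {}
    REXML::Document.new(xml).get_elements('//layer').each do |l|
      attrs = {}
      l.attributes.each { |name, value| attrs[name] = value }
      layers[attrs['name']] = attrs
    end
    layers
  end
  old_layers = read.call(old_xml)
  read.call(new_xml).each_with_object({}) do |(name, attrs), changes|
    diff = attrs.reject { |k, v| old_layers.fetch(name, {})[k] == v }
    changes[name] = diff unless diff.empty?
  end
end
```

Changes to the actual pixel data of a layer would not show up here; those would need the overlay-layer mechanism described in the summary.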

Inspirations from Pixelnovel Timeline and ComparePSD. The closest existing product for version control of images is Pixelnovel Timeline, and it offers a lot of insights for a great workflow and user interface when developing version control software for designers – see http://pixelnovel.com/timeline . It is based on the SVN version control system; however, it can only do linear versioning and rollback, and needs manual merging for changes derived in parallel from the same version. Also interesting for UI design in this project is the Pixelnovel ComparePSD tool for comparing PSD files layer by layer.

Inspirations from Kaleidoscope. There is an app for visually comparing differing versions of an image, to spot differences optically: Kaleidoscope.

This applies to Debian 6 "Squeeze", using PHP 5.4.11 from Dotdeb, and Froxlor 0.9.27 from the Debian archives.

Setup instructions

  1. We assume that your Apache2 server is installed and working, and so is your PHP 5.4.x installation.
  2. Install PHP-FPM:
    apt-get install libapache2-mod-fastcgi php5-fpm
  3. Enable PHP-FPM in Froxlor. (After saving, you will get an additional "configuration" link in the line for PHP-FPM.)
  4. In the PHP-FPM configuration in Froxlor, change "Path to php-fpm configurations" to "/etc/php5/fpm/pool.d/", because that's the path where the Debian package expects these .conf files by default. (Alternatively, you could adapt that behavior by editing the include directive in /etc/php5/fpm/php-fpm.conf, at the very bottom).
  5. In the PHP-FPM configuration in Froxlor, change "Command to restart php-fpm" to "/etc/init.d/php5-fpm restart".
  6. Let Froxlor create the new configs:
    php /var/www/froxlor/scripts/froxlor_master_cronjob.php --force
  7. Exchange the php5 handler with the fastcgi one (and other stuff needed by PHP-FPM):
    a2enmod fastcgi actions alias
    a2dismod php5
  8. Fix Apache complaining about the config line "Invalid command 'SuexecUserGroup'" in the Apache vhost configs generated by Froxlor [source]:
    1. apt-get install apache2-suexec
    2. a2enmod suexec
  9. Fix php-fpm failing to start because Froxlor did not create the system users and groups for the customers it refers to by name in the php-fpm config files.
    1. cd /var/customers/webs/
    2. For every customer in there, execute an equivalent with the proper ID values and customer names for:
      addgroup --gid 10006 customername
      adduser --uid 10006 --gid 10006 customername
  10. Restart PHP-FPM:
    service php5-fpm restart
  11. Restart Apache2:
    service apache2 restart
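For step 9, the addgroup/adduser command pairs can also be generated with a few lines of Ruby. A hypothetical helper (the names and IDs here are only examples; check the real values against Froxlor's database before running anything):

```ruby
# Generate the addgroup/adduser command pairs for a hash of customer
# names mapped to their numeric IDs. The IDs must match what Froxlor
# assigned; this just saves typing when there are many customers.
def user_creation_commands(customers)
  customers.flat_map do |name, id|
    ["addgroup --gid #{id} #{name}",
     "adduser --uid #{id} --gid #{id} #{name}"]
  end
end

puts user_creation_commands({ 'customername' => 10006 }).join("\n")
# addgroup --gid 10006 customername
# adduser --uid 10006 --gid 10006 customername
```

Pipe the output into a file, review it, and then execute it as root.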

Should work now. Verify by testing as shown below.

How to test your setup

  1. When testing your setup, test with a domain or subdomain site, not with the "IP and port" site. For the latter, Froxlor fails to create a proper pool configuration file in /etc/php-fpm.d/ (while generating the VirtualHost config file properly), so it will always fail with error messages like this in /var/log/apache2/error.log, using your FQDN server name:

    [Wed Feb 20 19:57:13 2013] [error] [client 91.15.26.18] (2)No such file or directory: FastCGI: failed to connect to server "/var/www/hostname.example.com.fpm.external": connect() failed
    [Wed Feb 20 19:57:13 2013] [error] [client 91.15.26.18] FastCGI: incomplete headers (0 bytes) received from server "/var/www/hostname.example.com.de.fpm.external"

    Note that the file /var/www/hostname.example.com.fpm.external is indeed missing, but that is not the problem: the equivalent file is missing for working websites as well (the docs say "The filename does not have to exist in the local filesystem.").

  2. The first, simplest test is to choose a website and place a little script in it (called userinfo.php or something) with just this content: <?php system('id'); ?>. When calling it in your web browser, it should generate output that points to the user and group used in the SuexecUserGroup directive in that site's VirtualHost config. Note that php-fpm, as configured by Froxlor, does not use the script's owner as the user to execute it as, unlike mod_suphp.

  3. Then proceed to test a full website (keeping all other sites temporarily disabled by moving the configs out of /etc/apache2/sites-enabled/). Do not choose a phpMyAdmin or WordPress site for your first testing site however, as there can be special problems to be dealt with.

Fixing other issues

  • "There is no fastcgi wrapper set." When restarting Apache2, you might see messages like "[warn] FastCGI: there is no fastcgi wrapper set, user/group options are ignored". These can be ignored because Froxlor uses suexec to adapt the user and group of the server process, not the php-fpm internal mechanisms. See the system('id'); test above which proves this.
  • Adding directories to open_basedir. When using Froxlor with Apache and mod_php5, you could add site-specific values to open_basedir. When using PHP-FPM, this is no longer possible, because site-specific values are now stored in /etc/php5/fpm/pool.d/*.conf files, which will be overwritten when Froxlor regenerates its config files. And there's seemingly no option to add to them from within Froxlor. One might edit the affected .conf files and set them to non-writable for the Froxlor user, but that will create hard-to-track future problems. It's cleaner to add all directories required for any one site to all of them, globally, via "Server -> Settings -> Web Server Settings -> Configuration", where you'll find an option to append paths to the open_basedir setting of all your virtual hosts.
  • Installing Roundcube from the Debian package. You will have to add some paths to the open_basedir default setting as described just above. This includes /etc/roundcube. However, it seems that Froxlor 0.9.27 silently discards any directory below /etc/ that you try to add to open_basedir via "Server -> Settings -> Web Server Settings -> Configuration". Seems to be an undocumented "security feature" 😀 Normally you could work around that by overriding open_basedir per vhost, but PHP-FPM does not interpret per-vhost PHP settings, which is why we have to work with the global open_basedir setting. The best solution I found was the following (with some luck, Debian package management will not complain, because we only swap which path is the symlink and which is the real directory):
    rm /var/lib/roundcube/config/*
    mv /etc/roundcube/* /var/lib/roundcube/config/
    rmdir /etc/roundcube
    ln -s /var/lib/roundcube/config /etc/roundcube
  • Restarting PHP-FPM. This can be required after manual changes to its config files in /etc/php5/fpm/pool.d/. The simplest way is: service php5-fpm restart.
  • Enabling the IP-and-port site. By default, Froxlor will not generate a /etc/php5/fpm/pool.d/*.conf file for the "IP and port" website, so it will not be served by php-fpm, resulting in "Server Error 500". This behavior is controlled by the Froxlor option to use PHP-FPM "for Froxlor itself" (quite a misnomer, but true: the "IP and port" site configuration makes the same assumption that Froxlor is normally served via that site, where it says "User defined docroot (empty = point to froxlor)"). So the solution goes like this:
    1. Enable that option to use PHP-FPM for Froxlor itself.
    2. Note that the additional options on the same page, to change the user and group names used for the Froxlor vhost with PHP-FPM, have no effect on the generated VirtualHost config (which seems to be a Froxlor bug). So better leave them at "froxlorlocal".
    3. Let Froxlor recreate the config files:
      php /var/www/froxlor/scripts/froxlor_master_cronjob.php --force
    4. Ensure that there is now a config file in /etc/php5/fpm/pool.d/ named by the FQDN of your host.
    5. Restart Apache2: service apache2 restart
    6. Restart PHP-FPM: service php5-fpm restart
    7. Call up your IP address in your browser and see if it works.
  • Fixing WordPress sites that use URL rewriting. When re-enabling these sites, they will probably fail with this error message in the log: "Request exceeded the limit of 10 internal redirects due to probable configuration error.". The solution is to adapt the .htaccess file of WordPress with the rewrite rules to look like this [source]:
    # BEGIN WordPress
    <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteCond %{REQUEST_URI} !^/fastcgiphp/*
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . index.php [L]
    </IfModule>
    # END WordPress
  • Fixing Indefero sites that use URL rewriting. Indefero is a simple, Google-Code-like, open source code and project hosting software built around git. When served via PHP-FPM, it will show "Server Error 500", with "Request exceeded the limit of 10 internal redirects due to probable configuration error." in the log. As with the WordPress fix above, simply add RewriteCond %{REQUEST_URI} !^/fastcgiphp/* to its .htaccess file to prevent the circular redirection. The .htaccess will then be:
    Options +FollowSymLinks
    <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteCond %{REQUEST_URI} !^/fastcgiphp/*
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*) /index.php/$1
    </IfModule>