In this post, I want to show a solution that helps to quickly install your set of desired open source Android apps from F-Droid, by installing them with adb. It also works with apps from Google Play, but you have to download them as .apk files first. This is not possible on Google Play directly, but it works with third-party services. Be aware of potential security implications though.

So it would be possible to have a single script running on your computer and bulk-installing all your Android apps on your phone. However, once you have installed your desired apps in any way, it is faster and more comfortable to use App2zip, App2zip Pro or ZIPme to create a .zip file with your apps that you can then install in recovery mode on any phone you want them on.

The process for unattended install of Android apps via adb works as follows:

  1. Enable USB debugging on the Android phone. This is needed for adb to work. [instructions]
  2. Install Google Play. We install a minimized version of Google Apps here that contains just Google Play and required libraries. You can install everything else from Google Apps via the Google Play Store later. [TODO: Minimize this further by installing just the three essential apps, saving 70 more MiB].
    1. Download the minimized Google Apps package from the thread "[APP][MINIMALISM] Google Play 3.10.10 | Market ONLY Gapps for GB/ICS/JB4.1/JB4.2".
    2. Push it to the phone's SD card:
      adb push <gapps-zip-file> /sdcard/
    3. Reboot into your favorite recovery:
      adb reboot recovery
    4. Install the ZIP file from the SD card with your recovery software.
  3. Install APKs. For every APK, simply call [see adb command line arguments]:
    adb install filename.apk
    Due to permission issues on some Ubuntu host systems, you will have to do:
    sudo adb install filename.apk
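The install steps above can be wrapped in a small shell function that installs every APK found in a directory. This is just a sketch: the function name install_apks and the directory layout are my own invention, and adb is assumed to be on the PATH with USB debugging enabled on the phone.

```shell
# Bulk-install all .apk files from a directory via adb.
# Assumes adb is on the PATH and the phone has USB debugging enabled.
install_apks() {
    apk_dir="$1"
    for apk in "$apk_dir"/*.apk; do
        # If the glob did not match, there are no .apk files in the directory.
        [ -e "$apk" ] || { echo "No .apk files found in $apk_dir"; return 1; }
        echo "Installing $apk ..."
        adb install "$apk"
    done
}
```

Usage: install_apks ~/Downloads/fdroid-apks (and prefix the adb call with sudo on affected Ubuntu systems, as noted above).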

This task refers to the FolderSync app version 2.5.4 – the paid version, though it may work with the lite version too.

It seems that this is not an intended feature currently, but it is possible. Instructions:

  1. Enable USB tethering on your Android phone.
  2. Connect the computer to the USB network connection (while disconnecting it from wifi to make sure indeed the USB connection is used in this test).
  3. Look up the computer's IP for this connection, and set up an account (here, SFTP) in FolderSync accordingly.
  4. Create a folderpair in FolderSync, and make sure that you check "Use Wifi" for the connection to use.
  5. Make sure wifi is enabled on the Android phone and connected to some network. (Sadly, the folder-syncing-via-USB hack only works when the phone is indeed connected to a wireless network, even though the actual data goes via USB, of course. I guess "connection" means "wlan0 has an IP address", not that an actual Internet connection would be needed. So if you can find a way to set up a static wifi connection to a base station with an invented name (that is not in range, of course), that should be sufficient to make FolderSync work.)
  6. Open the folderpair entry in FolderSync and click the "arrows in circle" button to trigger immediate syncing. It should succeed now.
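For step 3, looking up the computer's IP on the tethering link can be scripted with a small helper. A sketch: the function name is mine, and the interface name usb0 is an assumption (check with "ip link" after enabling tethering).

```shell
# Print the IPv4 address assigned to a network interface,
# e.g. the USB tethering link (often "usb0" on Linux).
iface_ip() {
    ip -4 -o addr show "$1" | awk '{ print $4 }' | cut -d/ -f1
}
```

Usage: iface_ip usb0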

With wifi disabled (or enabled but not connected to any wifi network) on the Android phone, the following error message will appear: "Folderpair not synced because syncing is not configured for current network type or network is not available".

The idea is that files saved in some folder on Android are automatically transferred to a folder on your Linux-based desktop computer, and vice versa. This should happen locally, without a "cloud storage" somewhere on the Internet.

For now, the best solution for automatic syncing is the commercial, closed-source app FolderSync, as recommended by Linux Journal.


I tried a bunch of alternative solutions, but they would not work as intended:

SparkleShare Android app

The SparkleShare for Android app [source code here] allows you to download files from a SparkleShare repository only, not to upload them. Also, the downloading has to happen manually.


  1. Install git-daemon on the computer that should run the SparkleShare server. (This may not be needed, as a SparkleShare server is nothing else than a git server, and that is set up below.)
  2. Install the SparkleShare server on your PC or a web server where you have sufficient rights, according to these instructions.
  3. Install sparkleshare-dashboard according to these instructions, on the computer that also runs the SparkleShare server. (In the step to start the Redis server, use this command on Ubuntu: redis-server /etc/redis/redis.conf.)
  4. Install SparkleShare for Android on your phone.
  5. Install SparkleShare, see Install SparkleShare 1.0 In Ubuntu (Dropbox-Like File Synchronization Tool). Because, even if you installed the SparkleShare server on your own local PC, you cannot directly put the files into the server's directory to be synced. It will only sync files in directories that are watched, and that is done by the SparkleShare client.
  6. Configure this all.
    • When trying to start sparkleshare-dashboard, it may complain about not finding some nodejs module. In its directory, call "npm link <modulename>" for every module it complains about, to solve this [source].
    • Note that the SparkleShare client only works with its own generated SSH key. It will give you the SSH pubkey as "unique link code". Put it into the ~/.ssh/authorized_keys file on the SparkleShare server's host, for the user running the SparkleShare server. This gives the SparkleShare desktop client access, while the Android client gets access via the "device pairing" function of sparkleshare-dashboard.
    • Note that the link code required by SparkleShare for Android is NOT the SSH pubkey handed out by the SparkleShare desktop client as "unique link code", but instead the 10-letter or so code that appears as text and QR code when using the "device pairing" function of sparkleshare-dashboard.
    • When you can see folders in the sparkleshare-dashboard web frontend, you should be able to see them on a paired Android device, too. But I was only able to get to this point by making the SparkleShare repos "public" in config.js; this is of course not the correct way if you want private data syncing, but a good first step when configuring it all:
      exports.folders = [
        { type: 'git', name: 'Private GIT folder', path: '/home/storage/sparkleshare-data-local', pub: true }
      ];
    • As mentioned, I was not yet able to get the access permissions of the sparkleshare-dashboard app right for accessing SparkleShare repositories. I don't even know whether the user created for sparkleshare-dashboard needs to correspond to any other user (the SparkleShare server user, maybe? Probably not, as that one was set up without a password).
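The authorized_keys step above can also be scripted. A sketch: the function name is mine, and it assumes you saved the pubkey ("unique link code") handed out by the SparkleShare client to a file first.

```shell
# Append the SparkleShare client's SSH pubkey ("unique link code")
# to the current user's authorized_keys file.
add_link_code() {
    keyfile="$1"    # file containing the pubkey handed out by the client
    mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
    cat "$keyfile" >> "$HOME/.ssh/authorized_keys"
    chmod 600 "$HOME/.ssh/authorized_keys"
}
```

Run it as the user that runs the SparkleShare server, on the server's host.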

Gidder to host a SparkleShare repo

Gidder is a full-fledged Android git server. Normally, any git server could be used to host a SparkleShare repository, and this would make for a very lightweight, nice solution, without the sparkleshare-desktop, nodejs, redis database etc. needed by the SparkleShare for Android app (and requiring installation without the package format …). However, Gidder cannot be used with SparkleShare: it does not support SSH key authentication (only password authentication) [source], while SparkleShare supports only SSH key authentication (and no password authentication).

And in any case, this solution would still need a "git commit" and "git push" action on the Android device to get file changes to the desktop computer, while here we want something automatic.

dvcs-autosync or git-auto-sync

See the dvcs-autosync website and git-auto-sync website. These run on the Linux host and can auto-push to a git repo, probably including one hosted by the Gidder Android-based git server (I did not test this). However, the problem remains that on the Android side, no auto-syncing to a repository seems possible.

The problem: During Drupal admin work, for example after importing a bulk of posts from another source, you might want to assign many of them to a Drupal group (provided by the Organic Groups module). In the admin backend, there is no other way than editing every node individually and entering the group's name into the "Groups: Your groups" or "Groups: Other groups" fields. There has to be a faster way, though.

My solution with SQL

I'm going to solve this on the database level, as all the other options do not (yet) work for me, see below.

  1. Create a backup of your Drupal database.
  2. Save this little script into a new PHP file on your server:

      // PHP script to generate a set of SQL statements that assign a set of posts
      // to a group in Drupal 7.

      // Node ID of the group to assign the nodes below to.
      $group_id = "366";

      // Node IDs of the nodes to assign to group with node ID $group_id.
      // (One way to get the right set relatively fast is by copying the "System" column
      // from the "URL aliases" Drupal view.)
      $node_ids = array(183, 357, 358, 360, 361);

      foreach ($node_ids as $node_id) {
          echo "INSERT INTO og_membership (type, etid, entity_type, gid, group_type, state, created, field_name, language)
            VALUES ('og_membership_type_default', $node_id, 'node', $group_id, 'node', 1, 1359419168, 'og_group_ref', 'en');\n\n";
      }

  3. Adapt the script to contain the proper group node ID and the proper node IDs for nodes you want to assign to this group.
  4. Execute the script with: php generate-bulkassign-sql.php.
  5. Clear the Drupal cache by calling this in your website's document root directory: drush cc all. Otherwise, you might not see some changes, as Drupal reads from the cache where possible.
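Steps 4–5 can be combined into one helper that also pipes the generated SQL into the Drupal database via drush. A sketch: the function name is mine, and it assumes the script above is saved as generate-bulkassign-sql.php in the site's document root.

```shell
# Generate the SQL statements, run them against the Drupal database,
# then clear the Drupal cache. Run from the site's document root.
run_bulkassign() {
    php generate-bulkassign-sql.php > bulkassign.sql &&
    drush sql-cli < bulkassign.sql &&
    drush cc all
}
```

As always with direct database edits: create a backup first (step 1 above).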

Non-Working or Should-Be-Working Alternatives

Views Bulk Operations and og_actions

Views Bulk Operations is a Drupal module that can execute a configurable action on a selected set of content nodes. The node-to-group mass assignment should be possible together with the og_actions module, which provides the relevant VBO actions for groups and whose functionality is said to be integrated into the core og (Organic Groups) module as of Drupal 6 [source]. However, the relevant "Modify Group Content" resp. "Add the node to the specified group…" action did not show up for me. So there seems to be no pure UI way (for me, at least) to mass-assign content to a group in Drupal 7.

Views Bulk Operations and "Execute arbitrary PHP script"

There is another way to use Views Bulk Operations: together with your own PHP code that performs the action on every entity (here: a node) it is called on. An introduction to this technique is at "Copying data between fields with VBO".

In "Administration -> Configuration -> System -> Actions" ( there is a way to create an advanced action "Execute arbitrary PHP script", specifying your own PHP code to be executed on every node you call this action on. However any action I would create this way would not show up in the list of actions on top of the content items list in the admin backend, where I would normally select some content items, select that action from the dropdown and click "Update". Other advanced actions that I created did however show up, for example those created via the convert module.

Once the action shows up correctly, one would use this PHP code to get the action done (note: untested, maybe still buggy):

  // Script to assign one node to a pre-defined group via "Execute arbitrary PHP script" action.
  // Intended to be used with the VBO feature, see:
  // PHP code to be copied into the "Execute arbitrary PHP script" field
  // without the <?php ?> code wrapper!

  $gid = 6; // Node ID of the group to assign content to.

  // Source and explanations:
  // API docs:
  og_group($gid, array('entity' => $entity));


PHP script to assign all nodes to a group at once

The most comfortable way to run a short own script inside the Drupal environment is drush php-script. It will bootstrap the Drupal context for you, so you have access to Drupal classes and functions just as when inside a custom module. See also this introduction to drush php-script.

I tried to run this PHP code with the drush php-script technique, but so far it does not work ("Error: Class name must be a valid object or a string in <drupal-dir>/includes/ line 7752"):

  // Script to assign a set of nodes to a pre-defined group.
  // Intended to be used with the "drush php-script" command.

  $group_id = 1; // Node ID of the group to assign content to.
  $group_type = "node"; // Entity type of this group, for example "node", "comment".

  $node_ids = array(
    3, 238, 346, 123
  );

  foreach ($node_ids as $node_id) {
    $entity = node_load($node_id);

    // Source and explanations:
    // API docs:
    og_group($group_id, array('entity' => $entity));

    // $is_member_after = og_is_member($group_type, $group_id, $entity_type = "node", $entity, $states = array(OG_STATE_ACTIVE));
    // echo("Assigned node $node_id to group $group_id. Member state now: $is_member_after.");
  }


Automating the manual form editing with drupal_form_submit()

This is a new idea that I did not yet try. Like above, one would use drush php-script to execute a little PHP script, but the script would fill and submit the "node edit" form instead of dealing with the Organic Groups API directly. There is a working example for creating groups this way (not assigning nodes to groups though).

This refers to the reference Bitcoin client (bitcoin-qt), version 0.7.2. While it was catching up with the blockchain, it caused such a high I/O waitstate load that working on the same computer alongside it ranged from very frustrating to impossible.

Setting bitcoin-qt to low CPU and IO priority helped a bit, but not too much:

ionice -c 3 -p $(pidof -s $(which bitcoin-qt))

renice -n 20 -p $(pidof -s $(which bitcoin-qt))

Additionally setting the main application(s) you work with to higher priorities helps more, but it still won't be great:

sudo ionice -c 2 -n 0 -p $(pidof -s firefox)

sudo renice -n -10 -p $(pidof -s firefox)
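The four commands above can be bundled into two small helpers. A sketch: the function names are mine, and pidof -s is assumed to find the running process as in the original commands.

```shell
# Give a process the lowest CPU and disk I/O priority.
deprioritize() {
    pid=$(pidof -s "$1") || { echo "$1 is not running"; return 1; }
    ionice -c 3 -p "$pid"   # "idle" I/O class: disk time only when nobody else needs it
    renice -n 20 -p "$pid"  # lowest CPU priority
}

# Give a process high CPU and disk I/O priority (needs root).
boost() {
    pid=$(pidof -s "$1") || { echo "$1 is not running"; return 1; }
    ionice -c 2 -n 0 -p "$pid"  # "best effort" I/O class, highest level
    renice -n -10 -p "$pid"
}
```

Call deprioritize bitcoin-qt as a normal user; run boost firefox from a root shell, since raising priorities needs root (hence the sudo in the original commands).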

The underlying problem is the inefficient disk I/O activity of the BerkeleyDB used in the reference client bitcoin-qt. It will be solved in one of the next releases (after 0.7.2) by switching to a different DB engine [source]. Until then (or even for good), you may switch to the Blockchain.info My Wallet online wallet. It stores the private keys in encrypted form and does decryption only locally with JavaScript in the browser – so if they implemented that honestly, their service is not prone to theft the way other Bitcoin online wallets have been before. (But: use at your own risk anyway, do not blame me, do not put in your lifetime savings etc. …)

There are also alternative solutions for some situations like downloading the blockchain directly.

As of 2013-01-27, this is a problem because of many bugs both in the wordpress_migrate Drupal module and the Drupal Commons 7.x-3.0 distro (which is not yet released, but making good progress). So expect this guide to be out of date in a matter of days; then try following it while omitting the workaround steps (that is, include tag and category migrations, and omit the patching).

Step by step guide:

  1. Empty WordPress trash. Before starting the import process, empty the trash bin in your WordPress blog. Otherwise, already trashed posts and their comments are also exported in the WXR file and create confusing errors: they show up as "Unimported" blog entries in the "Content -> Migrate" statistics in the Drupal admin backend, and trashed comments cause error messages like "No node ID provided for comment" when trying to import comments, even though the corresponding post was not imported.
  2. Export as WXR. Export a WXR file from your WordPress blog, using the "Tools -> Export" function.
  3. Install wordpress_migrate. Install wordpress_migrate for your Drupal Commons 7 installation. For that, execute in the document root directory of your site:
    drush dl wordpress_migrate
    drush pm-enable wordpress_migrate
  4. Clear cache. Clear the Drupal cache (this is the sign for the migrate framework to register the new migration classes):
    drush cc all
  5. Fix comment migration. There is a bug that prevents importing comments, resulting in an error message like "Unknown data property field_target_comments. in EntityStructureWrapper->getPropertyInfo() (line 339 of commons-7.x-3.0-beta1/profiles/commons/modules/contrib/entity/includes/". A fix has been submitted in a bug report but is not yet accepted, so install it yourself:
    cd profiles/commons/modules/contrib/commons_notify;
    patch <comment_node_nogroup-1868776.patch;
    drush cc all;
    And if you attempted an unsuccessful migration already, go to "Content -> Migration" in the Drupal admin backend and do a "Rollback" and then a "Reset" for the comments migration, before attempting a new one by executing the "Import" action there again.
  6. Start migration. Log into the Drupal admin backend and navigate to "Content -> WordPress migration -> Import". Here, select the following settings (also see their official documentation), then click "Import":
    • Your WXR file as source.
    • "Wordpress categorie: Do not import", "Wordpress tags: Do not import". (If your target content type like (here) "Page" has any vocabulary fields, the "Do not import" option is not available. Choosing one of the vocabularies will then make the import fail with an error message like "MigrateException: No migration found with machine name ExampleTag in MigrationBase::getInstance() (line 444 of […]/sites/all/modules/migrate/includes/”. Or the same with “ExampleCategory”. So delete the vocabulary field before via “Structure -> Content types -> [your content type] -> manage fields").
    • "Text: Full HTML", "Comments: Full HTML". [source]
    • "Import pages to: Page" and "Import posts to: Page". [source] (Importing posts to "Post" would result in error messages like "Invalid argument supplied for foreach() File […]/commons_radioactivity.module, line 140". In contrast to pages, posts in Drupal Social Commons can belong to groups. So if you want to make posts, not pages, bulk-convert the pages to posts later with the node_convert module.)
    • Create page aliases to match the original WordPress addresses.
  7. Register migration classes, if necessary. If no importing is possible at all, it may be because on your system the drush cc all did not properly trigger the migration class registration [issue report]. Then instead, call drush migrate-auto-register [source]. This is however buggy in itself: try to access the "Content -> Migration" tab in the Drupal admin backend. It might fail with this error message: "MigrateException: Failure to sort migration list – most likely due to circular dependencies involving OgMigrateContent,OgMigrateUser,OgUiMigrateAddField,OgUiSetRoles in migrate_migrations() ". This is because the drush command erroneously registered these "Og*" names as migration classes. Undo that by deleting the rows from database table migrate_status where column machine_name starts with one of these Og* names. [source1, source2].
  8. Ignore dependencies, if needed. Now your import should run, but it might not succeed. Because, no attachments or comments are imported if even one post fails to be imported. Error messages in this case are like:
    Skipped ExampleAttachment due to unfulfilled dependencies: ExampleBlogEntry
    Skipped ExampleComment due to unfulfilled dependencies: ExampleBlogEntry
    In that case go to "Content -> Migrate" in the Drupal admin backend, select the migrations that have not run, check "Options -> Ignore dependencies" at the bottom, choose "Import", and click "Execute". This will import all comments and attachments for those posts that were properly imported.
  9. Import pingbacks too, if you want. At this point, the migration statistics in "Content -> Migrate" in the Drupal admin backend should show that everything was imported ok, except that there can be a non-zero number of unimported comments. These are most probably pingback comments. You can check that by looking into the database table migration_map_[migrationname]comment: find some source IDs that are not listed in column sourceid1 but appear in your WXR file as comment IDs between <wp:comment_id>…</wp:comment_id>. If these are all pingback comments, indicated by <wp:comment_type>pingback</wp:comment_type> in the WXR file, you know the cause and probably want to ignore them. If you want to import pingbacks too, you have to modify the WXR file by replacing all <wp:comment_type>pingback</wp:comment_type> with <wp:comment_type></wp:comment_type> (yes, that's correct: delete the value within this tag).
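The tag replacement described in step 9 can be done with a sed one-liner. A sketch: the function name is mine, and you should keep a backup of the original WXR file anyway.

```shell
# Turn pingback comments in a WXR export into normal comments so that
# wordpress_migrate imports them too. Writes a new file, keeps the original.
enable_pingback_import() {
    sed 's|<wp:comment_type>pingback</wp:comment_type>|<wp:comment_type></wp:comment_type>|g' \
        "$1" > "$2"
}
```

Usage: enable_pingback_import export.wxr export-with-pingbacks.wxr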
  1. Mailinator. In some sense a good idea: namely, when it is only about cutting the link to your real-world identity when entering an e-mail address (like for registering on some site). You have to access Mailinator via Tor though, or you might still get tracked down.
  2. You need either two friends to refer you, or to wait between a day and some weeks to get approved for an e-mail account.

More on that: section on e-mail in Tech Tools for Activism: "Signing up for an independent email address is not an immediate automated process, because of spammers. For example, some providers issue email addresses through a friend-of-a-friend basis. If you know someone with an email address there, you can fill in the sign-up form. Alternatively, you can sign up for an account at other providers. You will need to fill in a short form letting them know why you want the email, and they normally respond in 24 hours."

"Stay away from email services that make their money by collecting user data: it’s a business model that’s bad for privacy. You may have to pay a few dollars a month to a private email service that encrypts and securely stores your data. Some examples are Unspyable, Countermail, Silent Circle, or Lavabit." [source]