Symptoms

The exact error when executing the cron job manually (/usr/bin/php -f /var/www/vhosts/example.com/httpdocs/cron.php) was:

PHP Fatal error:  Allowed memory size of 536870912 bytes exhausted (tried to allocate 7 bytes) in /var/www/vhosts/example.com/httpdocs/lib/Zend/Db/Statement/Pdo.php on line 290

Solution

This error can happen if Magento's cron_schedule table contains too many records, accumulated from the log entries of past cron runs. So delete all its entries, either directly in the database with TRUNCATE TABLE cron_schedule; or using phpMyAdmin or a similar tool.
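
If you prefer doing this on the command line, something like the following should work (a sketch; magento_user and magento_db are placeholders you would have to adapt to your installation):

# Empties the cron_schedule table; prompts for the database password.
mysql -u magento_user -p magento_db -e "TRUNCATE TABLE cron_schedule;"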

Now, when calling the cron job again from the command line, it finishes normally, without a crash.

Prevention

It seems that the table could grow that large because in "System -> Configuration -> System -> Cron", "History cleanup every" was set to "1440", under the assumption that this value is in minutes. But instead, it seems it is in days. The same applies to "Success history lifetime" and "Failure history lifetime" there. So better set all three to some meaningful value like "30".
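
To check which values are currently configured, you can also look them up in the database (a sketch; the system/cron/% path prefix is where these settings should live in Magento 1's core_config_data table, and note that values only appear there once they have been changed from the defaults):

# Lists the cron-related configuration values, if set.
mysql -u magento_user -p magento_db \
  -e "SELECT path, value FROM core_config_data WHERE path LIKE 'system/cron/%';"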

Discussion

This "Allowed memory size exhausted" error also persisted after installing the AOE Scheduler module and removing all scheduled cron tasks, then calling the cron job manually. This hinted to the fact that this error is unrelated to any single of Magento's various cron tasks (so also independent of installed contrib modules), and instead happens in Magento core. (This is Magento 1.5.0.1 by the way.)

Also, searching for the error message on the Internet shows that the "Allowed memory size exhausted" problem at this specific code location is a rather generic error that also happens, for example, when processing too many product records at once (see for example here or here).

This led me to look at Magento's cron_schedule table; in this case it had 673,498 entries according to AOE Scheduler (in the database, even 830,881 rows). These were too many records for Pdo.php to process within the memory limit: in line with other reports of the "Allowed memory size exhausted" error in Pdo.php line 290, the error was caused here by too many records in this cron_schedule table.
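
To check whether your own installation is affected, counting the records per status is instructive (a sketch; Magento 1's cron_schedule table has a status column with values like pending, success, missed and error):

# Shows how many schedule records exist per status.
mysql -u magento_user -p magento_db \
  -e "SELECT status, COUNT(*) FROM cron_schedule GROUP BY status;"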

Instructions

These are instructions for importing messages held in KMail 2 into Thunderbird. The versions used were KMail 4.10.5 and Thunderbird 17.0.8, but the exact versions should not matter for this to work.

  1. Install ImportExportTools in Thunderbird.
  2. In Thunderbird, create a folder into which you want your e-mails imported. This can be in "Local Folders", which is recommended, but also in an IMAP account. In the latter case, importing and uploading happen at the same time, so there are two sources of error, and if the second step fails it cannot easily be repeated without repeating the import step as well.
  3. Right-click on the new folder and select "ImportExportTools -> Import Messages".
  4. In the file selection dialog, go to ~/.kde/share/apps/kmail/mail/<your folder to import>/cur/, select "All Files" in the file filter at the bottom, and select all these messages (simplest by pressing Ctrl+A).
  5. After the import, all messages are marked as unread. Right-click the folder again and select "Mark folder as read" to fix this.

This should import all messages nicely and without errors. You may compare message counts to those in KMail.
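
Since a maildir folder simply stores each message as one file, you can determine the message count on the KMail side from the command line (a sketch; "Inbox" is a hypothetical folder name):

# Counts the message files in the KMail maildir folder.
ls ~/.kde/share/apps/kmail/mail/Inbox/cur | wc -l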

Discussion

The above solution seems quite straightforward, but it really was the only working solution I found. Various other solutions are supposed to work, but did not:

  • Uploading local e-mails to an IMAP folder in KMail, then downloading them again from there in Thunderbird. This should usually work, but in some cases it triggers errors in KMail. For example, creating a new folder is not possible in KMail if the e-mail server is based on Courier.
  • Selecting a bunch of e-mails in KMail, then right-clicking on them and selecting "Save as …", gives the option to save them all to one .mbox file, which can then be imported with "ImportExportTools -> Import Mbox files …" in Thunderbird. However, that import often fails, so that no e-mails are imported at all.

Recording a video of Linux desktop content, for example to create a screencast presentation, is quite simple with recordmydesktop or its GUI version gtk-recordmydesktop.

However, if you want to capture the system sound as well (not what you might speak live into a microphone), it gets a bit more difficult. Here is one possible solution with the PulseAudio sound server (available by default in Ubuntu Linux):

  1. Install pavucontrol by executing sudo apt-get install pavucontrol.
  2. In gtk-recordmydesktop, go to "Advanced -> Sound -> Device" and change the value from "DEFAULT" to "default".
  3. Start pavucontrol.
  4. Do a test recording with gtk-recordmydesktop and, while it's recording, go to the "Recording" tab in pavucontrol; there, in the entry "ALSA plug-in[recordmydesktop]:ALSA capture from", change the value from "Built-in Audio Analog Stereo" to "Monitor of Built-in Audio Analog Stereo".
  5. In tab "Input devices" use the dropdown on the bottom to display also monitor devices, look for the ""Monitor of Built-in Audio Analog Stereo" device and make sure it's not silenced and the volume gauge is at 100%. When something is playing through your speakers, you have to get a signal showing up there, independently of if you're recording at the moment.
  6. Record with gtk-recordmydesktop as usual.

This solution was taken from ubuntuforums.org thread 1509398.
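
If you prefer the command line over the GUI, the device setting from step 2 corresponds to recordmydesktop's --device option (a sketch; the output file name is arbitrary):

# Record the desktop, reading audio from the "default" ALSA device,
# which is the PulseAudio plug-in while PulseAudio is running.
recordmydesktop --device default -o screencast.ogv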

There is also another solution involving only ALSA and the snd-aloop kernel module, recording from a loopback sound card. However, I could not get it to work.

Acquia Drupal Commons 3 is a great Drupal 7 distribution for creating a social network type of site. Casetracker is a nice, extensible support ticket management system for Drupal 7 that can also be used as a task manager for distributed collaboration on tasks. Here's how to integrate both cleanly, as I did for the Edgeryders site:

  1. Create a new Drupal Commons integrated content type as a module. This is really simple: take one of the existing, quite generic Drupal Commons content type modules (commons_post), copy it to another module (here, commons_tasks) and replace all occurrences of post / posts with task / tasks respectively; see the sketch after this list. In my case, the result is a module commons_tasks. It is not yet readily packaged as a Drupal module, but you can already download the files from that link. That's better than creating it yourself, since a few other tweaks are included (like adapting the link for creating a new task to include a reference to the Casetracker project ID as a parameter, telling it for which project to create the task).
  2. Install the new module. Here: Save the directory commons_tasks under ./sites/all/modules and call: drush pm-enable commons_tasks.
  3. Adapt Casetracker settings. In the "Casetracker settings" screen (at /admin/config/casetracker/settings), set "Group" to be the only project node type and "Task" to be the only case node type.
  4. Add og_group_ref to the new content type. Go to "Administration -> Structure -> Content types -> Task -> Manage fields" (/admin/structure/types/manage/task/fields) and add the existing field og_group_ref.
  5. Configure field og_group_ref. That is, configure the field and field display settings to be the same as in the Drupal Commons content types, e.g. Post. Especially take care to enable the "field prepopulate" setting to select the proper group membership as default when creating new content from within a group.
  6. Configure permissions for Casetracker. To be done at /admin/people/permissions. Note that you don't need to give any permissions in the "CT Basic" section because these only relate to the Casetracker's default project and case content types, which we don't use.
  7. Disable the casetracker_basic module. It's not needed because we use other content types for Casetracker projects and cases here. Execute: drush pm-disable casetracker_basic.
  8. Adapt comment settings. In the content type settings for "Task" (/admin/structure/types/manage/task), adapt the comment settings accordingly. You probably want neither a title nor threading for the comments.
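
The module copy from step 1 can be sketched in shell commands like this (a rough sketch with hypothetical paths; my actual commons_tasks module contains additional manual tweaks, so blindly replacing strings is only a starting point):

# Copy the stock commons_post module; its location may differ per installation.
cp -r profiles/commons/modules/commons/commons_post sites/all/modules/commons_tasks
cd sites/all/modules/commons_tasks
# Rename the module files ...
for f in commons_post.*; do mv "$f" "${f/commons_post/commons_tasks}"; done
# ... then replace the identifiers; review the result manually afterwards.
grep -rl 'post' . | xargs sed -i 's/commons_post/commons_tasks/g; s/posts/tasks/g; s/post/task/g'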

As a result, all tasks are now handled by the Casetracker module and can also be managed in its task manager at /casetracker. But additionally, tasks are content of organic groups, like posts, wikis, polls and questions in Drupal Commons, and are nicely integrated with the Drupal Commons group browsing widget, content creation widget and notification system, including the "Follow" and e-mail notification features.

For example, you might be able to read and understand a foreign-language website sufficiently well using machine translation (like Google Translate), but machine translation is mostly not sufficient for contributing your own content to it. So what to do? The optimum would be a webservice that offers instant translation for small texts, or translation with a short turnaround time like one hour, after which you can insert your statement into the website or forum. Ideally, such a webservice would be organized P2P: you would earn credits by doing such instant translations for others, and would spend them on getting your own translations done.

Funnily enough, this exact thing does not exist. Here's what I found instead, ordered by adequacy, starting with the best solutions:

  • Fiverr. So far the best solution I could find for cases where you don't need super professional (just understandable) translation quality. You will easily find people offering 500 – 1000 words of translation for 5 USD. Turnaround time can be as fast as 24 hours. Or try to make a special deal with somebody offering translation, language lessons or similar, so that you agree on a time to chat where you can get the translation right away, or agree that you can send multiple e-mails with short texts which the translator will translate immediately or within a day, up to a total word amount.
  • SpeakLike Chat Translation. Indeed they do realtime translation in chats, and by chatting with yourself on two accounts you can of course intercept the translation and put it into a webpage later on. The price is about 0.05 EUR/word [source].
  • SOS Translator Chat. They offer live translation by instant messaging with a translator. They charge a starting rate plus a per-minute rate, not a per-word rate [source]. So this is rather difficult to fit in when contributing content to a website.
  • VerbalizeIt Skype-embedded translation. It seems they offer text-to-text realtime translation for Skype chats. However it's a professional service, so it's not cheap, seemingly starting at 0.17 USD/word [source].
  • ackuna. Nice idea: a crowdsourcing site for translating. You translate something for others, and you can get something translated by others. However, there is no system where you have to earn points by translating before you can get something translated yourself, so at first glance it seems you can't count on your text being translated in reasonable time. It can take months. Also, the site is mostly for short phrases as found in software applications – while it is possible to enter paragraphs of text, this will probably quickly overstretch the goodwill of the uncompensated volunteers. The site also accepts special file formats for software i18n, but is not limited to that: there is also a textbox to paste the text you want translated [source, at "How do I create a project?"]. I have to admit that I don't like Ackuna's translator interface too much: it's a good start, but a lot of tweaks would be needed for great usability. Like translating in lists (with tabbing), AJAX voting in lists (saving more page load time), a transactional one-point-per-word system where people have to earn points by translating before posting their own projects, showing only entries without any submission while translating, etc.
  • Transfix.it. A software where humans proofread and fix machine-translated texts.
  • WikiTranslation. A site for gratis, community-generated translations which can be voted.
  • OneHourTranslation. Fast turnaround time of at most one hour to start and one hour per 200 words. But mostly too expensive for private use in forums etc. (0.06 EUR/word).

Some other interesting finds about translation include:

  • Linguanaut Free Translation. Translation is done by volunteers, which means they are of course not compelled to do the job, there is no deadline, and one can't be sure when the job gets done. But for occasional, not time-critical translations it seems great. It is of course not a good idea to exploit volunteers with high-volume work.
  • Free human translation at Translatorsbase.com. Only for single words and short phrases, but really useful for those. The translators get a higher ranking by providing such free translations.
  • Gengo. Offering people-powered translation services.
  • Babylon Human Translation. By well-known translation software provider "Babylon".
  • TYWI Mobile Interpreter. Offers simultaneous human translation of speech, via a professional translator, connected by Internet.
  • Wikipedia on Telephone interpretation.

One of my latest projects was setting up the Git source code management system on a customer's server, so that different developers can have different access rights to repositories. How to do that?

For the really simple case of one set of repositories with r/w access for every developer in your organization, you can use a git server, with or without a dedicated user, and with or without SSH keys [source]. For the complex case of even intra-repository rights management, there is Gitolite; or Gitosis, but that is no longer maintained [source]. But what is the simplest solution for the middle ground: multiple users, each with access to a potentially different set of repositories?

Here's a solution for that – I haven't seen it documented anywhere, but I guess it's quite obvious. I simply extend the out-of-the-box git server scenario by running multiple git servers in parallel, each for one (non-intersecting) set of repositories with one (potentially intersecting) set of developers who get r/w access. So, for example, you can have a setup like this:

  • team 1
    • repositories: project 1, project 2
    • developers with access: Alice, Bob, Andrew
  • team 2
    • repositories: project 3
    • developers with access: Alice, Juan
  • team 3
    • repositories: project 4, project 5
    • developers with access: Juan

Setting up a team

For every team, you set up one dedicated user and git server on your host. You can have as many servers as you want, and adding one always follows this procedure. Let's assume we want to set up a team devteam1.

  1. Create a new user and set a password for it. Say you want to name the user devteam1, then execute: adduser devteam1.
  2. In /etc/passwd, change devteam1's shell to /usr/bin/git-shell (or wherever it is on your host – see which git-shell). This prohibits interactive SSH logins with this user, but still allows git commands. Both steps are shown as commands below.
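
In commands, the two steps look like this (using usermod -s as the equivalent of editing /etc/passwd by hand):

# Step 1: create the team user; adduser will prompt for a password.
adduser devteam1
# Step 2: restrict the account to git commands by making git-shell its login shell.
usermod -s "$(which git-shell)" devteam1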

Adding a developer to a team

Say you want to add developer Alice to your team devteam1, which means giving her r/w access to all repositories of devteam1. The simplest solution is to hand out the password of system user devteam1 to the new developer. This requires, however, entering it on every push and pull. (If you want to avoid this, the usual and widely documented method is to use SSH keys with git, sketched below.)
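
For completeness, a sketch of the SSH key variant (the key file name alice_id_rsa.pub is hypothetical; the public key has to be installed as root on the server, since git-shell prevents tools like ssh-copy-id from working with this account):

# On Alice's machine: generate a key pair, then send the public key to the admin.
ssh-keygen -t rsa
# On the server, as root: install the public key for user devteam1.
mkdir -p /home/devteam1/.ssh
cat alice_id_rsa.pub >> /home/devteam1/.ssh/authorized_keys
chown -R devteam1:devteam1 /home/devteam1/.ssh
chmod 700 /home/devteam1/.ssh
chmod 600 /home/devteam1/.ssh/authorized_keys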

Note that using one shared user account for writing to your remote repository does not mean that commits from all developers will use the same author information. The password needed for git push is just for transferring your commits; the commits themselves were made earlier, in your local git repository, using the author information configured in git.

You can determine git's author information by looking at user.name and user.email in the output of git config --list (potentially overridden by the environment variables GIT_AUTHOR_NAME and GIT_AUTHOR_EMAIL [source]). You can configure this author information for your user in ~/.gitconfig on your local computer, and if needed override it per repository in .git/config, by adding a section like this there:

[user]
        name = Alice Example
        email = alice@example.com
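
Equivalently, you can let git write these settings instead of editing the files by hand:

# Writes to ~/.gitconfig:
git config --global user.name "Alice Example"
git config --global user.email alice@example.com
# Run inside a repository to write an override to its .git/config:
git config user.name "Alice Example"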

Adding a repository to a team

Say you want to add repository project1 to team devteam1. On your server, do this:

  1. cd /home/devteam1
  2. mkdir project1.git
  3. cd project1.git
  4. git init --bare
  5. chmod -R u=rwX,g=rX,o= .
  6. chown -R devteam1:devteam1 .

Explanations:

  • git init --bare creates a "bare" repository, which means one that stores all the code in objects, but without a working directory (the set of source files a developer works with on her local computer). On your local computer, the git repository resides in a .git subdirectory, while a "bare" repository is created in the current directory, not within a .git one.
  • The chmod and chown commands make sure the devteam1 user is the one with write access to the repository; otherwise, git push commands involving this repository (using devteam1@example.com:/home/devteam1/project1.git) would result in this error message: "error: insufficient permission for adding an object to repository database ./objects" [source]. They also make sure no other user has access, not even read access.
  • Note that read and read/write access to git repositories on the server is controlled by the file permissions of that repository's files. So rights management is effectively just Unix file permission management: every Unix system user with read access to these files can be used in a git command to read (clone and pull) the repository, and every Unix system user with write access can be used in a git push command. So if you want to create a world-readable repository, make its files world-readable. If you don't want the simple "team -> repositories" hierarchy we use here, you can instead create a group with r/w access for every repository on the server, plus a system user per developer, and add each user to the groups of all repositories you want him or her to access r/w; see the sketch below.
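
A sketch of that group-based variant, with hypothetical user, group and path names (git init --shared=group takes care of group-writable permissions for newly created objects):

# One Unix group per repository.
groupadd project1
git init --bare --shared=group /home/git/project1.git
chgrp -R project1 /home/git/project1.git
# Give developer alice (who has her own system user) r/w access.
usermod -aG project1 alice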

Working with your repositories

Say you want to access a repository project1 that belongs to devteam1, and your server is example.com:

  • Cloning the git repository to your local computer: git clone devteam1@example.com:/home/devteam1/project1.git. This creates a local directory project1. Go there and make your changes to the code.
  • Getting and merging changes from the repository: git pull.
  • Committing your changes to the repository: git add .; git commit -m "Commit message."; git push. You will be asked for the password of user devteam1 on the server.

Here's my collection of favorite DNS tools and techniques to troubleshoot issues: