AWS CLI is the Amazon Web Services command line interface tool, the new unified utility to manage your cloudy Amazon things.

People are usually told to install it with pip install awscli (including in the official docs), but fans of Linux package systems will hate this solution, because

  • You get one more package system (pip, for Python packages) whose updates you have to care about – and which is exactly the one you will forget. Not that it would be any worse than already having separate package management systems for Firefox add-ons, Chrome extensions, Gnome extensions, Ruby gems, Drupal modules, and WordPress plugins. All of that is just plain bad. Grrrr.
  • You no longer have a single point of control and overview for what software is installed on your system.

So, let's try installing the AWS CLI from packages. Fortunately, there are fairly recent packages (awscli 1.2.9, from 2014-01) for the upcoming Ubuntu 14.04 (Trusty Tahr). We are on Ubuntu 13.10, however, but we can fix that by adapting these instructions for Debian to our situation:

  1. Add a package source for Ubuntu trusty, by adding a line like this (with your Ubuntu mirror) to the bottom of /etc/apt/sources.list:
    deb [your Ubuntu mirror URL] trusty main universe
  2. Create a preference for Ubuntu trusty packages that will allow installing them when the distribution is specified explicitly, but will not select them automatically even when their version is newer than the local one. For that, create a file /etc/apt/preferences.d/ubuntu-trusty.pref with the following content:
    Package: *
    Pin: release a=trusty
    Pin-Priority: 200
  3. Install awscli and its dependencies: sudo apt-get update; sudo apt-get install -t trusty awscli.
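The three steps above can be sketched as a short shell session. This is a sketch, not a tested script: the mirror URL is a placeholder (use your own Ubuntu mirror), and you should review each file change before applying it.

```shell
# Add the trusty package source (replace the mirror with your own):
echo "deb http://archive.ubuntu.com/ubuntu/ trusty main universe" \
    | sudo tee -a /etc/apt/sources.list

# Pin trusty packages to low priority so they are only installed on request:
sudo tee /etc/apt/preferences.d/ubuntu-trusty.pref >/dev/null <<'EOF'
Package: *
Pin: release a=trusty
Pin-Priority: 200
EOF

# Install awscli and its dependencies from the trusty source:
sudo apt-get update
sudo apt-get install -t trusty awscli
```

The Pin-Priority of 200 is below the default of 500, which is what keeps apt from upgrading your other packages to trusty versions on its own.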

You are ready to use it now (try aws --version). Note that it includes the functional equivalent of the Amazon EC2 CLI tools, and many more Amazon CLI tools – you will very probably not need to install any Amazon-specific CLI tools any more, regardless of what outdated how-tos tell you.

Also see Amazon's official AWS CLI documentation.


This issue started to occur right after installing Ubuntu 13.10 on a ThinkPad T61 with an "Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 03)" according to lspci. So, whenever I do one of these:

  • adjust volume with the hardware volume silence / up / down buttons, only while not in a Skype call
  • adjust volume with an equivalent software mixer feature, only while not in a Skype call

the effect is this:

  • the next Skype call will be completely silent (both output and input do not work)
  • the Skype call after that will play a randomized variant of rattling noise for output, while input probably works fine (judging by the level bar indicator); interestingly, the closing sound of the Skype call will play correctly, which indicates that only one of the audio streams is not initialized correctly (the Gnome sound settings panel shows that two are in use during a call)
  • the Skype call after that will play also the output, but with lots of stuttering in between
  • the Skype call after that will play the output correctly, and the input will also work fine

For this to be repeatable, allow for some 3-5 seconds of separation between calls. At times, these four steps will also be reduced to three, or the "rattling noise" or "stuttering" step could also occur two or more times.

In addition, there is a similar problem with Skype chats. Whenever I do one of these:

  • send a message

the effect is this:

  • a randomized variant of permanent, rattling noise (the "message sent" sound will not be played)

This noise will also persist through Skype calls initiated afterwards. It will (in most cases) change when sending another message, and after some messages it will also completely disappear again. The surefire way to make it disappear, though, is for your chat partner to send you a message.


The reason for this behavior is that Ubuntu 13.10 ships with PulseAudio 4.0, and Skype does not properly support that so far [source].

The proper solution is to simply install Skype from the Canonical Partner repositories (so, not by manually downloading a Skype .deb via its website). That package contains a patch to the skype.desktop file which now starts Skype with a proper workaround, as "env PULSE_LATENCY_MSEC=60 skype" [source].

However, if like me you are used to starting applications via the Alt+F2 mini-terminal, you might have this patched Skype package installed and still see no fix for your audio in Skype. As you probably want to keep starting Skype by hitting Alt+F2 and typing skype, here is a way to do so:

  1. Create a file /usr/local/bin/skype, with the following content:


    #!/bin/sh
    # Workaround for the Skype incompatibility with PulseAudio 4.0, as explained above.
    # This is already applied in the Ubuntu Saucy Skype package from the Canonical Partner repos;
    # however, their solution of prepending an environment variable in the .desktop file is not
    # used when starting Skype via the Alt+F2 mini-terminal. To apply the solution there too, we
    # need this file to supersede the normal Skype command. Use "which skype" to confirm that the
    # skype command afterwards indeed refers to /usr/local/bin/skype instead of /usr/bin/skype.

    exec env PULSE_LATENCY_MSEC=60 /usr/bin/skype "$@"

  2. Make the file executable, for example with sudo chmod +x /usr/local/bin/skype.
  3. Check with which skype to make sure Skype is now called from /usr/local/bin/skype instead of from /usr/bin/skype.
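For convenience, steps 1–3 can also be done in one go from a terminal. A sketch: it overwrites /usr/local/bin/skype, so check first that you do not already have your own script there.

```shell
# Install the wrapper script, make it executable, and verify the lookup order:
sudo tee /usr/local/bin/skype >/dev/null <<'EOF'
#!/bin/sh
# Start Skype with the PulseAudio 4.0 latency workaround.
exec env PULSE_LATENCY_MSEC=60 /usr/bin/skype "$@"
EOF
sudo chmod +x /usr/local/bin/skype
which skype   # should print /usr/local/bin/skype
```

This works because /usr/local/bin usually precedes /usr/bin in $PATH, so both the mini-terminal and your shell pick up the wrapper.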

There are also some dirty workarounds (mentioned here only to learn something about Skype and PulseAudio, not to use them):

  1. Do not use volume buttons except when in a Skype call. This is not practical of course.
  2. Disable Skype sound events for "Call Connecting" and "Chat Message Sent", and maybe all others. This can be done in the Skype "Options -> Notifications" menu item. This however can result in Skype calls falling completely silent, and this will also not fix itself in the next call then. (It can however be fixed, right during the call even, by playing some seconds of sound from a different application.)
  3. Create the permanent random noise, then mute it via the "System Sounds" stream. This works as follows:
    1. Disable the "System Sounds" PulseAudio stream by going to the Gnome Sound Settings dialog, there to tab "Sound Effects" and set "Alert Volume: Off". Alternatively, in Skype go to "Options -> Sound Devices -> Open Pulse Audio Volume Control -> Playback", and click the "Mute audio" button for stream "System sounds".  In both cases, you will notice that the Skype sound channel in the "Applications" tab (if there is one, usually only after step 2) also goes silent, and with it the Skype notifications in chats, and your volume level adjustment feedback sound goes silent as well (explanation see below).
    2. Let Skype create a sound mess. You have to create that permanent random noise, either by using the "send chat message" technique from above (with a notification sound enabled, of course), or by test-playing any notification sound via Skype's "Options -> Notifications -> Test Event" button. (You cannot use the random noise generated by a Skype call with broken audio from above.)
    3. Now do whatever you want in Skype; the problem with corrupted sound in calls will not appear any more, even when using volume adjustments between calls.
    4. You have to repeat step 2 after a restart of Skype. (The muting of the System Sounds stream stays active after Skype quits because it is not a Skype feature; however Skype is only immune against creating corrupt audio in calls once it has corrupted the System Sounds audio by executing step 2.)

Explanation Attempt

Let me try a little explanation (just from observations; I do not know how PulseAudio works internally): Skype's problem seems to be that it cannot properly write to the "System Sounds" stream if another application has written to it in between (including the feedback sounds of the volume change buttons). When Skype tries to write to the System Sounds stream in this situation, the result is that noise (as exemplified by the "send chat message" case). Or, for some reason, it can also result in all Skype sounds being muted, and on a second try in that noise (as exemplified by the phone call example; it is really due to the notification sounds played at the start of the phone call, since there is no such problem when the notification sounds are disabled).

So, the third workaround above works by letting Skype try (creating the noise), but muting that noise away (before or afterwards). As Skype's attempt to write to the channel never returns, it stays blocked, and Skype has to use a different (newly created) channel for the notification sounds when starting a phone call, as can be seen in the "Applications" tab of the Gnome sound settings. That might be why playing the notification sound now no longer results in corrupted audio. That Skype indeed tries permanently to write to the System Sounds channel (and only creates noise doing so) can be recognized from the fact that the noise stops when Skype exits: then, at last, it stops its desperate attempts to write to System Sounds.

Other Issues: Silent Input and Output

Muted input. For some reason, at times it happens that Skype shuts off the microphone input when exiting. (This is possibly related to letting Skype access one's input mixer levels in its options dialog.) It can be fixed by going into the Gnome Sound Settings dialog, to tab "Input", and switching "Input volume" off and on again. When you see the input level bar moving while sound is present, all is well again.

Muted output. In other cases (as seen above), Skype output might be completely muted, while still working for other applications. Playing any sound from another application will fix this. And probably, going to Gnome Sound Settings and switching "Output volume" off and on again will also help.

Checkvist is a nice, web-based outline editor. Since it uses a hierarchical content structure, and mindmapping software like Freemind does the same, interfacing between them can work well in theory. There are some quirks and tips, though, that we will explore here in practice.

I selected Checkvist among alternative solutions for the following reasons:

  • Works fast with large amounts of text. "Large amounts" here means something like 400k characters (400 pages A4), which Freemind can handle easily. The web-based, open source mindmapping tool Wisemapping, for example, was only able to work with a few pages of text before getting sluggish.
  • Unlimited lists, list items and other features in the gratis version. In contrast, Workflowy is also nice but offers only 250 free list items per month …
  • Collaborative editing in the free version. Because that is what I need it for: a real-time collaborative interface for some content I developed so far in Freemind mindmaps.
  • Public sharing in the free version.
  • Comfortable importing from Freemind 1.0. In contrast, Wisemapping for example supports direct imports only from Freemind 0.9, so you would need Freemind 0.9 installed as well, to copy, paste and save your Freemind 1.0 mindmap in 0.9 format before uploading it.

How to import Freemind content into Checkvist

  1. You can only import plain text. No icons, colors, HTML rich text formatting of nodes etc. – but you do not have to remove them beforehand either.
  2. Make sure you do not have multiple paragraphs of text in any one node. Otherwise the second and following paragraphs would start without indentation in the text version, leading to hierarchy level errors during the import. So, split every node that currently has multiple paragraphs, using this technique:
    1. Position the node selection at the root node of the branch you want to export.
    2. Do "Navigate -> Unfold All" (Ctrl + Shift + End).
    3. Do "Edit -> Select Visible Branch" (Ctrl + Shift + A).
    4. Do "Format -> Use Plain Text". This will convert bulleted and numbered lists into normal paragraphs, as else "Split Node" would not be able to break them up.
    5. Do "Tools -> Split Node".
    6. Do "Navigate -> Fold All" (Ctrl + Shift + Home).
  3. Copy the content you want to import. Select all nodes you want to appear on the first level after the import, and do "Edit -> Copy" (Ctrl + C).
  4. In Checkvist, select "Import" and paste the clipboard content.
  5. Click "Import tasks". It will import your Freemind content as indented text.


Wait, the *Secure* Socket Layer in HTTPS can be insecure? Yep, in the age of total surveillance, it can.

Good news: to the best of our knowledge, there is still secure SSL, too. (But don't trust these instructions with your life or the lives of your website's users – you have to become your own expert!) The considerations below take into account both secrecy and server performance. The tips are in decreasing order of importance.

(1) Your users need an uncompromised computer!

Because if we start with compromised hardware, all is lost anyway. The malware can simply grab your communications from the browser screen and send them to the surveillance body. No need to break SSL, then. But even breaking SSL would be simple in that case: the malware would have an easy job hiding man-in-the-middle attacks, by preventing the tools mentioned below from detecting SSL certificate changes.

The best first tip for having a non-compromised computer is having two: one for daily work, one only for the high-value communications. And you would not go near any threat on the Internet with the second one.

(2) Enforce HTTPS for all connections

For performance reasons, you might think about using SSL connections only for login (password transmission), or at least only while users are logged in (also protecting the content they post, and the session cookie, which can otherwise be used for session hijacking). However, surveillance can derive lots of metadata, behavioral data etc. also from looking at what people read while not logged in. With proper SSL speed optimization, the server load of enforcing SSL everywhere should be manageable.

(3) Throw out insecure SSL cipher suites

Configure your webserver to not use:

  • the utterly crappy old "export" cipher suites
  • plain DES (triple DES is ok though)
  • RC2
  • RC4 (which is kind of broken)

See the source for these recommendations. Note: The SSL Labs SSL Test is a nice site to check if your configuration works as intended.
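You can check locally which suites a given OpenSSL cipher string leaves enabled. As a sketch (the exact suite set depends on your OpenSSL version): starting from OpenSSL's HIGH group, which already excludes export-grade suites, single DES and RC2, and explicitly dropping RC4 and anonymous suites on top of that:

```shell
# List the cipher suites a server would offer with this cipher string;
# none of the listed suites should use RC4, RC2, single DES or export grade:
openssl ciphers -v 'HIGH:!aNULL:!RC4' | head -n 10
```

The same string can then be used (adapted to your webserver's syntax) in, for example, an Apache SSLCipherSuite or nginx ssl_ciphers directive.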

From the remaining ones, all cipher suites that use at least 128-bit symmetric keys are OK. That is roughly equivalent to the security of 3072-bit RSA keys [source, p. 64], which are sufficient protection against brute-force attacks even beyond 2030 (as we will see below). Which means: encrypted data recorded now could only be broken some time after 2030. This is further mitigated by the fact that the symmetric keys are only used for one SSL session, and using brute-force attacks to decrypt one such small session from 20+ years ago is almost certainly not worth the effort in 2040 or so.

In a few years, when all browsers support higher-grade AES cipher suites and so on, you would of course switch to allowing only at least 192 or 256 bits of security. That is equivalent to 7680 and 15360 bit RSA keys respectively [source, p. 64] and comes at relatively negligible performance cost: about 30% more CPU time for the same data throughput [source].

(4) Use only DHE Perfect Forward Secrecy key cipher suites

We want to use perfect forward secrecy (PFS) cipher suites. PFS means: when the private key of the server is leaked at some time, recorded communication of the past still cannot be decrypted. It only allows the attacker to impersonate the server for negotiating keys for new sessions, until the SSL certificate expires (which is hopefully soon).

Here is how to configure your webserver for using PFS cipher suites.

However, not all PFS ciphers are the same. As Bruce Schneier writes: "Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can." [source] This means: use Diffie-Hellman key exchange (DHE) cipher suites, not Elliptic-curve Diffie-Hellman key exchange (ECDHE) [source]. The danger of weak ECDHE ciphers is that recorded, encrypted communication could later be broken with limited-effort attacks (albeit only session by session). The downside of DHE, on the other hand, is "only" that it is slower.

(If you really want to use ECDHE for performance reasons, offer it only with elliptic curves that are safe. I am not sure if browsers support any safe curves [see] or how to restrict what curves your OpenSSL installation will offer to the client [see]. Tell us in the comments when you find it out.)
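To see which classic (non-elliptic-curve) DHE suites your OpenSSL build can offer, you can ask openssl directly. A sketch, since the available suites depend on your OpenSSL version; in the verbose output, "Kx=DH" marks classic Diffie-Hellman key exchange, while ECDHE suites would show "Kx=ECDH":

```shell
# List DHE (ephemeral Diffie-Hellman) cipher suites, dropping anonymous ones:
openssl ciphers -v 'DHE:!aNULL' | grep 'Kx=DH'
```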

(5) Always store your private key in encrypted form

That is, store it protected by a passphrase. This will require you to enter the passphrase when restarting your webserver process; but assuming that your webserver is stable, that only happens when you are at the server anyway. It is quite common practice to store the private key in plain text, readable only by root, but that is a severe vulnerability: it would allow a remote attacker, or somebody with physical access to the server, to mount a man-in-the-middle attack that does not need a certificate change.
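As a sketch of what "encrypted form" means in practice: with OpenSSL you can wrap an RSA key with AES-256 under a passphrase. Here a throwaway key is generated in a temp directory, and the passphrase is passed on the command line only for illustration – on a real server you would let openssl prompt for it instead.

```shell
dir=$(mktemp -d)

# Generate a throwaway 2048-bit RSA key (normally this is your server key):
openssl genrsa -out "$dir/server.key" 2048

# Store an AES-256-encrypted copy, protected by a passphrase:
openssl rsa -aes256 -passout pass:change-me \
    -in "$dir/server.key" -out "$dir/server.key.enc"

# The encrypted PEM file is marked as such:
grep ENCRYPTED "$dir/server.key.enc"
```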

(6) Use a 2048 bit private key

Because 1024 bit RSA private keys are considered too weak already.

A 2048-bit RSA private key for your server is however enough. You want to avoid a needlessly longer key, because SSL handshake performance degrades steeply with increasing key length [source; test results].

2048 bits are enough because: with only forward secrecy cipher suites available, a broken or compromised private key of the server only means that an attacker can impersonate the server from that moment on (that is, can mount a man-in-the-middle attack without causing an SSL certificate change). But he cannot decipher past recorded communications. So the key strength only has to be enough to prevent brute-force attacks during the certificate's validity. Which means 2048 bits until 2030, and 3072 bits afterwards [source]. But keep the validity period short, of course (say, a year).
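A sketch of generating such a key plus a certificate signing request (CSR) with OpenSSL; the subject value and file names are placeholders:

```shell
dir=$(mktemp -d)

# Create a 2048-bit RSA private key:
openssl genrsa -out "$dir/server.key" 2048

# Create a certificate signing request (CSR) to hand to the CA:
openssl req -new -key "$dir/server.key" \
    -subj "/CN=www.example.org" -out "$dir/server.csr"

# Confirm the key size (the first line of the text dump names it):
openssl rsa -in "$dir/server.key" -noout -text | head -n 1
```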

(7) Let your users monitor for SSL certificate changes

The problem with powerful surveillance bodies is that they are powerful: it is credibly alleged that three-letter agencies can deploy man-in-the-middle attacks effortlessly on a large scale, by having backdoors in consumer DSL routers [source]. This also allows compromising SSL connections, as follows. The router usually acts as a DNS server and forwards requests to the DNS servers of the telecommunications provider. When enabled by the backdoor, this allows DNS requests to be deflected to another DNS server, resulting (for example) in a site with a fraudulent SSL certificate being served to you (which will forward to the real site, but monitor your communications) [source, p. 23].

Such an attack might use a different SSL certificate signed by the same Certification Authority (CA) – or by a different CA; it does not really matter, since browsers by default do not notify users when a site's CA changes. But after all, the certificate is different, and that is how the attack can be detected. For this reason, they simply cannot use these man-in-the-middle attacks on SSL for anything beyond targeted operations [source]. When used permanently, the attacks would be detected, for example by site owners who notice the difference between what certificate their site should serve to the world and what they see when visiting it in their browser.

In any case, it means that we should assume the CA-based certification mechanism to be completely broken, and should not rely on it. Until it is replaced with a distributed, reliable mechanism (maybe a PGP-style trust network? or registering the public key / domain name pairs on the Namecoin blockchain?), we have to make do with verifying the SSL certificate ourselves (see below), or, where this is too much effort, with tracking certificate changes.

Tracking certificate changes provides decent protection against man-in-the-middle attacks, since these require exchanging the certificate, as shown above. You should monitor both the differences from what certificate everyone else sees (using the Perspectives Firefox plugin) and from what certificate you saw in the past (using the Certificate Patrol Firefox plugin). From practical experience, Certificate Patrol is however not useful on large websites (Google, Facebook, Twitter etc.), since these tend to use multiple certificates, from multiple CAs, and exchange them frequently. This makes using Certificate Patrol annoying; it would be better if it had an option that lets you switch it on only for sites that you "do not want to be surveilled on".

(8) Let your users verify the SSL public key themselves

It is alleged that three-letter agencies collude with Certification Authorities (CAs) to get a second, different certificate signed by them for every new one they sign [source, p. 23]. Which will allow the man-in-the-middle attacks on SSL explained above. It is not known if this extends to any CA outside the US (where they can be subpoenaed …). But for practical purposes, let's just assume that all CAs are compromised.

An HTTPS certificate merely says that a website's traffic goes to whoever controls the domain, plus whoever controls the CA. Even worse: since browsers by default do not notify a user when the HTTPS certificate changed and the new one comes from a different CA, without manual checks at each page load a user can never be sure that the traffic does not perhaps go to certain three-letter agencies, as it can safely be assumed that at least some CAs are controlled by them. So, while HTTPS certificates are still good enough to exclude ordinary criminals, they are no match for massive surveillance.

(Also, of course, you cannot trust the CAs yourself: never let them generate the private key for you, and never upload your private key to them. With that precaution, even using a CA that is controlled by a three-letter agency is not a problem. They do not get your server's private key; they just sign your server's public key to attest that it belongs to your server. Which is a correct statement, and not affected by the CA being compromised. The compromised CA could create a duplicate certificate for a spy agency's own private key, but whether the agency uses that or one from a different CA is just a matter of taste, since a browser does not warn users about CA changes.)

In practical terms: you may proceed using a certificate from a compromised CA for your "normal" website visitors, while at the same time warning everyone that the HTTPS CA scheme is broken, that users should not rely on it, and that they should instead verify the SSL public key themselves. Tell your users not to simply trust the CA certificate represented by a happy Firefox displaying a lock to them. Tell them to verify by themselves, when using the site for the first time, that the certificate is the one issued by the site operators, instead of possibly by a man-in-the-middle. Together with being alerted about certificate changes (see above), this provides very decent protection against man-in-the-middle attacks.

Verifying can be difficult, depending on the size of your user community, but as tracking SSL certificate changes is usually enough by itself, it does not have to be a very thorough validation. So you have different options. Here are some proposals, in order of increasing security:

  • Publish the fingerprint of the correct SSL certificate on the same website.
  • In the signup message, include the SSL certificate's fingerprint. This protects against later, dynamic modification of your website content by attackers.
  • As above, but also sign the signup message with the organization's GPG key, and publish the GPG public key on keyservers.
  • As above, but let users manually verify your GPG public key fingerprint (and also the current SSL certificate fingerprint while you're at it). This can be done without a personal meeting, business card handovers etc., simply by creating a live video link, making sure it is not a pre-recorded video (some talking …), and letting the site representative both speak and sign the fingerprint characters at the same time. This of course requires that the site representative is well known, ideally from having met in person before.

(9) Use a self-signed certificate

With the above technique of still using CA-certified public keys but warning users against trusting them, most users would not see, or simply not follow, the warning. They can be forced to deal with untrustworthy certificates, though, if you simply use a self-signed certificate. It will cause the browser to prompt users to add a security exception, so they will have to verify the certificate before doing so.
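A sketch of creating such a self-signed certificate with OpenSSL, valid for one year, plus printing the fingerprint your users would compare against before adding the security exception. The domain and file names are placeholders:

```shell
dir=$(mktemp -d)

# Create a key pair and a self-signed certificate in one step:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=www.example.org" \
    -keyout "$dir/server.key" -out "$dir/server.crt"

# Print the fingerprint to publish for manual verification by users:
openssl x509 -in "$dir/server.crt" -noout -fingerprint -sha256
```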

(10) Let your users monitor IP address changes

If an attacker can get hold of the server's original SSL private key, he can impersonate the server without the tools above detecting an SSL certificate change. However, a similar tool would detect an IP address change. So you would announce the IP address of your server, and changes to it, for users to verify, in analogy to how you want them to verify the SSL certificate fingerprint above. (And while you're at it, you might even want to switch to a self-signed certificate, which could then also include the IP address, for free. At least on a second, synonymous domain or subdomain, for the users who know what they are doing.)

(11) Exchange your SSL private key and certificate frequently

Not sure about this one. It seems better to invalidate an SSL private key after a month rather than a year, which will prevent an attacker from carrying on for long with man-in-the-middle attacks that use your original private key and are undetectable by the SSL certificate change monitoring above. However, it introduces a task for users: manually verifying the fingerprint of a new SSL certificate every month or so. Which is unrealistic for nearly all public-facing websites. Also, it might not be needed, because a man-in-the-middle attack could still be detected by the IP address change monitoring proposed above. (However, maybe even IP addresses can be bent by the secret services? I just don't know.)

(12) Put your server at a secure location

Even if you have encrypted your private key as stored on the server, there is a chance that it might be read from a memory dump. Which can be obtained by having physical access to your machine, or remote access to the virtualization system if you are on a VPS (virtual private server) host. So at least do not rent a server in a country where three-letter agencies have easy access to company secrets, and also don't host at large hosting companies. Ideally of course, place the server physically at your home. And there, into an intrusion-protected room. With lots of concrete around. In your basement. But ahh well … sorry, now I've become paranoid about it all 😛


You have installed the ISPConfig server admin panel (in my case, version, installed according to their guidelines called "The Perfect Server"). In this case, I used the setup instructions for ISPConfig 3 on Debian 7.0 Wheezy with Apache2, BIND and Dovecot. Then you create a website in ISPConfig, create an FTP account for it in ISPConfig, try to log in with this account, and it does not work. The client-side FTP log would simply be:

Command:    USER c1testuser
Response:    331 User c1testuser OK. Password required
Command:    PASS ***************
Response:    530 Login authentication failed
Error:    Critical error
Error:    Could not connect to server

Concurrently, you would see something like this in /var/log/syslog:

Nov 19 16:48:53 one pure-ftpd: (? [INFO] New connection from
Nov 19 16:48:53 one pure-ftpd: (? [INFO] PAM_RHOST enabled. Getting the peer address
Nov 19 16:49:00 one pure-ftpd: (? [WARNING] Authentication failed for user [c1testuser]
Nov 19 16:49:00 one pure-ftpd: (? [INFO] Logout.

And something like this in /var/log/auth.log:

Nov 19 16:48:53 one pure-ftpd: pam_unix(pure-ftpd:auth): authentication failure; logname= uid=0 euid=0 tty=pure-ftpd ruser=c1testuser rhost=


This problem can have several reasons, so you should try the following alternatives. I have roughly ordered them into a logical order of things to check, starting from frequent causes.

  1. Did you include the ISPConfig username prefix? In ISPConfig, when you create an FTP user, SSH user etc., the panel will prepend a prefix to the username, which you can configure in "System -> Interface -> Main Config -> Sites". If the prefix uses the form "c[CLIENTID]", it might be "c1" for example. So when setting up your FTP client, do not just use the username you entered, but put that prefix in front – just as you see the username listed in "Sites -> FTP Accounts", column "Username".
  2. You do not have to include the domain name in the username. At some web hosts, the actual username would be something like "". This is not required for ISPConfig FTP logins [source], and probably would not work either.
  3. Check that the job queue is empty. In ISPConfig, your tasks (like creating an FTP user) are processed asynchronously by a cron job that runs every minute. But if it fails, the job queue can get stuck. Check in ISPConfig under "Monitor -> System State -> Show Jobqueue" to make sure it is empty; if not, fix this as shown here.
  4. If you have "Replication failed" errors in the job queue, sync ISPConfig versions across servers. This is only relevant if you use ISPConfig for multiple combined servers. See the instructions.
  5. Make sure you can at least connect with system users via FTP. All users that you create in ISPConfig via "Sites -> Web Access -> FTP Accounts" are virtual users, which are mapped to system users when logging in via FTP. The mapping is defined in the "Options" tab when creating an FTP account in ISPConfig, and managed in the MySQL database dbispconfig, table ftp_user. The system users to which they are mapped (like web1, web2 etc.) cannot be used for FTP logins either, as they have no passwords defined (see /etc/shadow: ! in the second field means "no password authentication"). However, when you create SSH users in ISPConfig, this results in system users with passwords. So try to log in with an existing SSH user, and do not forget the prefix added by ISPConfig to your chosen username. (Just check "Sites -> Shell-User" to see the proper user names.) If this works, only virtual users' authentication is broken; otherwise something more profound could be wrong (firewall config, fail2ban config etc.). Note that in a working setup, you should be able to log in to FTP with both system and virtual users (I tested it). I was able to successfully log in using these settings (in FileZilla):
    • host: server's IPv4 address; also works with the domain name of any website hosted on this server
    • port: empty (means the default value 21 is used)
    • protocol: FTP
    • encryption: Use Plain FTP
    • logon type: Normal
    • user: the username incl. the prefix added by ISPConfig
    • password: the password
    • all other fields: left at their defaults
  6. Check if pure-ftpd-mysql's queries arrive at the database with the virtual users. You can do so by enabling MySQL query logging:
    1. In /etc/mysql/my.cnf, enable or insert the following lines:
      general_log_file        = /var/log/mysql/mysql.log
      general_log             = 1
    2. Restart the MySQL server: service mysql restart
    3. Try another failing FTP login, then look at /var/log/mysql/mysql.log to see which queries arrived at MySQL. Simply search for ftp_user, as all queries configured in /etc/pure-ftpd/db/mysql.conf include that table name. In my case, no queries arrived here at all. This can mean that you have the wrong variant of Pure-FTPd installed; see the next step.
    4. Do not forget to disable query logging again (and restart MySQL) after you have fixed this issue, as it is a performance killer.
  7. Make sure you have pure-ftpd-mysql installed. Execute apt-get install pure-ftpd-mysql and see if it wants to install the package; if yes, that is the issue, because only the pure-ftpd-mysql variant includes MySQL-based authentication. Let it install the package, and afterwards your FTP logins with virtual users should work. Note that /etc/init.d/pure-ftpd-mysql exists, and that /etc/init.d/pure-ftpd-mysql restart (or service pure-ftpd-mysql restart) can be executed without an error message, even before the pure-ftpd-mysql package is installed. These commands simply will not output the server startup command, and that is the only symptom that something is wrong here.

I still have no idea how I missed installing pure-ftpd-mysql. I simply followed the "The Perfect Server" ISPConfig installation instructions, and in my variant, on page 4, the correct package gets installed (where they run apt-get install pure-ftpd-common pure-ftpd-mysql quota quotatool).


This happened right after the installation of ownCloud 5.0.13, when trying to log in for the first time. The login failed, and the login page was simply served again (without any error message or explanation). The URL of that login page was then something like

During installation, I also had to create an xcache admin user and password to solve the "xcache.admin.user and/or xcache.admin.pass settings is not configured" issue (for a description and instructions to solve it, see this tutorial).

However, this issue only happened when accessing the ownCloud installation from the same URL that had been used for creating the admin user during installation. When using an alternative URL, like the IP-based URL available on most servers, login worked flawlessly (also reported here).


You have to disable xcache authentication again, and instruct your browser to forget the HTTP Basic authentication details that otherwise get sent with every HTTP request to the site's URL and would trigger this issue again. Step by step:

  1. Go to the place in your php.ini or server admin panel where you configured the xcache authentication. It would look similar to this:

    xcache.admin.user = "admin"
    xcache.admin.pass = "798967d2527320febcf"

  2. Either add the line "xcache.admin.enable_auth = Off" or delete this whole section.
  3. Reconfigure or restart your web server (if not done automatically by your server admin panel). Depending on how your PHP is served, it is usually one of these, on Debian and Ubuntu Linux at least:
    service apache2 reload
    service php-fpm reload
  4. Delete your HTTP Basic authentication details from your browser.
    • In Firefox, this is done by going to "Edit -> Preferences -> Clear your recent history -> Active Logins".
    • In Chrome, it can be done as shown here.
  5. You can confirm with a phpinfo() script, called from the ownCloud domain that would not work for login, that the HTTP Basic login details are no longer sent in the HTTP headers of requests to this domain. Look for the variable _SERVER["HTTP_AUTHORIZATION"]; it should now show "no value".
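For reference, if you keep the section rather than delete it (step 2 above), the resulting php.ini fragment would look like this (user and password are the example values from above; adapt to yours):

```ini
; xcache admin credentials kept, but HTTP Basic auth disabled:
xcache.admin.user = "admin"
xcache.admin.pass = "798967d2527320febcf"
xcache.admin.enable_auth = Off
```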


  • The issue of the ownCloud login loop (with or without an actual redirect loop error message) is quite common and probably has multiple causes. If the above instructions did not help you, try some more solutions.
  • Maybe it is possible to use "xcache.admin.enable_auth = Off" already during the initial ownCloud setup, avoiding this whole maze. I did not try.
  • The issue seems to be ownCloud issue 4556, as confirmed especially by this comment, which shows the same solution.
  • Disabling HTTP Basic authentication for xcache is not a security issue, because these login details are only meant for the xcache admin interface. So simply install this admin interface on a different virtual host, and enable HTTP Basic authentication there.
  • This issue was not due to entering invalid login details for the ownCloud user: when doing so deliberately, an error message appears asking if you forgot the password.
  • The fact that the IP-based URL worked had nothing to do with the web server setup for handling the name-based URL [as assumed here]. Instead, it simply led to using a different web server configuration, in which xcache authentication had never been enabled. This happens for example in web server setups where php.ini settings are modified with per-domain snippets.

Edit the website record in ISPConfig 3: go to the "Options" tab and set the field "PHP open_basedir" to the value "none".

Save, wait a minute for the job to disappear from ISPConfig's job queue (Monitor -> Show Jobqueue), and confirm that your change made it into the configuration with a command like cat /etc/php5/fpm/pool.d/web3.conf (using your own website ID, of course). Or even better, look at the phpinfo() output from your website.