This is an intellectual challenge: how to design a generic, many-to-many communication system that prevents surveillance entities from proving that you (1) read some website or (2) contributed content to some website, even if they (1) capture and analyze all traffic on the Internet and (2) can break all encryption that is used for many-to-many communication (in practice mostly SSL, which the NSA might break in many cases using MITM attacks). The only capabilities that we assume the surveillance body does not have are (1) breaking encryption used for local-only storage, such as TrueCrypt, and (2) breaking encryption used for one-on-one communication between parties who know each other personally (which is comparatively simple to achieve with PGP etc.). So we are only talking here about them treating you as part of the big mass of people (one of the many activists out there …), not as one of the select few for whom they do "targeted access operations" to infect your computer with software or hardware …

Note that, since we assume the surveillance body captures all Internet communication globally, Tor can no longer be considered secure: they can then do timing correlation on the whole Tor network at once and, with that information (simplified by running some Tor nodes of their own …), de-anonymize its participants. (For that reason, we cannot use realtime two-way communication at all.)

So: here's my proposal, after three hours of thinking this evening. I guess it's pretty wanky 🙂 Anyway, your feedback is welcome.

The basic idea is to hide reading the website steganographically inside reading another, unsuspicious website, and to hide contributing to the website steganographically inside a botnet infection (and by contributing from public wi-fi only). So the surveillance body would see you communicating, but you can plausibly deny reading and writing on that forbidden website; you just read a photo blog and had a spam virus infection … can happen, right? 😛

Part by part:

Deniable reading: steganographic site-in-a-site

This needs an unsuspicious "host website" with considerable data traffic for every user, for example a photography forum or even a porn site. Being used as a host site could be negotiated in secret with the operators (if you are an activist with a valid cause that people tend to agree with), or the site could be hacked for that purpose, or a site could be "reused" that happens to be a customer website on a web server you operate. But that's quite evil … In any case, the site should have a large existing community, so that everybody can justifiably claim that they just used the host site and did not know anything about the payload data hidden in its traffic.

So to read the "secret website" you want to access, you do a daily round on this "host website", looking at new posts etc. You will use a special browser (started from a steganographically hidden and encrypted partition on your computer) with a plugin that extracts the new steganographic payload data from this host website. So every new day of updates on the host website also contains the new day of updates for the payload website. The updates are very compact, compressed, git-style updates, probably just plain text. Also, to make it even harder to connect contributions to users, every post on the payload site is normally anonymous (not even pseudonymous!), but users could identify themselves with transient handles for just a few posts to create the necessary context in a thread.
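
To make the extraction step concrete, here is a minimal sketch of what the plugin's job could boil down to, assuming the payload sits in the least significant bits of the host site's images. The LSB scheme, the 4-byte length header and the Pillow dependency are my own illustrative assumptions, not part of the proposal itself:

    # extract_payload.py - minimal LSB steganography extraction sketch (assumed scheme)
    from PIL import Image  # pip install Pillow

    def extract_lsb_payload(image_path):
        """Collect the least significant bit of every RGB channel value.
        Assumes the first 32 bits encode the payload length in bytes."""
        img = Image.open(image_path).convert("RGB")
        bits = []
        for pixel in img.getdata():
            for channel in pixel:
                bits.append(channel & 1)
        data = bytearray()
        for i in range(0, len(bits) - 7, 8):  # re-pack bits into bytes, MSB first
            byte = 0
            for bit in bits[i:i + 8]:
                byte = (byte << 1) | bit
            data.append(byte)
        length = int.from_bytes(data[:4], "big")  # assumed 4-byte length header
        return bytes(data[4:4 + length])          # e.g. a compressed, git-style diff

    if __name__ == "__main__":
        update = extract_lsb_payload("host_site_photo.jpg")
        print(update[:200])  # would be fed into the local copy of the payload site

A real plugin would pull these images out of the page loads it observes anyway, so the traffic pattern stays identical to that of any other visitor of the host site.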

Instead of starting the payload extraction and decryption software from a steganographically hidden area, another alternative is to make it part of the basic operating system installation for everyone, which is likewise unsuspicious because it provides an alibi.

The payload website would not be encrypted. So the surveillance body would find out about it quickly, and it would take some weeks or months to get the host website switched off or its "infection" with the payload site removed (by choosing a proper jurisdiction for the server location, and by using Tor to hide its location somewhat, it can definitely take that long). At that point the payload site would switch to a different host site. Even better, it would always use several host sites in parallel for redundancy, and switching from one to the other would not need any re-downloading of previous data: just the new "git commits".

But maybe it is better to have the payload data encrypted: if it helps to keep the "infection" from being detected for a long time, it definitely is better. That however implies that every user has to get their own specialized payload, encrypted with a PGP public key. So the host site cannot be a broadcast type of site (like a forum), but has to provide content to every user individually (like a PTT voice messaging site, for example, since PTT voice messages can easily take steganographic payload). The payload site server would also encode slightly randomized payload for the different users, and of course not log the random elements, to prevent the connection between the public keys (on the server) and the user accounts (of the host site) from being made when the server is compromised. Which means that even then, nobody can tell which users got data with payload and which are just normal users. Only by finding the users and seizing their computers could one tell … but no, not even then, since (1) the users usually cannot be found and (2) their private keys for decryption are steganographically hidden on their computers (see below).
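
A rough sketch of the server side of this idea, with the actual OpenPGP encryption left as a stub; the function names, the 4-byte length header and the random-padding scheme are illustrative assumptions, not a fixed design:

    # encode_for_users.py - per-user, randomized payload encoding (assumed scheme)
    import os

    def pgp_encrypt(data: bytes, public_key: str) -> bytes:
        """Stand-in for real OpenPGP encryption with the user's public key (e.g. via GnuPG)."""
        raise NotImplementedError("plug in a real OpenPGP implementation here")

    def encode_for_user(daily_update: bytes, public_key: str) -> bytes:
        # Fresh random padding of random length: no two users' payloads share a
        # recognizable size or structure, and since the random elements are never
        # logged, a compromised server cannot be used afterwards to link public
        # keys to host-site accounts.
        padding = os.urandom(int.from_bytes(os.urandom(1), "big"))
        blob = len(daily_update).to_bytes(4, "big") + daily_update + padding
        return pgp_encrypt(blob, public_key)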

Against breaking communication encryption: anonymity by free wi-fi

There is no issue with encryption being broken (or content not being encrypted at all!) if this still does not give them a hint to your real-world identity. The solution is to not give them any connection between the IP address and personal identities. How to achieve this?

  • Use a mobile device that looks for open wi-fi networks while you walk around in a big city. If it finds one, it will connect to it quickly, send its data, and disconnect again. The data is a git-style, very compact update to the shared content of the website / forum you contribute to. With this method, you can sync to that website 1-2 times a day, which should be enough for most purposes.
  • If your country does not allow anonymous Internet access at all (such as in China), send your data by encrypted e-mail to somebody abroad whom you trust and who will perform the above procedure for you. If you don't have somebody you trust, you are doomed anyway 😉

The security and safety of this procedure can be enhanced by:

  • using directional wi-fi antennae hidden below your clothing (for example along your arm; the system would guide you to point your arm in the right direction to get the best connection to a remote wi-fi 😉)
  • more importantly, using a new, spoofed MAC address for every connection, which is important in case the wi-fi network logs the MAC addresses of connecting devices (see the sketch after this list)
  • choosing only wi-fi networks open to the general public (in bars, airports etc.), to avoid raising the suspicion of the surveillance body against individuals whose wi-fi you would otherwise compromise
  • disguising your appearance against face recognition by surveillance cameras etc.
  • unsuspicious behavior, to prevent raising suspicion in surveillance videos etc.; this however is a pretty low-grade threat, as it involves a lot of manual coordination work ("targeted access operations"), which a surveillance body cannot do for its whole population
  • writing from a different device than the one you use for reading; you would use commands like "reply-to:post452798" to place your content at the right spot of the website; you would never ever exchange data between the two devices (since then, data coming from a trojan with which the surveillance body infected your reading computer could also be transferred to your writing computer and could prove that you participate in the website instead of just "unknowingly" receiving its data in steganographic form)
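
As referenced in the list above, here is a minimal sketch of MAC randomization before each connection attempt, assuming a Linux-based device with the ip tool and a wlan0 interface (interface name, tooling and root privileges are assumptions):

    # randomize_mac.py - assign a fresh, locally administered MAC before each wi-fi connection
    import random
    import subprocess

    def random_mac() -> str:
        # First octet 0x02 marks a locally administered, unicast address,
        # so it never clashes with real vendor MACs.
        octets = [0x02] + [random.randrange(256) for _ in range(5)]
        return ":".join(f"{o:02x}" for o in octets)

    def set_mac(interface: str = "wlan0") -> str:
        mac = random_mac()
        subprocess.run(["ip", "link", "set", "dev", interface, "down"], check=True)
        subprocess.run(["ip", "link", "set", "dev", interface, "address", mac], check=True)
        subprocess.run(["ip", "link", "set", "dev", interface, "up"], check=True)
        return mac

    if __name__ == "__main__":
        print("new MAC:", set_mac())  # run as root, before associating with the next open wi-fi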

Deniable contributions: a botnet is controlling your device!

This is a pretty funny idea: claiming that your computer (the one you use for sending) does stuff that you don't even want it to do is perfectly reasonable. So even if you are caught sending through a free wi-fi network, you still have an alibi, because there will indeed be a virus on this device that also has the habit of sending out e-mail spam; but it is special in that you can control it. You can also justifiably deny doing so, since all the programs and data for it (including your website contributions before they are sent) live in a TrueCrypt-style partition with a full filesystem that is steganographically embedded in your personal library of self-made photos. Because these are your own photos (and you did not publish them anywhere!), nobody can claim that you steganographically modified them, since they have never seen the originals.

But we need some more machinery: as spambots usually do, yours will auto-generate the spam e-mails you send out, including many spelling errors, random changes to embedded images etc., "to pass the spam filters". These changes, however, are also what lets you add the steganographic input, which will likewise look just like random changes, so just like normal spambot behavior. In reality, it is encrypted with a public key of the server of the website you contribute to, and as long as that private key stays private, your alibi is safe. This is effectively the one-on-one encryption which we assumed above to be unbroken. But even if it gets broken, you are still not caught by any means, since you always use free wi-fi for Internet access 🙂
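
To illustrate the principle (not the concrete design), here is a sketch of how ciphertext bits could drive those "random-looking" variations of a spam template. The template, the word-variant pairs and the three-bits-per-mail packing are made up for illustration; the payload is assumed to be already encrypted with the server's public key:

    # spam_stego.py - sketch: let ciphertext bits pick between harmless spelling variants
    TEMPLATE = "{dear} friend, we have an {incredible} offer for {you} today!!!"
    VARIANTS = {
        "dear": ("Dear", "Deer"),                    # bit 0 -> first variant, bit 1 -> second
        "incredible": ("incredible", "incredibel"),
        "you": ("you", "u"),
    }

    def bits_of(data: bytes):
        for byte in data:
            for shift in range(7, -1, -1):
                yield (byte >> shift) & 1

    def encode_into_spam(ciphertext: bytes) -> list:
        """Produce one spam mail body per three payload bits (one bit per variant slot)."""
        mails, bit_iter = [], bits_of(ciphertext)
        while True:
            chosen = {}
            for key, (plain, typo) in VARIANTS.items():
                bit = next(bit_iter, None)
                if bit is None:
                    return mails
                chosen[key] = typo if bit else plain
            mails.append(TEMPLATE.format(**chosen))

    if __name__ == "__main__":
        # 'ciphertext' would be your contribution, encrypted with the server's public key.
        for mail in encode_into_spam(b"\xa5\x0f"):
            print(mail)

Anybody could read the bit pattern back out of the variant choices, but without the server's private key it is indistinguishable from the random noise an ordinary spambot produces.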

The e-mails will travel to the server hosting the secret website (or to a befriended server connected to it via one-on-one encryption), with the alibi that it is also an e-mail server. The server will however claim that these e-mails are spam and not forward them to its users, but of course evaluate them internally to extract the payload data. The private key for doing that, and the whole "secret website" software, has to be protected on the server, of course, and has to have "deniable existence". This is possible by, again, using steganographic storage of a TrueCrypt-style partition, maybe in the image data of the "host website". When the server is physically accessed, it will quickly unmount that logical partition, delete the access key from its memory, and be just another normal webserver 🙂

Simplifications and optimizations

  • It is not needed to hack or infect a host site to use it for embedding another site. Just select one that allows anonymous image or video uploads. If doing so, it should however be a site where all the steganographic content is downloaded by regular users too, so that the steganographic users have an alibi. For example, a meme collection site that allows anonymous submissions and publishes daily updates is a good choice, or a site that hosts pirated content or porn. This is much better than "infecting" sites and running a separate server for the steganography, since this way no own server and no infected site can be taken down. All own software runs on the clients, which would have a little configuration file that selects which URLs to download for steganographic content and which for cover, plus a decryption key to access the extracted steganographic information (see the sketch after this list). Depending on how far this key is shared, the software becomes anything between steganographic 1:1 communication and many-to-many communication.
  • The important point is to make steganographic communication comfortable. Not like e-mail, but like a full forum, even with special features like calendars etc. Only then can one organize social change with it. The way NNTP (Usenet) works is a good paradigm: a desktop client collects message / data packages from somewhere and provides the frontend locally.
  • For anonymous posting via public wi-fi networks, maybe one can even use little quadrocopter or blimp drones, operating at night. During the day they would hide in some place away from your home, and in passing, on your way to work etc., you would transfer to them the next set of data to upload.
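
As referenced above, the per-client configuration could be as small as this; all names, URLs and fields are made up for illustration:

    # client_config.py - illustrative structure of the client-side configuration
    CONFIG = {
        # Downloaded and displayed normally, purely as cover traffic:
        "cover_urls": ["https://example-memes.invalid/daily/"],
        # The subset of those downloads that actually carries steganographic payload:
        "payload_urls": ["https://example-memes.invalid/daily/top100.jpg"],
        # Shared with one peer: a 1:1 channel. Shared with a whole group: a many-to-many forum.
        "decryption_key": "…",
    }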

In your twenties, you were a visionary. You wanted to learn it all, and fix it all. All the world.

Ever realized that you cannot do everything that is meaningful in your life? When you dedicate your life to help people with HIV, you can't go find a cure for HIV. Or find the quantum gravity model. Or develop sustainable government. Or find out and teach us all about the Transcendental and God (if you find there is God). Or clean up all the landmines. Or the ocean plastic. Or invent a fair-for-all mode for economic exchange. Or this. That.

Because your lifetime is limited.

And then you realize, it would be great to at least achieve one of these. And then you focus on that one.

And then you realize, you have neither time nor money for even one of these meaningful contributions (… contributions to what, actually?). Because your parents might be old, needing your help. Or you had children, just like everyone else, and now have to care for them. Or you got fired from your job, the bank took your house, and now you're living in a tent. That you found in that garbage can. It's just a tarp, actually. Or you get medical conditions, so you can be happy just to make it through the day.

Because your lifetime is limited.

And then you realize, your life will pass and end as meaningless as everyone else's life. And life, what is life? It then seems like a meaningless aggregate of matter to you. You, yourself, just a bunch of atoms, with your consciousness an unnecessary (and unpleasant) emergent property of it.

And you start to enjoy that your lifetime is limited, not your limited lifetime.

Stop that.

Now, come back to your visions.

Just change one thing: it should be no longer your vision, now it's ours. Humanity's. We are all in our twenties again.

Everyone who has given up on seeking, and expecting to see, the abolition of greed, poverty and evil, and the introduction of immortality and freedom for all, has given up living while alive. Seek, and expect, again. Because now we seek, and expect, together. You were frustrated by your powerlessness as an individual. Now marvel at what seven billion can do. And what God will do, if there is a God, and seven billion seek him.

Yes, you should expect and seek God, because there might be God after all. But do not forget all the rest of what is good. Physical immortality. Good governance structures. Unextinction of animals. Desert forests. The Theory of Everything. Space colonization. So much before us!

Now what? It's all about how we organize. If your grandma cooks a simple healthy meal for scientists working on quantum gravity, she contributes. If you read news about political quarrels, visit tourist spots with your hard-earned surplus money, or engage in any other avoidable consumption, you do not.

Wake up, all of us!

The pieces are coming together already. Take note, organize yourselves, contribute. Some inspirations? Here you go:

And of course: Are we alone in the universe? What does it all mean? Are we sure about this? Why? Re-asking the big questions is probably one of our biggest challenges. We modern folks got so used to the scientific stories of the Big Bang, cosmic evolution, and biological evolution. And now, the evolution of science itself comes along and puts into question the very concept of space-time, and with it the existing notion of the Big Bang.

Now, what?

Symptoms

You have installed the ISPConfig server admin panel (in my case, version 3.0.5.3), all according to their guidelines called "The Perfect Server". In this case, I used the setup instructions for ISPConfig 3 on Debian 7.0 Wheezy with Apache2, BIND and Dovecot. Then you create a website in ISPConfig, create an FTP account for it in ISPConfig, try to log in with this account, and it does not work. The client-side FTP log would simply be:

Command:    USER c1testuser
Response:    331 User c1testuser OK. Password required
Command:    PASS ***************
Response:    530 Login authentication failed
Error:    Critical error
Error:    Could not connect to server

Concurrently, you would see something like this in /var/log/syslog:

Nov 19 16:48:53 one pure-ftpd: (?@xxx.xxx.xxx.xxx) [INFO] New connection from xxx.xxx.xxx.xxx
Nov 19 16:48:53 one pure-ftpd: (?@xxx.xxx.xxx.xxx) [INFO] PAM_RHOST enabled. Getting the peer address
Nov 19 16:49:00 one pure-ftpd: (?@xxx.xxx.xxx.xxx) [WARNING] Authentication failed for user [c1testuser]
Nov 19 16:49:00 one pure-ftpd: (?@xxx.xxx.xxx.xxx) [INFO] Logout.

And something like this in /var/log/auth.log:

Nov 19 16:48:53 one pure-ftpd: pam_unix(pure-ftpd:auth): authentication failure; logname= uid=0 euid=0 tty=pure-ftpd ruser=c1testuser rhost=

Solution

This problem can have several causes, so try the following alternatives. I have roughly ordered them into a logical sequence of things to check, starting with the most frequent causes.

  1. Did you include the ISPConfig username prefix? In ISPConfig, when you create an FTP user, SSH user etc., the panel will prepend this username with a prefix that you can configure in "System -> Interface -> Main Config -> Sites". If the prefix uses the form "c[CLIENTID]", it might be "c1" for example. So when setting up your FTP client, do not just use the username you entered, but put that prefix in front of it, just as the username is listed in "Sites -> FTP Accounts" in the "Username" column.
  2. You do not have to include the domain name in the user name. At some web hosts, the actual username would be something like "user1@example.com". This is not required for ISPConfig FTP logins [source], and probably would not work either.
  3. Check that the job queue is empty. In ISPConfig, your tasks (like creating an FTP user) are processed asynchronously by a cron job that runs every minute. But if it fails, the job queue can get stuck. Check "Monitor -> System State -> Show Jobqueue" in ISPConfig to make sure it is empty, or if not, fix this as shown here.
  4. If you have "Replication failed" errors in the job queue, sync ISPConfig versions across servers. This is only relevant if you use ISPConfig for multiple combined servers. See the instructions.
  5. Make sure you can at least connect with system users via FTP. All users that you create in ISPConfig via "Sites -> Web Access -> FTP Accounts" are virtual users, which are mapped to system users when logging in via FTP. The mapping is defined on the "Options" tab when creating an FTP account in ISPConfig, and managed in the MySQL database dbispconfig, table ftp_user (see the sketch after this list for a quick way to inspect that table). The system users to which they are mapped (like web1, web2 etc.) cannot be used for FTP logins either, as they have no passwords defined (see /etc/shadow: "!" in the second field means "no password authentication"). However, when you create SSH users in ISPConfig, this results in system users with passwords. So try to log in with an existing SSH user, and do not forget the prefix added by ISPConfig to your chosen username (just check "Sites -> Shell-User" to see the proper user names). If this works, only the virtual users' authentication is broken; otherwise something more fundamental could be wrong (firewall config, fail2ban config etc.). Note that in a working setup, you should be able to log in via FTP with both system and virtual users (I tested it). I was able to successfully log in using these settings (in FileZilla):
    • host: server's IPv4 address; also works with the domain name of any website hosted on this server
    • port: empty (means the default value 21 is used)
    • protocol: FTP
    • encryption: Use Plain FTP
    • logon type: Normal
    • user: the username incl. the prefix added by ISPConfig
    • password: the password
    • all other fields: left at their defaults
  6. Check whether pure-ftpd-mysql's queries arrive at the database containing the virtual users. You can do so by enabling MySQL query logging:
    1. In /etc/mysql/my.cnf, enable or insert the following lines:
      general_log_file        = /var/log/mysql/mysql.log
      general_log             = 1
    2. Restart the MySQL server: service mysql restart
    3. Try another failing FTP login, then look at /var/log/mysql/mysql.log to see which queries arrived at MySQL. Simply search for ftp_user, as all queries configured in /etc/pure-ftpd/db/mysql.conf include that table name. In my case, no queries arrived here at all. This can mean that you have the wrong variant of pure-ftpd installed; see the next step.
    4. Do not forget to disable query logging again (and restart MySQL again) after you have fixed this issue, as it is a performance killer.
  7. Make sure you have pure-ftpd-mysql installed. Execute apt-get install pure-ftpd-mysql and see if it wants to install the package; if yes, that was the issue. Let it install the package, and afterwards your FTP logins with virtual users should work, because only the pure-ftpd-mysql variant includes MySQL-based authentication. Note that /etc/init.d/pure-ftpd-mysql exists, and that /etc/init.d/pure-ftpd-mysql restart or service pure-ftpd-mysql restart can be executed without an error message, even before the pure-ftpd-mysql package is installed. These commands simply will not print the server startup line, and that is the only symptom that something is wrong here.
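
If you want to inspect the virtual FTP users directly (as mentioned in step 5), you can query the ftp_user table. A minimal sketch, assuming the pymysql package and the default dbispconfig database name; credentials and the exact column layout may differ on your installation:

    # list_ftp_users.py - peek into ISPConfig's virtual FTP user table (assumes pymysql)
    import pymysql  # pip install pymysql

    conn = pymysql.connect(host="localhost", user="root",
                           password="your-mysql-root-password",  # placeholder
                           database="dbispconfig")
    try:
        with conn.cursor() as cursor:
            cursor.execute("SHOW COLUMNS FROM ftp_user")
            print([row[0] for row in cursor.fetchall()])  # see which columns exist first
            cursor.execute("SELECT * FROM ftp_user")
            for row in cursor.fetchall():
                print(row)  # compare with the FTP account you created in the panel
    finally:
        conn.close()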

I still have no idea how I missed installing pure-ftpd-mysql. I simply followed the "The Perfect Server" ISPConfig installation instructions, and in my variant, on page 4, the correct package gets installed (where they run apt-get install pure-ftpd-common pure-ftpd-mysql quota quotatool).

Symptoms

This happened right after the installation of ownCloud 5.0.13, when trying to log in for the first time. This failed, and the login page was simply served again (without any error message or explanation). The URL of that login page was then something like http://example.com/index.php?redirect_url=%2Findex.php%2Fapps%2Ffiles.

During installation, I also had to create an xcache admin user and password to solve the "xcache.admin.user and/or xcache.admin.pass settings is not configured" issue (for a description and instructions to solve it, see this tutorial).

However, this issue only happened when accessing the ownCloud installation via the same URL that was used for creating the admin user during installation. When using an alternative URL, like the IP-based one available on most servers as http://xxx.xxx.xxx.xxx/owncloud/, login worked flawlessly (also reported here).

Solution

You have to disable xcache authentication again, and instruct your browser to forget the HTTP Basic authentication details that otherwise get sent with every HTTP request header to the site's URL and would trigger this issue again. Step by step:

  1. Go to the place in your php.ini or server admin panel where you configured the xcache authentication. It would look similar to this:

    [xcache.admin]
    xcache.admin.user = "admin"
    xcache.admin.pass = "798967d2527320febcf"

  2. Either add the line "xcache.admin.enable_auth = Off" or delete this whole section.
  3. Reconfigure or restart your web server (if this is not done by your server admin panel automatically). Depending on how your PHP gets served, it is usually one of these, on Debian and Ubuntu Linux at least:
    service apache2 reload;
    service php-fpm reload;
  4. Delete your HTTP Basic authentication details from your browser.
    • In Firefox, this is done by going to "Edit -> Preferences -> Clear your recent history -> Active Logins".
    • In Chrome, it can be done as shown here.
  5. You can confirm with a phpinfo() script, called from the ownCloud domain that would not work for login, that the HTTP Basic login details are no longer sent in the HTTP headers of requests to this domain. Look for the variable _SERVER["HTTP_AUTHORIZATION"]; it should now show no value.

Discussion

  • The issue of the ownCloud login loop (with or without an actual redirect loop error message) is quite common and probably has multiple causes. If the above instructions did not help you, try some more solutions.
  • Maybe it is possible to use "xcache.admin.enable_auth = Off" during the initial ownCloud setup already, avoiding this whole maze. I did not try.
  • The issue seems to be ownCloud issue 4556, as esp. confirmed by this comment, showing the same solution.
  • Disabling HTTP Basic authentication for xcache is not a security issue, because these login details are normally only meant for the xcache admin interface. So simply install this admin interface on a different virtual host, and enable HTTP Basic authentication over there.
  • This issue was not due to entering invalid login details for the ownCloud user: when doing so deliberately, an error message appears, asking if one forgot the password.
  • The fact that the IP-based URL worked had nothing to do with the web server setup for handling the name-based URL [as assumed here]. Instead, it simply leads to a different web server configuration being used, one where xcache authentication had never been enabled. This happens for example in web server setups where you modify php.ini settings with snippets added per domain.

Edit the website record in ISPConfig 3, go to the Options tab there, and in the field "PHP open_basedir" set the value to "none".

Save, wait a minute for the job to disappear from ISPConfig's job queue (Monitor -> Show Jobqueue), and confirm that your change made it into the configuration with a command like cat /etc/php5/fpm/pool.d/web3.conf (using your website ID, of course). Or, even better, look at the phpinfo() output of your website.

There are two main reasons for a stuck ISPConfig job queue: the next job causing an error, or a stale lock file.

How to fix a job queue stuck with a job that causes an error

  1. Check for errors. Go to "Monitor -> Show System Log" and check if there is a message of loglevel "Error". If yes, proceed below; if no, it is more likely that you have the stale lock file issue – see next section.
  2. Fix the error. Fix the cause for the error and then click "Remove" in the ISPConfig system log view. This will cause ISPConfig to reprocess the stuck job when its cron job runs the next time (which is every minute in a default setup).
  3. If needed, use Debug log level. If it is not clear what causes the error, enable the Debug log level (in System -> Server Config -> Server -> Loglevel), wait a minute and revisit "Monitor -> Show System Log". Since ISPConfig continuously tries to process the failing job again, now you should at least see more detailed error messages.
  4. If needed, use direct cron output. If it is still not clear what causes the error, you can look at the direct output of the cron job by executing /usr/local/ispconfig/server/server.sh in a console. If you see "There is already an instance of server.php running.", wait a few seconds and try again – your execution accidentally overlapped with the regular ISPConfig cron job. If you keep getting this however, it is more probable that you have the "stale lock file" issue, see below.
  5. Or let ISPConfig ignore the failing job. If you don't know how to fix the error, you can let ISPConfig jump over this job and proceed with the next one. For that, increment the database column server.updated by one in the ISPConfig database (usually dbispconfig), in the row that belongs to the affected server (see the sketch below). If you want to jump over several jobs, set this number to the ID of the last job record which ISPConfig should consider "done". Job records are stored in table sys_datalog, and their IDs in column sys_datalog.datalog_id.
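
As mentioned in step 5, a minimal sketch of that manual intervention, again assuming pymysql, the default dbispconfig database name and a server_id of 1 (column name and value are assumptions; adjust them to your installation, and back up the database first):

    # skip_failing_job.py - make ISPConfig consider the stuck job as done (use with care)
    import pymysql  # pip install pymysql

    conn = pymysql.connect(host="localhost", user="root",
                           password="your-mysql-root-password",  # placeholder
                           database="dbispconfig")
    try:
        with conn.cursor() as cursor:
            # Skips exactly one job; to skip several, set updated to the datalog_id of
            # the last job that should count as done, as described above.
            cursor.execute("UPDATE server SET updated = updated + 1 WHERE server_id = %s", (1,))
        conn.commit()
    finally:
        conn.close()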

Sources:

  1. "Delete ISPconfig job queue" thread
  2. ISPConfig 3 Manual 1.4, chapter 7.1 "How Do I Find Out What Is Wrong If ISPConfig Does Not Work?"

How to fix when a database creation job causes an error

This is one of the cases from the scenario above where a job in the job queue causes an error. However, this error is particularly nasty, since it will not show up as an error in the ISPConfig job queue, and it can cause the cron job to occupy all the system's memory, so that even the web server can crash (or fail to restart) in response.

Symptoms:

  • The ISPConfig job queue is no longer processed, and the next job to process (so, the one it got stuck with) is probably a database creation or deletion job.
  • You have the symptoms of the "stale lock file" issue below, because the ISPConfig cron job does not return, and if it finally crashes, it leaves a stale lock file behind.
  • When deleting the stale lockfile, enabling the debug log level and then starting the cron job from the command line, one gets an infinite loop of error messages like these:
    23.02.2014-15:25 - WARNING - DB::query(SELECT count(syslog_id) as number FROM sys_log WHERE datalog_id = 910 AND loglevel = 2) -> mysqli_query Access denied for user 'root'@'localhost' (using password: YES)

How to fix this:

The cause of this was that the MySQL root user password had been changed after installing ISPConfig, without adding this change to the ISPConfig configuration. The password is needed there for ISPConfig to be able to create and delete MySQL databases. So, edit /usr/local/ispconfig/server/lib/mysql_clientdb.conf and update the MySQL root password there (entering it in plain text). [source]

How to fix a stale lock file

This reason is the most likely cause in cases where you see the following message permanently when enabling the debug log level in System -> Server Config -> Server -> Loglevel and then (repeatedly) trying to start the cron job manually on the console:

user@host:~ $ /usr/local/ispconfig/server/server.sh
user@host:~ $ 15.11.2013-19:56 – DEBUG – There is already an instance of server.php running. Exiting.

It means that the ISPConfig cron job crashed (or you killed it) the last time it was running, leaving a stale lock file behind.

Fix this by executing:

rm -f /usr/local/ispconfig/server/temp/.ispconfig_lock

[source]

Huawei 3G USB sticks ("dongles") support a special command to control the mode in which they appear to a host computer. This allows you to switch off the flip-flop USB behavior, also called USB mode switching.

USB mode switching means that these and many other 3G USB sticks at first only present a virtual CD-ROM drive to the computer, from which a (Windows) computer will install the device drivers, and only then will the device switch to showing its 3G modem interface. This of course does not work on non-Windows computers like Linux and Android systems, making it a pretty bad idea overall. For Linux, there is now the usb_modeswitch utility, which switches the device to 3G modem mode more or less automatically, but this is still not standard on Android (though possible on rooted devices, and included in the PPP Widget app for rooted devices).

So for using Huawei 3G sticks on non-rooted Android devices, we want to disable the virtual CD-ROM mode permanently. This can be done as follows on a Linux host:

  1. Make sure you execute these steps from a computer that can see the serial line interfaces of the device. This should be possible on any Linux computer that has usb_modeswitch installed and configured, as is the case by default on at least Ubuntu 12.10 (and newer).
  2. Find out the first USB serial interface of the device by doing ls /dev/ttyUSB* before and after plugging the 3G stick in, and taking the first entry that newly appeared.
  3. Open one terminal for receiving messages from the device, by executing cat /dev/ttyUSB1 (using your device number of course).
  4. In another terminal, send a command to the stick to test communication with it. Execute echo "ATi^M" > /dev/ttyUSB1. But note: you cannot simply copy this over, as the ^M is just the visual representation of a single control character (a carriage return). Enter this character in the terminal directly by pressing Ctrl+V and then Ctrl+M.
  5. Now, your terminal with cat should display information that the stick sent about itself.
  6. If this worked, now send the command to disable the virtual CD-ROM mode: echo "AT^U2DIAG=0^M" > /dev/ttyUSB1. Note that ^M is again a control character that you have to enter specially (see above), but ^U is just normal text, so you can copy & paste it.
  7. Now your cat terminal should answer with OK. If so, the command was successful. [source]

The basic idea for this console-based process comes from the article "Send AT commands to USB modem" by brunomgalmeida.
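
If fiddling with the ^M control character in a shell is error-prone for you, the same exchange can be scripted. A minimal sketch, assuming the pyserial package and /dev/ttyUSB1 as the command port; port name and baud rate are assumptions, and the AT commands are the ones from the steps above:

    # huawei_u2diag.py - send the AT commands from the steps above via pyserial
    import serial  # pip install pyserial

    def send_at(port: str, command: str, timeout: float = 3.0) -> str:
        with serial.Serial(port, baudrate=115200, timeout=timeout) as ser:
            ser.write((command + "\r").encode("ascii"))  # "\r" replaces the ^M control character
            return ser.read(1024).decode("ascii", errors="replace")

    if __name__ == "__main__":
        print(send_at("/dev/ttyUSB1", "ATi"))          # should print the stick's identification
        print(send_at("/dev/ttyUSB1", "AT^U2DIAG=0"))  # should answer with OK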

There are also instructions on an equivalent procedure using Windows; however I was not able to follow that procedure as my Huawei K3715 3G stick did not let me talk to it through a serial terminal at all, probably because I set it up with bad connection speed etc. settings (as these were nowhere to be found …).

Also compare "Hayes command set" on Wikipedia for more information.