These are the instructions for installing a generic NEOS server, which you can use for many purposes.

WARNING: These instructions guide you through a successful installation of NEOS itself, but stop there. NEOS is not usable without adding one or more solvers, and for each of those you have to develop an XML config file and a wrapper script. These are not provided within the NEOS server software package, not even for open source solvers like COIN-OR CBC. For that reason, I finally abandoned NEOS and moved to the 12 years newer and much cleaner COIN-OR Optimization Services (OS) alternative. (The last NEOS 4.0 release is from 2002-10-25, while COIN-OR OS is in active development and meant to be the "next-generation NEOS server" [source].) Also, NEOS turned out to be really messy, old Perl software with confusing and inconsistent documentation, so it is a pain to work with. You have been warned. Proceed at your own risk 🙂

You can test the NEOS interface by using the public NEOS server, as we did too. So installing your own NEOS server is only reasonable when you want to avoid overloading their servers and/or keep your job submissions from becoming public, which they do on the public NEOS server.

Choosing a machine for your server

For our purposes, using Amazon EC2 Spot Instances seems the most economic and scalable solution. For Economy App, we are starting with one of the CPU power optimized c1.medium instances. It is available more or less permanently for 0.028 USD/h in the South America data center (sa-east-1) – that's 20 USD/month when running full-time, while we only need it to run 25% of the time at first.

As our userbase grows, we will quickly need a lot of calculation power, and then have to think about where to get it economically. And there's lots of optimization potential – see the initial CloudHarmony cloud hosting comparisons, and their current cloud hosting benchmarks for example.

Installing the NEOS server 4.0

Here we are exploring how to install it on a Linux server (Ubuntu 13.10, to be exact), based on the original installation instructions. It is assumed that your web server and the NEOS server will run on the same machine (else, see this hint). We also assume that you start with only one server – NEOS can handle multiple workstations running solvers though [source].

  1. Download NEOS. Download and extract one of the NEOS server packages. The newest is NEOS server 4.0 from 2002-10-25. (Unless you want to go for NEOS server 5.0 from 2005, which however appears largely undocumented.) So for example, do:
    cd /var/local/;
    sudo wget ftp://ftp.mcs.anl.gov/pub/neos/Server/server-4.0-102502.tar.gz;
    sudo tar -xzf server-4.0-102502.tar.gz;
    sudo mv server-4.0 neos;
    sudo rm server-4.0-102502.tar.gz;
  2. Provide the directory structure. You have to create web-accessible directories for CGI files and public web pages and a kind of NEOS workspace directory [source]. The latter is called "directory for variable-length files", and if web-accessible, will make all job submissions to your NEOS server public [source]. In our case, we create these directories below the NEOS installation base dir (originally server-4.0). The workspace dir and document root dir are siblings, so job submissions will not be public.
    sudo mkdir /var/local/neos/cgi-bin/ /var/local/neos/htdocs/ /var/local/neos/workspace/;
  3. Adjust file ownership. Change file ownership rights to the user who will install and run the NEOS server. For our Ubuntu 13.10 AMI running as an Amazon EC2 Spot Instance, we use for example:
    sudo chown -R ubuntu:ubuntu /var/local/neos;
  4. Provide Mail. The NEOS Makefile expects a program Mail (yes, caps M), which in old times was used for an extended version of mail [source]. Let's at least provide mail as a surrogate and see if that works:
    sudo apt-get install mailutils && sudo ln -s "$(which mail)" /usr/local/bin/Mail
  5. Configure NEOS. Execute make and answer its config questions according to the NEOS FAQ instructions:
    cd /var/local/neos/config && make
  6. Configure crash recovery. Configure NEOS so it will be automatically restarted via cron when it crashes. For that, execute the following command as the user who should run the NEOS server (but if crontab -l shows you have a crontab already, edit it manually or append as sketched below, since the following command would replace it): crontab /var/local/neos/crontabfile.
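    A hedged sketch for appending the NEOS entries to an existing crontab instead of replacing it (assuming the crontabfile path of this installation; check the result with crontab -l afterwards):
    (crontab -l; cat /var/local/neos/crontabfile) | crontab -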
  7. Install legacy libs. The NEOS server restart script uses flush.pl, a legacy Perl library. This creates a warning about being deprecated and imminent removal from the next major release, so we resolve that warning by installing the library which will provide it in the future: sudo apt-get install libperl4-corelibs-perl.
  8. Start your NEOS server.
    /var/local/neos/bin/restart
  9. Check that the NEOS server is running. Just check in ps aux output that the above command started the initializer.pl, scheduler.pl and socket-server.pl daemons successfully [source].
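    A quick check sketch for that (the daemon names are those listed above):
    ps aux | grep -E 'initializer\.pl|scheduler\.pl|socket-server\.pl'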
  10. Confirm that the XML-RPC interface to your server works. By default it resides at port 3333 (you configured this with make above). Just visit that URL with a browser (something like http://example.com:3333/ ). If it works, the NEOS server will serve you a message like this:
    121
    your-server-name: ERROR: "Host: example.com:3333" is not recognized by this server.
  11. Install and configure a web server. It will serve the NEOS web interface. In this case, we use lighttpd.
    1. Install lighttpd.
      sudo apt-get install lighttpd
    2. Enable CGI. Enable the CGI module that you will use to serve the NEOS Perl application:
      sudo lighty-enable-mod cgi
    3. Don't serve CGI files as source! A little security optimization for the NEOS server files: in /etc/lighttpd/lighttpd.conf, change this line
      static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
      to this
      static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".cgi" )
    4. Add a configuration section for the NEOS server. This goes to /etc/lighttpd/lighttpd.conf (in vhost style though it's our only site).
      $HTTP["host"] =~ "example.com" {
        server.document-root = "/var/local/neos/htdocs"
        alias.url += ( "/cgi-bin" => "/var/local/neos/cgi-bin/" )
        $HTTP["url"] =~ "^/cgi-bin" {
            cgi.assign = ( ".cgi" => "" )
        }
      }

      [References: nixCraft lighttpd setup for Perl programs; lighttpd Configuration File Options reference.]
    5. Reload the lighttpd configuration. Execute: sudo service lighttpd force-reload
    6. Adjust file permissions. We need to give the web server user (by default www-data for lighttpd) read and write permissions to some directories:
      cd /var/local/neos;
      sudo chown -R :www-data .;
      sudo chmod g+rw htdocs/jobs workspace/spool/WEB workspace/databases workspace/tmp;
    7. Test the web interface. Make sure that the NEOS web interface works by visiting the base URL in a browser. It would be http://example.com/ for the above case. Also visit http://example.com/cgi-bin/check-status.cgi to make sure your CGI config works.
  12. Configure your system so that the NEOS server will be started at system start. This is especially relevant when using an Amazon EC2 spot instance, since these are occasionally shut down and restarted depending on your bid pricing (and yes, this means a system shutdown, not virtual machine hibernation). For that, add the following to /etc/rc.local (before the final exit 0, or disable that line if still present):
    /var/local/neos/bin/restart
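    Note that /etc/rc.local runs as root, while NEOS should run under the user chosen in step 3. A hedged variant, assuming the ubuntu user from our setup:
    su - ubuntu -c '/var/local/neos/bin/restart'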

Registering solvers at your NEOS server

Your NEOS server 4.0 is installed now, but so far it does not have any knowledge about what solvers are available and how to contact them. The NEOS Solvers FAQ explains the process of registering solvers; see also this short list of steps and the HelloNEOS example.

The problem is that the configurations for registering the solvers of the public NEOS server are all missing – they are not included in the NEOS server software, as they are considered part of the installation. You would have to write a complex XML file and a wrapper script to (for example) make your NEOS server talk to the free software COIN-OR CBC solver. In the time needed for this, you can just as well install the alternative solution COIN-OR OS, which comes already pre-configured for use with CBC. Which is why I abandoned using NEOS at this point.

Installing a solver and making it talk to your NEOS server

At this point your NEOS server knows how to talk to the CBC solver, but neither is your CBC solver installed yet, nor have we added anything to let it know how to talk back to the NEOS server. So let's do both now.

(Note: You might see the neos-comms packages for download. However, do not use these. Reasoning: These files are from 1998, while code with equivalent functionality is available inside the latest NEOS 4.0 2002-10-25 server package. That code is clearly newer and better, with comments and all. The neos-comms package contained a neos-comms-4.01/bin/client file that is not available in the new code, but it is seemingly just an added start script for the comms daemon and Tcl/Tk interface, which has since been integrated into the daemon and Tcl/Tk application themselves.)

  1. Start the comms daemon. This is the "Communications daemon" which has to run on the solver workstation (same machine in our case) and mediates between the NEOS server on one side and the solver (here CBC) on the other. It can be started with the "Communications GUI", available as a Tcl/Tk application within the NEOS server software installation. For details, see comms-help.txt (also available within the equivalent place in your NEOS server installation).

Links to NEOS documentation

AWS CLI is the Amazon Web Services command line interface tool, the new unified utility to manage your cloudy Amazon things.

People are usually told to install it with pip install awscli (including in the official docs), but this is a hateable solution for Linux package system fans because

  • You get one more package system (pip, for Python packages) which you have to keep updated, and which you will inevitably forget about. Not that it would be any worse than already having separate package managers for Firefox add-ons, Chrome extensions, Gnome extensions, Ruby gems, Drupal modules, and WordPress plugins. All of that is just plain bad. Grrrr.
  • You no longer have a single point of control and overview for what software is installed on your system.

So, let's try installing the AWS CLI from packages. Fortunately, there are fairly recent (awscli 1.2.9, from 2014-01) packages for the upcoming Ubuntu 14.04 (Trusty Tahr). We are on Ubuntu 13.10, but we can fix that by adapting these instructions for Debian to our situation:

  1. Add a package source for Ubuntu trusty, by adding a line like this (with your Ubuntu mirror) to the bottom of /etc/apt/sources.list:
    deb http://de.archive.ubuntu.com/ubuntu trusty main universe
  2. Create a preference for Ubuntu trusty packages that allows installing them when the distribution is specified explicitly, but will not select them automatically even when their version is newer than the local one. For that, create a file /etc/apt/preferences.d/ubuntu-trusty.pref with the following content:
    Package: *
    Pin: release a=trusty
    Pin-Priority: 200
  3. Install awscli and its dependencies: sudo apt-get update; sudo apt-get install -t trusty awscli.
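
To inspect how apt ranks the pinned package before and after installing, a quick check sketch (awscli being the package name used above):

    apt-cache policy awscli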

You are ready to use it now (try aws --version). Note that it includes the functional equivalent of the Amazon EC2 CLI tools, and many more Amazon CLI tools – you will very probably not need to install any Amazon-specific CLI tools any more, regardless of what outdated how-tos are telling you.

Also see Amazon's official AWS CLI documentation.

Symptoms

This issue started to occur right after installing Ubuntu 13.10 on a ThinkPad T61 with an "Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 03)" according to lspci. So, whenever I do one of these:

  • adjust volume with the hardware volume silence / up / down buttons, only while not in a Skype call
  • adjust volume with an equivalent software mixer feature, only while not in a Skype call

the effect is this:

  • the next Skype call will be completely silent (both output and input do not work)
  • the Skype call after that will play a randomized variant of rattling noise for output, while input probably works fine (judging by the level bar indicator); interestingly, the closing sound of the Skype call will play correctly, which indicates that only one of the audio streams is not initialized correctly (the Gnome sound settings panel shows that two are in use during a call)
  • the Skype call after that will play also the output, but with lots of stuttering in between
  • the Skype call after that will play the output correctly, and the input will also work fine

For this to be repeatable, allow for some 3-5 seconds of separation between calls. At times, these four steps will also be reduced to three, or the "rattling noise" or "stuttering" step could also occur two or more times.

In addition, there is a similar problem with Skype chats. Whenever I do one of these:

  • send a message

the effect is this:

  • a randomized variant of permanent, rattling noise (the "message sent" sound will not be played)

This noise will also persist through Skype calls initiated afterwards. It will (in most cases) change when sending another message, and after some messages it will also completely disappear again. The surefire way to make it disappear, though, is when your chat partner sends you a message.

Solution

The reason for this behavior is that Ubuntu 13.10 ships with PulseAudio 4.0, and Skype does not properly support that so far [source].

The proper solution is to simply install Skype from the Canonical Partner repositories (so, not by manually downloading a Skype .deb via its website). That package contains a patched skype.desktop file that now starts Skype with the proper workaround as "env PULSE_LATENCY_MSEC=60 skype" [source].

However, if like me you are used to starting applications via the Alt+F2 mini-terminal, you might have this patched Skype package installed and still find that it does not fix your audio in Skype, because the workaround in the .desktop file is then bypassed. As you probably want to keep starting Skype by hitting Alt+F2 and typing skype, here is a way to do so:

  1. Create a file /usr/local/bin/skype, with the following content:

    #!/bin/bash

    # Workaround for the Skype incompatibility with PulseAudio 4.0, as explained in
    # http://www.webupd8.org/2013/10/get-sound-working-in-skype-with-ubuntu.html
    #
    # This is already installed in the Ubuntu Saucy Skype package from Canonical Partner repos, however their
    # solution of prepending an environment variable in the .desktop file is not used when starting Skype
    # via the Alt+F2 mini-terminal. To apply the solution for that too, we need this file to supersede the normal
    # Skype command. Use "which skype" to confirm that the skype command afterwards indeed refers to
    # /usr/local/bin/skype instead of /usr/bin/skype.

    exec env PULSE_LATENCY_MSEC=60 /usr/bin/skype "$@"

  2. Give proper execution rights to that file, for example with sudo chmod +x /usr/local/bin/skype.
  3. Check with which skype to make sure Skype is now called from /usr/local/bin/skype instead of from /usr/bin/skype.

There are also some dirty workarounds (mentioned here only to learn something about Skype and PulseAudio, not as recommendations):

  1. Do not use volume buttons except when in a Skype call. This is not practical of course.
  2. Disable Skype sound events for "Call Connecting" and "Chat Message Sent", and maybe all others. This can be done in the Skype "Options -> Notifications" menu item. This however can result in Skype calls falling completely silent, and this will also not fix itself in the next call then. (It can however be fixed, right during the call even, by playing some seconds of sound from a different application.)
  3. Create the permanent random noise, then mute it via the "System Sounds" stream. This works as follows:
    1. Disable the "System Sounds" PulseAudio stream by going to the Gnome Sound Settings dialog, there to tab "Sound Effects" and set "Alert Volume: Off". Alternatively, in Skype go to "Options -> Sound Devices -> Open Pulse Audio Volume Control -> Playback", and click the "Mute audio" button for stream "System sounds".  In both cases, you will notice that the Skype sound channel in the "Applications" tab (if there is one, usually only after step 2) also goes silent, and with it the Skype notifications in chats, and your volume level adjustment feedback sound goes silent as well (explanation see below).
    2. Let Skype create a sound mess. You have to create that permanent random noise, either by using the "send chat message" technique from above (with a notification sound enabled of course), or by test playing any notification sound via Skype's "Options -> Notifications -> Test Event" button. (You can not use the random noise generated by a Skype call with broken audio from above.)
    3. Now do whatever you want in Skype, the problem with corrupted sound in calls will not appear any more, even when using volume adjustments in between of calls.
    4. You have to repeat step 2 after a restart of Skype. (The muting of the System Sounds stream stays active after Skype quits because it is not a Skype feature; however Skype is only immune against creating corrupt audio in calls once it has corrupted the System Sounds audio by executing step 2.)

Explanation Attempt

Let me try a little explanation (just from observations, I do not know how PulseAudio works internally): The problem of Skype seems to be that it cannot properly write to the "System Sounds" stream if another application has written to it in between (including the feedback sounds of the volume change buttons). When Skype tries to write to the System Sounds stream in this situation, it results in that noise (as exemplified by the "send chat message" case). Or for some reason, it can also result in all Skype sounds being muted, and on second try in that noise (as exemplified by the phone call example; it's really due to the notification sounds played at the start of the phone call, since there is no such problem when disabling the notification sounds).

So, the third workaround above works by letting Skype try (creating the noise), but muting that noise away (before or afterwards). As Skype's attempt to write to the channel never returns, it is blocked and Skype has to use a different (newly created) channel for the notification sounds when starting a phone call, as can be seen in the "Applications" tab of Gnome sound settings. That might be why now, playing the notification sound no longer results in corrupted audio. That indeed Skype tries permanently to write to the System Sounds channel (and only creating noise doing so) can be recognized from the fact that the noise stops when Skype exits: then, at last, it stops its desperate attempts of writing to System Sounds.

Other Issues: Silent Input and Output

Muted input. For some reason, at times it happens that Skype shuts off the microphone input when exiting. (This is possibly related to letting Skype access one's input mixer levels in its options dialog.) It can be fixed by going into the Gnome Sound Settings dialog, to tab "Input", and switching "Input volume" off and on again. When you see the input level bar moving when sound is present, all is well again.

Muted output. In other cases (as seen above), Skype output might be completely muted, while still working for other applications. Playing any sound from another application will fix this. And probably, going to Gnome Sound Settings and switching "Output volume" off and on again will also help.

Checkvist is a nice, web-based outline editor. Since it uses a hierarchical content structure, just like mindmapping software such as Freemind, interfacing between them can work well in theory. There are some quirks and tips though that we will explore here in practice.

I selected Checkvist among alternative solutions for the following reasons:

  • Work fast with large amounts of text. Large amounts means here something like 400k characters (400 pages A4), which Freemind can handle easily. The web-based, open source mindmapping tool Wisemapping, for example, was only able to work with a few pages of text before getting sluggish.
  • Unlimited lists, list items and other features in the gratis version. In contrast, Workflowy is also nice but offers only 250 free list items per month … .
  • Collaborative editing in the free version. Because that is what I need it for: a real-time collaborative interface for some content I developed so far in Freemind mindmaps.
  • Public sharing in the free version.
  • Comfortable importing from Freemind 1.0. In contrast, Wisemapping for example would support only imports from Freemind 0.9 directly, so you would need Freemind 0.9 installed as well to copy, paste and save your Freemind 1.0 mindmap in 0.9 format before uploading it.

How to import Freemind content into Checkvist

  1. You can only import plain text. No icons, colors, HTML rich text formatting of nodes etc., but you do not have to remove them beforehand either.
  2. Make sure you do not have multiple paragraphs of text in any one node. Because the second and following paragraphs would start without indentation in the text version, leading to hierarchy level errors during the import. So, split every node that currently has multiple paragraphs using this technique:
    1. Position the node selection at the root node of the branch you want to export.
    2. Do "Navigate -> Unfold All" (Ctrl + Shift + End).
    3. Do "Edit -> Select Visible Branch" (Ctrl + Shift + A).
    4. Do "Format -> Use Plain Text". This will convert bulleted and numbered lists into normal paragraphs, as else "Split Node" would not be able to break them up.
    5. Do "Tools -> Split Node".
    6. Do "Navigate -> Fold All" (Ctrl + Shift + Home).
  3. Copy the content you want to import. Select all nodes you want to appear on the first level after the import, and do "Edit -> Copy" (Ctrl + C).
  4. In Checkvist, select "Import" and paste the clipboard content.
  5. Click "Import tasks". It will import your Freemind content as indented text.


Wait, the *Secure* Socket Layer in HTTPS can be insecure? Yep, in the age of total surveillance, it can.

Good news: To the best of our knowledge, secure SSL still exists. (But don't trust me on these instructions with your life or the life of your website users – you have to become your own expert!) The considerations below take into account both secrecy and server performance. The tips are in decreasing order of importance.

(1) Your users need an uncompromised computer!

Because if we start with compromised hardware, all is lost anyway. The malware can simply grab your communications from the browser screen and send them to the surveillance body. No need to break SSL, then. But breaking SSL would also be simple: that malware would have an easy job hiding man-in-the-middle attacks by preventing the tools mentioned below from detecting SSL certificate changes.

The best first tip for having a non-compromised computer is having two: one for daily work, one only for the high-value communications. And you would not go near any threat on the Internet with the second one.

(2) Enforce HTTPS for all connections

For performance reasons, you might think about using SSL connections only for login (password transmission), or at least only while users are logged in (also protecting the content they post, and the session cookie which else can be used for session hijacking). However, surveillance can derive lots of metadata, behavioral data etc. also from looking at what people read while not being logged in. With proper SSL speed optimization, the server load of enforcing SSL everywhere should be manageable.
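
As a sketch of what enforcing HTTPS everywhere can look like in practice, here is a redirect rule for lighttpd (only an example, assuming you run lighttpd with mod_redirect enabled; Apache and nginx have equivalent mechanisms):

    $HTTP["scheme"] == "http" {
      $HTTP["host"] =~ ".*" {
        url.redirect = (".*" => "https://%0$0")
      }
    }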

(3) Throw out insecure SSL cipher suites

Configure your webserver to not use:

  • the old utmost crap "export" cipher suites
  • plain DES (triple DES is ok though)
  • RC2
  • RC4 (which is kind of broken)

See the source for these recommendations. Note: The SSL Labs SSL Test is a nice site to check if your configuration works as intended.
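
To check locally which suites a given OpenSSL cipher string still permits after such exclusions, a quick sketch (the string below is only an illustrative assumption, not a vetted recommendation):

    openssl ciphers -v 'HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC2:!RC4:!MD5'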

From the remaining ones, all cipher suites that use symmetric keys of at least 128 bits are ok. This is roughly equivalent to the security of 3072 bit RSA keys [source, p. 64], which are a sufficient protection against brute force attacks even beyond 2030 (as we will see below). Which means, encrypted data recorded now could only be broken some time after 2030. This is further protected by the fact that the symmetric keys are only used for one SSL session, and using brute force attacks to decrypt one such small session from 20+ years ago is almost certainly not worth the effort in 2040 or so.

In a few years, when all browsers support higher grade AES cipher suites and so on, you would of course switch to only allow at least 192 or 256 bits of security. This is equivalent to 7680 and 15360 bit RSA keys, respectively [source, p. 64], and comes at relatively negligible performance costs of needing about 30% more CPU time for the same data throughput [source].

(4) Use only DHE perfect forward secrecy cipher suites

We want to use perfect forward secrecy (PFS) cipher suites. PFS means: when the private key of the server is leaked at some time, recorded communication of the past still cannot be decrypted. It only allows the attacker to impersonate the server for negotiating keys for new sessions, until the SSL certificate expires (which is hopefully soon).

Here is how to configure your webserver for using PFS cipher suites.

However, not all PFS ciphers are the same. As Bruce Schneier writes: "Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can." [source] This means: use Diffie-Hellman key exchange (DHE) cipher suites, not elliptic-curve Diffie-Hellman key exchange (ECDHE) [source]. The danger of weak ECDHE ciphers is that recorded, encrypted communication could be broken later with limited-effort attacks (albeit only session by session). The downside of DHE, on the other hand, is "only" that it is slower.

(If you really want to use ECDHE for performance reasons, offer it only with elliptic curves that are safe. I am not sure if browsers support any safe curves [see] or how to restrict what curves your OpenSSL installation will offer to the client [see]. Tell us in the comments when you find it out.)
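
A corresponding configuration sketch for lighttpd, preferring DHE suites and dropping the ciphers rejected in tip (3) (the exact cipher string is an assumption to adapt and verify, e.g. with the SSL Labs test mentioned above):

    ssl.honor-cipher-order = "enable"
    ssl.cipher-list = "EDH+AESGCM:EDH+AES:!aNULL:!eNULL:!EXPORT:!DES:!RC2:!RC4"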

(5) Always store your private key in encrypted form

That is, protect the key file with a passphrase. This will require you to enter the passphrase when restarting your webserver process. But assuming that your webserver is stable, this only happens when you are at the server anyway. It is quite common practice to store the private key in plain text so that it is only readable by root, but that is a severe vulnerability: it would allow a remote attacker, or somebody with physical access to the server, to mount a man-in-the-middle attack that does not need a certificate change.
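
A minimal sketch with the openssl command line tool: either generate a new, passphrase-protected 2048 bit key (matching the key size recommended in the next tip), or encrypt an existing plain-text key. The file names are just examples:

    openssl genrsa -aes256 -out example.com.key 2048
    openssl rsa -aes256 -in plain.key -out example.com.key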

(6) Use a 2048 bit private key

Because 1024 bit RSA private keys are considered too weak already.

A 2048 bit RSA private key for your server is however enough. You want to avoid a needlessly longer key because SSL handshake performance degrades steeply with increasing key length [source; test results].

2048 bits are enough because: With only forward secrecy cipher suites available, a broken or compromised private key of the server only means that an attacker can impersonate the server from that moment on (that is, can do a man-in-the-middle attack without causing an SSL certificate change). But they cannot decipher past recorded communications. So the key strength only has to be enough to prevent brute force attacks during the certificate's validity. Which means 2048 bits until 2030, and 3072 bits afterwards [source]. But keep the validity period short of course (say, a year).

(7) Let your users monitor for SSL certificate changes

The problem with powerful surveillance bodies is that they are powerful: it is credibly alleged that three-letter agencies can deploy man-in-the-middle attacks effortlessly on large scales by having backdoors in consumer DSL routers [source]. This also allows compromising SSL connections, as follows. The router usually acts as a DNS server, and forwards requests to the DNS servers of the telecommunications provider. When enabled by the backdoor, this allows DNS requests to be deflected to another DNS server, resulting (for example) in a site with a fraudulent SSL certificate being served to you (which will forward to the real site, but monitor your communications) [source, p. 23].

Such an attack might use a different SSL certificate signed by the same Certification Authority (CA) – or by a different CA, it does not really matter, since browsers by default do not notify users when a site's CA changes. But after all, the certificate is different, and that is how it can be detected. Because of this, they simply cannot use these man-in-the-middle attacks on SSL for anything beyond targeted operations [source]. When used permanently, the attacks would be detected, for example by site owners who notice the difference between what certificate their site should serve to the world and what they see when visiting it in their browser.

In any case, it means that we should assume the CA-based certification mechanism to be completely broken, and should not rely on it. Until it is replaced with a distributed, reliable mechanism (maybe a PGP-style trust network? or registering the public key / domain name pairs on the Namecoin blockchain?), we have to make do with verifying the SSL certificate ourselves (see below), or where this is too much effort, with tracking certificate changes.

Tracking certificate changes provides a decent protection against man-in-the-middle attacks, since these require exchanging the certificate, as shown above. You should monitor both the differences from what certificate everyone else sees (using the Perspectives Firefox plugin) and from what certificate you saw in the past (using the Certificate Patrol Firefox plugin). From practical experience, Certificate Patrol is however not useful on large websites (Google, Facebook, Twitter etc.) since these tend to use multiple certificates, from multiple CAs, and exchange them frequently. This makes using Certificate Patrol annoying; it would be better to have an option in it that lets you switch it on only for sites that you "do not want to be surveilled on".

(8) Let your users verify the SSL public key themselves

It is alleged that three-letter agencies collude with Certification Authorities (CAs) to get a second, different certificate signed by them for every new one they sign [source, p. 23]. Which will allow the man-in-the-middle attacks on SSL explained above. It is not known if this extends to any CA outside the US (where they can be subpoenaed …). But for practical purposes, let's just assume that all CAs are compromised.

An HTTPS certificate merely says that a website's traffic goes to whoever controls the domain plus whoever controls the CA. Even worse: since browsers by default do not notify a user when the HTTPS certificate changed and the new one comes from a different CA, a user can never be sure – without manual checks at each page load – that the traffic does not perhaps go to certain three-letter agencies, as it can be safely assumed that at least some CAs are controlled by them. So, while HTTPS certificates are still good enough to exclude ordinary criminals, they are no match for massive surveillance.

(Also of course, you cannot trust the CAs yourself: never let them generate the private key for you, or upload your private key to them. With that precaution, even using a CA that is controlled by a three-letter agency is not a problem. They do not get your server's private key, they just sign your server's public key to attest that it belongs to your server. Which is a correct statement, and not affected by the CA being compromised. The compromised CA could create a duplicate certificate for a spy agency's own private key, but whether the agency uses that or one from a different CA is just a matter of taste, since a browser does not warn users against CA changes.)

In practical terms: You may proceed using a certificate from a compromised CA for your "normal" website visitors, while at the same time warning everyone that the HTTPS CA scheme is broken and users should not rely on it, but should instead verify the SSL public key themselves. Tell your users to not simply trust the CA certificate represented by a happy Firefox displaying a lock to them. Tell them to verify by themselves, the first time they use the site, that the certificate is the one issued by the site operators and not possibly by a man-in-the-middle. Together with being alerted about certificate changes (see above), this provides a very decent protection against man-in-the-middle attacks.

Verifying can be difficult, depending on the size of your user community, but as tracking SSL certificate changes is usually enough by itself, it does not have to be a very thorough validation. So you have different options. Here are some proposals, in increasing order of security:

  • Publish the fingerprint of the correct SSL certificate on the same website (a command sketch for obtaining it follows after this list).
  • In the signup message, include the SSL certificate's fingerprint. This protects against later, dynamic modification of your website content by attackers.
  • As above, but also sign the signup message with the organization's GPG key, and publish the GPG public key on keyservers.
  • As above, but let users manually verify your GPG public key fingerprint (and also the current SSL certificate fingerprint while you're at it). This can be done without personal meetings or business card handovers etc., simply by creating a live video link, making sure it's not a pre-recorded video (some talking …) and letting the site representative both speak and sign the fingerprint characters at the same time. This of course requires that the site representative is well-known, ideally from having met in person before.
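
To obtain the fingerprint mentioned in the first proposal, a sketch with the openssl command line tool (example.com and the choice of a SHA-256 digest are assumptions): it prints the fingerprint of the certificate the site currently serves, which users can then compare against the fingerprint you published out of band.

    echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -fingerprint -sha256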

(9) Use a self-signed certificate

With the above technique of still using CA-certified public keys but warning users against trusting them, most users would not see or simply not follow the warning. They can be forced to deal with untrustworthy certificates, though, if you simply use a self-signed certificate. It will cause the browser to prompt users to add a security exception, so they will have to verify the certificate before doing so.
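
A minimal sketch for creating such a self-signed certificate with the openssl command line tool (file names and the one-year validity are example assumptions; the key is generated passphrase-protected, as recommended in tip (5)):

    openssl req -x509 -newkey rsa:2048 -keyout example.com.key -out example.com.crt -days 365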

(10) Let your users monitor IP address changes

If an attacker can get hold of the server's original SSL private key, he can impersonate the server without the tools above detecting an SSL certificate change. However, a similar tool would detect an IP address change. So you would announce the IP address of your server, and changes to it, for users to verify, in analogy to how you want them to verify the SSL certificate fingerprint above. (And while you're at it, you might even want to switch to a self-signed certificate, which could then also include the IP address for free. At least on a second, synonymous domain or subdomain, for the users who know what they are doing.)
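
A sketch of what such monitoring could compare, assuming the dig tool (from the dnsutils package) and example.com as the site: query both your normally configured resolver and an independent one, and compare the answers against the IP address you announced.

    dig +short example.com A
    dig +short example.com A @8.8.8.8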

(11) Exchange your SSL private key and certificate frequently

Not sure about this one. It seems better to invalidate a SSL private key after a month than a year, which will prohibit an attacker from going on for long with man-in-the-middle attacks using your original private key that are undetectable by the above SSL certificate change monitoring. However, it introduces a task for users to manually verify the fingerprint of a new SSL certificate every month or so. Which is unrealistic for nearly all public-facing websites. Also it might not be needed because a man-in-the-middle attack could still be detected by IP address change monitoring proposed above. (However maybe even IP addresses can be bent by the secret services? I just don't know.)

(12) Put your server at a secure location

Even if you have encrypted your private key as stored on the server, there is a chance that it might be read from a memory dump. Such a dump can be obtained by anyone with physical access to your machine, or with remote access to the virtualization system if you are on a VPS (virtual private server) host. So at least do not rent a server in a country where three-letter agencies have easy access to company secrets, and also don't host at large hosting companies. Ideally of course, place the server physically at your home. And there, in an intrusion-protected room. With lots of concrete around. In your basement. But ahh well … sorry, now I became paranoid about it all 😛

This is an intellectual challenge: how to design a generic, many-to-many communication system that prohibits surveillance entities from proving that you (1) read some website or (2) contributed content to some website, even if they do (1) capture and analyze all traffic on the Internet and (2) can break all encryption that is used for many-to-many communication (in practice mostly SSL, which might be broken in many cases by the NSA using MITM attacks). The only capabilities that we assume the surveillance body does not have are (1) breaking encryption used for local-only storage, such as TrueCrypt, and (2) breaking encryption used for one-on-one encrypted communication between parties who know each other personally (which is comparatively simple to achieve with PGP etc.). So we're only talking here about them treating you as part of the big mass of people (one of the many activists out there …), not as one of the select few for which they do "targeted access operations" to infect your computer by software or hardware …

Note that, as we assume that the surveillance body captures all Internet communication globally, Tor can no longer be considered secure: they can then do timing correlations on the whole Tor network at once and with that information (simplified by running some Tor nodes of their own …) de-anonymize its participants. (For that reason, we cannot use realtime two-way communication at all.)

So: here's my proposal from three hours of thinking on today's evening about this. I guess it's pretty wanky 🙂 Anyway, your feedback is welcome.

The basic idea is to hide reading the website steganographically in reading another unsuspicious website, and contributing to the website steganographically in a botnet infection (and by contributing from public wi-fi only). So the surveillance body would see you communicating, but you can plausibly deny reading and writing on that forbidden website; you just read a photo blog and had a spam virus infection … can happen, right? 😛

Part by part:

Deniable reading: steganographic site-in-a-site

This needs an unsuspicious "host website" with considerable data traffic for every user. For example a photography forum or even a porn site. Being used as a host site could be negotiated in secret with the operators (if you are an activist with a valid cause to which people tend to agree), or the site could also be hacked for that purpose, or a site can be "reused" which happens to be a customer website on a web server you operate. But that's quite evil … . In any case, the site should have a large existing community so that everybody can justifiably claim that they just used the host site and did not know anything about the payload data hidden in its traffic.

So to read the "secret website" you want to access, you do a daily round on this "host website", looking at new posts etc. You will use a special browser (started from a steganographically hidden and encrypted partition on your computer) with a plugin that extracts the new steganographic payload data from this host website. So every new day of updates on the host website also contains the new day of updates for the payload website. The updates are very compact, compressed, git-style updates, probably just plain text. Also, to make connecting input to users even harder, normally every post in the payload site is anonymous (not even pseudonymous!), but users could identify themselves with transient handles for just a few posts to create necessary context in a thread.

Instead of starting the payload extraction and decryption software from a steganographically hidden area, another alternative is to get it to be part of the basic operating system installation for everyone. Which is then likewise unsuspicious because it provides an alibi.

The payload website would not be encrypted. So the surveillance body would find out about it quickly, and it would take some weeks or months to get the host website switched off or its "infection" with the payload site removed (by choosing a proper jurisdiction for the server location, and by using Tor to hide its location somewhat, it can definitely take that long). At which point the payload site would switch to a different host site. Even better, it would always use different host sites in parallel for redundancy, and switching from one to the other would not need any re-downloading of previous data. Just the new "git commits".

But maybe it is better to have the payload data encrypted – if it helps to keep the "infection" from being detected for a long time, it definitely is better. That however implies that every user has to get their own specialized payload, encrypted with a PGP public key. So the host site cannot be a broadcast type of site (like a forum), but has to provide content to every user (like a PTT voice messaging site for example, since PTT voice messages can well carry steganographic payload). The payload site server would also encode slightly randomized payload for the different users, and of course not log the random elements, to prevent the connections between the public keys (on the server) and the user accounts (of the host site) from being made when the server is compromised. Which means that even then, nobody can tell which users got data with payload and which are just normal users. Only when finding the users and seizing their computers could one tell that … but no, not even then, since (1) the users usually cannot be found and (2) their private keys for decryption are steganographically hidden on their computers (see below).

Against breaking communication encryption: anonymity by free wi-fi

There is no issue with encryption being broken (or content not being encrypted at all!) if this still does not give them a hint to your real-world identity. The solution is to not give them any connection between the IP address and personal identities. How to achieve this?

  • Use a mobile device that looks for open wi-fi networks while you walk around in a big city. If it finds one, it will connect to it quickly, send its data, and disconnect again. The data is a git-style, very compact update to the shared content of the website / forum you contribute to. With this method, you can sync to that website 1-2 times a day, which should be enough for most purposes.
  • If your country does not allow anonymous Internet access at all (such as in China), send your data by encrypted e-mail to somebody abroad whom you trust and who will perform the above procedure for you. If you don't have somebody you trust, you are doomed anyway 😉

The security and safety of this procedure can be enhanced by:

  • using directional wi-fi antennae hidden below your clothing (for example along your arm – the system will guide you to point your arm in the right direction to get the best connection to a remote wifi 😉)
  • more importantly, using a new, spoofed MAC address for every connection – important in case the wi-fi network logs the MAC addresses of connecting devices (see the sketch after this list)
  • choosing only wi-fi networks open to the general public (in bars, airports etc.), to avoid raising suspicion of the surveillance body against individuals whose wi-fi you would else compromise
  • disguising your optical appearance against face recognition through surveillance cameras etc.
  • unsuspicious behavior, to prevent raising suspicion in surveillance videos etc.; this however is a pretty low-grade threat, as it involves a lot of manual coordination work ("targeted access operations"), which a surveillance body cannot do for its whole population
  • writing from a different device than you use for reading; you would use commands like "reply-to:post452798" for placing your content at the right spot of the website; you would never ever exchange data between the two devices (since then, data coming from a trojan by which the surveillance body infected your reading computer could also be transferred to your writing computer and could prove that you participate in the website instead of just "unknowingly" getting its data in steganographic form)
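
A sketch of the MAC spoofing mentioned in the list above, assuming Linux with the iproute2 and openssl tools and a wi-fi interface named wlan0 (all of these are assumptions to adapt): it assigns a random, locally administered MAC address (02:…) before each connection.

    sudo ip link set dev wlan0 down
    sudo ip link set dev wlan0 address 02:$(openssl rand -hex 5 | sed 's/\(..\)/\1:/g; s/:$//')
    sudo ip link set dev wlan0 up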

Deniable contributions: a botnet is controlling your device!

This is a pretty funny idea: claiming that your computer (that you use for sending) does stuff that you don't even want it to do is perfectly reasonable. So even if you are caught sending through a free wi-fi network, you still have an alibi. Because there will indeed be a virus on this device, that also has the habit of sending out e-mail spam, but it is special in that you can control it. You can also justifiably deny doing so, since all the programs and data to do so (including your website contributions before being sent) are in a TrueCrypt-style partition with a full filesystem that is steganographically embedded in your personal library of self-made photos. Because these are your own photos (and you did not publish them anywhere!), nobody can claim that you steganographically modify them since they did not see them before.

But we need some more stuff: as spambots usually do, yours will auto-generate the spam e-mails you send out, including many spelling errors, random changes to embedded images etc., "to pass the spam filters". These changes are also what allows you to add the steganographic input, which will look just like random changes, so just like normal spambot behavior. In reality, it is encrypted with a public key of the server of the website you contribute to, and as long as that private key stays private, your alibi is safe. This is effectively the one-on-one (P2P) encryption which we assumed above to remain unbroken. But even if it gets broken, you are still not caught by any means, since you always use free wi-fi for Internet access 🙂

The e-mails will travel to the server hosting the secret website (or to a P2P encryption connected, befriended server), with the alibi that it is also an e-mail server. The server will however claim that these e-mails are spam, and not forward them to its users. But of course, internally evaluate them to extract the payload data. The private key for doing that, and the whole "secret website" software has to be protected in the server, of course, and has to have "deniable existence". This is possible by, again, using steganographic storage of a TrueCrypt style partition, maybe in the image data of the "host website". When the server is physically accessed, it will quickly unmount that logical partition, delete the access key from its memory, and be just another normal webserver 🙂

Simplifications and optimizations

  • It is not needed to hack or infect a host site to use it for embedding another site. Just select one that allows anonymous image or video uploads. If doing so, it should however be a site where all the steganographic content is downloaded by regular users too, so the steganographic users have an alibi. For example, a meme collection site that allows anonymous submissions and publishes daily updates is a good choice. Or a site that hosts pirated content or porn. This is much better than "infecting" sites and running a separate server for the steganographics, since this way, no own server and no infected site can be taken down. All own software runs on the clients (which would have a little configuration file that selects which URLs to download for steganographic content and which for cover, and a decryption key to access the extracted steganographic information. Depending on how widely this key is shared, it becomes anything between steganographic 1:1 communication and many-to-many communication.)
  • The important point is to make steganographic communication comfortable. Not like e-mail, but like a full forum, even with special features like calendars etc. Only then can one organize social change with it. The way NNTP (Usenet) works is a good paradigm: a desktop client software collects message / data packages from somewhere, and provides the frontend locally.
  • For anonymous posting on public wi-fi networks, maybe one can even use little quadrocopter or blimp drones, operating at night. During the day they would hide in some place not at your home, and in passing on your way to work etc. you would transfer the next set of data to upload to them.

In your twenties, you were a visionary. You wanted to learn it all, and fix it all. All the world.

Ever realized that you cannot do everything that is meaningful in your life? When you dedicate your life to help people with HIV, you can't go find a cure for HIV. Or find the quantum gravity model. Or develop sustainable government. Or find out and teach us all about the Transcendental and God (if you find there is God). Or clean up all the landmines. Or the ocean plastic. Or invent a fair-for-all mode for economic exchange. Or this. That.

Because your lifetime is limited.

And then you realize, it would be great to at least achieve one of these. And then you focus on that one.

And then you realize, you have neither time nor money for even one of these meaningful contributions (… contributions to what, actually?). Because your parents might be old, needing your help. Or you made children, just like everyone else, and now have to care for them. Or you got fired from your job, the bank took your house, and now you're living in a tent. That you found in that garbage can. It's just a tarp actually. Or you get medical conditions, so you can be happy to make it through the day.

Because your lifetime is limited.

And then you realize, your life will pass and end as meaningless as everyone else's life. And life, what is life? It then seems like a meaningless aggregate of matter to you. You, yourself, just a bunch of atoms, with your consciousness an unnecessary (and unpleasant) emergence of it.

And you start to enjoy that your lifetime is limited, not your limited lifetime.

Stop that.

Now, come back to your visions.

Just change one thing: it should be no longer your vision, now it's ours. Humanity's. We are all in our twenties again.

Everyone who has given up on seeking, and expecting to see, the abolition of greed, poverty and evil, and the introduction of immortality and freedom for all, has given up living while alive. Seek, and expect, again. Because now we seek, and expect, together. You were frustrated by your powerlessness as an individual. Now marvel at what seven billion can do. And what God will do, if there is a God, and seven billion seek him.

Yes, you should expect and seek God, because there might be God after all. But do not forget all the rest of what is good. Physical immortality. Good governance structures. Unextinction of animals. Desert forests. The Theory of Everything. Space colonization. So much before us!

Now what? It's all about how we organize. If your grandma cooks a simple healthy meal for scientists working on quantum gravity, she contributes. If you read news about political quarrels, visit touristic spots from your hard-earned surplus money, engage in any avoidable consumption, you do not.

Wake up, all of us!

The pieces are coming together already. Take note, organize yourselves, contribute. Some inspirations? Here you go:

And of course: Are we alone in the universe? What does it all mean? Are we sure about this? Why? Re-asking the big questions is probably one of our biggest challenges. Us modern folks got so used to the scientific stories of Big Bang, cosmic evolution, and biological evolution. And now, scientific evolution comes along and puts into question the very concept of space-time. And with it, the existing notion of Big Bang.

Now, what?