Because I live in a truck, in summer I have the interesting problem of excess (practically free) photovoltaic electricity. The same can happen in an off-grid home, or in a grid-connected home in a location with zero-export regulations.

This is a small overview of the current options for earning something from your unused or underused computing resources and/or electricity.

The list is ordered by “recommendability”, judged purely subjectively by myself:

  • SONM. Blockchain project where you rent out systems by time, like VPS hosts on a cloud platform. The system just went live (yesterday). Looks like well-done tech, worth a try. Of course, nobody knows yet what you can earn with this, but earnings should not drop below the price of the electricity used. So if you have excess (“free”) electricity available, it’s always a net benefit for you.
  • Golem. Blockchain project for various special-purpose computation tasks (Blender rendering, later machine learning etc.). Has already been live for a few months; see here for reported earnings.
  • iExec. Blockchain project where you rent out your CPU resources and earn tokens. For a comparison to SONM, see here.
  • Primecoin. The first “meaningful mining” coin ever created. Coins are mined by securing transactions with prime number chains called “Cunningham chains”. This is for the most part basic research, but it has some uses: “Cunningham chains are now considered useful in cryptographic systems since ‘they provide two concurrent suitable settings for the ElGamal cryptosystem … [which] can be implemented in any field where the discrete logarithm problem is difficult.’” (source). For results of the prime number chains it found, see the records and the details. The coin is “naturally scarce” due to the scarcity of prime numbers, though the upper limit of coins that will ever exist is not known beforehand (nice feature :D).
  • Gridcoin. One of the first “useful mining” coins, started in October 2013. Uses an interesting concept called “proof of research” that combines proof of stake and proof of BOINC (contributions to the BOINC platform for distributed scientific computing). You are not paid by the BOINC projects but donate your CPU resources to them; instead, you are paid in newly minted Gridcoins. Since this (together with the 1.5% inflation from the proof-of-stake part) sets Gridcoin on a path of continuous inflation and there is no immediate use value for Gridcoin (except speculation), this is a rather poor design for a currency. I once tested this, about 1–2 years ago, and calculated what I could make by running my i7 notebook on excess solar power (4–6 hours a day): only 1–2 USD a year.
  • EFF prizes for large primes. You can participate in GIMPS (a collaborative effort hunting these primes), but this is more for sportsmanship than for the money, as there seem to be no regular “mining pool style” payouts or shares of a future payout in case of an eventual collaborative success. GIMPS will distribute a small fraction to the person actually finding the prime on their computer (3000 USD of 150k USD? compare here and here). You could instead hunt these primes solo, but the chances of success are of course slim. Good for those who like playing the lottery and have free electricity around, so it does not cost them anything …
  • Proof-of-work mining. There are lots of cryptocurrencies you can mine with proof-of-work, including Bitcoin of course (but that’s only meaningful with GPUs and ASIC miners these days) and others that are designed to be economically CPU mineable. However, I don’t recommend this, as all these calculations are used for nothing beyond securing transactions – which can also be done with proof-of-stake instead of burning all that electricity. All mineable coins where mining serves a meaningful purpose beyond this have been included in the list above.

And some not yet or no longer functional projects:

  • DCP. Very similar to Gridcoin, as rewards are again earned from BOINC calculations. But it seems to provide a more modern tech stack that could potentially handle other tasks in the future. Not released yet.
  • Curecoin. Similar to Gridcoin, but limited to only one of the BOINC tasks (protein folding). Also, only half of the energy is used for these computations while the other half goes to proof-of-work. Gridcoin does not have that issue, as proof-of-stake uses only negligible CPU resources. This applies to the previous version; the coin seems to be undergoing a rewrite / relaunch currently.

There are other (blockchain based) projects that reward people for data storage, data transmission (CDN, video streaming), attention (“voluntary ads viewing”) and sharing personal data. I focused on CPU / GPU intensive tasks here, as they are the best option when you want to “burn” free electricity as meaningfully as possible.

At the company I co-founded, we have tried for quite some time to find a collaboration software solution that works for young, free-range, independent workers. We’re settling on Dynalist for now – which is not open source 🙁 but otherwise, after some necessary adaptations, close to perfect for our uses.

Below is a list of various applications I studied during our search, ordered roughly by suitability for our purposes, the best first.

  1. Dynalist. Not open source. Unlimited nodes in private lists even in the free version. Tags, due dates, Markdown formatting. Nice search options with link to searches, allowing “GTD” type selections of nodes like “everything due in the next week”.
  2. Workflowy. Not open source. Like Dynalist, but with fewer features. “The original”.
  3. Open Source Dynalist / Workflowy replacements. Of course that would be the ultimate solution, but we’re not there yet. I found several promising base software applications though, if you want to invest some work (best first):
    1. Treenote. So far, an offline outliner application similar to Workflowy. An online variant with realtime collaboration is in the making as a master thesis project, and “nearly finished” as of 2017-11. That could be the complete solution, so let’s keep an eye on what happens here.
    2. Etherpad Lite. A proven, open source realtime collaborative editor. There are multiple open source variants (most notably Stekpad, formerly Hackpad) and multiple plugins. However, so far there is nothing like the list folding and zoom-to-item features of Dynalist / Workflowy – it’s all one long document, and the tasklist plugin only adds checkboxes before list items (see). Tag, search and filter functions are also not nearly as functional for a GTD / task list application as they are in Dynalist / Workflowy, and there is no deadline feature. But the collaborative editing part is there (incl. full history and authorship) and the plugin infrastructure is there, so it seems doable. Given the advanced state of its realtime editing capabilities, and the difficulty of getting this part right, this is probably a better base software than any of the alternatives below.
    3. Vimflowy. See also here on Github. It’s the closest open source Dynalist-like software that I found. It can be used with the mouse, while the Vim modes are also useful after getting used to them. It can do remote data storage, but unfortunately no collaborative real-time editing. So that is a major thing to add (but it could be simple if true realtime updates are not required – say, AJAX to push changes and a button to pull changes). Also, the design needs work and a lot of little bugs have to be fixed. But it’s promising, and in active development as of 2017-11.
    4. ndentJS. Engine / base component for a hierarchical list widget with realtime collaborative editing.
    5. Concord. Another open source, JavaScript engine for Workflowy / Dynalist style task management applications. Seemingly the only one that is available open source. Documentation is here. Needs some programming to create a useful application out of this, though, because even in its most advanced incarnation (Fargo) it was “just” an outliner (see); real-time collaboration features, specialized features for tasks etc. would still be missing.
    6. HackFlowy. And another engine / base component for a hierarchical list widget with realtime collaborative editing.
  4. Taiga. Open source, kanban style collaboration tool. Nice, but you have to like the kanban way of doing things. For my taste, it is still too much form filling for truly agile, “uninhibited” collaboration. In large, esp. public projects where you need a full revision history (such as open source projects with a public issue tracker), Taiga is a great tool though – collaboration has to be less agile, more formalized there to work.
  5. Wekan. Open source, similar to Taiga and Trello.
  6. Tracks. Open source, Ruby based GTD application. Mature, but not much in development anymore – more than 10 years in the making. Lacks a more comfortable user experience (no drag & drop between projects etc., rather some form filling) and lacks collaboration features (every user account seems to manage only its own tasks). Otherwise, very nice. You can try it out with a test account on gtd.pm.
  7. Gingko. Not open source. Very nice and somewhat similar to Dynalist and Workflowy concept-wise, but more specialized for writing longer texts. While it can be used for task-based collaboration, it lacks specialized features for that (no due dates, no “focus” mode). Pay-what-you-like sales model; the free version is limited to 100 cards per month.
  8. TDO. Open source, minimalistic, nice little kanban style task manager. But it allows no sub-tasks (which is what I don’t like about the kanban style), and is seemingly not made for collaboration.
  9. Nitro 3.0. Open source, nice, collaborative task manager with markdown, due dates, priorities, notes on tasks etc. It just seems that tasks cannot be nested but are contained in multiple flat lists (if Nitro 1.5 is any indication). Also, Nitro 3.0 is not yet released as of November 2017 and is a complete rewrite, so it will probably not be available as stable software for some months still. But then, definitely worth a look.

For a friend, I recently researched which notebook can be recommended for Ultra HD video editing (4K UHD, 3840 × 2160 px). Here is, in short, what we found.
 
First priority: Intel Core i7-7xxx CPU, as fast as possible
There are three major ways to encode video: in software on the CPU (with the libx264 and libx265 libraries), in hardware on the CPU (using Intel's or AMD's dedicated encoding features), or in hardware on the GPU (using Nvidia's NVENC mechanism). The hardware based mechanisms are much faster: one comparison test showed 55 fps on an i7-5930K CPU and up to 540 fps on an NVIDIA GeForce GTX 980 GPU [source]. So a factor of 10 can be expected.
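
To make the difference concrete, here is a hedged sketch of how the same clip could be encoded in software versus on the GPU with ffmpeg; the flags are illustrative defaults rather than tuned recommendations, and the NVENC variant assumes an ffmpeg build compiled with NVENC support.

```bash
# Software (CPU) encode with libx264 – slow, but best quality per bit:
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 20 -c:a copy out_cpu.mp4

# Hardware (GPU) encode with Nvidia NVENC – roughly an order of magnitude faster:
ffmpeg -i input.mp4 -c:v h264_nvenc -preset slow -b:v 20M -c:a copy out_gpu.mp4
```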
 
However, hardware encoding is somewhat limited in terms of features, so the same video quality will result in a bigger file size – or, for the same file size, you get less quality. For example, Nvidia NVENC supports B frames (bi-directionally predicted frames, which reduce video size by approx. 30% at the same quality) only in H.264, not in H.265 [source]. So the enthusiast video editor will probably want to do the final encoding runs in software on the CPU, which can be 20 times slower, but gives better quality for the same file sizes. Preview versions can still be created with CPU or GPU hardware support, and they do not need as much computation power anyway, as resolutions will be lower. Also, the primary and most powerful competitors are reportedly Intel CPU hardware based encoding on Kaby Lake processors (Core i7-7xxx) and Nvidia Pascal GPU based encoding on GeForce GTX 10xx graphics chips, and the best of both are approximately equally powerful. There seems to be better software support for the Nvidia solution though (but that is just a rough impression).
 
As a consequence, when you want the best quality and accept a long final encoding run in exchange, the graphics board does not really matter much; it "only" has to be suitable for playing back 4k video and perhaps applying some live effects to it. So even a "previous generation" (Maxwell based) Nvidia GTX 960 will do, as found in some models in this list of 4k editing notebooks. You will however want the best and fastest CPU you can get, which (in gamer notebooks) seems to be approximately the Intel Core i7-7700HQ (2.8 GHz).
 
Second priority: display
The next question is what display to get. Choices are between 15" and 17" displays, and for both between Full HD (1920×1080) and 4k Ultra HD (3840×2160) resolutions. Without a 4k display, you obviously can't watch your 4k video in full glory while editing, but even with a 17" 4k display, pixels are so small that there is very little to no optical difference between a Full HD and a 4k Ultra HD display (as reported by gamers). You will have to zoom into frames to see quality differences anyway. But the price difference is sometimes just 200 EUR, which might make the 4k Ultra HD display worth having.
 
Third priority: main memory, mass storage
These things can be upgraded as needed, so you don't have to make a final decision at purchase time. 16 GB DDR4 RAM and a "128 GB SSD plus 1 TB hard disk" combination seem a reasonable minimum though. To speed things up, the SSD should hold the operating system, software, and the video files of your current project, while the (cheaper and larger) hard disk would hold all the archived video editing projects.

Model recommendations
The most interesting models (high performance but at the lower end of the possible price range) that we found are these:

  • HP Omen 17-w207ng, 1500 EUR, i7-7700HQ CPU, Nvidia GeForce GTX 1050 Ti, 17" display 3840×2160, 256 GB SSD, 1 TB HDD
  • HP Omen 15-ax202ng, 1300 EUR, i7-7700HQ CPU, Nvidia GeForce GTX 1050, 15" display 1920×1080, 256 GB SSD, 1 TB HDD
  • Dell XPS 15 9560, ca. 1600 EUR, i7-7700HQ CPU, 15" display 1920×1080


For a crisis-mapping project after the 25 April 2015 earthquake in Nepal, we needed a collaborative online database that is easy to set up and maintain. I found the product on obvibase.com to be the best existing solution – much, much better than the misuse of Google Spreadsheets that people usually engage in.

However, the Obvibase software is not free software, and it is not perfect either. So here's my list of improvements that I just submitted to them as feedback, after a month of frequent and in-depth usage of their database. If somebody wants to do so: a free software clone of obvibase.com plus the suggestions below would get us pretty close to the best collaborative ad-hoc database solution ever 🙂

  • Alt+Arrow Left should work for "page back", but does not.
  • The "Main menu → More actions … → Restore …" action's form should list the person who has done an edit in another column, and also what changes were done (showing the before and after version of the affected cell).
  • For better privacy protection and because people are used to it already, sharing should work like in Google Docs (adding Google Accounts who get access, with different access levels per account). Otherwise, sharing the access link accidentally gives people read-write access, which cannot be revoked again.
  • The "Page Up" and "Page Down" keys should work in the list view.
  • Pressing the space bar on the first selectable column (with the double arrow) should select the record, adding a checkmark to the very first column.
  • It should be possible to do simple styling of columns (background color, font color, bold font, italic font). Then, one can mark one column as more important and speed up visual navigation.
  • Column titles should be formatted in bold and / or with a background color, to speed up visual navigation on the screen.
  • There should be a way to create a new database in a new tab. If the menu link is a normal link that can be opened in a new tab, that will be enough.
  • There should be a sharing mode where people with the link can add records and edit or delete their own records, but not those of others. This would have to require login with a Google Account, because the general read-write access link cannot be shared publicly without risking access by destructive individuals who might delete every record or overwrite it with spam.
  • In the web-published, read-only version, long text cells should have an on-hover box showing the full text, as done for all other versions.
  • It should be possible to mark individual columns as non-public in the column settings, which would exclude them from display in the public version that is read-only accessible to all web visitors. This would allow collecting public and confidential information (like personally identifiable information) in the same database.
  • Full export to CSV. There is currently no simple way to export and re-import the full database incl. all nested records to one or a few CSV files. It is however needed for making local backups. The problem is that exporting the main database table misses the records from any nested table, and exporting a nested table misses the records from any other nested table, and also any record from the main database table that does not have a record in the exported nested table.
  • CSV exporting from nested tables should not repeat records from the main table. This happens currently, but redundancy is almost always a bad idea. Instead, the nested record should refer to the ID of its parent record ("foreign key relation").
  • SQLite3 export. Would be good for offline usage, as an interface to other tools, as a way to run complex SQL queries on the dataset, for comfortable backups (unlike CSV export, which misses some data) etc.
  • SQLite3 import, including reconciliation. To allow people "in the field" to contribute without Internet access, there should be an SQLite3 import feature. Concurrent edits would have to be reviewed before finishing the import.
  • ODS export. Not that important, but would be good for having an alternative to the SQLite3 format export. The export would include multiple sheets, one for the main database and one for each nested table. Embedded scripts would be used for filtering when following a link from the main database table to the associated records in the nested table.
  • When pressing Ctrl+V in a cell while it is not in edit mode, the current clipboard content should be entered into the cell. Currently, the Paste / Import window is shown, which is confusing because Ctrl+V is a clipboard operation in all applications, so people will use it intuitively (forgetting to go to "Edit mode" before). The current behavior also has two issues: in addition to showing the Paste / Import window, a "v" is inserted into the cell when pressing Ctrl+V, and the focus is on the cell rather than in the text field of the Paste / Import window (which prevents the Esc key from being usable to close that window).
  • When trying to change the filter value for a text column by clicking on the header, the caret is initially at the beginning of the value. It would be more useful to position it at the end. More importantly, even though the caret is in the filter value text box, the "Pos1" (Home) and "End" keys navigate to the first and last menu entry instead. They should however move the cursor.
  • To reset a text column to "no filter value", it should also be possible to click the column header, delete the filter value in the text field and then press the "Apply" button or the Enter key, because that is what people intuitively expect. Currently, a message will appear saying "No text to search for".
  • Search by transliteration and equivalent Latin characters. To find the records one is looking for fast, there should not be a need to enter special characters. For example, when a record contains the name "Matjaž", it should be possible to find it via "Matjaz" and also by any transliteration of "Matjaž" to basic Latin characters.
  • In case of a synchronization error, currently a message will appear saying that "some records have been rolled back". This means data loss, maybe of up to 30 s of editing. It is not due to concurrent edits, but rather due to intermittent connectivity problems. Instead of causing data loss, another solution should be possible – for example, asking if syncing should be tried again, and finally offering the rolled-back data in CSV format so it can be copied and re-added quickly.
  • The database becomes hard to navigate if there are many columns, exceeding the screen width. For this quite frequent case, the database should allow several lines per record, distinguishing between vertically stacked columns by using different text formats.
  • The textbox showing values for multiple-choice columns has too much padding left and right, causing line breaks where there should not be any. Also, it should always be at least the same width as the column itself, for the same reason.
  • When changing selectable values in the settings dialogue for multiple choice columns, it should be possible to delete existing values and create new ones, or to edit existing values (which will edit all records that use the old value accordingly, saving a lot of work).
  • For powerusers, the filter field when clicking on column headers should allow regexp searches (and storing them in a "recent regexp searches" sub-menu). This can help with analysing columns that have comma-separated values, hierarchical text values and other constructions in them.
  • URLs in all text columns should be automatically made into clickable hyperlinks (and automatically shown abbreviated when not in edit mode). Else, it's additional work to go into edit mode and select and copy the URL manually.
  • Subtable records should be reachable with no or very little additional work. This is esp. important when storing contact information there, as it frequently contains hyperlinks. Proposal: when hovering over the "[…] records" entry that links to a subtable, an on-hover window should appear that shows the subtable records, incl. columns, hyperlinks etc.
  • Currently, reverting to a prior version (via "Main menu → More actions → Restore …") will lead to a database that uses a new URL, without notifying the team about this. This means starting a new branch from the version one reverted to, without knowing, and without a way to merge changes into the main branch that the rest of the team is working on. This should be considered a bug.
  • It would be good to be able to see who is editing the database at the same time, just like in Google Docs and Google Spreadsheets, where user icons show up in the top right. It could be the same user icons as used for the Google accounts, since login happens via a Google account anyway.
  • In the form to create a comment, it should be possible to press "Ctrl + Enter" instead of clicking "Save" with the mouse. It's faster, and it is also how comments work in Google Docs.
  • For more comfortable and faster visual navigation in long tables, it should be possible to style each column separately in the column settings. A combination of several options for influencing the style would be available: font size, font weight, font color, overflow / line break behavior etc.

Downloading your OsmAnd~ maps to a computer and installing them to your Android phone from there has several advantages:

  • Use cheaper or faster network access. In a case where, for example, there is no wifi available but your computer is connected to a wired Ethernet network, downloading the maps to your computer is probably much faster and cheaper than using your mobile data plan. (An alternative is configuring tethering at your computer. USB tethering and Bluetooth tethering would work, but wifi tethering can be a challenge to set up since most wifi hardware in notebooks does not support access point mode, and Android might not support ad-hoc mode.)
  • Avoid tracking. You avoid being tracked by the OsmAnd~ maps server, which otherwise "will send your device and application specs to an Analytics server upon downloading the list of maps you can download" [source].

So, how to do this?

  1. Just download the relevant zip archives from the OsmAnd maps index.
  2. Connect your Android phone and unpack the downloaded zip archives into the osmand folder on the SD card. (Alternatively, use adb push filename.ext /sdcard/0/osmand/ according to these answers.) A command-line sketch of both steps follows after this list.
  3. Start OsmAnd~. It should read and index the files (which will take a bit of time on the first startup).
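
A hedged command-line sketch of steps 1 and 2 from a Linux shell; the archive URL and the region file name are placeholders for whatever you picked from the maps index:

```bash
# Download the map archive picked from the OsmAnd maps index (placeholder URL):
wget -O region.zip "<archive-url-from-the-maps-index>"

# Unpack it; the archive contains one or more .obf map files:
unzip region.zip

# Push the .obf file(s) to the phone (target path as in the adb answers linked above):
adb push <Region>.obf /sdcard/0/osmand/
```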

One of my websites was constantly throwing "Internal Server Error" errors, and that error appeared as follows in /var/log/apache2/error.log:

Thu Oct 16 17:59:14 2014 (19446): Fatal Error Unable to allocate shared memory segment of 67108864 bytes: mmap: Cannot allocate memory (12)

And that even though 9 GiB of memory was free. Also, only one website was affected; the others ran fine. The error appeared even when requesting files that did not exist at all (independent of file type: JPG, PHP etc.). After ten minutes or so, the error would disappear on its own, for a while. Also, after restarting Apache the problem disappeared, for a while.

Reason

The problem was due to hitting the system limit on shmpages (shared memory pages), similar to this case. This limit effectively only exists in virtualized (VPS) environments, not on physical machines. Hitting it can be confirmed by running "cat /proc/user_beancounters", which in our case would output, right after the above error situation: shmpages held=247122 [...] limit=262144 failcnt=10067. At 4 kiB page size, the maximum allowed 262144 shared memory pages correspond to (262144*4096)/1024^3 = 1 GiB of shared memory.
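
For reference, a quick way to check this on the VPS (the values in /proc/user_beancounters are given in 4 KiB pages):

```bash
# Show the shared-memory page limit and current usage / failure count:
grep shmpages /proc/user_beancounters

# Convert the 262144-page limit to GiB (4096 bytes per page):
echo $(( 262144 * 4096 / 1024 / 1024 / 1024 ))   # prints 1
```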

When restarting Apache, the problem would be temporarily resolved because shared memory usage would initially be lower: restarting reduced the number of shared memory pages held from about 247000 to 148000, as per cat /proc/user_beancounters.

The 67108864 bytes of shared memory to be allocated, mentioned in the error message, give a hint as to what consumes this much shared memory: it is just the default value of PHP's opcache size of 64 MiB, configured in /etc/php5/cgi/php.ini (because 67108864 / 1024 / 1024 = 64). The problem is not that PHP-FCGI would try to allocate 64 MiB of opcache shared memory once, or once per site, but that it does so once per process [source]. PHP-FastCGI processes are reused for several requests [source], so the opcache caching still makes some sense, contrary to this opinion. However, the processes are relatively short-lived, as their number varies depending on site load, so the caching does not add much benefit. And worse, the opcache caches are not shared between the PHP-FastCGI processes – each one gets its own. With about 10 processes per site, each consuming 64 MiB of shared memory, we quickly hit the 1 GiB shared memory limit of the VPS, as above. To illustrate, these were the values in my case: the free command indicated the following shared memory usage:

  • 69 MB some seconds after stopping apache2
  • 136 MB immediately after starting apache2 (service apache2 start; free)
  • 500 – 700 MB some 30 – 180 s after starting apache2
  • 989 MB typically when apache2 is running for a long time, very close to the 1 GiB limit already
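
To confirm where the 64 MiB figure above comes from, one can redo the arithmetic and look up the setting for the CGI SAPI (file path as on this Debian-style setup; it may differ elsewhere):

```bash
# The 67108864 bytes from the error message are exactly 64 MiB:
echo $(( 67108864 / 1024 / 1024 ))   # prints 64

# Check the configured (or commented-out default) opcache size for PHP-CGI:
grep -n "opcache.memory_consumption" /etc/php5/cgi/php.ini
```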

Nearly all of this shared memory usage is created by the php-cgi processes, as can be seen in top output (use "b" to toggle background highlighting, "x" to toggle highlighting of the sort column, and "<" / ">" to switch to SHR as the sort column). Namely, when 991 MB of shared memory were consumed, 620 MB (= 9 * 64 MB + 2 * 40 MB) of this was consumed by 11 php-cgi processes.
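
The same numbers can be approximated without interactive top; here is a sketch that sums the shared resident pages (third field of /proc/<pid>/statm, 4 KiB each) of all php-cgi processes:

```bash
# Sum the shared resident pages of all php-cgi processes and print the total in MiB:
total=0
for pid in $(pgrep php-cgi); do
  shared=$(awk '{print $3}' "/proc/$pid/statm" 2>/dev/null)
  total=$(( total + ${shared:-0} ))
done
echo "php-cgi shared memory: $(( total * 4096 / 1024 / 1024 )) MiB"
```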

Solution

The solution is to use PHP-FPM instead of PHP-FastCGI [source]. Contrary to what that source suggests, this works independently of the web server, so it also works with Apache2.
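
A minimal sketch of the switch on a Debian / Ubuntu system with Apache 2.4 and mod_proxy_fcgi; the package name, socket path and config snippet are assumptions for that setup and will differ with Apache 2.2 + mod_fastcgi:

```bash
# Install PHP-FPM and enable the Apache FastCGI proxy module:
sudo apt-get install php5-fpm
sudo a2enmod proxy_fcgi

# In the site's vhost, hand .php requests to the FPM socket (Debian default path), e.g.:
#   <FilesMatch "\.php$">
#     SetHandler "proxy:unix:/var/run/php5-fpm.sock|fcgi://localhost/"
#   </FilesMatch>

sudo service apache2 restart
```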

After deploying this solution, you can compare the "Zend OPcache" section of phpinfo() scripts from different sites on your server. As you can see from the "Cache hits", "Cache misses", "Cached scripts" and "Cached keys" numbers, there is only one single OPcache for all your PHP sites. However, you can configure opcache parameters differently for different sites (like memory usage etc.), and this is also reflected in the phpinfo() output. I can't really make sense of that so far, but assume that memory usage etc. are indeed configured per PHP-FPM process pool. So 128 MiB for one site and 64 MiB for another would allow for 192 MiB total shared memory usage for opcache.

After switching the site to PHP-FPM, the difference in shared memory usage between Apache2 / php-fpm running and not running was only about 70 MiB at most, compared to the 900 MiB earlier. This was generated by five php-fpm processes running simultaneously, with 40–50 MB shared memory "each". So clearly, the shared memory is indeed shared between the php-fpm processes instead of each having its own.

(Another tip: there are other important ways to optimize OPcache, see this blog post.)

Working solution

Finding this solution was quite a nightmare. But, here it is.

This assumes you have a working setup of Panels in Drupal 7 already.
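
If that setup is not in place yet, the required modules can be enabled via drush (a hedged sketch; module machine names as on drupal.org):

```bash
# Enable CTools, its Page manager, Panels and Pathauto, then clear caches:
drush en -y ctools page_manager panels pathauto
drush cc all
```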

  1. Use a group path prefix. Set up pathauto to include your group's path as a prefix for group content. This is a challenge by itself due to various bugs and changes in Drupal, but here is my solution.
  2. Edit the node template panel page. In the CTools page manager (at /admin/structure/pages), enable the node_view (node template) page, if not already done, and then edit it (at /admin/structure/pages/edit/node_view).
  3. Create a new Panel variant.
  4. Create a selection rule by path. Click "Selection rules" in the left sidebar for your new Panel variant. Then select "String: URL path" from the list, add it, and configure it to use your group's path prefix with a wildcard, like groupname/*.
  5. Reorder your variant. Click "Reorder variants" at the top right (should be /admin/structure/pages/nojs/operation/node_view/actions/rearrange). Reorder the variants to make yours come after "Node panelizer" and before "Fallback panel". Drupal will work through the variants and select the first whose selection rules match, so placing yours after "Fallback" would make it never show up.
  6. Test. Go to some non-panelized group content that has your group's path prefix, and see if the panel variant gets applied. To see an effect, you will have to add content to the panel variant of course.

Alternative solutions

The following solution should also work:

Panelize display modes, set display mode per group. This is a combination of the following:

  • ds_extras: needed to create additional view modes for content types
  • context: for general context management, needed by context_og
  • context_og: needed in context to detect when a node belongs to a certain group
  • contextual_view_modes: to set a view mode for a content type based on its context
  • panelizer: to panelize a view mode

You would define a context for each group for which you want a default panel, triggered by a node belonging to the group. Then you would configure the content types (with the options added by contextual_view_modes) to set a specific view mode for each of these contexts, which you created via ds_extras. And use Panels (or rather Panelizer I think) to create default panels for these new view modes.

This solution is semantic, since it is not dependent on URL patterns. However, it is also more complex, clutters the view modes namespace, and requires one new panel for each content type / view mode combination, rather than just one in the node template.

Non-working solutions

The following solutions that I tried did not work:

Selecting a Panel variant by group membership. This should work – see my instructions here. But currently (2014-09) it does not due to Drupal issue #2242511 "How to create panel variant with selection rule for groups audience field". There is an impractical workaround available, but "somebody" should go in there and fix it for real …

Using og_panels. This would be the most comfortable variant, as it also avoids the need to be an admin to change a group's default layout. However, there is no Drupal 7 port of the og_panels module. See: Drupal issue #990918.

Selecting panel variants via contexts. Proposal: Create a solution from a combination of the following:

  • context: for general context management, needed by context_og. See: https://www.drupal.org/project/context
  • context_og: to detect when a node belongs to a certain group. See: https://www.drupal.org/project/context_og
  • panels: to set a panel variant in the Panels selection rules, based on the context detected via context_og. See: https://www.drupal.org/project/panels

Problem: Panelizer does not list the context / context_og contexts in the list of contexts of the "Context exists" selection rule. This is because these are simply two different things called "context": the Panels module does not even depend on the context module, nor vice versa.

The "Context exists" selection rule will list the "Panels contexts" that are defined in the context section above the selection rules section, if and only if that context field contains data. See this wording in the "Context exists" selection rule: "Check to see if the context exists (contains data) or does not exist (contains no data).", and the reproduction instructions from Drupal issue #1263896. This means that in effect, "context" in Panels means the same as "contextual filters" in views: it is a way to pass in arguments that can be grabbed by the view resp. the content elements on the panel, and using them in "Context exists" is rather a side use.

Using context_panels_layouts. Proposal: Create a solution from a combination of the following:

  • context: for general context management, needed by context_og
  • context_og: to set a context when a node is part of a certain group
  • context_panels_layout: to set a panel as a context reaction
  • panels: to set a panel variant in the Panels selection rules, based on the context detected via context_og

Problem: This does not work, as context_panels_layout can only be used to set Panels layouts, not actual panel variants that also include the content added to the layout.