apt-cacher-ng versus apt-transport-https

The headline sounds pretty technical, and so is the topic. Let’s quickly introduce both antagonists:

  • apt-cacher-ng is a tool to cache packages of the apt ecosystem. As an administrator, you may have multiple Debian-based systems. The overlap of packages that all the systems need is typically huge. That means hundreds of your systems will require the latest security update for curl at around the same time. Running an apt-cacher-ng server in your local environment will take a bit of heat off Debian’s infrastructure and improve the download speed of packages. See also the Apt-Cacher NG project page.

  • apt-transport-https is an apt module to obtain packages over a secure https:// connection. Traditionally, packages are downloaded through plain HTTP or FTP, but as these are unencrypted, a third party may observe what you’re doing at a repository (which packages you’re downloading etc.). Please note that apt-transport-https is already integrated into the latest versions of apt - no need to install it separately.

So basically, both apt-cacher-ng and apt-transport-https do a good thing! But… they don’t really like each other, at least by default. However, I’ll show you how to make them behave ;-)

The Problem

The issue is perfectly obvious: You want apt-cacher-ng to cache TLS encrypted traffic…? That won’t happen.

The Solution

You need to tell the client to create an unencrypted connection to the cache server, and then the cache server can connect to the repository through TLS.

Example

Let me explain that using Docker. To properly install Docker on a Debian based system, you would add a file /etc/apt/sources.list.d/docker.list containing a repository such as:

deb [arch=amd64] https://download.docker.com/linux/debian stretch stable

However, when apt is told to use a cache server, it would fail to download Docker’s packages:

# apt-get update
[...]
W: Failed to fetch https://download.docker.com/linux/debian/dists/stretch/InRelease  Invalid response from proxy: HTTP/1.0 403 CONNECT denied (ask the admin to allow HTTPS tunnels)     [IP: 1.2.3.4 3142]
W: Some index files failed to download. They have been ignored, or old ones used instead.

Let’s fix that using the following workaround:

0. Assumptions

  • There is an apt-cacher-ng running at http://apt.cache:3142.
  • apt.cache resolves to 1.2.3.4.
  • There is a client configured to use the cache server, e.g. /etc/apt/apt.conf.d/02proxy says:
Acquire::http { Proxy "http://apt.cache:3142"; }

1. Create a mock DNS for the cache server

You need to create a pseudo domain name that points to the cache server. This name will then tell the cache server which target repository to access. Let’s say we’re using docker.cache. You can either create a proper DNS record, or just add a line to the client’s /etc/hosts file:

1.2.3.4 apt.cache docker.cache

Now, both apt.cache and docker.cache will resolve to 1.2.3.4 at the client.

2. Update the client’s repository entry

Instead of contacting the repository directly, the client should now connect to the cache server. You need to change the contents of /etc/apt/sources.list.d/docker.list to:

deb http://docker.cache stretch stable

Thus, the client now treats the cache server as a proper repository!

3. Inform the cache server

The apt-cacher-ng of course needs to be told what to do when clients want to access something from docker.cache: it should forward the request to the original repository!

This is called remapping. First create a file /etc/apt-cacher-ng/backends_docker_com at the server containing the link to the original repository:

https://download.docker.com/linux/debian

Then, add the remapping rule to /etc/apt-cacher-ng/acng.conf. You will find a section of Remap entries (see the default config of acng.conf). Just append your rule:

Remap-dockercom: http://docker.cache ; file:backends_docker_com

This line reads:

  • There is a remap rule called Remap-dockercom
  • which remaps requests for http://docker.cache
  • to whatever is written in file backends_docker_com

That’s it. Restart the cache server and give it a try :)

4. Add more Remaps

If you want to use more repositories through https://, just create further mock DNS entries and append corresponding remapping rules to the acng.conf. Pretty easy.
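For illustration, assume a hypothetical repository at https://packages.example.org/debian that you want to reach through the mock name example.cache (both names are made up for this example; replace them with your actual repository). The pieces would look like this:

# on the client: /etc/hosts
1.2.3.4 apt.cache docker.cache example.cache

# on the client: /etc/apt/sources.list.d/example.list
deb http://example.cache stretch main

# on the server: /etc/apt-cacher-ng/backends_example_org
https://packages.example.org/debian

# on the server: appended to /etc/apt-cacher-ng/acng.conf
Remap-exampleorg: http://example.cache ; file:backends_example_org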

The Improvements

This setup of course strips the encryption off apt calls. Granted, it’s just the connections in your own environment, but it’s still not really elegant. So the goal is to also encrypt the traffic between the client and the cache server.

There is apparently no support for TLS in apt-cacher-ng, but you can still configure an Nginx proxy (or whatever proxy you find handy) at the cache server, which supports TLS and just forwards requests to the upstream apt-cacher-ng on the same machine. Or you could set up an stunnel.
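A minimal Nginx sketch for such a TLS terminating proxy in front of apt-cacher-ng could look roughly like this (untested; the certificate paths are placeholders, and the clients would of course need to be pointed to the HTTPS port of the proxy):

server {
    listen 443 ssl;
    server_name apt.cache;

    # placeholder certificate paths, adjust to your setup
    ssl_certificate     /etc/ssl/certs/apt.cache.pem;
    ssl_certificate_key /etc/ssl/private/apt.cache.key;

    location / {
        # forward everything to the local apt-cacher-ng
        proxy_pass http://127.0.0.1:3142;
        proxy_set_header Host $host;
    }
}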

Supplemental

There are a few other workarounds for this issue available. Most of them just show how to circumvent caching for HTTPS repositories (which rather defeats the purpose of the cache server). Here, I just documented the (in my eyes) cleanest solution.

Programmatically Obtain Shortcut Icons

For a side project, I just needed to download the favicons of brands to visually augment a web portal :)

Wikipedia's favicon, shown in an older version of Firefox, image obtained from Wikimedia Commons

Historically, that icon was named favicon.ico and stored in the root directory of the website. However, nowadays the icon is typically called a shortcut icon, and there are tons of options on how to get it into the browsers’ tab pane… Very rarely is it still named favicon.ico. It’s often not even an ICO file, but a PNG image or an SVG graphic. And developers often refer to it from within a webpage’s HTML code using a <link ...> tag.

However, it’s pretty convenient to decorate a link to a brand with that brand’s official logo!

Of course, downloading the remote web page, parsing the HTML code, and selecting the correct shortcut icon (if any, otherwise falling back to $domain/favicon.ico, including error handling etc.) would be pretty expensive and error-prone.
In such cases it’s always good to outsource the job to someone who’s doing that anyway for their business.

And lucky us *hrumph* there is Google! ;-)

Google Shares Stuff

Google provides a Shared Stuff (s2) link to automatically retrieve the favicon image of any website. The syntax is:

https://www.google.com/s2/favicons?domain=twitter.com

Thus, the GET parameter domain carries the domain of the site of interest (here it’s twitter.com).

Pretty straight forward, isn’t it?

As a bonus, you’ll get a small PHP function to download the icon and store it on your disk:

function get_favicon ($url) {

    // extract the domain and derive the cache file name from its hash
    $domain = parse_url($url)['host'];
    $filepath = CACHE_DIR . "/" . sha1 ($domain);

    // download the icon through Google's s2 service only if it's not cached yet
    if (!file_exists ($filepath))
        file_put_contents ($filepath,
            file_get_contents ('https://www.google.com/s2/favicons?domain=' . $domain));

    return $filepath;
}

This will retrieve the favicon for $url, store it in CACHE_DIR, and return the path to the stored file (the file name being the sha1 hash of the domain). Just make sure you’ve defined CACHE_DIR and enjoy your icons :)
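A tiny usage sketch (the CACHE_DIR path is just an example and the directory needs to exist and be writable):

<?php
// hypothetical example: adjust CACHE_DIR to your setup
define ('CACHE_DIR', '/tmp/favicon-cache');

// downloads the icon on the first call, reuses the cached file afterwards
echo get_favicon ('https://twitter.com/') . "\n";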

Alternatives

So I heard you don’t like Google? There is at least one alternative: https://api.faviconkit.com/twitter.com/144.
There is also a PHP tool for that, if you want to self-host such a service: https://github.com/ao/favicons.

Run your Private Firefox Sync Server

Firefox Sync logo obtained from Wikimedia Commons

As I’m working on multiple machines (two desks at work, one desk at home, laptop, …) I’ve always been looking for a way to sync my browsers… Of course, I knew about Firefox’ sync, but I obviously don’t want to store my private browsing data in Mozilla’s cloud! Every once in a while I stumbled upon articles and posts suggesting running a private syncserver. However, every time I looked into that project it left an uncomfortable impression: (i) you need to manually compile some 3rd party software, (ii) the whole thing seems very complex/unclean, as it requires an account server and a sync server and may work with Mozilla’s account server (but how?), and (iii) the sync project was once already abandoned (Firefox Weave was discontinued because it was too complex and unreliable)… Therefore, I never dared to give it a try.

Today, when I’ve again been frustrated with that fragmented situation, I saw that Mozilla’s syncserver sources contain a Dockerfile! It has probably been there for ages, but I never noticed it. Even if that project may be a mess, in a container environment it’s pretty easy to give it a try (and to clean up, if unsatisfied)! That changes everything! :P

So I changed everything, and tooted about it. Various people then convinced me to write this article. And I also learnt that Epiphany can do Firefox’ sync out of the box!

Get the Syncserver Running

Running your own syncserver using Docker is pretty straight forward. This how-to is based on the project’s readme at GitHub:mozilla-services/syncserver, but I’m using docker-compose and I deployed the service behind an Nginx proxy. You can of course skip the proxy settings and have it run locally or something.

Get the Code

Just clone the sources from GitHub:

git clone https://github.com/mozilla-services/syncserver

You should now see a new directory syncserver containing all the sources, including a Dockerfile.

Build a Docker Image

Change into the project’s directory that contains the Dockerfile and build a new Docker image using:

docker build -t syncserver:latest .

That will take a while, but when it’s finished you’ll find a new image (double check with docker images).

The provided Dockerfile is basically sufficient, but in my scenario I also need to properly declare an exposed port. So I edited that file and added

EXPOSE 5000

See also the diff of my commit. I decided to take port 5000, as the user running the syncserver is unprivileged (so :80 and :443 are not an option) and :5000 is the example in the project’s readme ;-)

Create a Docker-Compose Configuration

Docker-Compose makes it easier to assemble and handle multiple containers in a moderately complex environment.

My compose config looks like this:

firefox-sync:
  restart: always
  image: syncserver:latest
  container_name: firefox-sync
  volumes:
    - /path/to/mozilla-sync/share:/syncshare
  environment:
    - SYNCSERVER_PUBLIC_URL=https://firefox-sync.example.com
    - SYNCSERVER_SECRET=waitis6eexeeda7ifoolitauv2Aehooxie8eim2quaiyiaXeer
    - SYNCSERVER_SQLURI=sqlite:////syncshare/syncserver.db
    - SYNCSERVER_BATCH_UPLOAD_ENABLED=true
    - SYNCSERVER_FORCE_WSGI_ENVIRON=true
    - PORT=5000
    - VIRTUAL_HOST=firefox-sync.example.com
    - VIRTUAL_PORT=5000
    - HTTPS_METHOD=noredirect
  logging:
    driver: syslog
    options:
      tag: docker/web/firefoxsync

This snippet defines a container named firefox-sync, which is based on the image syncserver:latest. It mounts the host’s directory /path/to/mozilla-sync/share into the container as /syncshare (I’d like to store my stuff outside of the container). In addition it declares some environment variables:

  • SYNCSERVER_PUBLIC_URL tells the service the actual URL to your instance.
  • SYNCSERVER_SECRET should be a long, complicated random string, as it is used to generate internal certificates and stuff (see below for one way to generate such a string).
  • SYNCSERVER_SQLURI tells the service which database to use. I point it to the directory (/syncshare) that was mounted into the container, so it will actually store the database on the host.
  • SYNCSERVER_BATCH_UPLOAD_ENABLED is, if I understand correctly, an option to allow for uploading everything immediately…?
  • SYNCSERVER_FORCE_WSGI_ENVIRON must be set to true, if SYNCSERVER_PUBLIC_URL doesn’t match the actual URL seen by the python tool. In my case, I would connect to SYNCSERVER_PUBLIC_URL, which is however the Nginx proxy, which forwards the traffic to the syncserver. However, the syncserver will see a different request (e.g. it’s internally not https anymore) and complain.
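By the way, any long random string will do for SYNCSERVER_SECRET; one way to generate such a string is, for example:

head -c 48 /dev/urandom | base64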

The last two variables (VIRTUAL_HOST and VIRTUAL_PORT) just configure the reverse proxy that I’m using. Feel free to drop these lines if you want to expose the service directly to the network, but then you need to add a port forwarding for that container, such as

ports:
  - "80:5000"

which forwards traffic at your machine’s HTTP port (:80, use a different port if you’re already running a web server) to the service’s port in the container (:5000).

If you have a proper Docker-Compose configuration, just run

docker-compose up -d --remove-orphans

to start the service. Et voilà, you should be able to access the service at the configured SYNCSERVER_PUBLIC_URL :)

Configure Firefox to use your Private Sync Server

First make sure you’re signed out in the browser! That means, about:preferences#sync should not show your identity and instead provide a button to sign in.

Then, open about:config and search for identity.sync.tokenserver.uri. By default, it will be set to Mozilla’s sync server https://token.services.mozilla.com/1.0/sync/1.5. Edit that field and point it to your SYNCSERVER_PUBLIC_URL plus /token/1.0/sync/1.5. Thus, in our example above I’d set it to https://firefox-sync.example.com/token/1.0/sync/1.5.

Now go back to about:preferences#sync and sign in with your Mozilla account. Yes, correct. You still need an account at Mozilla! But that is just for authentication… There is an option to also run a private account server (see Run your own Firefox Accounts Server), but that’s even more complicated. And as I need a Mozilla account anyway to develop my AddOns, I skipped that additional hassle.

Open Issues and Troubleshooting

There are still a few issues with different clients. For example, I don’t know how to tell Epiphany to use my private syncserver instead of Mozilla’s public instance. In addition, there is apparently no Firefox in the F-Droid repository that properly supports sync…

For general debugging and troubleshooting, search engines are a good start.. In addition, I learnt that there is about:sync-log, which contains very detailed error messages in case of problems.

Eventually…

… I got my sync! #hooray

It’s still brand new and I didn’t test it too much, but so far it’s looking pretty good.

Mount multiple subvolumes of a LUKS encrypted BTRFS through pam_mount

Some days ago, @daftaupe@mamot.fr convinced me on Mastodon to give BTRFS a try. That’s actually been on my list for some time already, and now that I need to switch PCs at work I’m going for it. However, this post wouldn’t exist if everything had gone smoothly ;-)

The Scenario

I have a 1TB SSD that I want to encrypt. It should automatically get decrypted and mounted to certain places when I log in. pam_mount can do that for you, and I’ve already been using it a lot in different scenarios. However, with BTRFS it’s a bit different. With any other file system you would create a partition on the hard drive, which is then LUKS encrypted. This has the drawback that you need to decide on the partition’s size beforehand!

With BTRFS you can just encrypt the whole drive and use so-called subvolumes on top of it. Thus, you’re a bit more flexible by creating and adjusting quotas as required at any point in time (if at all…), but (or and!) the subvolumes are not visible unless the device is decrypted.

Let’s have a look into that and create the scenario. I assume that the SSD is available as /dev/sdb. Then we can create an encrypted container using LUKS:

root@srv ~ # cryptsetup -y -v --cipher aes-xts-plain64 --key-size 256 --hash sha256 luksFormat /dev/sdb

WARNING!
========
This will overwrite data on /dev/sdb irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase for /dev/sdb: ****
Verify passphrase: ****
Key slot 0 created.
Command successful.

You’re not sure which cipher or key size to choose? Just run cryptsetup benchmark to see which settings perform best for you. My CPU, for example, comes with hardware support for AES, thus the AES ciphers show a significantly higher throughput. If you’re still feeling uncomfortable with that step, I recommend reading the detailed article in the Arch Linux wiki on dm-crypt/Device encryption.

We can now open the encrypted device using

root@srv ~ # cryptsetup luksOpen /dev/sdb mydrive
Enter passphrase for /dev/sdb: ****

This will create a node in /dev/mapper/mydrive, which represents the decrypted device.

Next, we’ll create a BTRFS on that device:

root@srv ~ # mkfs.btrfs /dev/mapper/mydrive
btrfs-progs v4.17
See http://btrfs.wiki.kernel.org for more information.

Detected a SSD, turning off metadata duplication.  Mkfs with -m dup if you want to force metadata duplication.
Label:              home
UUID:               d1e1e1f9-7273-4b29-ae43-4b9ca411c2ba
Node size:          16384
Sector size:        4096
Filesystem size:    931.51GiB
Block group profiles:
  Data:             single            8.00MiB
  Metadata:         single            8.00MiB
  System:           single            4.00MiB
SSD detected:       yes
Incompat features:  extref, skinny-metadata
Number of devices:  1
Devices:
   ID        SIZE  PATH
    1   931.51GiB  /dev/mapper/mydrive

That’s indeed super fast, isn’t it!? I also couldn’t believe it.. ;-)

We can now mount the device, for example to /mnt/mountain:

root@srv ~ # mount /dev/mapper/mydrive /mnt/mountain
root@srv ~ # cd /mnt/mountain

So far, the file system is completely empty. But as it’s a BTRFS, we can create some subvolumes. Let’s say, we want to create a volume for our $HOME, and as we’re developing this website, we also want to create a volume called www:

root@srv /mnt/mountain # btrfs subvolume create home
Create subvolume './home'

root@srv /mnt/mountain # btrfs subvolume create www
Create subvolume './www'

root@srv /mnt/mountain # btrfs subvolume list .
ID 258 gen 21 top level 5 path home
ID 259 gen 22 top level 5 path www

So we have two subvolumes in that file system: home (id 258) and www (id 259). We could now mount them with

root@srv ~ # mount -o subvol=/home /dev/mapper/mydrive  /home/user
root@srv ~ # mount -o subvol=/www  /dev/mapper/mydrive  /var/www

But we want the system to do it automatically for us, as we login.

So unmount everything and close the LUKS container:

root@srv ~ # umount /mnt/mountain /home/user /var/www
root@srv ~ # cryptsetup luksClose mydrive

PamMount can Decrypt and Mount Automatically

I’m using pam_mount already for ages! It is super convenient. To get your home automatically decrypted and mounted, you would just need to add the following lines to your /etc/security/pam_mount.conf.xml:

<volume path="/dev/disk/by-uuid/a1b20e2f-049c-4e5f-89be-2fc0fa3dd564" user="YOU"
        mountpoint="/home/user" options="defaults,noatime,compress,subvol=/home" />

<volume path="/dev/disk/by-uuid/a1b20e2f-049c-4e5f-89be-2fc0fa3dd564" user="YOU"
        mountpoint="/var/www" options="defaults,noatime,compress,subvol=/www" />

Given this, PAM tries to mount the respective subvolumes of the disk (identified by the UUID a1b20e2f-049c-...) to /home/user and /var/www as soon as YOU logs in.

Here, I am using UUIDs to identify the disks. You can still use /dev/sdb (or similar), but there is a chance that the disks are recognised in a different sequence at the next boot (and /dev/sdb may become /dev/sdc or something…). Plus, the UUID is invariant to the system – you can put the disk in any other machine and it will have the same UUID.

To find the UUID of your disk you can use blkid:

root@srv ~ # blkid
[...]
/dev/sdb: UUID="a1b20e2f-049c-4e5f-89be-2fc0fa3dd564" TYPE="crypto_LUKS"
[...]

The Problem

As said above, with BTRFS you’ll have your partitions (called subvolumes) right in the filesystem – invisible unless decrypted. So, what is PAM doing? It discovers the first entry in the pam_mount.conf.xml configuration, which basically says

mount a1b20e2f-049c-... with some extra options to /home/user when YOU logs in

PAM is also smart enough to understand that a1b20e2f-049c-... is a LUKS encrypted device and it decrypts it using your login password. This will then create a node in /dev/mapper/_dev_sdb, representing the decrypted device. And eventually, PAM mounts /dev/mapper/_dev_sdb to /home/user. So far so perfect.

But as soon as PAM discovers the second entry, it tries to do the same! Again it detects a LUKS device and tries to decrypt that. But unfortunately, there is already /dev/mapper/_dev_sdb!? Thus, opening the LUKS drive fails and you’ll find something like that in your /var/log/auth.log:

(mount.c:72): Messages from underlying mount program:
(mount.c:76): crypt_activate_by_passphrase: File exists
(pam_mount.c:522): mount of /dev/disk/by-uuid/a1b20e2f-049c-... failed

At first it seems annoying that it doesn’t work out of the box, but at least it sounds reasonable that PAM cannot do what you want it to do.

The Solution

… is quite easy, even though it took me a while to figure things out…

As soon as the first subvolume is mounted (and the device is decrypted and available through /dev/mapper/_dev_sdb), we have direct access to the file system! Thus, we do not need to tell PAM to mount /dev/disk/by-uuid/a1b20e2f-049c-..., but we can use /dev/mapper/_dev_sdb. Or even better, we can now use the file system’s UUID, to stay independent of the sdb naming. If you run blkid with the device decrypted you’ll find an entry like this:

root@srv ~ # blkid
[...]
/dev/sdb: UUID="a1b20e2f-049c-..." TYPE="crypto_LUKS"
/dev/mapper/_dev_sdb: UUID="d1e1e1f9-7273-..." UUID_SUB="..." TYPE="btrfs"
[...]

You see, the new node /dev/mapper/_dev_sdb also carries a UUID, actually representing the BTRFS :)
This UUID was by the way also reported by the mkfs.btrfs call above.

What does that mean for our setup? When we first need a subvolume of an encrypted drive we need to use the UUID of the parent LUKS container. For every subsequent subvolume we can use the UUID of the internal FS.

Transferred to the above scenario, we’d create a /etc/security/pam_mount.conf.xml like that:

<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE pam_mount SYSTEM "pam_mount.conf.xml.dtd">
<pam_mount>

  <volume path="/dev/disk/by-uuid/a1b20e2f-049c-4e5f-89be-2fc0fa3dd564" user="YOU"
          mountpoint="/home/user" options="defaults,noatime,subvol=/home" />

  <volume path="/dev/disk/by-uuid/d1e1e1f9-7273-4b29-ae43-4b9ca411c2ba" user="YOU"
          mountpoint="/var/www" options="defaults,noatime,subvol=/www" />

  <mkmountpoint enable="1" remove="true" />

</pam_mount>

Note the different UUIDs? Even though both mounts originate from the same FS :)

Open Problems

Actually, I wanted to have my home in a RAID of two devices, but I don’t know how to tell pam_mount to decrypt two devices to let BTRFS handle the RAID..? The only option seems to be to use mdadm to create the RAID, but then BTRFS just sees a single device and, therefore, cannot do its extra RAID magic.

If anyone has an idea on that issue, I’m all ears :)

Thunderbird 60+ is missing calendars

Lightning is a calendar plugin for Thunderbird.

I’m running Thunderbird to read emails on my desktops. And I’m using the Lightning plugin to manage calendars, events, and tasks.

However, since I updated to Thunderbird 60 some weeks ago, Lightning strangely seems to be broken. The Add-ons manager still lists Lightning as properly installed, but the “Events and Tasks” menu is missing, as well as the calendar/tasks tabs and the calendar settings in the preferences. As I’ve been pretty busy with many other things, I didn’t study the problem - hoping that the bug would get fixed in the meantime - but living without the calendar addon is cumbersome. And today it became annoying enough to make me investigate…

There seem to be various issues with calendars in the new Thunderbird version: Mozilla provides an extensive support page dedicated to this topic. Sadly, none of these helped in my case.

I then made sure that the versions of Thunderbird and Lightning are compatible (both are 1:60.0-3~deb9u1 for me):

$ dpkg -l thunderbird
ii  thunderbird       1:60.0-3~deb9u1     amd64     mail/news client with RSS, chat [...]
$ dpkg -l lightning 
ii  lightning         1:60.0-3~deb9u1     all       Calendar Extension for Thunderbird

Eventually, I stumbled upon a thread in the German Debian forums: Thunderbird 60 - Lightning funktioniert nicht. They figured out that it may be caused by missing language packs for Lightning… Indeed, I do have language packs for Thunderbird installed (de and en-gb) that are not installed for Lightning:

$ dpkg -l| egrep "thunderbird|lightning"
ii  lightning                1:60.0-3~deb9u1
ii  thunderbird              1:60.0-3~deb9u1
ii  thunderbird-l10n-de      1:60.0-3~deb9u1
ii  thunderbird-l10n-en-gb   1:60.0-3~deb9u1

And it turns out that this was the problem! Thunderbird apparently won’t run Lightning unless all required language packs are installed. After installing the missing language packs (aptitude install lightning-l10n-de lightning-l10n-en-gb), the extension is again fully working in Thunderbird! How unsatisfactory…

All that may be caused by a missing dependency..? Even though thunderbird recommends lightning, thunderbird-l10n-de (and similar) do not recommend lightning-l10n-de. Not exactly sure how, but maybe the dependencies should be remodelled…?

Native SSH server on LineageOS

I finally trashed my shitty Shift5.2 and got a spare OnePlus One from a good colleague.

tldr: scroll down to Setup of SSH on LineageOS.

I strongly discourage everyone from buying a ShiftPhone. The phone was/is on an Android patch level from 2017-03-05 – which is one and a half years ago! Not to mention that it was running Android 5.1.1 in 2018… With so many bugs and security issues, in my opinion this phone is a danger to the community! And nobody at Shift seemed to really care…

However, I now have a OnePlus One, which is supported by LineageOS - the successor of CyanogenMod. So, first action was installing LineageOS. Immediately followed by installing SU to get root access.

Next, I’d like to have SSH access to the phone. I did love the native SSH server on my Galaxy S2, which used to run CyanogenMod for 5+ years. Using the SSH access I was able to integrate it into my backup infrastructure, and it was much easier to quickly copy stuff from the phone w/o a cable :)

The original webpage including a how-to for installing SSH on CyanogenMod has unfortunately vanished. There is a copy available from the WayBackMachine (thanks a lot guys!!). I still thought dumping an up-to-date step-by-step instruction here may be a good idea :)

Setup of SSH on LineageOS

The setup of the native SSH server on LineageOS seems to be pretty similar to the CyanogenMod version. First you need a shell on the phone, e.g. through adb, and become root (su). Then just follow these three steps:

Create SSH daemon configuration

You do not need to create a configuration file from scratch, you can use /system/etc/ssh/sshd_config as a template. Just copy the configuration file to /data/ssh/sshd_config:

cp /system/etc/ssh/sshd_config /data/ssh/sshd_config

Just make sure you set the following things:

  • PermitRootLogin without-password
  • PubkeyAuthentication yes
  • PermitEmptyPasswords no
  • ChallengeResponseAuthentication no
  • Subsystem sftp internal-sftp

Setup SSH keys

We’ll be using SSH-keys to authenticate to the phone. If you don’t know what SSH keys are, or how to create them, you may go to an article that I wrote in 2009 (!!) or use an online search engine.

First, we need to create /data/.ssh on the phone (note the .!) and give it to the shell user:

mkdir -p /data/.ssh
chmod 700 /data/.ssh
chown shell:shell /data/.ssh

Second, we need to store our public SSH key (probably stored in ~/.ssh/id_rsa.pub on your local machine) in /data/.ssh/authorized_keys on the phone. If that file exists, just append your public key on a new line. Afterwards, hand the authorized_keys file over to the shell user:

chmod 600 /data/.ssh/authorized_keys
chown shell:shell /data/.ssh/authorized_keys
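If you have adb access from your computer, one possible way to get the key onto the phone is the following (assuming adb runs with sufficient privileges and the file does not exist yet; run the chmod/chown from above afterwards):

adb push ~/.ssh/id_rsa.pub /data/.ssh/authorized_keys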

Create a start script

Last but not least, we need a script to start the SSH service. There is again a template available in /system/bin/start-ssh. Just copy the script to /data/local/userinit.d/:

mkdir /data/local/userinit.d/
cp /system/bin/start-ssh /data/local/userinit.d/99sshd
chmod 755 /data/local/userinit.d/99sshd

Finally, we just need to update the location of the sshd_config to /data/ssh/sshd_config in our newly created /data/local/userinit.d/99sshd script (in the template it points to /system/etc/ssh/sshd_config; there are 2 occurrences: for running the daemon w/ and w/o debugging).
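If the sed on your phone supports in-place editing, changing both occurrences could, for example, be done like this (otherwise just edit the file with a text editor):

sed -i 's|/system/etc/ssh/sshd_config|/data/ssh/sshd_config|g' /data/local/userinit.d/99sshd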

That’s it

You can now run /data/local/userinit.d/99sshd and the SSH server should be up and running :)
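From your computer you should then be able to log in as the shell user, for example (replace <phone-ip> with your phone’s actual address):

ssh shell@<phone-ip>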

Earlier versions of Android/CyanogenMod auto-started the scripts stored in /data/local/userinit.d/ right after boot, but this feature was removed with CM12. Thus, at the moment it is not that easy to automatically start the SSH server with a reboot of your phone. But having the SSH daemon running all the time may also be a bad idea, in terms of security and battery…

Regain RSS feeds for the University of Rostock

RSS feeds for uni-rostock.de

I’m consuming quite some input from the internet every day. A substantial amount of information arrives through podcasts, but much more essential are the 300+ RSS feeds that I’m subscribed to. I love RSS, it’s one of the best inventions in the world wide web!

However, there are alarming rumors and activities trying to get rid of RSS… We probably should all get our news filtered by Facebook or something!? The importance of RSS, which allows users to keep track of updates on many different websites, seems to be continuously ignored. And so does the new website of our University, where official RSS feeds aren’t provided anymore :(

Apparently, many people were already asking for RSS feeds of the University’s webpage. At least that’s what they told me when I asked… But the company that built the pages won’t integrate RSS anymore - it probably wasn’t listed in the requirements. And the University wouldn’t touch the expensive website.

“Fortunately,” they stayed with Typo3 as the CMS, which we’ve been using as well - before we decided to switch. And this Typo3 platform can output the page’s content as an RSS feed out of the box - you just need to know how! ;-)

And… I’ll tell you: Just append ?type=9818 to the URL. That’s it! Really. It’s so easy.

Here are a few examples:
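For instance, a news page of the University would expose its feed like this (the path here is just a placeholder; append ?type=9818 to whichever page you’re interested in):

https://www.uni-rostock.de/<path-to-a-news-page>/?type=9818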

Sure, it doesn’t work everywhere. If the editors maintain news as static HTML pages, Typo3 fails to export a proper RSS feed. It’s still better than nothing. And maybe it helps a few people…

The RSS icon was adapted from commons:Generic Feed-icon.svg.

Proper Search Engine for a Static Website powered by DuckDuckGo (and similar)

Static websites are great and popular, see for example Brunch, Hexo, Hugo, Jekyll, Octopress, Pelican, and many more. They are easy to maintain and their performance is hard to beat. But… As they are static, they cannot dynamically handle user input, which is an obvious requirement for every search engine.

Outsource the task

Lucky us, there are already other guys doing the search stuff pretty convincingly. So it’s just plausible to not reinvent the wheel, but instead make use of their services. There are a number of search engines, e.g. Baidu, Bing, Dogpile, Ecosia, Google, StartPage, Yahoo, Yippy, and more (list sorted alphabetically, see also Wikipedia::List of search engines). They all have pros and cons, but typically it boils down to a trade-off between coverage, up-to-dateness, monopoly, and privacy. You probably also have your favourite. However, it doesn’t really matter. While this guide focusses on DuckDuckGo, the proposed solution is basically applicable to all search engines.

Theory

The idea is that you add a search form to your website, but do not handle the request yourself and instead redirect to an endpoint of a public search engine. All the search engines have some way to provide the search phrase encoded in the URL. Typically, the search phrase is stored in the GET variable q, for example example.org/?q=something would search for something at example.org. Thus, your form would redirect to example.org/?q=.... However, that would of course start a search for the given phrase on the whole internet! Instead, you probably want to restrict the search results to pages from your domain.

Fortunately, the search engines typically also provide means to limit search results to a domain, or similar. In the case of DuckDuckGo it is, for example, the site: operator, see also DuckDuckGo’s syntax. That is, for my blog I’d prefix the search phrase with site:binfalse.de.

Technical realisation

Implementing the workaround is no magic, even though you need to touch your webserver’s configuration.

The first thing you need to do is add a search form to your website. That form may look like this:

<form action="/search" method="get">
     <input name="q" type="text" />
     <button type="submit">Search</button>
</form>

As you see, the form just consists of a text field and a submit button. The data will be submitted to /search on your website.

Sure, /search doesn’t exist on your website (if it exists you need to use a different endpoint), but we’ll configure your web server to do the remaining work. The web server needs to do two things: (1) it needs to prefix the phrase with site:your.domain and (2) it needs to redirect the user to the search engine of your choice. Depending on the web server you’re using the configuration of course differs. My Nginx configuration, for example, looks like this:

location ~ ^/search {
    return 302 https://duckduckgo.com/?q=site%3Abinfalse.de+$arg_q;
}

So it sends the user to duckduckgo.com, with the query string site:binfalse.de concatenated to the submitted search phrase ($arg_q = the q variable of the original GET request). If you’re running an Apache web server, you probably know how to achieve the same over there. Otherwise it’s a good opportunity to look again into the manual ;-)
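For the record, an untested mod_rewrite sketch of the same redirect for Apache might look roughly like this (details may need tweaking for your setup):

# capture the q parameter from the query string and redirect to DuckDuckGo
RewriteEngine On
RewriteCond %{QUERY_STRING} (?:^|&)q=([^&]+)
RewriteRule ^/?search$ https://duckduckgo.com/?q=site%3Abinfalse.de+%1 [NE,R=302,L]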

Furthermore, the results pages of DuckDuckGo can be customised to look more like your site. You just need to send a few more URL parameters with the query, such as kj for the header colour or k7 for the background colour. The full list of available configuration options is available at DuckDuckGo’s settings via URL parameters.
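For example, the Nginx rule from above could pass such parameters along like this (the colour values are just made up for illustration):

location ~ ^/search {
    return 302 https://duckduckgo.com/?q=site%3Abinfalse.de+$arg_q&kj=%23990000&k7=%23ffffff;
}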

In conclusion, if you use my search form to search for docker, you’ll be guided to https://binfalse.de/search?q=docker. The Nginx delivering my website will then redirect you to https://duckduckgo.com/?q=site%3Abinfalse.de+docker, try it yourself: search for docker!

This of course also works for dynamic websites with WordPress, Contao or similar…

Run Baïkal through Docker

Baïkal is a quite popular Calendar+Contacts server. It supports CalDAV as well as CardDAV.

I’ve been using it for my calendars and address books for more than 4 years now. However, I initially installed it as a plain PHP application with a MySQL database. The developers also announced quite early that they are working on a Docker image, but there is nothing useful as of mid 2018. So far they just provide a quite inconvenient how-to and a list of issues that apparently prevent them from providing a proper Docker image. Thus, I just dockerised the application myself :)

The Docker image

Actually, creating a Docker image for Baïkal was super easy. In the end, it is “only” a PHP application ;-) The corresponding Dockerfile can be found in the root directory of Baïkal’s git repository (at least in my fork). The latest version at the time of writing is:

FROM php:apache
MAINTAINER martin scharm <https://binfalse.de/contact>

# we're working from /var/www, not /var/www/html
# the html directory will come with baikal
WORKDIR /var/www

# install tools necessary for the setup
RUN apt-get update \
 && apt-get install -y -q --no-install-recommends \
    unzip \
    git \
    libjpeg62-turbo \
    libjpeg62-turbo-dev \
    libpng-dev \
    libfreetype6-dev \
    ssmtp \
 && apt-get clean \
 && rm -r /var/lib/apt/lists/* \
 && a2enmod expires headers

# for mail configuration see https://binfalse.de/2016/11/25/mail-support-for-docker-s-php-fpm/


# install php db extensions
RUN docker-php-source extract \
 && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
 && docker-php-ext-install -j$(nproc) pdo pdo_mysql \
 && docker-php-source delete

# install composer
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
 && php -r "if (hash_file('SHA384', 'composer-setup.php') === '544e09ee996cdf60ece3804abc52599c22b1f40f4323403c44d44fdfdd586475ca9813a858088ffbc1f233e9b180f061') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
 && mkdir -p composer/packages \
 && php composer-setup.php --install-dir=composer \
 && php -r "unlink('composer-setup.php');" \
 && chown -R www-data: composer


# prepare destination
RUN rm -rf /var/www/html && chown www-data /var/www/
ADD composer.json /var/www/
ADD Core html /var/www/Core/
ADD html /var/www/html/

# install dependencies etc
USER www-data
RUN composer/composer.phar install


USER root

# the Specific dir is supposed to come from some persistent storage
VOLUME /var/www/Specific

So, it basically

  • installs some dependencies through apt-get,
  • installs the PDO-MySQL extension,
  • installs composer,
  • adds the Baikal sources into the image,
  • and finally installs remaining Baikal dependencies through composer.

I distribute the image as binfalse/baikal.

Using the Docker image

Using the image is fairly simple. Basically, you only need to mount some persistent space to /var/www/Specific:

docker run -it --rm -p 80:80 -v /path/to/persistent:/var/www/Specific binfalse/baikal

Please make sure that the directory /path/to/persistent has proper permissions. In the container an Apache2 is serving the contents, so make sure the user www-data (UID 33) is allowed to rwx that directory.
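One way to prepare that directory on the host could, for example, look like this (UID/GID 33 corresponds to www-data inside the container; adjust if your setup differs):

mkdir -p /path/to/persistent
chown -R 33:33 /path/to/persistent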

To start with, you can use the original Specific directory from the Baïkal repository. Then head to your Baikal instance (which will probably redirect to BASEURL/admin/install) and set up your server. Every configuration will be stored in the mounted volume at /path/to/persistent.

SSL

To support encrypted connections you would need to mount the certificates as well as a modified Apache configuration into the container. However, I recommend running it behind a reverse proxy, such as binfalse/nginx-proxy, and letting the proxy handle all SSL connections (as for all other containers). This way, you just need one proper SSL configuration.

MySQL

The default SQLite database is perfect for a first test, but it is slow and just allows for a limited number of SQL variables. If you, for example, have more than 999 contacts, the first sync of a clean WebDAV device will result in an exception such as:

PDOException: SQLSTATE[HY000]: General error: 1 too many SQL variables

Thus, for production you may want to switch to a proper database, such as MariaDB. Lucky you, the Docker image supports MySQL! ;-)

To reproducibly assemble both containers, I recommend Docker-Compose. Here is a sample config with two containers baikal and baikal-db:

version: '2'
services:
    baikal:
        restart: always
        image: binfalse/baikal
        container_name: baikal
        volumes:
            - /srv/baikal/config:/var/www/Specific
        links:
            - baikal-db
    baikal-db:
        restart: always
        image: mariadb
        container_name: baikal-db
        volumes:
            - /srv/baikal/database:/var/lib/mysql
        environment:
            MYSQL_ROOT_PASSWORD: roots-difficult-password
            MYSQL_DATABASE: baikal
            MYSQL_USER: baikal
            MYSQL_PASSWORD: baikals-difficult-password

This assumes that your Baikal configuration can be found in /srv/baikal/config. The database will be stored in /srv/baikal/database. Also note the database credentials for configuring Baikal. If you’re not running a reverse proxy in front of the application, you also need to add some port forwarding for the baikal container:

version: '2'
services:
    baikal:
        restart: always
        image: binfalse/baikal
        [...]
        ports:
            - "80:80"
            - "443:443"
        [...]

Mail support

I’m not sure why, but Baikal’s list of issues included support for mail. However, adding mail support should also be fairly easy if needed. I already wrote a How-To for PHP-mail in Docker.

Logging with Docker

In a typical Docker environment you’ll have plenty of containers (probably in multiple networks?) on the same machine. Let’s assume you need to debug some problem with a container, e.g. because it doesn’t send mails anymore. What would you do? Correct, you’d go and check the logs.

By default, Docker logs the messages of every container into a JSON file. On a Debian-based system you’ll probably find the file at /var/lib/docker/containers/CONTAINERID/CONTAINERID-json.log. However, to properly look into the logs you would use Docker’s logs tool. This will print the logs, just as you would expect cat to dump the logs in /var/log. docker logs can also filter for time spans using --since and --until, and it is able to emulate a tail -f with --follow.
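For example, to see a container’s messages of the last hour and keep following the log:

docker logs --since 1h --follow CONTAINERID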

However, the logs are only available for existing containers. That means, if you recreate the application (i.e. you recreate the container), you’ll typically lose the log history… If your workflow includes the --rm flag, you will immediately trash the log of a container when it’s stopped. Fortunately, Docker provides other logging drivers, to e.g. log to AWS, fluentd, GCP, and to good old syslog! :)

Here I’ll show how to use the host’s syslog to manage the logs of your containers.

Log to Syslog

Telling Docker to log to the host’s syslog is really easy. You just need to use the built-in syslog driver:

docker run --log-driver syslog [other options etc]

Voilà, the container will log to the syslog and you’ll probably find the messages in /var/log/syslog. Here is an example of an Nginx that I just started to serve my blog on my laptop:

Feb 21 16:06:32 freibeuter af6dcace59a9[5606]: 172.17.0.1 - - [21/Feb/2018:15:06:32 +0000] "GET /2018/02/21/logging-with-docker/ HTTP/1.1" 304 13333 "http://localhost:81/" "Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0" "-"

By default, the syslog driver uses the container’s ID as the syslog tag (here it is af6dcace59a9), but you can further configure the logging driver and, for example, set a proper syslog tag:

docker run --log-driver syslog --log-opt tag=binfalse-blog [other options etc]

This way, it is easier to distinguish between messages from different containers and to track the logs of an application even if the container gets recreated:

Feb 21 16:11:16 freibeuter binfalse-blog[5606]: 172.17.0.1 - - [21/Feb/2018:15:11:16 +0000] "GET /2018/02/21/logging-with-docker/ HTTP/1.1" 200 13333 "http://localhost:81/" "Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0" "-"

If you’re using Docker Compose, you can use the logging keyword to configure logging:

version: '2'
  services:
    website:
      restart: unless-stopped
      image: nginx
      container_name: website
      volumes:
        - /srv/web/default/:/usr/share/nginx/html
      logging:
        driver: syslog
        options:
          tag: docker/website

Here, I configured an nginx that just serves the contents from /srv/web/default. The interesting part is, however, that the container uses the syslog driver and the syslog tag docker/website. I always prefix the tag with docker/ to distinguish between log entries of the host machine and entries from Docker containers.

Store Docker logs separately

The workaround so far will probably substantially spam your /var/log/syslog, which may become very annoying… ;-)

Therefore, I recommend writing Docker’s logs to a separate file. If you’re, for example, using Rsyslog, you may want to add the following configuration:

if $syslogtag contains 'docker/' then /var/log/docker
& ~

Just dump the snippet into a new file /etc/rsyslog.d/docker.conf and restart Rsyslog. This rule tells Rsyslog to write messages that are tagged with docker/* to /var/log/docker, and not to the default syslog file anymore. Thus, your /var/log/syslog stays clean and it’s easier to monitor the Docker containers.
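To quickly check that the rule works without starting a container, you could, for example, emit a test message with a matching tag and see whether it ends up in the new file:

logger -t docker/test "hello docker log"
tail /var/log/docker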

Disentangle the Container logs

Since version 8.25, Rsyslog can also be used to split the docker logs into individual files based on the tag. So you can create separate log files, one per container, which is even cleaner! The idea is to use the tag name of containers to implement the desired directory structure. That means, I would tag the webserver of a website with docker/website/webserver and the database with docker/website/database. We can then tell Rsyslog to allow slashes in program names (see the programname section at www.rsyslog.com/doc/master/configuration/properties.html) and create a template target path for Docker log messages, which is based on the programname:

global(parser.PermitSlashInProgramname="on")

$template DOCKER_TEMPLATE,"/var/log/%programname%.log"

if $syslogtag contains 'docker/' then ?DOCKER_TEMPLATE
&~

Using that configuration, our website will log to /var/log/docker/website/webserver.log and /var/log/docker/website/database.log. Neat, isn’t it? :)

Inform Logrotate

Even though all the individual logfiles will be smaller than a combined one, they will still grow in size. So we should tell logrotate about their existence!

Fortunately, this is easy as well. Just create a new file /etc/logrotate.d/docker containing something like the following:

/var/log/docker/*.log
/var/log/docker/*/*.log
/var/log/docker/*/*/*.log
{
        rotate 7
        daily
        missingok
        notifempty
        delaycompress
        compress
        postrotate
                invoke-rc.d rsyslog rotate > /dev/null
        endscript
}

This will rotate the files ending in *.log in /var/log/docker/ and its subdirectories every day and keep compressed logs for 7 days. Here I’m using a maximum depth of 3 subdirectories – if you need a deeper hierarchy of directories just add another /var/log/docker/*/*/*/*.log etc. to the beginning of the file.