For years, I’ve used a variety of services on my home server, much of it revolving around managing media. Only recently have I adopted Docker, and now I wonder why I didn’t do so earlier.
Services like Transmission and PlexPy have been staples on my home server for years. From the beginning, they simply ran on a bare-metal Ubuntu install. Unfortunately, bare metal makes it very difficult to effectively back up and manage a bunch of different services.
Once, when the SSD the server ran from died, it took me weeks to get all my services back to where I wanted them.
When I first made the jump to bare-metal virtualization, I replicated this setup somewhat, but instead of the “all my eggs in one basket” approach, I created a separate VM for each service.
Obviously, that design was still less than ideal. It suffered from the same problem: configuration files spread all over the filesystems of the VMs. Except instead of needing to manage and back up one server, I now had 5-10 VMs, each running a single service.
While I had no problem backing up the various configs and databases, it meant every VM was a pile of cruft, copying and rsyncing files to the centralized NAS for backup. This setup lasted about 3 weeks before I abandoned it.
Enter Docker
Anybody who has been following tech for more than 5 minutes has probably heard at least something about Docker. It didn’t take long to decide that Docker was a great fit for my needs:
- Centralized configuration
- Easy to backup and restore
- Agnostic treatment of the underlying host. I wanted to be able to stop a service on one server and start it on a second with little or no reconfiguration.
As I have a rather hefty centralized NAS, my main goal was to run my services off a share on the NAS. That way, when the NAS is snapshotted and backed up, all my home services are automatically covered as well.
Docker Home Production V1
At the simplest level, running a Docker container can be thought of as running a single command on the command line and having a whole host of services start. So I created a share on my NAS and exported it via NFS to a rather vanilla Ubuntu install. The only tweaking was adding the Docker PPA and installing the latest Docker.
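Mounting that share on the Docker host is a single fstab entry; the NAS hostname and export path below are illustrative:

# /etc/fstab on the Docker host (hostname and export path are illustrative)
nas.lan:/volume1/Docker   /mnt/Docker   nfs   defaults,_netdev   0   0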
As I had never really played much with Docker beyond a few copy/pastes to fire up a container, I was starting from scratch.
Version 1 of my Docker setup involved an array of simple scripts, one per service, living under /mnt/Docker/scripts/.
The content of one of these scripts was essentially this:
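(A minimal sketch, using Transmission as the example; the image options, ports, and paths here are illustrative:)

#!/bin/bash
# transmission.sh -- recreate the Transmission container from a fresh image

# Stop and remove the existing container so a fresh version is used
docker stop transmission
docker rm transmission

# Pull the latest image before starting
docker pull linuxserver/transmission

# Config lives on the NAS share; a shared downloads dir is exposed to other containers
docker run -d \
  --name transmission \
  -e TZ="America/Chicago" \
  -e PGID=65534 \
  -e PUID=65534 \
  -v /mnt/Docker/Config/transmission:/config \
  -v /mnt/Docker/Shared/downloads:/downloads \
  -p 9091:9091 \
  linuxserver/transmission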
This script did a few things.
- Stopped and removed the existing container. This ensured that a fresh version of the container was pulled.
- Stored all the configs in a unified place.
- Gave a centralized layout for sharing files between containers that could also be accessed externally.
- Set any required environment variables.
- Pulled the container and started it.
Starting all the services was done from a basic systemd unit and was as simple as:
/usr/bin/find /mnt/Docker/scripts/ -name "*.sh" -exec sh {} \;
This setup worked great for months, but unfortunately I ran into a situation where I wanted to create container dependencies. So it was time to start using docker-compose.
Version 2.0
The more I’ve used it, the more I’ve learned that Docker is less a single tool and more a full toolbox. When poking around and asking “What’s the best way to do X in Docker?”, I’ve found there are countless different ways of accomplishing the same task.
While my V1 setup ran without issue for months, I hit trouble when I wanted to create a custom PHP/Nginx container. I could have made it work with plain run scripts, but for a container with dependencies, docker-compose proved to be a much better tool in the Docker toolbox.
After a bit of hacking around, I ended up with a docker-compose.yml file that looked something like this:
nginx_ipv6_echo:
  image: nginx:latest
  ports:
    - "8283:80"
  volumes:
    - /mnt/Docker/Config/nginx_php/code:/code
    - /mnt/Docker/Config/nginx_php/site.conf:/etc/nginx/conf.d/site.conf
  links:
    - php_ipv6_echo:php

php_ipv6_echo:
  image: php:7-fpm
  volumes:
    - /mnt/Docker/Config/nginx_php/code:/code

muximux:
  image: linuxserver/muximux
  volumes:
    - /mnt/Docker/Config/muximux:/config
  ports:
    - "443:443"
  environment:
    TZ: "America/Chicago"
    PGID: 65534
    PUID: 65534
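For context, the site.conf mounted into the Nginx container just hands PHP requests to the linked container. A minimal sketch, assuming a stock php-fpm setup (the actual file may differ):

server {
    listen 80;
    root /code;
    index index.php;

    location ~ \.php$ {
        # "php" is the link alias for the php_ipv6_echo container
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

php:7-fpm listens on port 9000 by default, and the links: entry makes the PHP container resolvable as php from inside Nginx.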
A quick bit of shell scripting gives me a new and improved start script:
#!/bin/bash
# docker_yml.sh -- stop everything, then pull and start this host's compose file

# Stop and remove all containers so fresh images are always used
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)

# Each host gets its own directory of compose files, keyed by FQDN
hostpath=$(hostname --fqdn)
cd "/mnt/Docker/scripts/$hostpath" || exit 1

# Pull the latest images and bring everything up detached
docker-compose pull
docker-compose up -d
This script does a few important things.
- Stops and removes the existing containers. This cleans everything up so even if containers are running, they can easily be updated.
- Goes into a host-specific directory. This allows me to quickly move containers between different hosts. For example, the docker2.lan directory contains a different yml for that host.
- Pulls and brings up the containers.
Finally, a simple systemd unit starts and stops everything with the host VM:
cat /etc/systemd/system/mydockers.service
[Unit]
Description=Start all dockers on boot
RequiresMountsFor=/mnt/Docker

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/mnt/Docker/scripts/docker_yml.sh
ExecStop=/mnt/Docker/stopall.sh
StandardOutput=journal

[Install]
WantedBy=multi-user.target
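Enable the unit once and the containers will follow the VM’s power state:

sudo systemctl enable mydockers.service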
The stopall.sh script just contains the two docker lines from the top of docker_yml.sh, to stop and remove all the containers.
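In other words, stopall.sh is just:

#!/bin/bash
# stopall.sh -- stop and remove every container on this host
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)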
The huge caveat here is that anything using SQLite probably should NOT be run off NFS/CIFS shares. SQLite’s file locking is unreliable over network filesystems and can corrupt the database, though in practice it generally only causes problems when the software accesses the database from multiple threads, which most of these services don’t.
Plex most decidedly does not like being run off NFS shares. I ran my Plex library off NFS for years, but since PMS versions from around November 2016, it’s no longer possible to store the Plex library on a remote drive without random lockups and other problems. I fixed this by storing the Plex library on a local drive, with an rsync job backing it up to the NFS share.
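The backup job itself can be as simple as a nightly cron entry; the library and destination paths here are illustrative:

# crontab entry on the Plex host (paths are illustrative)
0 3 * * * rsync -a --delete /opt/plex/ /mnt/Docker/Backups/plex/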
In closing
For me, Docker has definitely fixed my workflow. Instead of multiple single-use VMs, I have a handful of Docker servers whose containers can be spun up at will. Obviously this bypasses Docker’s clustering abilities, but in my use case that would be overkill. For now, this setup has been running beautifully.