Docker encourages its users to build containers that log to standard
out or standard error; in fact, it's now common practice to do so. Your
process controller (uWSGI/Gunicorn) should combine the logs from
all processes and do something useful with them, like writing them all
to a file without your app having to worry about locking, or shipping
them off somewhere else entirely.
Docker supports this practice and collects logs for us in JSON, which adds
missing timestamps and plays well with LogStash. The show-stopping issue
for us, though, is that these files grow without bound. You cannot
use the logrotate utility with them because the Docker daemon will not
re-open the file (well, not unless you stop/start the specific container).
Docker logging issues are an ongoing topic, and this is clearly an area
where Docker will improve in the future.
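For reference, these are the files in question; a sketch assuming Docker's default json-file behavior, with the container ID left as a placeholder:

```
# Docker writes each container's JSON-formatted output here
# (<container-id> is a placeholder); these are the files that grow unchecked:
ls -lh /var/lib/docker/containers/<container-id>/<container-id>-json.log
```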
There are two other widely accepted ways of working around this:
- Bind mount the host's /dev/log into the container and offload logs to
the host's Syslog
- Mount a volume from the host or from a different container where logs
will be processed.
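As a sketch, with <image> standing in for your image name, the two approaches look like this on the docker run command line:

```
# 1. Bind mount the host's /dev/log so the container can talk to Syslog
docker run -v /dev/log:/dev/log <image>

# 2. Mount a host directory where log files will be written and processed
#    (the /var/log/myapp path is illustrative)
docker run -v /var/log/myapp:/var/log/myapp <image>
```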
The second option is out: it has the same problem of not being able to easily
tell the app to re-open its files for log rotation without restarting the
container.
Using /dev/log and offloading logs to the system's log daemon sounds
like a good idea. The Docker host can provide this service uniformly to
all containers, and containers need not deal with (much) logging complexity.
Unfortunately, this approach has multiple problems.
Offloading logs to the host's Syslog most likely means that you want to add
some additional configuration to rsyslog, which requires a restart of the
rsyslog daemon. (Say you want to stick your logs in a specific, app-specific
file.) The first thing rsyslog does when it starts is (re-)create the
/dev/log socket. At that point, any running Docker container that has
already bind mounted /dev/log is holding the old socket, not the newly
created one. In any case, rsyslog is no longer listening to any of the
currently running containers for logs. Full stop. This method doesn't pass
the smoke test.
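The underlying mechanics can be demonstrated without Docker or rsyslog at all. Holding a reference to a file (an open file descriptor here; a bind mount in the container case) pins the original file, so removing and recreating the path afterwards is invisible to the holder. A rough sketch:

```shell
f=$(mktemp)
echo "old socket" > "$f"
exec 3<"$f"               # the container "bind mounts" the original file
rm "$f"                   # rsyslog restarts: the old /dev/log goes away...
echo "new socket" > "$f"  # ...and a brand new one appears at the same path
out=$(cat <&3)            # the holder still reads the original file
echo "$out"
```

Swap the plain file for a UNIX datagram socket and the fd for a bind mount, and you have the exact situation a running container finds itself in after an rsyslog restart.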
What ended up working for me was using the network, but it added complexity
to the Docker host. I'm managing Docker hosts with Ansible, so this wasn't
a huge problem; I'd rather tune my Docker hosts than alter each image
and container. I set the network range on the docker0 bridge interface
to a specific, private IP range. Now my Docker hosts always have
a known IP address that my Docker containers can make connections to:
DOCKER_OPTS="--ip 127.0.0.1 --bip 172.30.0.1/16"
I configured rsyslog on the host to listen for UDP traffic and bind only to
this private address:
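Something along these lines does the trick; this is a sketch using rsyslog's legacy directive syntax (as shipped with Trusty), with the address matching the --bip setting above and the drop-in file path purely illustrative:

```
# /etc/rsyslog.d/10-docker.conf
$ModLoad imudp
$UDPServerAddress 172.30.0.1
$UDPServerRun 514
```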
I then built my image to run the process with its output piped to logger,
using the -n option to specify my syslog server. Guess what: no logs.
The util-linux in Ubuntu Trusty (and other releases) is 2.20, which dates
from 2011-ish. The logger utility has known bugs; specifically, the -n
option is silently ignored unless you also specify a local UNIX
socket to write to. This version of util-linux does not ship the
nsenter command either, which is very handy when working with Docker
containers. (See here for nsenter.) This is a pretty big frustration.
The final solution was to make my incantations in my Dockerfiles slightly
more complex for apps that do not directly support Syslog. But it works:
CMD foobar-server --options 2>&1 \
| logger -n 172.30.0.1 -u /dev/null -t foobar-server -p local0.notice
I promise I'm not logging to /dev/null.