
How to implement logging in Docker with a sidecar approach

By Garland Kan 10 Sep 2015

As a consultant building highly automated systems for clients using Docker, I have seen how important it is to be able to get application logs out of your containers and into a place where the developers can view and search through them easily. Sending your logs to Loggly accomplishes this and gives you some very nice features such as an easy interface to search logs, create alerts, and build dashboards off of the data.

Centralized Log Management Is a Must-Have in the World of Docker

Logging is very important because it gives a complete view into your application environment. If you’re beyond the days of having only a few machines, most of your machines are dynamic. The cluster controls where your application runs and takes care of healing itself when a machine goes down and a new one replaces it. This makes it impossible to solve problems manually by ssh’ing into a machine to look at logs: you first have to figure out where exactly your app is running, then gather the logs from the potentially numerous machines it runs on to get the full picture. In a container world, centralized log management is a must.

Since Loggly is the central place into which all the logs flow, you can find items for a certain application in an environment no matter where it is running. In most cases, application developers don’t care where an application ran; they just need to know what happened.

Solution: Pair Application and Logging Containers

My approach is to pair each application container running on a CoreOS cluster with a Loggly logging container. This makes it very easy to see which container a log came from, specify which files should be shipped, set tags for each application/container, and start and stop the container and its logging in unison.

The following is the Docker process output for one application in a container:

# docker ps

When we launch the “app_x” container, we also launch an “app_x_logger” container that pairs up with it.  
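The pairing can be scripted so the two containers always launch together. A minimal sketch follows; the image names, paths, and the `start_pair` function name are illustrative assumptions, not from this post:

```shell
# start_pair: launch an application container and its Loggly logging
# sidekick together so they start (and can be stopped) in unison.
# Image names are placeholders; substitute your own.
start_pair() {
  app="$1"                        # e.g. app_x
  app_image="$2"                  # your application's image
  log_image="$3"                  # the Loggly logging container image
  docker_cmd="${DOCKER:-docker}"  # set DOCKER=echo for a dry run

  # Application container: expose its log directory as a data volume.
  $docker_cmd run -d --name "$app" -v /opt/app/logs "$app_image"

  # Logging container: bind the app's volumes and tag events with its name.
  $docker_cmd run -d --name "${app}_logger" \
    --volumes-from "$app" \
    -e DIRECTORIES_TO_MONITOR="/opt/app/logs" \
    -e TAGS="$app" \
    "$log_image"
}
```

Running `DOCKER=echo; start_pair app_x my/app-image my/log-image` prints the two `docker run` commands instead of executing them, which is handy for checking the wiring before launching anything.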

The best practice on a CoreOS cluster is to avoid installing anything natively on the OS, because most of it is not persisted across upgrades. I like this approach because it keeps the OS clean. To support file-based logging, I created a Docker container that uses Docker’s shared volumes. You can find and use the container on DockerHub here.

How to Use the Loggly Docker Container

First we start the application container, named “app_x”, exposing one or more locations inside the container where logs are written. The Loggly logging container will later grab any files that show up there and send them to Loggly.

docker run \
-d \
--name app_x \
-v /opt/app/logs \
your_app_image

(Here “your_app_image” is a placeholder for your application’s image.)

Notice the “-v /opt/app/logs” parameter. This tells Docker to expose this out as a data volume so that another container can access it.

Then we start up the Loggly logging container to use the “Data Volumes” from the app_x container.

docker run -d \
--volumes-from app_x \
-e DIRECTORIES_TO_MONITOR="/opt/tomcat/logs,/var/log" \
-e TAGS="app_x" \
your_loggly_logging_image

(Again, “your_loggly_logging_image” is a placeholder for the logging container image from DockerHub.)

Notice the “--volumes-from app_x” option. It takes all the data volumes exposed by the container named “app_x” and binds them into this container, so files that “app_x” generates in those directories are also accessible here. (A detailed description of what is happening can be found here.)

Once started, the Loggly logging container basically runs Loggly’s configure-file-monitoring.sh shell script, adding all the directories you passed in. It’s best to use as many of the Loggly default setup scripts as possible so you don’t have to maintain your own: if Loggly updates the scripts to fix bugs or add functionality, this container gets all the updates without requiring you to do anything. A simple bash loop was added so that every file in the directories passed into the container (“DIRECTORIES_TO_MONITOR”) is added to Loggly.
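That directory-scanning loop might look roughly like the sketch below. This is an assumption about the shape of the loop, not the container’s actual script, and the `list_monitored_files` function name is made up; here the files are just printed instead of being handed to Loggly’s setup script:

```shell
# Sketch: split the comma-separated DIRECTORIES_TO_MONITOR list and
# enumerate every file found under each directory. In the real container,
# each file would be registered with Loggly's file-monitoring setup script.
DIRECTORIES_TO_MONITOR="${DIRECTORIES_TO_MONITOR:-/opt/tomcat/logs,/var/log}"

list_monitored_files() {
  # Replace commas with spaces so the shell can iterate over directories.
  for dir in $(echo "$DIRECTORIES_TO_MONITOR" | tr ',' ' '); do
    [ -d "$dir" ] || continue            # skip directories that don't exist
    find "$dir" -type f 2>/dev/null      # every file in the directory tree
  done
}

list_monitored_files
```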

Analyzing Your Docker Logs in Loggly

I recommend using Loggly tags with all of your log events to make it very easy to search for logs of interest. Just prefix all of your searches with:

tag:app_x

This searches by tag for the logs that came from the “app_x” container. If you pair each application this way, each application gets its own tag, making it very easy to search for specific items.


The hardest part of this setup is creating the initial pairing of an application container and a logging container, also referred to as a sidecar or sidekick. (See this article for other examples.) Once that’s done, you can reuse it over and over again, and at that point you can pretty much forget the logging container is there. This approach to logging with Docker and Loggly is a DevOps dream: you build something once that lets you easily troubleshoot an application from its logs, and it can be reused on every application without any maintenance.

The Loggly and SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.
Garland Kan