
Analyzing PHP Logs

The most obvious way to inspect your software is to simply execute it and watch how it runs. This is fine when your application works as expected. During development, however, that’s rarely the case, especially in your local development environment.

For deeper inspection, log collection is often the first step. Good log messages allow you to go back in time when your code isn’t behaving as expected. The second step is looking through your logs to find the information you need.
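For instance, in a Laravel application (a minimal sketch using the standard Log facade; the field names are purely illustrative), a message with a little structured context is far easier to search later than a bare string:

<?php

use Illuminate\Support\Facades\Log;

// A bare string is hard to filter on later...
Log::info('order shipped');

// ...while structured context gives you fields you can search and graph.
Log::info('order shipped', ['order_id' => 1234, 'carrier' => 'ups']);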

Add More Contextual Data for Analysis

The main idea behind logging is to include as much useful data as possible without going overboard. Typically, you collect more debug and informational logs from code under active development than from code deployed to production. Either way, you may need to trace a particular URL that’s giving you trouble, or perhaps a chain of requests involved in a user workflow. Adding the following to your logging setup is completely optional, but including a few fields by default makes searching much easier when you need it. The request URL and session ID are a great start.

The following are some snippets to help you do this with Laravel and Monolog.

First, we need a small helper class Laravel can “tap” into a logging channel. It pushes our LogProcessor (defined next) onto every Monolog handler in that channel:

<?php

// File: app/Helpers/LogHelper.php

namespace App\Helpers;

use Monolog\Logger;

class LogHelper
{
    /**
     * @param Logger $logger
     */
    public function __invoke($logger)
    {
        foreach ($logger->getHandlers() as $handler) {
            $handler->pushProcessor(new LogProcessor());
        }
    }
}

Next, we create the processor itself. It adds the request URL and session ID to the context of every log record:

<?php

// File: app/Helpers/LogProcessor.php

namespace App\Helpers;

use Illuminate\Support\Facades\Request;
use Illuminate\Support\Facades\Session;

/**
 * Adds the request URL and session ID to the context of Monolog messages.
 *
 * @package App\Helpers
 */
class LogProcessor
{
    public function __invoke(array $record): array
    {
        $record['context']['request_url'] = Request::fullUrl();
        $record['context']['session_id'] = Session::getId();

        return $record;
    }
}

Now we can edit config/logging.php and add the tap entry to our logging channel:

'channels' => [
    // ...

    'daily' => [
        'driver' => 'daily',
        'path' => storage_path('logs/laravel.log'),
        'level' => 'debug',
        'days' => 14,
        'tap' => [App\Helpers\LogHelper::class],
    ],
],
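With the tap in place, every message logged through the daily channel picks up the extra context automatically. As a quick check (a sketch; the route and 'id' value are just examples, and the exact line format depends on your formatter), a plain debug call now carries the request URL and session ID:

<?php

// File: routes/web.php

use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Route;

Route::get('/one', function () {
    // 'id' is our own context; request_url and session_id are appended
    // by the LogProcessor tapped into the channel.
    Log::debug('debug', ['id' => 'one']);

    return 'ok';
});

// Produces a line in storage/logs/laravel-YYYY-MM-DD.log similar to:
// local.DEBUG: debug {"id":"one","request_url":"http://my.test.com/one","session_id":"MvXb0r..."}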

The Old Way – Parsing Logs with Command Line Tools

The ability to parse, view, and search logs on the command line is an indispensable skill in a software developer’s toolkit. During everyday development your application runs locally, and being able to inspect its logs locally in real time is valuable. The same skills apply when you must diagnose issues on a remote server.

Having Tail Open by Default as Good Practice

It’s beneficial to keep a window with your log files open during development. There are several things going on under the hood for even a simple web application. The warnings and notices appearing in PHP error logs are often the first symptoms that something is off in your code, especially when your application doesn’t appear broken at first glance.

This is easy to do with the tail command (please note I’ve removed timestamps for legibility).

$ tail -f laravel.log

DEBUG: debug {"id":"one","request_url":"http://my.test.com/one","session_id":"MvXb0r..."}
DEBUG: debug {"id":"two","request_url":"http://my.test.com/two","session_id":"MvXb0r..."}
WARNING: cache miss {"request_url":"http://my.test.com/cached","session_id":"MvXb0r..."}

Searching for Different Kinds of Messages

The easiest way to search for messages is to use the grep command.

$ grep WARN laravel.log

WARNING: cache miss {"request_url":"http://my.test.com/cached","session_id":"MvXb0r..."}
WARNING: last name undefined {"request_url":"http://my.test.com/profile","session_id":"MvXb0r..."}
WARNING: first name undefined {"request_url":"http://my.test.com/profile","session_id":"MvXb0r..."}
WARNING: mobile phone undefined {"request_url":"http://my.test.com/profile","session_id":"MvXb0r..."}
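If you want to match more than one severity at once, grep’s extended regular expressions work well too (a small sketch using standard grep options):

$ grep -E 'WARNING|ERROR' laravel.log
$ grep -ci WARNING laravel.log    # count matching lines, case-insensitive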

You can also use grep multiple times to filter out the noise.

$ grep WARN laravel.log | grep -v undefined

[2019-09-08 10:25:43] local.WARNING: cache miss {"request_url":"http://my.test.com/cached","session_id":"MvXb0rPZSJT3sscLktF07sFetYvSLXEMVa4oLxIg"}

Combining Tail and Grep To Reduce Noise

There are times when you want to limit a live view of your logs to particular data. For instance, if you wanted to limit lines to a particular session, you could use:

$ tail -f laravel.log | grep MvXb0rPZSJT3sscLktF07sFetYvSLXEMVa4oLxIg

WARNING: cache miss {"request_url":"http://my.test.com/cached","session_id":"MvXb0r..."}
WARNING: last name undefined {"request_url":"http://my.test.com/profile","session_id":"MvXb0r..."}
WARNING: first name undefined {"request_url":"http://my.test.com/profile","session_id":"MvXb0r..."}
WARNING: mobile phone undefined {"request_url":"http://my.test.com/profile","session_id":"MvXb0r..."}

Using a Log Management Tool

The Benefits of Using a Tool over the Command Line

There are several benefits to using a dedicated tool instead of files and command-line utilities. The most important is that a tool allows your log management to grow with your application and your team. You can get by with simple utilities on one or two servers and a small team, but you can bet things will get out of hand quickly as you grow.

Loggly Setup

Setup is painless and we’ll be able to leverage the helper classes we created earlier. All we need is one more class and a little configuration, and we’re on our way.

First, we need a class for pushing Monolog’s built-in LogglyHandler into our logging stack.

<?php

// File: app/Helpers/Loggly.php

namespace App\Helpers;

use Monolog\Handler\LogglyHandler;
use Monolog\Logger;

class Loggly
{
    /**
     * @param array $config
     * @return Logger
     */
    public function __invoke($config)
    {
        $logger = new Logger(env('APP_NAME'));

        $logger->pushHandler(
            new LogglyHandler(
                env('LOGGLY_KEY') . '/tag/' . config('services.loggly.tag'),
                Logger::DEBUG
            )
        );

        return $logger;
    }
}

Next, we need to add our Loggly key to our environment file (.env).

LOGGLY_KEY=ff2337d9-dd24-4030-8c6e-309ae53738db

Then we configure our key and tag in config/services.php.

'loggly' => [
    'key' => env('LOGGLY_KEY'),
    'tag' => env('APP_NAME'),
],

Lastly, we reconfigure our logging channels.

'channels' => [
    'stack' => [
        'driver' => 'stack',
        'channels' => ['daily', 'loggly'],
        'ignore_exceptions' => false,
    ],

    // ...

    'loggly' => [
        'driver' => 'custom',
        'formatter' => Monolog\Formatter\LogglyFormatter::class,
        'via' => App\Helpers\Loggly::class,
        'tap' => [App\Helpers\LogHelper::class],
    ],
],
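To confirm messages are reaching Loggly, one quick check (a sketch; any log call routed through the stack channel will do) is to fire a message from artisan tinker and then look for it in Loggly’s search or live tail view:

$ php artisan tinker
>>> Log::warning('loggly smoke test', ['id' => 'smoke-test']);

The message should show up in the local daily log file immediately and in Loggly shortly after.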

Using Loggly to View Logs in Real Time

The live tail function in SolarWinds® Loggly® allows you to view incoming log messages in real time. This works much the same as using tail -f on the command line. It supports viewing messages in both grouped and ungrouped mode, which can help reduce noise if several of the same messages are repeated.

Combining a search while viewing the logs in real time can be useful when you’re trying to troubleshoot problems. For example, you could filter the live tail view by session ID, request URL, or a warning/notice message to help you diagnose what’s causing issues.
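For example, with the context fields we added earlier, search filters along these lines (shown using Loggly’s json.* field syntax; adjust the field paths to match how your events are actually indexed) narrow the live tail to a single session or page:

json.context.session_id:"MvXb0rPZSJT3sscLktF07sFetYvSLXEMVa4oLxIg"
json.context.request_url:"http://my.test.com/profile"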

Loggly Live Tail View © 2019 SolarWinds Worldwide, LLC. All Rights Reserved.

Improving Log Analysis using Loggly

Application Error State Alerting

One great feature of Loggly is the ability to generate alerts based on events discovered in your log files. Finding out from an angry user that your application has been down for 30 minutes is never a great experience. Loggly allows you to define an alert on anything you can search for and then send that alert to a variety of endpoints.

For example, you’d certainly want to be alerted if your application encountered a fatal error or an uncaught exception.
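As a sketch, an alert like this could be driven by a saved search on the error levels in your log records (the exact field names depend on the formatter you configured, so verify them against Loggly’s field explorer first):

json.level_name:ERROR OR json.level_name:CRITICAL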

Add Alert to List © 2019 SolarWinds Worldwide, LLC. All Rights Reserved.

Graphs of Log Statistics

Another great feature is the ability to create graphs based on the statistical data in your log files. The more your application is used, the more log data it generates, and the harder it becomes to make sense of it all by eye. This is a simple example of a graph, but I leave it to your imagination (and your data) as to what’s possible to create.

This graph is visualizing the number of occurrences for different exception types. We’re looking at the field json.context.exception.class, which gives us the class thrown in the exception.
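If you log caught exceptions yourself, one way to make sure the class name ends up as a structured, graphable field (a sketch using hypothetical $payment and $order objects; Laravel’s default exception reporting also places the exception in the log context) looks like this:

<?php

use Illuminate\Support\Facades\Log;

try {
    $payment->charge($order);
} catch (\Exception $e) {
    // Putting the class name into the context makes it searchable and
    // graphable in Loggly as json.context.exception.class.
    Log::error($e->getMessage(), [
        'exception' => [
            'class' => get_class($e),
            'code' => $e->getCode(),
        ],
    ]);

    throw $e;
}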

You can also save and use the graphs you build to create dashboards, which can give you a great bird’s-eye view of how your application is behaving.

Graphs of Log Statistics © 2019 SolarWinds Worldwide, LLC. All Rights Reserved.

Conclusion

Command-line tools can be quite useful. They can get the job done when you’re working solo on a small project, or on a simple project with a small group of developers. Once an application grows to a certain point (or if you just want to make your life easier), Loggly is a much better alternative.

Loggly allows you to aggregate logs from several hosts, which is essential in a load-balanced or multi-tier deployment. It also indexes logs for fast search, which is key when dealing with the higher volumes of a production environment. Analysis combined with custom alerting gives you new insights into how your software behaves and how your customers use your application, along with real-time notifications when errors need your attention.