MySQL log analysis can be bewildering. Many unforeseen issues can crop up while you configure your database layer to produce logs and send them to another server. In this article, we’ll look at ways to troubleshoot some common problems. Let’s first cover the basics of this technique.
Applications can go wrong at multiple levels. For example, you could have filled caches, slow-running queries, and even hidden vulnerabilities. Often, it’s hard to tell what problems your application has in the absence of logging.
Logs give you a window into the state of an application by taking snapshots at given points in time. In this manner, logs let you understand what’s going on within the system. In this post, we’ll dive into the common types of MySQL logs and investigate strategies to collect and analyze them. Next, we’ll go over how to set up MySQL’s slow query log for analysis. Finally, we’ll also examine some troubleshooting steps you can take to resolve common logging setup issues.
On any given day, MySQL can amass gigabytes worth of logs—and that's just on a single server. A poor database administrator (DBA) used to have to collect and filter logs manually to debug issues. Not only did they have to determine where the logs were; they also needed to decide which types of logs to collect and how to filter them. Now we have more streamlined methods.
Here are a few of the most common types of MySQL logs that DBAs use to troubleshoot problems.
MySQL Slow Query Log: Use this log to record queries that take a long time to run.
General Query Log: Use this to provide a general overview of what mysqld is doing. It records when clients connect or disconnect.
Error Log: This one records errors that happen when you start, stop, or run MySQL.
Binlog: The binlog contains all statements that update data. We can use it to recover the state of an instance after a crash.
Relay Log: This log records changes received from a master server.
So, which of these logs is most useful?
The error log is enabled by default, and it contains a wealth of information. With the error log, we can filter messages by levels: information, warning, and error. On some operating systems, we can even get the stack trace of errors that have occurred.
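To make that level filtering concrete, here's a minimal Python sketch that pulls only the lines at a given level out of error-log text. The sample lines follow the MySQL 8.0 error-log format, but they're invented for illustration:

```python
import re

# Illustrative lines in the MySQL 8.0 error-log format:
# timestamp, thread id, [level], [error code], [subsystem], message.
ERROR_LOG = """\
2023-05-01T10:00:00.000000Z 0 [System] [MY-010116] [Server] mysqld starting
2023-05-01T10:00:01.000000Z 0 [Warning] [MY-010068] [Server] CA certificate is self signed.
2023-05-01T10:00:02.000000Z 0 [ERROR] [MY-010119] [Server] Aborting
"""

LEVEL_RE = re.compile(r"\[(System|Warning|ERROR|Note)\]")

def filter_by_level(text, level):
    """Return only the lines whose bracketed level matches `level`."""
    out = []
    for line in text.splitlines():
        m = LEVEL_RE.search(line)
        if m and m.group(1) == level:
            out.append(line)
    return out

print(filter_by_level(ERROR_LOG, "ERROR"))
```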
All that is great, but unfortunately, setting up the error log on Unix systems requires different configuration options than setting up on Windows systems. As such, it could be a hassle to set up if we’re configuring multiple systems running on different platforms. So, for simplicity’s sake, we’ll pass on the error log for now.
To enable the general log, we can start mysqld with the --general_log flag. Using the general log file, we can find the exact information the client sent to the server. From a security standpoint, it also records the types of connections established, so we can use it to verify that all connections to the server use Secure Sockets Layer (SSL).
The general log might still be a bit too generic. Let’s see if the Slow Query Log will meet our needs.
This log helps us identify long-running queries we can speed up. Like the general log, the slow query log is disabled by default, and we need to configure MySQL to enable it. As the name suggests, it only records queries that aren't performant. Because it's so specific, it also incurs the lowest logging overhead. That makes it ideal for query profiling.
It seems like this would be a suitable first step for us to measure query execution time with high fidelity. So, let’s take a look at how you can use the slow query log to do profiling.
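As a preview of what profiling with this log looks like, here's a Python sketch that extracts the timing and row counts from a single slow-log entry. The entry text is illustrative, and the regex assumes the standard text format of the slow query log:

```python
import re

# A single illustrative entry in the slow query log's text format.
SLOW_ENTRY = """\
# Time: 2023-05-01T10:00:00.000000Z
# User@Host: appuser[appuser] @ localhost []  Id:    42
# Query_time: 3.412000  Lock_time: 0.001000 Rows_sent: 10  Rows_examined: 150000
SET timestamp=1682935200;
SELECT * FROM orders WHERE customer_id = 7;
"""

STATS_RE = re.compile(
    r"# Query_time: (?P<query_time>[\d.]+)\s+Lock_time: (?P<lock_time>[\d.]+)\s+"
    r"Rows_sent: (?P<rows_sent>\d+)\s+Rows_examined: (?P<rows_examined>\d+)"
)

def parse_entry(entry):
    """Extract timing and row counts from one slow-log entry."""
    m = STATS_RE.search(entry)
    if not m:
        return None
    stats = {k: float(v) for k, v in m.groupdict().items()}
    # The query itself is the last line that is neither a comment
    # nor the SET timestamp statement.
    query = [l for l in entry.splitlines()
             if l and not l.startswith("#") and not l.startswith("SET timestamp")][-1]
    stats["query"] = query
    return stats

print(parse_entry(SLOW_ENTRY))
```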
As mentioned previously, slow query logging isn't enabled by default within MySQL. That means you'll need to turn it on. To do this, first edit the MySQL configuration file, which is at /etc/mysql/my.cnf.
You can run nano /etc/mysql/my.cnf to edit the configuration file. Here’s what you should edit the file to look like:
[mysqld]
general_log_file = /var/log/mysql/mysql.log
general_log = 1
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
log_queries_not_using_indexes = 1
min_examined_row_limit = 10000
To reach this stage, we have to edit the config options within the file. Let’s break down some of the main ones.
slow_query_log = 1: Use this statement to enable the slow query log.
slow_query_log_file = /var/log/mysql/mysql-slow.log: This statement informs MySQL to store the slow query logs within this file: /var/log/mysql/mysql-slow.log
long_query_time = 2: Use this statement to log any query that takes more than two seconds.
log_queries_not_using_indexes = 1: This statement logs queries that don’t use indexes because those might be slower.
min_examined_row_limit = 10000: With this statement, MySQL only logs a slow query if it examined at least 10,000 rows; queries that examine fewer rows are skipped even if they exceed long_query_time. This helps filter out noise.
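Putting these options together, a query lands in the slow log when it's slow enough (or skips an index, if that option is on) and examines at least min_examined_row_limit rows. Here's a small Python sketch of that decision logic, using the thresholds from the config above:

```python
# Thresholds mirroring the example config above.
LONG_QUERY_TIME = 2.0            # seconds
MIN_EXAMINED_ROW_LIMIT = 10000   # rows

def would_be_logged(query_time, rows_examined, used_index=True,
                    log_queries_not_using_indexes=True):
    """Sketch of the slow-log decision: a query is logged when it is slow
    enough (or skipped an index, if that option is on) AND it examined at
    least min_examined_row_limit rows."""
    slow_enough = query_time > LONG_QUERY_TIME
    index_miss = log_queries_not_using_indexes and not used_index
    return (slow_enough or index_miss) and rows_examined >= MIN_EXAMINED_ROW_LIMIT

print(would_be_logged(3.5, 20000))   # slow and examined many rows
print(would_be_logged(3.5, 50))      # slow, but too few rows examined
```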
Now, how do you get those logs to the server? Let’s cover that next.
To relay logs from the MySQL server, you'll need to use a syslog agent like rsyslog. rsyslog is a log-processing daemon that forwards messages from your hosts to a central logging server. An rsyslog agent on each host collects local log messages and relays them to the central server.
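To illustrate the relay pattern, here's a self-contained Python sketch using the standard library's SysLogHandler. A local UDP socket stands in for the central rsyslog server, and the port number is an arbitrary choice for the example:

```python
import logging
import logging.handlers
import socket

# A local UDP socket stands in for the central rsyslog server;
# port 5514 is an arbitrary unprivileged choice, not an rsyslog default.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 5514))

# SysLogHandler plays the agent role: it wraps each message in the
# syslog wire format and sends it to the configured address.
logger = logging.getLogger("mysql-relay")
logger.setLevel(logging.INFO)
handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 5514))
logger.addHandler(handler)

# Forward one slow-query line, then read it back off the "server" socket.
logger.info("Slow query: SELECT * FROM orders WHERE customer_id = 7")
datagram, _ = receiver.recvfrom(4096)
print(datagram.decode())

handler.close()
receiver.close()
```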
You’ve done a fair amount of work so far. Let’s focus on putting these logs together and gaining some useful information from them.
Why do you need a log aggregation service at all? It’s because a log on its own is just text and carries little meaning. To get useful information, you need to normalize the logs, recognize patterns, and then generate charts from the data.
Without advanced tooling, this could involve using text-filtering tools like awk or grep to extract relevant fields. And after that, you'd still have to figure out how to use a tool like gnuplot or R to generate diagrams. You'd have to do this for every metric or query you want to examine. Tools like mysqldumpslow or pt-query-digest help ease the process. But they also might not have the full range of features (like dashboards) you're looking for.
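As a rough idea of what tools like mysqldumpslow automate, here's a Python sketch that normalizes literal values out of queries and aggregates count and total time per query shape. The records are invented examples:

```python
import re
from collections import defaultdict

# Illustrative parsed slow-log records: (query text, query time in seconds).
RECORDS = [
    ("SELECT * FROM orders WHERE customer_id = 7", 3.4),
    ("SELECT * FROM orders WHERE customer_id = 1912", 2.8),
    ("SELECT name FROM users WHERE id = 5", 2.1),
]

def normalize(query):
    """Collapse literal values so similar queries group together,
    roughly like mysqldumpslow's abstracted form."""
    query = re.sub(r"\b\d+\b", "N", query)
    query = re.sub(r"'[^']*'", "'S'", query)
    return query

def aggregate(records):
    """Sum up count and total time per normalized query shape."""
    totals = defaultdict(lambda: {"count": 0, "total_time": 0.0})
    for query, seconds in records:
        key = normalize(query)
        totals[key]["count"] += 1
        totals[key]["total_time"] += seconds
    return dict(totals)

summary = aggregate(RECORDS)
for q, s in summary.items():
    print(f"{s['count']:>3}x {s['total_time']:6.1f}s  {q}")
```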
Luckily, an aggregation service gives you an easy way to perform all these functions. With an aggregation tool, the tasks of text extraction and dashboard generation are handled for you. All you need to do is to send the data through rsyslog and search for queries you’re interested in.
You now know why a log aggregation service is helpful. Now let’s look at the types you can select from.
So, what kind of aggregation services are available to you? Let’s look at a popular one:
– What it is: Loggly supports automated parsing for MySQL logs. It has preconfigured dashboards and decent search capabilities.
– Who it’s for: People who are looking to run deployments in the cloud. Loggly is ideal for teams who would love to have a ready-to-go logging tool with little setup or overhead.
Once you set up a log aggregation service, how can you fix any glitches?
While undesirable, it’s inevitable you’ll run into issues while setting up your logging configuration. To help you along this journey of log analysis, here are some common troubleshooting steps you may wish to take.
1. Check that the slow query log file exists and contains what you expect. Run cat /var/log/mysql/mysql-slow.log and verify the log file's content.
2. Ensure the log file is an appropriate size. If it's too large, you can flush or rotate the logs using a tool such as logrotate. Here's a guide to setting up logrotate for MySQL.
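The two checks above can be sketched in a few lines of Python. A temp file stands in for /var/log/mysql/mysql-slow.log, and the 100 MB threshold is an arbitrary example, not a MySQL default:

```python
import os
import tempfile

MAX_BYTES = 100 * 1024 * 1024  # arbitrary example threshold, not a MySQL default

def check_log(path, max_bytes=MAX_BYTES):
    """Report whether the slow log exists and whether it needs rotating."""
    if not os.path.exists(path):
        return "missing: is slow_query_log enabled and the path correct?"
    size = os.path.getsize(path)
    if size > max_bytes:
        return f"too large ({size} bytes): consider rotating with logrotate"
    return f"ok ({size} bytes)"

# A temp file stands in for /var/log/mysql/mysql-slow.log.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"# Query_time: 3.4 ...\n")
    path = f.name

print(check_log(path))
print(check_log("/nonexistent/mysql-slow.log"))
os.remove(path)
```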
Today you got a brief overview of what logs are and how you can use log analysis. We also went over the different types of logs in MySQL and how to use them. Also, we touched on how to troubleshoot common problems in a logging setup.
With this information, you should be ready to set up MySQL log analysis for your team. For more information about how to configure a tool like Loggly, you can refer to their guide to MySQL logging and analysis.
This post was written by Joel Lee. Joel is a software engineer who has worked with databases and cloud technologies. In his free time he enjoys learning languages like Spanish and Elixir.