Learn how to do no B*LLSH*T benchmarking
Every development team should benchmark its software to understand how it performs and what its performance envelope looks like — that is, where things start to break down. If you’re running a growing business in the cloud, you just can’t deliver good scalability, reliability, and performance without this insight. Our new white paper illustrates many useful benchmarking techniques by going hands-on with a key component of our architecture, Elasticsearch (ES). It will also give you ideas about how to get started or how to improve your existing benchmarking tools.
Here at Loggly, we spend a lot of time benchmarking our entire system because the nature of our business puts heavy demands like these on our pipeline:
- Massive streams of incoming events
- Bursts that can quadruple any customer’s typical log volume
- The burning need for “no logs left behind”
- Operational troubleshooting use cases that demand near real-time indexing and time series index management
We treat scalability, reliability, and performance as P1 product features, and benchmarking is the key to knowing where to put our development resources.
We focused the white paper on a single component of our system because that makes it much easier to illustrate what an end-to-end benchmarking process looks like. And once you figure out how hard you can push your key component, you can design the rest of the system to stay within those boundaries.
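To make that concrete, here is a minimal sketch of the kind of single-component benchmark this approach starts from: bulk-indexing synthetic log events into a throwaway index and measuring sustained throughput. This is my own illustration, not code from the white paper; it assumes a local Elasticsearch node at localhost:9200 and the `elasticsearch` Python client, and the index name, document shape, and batch size are all placeholders.

```python
import time

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

# Hypothetical target: a single-node ES cluster running locally.
es = Elasticsearch("http://localhost:9200")

INDEX = "bench-test"   # throwaway index, safe to delete between runs
DOC_COUNT = 100_000    # total synthetic log events to index
BATCH_SIZE = 5_000     # documents per bulk request

def gen_docs(n):
    """Yield synthetic, log-like documents for the bulk helper."""
    for i in range(n):
        yield {
            "_index": INDEX,
            "_source": {"message": f"sample log line {i}", "level": "INFO"},
        }

# Start from a clean slate so repeated runs are comparable.
if es.indices.exists(index=INDEX):
    es.indices.delete(index=INDEX)
es.indices.create(index=INDEX)

start = time.perf_counter()
bulk(es, gen_docs(DOC_COUNT), chunk_size=BATCH_SIZE)
es.indices.refresh(index=INDEX)  # include the refresh so timing covers visible docs
elapsed = time.perf_counter() - start

print(f"{DOC_COUNT:,} docs in {elapsed:.1f}s -> {DOC_COUNT / elapsed:,.0f} docs/s")
```

Vary one knob at a time (batch size, document size, shard count) and re-run; the point where docs/s stops improving, or the cluster starts rejecting bulk requests, marks the edge of the envelope for that configuration.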
Whether you're a developer looking to improve your benchmarking tools, a Loggly user looking for new ways to use Loggly, or just someone interested in ES, I think you'll find useful information in the white paper. While our primary aim is to give you a general approach to benchmarking, we also dive deep on some specific techniques and on ES failure modes.
Download the white paper, fill up your pint glass, and learn about:
- A time machine that will give you a look at how ES performance has changed since version 0.90.13
- A process for evolving from simple performance benchmarks to more complex test cases that are closer to your production environments (see the sketch after this list for one example)
- Ways your test bed might need to evolve as your benchmarking process does
- Key factors that can make or break your benchmarking
- How a log management solution like Loggly can make benchmarking practical and easy
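As a hypothetical illustration of that evolution (again, my own sketch, not the white paper's code), a natural next step after the simple throughput test above is to mix workloads: several concurrent bulk writers to simulate a burst, plus a thread that samples search latency while indexing is underway, since production ES is rarely doing only one thing at a time. This assumes the 8.x-style `elasticsearch` Python client; the writer count and document counts are placeholders.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

# Hypothetical setup: same throwaway index as the simple benchmark,
# now driven by several concurrent bulk writers while a sampler
# thread measures search latency under indexing load.
es = Elasticsearch("http://localhost:9200")
INDEX = "bench-test"
WRITERS = 4              # concurrent bulk-indexing threads (a "burst")
DOCS_PER_WRITER = 25_000

def writer(worker_id):
    """Bulk-index synthetic log events as fast as the cluster allows."""
    docs = (
        {"_index": INDEX, "_source": {"message": f"writer {worker_id} line {i}"}}
        for i in range(DOCS_PER_WRITER)
    )
    bulk(es, docs, chunk_size=5_000)

def main():
    if not es.indices.exists(index=INDEX):
        es.indices.create(index=INDEX)

    stop = threading.Event()
    latencies = []

    def sample_queries():
        # Run a simple match query once a second while the writers run.
        while not stop.is_set():
            t0 = time.perf_counter()
            es.search(index=INDEX, query={"match": {"message": "line"}}, size=10)
            latencies.append(time.perf_counter() - t0)
            time.sleep(1)

    sampler = threading.Thread(target=sample_queries)
    sampler.start()

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=WRITERS) as pool:
        list(pool.map(writer, range(WRITERS)))  # list() surfaces worker errors
    elapsed = time.perf_counter() - start

    stop.set()
    sampler.join()

    total = WRITERS * DOCS_PER_WRITER
    print(f"indexed {total:,} docs in {elapsed:.1f}s ({total / elapsed:,.0f} docs/s)")
    if latencies:
        print(f"worst search latency under load: {max(latencies) * 1000:.0f} ms")

main()
```

Watching how query latency degrades as you add writers is exactly the kind of envelope-mapping the white paper walks through in far more depth.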
I’m interested in hearing your feedback and ideas, so please use the Comments section below.
Here’s to blissful benchmarking!
Jon Gifford