A few words about Micro-Benchmarks
It’s been a long time since I included my “this discussion is only approximately correct” disclaimer, so I’ll just put it up front here. In the interest of space and clarity, this discussion is only approximately correct. OK, now we can move on…
I love micro-benchmarks.
Really.
I rely on micro-benchmarks to help me understand what is going on in my system on a day-to-day or even build-to-build basis. Understanding which micro-benchmarks are really vital is essential to keeping your performance under control. But you have to be careful not to ask more of them than that.
One thing you should never do is use your micro-benchmarks to drive prioritization of your performance problems. Remember, a micro-benchmark’s mission in life is to magnify some aspect of your program’s performance. Because of this, anyone can readily generate a micro-benchmark that makes any given issue seem like a major problem. Do not be fooled! Micro-benchmarks in and of themselves tell you almost nothing about what is important. What they help you do is manage and track the important things once you have discovered them with representative workloads.
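To make the “magnify one aspect” point concrete, here is a minimal sketch of the kind of micro-benchmark I have in mind. The ParseSmallInt operation and the iteration counts are purely hypothetical, not from any real product; the point is only the shape: one narrow code path, timed in a tight loop, with everything else deliberately excluded.

#include <chrono>
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical "one aspect" under the microscope: parsing a small decimal string.
static int ParseSmallInt(const char* s)
{
    int value = 0;
    while (*s >= '0' && *s <= '9')
        value = value * 10 + (*s++ - '0');
    return value;
}

int main()
{
    // Prepare inputs outside the timed region so only the parse is measured.
    std::vector<std::string> inputs;
    for (int i = 0; i < 1000; ++i)
        inputs.push_back(std::to_string(i * 37));

    const int iterations = 10'000'000;
    long checksum = 0;   // consumed below so the loop is not optimized away

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        checksum += ParseSmallInt(inputs[i % inputs.size()].c_str());
    auto stop = std::chrono::steady_clock::now();

    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count();
    printf("%.2f ns per parse (checksum %ld)\n", (double)ns / iterations, checksum);
    return 0;
}

Notice how clean that world is: warm caches, no other work competing for resources, nothing but the one operation. That is exactly what makes the number useful for tracking and exactly what makes it useless for deciding whether parsing matters at all.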
Once you have established which micro-benchmarks are key performance drivers in your system, you can use them to make progress on the large problems more quickly, but you must be careful to re-validate. Seemingly positive changes in the micro-benchmark can easily be negative under your real workload because the micro-benchmark is often much less resource constrained. For instance, you could make a change that doubles your metric at the expense of additional L2 cache usage. That might be a good thing, but it could also be a disaster: the collateral damage you would do to the rest of the program could easily erase all those gains.
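Here is a sketch of that kind of trade-off. The population-count variants below are a made-up illustration, not a real case from any codebase: variant B wins the micro-benchmark by trading computation for a 64 KB lookup table, but in the real program that table now competes for L2 cache with everyone else’s data, and the micro-benchmark will never notice.

#include <array>
#include <cstdint>
#include <cstdio>

// Variant A: compute a population count on demand; tiny cache footprint.
static int PopCountCompute(uint16_t x)
{
    int n = 0;
    while (x) { x &= (uint16_t)(x - 1); ++n; }
    return n;
}

// Variant B: answer from a 64 KB precomputed table; faster in isolation,
// but its cache lines evict other parts of the program's working set.
static std::array<uint8_t, 65536> g_popCountTable = [] {
    std::array<uint8_t, 65536> t{};
    for (int i = 0; i < 65536; ++i)
        t[i] = (uint8_t)PopCountCompute((uint16_t)i);
    return t;
}();

static int PopCountLookup(uint16_t x)
{
    return g_popCountTable[x];
}

int main()
{
    // Both variants agree on the answer; only their cache behavior differs.
    printf("%d %d\n", PopCountCompute(0xF0F0), PopCountLookup(0xF0F0));
    return 0;
}

In the micro-benchmark, where the table is the only thing in the cache, variant B looks like a clear win. Only a measurement under the representative workload can tell you whether it still is.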
So, should you use these quick-running scenarios? Yes, but certainly not exclusively. And not as the primary planning tool, but as a planning aid.