How to do “performance-based” (benchmark) unit testing?

Posted 2020-05-21 04:43

Let's say that I've got my code base to as high a degree of unit test coverage as makes sense. (Beyond a certain point, increasing coverage doesn't have a good ROI.)

Next I want to test performance: benchmark the code to make sure that new commits aren't slowing things down needlessly. I was very intrigued by Safari's zero-tolerance policy for slowdowns from commits. I'm not sure that level of commitment to speed has a good ROI for most projects, but I'd at least like to be alerted when a speed regression happens, and be able to make a judgment call about it.

The environment is Python on Linux; a suggestion that also works for Bash scripts would make me very happy. (But Python is the main focus.)

4 Answers
做个烂人
#2 · 2020-05-21 05:28

MarkR is right: doing real-world performance testing is key, and it may be somewhat dodgy to attempt in unit tests. Having said that, have a look at the cProfile module in the standard library. It will at least be useful for giving you a relative sense, from commit to commit, of how fast things are running, and you can run it within a unit test, though of course the detailed results will include the overhead of the unit-test framework itself.

All in all, though, if your objective is zero tolerance, you'll need something much more robust than this; cProfile in a unit test won't cut it at all, and may be misleading.
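For illustration, here is a minimal sketch of running cProfile inside a unittest test case; process_records() and the input size are hypothetical stand-ins for your own code:

    import cProfile
    import io
    import pstats
    import unittest


    def process_records(records):
        # Stand-in for the real code under test.
        return sorted(records)


    class TestProcessRecordsProfile(unittest.TestCase):
        def test_profile_process_records(self):
            profiler = cProfile.Profile()
            profiler.enable()
            process_records(list(range(100000, 0, -1)))
            profiler.disable()

            # Dump the top cumulative-time entries; note that these numbers
            # also include the unit-test framework's own overhead.
            stream = io.StringIO()
            pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
            print(stream.getvalue())


    if __name__ == "__main__":
        unittest.main()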

Melony?
#3 · 2020-05-21 05:31

When I do performance testing, I generally have a test suite of data inputs, and measure how long it takes the program to process each one.

You can log the performance on a daily or weekly basis, but I don't find it particularly useful to worry about performance until all the functionality is implemented.

If performance is too poor, then I break out cProfile, run it with the same data inputs, and try to see where the bottlenecks are.
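A minimal sketch of that workflow, assuming a hypothetical process() function, a testdata directory of *.dat inputs, and a made-up two-second budget:

    import cProfile
    import time
    from pathlib import Path


    def process(path):
        # Stand-in for the real program logic.
        return path.read_bytes()


    def run_suite(data_dir="testdata", budget_s=2.0):
        """Time each data input; profile any run that blows its budget."""
        for path in sorted(Path(data_dir).glob("*.dat")):
            start = time.perf_counter()
            process(path)
            elapsed = time.perf_counter() - start
            print(f"{path.name}\t{elapsed:.3f}s")  # log this daily/weekly
            if elapsed > budget_s:
                # Re-run the slow case under cProfile to find the bottleneck.
                cProfile.runctx("process(path)", globals(), {"path": path})


    if __name__ == "__main__":
        run_suite()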

神经病院院长
#4 · 2020-05-21 05:47

You will want to do performance testing at a system level if possible - test your application as a whole, in context, with data and behaviour as close to production use as possible.

This is not easy, and it will be even harder to automate it and get consistent results.

Moreover, you can't use a VM for performance testing (unless your production environment runs in VMs, and even then, you'd need to run the VM on a host with nothing else running on it).

Performance unit-testing may be valuable, but only if it is used to diagnose a problem that really exists at the system level (not just in the developer's head).

Also, the performance of units measured in isolation sometimes fails to reflect their performance in context, so it may not be useful at all.
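If you do automate a whole-application timing run, a rough sketch might look like the following; the command line and input file are hypothetical, and repeating the run and reporting min/median helps with the consistency problem mentioned above:

    import statistics
    import subprocess
    import time

    # Hypothetical command line; substitute your own app and a
    # production-like input file.
    CMD = ["python", "myapp.py", "--input", "prod_like_data.dat"]


    def time_whole_run(runs=5):
        """Run the full application several times; report wall-clock stats."""
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(CMD, check=True)
            timings.append(time.perf_counter() - start)
        print(f"min {min(timings):.3f}s  median {statistics.median(timings):.3f}s")


    if __name__ == "__main__":
        time_whole_run()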

干净又极端
#5 · 2020-05-21 05:47

While I agree that testing performance at a system level is ultimately more relevant, if you'd like to do unittest-style load testing in Python, FunkLoad (http://funkload.nuxeo.org/) does exactly that.

Micro-benchmarks have their place when you're trying to speed up a specific action in your codebase, and a follow-up performance unit test is a useful way to ensure that the action you just optimized does not unintentionally regress in future commits.
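As a rough sketch of such a regression guard (optimized_action() and the time budget are hypothetical; keep the budget generous so normal jitter doesn't fail your CI):

    import timeit
    import unittest


    def optimized_action():
        # Stand-in for the specific action you just optimized.
        sum(i * i for i in range(10000))


    class TestOptimizedActionPerformance(unittest.TestCase):
        BUDGET_S = 0.5  # hypothetical budget; tune to your hardware with headroom

        def test_action_stays_within_budget(self):
            # Best-of-three repeats reduces scheduler noise on shared machines.
            best = min(timeit.repeat(optimized_action, number=100, repeat=3))
            self.assertLess(best, self.BUDGET_S)


    if __name__ == "__main__":
        unittest.main()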
