Project Euler and other coding contests often have a maximum run time, and people boast of how fast their particular solution runs. With Python, the approaches are sometimes somewhat kludgey, e.g. adding timing code to __main__.
What is a good way to profile how long a python program takes to run?
My way is to use yappi (https://code.google.com/p/yappi/). It's especially useful combined with an RPC server where (even just for debugging) you register methods to start, stop and print profiling information, e.g. like this:
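A minimal sketch of such RPC-exposed profiling controls, assuming the third-party yappi package is installed; the class and method names here are illustrative, not part of any library:

```python
class ProfilerRPC:
    # Methods you would register on your RPC server (names are hypothetical).

    def start_profiler(self):
        import yappi                   # third-party: pip install yappi
        yappi.start()                  # begin collecting profiling data

    def stop_profiler(self):
        import yappi
        yappi.stop()

    def print_profiler(self, path="profiler.log"):
        import yappi
        with open(path, "w") as out:
            yappi.get_func_stats().print_all(out=out)    # per-function stats
            yappi.get_thread_stats().print_all(out=out)  # per-thread stats
```

The imports are deliberately deferred into the methods so the server can be started even on machines where yappi is not installed.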
Then, while your program is running, you can start the profiler at any time by calling the startProfiler RPC method, and dump the profiling information to a log file by calling printProfiler (or modify the RPC method to return it to the caller). It may not be very useful for short scripts, but it helps to optimize server-type processes, especially since the printProfiler method can be called multiple times over the life of the process to profile and compare, e.g., different program usage scenarios.

Also worth mentioning is RunSnakeRun, a GUI viewer for cProfile dumps. It allows you to sort and select, thereby zooming in on the relevant parts of the program. The sizes of the rectangles in the picture are proportional to the time taken. If you mouse over a rectangle, it highlights that call in the table and everywhere on the map. When you double-click on a rectangle, it zooms in on that portion and shows you who calls that portion and what that portion calls.
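RunSnakeRun opens standard cProfile dump files. A minimal, stdlib-only sketch of producing one (the work function and the file name are placeholders):

```python
import cProfile

def work():
    # Placeholder workload to profile.
    return sum(i * i for i in range(100_000))

prof = cProfile.Profile()
prof.enable()
work()
prof.disable()
prof.dump_stats("script.prof")  # open the dump with: runsnake script.prof
```

The same .prof file can also be inspected from the command line with the stdlib pstats module if you don't want a GUI.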
The descriptive information is very helpful. It shows you the code for that bit, which can be helpful when you are dealing with built-in library calls, and it tells you what file and what line the code is on.
Also want to point out that the OP said 'profiling', but it appears he meant 'timing'. Keep in mind that programs will run slower when profiled.
To add on to https://stackoverflow.com/a/582337/1070617: I wrote a module that allows you to use cProfile and view its output easily. More here: https://github.com/ymichael/cprofilev
Also see http://ymichael.com/2014/03/08/profiling-python-with-cprofile.html for how to make sense of the collected statistics.
It depends on what you want to see out of profiling. Simple time metrics can be given by bash's built-in time command.
Even '/usr/bin/time' can output detailed metrics by using the '--verbose' flag.
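For example (the python3 one-liner is a stand-in for your own script, and /usr/bin/time --verbose is GNU time, which may not be available on every system):

```shell
# Coarse wall/user/sys timing with the shell built-in:
time python3 -c 'print(sum(i * i for i in range(10**6)))'

# Detailed metrics (max RSS, page faults, context switches) with GNU time:
if command -v /usr/bin/time >/dev/null; then
    /usr/bin/time --verbose python3 -c 'pass' || true
fi
```

Note that the shell built-in time and the /usr/bin/time binary are different programs with different output formats.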
To check the time taken by each function and to better understand how much time is spent where, you can use the built-in cProfile module in Python.
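A minimal, self-contained example (the slow_sum function is just a placeholder workload):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive loop so it shows up in the profile.
    total = 0
    for i in range(n):
        total += i * i
    return total

prof = cProfile.Profile()
prof.enable()
slow_sum(200_000)
prof.disable()

# Print the five most expensive entries, sorted by cumulative time.
buf = io.StringIO()
pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

For a whole script, the equivalent one-liner is python3 -m cProfile -s cumulative yourscript.py.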
Going into more detailed performance metrics, time is not the only thing to measure: you may also care about memory, threads, etc.
Profiling options:
1. line_profiler is another profiler, commonly used to get line-by-line timing metrics.
2. memory_profiler is a tool to profile memory usage.
3. heapy (from the Guppy project) profiles how objects in the heap are used.
These are some of the common ones I tend to use. If you want to find out more, try reading this book. It is a pretty good book for starting out with performance in mind, and you can move on to advanced topics such as Cython and JIT (just-in-time) compiled Python.
I recently created tuna for visualizing Python runtime and import profiles; this may be helpful here.
Install it with pip, create a runtime profile with cProfile, or an import profile (Python 3.7+ required), and then run tuna on the resulting file.
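A sketch of the commands those steps refer to, following tuna's documented workflow (yourfile.py is a placeholder for your own script):

```shell
pip install tuna

# Runtime profile: dump cProfile stats, then visualize them.
python3 -m cProfile -o program.prof yourfile.py
tuna program.prof

# Import profile (Python 3.7+): log import times, then visualize them.
python3 -X importtime yourfile.py 2> import.log
tuna import.log
```

tuna opens the visualization in your browser.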