How to measure ISR execution time?

Posted 2020-07-29 23:48

Question:

I am on Linux kernel 2.6.32. I am facing an issue in which one of two ISRs (serial and Ethernet) occasionally takes much longer than expected (hundreds of microseconds) under some scenarios I have not yet identified. I would like to get the execution time every time the ISR runs.

What would be the best way to do this (least expensive in terms of overhead)? The ARM architecture does not seem to have a TSC register (a read_tsc-style API) that would give me direct access to a timestamp the way some other architectures do.

So the idea is:

1) The moment the ISR is invoked, measure the time.
2) The moment the ISR completes, measure the time again.
3) Take the difference of 1 and 2 and store it in a variable.
4) Keep repeating steps 1 to 3, and whenever the value from step 3 is greater than the stored value, overwrite it (i.e. preserve the maximum latency). When the issue happens (some abrupt condition), print that value, or an array of the last 10 values.

I need to do this in a kernel driver, so let me know what the least expensive way would be. A sketch of what I have in mind follows.
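For illustration only, a minimal sketch of steps 1 to 4 using ktime_get(), which is available in 2.6.32; my_isr, my_real_isr_body, and the variable names are hypothetical placeholders:

    #include <linux/ktime.h>
    #include <linux/interrupt.h>

    static s64 max_isr_ns;      /* worst-case latency seen so far */
    static s64 last_ns[10];     /* ring of the last 10 measurements */
    static unsigned int last_idx;

    static irqreturn_t my_real_isr_body(int irq, void *dev_id)
    {
        /* ... the device-specific interrupt handling would go here ... */
        return IRQ_HANDLED;
    }

    static irqreturn_t my_isr(int irq, void *dev_id)
    {
        ktime_t start = ktime_get();                     /* step 1: timestamp on entry */
        irqreturn_t ret = my_real_isr_body(irq, dev_id); /* the actual ISR work */
        s64 delta = ktime_to_ns(ktime_sub(ktime_get(), start)); /* steps 2 and 3 */

        last_ns[last_idx] = delta;        /* keep the last 10 samples */
        last_idx = (last_idx + 1) % 10;
        if (delta > max_isr_ns)           /* step 4: preserve the maximum */
            max_isr_ns = delta;
        return ret;
    }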

Answer 1:

The OMAP3 has a Cortex-A8 core, which does have a Performance Monitor Unit (PMU). Its Cycle Counter (CCNT) corresponds to the x86 TSC, except that you probably have to enable counting before you read it. There is good info in a BeagleBoard post.
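As a rough sketch only (kernel mode assumed; register encodings per the ARMv7 CP15 PMU description), enabling and reading CCNT could look like this:

    #include <linux/types.h>

    /* Enable the cycle counter: run once, e.g. at driver init. */
    static inline void ccnt_enable(void)
    {
        u32 pmcr;

        /* PMCR: set E (enable all counters) and C (reset cycle counter) */
        asm volatile("mrc p15, 0, %0, c9, c12, 0" : "=r"(pmcr));
        pmcr |= (1u << 0) | (1u << 2);
        asm volatile("mcr p15, 0, %0, c9, c12, 0" : : "r"(pmcr));

        /* CNTENSET: bit 31 enables the cycle counter (CCNT) */
        asm volatile("mcr p15, 0, %0, c9, c12, 1" : : "r"(1u << 31));
    }

    /* Read the current cycle count; wraps as a 32-bit value. */
    static inline u32 ccnt_read(void)
    {
        u32 cycles;

        asm volatile("mrc p15, 0, %0, c9, c13, 0" : "=r"(cycles));
        return cycles;
    }

With that enabled, the timestamps in the question's steps 1 and 2 become two ccnt_read() calls, and the difference divided by the core clock rate gives the latency.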

In 2.6.32.55 I see that arch/arm/oprofile/op_model_v7.c gives full access and control. My need was bare-metal; I used ARM example code that was simple and worked for me.

It would also be possible to use an OMAP3 general-purpose timer (GPT), but that would be more work, e.g. getting its clock input set up from the PRCM.
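If you go that route, a sketch using the plat-omap dmtimer helpers from 2.6.32-era trees might look like the following; note the header path varies between trees, and error handling is omitted:

    #include <plat/dmtimer.h>   /* <mach/dmtimer.h> on some 2.6.32 trees */

    static struct omap_dm_timer *gpt;

    /* Request a free GPT, clock it from the system clock, start it. */
    static int gpt_init(void)
    {
        gpt = omap_dm_timer_request();
        if (!gpt)
            return -ENODEV;
        omap_dm_timer_set_source(gpt, OMAP_TIMER_SRC_SYS_CLK);
        omap_dm_timer_start(gpt);
        return 0;
    }

    /* Free-running counter value; convert to time using the source clock rate. */
    static u32 gpt_read(void)
    {
        return omap_dm_timer_read_counter(gpt);
    }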