I am monitoring some machines using WMI, via .NET's System.Management classes. The query I am using is this:
SELECT Timestamp_Sys100NS, PercentProcessorTime
FROM Win32_PerfRawData_PerfOS_Processor
WHERE Name='_Total'
From that I compute the CPU usage % using the well-known formula:
double cpu_usage = (1 - (double)delta_cpu / delta_time) * 100;
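Here delta_cpu and delta_time are simply the differences of the two raw counters between two consecutive samples. A minimal sketch of taking one sample, assuming System.Management (names are illustrative, not my exact code):

using System.Management;

// Read one raw sample of the two counters used in the formula above.
static void Sample(out ulong cpu, out ulong time)
{
    var searcher = new ManagementObjectSearcher(
        "SELECT Timestamp_Sys100NS, PercentProcessorTime " +
        "FROM Win32_PerfRawData_PerfOS_Processor WHERE Name='_Total'");
    foreach (ManagementObject mo in searcher.Get())
    {
        cpu = (ulong)mo["PercentProcessorTime"]; // inverse (idle-type) counter, 100ns units
        time = (ulong)mo["Timestamp_Sys100NS"];  // timestamp, 100ns units
        return;
    }
    throw new System.InvalidOperationException("no _Total instance found");
}

Two such samples taken some interval apart give delta_cpu = cpu2 - cpu1 and delta_time = time2 - time1.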
It works very well on every machine but one (so far).
The problem is that on one machine, a Windows 2003 server (with hyper-threading enabled, if it matters), I sometimes get negative CPU usage values. In other words, the (double)delta_cpu / delta_time expression yields a number > 1. I searched the web for hints as to why this could be happening, but found nothing.
Is this specific to Windows 2003 server? Is it a hyper-threading related problem? Or is it simply expected, and should I just clamp the CPU usage value (or the delta_cpu value) into some range?
EDIT:
The second weird thing I am observing on this one machine is that the Timestamp_Sys100NS value does not look like a FILETIME date (ticks since the epoch, January 1, 1601) but instead like ticks since boot time.
EDIT 2: I have now verified that this problem occurs across a lot of Windows 2003 servers. And I am apparently not the only one with this problem.
EDIT 3:
I have solved the timestamp issue by querying LastBootUpTime from Win32_OperatingSystem and adding it to Timestamp_Sys100NS whenever the value of Timestamp_Sys100NS is too far in the past. That seems to give the correct date and time. The code manipulating the date after it is retrieved from Win32_OperatingSystem looks like this:
// date_str is the DMTF datetime string from Win32_OperatingSystem.LastBootUpTime;
// epoch is presumably new DateTime(1601, 1, 1), the FILETIME epoch.
WbemScripting.SWbemDateTime swbem_time = new WbemScripting.SWbemDateTime();
swbem_time.Value = date_str;
// GetFileTime(true) returns 100ns ticks since the FILETIME epoch, as a string.
string time_as_file_time_str = swbem_time.GetFileTime(true);
return new DateTimeOffset(epoch.Ticks + long.Parse(time_as_file_time_str),
                          swbem_time.UTCSpecified
                              ? TimeSpan.FromMinutes(swbem_time.UTC)
                              : TimeSpan.Zero);
...then adjust to UTC...
boot_time = boot_time.UtcDateTime;
...then boot_time is simply added to the timestamp (current) returned by WMI in the Timestamp_Sys100NS field...
if (time.Year < 2000)
    time = boot_time + current;
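For reference, the boot-time lookup itself can also be done without the WbemScripting interop, using only System.Management; a sketch (variable names are illustrative):

using System;
using System.Management;

// Query Win32_OperatingSystem.LastBootUpTime and convert the DMTF
// datetime string with the built-in converter.
static DateTime GetBootTimeUtc()
{
    var searcher = new ManagementObjectSearcher(
        "SELECT LastBootUpTime FROM Win32_OperatingSystem");
    foreach (ManagementObject mo in searcher.Get())
        return ManagementDateTimeConverter
            .ToDateTime((string)mo["LastBootUpTime"])
            .ToUniversalTime();
    throw new InvalidOperationException("Win32_OperatingSystem not found");
}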
EDIT 4:
It appears that there are 3 classes of systems with respect to Timestamp_Sys100NS:
- First are Vista+ systems, where Timestamp_Sys100NS is the time in ticks since the epoch, in UTC.
- Second are some Windows 2003 systems, where Timestamp_Sys100NS needs to be added to Win32_OperatingSystem.LastBootUpTime to get a reasonable time.
- Third are systems where doing the above addition still results in a date days off from the right date and time.
EDIT 5: Some of the affected machines might have been VMs but not all of them.
It sounds like a standard "time synchronization" issue to me.
Your system's clock is... a clock. In your case, your clock may be running fast (perhaps it completes a minute in 99% of the actual time), so when your computer syncs with an external clock (such as through the Windows Time service) your system time will jump backwards.
Alternatively, the user may manually adjust the system time (e.g. via the Date and Time control panel), so it's something you should design for (your users would be very unhappy if setting their system's time crashed your app!).
The way I solve this is by clamping: always require at least 0.0 seconds of "real time" to pass, but also clamp to a maximum of 0.5 seconds, as a time adjustment may leap forwards, not only backwards.
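A minimal sketch of that clamping, assuming the deltas are expressed in seconds (names and bounds are illustrative):

using System;

static class Clamps
{
    // Require at least 0.0 s of real time to pass, but cap at 0.5 s,
    // since clock adjustments may leap forwards as well as backwards.
    public static double ClampDelta(double seconds)
    {
        return Math.Max(0.0, Math.Min(0.5, seconds));
    }

    // Likewise, keep the computed usage within a sane range.
    public static double ClampUsage(double percent)
    {
        return Math.Max(0.0, Math.Min(100.0, percent));
    }
}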
I hope that helps.
Try using the WMI Code Creator application on that machine and see if you get the right readings: http://www.microsoft.com/download/en/details.aspx?id=8572
The formula you are specifically talking about is PERF_100NSEC_TIMER_INV: http://msdn.microsoft.com/en-us/library/ms803963.aspx
I have not personally dealt with this problem because I have never seen values below zero.
This is all I have been doing; in essence it is the sketch below, which reconstructs the PERF_100NSEC_TIMER_INV calculation (class and member names are illustrative rather than the exact original code):
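using System;
using System.Management;

class CpuUsageMonitor
{
    private ulong prev_cpu;
    private ulong prev_time;
    private bool has_prev;

    // Returns CPU usage in percent by applying the PERF_100NSEC_TIMER_INV
    // formula, (1 - delta_cpu / delta_time) * 100, to two consecutive raw samples.
    public double NextValue()
    {
        var searcher = new ManagementObjectSearcher(
            "SELECT Timestamp_Sys100NS, PercentProcessorTime " +
            "FROM Win32_PerfRawData_PerfOS_Processor WHERE Name='_Total'");
        foreach (ManagementObject mo in searcher.Get())
        {
            ulong cpu = (ulong)mo["PercentProcessorTime"];
            ulong time = (ulong)mo["Timestamp_Sys100NS"];

            double usage = 0.0;
            if (has_prev && time != prev_time)
                usage = (1.0 - (double)(cpu - prev_cpu) / (time - prev_time)) * 100.0;

            prev_cpu = cpu;
            prev_time = time;
            has_prev = true;
            return usage;   // first call returns 0.0 (no previous sample yet)
        }
        throw new InvalidOperationException("no _Total instance found");
    }
}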
Usage:
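// Hypothetical usage: poll once per second and print the usage.
var monitor = new CpuUsageMonitor();
while (true)
{
    Console.WriteLine("{0:F1} %", monitor.NextValue());
    System.Threading.Thread.Sleep(1000);
}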
If I watch the CPU usage in Task Manager on the 2003 box, it matches fine. I think you can disregard anything less than zero, as long as you are calculating with these values.