Question:
On Windows, clock() returns the time in milliseconds, but on this Linux box I'm working on, it rounds it to the nearest 1000, so the precision is only to the second level and not to the millisecond level.
I found a solution with Qt: using the QTime class, instantiating an object and calling start() on it, then calling elapsed() to get the number of milliseconds elapsed.
I got kind of lucky because I'm working with Qt to begin with, but I'd like a solution that doesn't rely on third-party libraries.
Is there no standard way to do this?
UPDATE
Please don't recommend Boost...
If Boost and Qt can do it, surely it's not magic, there must be something standard that they're using!
Answer 1:
You could use gettimeofday at the start and end of your method and then take the difference of the two returned structs. You'll get a structure like the following:
struct timeval {
    time_t      tv_sec;  /* seconds */
    suseconds_t tv_usec; /* microseconds */
};
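A minimal sketch of that differencing (the helper name elapsed_ms is mine, not part of the answer):
#include <sys/time.h>

// Hypothetical helper: milliseconds elapsed between two gettimeofday() samples.
long elapsed_ms(struct timeval start, struct timeval end)
{
    long seconds  = end.tv_sec  - start.tv_sec;
    long useconds = end.tv_usec - start.tv_usec; // may be negative; the sum below still comes out right
    return seconds * 1000 + useconds / 1000;
}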
Answer 2:
#include <sys/time.h>
#include <stdio.h>
#include <unistd.h>

int main()
{
    struct timeval start, end;
    long mtime, seconds, useconds;

    gettimeofday(&start, NULL);
    usleep(2000);
    gettimeofday(&end, NULL);

    seconds  = end.tv_sec  - start.tv_sec;
    useconds = end.tv_usec - start.tv_usec;

    /* the + 0.5 rounds to the nearest millisecond */
    mtime = ((seconds) * 1000 + useconds / 1000.0) + 0.5;

    printf("Elapsed time: %ld milliseconds\n", mtime);

    return 0;
}
Answer 3:
Please note that clock does not measure wall clock time. That means if your program takes 5 seconds, clock will not necessarily measure 5 seconds, but could measure more (your program could run multiple threads and so could consume more CPU than real time) or less. It measures an approximation of the CPU time used. To see the difference, consider this code:
#include <iostream>
#include <ctime>
#include <unistd.h>

int main() {
    std::clock_t a = std::clock();
    sleep(5); // sleep 5s
    std::clock_t b = std::clock();
    std::cout << "difference: " << (b - a) << std::endl;
    return 0;
}
On my system it outputs:
difference: 0
Because all we did was sleep, without using any CPU time! However, using gettimeofday we get what we want (?):
#include <iostream>
#include <ctime>
#include <unistd.h>
#include <sys/time.h>

int main() {
    timeval a;
    timeval b;
    gettimeofday(&a, 0);
    sleep(5); // sleep 5s
    gettimeofday(&b, 0);
    std::cout << "difference: " << (b.tv_sec - a.tv_sec) << std::endl;
    return 0;
}
On my system it outputs:
difference: 5
If you need more precision but want to get CPU time, then you can consider using the getrusage function.
Answer 4:
I also recommend the tools offered by Boost: either the mentioned Boost Timer, or hack something out of Boost.DateTime, or there is a newly proposed library in the sandbox, Boost.Chrono. This last one will be a replacement for the Timer and will feature:
- The C++0x Standard Library's time utilities, including:
  - Class template duration
  - Class template time_point
  - Clocks: system_clock, monotonic_clock, high_resolution_clock
- Class template timer, with typedefs:
  - system_timer
  - monotonic_timer
  - high_resolution_timer
- Process clocks and timers:
  - process_clock, capturing real, user-CPU, and system-CPU times.
  - process_timer, capturing elapsed real, user-CPU, and system-CPU times.
  - run_timer, convenient reporting of process_timer results.
- The C++0x Standard Library's compile-time rational arithmetic.
Here is the source of the feature list.
Answer 5:
I've written a Timer class based on CTT's answer. It can be used in the following way:
Timer timer;
timer.start();
/* perform task */
double duration = timer.stop();
timer.printTime(duration);
Here is its implementation:
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

using namespace std;

class Timer {
private:
    timeval startTime;
public:
    void start() {
        gettimeofday(&startTime, NULL);
    }

    double stop() {
        timeval endTime;
        long seconds, useconds;
        double duration;

        gettimeofday(&endTime, NULL);

        seconds  = endTime.tv_sec  - startTime.tv_sec;
        useconds = endTime.tv_usec - startTime.tv_usec;

        duration = seconds + useconds / 1000000.0;

        return duration;
    }

    static void printTime(double duration) {
        printf("%5.6f seconds\n", duration);
    }
};
Answer 6:
If you don't need the code to be portable to old unices, you can use clock_gettime(), which will give you the time in nanoseconds (if your processor supports that resolution). It's POSIX, but from 2001.
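A minimal sketch of interval timing with clock_gettime(), assuming CLOCK_MONOTONIC is available (it usually is on Linux):
#include <time.h>
#include <stdio.h>
#include <unistd.h>

int main()
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    usleep(2000); // stand-in for the work being timed
    clock_gettime(CLOCK_MONOTONIC, &end);

    // Convert the two timespecs into a millisecond difference.
    long ms = (end.tv_sec - start.tv_sec) * 1000
            + (end.tv_nsec - start.tv_nsec) / 1000000;
    printf("Elapsed: %ld ms\n", ms);
    return 0;
}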
Answer 7:
clock() often has a pretty lousy resolution. If you want to measure time at the millisecond level, one alternative is to use clock_gettime(), as explained in this question.
(Remember that you need to link with -lrt on Linux.)
Answer 8:
With C++11 and std::chrono::high_resolution_clock you can do this:
#include <iostream>
#include <chrono>
#include <thread>

typedef std::chrono::high_resolution_clock Clock;

int main()
{
    std::chrono::milliseconds three_milliseconds{3};

    auto t1 = Clock::now();
    std::this_thread::sleep_for(three_milliseconds);
    auto t2 = Clock::now();

    std::cout << "Delta t2-t1: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count()
              << " milliseconds" << std::endl;
}
Output:
Delta t2-t1: 3 milliseconds
Link to demo: http://cpp.sh/2zdtu
Answer 9:
clock() doesn't return milliseconds or seconds on Linux. Usually clock() returns microseconds on a Linux system. The proper way to interpret the value returned by clock() is to divide it by CLOCKS_PER_SEC to figure out how much time has passed.
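A minimal sketch of that interpretation (the busy loop is deliberate, since clock() measures CPU time rather than wall time):
#include <stdio.h>
#include <time.h>

int main()
{
    clock_t start = clock();
    for (volatile long i = 0; i < 100000000L; i++)
        ; // burn some CPU time so there is something to measure
    clock_t end = clock();

    // Divide by CLOCKS_PER_SEC to get seconds; scale by 1000 for milliseconds.
    double ms = 1000.0 * (end - start) / CLOCKS_PER_SEC;
    printf("CPU time used: %.1f ms\n", ms);
    return 0;
}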
Answer 10:
This should work... tested on a Mac...
#include <stdio.h>
#include <time.h>
#include <sys/time.h>

int main() {
    struct timeval tv;
    struct timezone tz;
    struct tm *tm;

    gettimeofday(&tv, &tz);
    tm = localtime(&tv.tv_sec);
    /* tv_usec is a long, so print it with %ld */
    printf("StartTime: %d:%02d:%02d %ld \n",
           tm->tm_hour, tm->tm_min, tm->tm_sec, (long)tv.tv_usec);
    return 0;
}
Yeah...run it twice and subtract...
Answer 11:
In the POSIX standard, clock has its return value defined in terms of the CLOCKS_PER_SEC symbol, and an implementation is free to define this in any convenient fashion. Under Linux, I have had good luck with the times() function.
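A minimal sketch of wall-clock timing with times(); the tick rate comes from sysconf(_SC_CLK_TCK), and the resolution is limited to that rate (often 100 Hz):
#include <sys/times.h>
#include <unistd.h>
#include <stdio.h>

int main()
{
    struct tms t;
    long ticks_per_sec = sysconf(_SC_CLK_TCK);

    // times() returns elapsed real time in clock ticks since an arbitrary point.
    clock_t start = times(&t);
    sleep(1); // stand-in for the work being timed
    clock_t end = times(&t);

    printf("Elapsed: %ld ms\n",
           (long)(end - start) * 1000 / ticks_per_sec);
    return 0;
}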
Answer 12:
gettimeofday - the problem is that it can return lower values if you change your hardware clock (with NTP, for example).
Boost - not available for this project.
clock() - usually returns a 4-byte integer, which means it has low capacity, and after some time it returns negative numbers.
I prefer to create my own class and update it every 10 milliseconds; this way it is more flexible, and I can even improve it to have subscribers.
class MyAlarm {
    static int64_t tiempo;
    static bool running;
public:
    static int64_t getTime() { return tiempo; }
    static void callback(int sig) {
        if (running) {
            tiempo += 10L;
        }
    }
    static void run() { running = true; }
};

int64_t MyAlarm::tiempo = 0L;
bool MyAlarm::running = false;
To refresh it, I use setitimer:
int main() {
    struct sigaction sa;
    struct itimerval timer;

    MyAlarm::run();
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = &MyAlarm::callback;
    sigaction(SIGALRM, &sa, NULL);

    timer.it_value.tv_sec = 0;
    timer.it_value.tv_usec = 10000;
    timer.it_interval.tv_sec = 0;
    timer.it_interval.tv_usec = 10000;
    setitimer(ITIMER_REAL, &timer, NULL);
    .....
Look at setitimer and the ITIMER_VIRTUAL and ITIMER_REAL timers.
Don't use the alarm or ualarm functions; you will have low precision when your process is under heavy load.
Answer 13:
I prefer the Boost Timer library for its simplicity, but if you don't want to use third-party libraries, using clock() seems reasonable.
Answer 14:
As an update, it appears that on Windows clock() measures wall clock time (with CLOCKS_PER_SEC precision)
http://msdn.microsoft.com/en-us/library/4e2ess30(VS.71).aspx
while on Linux it measures CPU time across the cores used by the current process
http://www.manpagez.com/man/3/clock
and (it appears, and as noted by the original poster) actually with less precision than CLOCKS_PER_SEC, though maybe this depends on the specific version of Linux.
Answer 15:
I like the Hola Soy method of not using gettimeofday().
It happened to me on a running server that the admin changed the timezone. The clock was updated to show the same (correct) local value.
This caused the time() and gettimeofday() functions to shift 2 hours, and all timestamps in some services got stuck.
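If C++11 is available, std::chrono::steady_clock is a standard clock that is immune to such adjustments; a minimal sketch:
#include <iostream>
#include <chrono>
#include <thread>

int main()
{
    // steady_clock is monotonic: it never jumps when the system time or timezone changes.
    auto t1 = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(std::chrono::milliseconds(50)); // stand-in for the work being timed
    auto t2 = std::chrono::steady_clock::now();

    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count()
              << " ms\n";
    return 0;
}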
Answer 16:
I wrote a C++ class using timeb.
#include <sys/timeb.h>

class msTimer
{
public:
    msTimer();
    void restart();
    float elapsedMs();
private:
    timeb t_start;
};
Member functions:
msTimer::msTimer()
{
    restart();
}

void msTimer::restart()
{
    ftime(&t_start);
}

float msTimer::elapsedMs()
{
    timeb t_now;
    ftime(&t_now);
    return (float)(t_now.time - t_start.time) * 1000.0f +
           (float)(t_now.millitm - t_start.millitm);
}
Example of use:
#include <cstdlib>
#include <iostream>

using namespace std;

int main(int argc, char** argv)
{
    msTimer t;
    for (int i = 0; i < 5000000; i++)
        ;
    std::cout << t.elapsedMs() << endl;
    return 0;
}
Output on my computer is '19'.
Accuracy of the msTimer class is of the order of milliseconds. In the usage example above, the total execution time of the for-loop is tracked. This time includes the operating system switching the execution context of main() in and out due to multitasking.