I'm on a 64-bit Ubuntu 12.04 system and tried the following code:
#include <unistd.h>
#include <time.h>
#include <stdio.h>

int
main(void)
{
    struct timespec user1, user2;
    struct timespec sys1, sys2;
    double user_elapsed;
    double sys_elapsed;

    /* Wall-clock time and per-process CPU time before the sleep. */
    clock_gettime(CLOCK_REALTIME, &user1);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &sys1);

    sleep(10);

    /* Both clocks again after the sleep. */
    clock_gettime(CLOCK_REALTIME, &user2);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &sys2);

    user_elapsed = user2.tv_sec + user2.tv_nsec / 1E9;
    user_elapsed -= user1.tv_sec + user1.tv_nsec / 1E9;
    printf("CLOCK_REALTIME: %f\n", user_elapsed);

    sys_elapsed = sys2.tv_sec + sys2.tv_nsec / 1E9;
    sys_elapsed -= sys1.tv_sec + sys1.tv_nsec / 1E9;
    printf("CLOCK_PROCESS_CPUTIME_ID: %f\n", sys_elapsed);

    return 0;
}
As I understand it, this should print something like
CLOCK_REALTIME: 10.000117
CLOCK_PROCESS_CPUTIME_ID: 10.001
But in my case, what I get is
CLOCK_REALTIME: 10.000117
CLOCK_PROCESS_CPUTIME_ID: 0.000032
Is this the correct behaviour? If so, how can I determine the actual seconds of sys1 and sys2?
When I change CLOCK_PROCESS_CPUTIME_ID to CLOCK_REALTIME, I get the expected result, but that's not what I want because we need the precision.
[EDIT] Apparently CLOCK_PROCESS_CPUTIME_ID returns the actual time the CPU spent on processing, and sleep() consumes almost no CPU time, which would explain the tiny value. CLOCK_MONOTONIC seems to return the right value. But at what precision?
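The resolution can be queried directly with clock_getres(); a minimal sketch, assuming a POSIX system (on glibc before 2.17 you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int
main(void)
{
    struct timespec res;

    /* Ask the system for the resolution of CLOCK_MONOTONIC. */
    if (clock_getres(CLOCK_MONOTONIC, &res) == -1) {
        perror("clock_getres");
        return 1;
    }
    /* On typical Linux systems this reports 0 s 1 ns. */
    printf("CLOCK_MONOTONIC resolution: %ld s %ld ns\n",
           (long)res.tv_sec, res.tv_nsec);
    return 0;
}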
Running time here means elapsed time, if I don't misunderstand. Normally, CLOCK_REALTIME is good for that, but if the system time is changed while the program is running, CLOCK_REALTIME's notion of elapsed time changes too. To prevent that, unlikely as it may be, I suggest using CLOCK_MONOTONIC or, if present, CLOCK_MONOTONIC_RAW. According to the man page, CLOCK_MONOTONIC may be influenced by NTP adjustments, while CLOCK_MONOTONIC_RAW isn't. All these clocks typically have a resolution of one nanosecond (check that with clock_getres()), but for your purposes a resolution below one microsecond would suffice. To calculate elapsed time in microseconds:
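A minimal sketch; the elapsed_us helper is illustrative, not a standard function, and on glibc before 2.17 you may need to link with -lrt:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Elapsed time between two timespecs, in microseconds.
   The sub-microsecond remainder is truncated. */
static long long
elapsed_us(const struct timespec *start, const struct timespec *end)
{
    return (end->tv_sec - start->tv_sec) * 1000000LL
         + (end->tv_nsec - start->tv_nsec) / 1000;
}

int
main(void)
{
    struct timespec t1, t2;

    clock_gettime(CLOCK_MONOTONIC, &t1);
    sleep(10);                       /* the work being measured */
    clock_gettime(CLOCK_MONOTONIC, &t2);

    printf("elapsed: %lld us\n", elapsed_us(&t1, &t2));
    return 0;
}

Unlike the double arithmetic in your original code, the integer subtraction keeps full nanosecond precision before rounding down to microseconds.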