I'm trying to compare GPU to CPU performance. For the NVIDIA GPU I've been using the cudaEvent_t type to get very precise timings.
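For reference, here's roughly the event-based pattern I've been using on the GPU side (busyKernel and the problem size are just stand-ins for my real workload):

#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel standing in for the real workload being timed.
__global__ void busyKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;
}

int main() {
    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);               // enqueue start marker
    busyKernel<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaEventRecord(stop, 0);                // enqueue stop marker
    cudaEventSynchronize(stop);              // wait until the stop event completes

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  // elapsed time in milliseconds
    printf("GPU time: %f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_data);
    return 0;
}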
For the CPU I've been using the following code:
#include <time.h>   // clock(), clock_t, CLOCKS_PER_SEC

// Timers
clock_t start, stop;
float elapsedTime = 0;

// Capture the start time
start = clock();

// Do something here
.......

// Capture the stop time
stop = clock();

// Retrieve time elapsed in milliseconds
elapsedTime = (float)(stop - start) / (float)CLOCKS_PER_SEC * 1000.0f;
Apparently, that piece of code is only good if you're counting in seconds. Also, the results sometimes come out quite strange.
Does anyone know of some way to create a high resolution timer in Linux?
Check out clock_gettime, which is a POSIX interface to high-resolution timers.

If, having read the manpage, you're left wondering about the difference between CLOCK_REALTIME and CLOCK_MONOTONIC, see Difference between CLOCK_REALTIME and CLOCK_MONOTONIC?

See the following page for a complete example: http://www.guyrutenberg.com/2007/09/22/profiling-code-using-clock_gettime/
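To make that concrete, here's a minimal sketch of the same start/stop pattern using clock_gettime with CLOCK_MONOTONIC (doWork is just a placeholder for whatever you're measuring):

#define _POSIX_C_SOURCE 199309L  // expose clock_gettime under strict C standards
#include <stdio.h>
#include <time.h>

/* Placeholder for the work being timed. */
static void doWork(void) {
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 100000000UL; ++i)
        sum += i;
}

int main(void) {
    struct timespec start, stop;

    /* CLOCK_MONOTONIC is unaffected by system clock adjustments,
       which makes it the right choice for measuring intervals. */
    clock_gettime(CLOCK_MONOTONIC, &start);
    doWork();
    clock_gettime(CLOCK_MONOTONIC, &stop);

    /* Convert the two timespecs to elapsed milliseconds. */
    double elapsedMs = (stop.tv_sec - start.tv_sec) * 1000.0
                     + (stop.tv_nsec - start.tv_nsec) / 1e6;
    printf("Elapsed: %f ms\n", elapsedMs);
    return 0;
}

On older glibc versions you may need to link with -lrt, and you can call clock_getres(CLOCK_MONOTONIC, &ts) to check the timer's actual resolution on your system.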