I am getting acquainted with the MicroC/OS-II kernel and multitasking. I have programmed the following two tasks, which use semaphores:
#define TASK1_PRIORITY 6 // highest priority
#define TASK2_PRIORITY 7

void task1(void* pdata)
{
    while (1)
    {
        INT8U err;

        OSSemPend(aSemaphore_task1, 0, &err);   /* wait for task2's signal */
        if (sharedAddress >= 0)
        {
            printText(text1);
            printDigit(++sharedAddress);
        }
        else
        {
            printText(text2);
            printDigit(sharedAddress);
        }
        OSTimeDlyHMSM(0, 0, 0, 11);             /* delay 11 ms */
        OSSemPost(aSemaphore_task2);            /* signal task2 */
    }
}

void task2(void* pdata)
{
    while (1)
    {
        INT8U err;

        OSSemPend(aSemaphore_task2, 0, &err);   /* wait for task1's signal */
        sharedAddress *= -1;
        OSTimeDlyHMSM(0, 0, 0, 4);              /* delay 4 ms */
        OSSemPost(aSemaphore_task1);            /* signal task1 */
    }
}
Now I want to measure the context switch time, i.e., the time it takes for the processor to switch between these two tasks.
Is this done by just calling a function timer() like this:
void task1(void* pdata)
{
    while (1)
    {
        INT8U err;

        OSSemPend(aSemaphore_task1, 0, &err);
        if (sharedAddress >= 0)
        {
            printText(text1);
            printDigit(++sharedAddress);
        }
        else
        {
            printText(text2);
            printDigit(sharedAddress);
        }
        OSTimeDlyHMSM(0, 0, 0, 11);
        OSSemPost(aSemaphore_task2);
        timer(start);                           /* start timing just before the switch */
    }
}

void task2(void* pdata)
{
    while (1)
    {
        INT8U err;

        timer(stop);                            /* stop timing once task2 runs */
        OSSemPend(aSemaphore_task2, 0, &err);
        sharedAddress *= -1;
        OSTimeDlyHMSM(0, 0, 0, 4);
        OSSemPost(aSemaphore_task1);
    }
}
or have I gotten this completely wrong?
For performance measurements, the standard approach is to first calibrate your tools. In this case, that means your timer (or the suggested clock, if you are using C++).
To calibrate it, call it many times (e.g., 1000) and compute how long each call takes on average. Now you know the cost of measuring the time itself. Here, that cost is likely to be in a similar range (at best) to the thing you are trying to measure - the context switch - so it needs to be subtracted from your raw results.
So the calibration is important; a sketch follows below.
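As a minimal sketch of that calibration, assuming a hypothetical timer_now() that reads a free-running hardware counter in ticks (substitute whatever timer source your target actually provides):

#include <stdint.h>

/* Hypothetical: timer_now() reads a free-running hardware counter.
 * Replace it with the timer read function your board provides. */
extern uint32_t timer_now(void);

#define CALIBRATION_RUNS 1000

/* Average cost, in timer ticks, of a single timer read. */
uint32_t calibrate_timer(void)
{
    uint32_t start;
    uint32_t end;
    int i;

    start = timer_now();
    for (i = 0; i < CALIBRATION_RUNS; i++)
    {
        (void)timer_now();      /* the call whose cost we are measuring */
    }
    end = timer_now();

    return (end - start) / CALIBRATION_RUNS;   /* average ticks per call */
}

Once you know that per-call overhead, subtract it from the raw start/stop difference whenever you measure the context switch.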
Let us know how you go.