I recently tried Sysdig for a project.
My main goal is to catch syscalls from the kernel, and Sysdig seems to be a good option. I have already run some tests with different techniques and compared the overhead of each one.
I come here with a simple test: a C program which just opens, writes to, and then closes a file 100,000 times.
#include "stdio.h"
#include <unistd.h>
#include <stdlib.h>
#include <fcntl.h>
#include <string.h>
int main(void)
{
int i;
char puffer[100];
FILE * handle;
for (i=0; i<100000; i++) {
strcpy (puffer, "Sysdig c'est trop fort !\n");
handle = fopen("test.txt", "w");
fputs(puffer, handle);
fgets(puffer, 80, handle);
fclose(handle);
}
return 0;
}
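For clarity, here is roughly what each loop iteration comes down to at the raw syscall level, i.e. the calls the tracers actually see (just a sketch; the open flags and the 0644 mode are my assumptions):

#include <fcntl.h>
#include <unistd.h>
#include <string.h>

int main(void)
{
    char puffer[100];
    int fd, i;

    for (i = 0; i < 100000; i++) {
        strcpy(puffer, "Sysdig is awesome!\n");
        /* each iteration generates open, write, lseek, read and close */
        fd = open("test.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;
        write(fd, puffer, strlen(puffer));
        lseek(fd, 0, SEEK_SET);
        read(fd, puffer, 80);
        close(fd);
    }
    return 0;
}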
I ran 5 tests with no tracing at all, and the average times were:
- real: 30.29 s
- user: 1.068 s
- sys: 13.098 s
I ran another 5 tests with a small LKM using kprobes; it took about 3 seconds more to complete the same test.
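The module itself is nothing fancy; it is something along these lines (a simplified sketch, not my exact module: the probed symbol, do_sys_open here, is an assumption and depends on the kernel version, and the handler is left empty):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kprobes.h>

/* Pre-handler runs just before the probed function; it is kept as
 * cheap as possible so it adds little overhead per syscall. */
static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
    return 0;
}

static struct kprobe kp = {
    .symbol_name = "do_sys_open",  /* symbol name depends on the kernel */
    .pre_handler = handler_pre,
};

static int __init kprobe_test_init(void)
{
    return register_kprobe(&kp);
}

static void __exit kprobe_test_exit(void)
{
    unregister_kprobe(&kp);
}

module_init(kprobe_test_init);
module_exit(kprobe_test_exit);
MODULE_LICENSE("GPL");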
Up to this point everything seemed normal, but then I ran the tests with Sysdig:
sudo sysdig proc.name="ctest" > ctestlog
where ctest is my C program, and I got these results on average:
- real: 17.1108 s
- user: 0.6336 s
- sys: 7.3752 s
So the outcome of these tests is that Sysdig made my process roughly twice as fast. Note that the standard deviation across my tests is less than 1 second.
Has anyone already seen something like this, or does anyone have an explanation?
Could it be related to the Linux task scheduler, because of the work Sysdig is doing?