Linux netlink socket behaviours while pushing data from kernel to user space


I receive 20 MB/s of data in the kernel, streamed from an external device via PCIe/DMA in 1 MB chunks, and I need to push this data from the kernel to a user-space application. (Linux kernel v5.4, 64-bit)

According to a discussion here, I have a couple of options, and I'm now evaluating netlink sockets for this purpose. I have studied this book (chapter 2) and a couple of Stack Overflow questions, but I couldn't find answers to my questions about netlink before starting the actual implementation.

The sources above mention that netlink has an internal FIFO, so the application doesn't have to process the data pushed from the kernel within a strict, narrow time window, which suits my intention perfectly.
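For context, the user-space receiver I have in mind is just a plain AF_NETLINK socket with an enlarged receive buffer and a blocking receive loop. This is a rough sketch, not working code: the protocol number (31), the 128 MB buffer request, and the assumption that SO_RCVBUF is what bounds the "internal FIFO" are all my guesses, not anything I found documented.

    /* Sketch of the intended user-space receiver (not final code).
     * MY_NETLINK_PROTO is a placeholder for whatever protocol number
     * the kernel module will register. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>

    #define MY_NETLINK_PROTO 31          /* placeholder protocol number */
    #define RX_BUF_BYTES     (128 << 20) /* the 128 MB "FIFO" I'm hoping for */
    #define CHUNK_BYTES      (1 << 20)   /* 1 MB DMA chunk */

    int main(void)
    {
        int fd = socket(AF_NETLINK, SOCK_RAW, MY_NETLINK_PROTO);
        if (fd < 0) { perror("socket"); return 1; }

        /* Try to enlarge the socket receive queue; I assume this is what
         * actually bounds the "internal FIFO" (question 1 below). */
        int rcvbuf = RX_BUF_BYTES;
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
            perror("setsockopt(SO_RCVBUF)");

        struct sockaddr_nl addr = { .nl_family = AF_NETLINK, .nl_pid = getpid() };
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind"); return 1;
        }

        char *buf = malloc(NLMSG_SPACE(CHUNK_BYTES));
        for (;;) {
            ssize_t n = recv(fd, buf, NLMSG_SPACE(CHUNK_BYTES), 0);
            if (n < 0) { perror("recv"); break; }
            struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
            /* process NLMSG_DATA(nlh), NLMSG_PAYLOAD(nlh, 0) bytes */
            printf("got %u payload bytes\n", (unsigned)NLMSG_PAYLOAD(nlh, 0));
        }
        free(buf);
        close(fd);
        return 0;
    }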

But none of these sources answer the following:

  1. I want to allocate a 128 MB FIFO so that the app has ~6 s to process the queued data before it overflows. The netlink APIs provide no interface to specify a FIFO size, so does netlink really have an internal FIFO? If yes, what is its size and how can I increase/decrease it from the kernel module?

  2. I intend to call nlmsg_put() to push data from the kernel to the app inside the MSI interrupt handler that signals that a 1 MB chunk is ready (see the kernel-side sketch after this list). Since it runs in interrupt context, the call must be non-blocking. Is this function really non-blocking?

  3. What is the behaviour of nlmsg_put() if the internal FIFO is full?

  4. In linux/netlink.h, struct nlmsghdr, which is filled in by nlmsg_put(), has its nlmsg_len field defined as a u32. Does that mean I can push each 1 MB chunk I receive via DMA to the application in a single message?

  5. Does netlink follow a zero-copy approach when data is pushed from the kernel to an app?
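To make the intent behind questions 2–5 concrete, this is roughly the kernel-side push I'm planning. It's a simplified, untested sketch with error handling mostly omitted; MY_NETLINK_PROTO, app_pid, push_chunk() and how the app's portid is learned are placeholders/assumptions on my part.

    /* Simplified sketch of the planned kernel side (not tested). */
    #include <linux/module.h>
    #include <linux/string.h>
    #include <linux/netlink.h>
    #include <net/sock.h>
    #include <net/netlink.h>

    #define MY_NETLINK_PROTO 31      /* placeholder protocol number */
    #define CHUNK_BYTES      (1 << 20)

    static struct sock *nl_sk;
    static u32 app_pid;              /* portid of the user-space app, assumed to
                                      * be learned from an initial "hello" message */

    /* Called from the MSI interrupt path when a 1 MB DMA chunk is ready. */
    static void push_chunk(const void *chunk)
    {
        struct sk_buff *skb;
        struct nlmsghdr *nlh;

        /* GFP_ATOMIC because this may run in interrupt context (question 2). */
        skb = nlmsg_new(CHUNK_BYTES, GFP_ATOMIC);
        if (!skb)
            return;                  /* drop this chunk? */

        /* nlmsg_put() only reserves and fills the header inside the skb;
         * the payload is copied in afterwards (questions 4 and 5). */
        nlh = nlmsg_put(skb, 0, 0, NLMSG_DONE, CHUNK_BYTES, 0);
        if (!nlh) {
            kfree_skb(skb);
            return;
        }
        memcpy(nlmsg_data(nlh), chunk, CHUNK_BYTES);

        /* What happens here when the receiver's queue is full (question 3)? */
        nlmsg_unicast(nl_sk, skb, app_pid);
    }

    static int __init my_init(void)
    {
        struct netlink_kernel_cfg cfg = { 0 };  /* .input callback omitted */

        nl_sk = netlink_kernel_create(&init_net, MY_NETLINK_PROTO, &cfg);
        return nl_sk ? 0 : -ENOMEM;
    }

    static void __exit my_exit(void)
    {
        netlink_kernel_release(nl_sk);
    }

    module_init(my_init);
    module_exit(my_exit);
    MODULE_LICENSE("GPL");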
