This is an extension to my previous question
How does blocking mode in unix/linux sockets work?
What I gather from the Internet is that all processes invoking blocking calls are put to sleep until the scheduler finds a reason to unblock them. The reason can vary from a buffer being empty to a buffer being full, or any other condition.
But then can this be an efficient approach for real-time, let's say hard/firm real-time, applications? The process is not unblocked the moment the unblocking condition holds true; rather, it runs only when the scheduler gives it its CPU slice *and* the condition is true.
Yet if you want a responsive solution, I don't think "spin locks" or "busy waits" are the right way to do it: CPU slices are wasted, and overall the system becomes unresponsive, or at best poorly responsive.
Can somebody please clear up these conflicting thoughts?
First of all, you have a misconception:
Blocking calls are not "busy waiting" or "spin locks". A blocking call puts the process to sleep -- the CPU works on other tasks in the meantime, so no CPU time is wasted.
On your question about blocking calls:
Blocking calls are easier -- easier to understand, easier to develop, easier to debug.
But they are a resource hog. If you don't use threads, one blocked call stalls all your other clients; if you use threads, each thread takes up memory and other system resources. Even if you have plenty of memory, switching threads makes the caches cold and reduces performance.
This is a trade-off -- faster development and maintainability, or scalability.