I'm about to write a FastCGI / SCGI server (two independent implementations), but I don't have much experience with network programming, and especially not with long-running or delayed request handling.
I ran into a theoretical problem while thinking through the details of what I want to do. When the web server sends a request to my server over TCP, the incoming bytes are passed to the current handler (either the FastCGI or the SCGI handler). The problem is: what happens if creating the response takes some time? Let's say the client requests a large file of about 10 MB. Am I right that my server has to wait until all 10 MB have been flushed to the client? That would mean that if the client has a slow internet connection, my server is blocked until the full 10 MB have been sent, isn't it?
The same applies to requests that need to execute SQL statements, for example; those take some time as well, and the server would be blocked for that time too, wouldn't it? Please correct me if I'm wrong. A sketch of what I mean follows below.
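To make the concern concrete, here is a minimal sketch of the single-threaded loop I have in mind. It uses plain Python sockets with no real FastCGI/SCGI framing, and `handle_request` / `generate_large_response` are just placeholder names I made up for illustration:

```python
import socket

def generate_large_response():
    # Placeholder for slow work, e.g. an SQL query or reading a big file.
    return b"x" * (10 * 1024 * 1024)          # stand-in for a ~10 MB response

def handle_request(conn):
    # Placeholder: read the request (no real FastCGI/SCGI parsing here).
    conn.recv(65536)
    body = generate_large_response()
    # sendall() blocks until the client's TCP window has accepted every byte,
    # so a client on a slow connection stalls this loop for the whole transfer.
    conn.sendall(body)
    conn.close()

def serve(host="127.0.0.1", port=4000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(16)
    while True:
        conn, _addr = srv.accept()
        # Everything happens on this one thread: while one response is being
        # generated or flushed, no other connection gets accepted or served.
        handle_request(conn)

if __name__ == "__main__":
    serve()
```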
To solve this problem, my only idea is to use threads, and I'm not very experienced with them. Also, if I understood the SCGI protocol correctly, threads are not a solution there, because SCGI expects an immediate answer on the connection, unlike FastCGI, which supports multiplexing via request IDs. What would be your solution?
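For reference, this is roughly the thread-per-connection variant I'm considering. It's again just a sketch on plain sockets under the same assumptions as above (the `handle_request` placeholder stands in for the real FastCGI/SCGI handling), so it may well be the wrong approach:

```python
import socket
import threading

def handle_request(conn):
    # Placeholder handler: read the request, do the slow work, send the reply.
    conn.recv(65536)
    conn.sendall(b"x" * (10 * 1024 * 1024))   # stand-in for a ~10 MB response
    conn.close()

def serve_threaded(host="127.0.0.1", port=4000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(16)
    while True:
        conn, _addr = srv.accept()
        # One thread per connection: a slow client or a slow SQL query only
        # blocks its own thread, not the accept loop.
        threading.Thread(target=handle_request, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    serve_threaded()
```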