I'm writing a simple RPC server in Python 3 using `pycapnp`. I have a single function call that takes roughly a second to complete. This function also needs data stored in a cache (currently implemented using `lru-dict`).
Everything works fine with a single client, but as soon as I start increasing the load, requests start queuing: the wall time measured inside the function on the server stays at roughly 1 second, while on the client I can easily see 10 seconds or more.
As far as I can tell, there's currently no support in `pycapnp` for other event loops.
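To make clear what I mean by that: below is a sketch of what I was hoping to be able to write, assuming (hypothetically) that `pycapnp` could run on an asyncio loop. `example.capnp`, `ExampleImpl`, `long_running_function` and `build_cache` are placeholders for my actual code:

```python
import asyncio
import capnp
from concurrent.futures import ThreadPoolExecutor

example_capnp = capnp.load('example.capnp')  # hypothetical schema file

class ExampleImpl(example_capnp.Example.Server):
    def __init__(self):
        self.executor = ThreadPoolExecutor(max_workers=8)
        self.cached_data = build_cache()  # placeholder: the lru-dict cache

    async def call(self, request, _context, **kwargs):
        # Hand the ~1 s function off to a worker thread and yield back to
        # the loop, so other clients' requests are served in the meantime.
        loop = asyncio.get_running_loop()
        _context.results.response = await loop.run_in_executor(
            self.executor, long_running_function, request, self.cached_data)
```

As far as I can tell, nothing like this is possible with `pycapnp` today.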
I've tried creating a `ThreadPoolExecutor` in the `__init__` method of the server implementation (where the cache is also created) and then adding the following to the RPC method:
```python
capnp.Promise(self.executor.submit(long_running_function, request, cached_data).result()) \
    .then(lambda result: setattr(_context.results, 'response', result))
```
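As I understand it, the problem is that `.result()` runs on the calling thread before `capnp.Promise(...)` ever sees a value, i.e. the one-liner above is equivalent to this (same names as above):

```python
# Spelled out: .result() blocks the server's thread for the full ~1 s,
# and only afterwards is the already-resolved value wrapped in a Promise.
future = self.executor.submit(long_running_function, request, cached_data)
value = future.result()         # <-- synchronous wait; nothing else is served
promise = capnp.Promise(value)  # created already fulfilled
promise.then(lambda result: setattr(_context.results, 'response', result))
```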
While this works, the main thread is apparently still waiting on each `Promise` to be fulfilled, which again means clients have to queue.
Has anyone in a similar situation found a way out? I don't necessarily have to use `pycapnp`, but that's the recommended Cap'n Proto Python implementation AFAICT.
Would serialising the messages myself and simply sending them over a socket be simpler? I have no use for promise pipelining or any of the other fancy features in Cap'n Proto RPC.
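If I go that route, this is roughly the design I have in mind: a thread per connection, with length-prefixed Cap'n Proto blobs on the wire. The schema, the `Request`/`Response` types and their `value` field, `long_running_function` and `build_cache` are placeholders, and `socket.create_server` needs Python 3.8+:

```python
import socket
import struct
import threading
import capnp

example_capnp = capnp.load('example.capnp')  # hypothetical schema file

def recv_exact(conn, n):
    """Read exactly n bytes from the socket."""
    buf = b''
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('peer closed connection')
        buf += chunk
    return buf

def handle(conn, cached_data):
    # One thread per connection, so one client's ~1 s call no longer
    # blocks the others. Each message is a 4-byte length prefix followed
    # by an unpacked Cap'n Proto message (to_bytes()/from_bytes()).
    with conn:
        size, = struct.unpack('!I', recv_exact(conn, 4))
        request = example_capnp.Request.from_bytes(recv_exact(conn, size))
        result = long_running_function(request, cached_data)  # placeholder
        payload = example_capnp.Response.new_message(value=result).to_bytes()
        conn.sendall(struct.pack('!I', len(payload)) + payload)

def serve(host='0.0.0.0', port=9000):
    # Placeholder: the lru-dict cache, shared across threads (might need
    # a lock around it, depending on its thread-safety guarantees).
    cached_data = build_cache()
    with socket.create_server((host, port)) as server:
        while True:
            conn, _ = server.accept()
            threading.Thread(target=handle, args=(conn, cached_data),
                             daemon=True).start()
```

Is there a reason to prefer sticking with the RPC layer over something like this?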