Where is the bottleneck in these 10 requests per second with Python Bottle + Javascript fetch?


I am sending 10 HTTP requests per second between Python Bottle server and a browser client with JS:

import bottle, time
app = bottle.Bottle()
@bottle.route('/')
def index():
    return """<script>
var i = 0;
setInterval(() => {
    i += 1;
    let i2 = i;
    console.log("sending request", i2);
    fetch("/data")
        .then((r) => r.text())
        .then((arr) => {
            console.log("finished processing", i2);
        });
}, 100);
</script>"""
@bottle.route('/data')
def data():
    return "abcd"
bottle.run(port=80)

The result is rather poor:

sending request 1
sending request 2
sending request 3
sending request 4
finished processing 1
sending request 5
sending request 6
sending request 7
finished processing 2
sending request 8
sending request 9
sending request 10
finished processing 3
sending request 11
sending request 12

Why does it fail to process 10 requests per second (on an average i5 computer)? Is there a known bottleneck in my code?

Where are the ~100 ms lost per request that prevent the program from keeping a steady pace like:

sending request 1
finished processing 1
sending request 2
finished processing 2
sending request 3
finished processing 3

?

Notes:

  • Tested with Flask instead of Bottle and the problem is similar

  • Is there a simple way to get this working:

    • without monkey-patching the Python stdlib (i.e. from gevent import monkey; monkey.patch_all()),

    • and without a much more complex setup such as Gunicorn (which is not easy at all on Windows)?
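For what it's worth, the serialization can be reproduced without Bottle at all. The sketch below is my own stdlib-only illustration (not code from the question): bottle.run()'s default backend is wsgiref.simple_server, which handles one request at a time, so two concurrent requests against a handler that takes 200 ms finish in roughly 400 ms total instead of 200 ms:

```python
# Stdlib-only sketch: demonstrate that wsgiref's default WSGIServer
# (Bottle's default backend) serializes concurrent requests.
import threading
import time
import http.client
from wsgiref.simple_server import make_server

def slow_app(environ, start_response):
    time.sleep(0.2)  # simulate 200 ms of work per request
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"abcd"]

# make_server uses the single-threaded WSGIServer by default
srv = make_server("127.0.0.1", 0, slow_app)
port = srv.server_address[1]
threading.Thread(target=srv.serve_forever, daemon=True).start()

def fetch():
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/")
    conn.getresponse().read()
    conn.close()

t0 = time.monotonic()
workers = [threading.Thread(target=fetch) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
elapsed = time.monotonic() - t0
print(f"2 concurrent requests took {elapsed:.2f}s")  # ~0.4 s: serialized
srv.shutdown()
```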

There are 3 best solutions below

Maurice Meyer

As mentioned in the comments, using gevent makes your code run as expected, using only gevent's monkey-patching capabilities, with no async rewrite:

from gevent import monkey

monkey.patch_all()
import bottle

app = bottle.Bottle()


@bottle.route('/')
def index():
    return """<script>
var i = 0;
setInterval(() => {
    i += 1;
    let i2 = i;
    console.log("sending request", i2);
    fetch("/data")
        .then((r) => r.text())
        .then((arr) => {
            console.log("finished processing", i2);
        });

}, 100);
</script>"""


@bottle.route('/data')
def data():
    return "abcd"


bottle.run(host='0.0.0.0', port=80, server='gevent')

Browser console output:

sending request 158
(index):10 finished processing 158
(index):6 sending request 159
(index):10 finished processing 159
(index):6 sending request 160
(index):10 finished processing 160
(index):6 sending request 161
(index):10 finished processing 161
(index):6 sending request 162
(index):10 finished processing 162
(index):6 sending request 163
(index):10 finished processing 163
(index):6 sending request 164
(index):10 finished processing 164
(index):6 sending request 165
(index):10 finished processing 165
(index):6 sending request 166

Note:

You could also create your own threaded WSGI server (pure stdlib Python, no extra dependency):

import bottle

from wsgiref.simple_server import make_server, WSGIServer
from socketserver import ThreadingMixIn


class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    daemon_threads = True


class MyServer:

    def __init__(self, wsgi_app, listen='0.0.0.0', port=80):
        self.wsgi_app = wsgi_app
        self.listen = listen
        self.port = port
        self.server = make_server(self.listen, self.port, self.wsgi_app,
                                  ThreadingWSGIServer)

    def serve_forever(self):
        self.server.serve_forever()


app = bottle.Bottle()


@bottle.route('/')
def index():
    return """<script>
var i = 0;
setInterval(() => {
    i += 1;
    let i2 = i;
    console.log("sending request", i2);
    fetch("/data")
        .then((r) => r.text())
        .then((arr) => {
            console.log("finished processing", i2);
        });

}, 100);
</script>"""


@bottle.route('/data')
def data():
    return "abcd"


if __name__ == '__main__':
    wsgiapp = bottle.default_app()
    myWsgiServer = MyServer(wsgiapp)
    myWsgiServer.serve_forever()
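As a quick stdlib-only check (my own sketch, independent of Bottle) that a ThreadingMixIn server like the one above really overlaps requests: two clients hitting a handler that sleeps 200 ms finish in roughly 200 ms total, not 400 ms.

```python
# Verify that ThreadingMixIn + WSGIServer handles requests concurrently.
import threading
import time
import http.client
from wsgiref.simple_server import make_server, WSGIServer
from socketserver import ThreadingMixIn

class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    daemon_threads = True

def slow_app(environ, start_response):
    time.sleep(0.2)  # simulate 200 ms of work per request
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"abcd"]

srv = make_server("127.0.0.1", 0, slow_app, ThreadingWSGIServer)
port = srv.server_address[1]
threading.Thread(target=srv.serve_forever, daemon=True).start()

def fetch():
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/")
    conn.getresponse().read()
    conn.close()

t0 = time.monotonic()
workers = [threading.Thread(target=fetch) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
elapsed = time.monotonic() - t0
print(f"2 concurrent requests took {elapsed:.2f}s")  # ~0.2 s: overlapped
srv.shutdown()
```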
flakes

Are you absolutely tied to Flask/Bottle? You can pretty easily get this working out of the box with a FastAPI server.

The nice thing with FastAPI is that everything then runs single-threaded with asyncio support: no monkey-patching or other unusual behavior from gevent is required. IMO that makes life a lot easier.

I added some timestamps to show that it's sending roughly 10 requests per second.

from fastapi import FastAPI
from fastapi.responses import HTMLResponse
import uvicorn

app = FastAPI()

@app.get("/", response_class=HTMLResponse)
async def index():
    return """<script>
var i = 0;
const start = Date.now();
setInterval(() => {
    const startOffset = Date.now() - start;
    i += 1;
    let i2 = i;
    console.log(`${startOffset}: sending request`, i2);
    fetch("/data")
        .then((r) => r.text())
        .then((arr) => {
            const duration = Date.now() - start - startOffset;
            console.log(`finished processing ${i2} in ${duration}ms`);
        });
}, 100);
</script>"""

@app.get("/data")
async def data():
    return "abcd"

if __name__ == "__main__":
    uvicorn.run(app)

Browser console output:

106: sending request 1
finished processing 1 in 22ms
208: sending request 2
finished processing 2 in 18ms
315: sending request 3
finished processing 3 in 20ms
420: sending request 4
finished processing 4 in 8ms
524: sending request 5
finished processing 5 in 27ms
624: sending request 6
finished processing 6 in 10ms
729: sending request 7
finished processing 7 in 39ms
831: sending request 8
finished processing 8 in 37ms
932: sending request 9
finished processing 9 in 12ms
1037: sending request 10
finished processing 10 in 7ms
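The idea behind the async approach can be sketched with the stdlib alone (this is an illustration of asyncio concurrency in general, not of FastAPI itself): two coroutines that each await 200 ms of simulated I/O complete together in about 200 ms on a single thread, instead of queueing.

```python
# Stdlib-only sketch: coroutines awaiting I/O run concurrently on one
# thread, so two 200 ms waits overlap instead of running back to back.
import asyncio
import time

async def handler():
    await asyncio.sleep(0.2)  # stands in for awaiting a slow I/O operation
    return "abcd"

async def main():
    t0 = time.monotonic()
    results = await asyncio.gather(handler(), handler())
    return results, time.monotonic() - t0

results, elapsed = asyncio.run(main())
print(results, f"{elapsed:.2f}s")  # both finish in about 0.2 s total
```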
Salman A

There is a simple explanation:

The browser and server need a finite amount of time to establish an HTTP connection. The bulk of the delay you're observing is that setup time. Here is an example of Chrome requesting a plain-text file from an IIS server running locally:

[screenshot: network timing for a request over a new connection]

If we eliminate the time needed to establish a new HTTP connection, it becomes:

[screenshot: network timing for a request reusing an existing connection]

In the above example, the browser and server were able to use Connection: keep-alive; the connection used to fetch test-02.html was reused to fetch data.txt.

Unfortunately, Bottle's built-in development server does not seem to support HTTP keep-alive. While that isn't necessarily bad on its own, it creates another issue when the browser makes multiple requests to one origin: browsers only open around six connections per origin in parallel, so the remaining requests are queued. This explains the symptoms you mentioned.

You need to use another server. The alternatives suggested in the other answers do support keep-alive. The first six requests will still take longer, but the remaining ones will not pay the connection-setup overhead. Your code itself has no speed issue. You can use the Chrome Developer Tools > Network tab to verify this.
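The effect of keep-alive can be sketched with the stdlib alone. This is my own illustration using http.server rather than Bottle; note that on loopback the difference is tiny, whereas in a browser over a real network the per-connection setup (TCP handshake, and TLS if any) dominates:

```python
# Compare N requests over fresh connections vs. one reused keep-alive
# connection against a threaded stdlib HTTP server.
import threading
import time
import http.client
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class DataHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 enables keep-alive

    def do_GET(self):
        body = b"abcd"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), DataHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def avg_ms(reuse, n=20):
    t0 = time.monotonic()
    if reuse:  # one persistent connection for all n requests
        conn = http.client.HTTPConnection("127.0.0.1", port)
        for _ in range(n):
            conn.request("GET", "/data")
            assert conn.getresponse().read() == b"abcd"
        conn.close()
    else:  # a fresh TCP connection per request (no keep-alive)
        for _ in range(n):
            conn = http.client.HTTPConnection("127.0.0.1", port)
            conn.request("GET", "/data")
            assert conn.getresponse().read() == b"abcd"
            conn.close()
    return 1000 * (time.monotonic() - t0) / n

fresh = avg_ms(reuse=False)
kept = avg_ms(reuse=True)
print(f"fresh connection per request: {fresh:.2f} ms, reused: {kept:.2f} ms")
server.shutdown()
```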