Any idea why requests to Vert.x embedded in Grails are synchronously queued up?

Environment: Mac OS X Lion
Grails version: 2.1.0
Java: 1.7.0_08-ea

If I start up Vert.x in embedded mode within BootStrap.groovy and try to hit the same WebSocket endpoint through multiple browsers, the requests get queued up.

So depending on the timing of the requests, the next request only gets into the handler after the previous one has finished executing.

I've tried this with both WebSockets and SockJS and see the same behavior with both.

BootStrap.groovy (SockJS):

    def vertx = Vertx.newVertx()
    def server = vertx.createHttpServer()
    def sockJSServer = vertx.createSockJSServer(server)
    def config = ["prefix": "/eventbus"]

    sockJSServer.installApp(config) { sock ->
        sleep(10000)      // simulate a long-running handler; this blocks
    }
    server.listen(8088)

javascript:

    <script>
        function initializeSocket(message) {
            console.log('initializing web socket');
            var socket = new SockJS("http://localhost:8088/eventbus");
            socket.onmessage = function(event) {
                console.log("received message");
            };
            socket.onopen = function() {
                console.log("start socket");
                socket.send(message);
            };
            socket.onclose = function() {
                console.log("closing socket");
            };
        }
    </script>

OR

BootStrap.groovy (WebSockets):

    def vertx = Vertx.newVertx()
    def server = vertx.createHttpServer()
    server.setAcceptBacklog(10000)
    server.websocketHandler { ws ->
        println('**received websocket request')
        sleep(10000)      // simulate a long-running handler; this blocks
    }.listen(8088)

javascript:

    socket = new WebSocket("ws://localhost:8088/ffff");
    socket.onmessage = function(event) {
        console.log("message received");
    };
    socket.onopen = function() {
        console.log("socket opened");
        socket.send(message);
    };
    socket.onclose = function() {
        console.log("closing socket");
    };
There are 4 answers below.

BEST ANSWER

From the helpful folks at vertx:

def server = vertx.createHttpServer() is actually running inside a verticle, and a verticle is a single-threaded process: all of its handlers execute on one event loop, so a blocking call such as sleep(10000) queues every other connection behind it.
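
A quick way to see this in action (a minimal sketch using the same embedded Vert.x 1.x API as the question; the log text is illustrative):

    def vertx = Vertx.newVertx()
    vertx.createHttpServer().websocketHandler { ws ->
        // every connection logs the same event-loop thread name, which is why
        // a sleep in one handler stalls all the other connections
        println "handled on ${Thread.currentThread().name}"
    }.listen(8088)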

ANSWER

Vert.x uses the JVM to create a so-called "multi-reactor" pattern, that is, a reactor pattern modified to perform better.

As far as I understand, it is not true that each verticle has its own thread: each verticle is always served by the same event loop, but several verticles can be bound to the same event loop, and there can be multiple event loops. An event loop is basically a thread, so a few threads can serve many verticles.

I haven't used Vert.x in embedded mode (and I don't know whether the main concept changes there), but you should perform much better by instantiating many verticles for the job, as in the sketch below.
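
A rough sketch of that idea (this assumes the standard, non-embedded verticle model where vertx is injected into the script; the file name and instance count are illustrative):

    // WsServer.groovy — the handler from the question written as a standalone
    // verticle, so several instances (each tied to an event loop) can share
    // the listening socket instead of queuing behind a single one.
    vertx.createHttpServer().websocketHandler { ws ->
        println "websocket served by ${Thread.currentThread().name}"
    }.listen(8088)

Deployed with something like vertx run WsServer.groovy -instances 4, incoming connections are spread across the instances.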

Regards, Carlo

ANSWER

As bluesman says, each verticle runs in its own thread. You can spread your verticles across the cores in your hardware, and even cluster them with more machines. But this only adds capacity to accept simultaneous requests.

When programming real-time apps, we should try to build the response as soon as possible to avoid blocking. If you think your operation can be time-intensive, consider this model:

  1. Make a request.
  2. Pass the task to a worker verticle, assign the task a UUID (for example), and put that UUID into the response. The caller now knows the work is in progress and receives the response quickly.
  3. When the worker finishes the task, it puts a notification on the event bus using the assigned UUID.
  4. The caller checks the event bus for the task result.

This is typically done in a web application via WebSockets, SockJS, etc.

This way you can accept thousands of requests without blocking, and clients will receive the result without blocking the UI.
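
A rough sketch of that flow, staying in the embedded mode used in the question (Vert.x 1.x API assumed; a plain Java executor stands in for a worker verticle, and the address and message formats are made up for illustration):

    import java.util.concurrent.Executors

    def vertx    = Vertx.newVertx()
    def executor = Executors.newFixedThreadPool(4)   // runs the blocking work, off the event loop

    vertx.createHttpServer().websocketHandler { ws ->
        ws.dataHandler { buffer ->
            def taskId = UUID.randomUUID().toString()
            // steps 1-2: accept the request and answer immediately with the task id
            ws.writeTextFrame("accepted:${taskId}")
            // step 4: listen on the event bus for completion of that task id
            vertx.eventBus().registerHandler("task.done.${taskId}") { msg ->
                ws.writeTextFrame("done:${taskId}")
            }
            // step 3: the "worker" runs the slow job and publishes a notification when finished
            executor.execute {
                Thread.sleep(10000)
                vertx.eventBus().publish("task.done.${taskId}", "finished")
            }
        }
    }.listen(8088)

In a real deployment the slow job would live in a worker verticle rather than a plain executor, which also keeps every Vert.x call on Vert.x-managed threads.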

ANSWER

As mentioned before, the Vert.x concept is based on the reactor pattern, which means a single instance has at least one single-threaded event loop and processes events sequentially. Request processing may consist of several events; the point is to serve the request and each event with non-blocking routines.

For example, while you wait for a WebSocket message, the request should be suspended and woken back up when the message arrives. Whatever you do with the message should also be non-blocking and therefore asynchronous, like any file I/O, network I/O, or DB access. Vert.x provides the basic elements you should use to build such an async flow: Buffers, Pumps, Timers, and the EventBus.

To wrap it up: just never block. The use of sleep(10000) kills the concept. If you really need to delay execution, use Vert.x's timers instead, as sketched below.
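
For example, a minimal sketch of the question's handler with the sleep replaced by a timer (same embedded Vert.x 1.x API assumed; the reply text is illustrative):

    def vertx = Vertx.newVertx()
    vertx.createHttpServer().websocketHandler { ws ->
        println('**received websocket request')
        // schedule the follow-up instead of sleeping: the event loop stays free
        // to accept and serve other connections in the meantime
        vertx.setTimer(10000) { timerId ->
            ws.writeTextFrame("done after 10 seconds")
        }
    }.listen(8088)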