Slick result set size default limit


I wonder whether there is a default result set size limit when performing a DBIOAction with the Slick integration for Akka Streams.

I am using the akka-stream Slick integration. When I write the following query:

Slick.source(sql"""select * FROM DRUG""".as[(Map[String,String])])

as in

val tableRows =
    Slick.source(sql"""select * FROM DRUG""".as[(Map[String,String])])
      .map{e => RawData("DRUG", "/Users/xxxx/xxxx/WSP1WS5-DRUG-auto-model.ttl", "OBJECT", "", e)}
      .mapAsyncUnordered(8){value =>
        Future{
          println(s"Writing {${value.toString}}")
          val kryo = kryoPool.obtain()
          val outStream = new ByteArrayOutputStream()
          val output = new Output(outStream, 4096)
          kryo.writeClassAndObject(output, value)
          output.close()
          kryoPool.free(kryo)
          new ProducerRecord[String, Array[Byte]]("test", outStream.toByteArray)
        }
      }
      .runWith(Producer.plainSink(producerSettings))

my query returns roughly 400 records and then just hangs. The table contains about 5k records. Is that normal?

However, I have been able to retrieve them all (albeit slower than I believe it should be) by using the following statement:

Slick.source(sql"""select * FROM DRUG Where ROWNUM <= 1000000000""".as[(Map[String,String])])

The database I am querying is Oracle, by the way. Hence I wonder whether it is Oracle, Slick, or the akka-stream integration that causes this behavior.
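For reference, one thing I considered trying is setting the JDBC fetch size explicitly on the streaming action, in case the driver's default is involved. This is just a sketch, assuming Slick's `withStatementParameters` on SQL streaming actions; the value 1000 is arbitrary:

```scala
// Sketch: explicitly set the JDBC fetch size on the streaming action.
// withStatementParameters is available on Slick SQL (streaming) actions;
// fetchSize = 1000 is an arbitrary choice, not a recommended value.
val action = sql"""select * FROM DRUG"""
  .as[Map[String, String]]
  .withStatementParameters(fetchSize = 1000)

val tableRows = Slick.source(action)
```

I have not verified whether this changes the point at which the stream stalls.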

Any suggestions on this?
