The problem I'm having: I have a large set of data I want to retrieve from InfluxDB, with a rather expensive call to schema.fieldsAsCols() at the end. Sometimes the results exceed 100 MB, so I want to paginate the same way the raw data table does, just with a larger number of results per page.
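For context, the full shape of the query is roughly as follows (the bucket name, time range, and filter are placeholders, not my real values):

```flux
import "influxdata/influxdb/schema"

from(bucket: "my-bucket")                                    // placeholder bucket
  |> range(start: -24h)                                      // placeholder range
  |> filter(fn: (r) => r._measurement == "my_measurement")   // placeholder filter
  |> schema.fieldsAsCols()                                   // the expensive step
```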
I've tried the usual limit(n: 10, offset: 10), but of course limit() applies per table, so I've used
|> schema.fieldsAsCols()
|> group()
|> limit(n: 10, offset: 10)
but doing it this way still requires the expensive schema.fieldsAsCols() call on every page, so the pagination is pointless, and reordering limit() before group() semantically changes the query. I've also had trouble getting the total number of results, eventually settling on
|> schema.fieldsAsCols()
|> group()
|> map(fn: (r) => ({r with _value: 1}))
|> count()
since after schema.fieldsAsCols() there is no _value column to count.
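(A possibly simpler variant, assuming _time survives schema.fieldsAsCols() as it should, is to count that column directly and skip the map():

```flux
  |> schema.fieldsAsCols()
  |> group()
  |> count(column: "_time")   // count an always-present column instead of _value
```

though this has the same performance problem.)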
This all gives the output I want, but each query is very slow because of the schema.fieldsAsCols() call, while the raw data table view returns results instantly.
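One workaround I've considered is paginating by time range instead of by row offset, so schema.fieldsAsCols() only ever runs over one page's worth of raw data. A rough sketch (pageStart and pageStop are hypothetical variables derived from the previous page's last _time):

```flux
import "influxdata/influxdb/schema"

from(bucket: "my-bucket")                                    // placeholder bucket
  |> range(start: pageStart, stop: pageStop)                 // narrow range per page
  |> filter(fn: (r) => r._measurement == "my_measurement")   // placeholder filter
  |> schema.fieldsAsCols()
  |> group()
  |> sort(columns: ["_time"])                                // deterministic page order
  |> limit(n: 10)
```

But this makes the page boundaries depend on the data rather than a fixed page size, which isn't quite what I'm after.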
Is there any way to paginate efficiently with Flux?