Postgres optimize query on large table


I am trying to select rows from a big table (~10M rows).
The table has a timestamp column on which I have set an index.
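
For context, the index was created with something along these lines (the index name here is illustrative, not from the original post):

CREATE INDEX my_table_timestamp_idx ON my_table (timestamp);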

I want to fetch rows in chunks of N, in order (sorted by timestamp), which is why I use LIMIT and OFFSET.

I run parallel queries from different processes.

The queries take ages, and the disk usage of my server explodes (more than 150 GB above normal disk usage).

Once the queries are killed, the disk usage goes back to normal.

This is the EXPLAIN output for my query (this one is without the ORDER BY):

my_db=# explain select * from my_table limit 1 offset 1;
                                   QUERY PLAN
---------------------------------------------------------------------------------
 Limit  (cost=0.14..0.28 rows=1 width=1001)
   ->  Seq Scan on my_table  (cost=0.00..1447042.46 rows=10162346 width=1001)

Here is what the real query looks like:

SELECT my_column
FROM my_table
WHERE id > x
ORDER BY timestamp ASC
LIMIT y
OFFSET z

Any ideas on how to optimize this process? Thanks a lot.

There is 1 answer below


https://www.eversql.com/faster-pagination-in-mysql-why-order-by-with-limit-and-offset-is-slow/

This uses a unique column as a bookmark (keyset pagination) so that OFFSET is no longer needed.
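
For illustration, here is a sketch of that bookmark (keyset) approach applied to the table from the question. The index name, the chunk size of 1000, and the last_seen_timestamp / last_seen_id placeholders are assumptions, not part of the original post; the idea is to order by (timestamp, id) and resume each chunk from the last row of the previous one instead of using OFFSET:

-- Composite index: the sort key plus a unique tiebreaker, so every row has a
-- well-defined position in the ordering (index name is illustrative).
CREATE INDEX my_table_timestamp_id_idx ON my_table (timestamp, id);

-- First chunk: no bookmark yet.
SELECT my_column, timestamp, id
FROM my_table
ORDER BY timestamp ASC, id ASC
LIMIT 1000;

-- Subsequent chunks: resume strictly after the last row of the previous chunk.
-- last_seen_timestamp and last_seen_id are the values taken from that row.
SELECT my_column, timestamp, id
FROM my_table
WHERE (timestamp, id) > (last_seen_timestamp, last_seen_id)
ORDER BY timestamp ASC, id ASC
LIMIT 1000;

With OFFSET z, Postgres has to produce and then discard the first z rows of the sorted result on every call, so each chunk gets progressively more expensive; with the bookmark, the row-value comparison lets each chunk's index scan on (timestamp, id) start directly at the right position.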