To reduce some of the query load on my database, I currently batch multiple inserts together into one INSERT statement with a variable number of rows. This produces queries like:

INSERT INTO example_table (column_1, column_2, ...)
VALUES ($1, $2, ...), ($3, $4, ...), ($5, $6, ...), ...
RETURNING "id";

Because the number of rows varies, each batch size produces a differently shaped statement, so from the perspective of the pg_stat_statements extension it looks like many different types of queries are being run, and the statistics are split across many entries.
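For illustration, the split can be observed with a query along these lines (this assumes the pg_stat_statements extension is installed; the timing column is mean_exec_time from PostgreSQL 13, mean_time before that):

-- Batches of different sizes normalize to different query texts,
-- e.g. VALUES ($1, $2) vs. VALUES ($1, $2), ($3, $4), so each batch
-- size is tracked as its own entry.
SELECT query, calls, mean_exec_time
FROM pg_stat_statements
WHERE query LIKE 'INSERT INTO example_table%'
ORDER BY calls DESC;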

I am looking to efficiently batch-insert several rows while allowing pg_stat_statements to aggregate all of the statistics into a single entry. Is there an approach that achieves this, perhaps by telling pg_stat_statements that these are all the same query, or by using prepared statements?

Best answer:

I addressed this by inserting rows with UNNEST, which allowed for a consistent query shape regardless of the number of rows inserted.

INSERT INTO example_table (column_1, column_2, ...)
  (SELECT * FROM UNNEST($1::uuid[], $2::varchar[], ...));

UNNEST worked because I didn't need DEFAULT in any of my rows (there is no way to express DEFAULT inside an array). The array casts are not always necessary, but I added them to be explicit; for some types, such as uuid[], they are required so that the parameter types can be resolved.
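As a minimal, self-contained sketch that can be run from psql (the table definition here is hypothetical, and gen_random_uuid() is built into PostgreSQL 13 and later, available via pgcrypto before that), a prepared statement makes the fixed query shape explicit:

CREATE TABLE example_table (
  id       uuid    PRIMARY KEY DEFAULT gen_random_uuid(),
  column_1 uuid    NOT NULL,
  column_2 varchar NOT NULL
);

PREPARE batch_insert (uuid[], varchar[]) AS
  INSERT INTO example_table (column_1, column_2)
    (SELECT * FROM UNNEST($1::uuid[], $2::varchar[]))
  RETURNING "id";

-- Different batch sizes, identical statement text: pg_stat_statements
-- aggregates both executions under a single entry.
EXECUTE batch_insert(
  ARRAY[gen_random_uuid(), gen_random_uuid()],
  ARRAY['a', 'b']::varchar[]
);
EXECUTE batch_insert(
  ARRAY[gen_random_uuid(), gen_random_uuid(), gen_random_uuid()],
  ARRAY['c', 'd', 'e']::varchar[]
);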

For my use case, the performance was comparable to that of INSERT queries with multiple VALUES tuples: across roughly 600k calls of each form, UNNEST was faster on average but had a higher standard deviation.