Inserting rows into a temp table


I have a performance problem when creating a temporary table. The following code is part of a plpgsql function:

StartTime := clock_timestamp();
CREATE TEMP TABLE wo_tmp WITH (OIDS)  AS
SELECT workorders1_.woid AS w_id, workorders1_.woid4seg AS w_id4seg
FROM common.workorders workorders1_ 
INNER JOIN common.lines lines2_ ON workorders1_.wolineid=lines2_.lineid
INNER JOIN common.products products2_ ON workorders1_.woprodid=products2_.prodid 
INNER JOIN common.depts depts3_ ON lines2_.linedeptid=depts3_.deptid 
WHERE workorders1_.wostatus = 'F'
  AND workorders1_.wotypestatus = ANY ('{R,C,I,D}'::text[])
  AND (p_deptid = 0 OR (depts3_.deptid = p_deptid AND ((p_deptid = 5 AND workorders1_.wosegid = 1) OR workorders1_.wosegid = 4)))
  AND (p_lineid = 0 OR lines2_.lineid = p_lineid)
  AND (p_prodid = 0 OR products2_.prodid = p_prodid)
  AND (p_nrkokili = 0 OR workorders1_.wonrkokili = p_nrkokili)
  AND (p_accepted = TRUE OR workorders1_.worjacceptstatus = 'Y')
  AND workorders1_.wodateleaverr BETWEEN p_dfr AND p_dto
  AND lines2_.status <> 'D';

CREATE INDEX wo_tmp_w_id_idx
    ON wo_tmp USING btree (w_id ASC NULLS LAST);
CREATE INDEX wo_tmp_w_id4seg_idx
    ON wo_tmp USING btree (w_id4seg ASC NULLS LAST);

EndTime := clock_timestamp();
Delta := extract('epoch' from EndTime)::bigint - extract('epoch' from StartTime)::bigint;
RAISE NOTICE 'Duration [0] in seconds=%', Delta;

Here's an explain analyze report: http://explain.depesz.com/s/uerF
It's strange: when I execute the function, I get the notice Duration [0] in seconds=11. When I run the query alone, without creating the temp table, it takes ~300 ms.

Is it possible that inserting ~73k records into a temporary table takes 11 seconds? Can I speed it up?


There are 2 answers below.

BEST ANSWER

When you fill a temp table inside a function, more than one issue can come into play:

  • locking issues - every temp table is a regular table with entries in the system catalog. Intensively creating and dropping these tables creates high overhead and a lot of locking. Sometimes temp tables can be replaced by arrays; that is not your case, because you need the indexes.

  • blind optimization - embedded SQL in plpgsql functions is planned for the most common values (this mechanism was slightly enhanced in PostgreSQL 9.2, but performance issues are still possible). It is not planned for the current parameter values, and that can cause a performance problem. Then dynamic SQL is necessary (a minimal sketch follows at the end of this answer). Some links on this issue (one and second)

  • some hardware or file-system issue - I am a little confused that WITHOUT OIDS helps. It looks like your file system is a terrible bottleneck for you. Temp tables are stored in the file-system cache, and storing 73k rows there should be fast; removing four bytes per row (out of ~35) is not a big change:

    postgres=# create table t1 with (oids) as select 1 a,2 b,3 c from generate_series(1,73000);
    SELECT 73000
    Time: 302.083 ms
    postgres=# create table t2  as select 1 a,2 b,3 c from generate_series(1,73000);
    SELECT 73000
    Time: 267.459 ms
    postgres=# create temp table t3 with (oids)  as select 1 a,2 b,3 c from generate_series(1,73000);
    SELECT 73000
    Time: 154.431 ms
    postgres=# create temp table t4  as select 1 a,2 b,3 c from generate_series(1,73000);
    SELECT 73000
    Time: 153.085 ms
    postgres=# \dt+ t*
                        List of relations
      Schema   | Name | Type  | Owner |  Size   | Description 
    -----------+------+-------+-------+---------+-------------
     pg_temp_2 | t3   | table | pavel | 3720 kB | 
     pg_temp_2 | t4   | table | pavel | 3160 kB | 
     public    | t1   | table | pavel | 3720 kB | 
     public    | t2   | table | pavel | 3160 kB | 
    (4 rows)
    

Writing a 3 MB file to the file system should take significantly less than one second, so an 11-second overhead is strange. P.S. The default temp_buffers is 8 MB, so your result should be stored in memory only - which makes this hypothesis probably false, and the blind-optimization hypothesis more likely.
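
To illustrate the dynamic-SQL workaround: a minimal sketch that shows only two of the question's filters (p_lineid and the date range); the remaining filters would be appended the same way, and it assumes the parameters are not NULL. Because format() embeds the current values as constants, the planner sees them and can drop the dead "p_... = 0 OR" branches at plan time instead of using a generic plan:

    -- minimal sketch of the function body, reduced filter set;
    -- %1$s inlines p_lineid as a number, %L quotes the dates as literals
    EXECUTE format(
        'CREATE TEMP TABLE wo_tmp AS
         SELECT workorders1_.woid AS w_id, workorders1_.woid4seg AS w_id4seg
         FROM common.workorders workorders1_
         INNER JOIN common.lines lines2_ ON workorders1_.wolineid = lines2_.lineid
         WHERE workorders1_.wostatus = ''F''
           AND (%1$s = 0 OR lines2_.lineid = %1$s)
           AND workorders1_.wodateleaverr BETWEEN %2$L AND %3$L',
        p_lineid, p_dfr, p_dto);

    -- the indexes are created on the result exactly as before
    CREATE INDEX wo_tmp_w_id_idx ON wo_tmp (w_id);
    CREATE INDEX wo_tmp_w_id4seg_idx ON wo_tmp (w_id4seg);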

ANSWER

For starters, don't use WITH (OIDS) for temporary tables. Ever. The use of OIDs in regular tables is discouraged, and that goes doubly for temp tables. Dropping them also reduces the required RAM / space on disk, which is probably the main bottleneck here. Switch to WITHOUT OIDS.

Next, a likely cause (educated guess) is a lack of temp buffers, which forces the temp table to spill to disk. Check the actual size of the temp table with:

SELECT pg_size_pretty(pg_relation_size('wo_tmp'));

and set temp_buffers accordingly, possibly for the session only. Round up generously, enough to avoid writing to disk.
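
For example (a sketch; '32MB' is a placeholder, to be replaced by the measured size plus generous headroom). Note that temp_buffers can only be changed before the first use of temporary tables in a session; later changes have no effect for that session:

    SET temp_buffers = '32MB';  -- placeholder value; must run before the
                                -- session touches any temporary table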
