User-defined function takes 10 times longer than plain SQL query


Again problems with a user-defined function; maybe somebody has an idea.

Calling the function:

SELECT * FROM get_process_single_real('Befuellung','P24A2A009','24_NTest','Gewicht'); takes up to 15 seconds.

Executing the code of the function inline (without the BEGIN ... RETURN QUERY ... END wrapper) takes a few hundred milliseconds.

BEGIN
    RETURN QUERY SELECT
        tb_fixed_station.station_name station_bezeichnung,
        tb_fixed_process.process_name prozess_bezeichnung,
        tb_datas_process_detail.part_identifier werkstueck_ident,
        tb_datas_process_detail.job_identifier job_ident,
        tb_datas_process_detail.created_at zeitstempel,
        tb_param_setparameter.setparameter_name wert_bezeichnung,
        tb_datas_real_detail.real_value wert
    FROM 
        tb_datas_process_detail
    JOIN 
        tb_datas_real_detail ON (tb_datas_real_detail.id_datas_process_detail = tb_datas_process_detail.id)
    JOIN
        tb_param_setparameter ON (tb_datas_real_detail.id_param_setparameter = tb_param_setparameter.id)
    JOIN
        tb_fixed_process ON (tb_datas_process_detail.id_actfixed_process = tb_fixed_process.id)
    JOIN
        tb_fixed_station ON (tb_fixed_process.id_fixed_station = tb_fixed_station.id)
    WHERE
        tb_datas_process_detail.part_identifier = part_id
        and tb_datas_process_detail.job_identifier = job_id
        and tb_fixed_process.process_name = process
        and tb_param_setparameter.setparameter_name = in_setparameter_name
    LIMIT 1;
END; 
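
For comparison, the call can also be timed server-side. A minimal sketch (not from the original post; EXPLAIN cannot look inside the function, but the reported runtime confirms the time is spent in the call itself):

-- The function is a black box to EXPLAIN, so the inner plan stays hidden,
-- but the reported execution time shows where the 15 seconds go.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM get_process_single_real('Befuellung', 'P24A2A009', '24_NTest', 'Gewicht');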

Any hints are appreciated.

The version of the database is: PostgreSQL 12.2, compiled by Visual C++ build 1914, 64-bit

I've read that UDFs tend to be slower, but 10 times is too much. Is there a way to recompile the execution plan every time the function is called? I've also read something about updating statistics.
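
(For reference, "updating statistics" in PostgreSQL means running ANALYZE on the tables involved; the table names below are simply taken from the query above:)

-- refresh planner statistics for the tables used in the query
ANALYZE tb_datas_process_detail;
ANALYZE tb_datas_real_detail;
ANALYZE tb_param_setparameter;
ANALYZE tb_fixed_process;
ANALYZE tb_fixed_station;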

Answer by Laurenz Albe:

It might well be a case of a suboptimal generic plan being chosen. You can either investigate why the generic plan is bad and try to improve that, or you can tell PostgreSQL to generate a new custom plan every time:

ALTER FUNCTION get_process_single_real SET plan_cache_mode = force_custom_plan;
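
To check whether a generic plan really is the problem, one approach (a sketch, outside the function) is to recreate the situation with a prepared statement built from the same query; $1 through $4 replace the function parameters, and the mapping of the original call's literals onto them is a guess based on the column names:

-- The function's query as a prepared statement, parameters replaced by $1..$4.
PREPARE p(text, text, text, text) AS
SELECT
    tb_fixed_station.station_name,
    tb_fixed_process.process_name,
    tb_datas_process_detail.part_identifier,
    tb_datas_process_detail.job_identifier,
    tb_datas_process_detail.created_at,
    tb_param_setparameter.setparameter_name,
    tb_datas_real_detail.real_value
FROM tb_datas_process_detail
JOIN tb_datas_real_detail  ON tb_datas_real_detail.id_datas_process_detail = tb_datas_process_detail.id
JOIN tb_param_setparameter ON tb_datas_real_detail.id_param_setparameter = tb_param_setparameter.id
JOIN tb_fixed_process      ON tb_datas_process_detail.id_actfixed_process = tb_fixed_process.id
JOIN tb_fixed_station      ON tb_fixed_process.id_fixed_station = tb_fixed_station.id
WHERE tb_datas_process_detail.part_identifier = $1
  AND tb_datas_process_detail.job_identifier = $2
  AND tb_fixed_process.process_name = $3
  AND tb_param_setparameter.setparameter_name = $4
LIMIT 1;

-- PostgreSQL only considers switching to a generic plan after five executions,
-- so run the EXECUTE at least six times; the last EXPLAIN then typically
-- shows the (possibly bad) generic plan.
EXPLAIN ANALYZE EXECUTE p('P24A2A009', '24_NTest', 'Befuellung', 'Gewicht');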

If my guess misses the mark, you will have to dig deeper. The best way to do that is to activate auto_explain. Set shared_preload_libraries = 'auto_explain', set auto_explain.log_min_duration low enough and auto_explain.log_nested_statements = on, and after a restart you will get the execution plans of all slow statements in the log.

You get even better data if you set auto_explain.log_analyze = on and auto_explain.log_buffers = on, but that will impact your overall performance.
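
Put together, the relevant postgresql.conf entries might look like this (the threshold value is only an example):

# auto_explain setup -- a restart is needed for shared_preload_libraries
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '500ms'      # example threshold, well below the 15 s you are seeing
auto_explain.log_nested_statements = on      # log statements executed inside functions
# optional, more detail at some performance cost:
auto_explain.log_analyze = on
auto_explain.log_buffers = on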

Once you have the execution plan of a slow execution (and some PostgreSQL skills), you should be able to figure out what is going on.