I have a large table, crumbs (100M+ rows, about 100 GB). It is just a collection of JSON stored as text. There is an index on the column run_id, which has about 10K unique values, so each run is small (1K to 1M rows).
For a simple query:
explain analyze verbose select * from crumbs c
where c.run_id='2016-04-26T19_02_01_015Z' limit 10
the plan is good:
Limit  (cost=0.56..36.89 rows=10 width=2262) (actual time=1.978..2.016 rows=10 loops=1)
  Output: id, robot_id, run_id, content, created_at, updated_at, table_id, fork_id, log, err
  ->  Index Scan using index_crumbs_on_run_id on public.crumbs c  (cost=0.56..5533685.73 rows=1523397 width=2262) (actual time=1.975..1.996 rows=10 loops=1)
        Output: id, robot_id, run_id, content, created_at, updated_at, table_id, fork_id, log, err
        Index Cond: ((c.run_id)::text = '2016-04-26T19_02_01_015Z'::text)
Planning time: 0.117 ms
Execution time: 2.048 ms
But if I try to look inside the JSON stored in one of the columns, it wants to do a full scan:
explain verbose select x from crumbs c,
lateral json_array_elements(c.content::json) x
where c.run_id='2016-04-26T19_02_01_015Z'
limit 10
Plan:
Limit  (cost=0.01..0.69 rows=10 width=32)
  Output: x.value
  ->  Nested Loop  (cost=0.01..10332878.67 rows=152343800 width=32)
        Output: x.value
        ->  Seq Scan on public.crumbs c  (cost=0.00..7286002.66 rows=1523438 width=895)
              Output: c.id, c.robot_id, c.run_id, c.content, c.created_at, c.updated_at, c.table_id, c.fork_id, c.log, c.err
              Filter: ((c.run_id)::text = '2016-04-26T19_02_01_015Z'::text)
        ->  Function Scan on pg_catalog.json_array_elements x  (cost=0.01..1.01 rows=100 width=32)
              Output: x.value
              Function Call: json_array_elements((c.content)::json)
I tried:
analyze crumbs;
but it made no difference.
Update 1
Disabling sequential scans for the whole database works, but that is not an option in our application; in many other places the seq scan should stay:
set enable_seqscan=false;
Plan:
Limit  (cost=0.57..1.14 rows=10 width=32) (actual time=0.120..0.294 rows=10 loops=1)
  Output: x.value
  ->  Nested Loop  (cost=0.57..8580698.45 rows=152343400 width=32) (actual time=0.118..0.273 rows=10 loops=1)
        Output: x.value
        ->  Index Scan using index_crumbs_on_run_id on public.crumbs c  (cost=0.56..5533830.45 rows=1523434 width=895) (actual time=0.087..0.107 rows=10 loops=1)
              Output: c.id, c.robot_id, c.run_id, c.content, c.created_at, c.updated_at, c.table_id, c.fork_id, c.log, c.err
              Index Cond: ((c.run_id)::text = '2016-04-26T19_02_01_015Z'::text)
        ->  Function Scan on pg_catalog.json_array_elements x  (cost=0.01..1.01 rows=100 width=32) (actual time=0.011..0.011 rows=1 loops=10)
              Output: x.value
              Function Call: json_array_elements((c.content)::json)
Planning time: 0.124 ms
Execution time: 0.337 ms
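A minimal sketch, in case it helps: the setting can at least be scoped to a single transaction with SET LOCAL rather than the whole database, so seq scans elsewhere are unaffected:
begin;
set local enable_seqscan = off;  -- reverts automatically at commit or rollback
select x from crumbs c,
lateral json_array_elements(c.content::json) x
where c.run_id = '2016-04-26T19_02_01_015Z'
limit 10;
commit;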
Update 2
The schema is:
CREATE TABLE crumbs
(
  id serial NOT NULL,
  run_id character varying(255),
  content text,
  created_at timestamp without time zone,
  updated_at timestamp without time zone,
  CONSTRAINT crumbs_pkey PRIMARY KEY (id)
);
CREATE INDEX index_crumbs_on_run_id
ON crumbs
USING btree
(run_id COLLATE pg_catalog."default");
Update 3
Rewriting the query like so:
select json_array_elements(c.content::json) x
from crumbs c
where c.run_id='2016-04-26T19_02_01_015Z'
limit 10
gets the correct plan. It is still unclear why the wrong plan is chosen for the second query.
Rewriting the query so that the limit is applied first, and the cross join against the function only afterwards, should make Postgres use the index:
Using a derived table:
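A minimal sketch of the derived-table form, assuming the same run_id filter and limit as in the question:
select x
from (
  -- pick the 10 crumbs rows first ...
  select c.content
  from crumbs c
  where c.run_id = '2016-04-26T19_02_01_015Z'
  limit 10
) c,
-- ... then unnest each content array
lateral json_array_elements(c.content::json) x;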
Alternatively, using a CTE:
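A sketch of the CTE form; the CTE name limited_crumbs is only illustrative:
with limited_crumbs as (
  -- limit the crumbs rows before joining against the function
  select content
  from crumbs
  where run_id = '2016-04-26T19_02_01_015Z'
  limit 10
)
select x
from limited_crumbs c,
lateral json_array_elements(c.content::json) x;
On the PostgreSQL versions current at the time (pre-12), a CTE is also an optimization fence, so the limit is guaranteed to be applied before the join.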
Or use json_array_elements() directly in the select list, as sketched below. However, this is something different from the other two queries because it applies the limit after "unnesting" the JSON array, not to the number of rows returned from the crumbs table (which is what your first query does).
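For reference, that select-list form is essentially the query from Update 3:
select json_array_elements(c.content::json) as x
from crumbs c
where c.run_id = '2016-04-26T19_02_01_015Z'
limit 10;  -- counts unnested array elements, not crumbs rows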