I have a table with about 3,000,000 rows, described below.
Field |Type |Null|Key|Default|Extra |
------------------|------------|----|---|-------|--------------|
id |bigint(20) |NO |PRI| |auto_increment|
entity_name |varchar(50) |YES | | | |
field_type |varchar(50) |YES |MUL| | |
kv_key |text |YES |MUL| | |
kv_value |text |YES |MUL| | |
value_type |int(11) |YES | | | |
dataset_id |varchar(255)|YES |MUL| | |
dataset_version_id|varchar(255)|YES |MUL| | |
experiment_id |varchar(255)|YES |MUL| | |
experiment_run_id |varchar(255)|YES |MUL| | |
job_id |varchar(255)|YES |MUL| | |
project_id |varchar(255)|YES |MUL| | |
It has the following indexes:
Table |Non_unique|Key_name |Seq_in_index|Column_name |Collation|Cardinality|Sub_part|Packed|Null|Index_type|Comment|Index_comment|
--------|----------|-------------|------------|------------------|---------|-----------|--------|------|----|----------|-------|-------------|
keyvalue| 0|PRIMARY | 1|id |A | 0| | | |BTREE | | |
keyvalue| 1|kv_dsv_id | 1|dataset_version_id|A | 0| | |YES |BTREE | | |
keyvalue| 1|kv_p_id | 1|project_id |A | 0| | |YES |BTREE | | |
keyvalue| 1|kv_j_id | 1|job_id |A | 0| | |YES |BTREE | | |
keyvalue| 1|kv_e_id | 1|experiment_id |A | 0| | |YES |BTREE | | |
keyvalue| 1|kv_d_id | 1|dataset_id |A | 0| | |YES |BTREE | | |
keyvalue| 1|kv_er_id | 1|experiment_run_id |A | 0| | |YES |BTREE | | |
keyvalue| 1|kv_field_type| 1|field_type |A | 0| | |YES |BTREE | | |
keyvalue| 1|kv_kv_val | 1|kv_value |A | 0| 255| |YES |BTREE | | |
keyvalue| 1|kv_kv_key | 1|kv_key |A | 0| 255| |YES |BTREE | | |
An EXPLAIN of select count(*) returns in 178 ms with this plan:
id |count |task|operator info |
------------------|----------|----|-----------------------------------------------------------------------------|
StreamAgg_48 |1.00 |root|funcs:count(col_0) |
└─IndexReader_49 |1.00 |root|index:StreamAgg_8 |
└─StreamAgg_8 |1.00 |cop |funcs:count(1) |
└─IndexScan_39|2964754.00|cop |table:keyvalue, index:dataset_version_id, range:[NULL,+inf], keep order:false|
The actual query takes about 2.6 seconds. Here is its trace:
trace format = 'row' select count(*) from keyvalue;
operation |startTS |duration |
---------------------|---------------|------------|
session.getTxnFuture |20:21:00.074939|6.455µs |
├─session.Execute |20:21:00.074937|999.484µs |
├─session.ParseSQL |20:21:00.074980|17.226µs |
├─executor.Compile |20:21:00.075010|340.281µs |
├─session.runStmt |20:21:00.075370|525.307µs |
├─session.CommitTxn|20:21:00.075882|3.542µs |
├─recordSet.Next |20:21:00.075946|2.585509798s|
├─streamAgg.Next |20:21:00.075948|2.585497556s|
├─tableReader.Next |20:21:00.075950|2.585418751s|
├─tableReader.Next |20:21:02.661433|2.77µs |
├─recordSet.Next |20:21:02.661488|11.319µs |
└─streamAgg.Next |20:21:02.661491|587ns |
My TiDB setup is as follows:
storage--tidb-discovery-f96cbd845-kgbvx 1/1 Running 0 94d
storage--tidb-operator--controller-manager-fff86dd78-b7rmh 1/1 Running 0 3d19h
storage--tidb-pd-0 1/1 Running 0 3d18h
storage--tidb-pd-1 1/1 Running 0 3d18h
storage--tidb-pd-2 1/1 Running 0 3d18h
storage--tidb-tidb-0 2/2 Running 0 3d18h
storage--tidb-tidb-1 2/2 Running 0 3d18h
storage--tidb-tidb-initializer-9fff8f78d-gh4pr 1/1 Running 0 3d22h
storage--tidb-tikv-0 1/1 Running 0 3d18h
storage--tidb-tikv-1 1/1 Running 0 3d18h
storage--tidb-tikv-2 1/1 Running 0 3d18h
storage--tidb-tikv-3 1/1 Running 0 3d18h
TiDB version:
version() |
------------------|
5.7.25-TiDB-v3.0.4|
How can I speed up the query? I am also curious why the query picked the index it chose.
I'm a developer of TiDB. For your questions:
How to speed up the query?
There are 3,000,000 rows in the table and the SQL is a very simple one, so the execution plan is already the best one (with the partial aggregation pushed down to TiKV). What you can still do is increase the execution concurrency, as follows:
For your SQL, raising tidb_distsql_scan_concurrency should help. It's better to set it to (number of CPU cores / 8) * 15. You can change it with set session/global tidb_distsql_scan_concurrency = ?.
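As a minimal sketch, assuming the hosts have 16 CPU cores (so 16 / 8 * 15 = 30; the core count here is only an assumption for illustration):

-- check the current value (the default is 15)
show variables like 'tidb_distsql_scan_concurrency';
-- raise it for the current session only
set session tidb_distsql_scan_concurrency = 30;
-- or raise it for every new session
set global tidb_distsql_scan_concurrency = 30;
-- then re-run the query
select count(*) from keyvalue;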
Why did the query pick the index?
Because count(*) is equivalent to count(1), TiDB only needs to count key-value pairs, and an index key-value pair is smaller in bytes than the key-value pair scanned by a TableScan plan, so scanning the index is cheaper. There are some blogs FYI (and a comparison sketch after the list):
TiDB Internal (I) - Data Storage
TiDB Internal (II) - Computing
TiDB Internal (III) - Scheduling
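To see why the optimizer prefers the index scan, you can compare the chosen plan against a forced full table scan. A rough sketch, assuming TiDB honors the MySQL-style index hints (where an empty USE INDEX list means "use no index"):

-- plan chosen by the optimizer: scan the smallest index key-value pairs
explain select count(*) from keyvalue;
-- force a full table scan for comparison; each table key-value pair
-- carries all twelve columns, so there are many more bytes to scan
explain select count(*) from keyvalue use index ();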