The following error occurs when querying data:

The number of partitions [274000] relevant to the query is too large. Please add more specific filtering conditions on partition columns in WHERE clause, or consider changing the value of the configuration parameter maxPartitionNumPerQuery.

The following is my script:
// Create a database
create database "dfs://stock_tick"
partitioned by VALUE(2020.01.01..2035.01.01), HASH([SYMBOL, 50]),
engine='TSDB',
atomic='TRANS',
chunkGranularity='TABLE'
// Create a table
create table "dfs://stock_tick"."tick_data" (
code SYMBOL[comment="Stock Code"],
market INT[comment="Market Code"],
date DATE[comment="Trading Date",compress="delta"],
time TIME[comment="Timestamp",compress="delta"],
close FLOAT[comment="Last Price"],
close_rate FLOAT[comment="Change Range"],
up_down FLOAT[comment="Change Amount"],
vol_ratio FLOAT[comment="Relative Ratio"]
)
partitioned by date, code,
sortColumns=["market","code","time"],
keepDuplicates=LAST
This error is expected given the partition scheme: the VALUE partition covers 5,480 dates (2020.01.01 through 2035.01.01 spans 5,479 days, plus the inclusive endpoint), and each date is further split into 50 HASH buckets, so the database contains 5,480 x 50 = 274,000 partitions. A query without filters on the partition columns must scan all of them, which exceeds the limit set by maxPartitionNumPerQuery.

If the partition sizing is reasonable for your workload, there is no need to redesign the database. There are two ways to resolve the error:

1. Add filtering conditions on the partition columns (date and code) in the WHERE clause, so the query reads only the partitions holding the relevant data instead of all of them.
2. Increase the configuration parameter maxPartitionNumPerQuery in dolphindb.cfg (for standalone mode) or cluster.cfg (for cluster mode) to raise the maximum number of partitions a single query may scan, then restart the system for the change to take effect.
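As a sketch of the first option, a query that constrains both partition columns lets DolphinDB prune non-matching partitions before scanning. The table and column names below follow the script above; the date range and stock code are illustrative values:

```
// Load the partitioned table created by the script above
t = loadTable("dfs://stock_tick", "tick_data")

// Filter on both partition columns: date (VALUE partition) and code (HASH partition).
// Restricting code to a single symbol touches one hash bucket per date, so this
// query scans about 31 partitions instead of all 274,000.
select * from t where date between 2024.01.01 : 2024.01.31, code = `000001
```

Filtering on only one partition column also helps: a date-only filter for the same month would still cut the scan from 274,000 partitions down to 31 x 50 = 1,550.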