Bucket count help when creating an in-memory table in SQL Server 2016


I have some business logic that needs to run on a daily basis and will affect every row in the table. Once a record is finalised and can no longer be changed by the logic, it gets moved to an on-disk table. At its peak the table will hold approximately 30 million rows. It's very narrow: just the linkage columns to a main table and a key to a flag table. The flag key is the column that will be updated.

My question is: when I'm creating a table of this size, what bucket count should I be looking to use on the hash index?

The table will start off small, with likely only a few hundred thousand rows in April, but by the end of the financial year it will have grown to the maximum mentioned above, as previous years have indicated. I'm not sure whether a practically empty bucket array at the start will cause any issues, or whether it is fine to set the count at the 30 million mark from day one.
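For context on what an oversized bucket count actually costs: as far as I understand, SQL Server rounds the requested BUCKET_COUNT up to the next power of two and pre-allocates the whole bucket array (8 bytes per bucket) at table creation, regardless of how many rows exist. A rough sketch of the arithmetic, assuming that rounding behaviour:

```sql
-- Sketch only: assumes BUCKET_COUNT is rounded up to the next power of two
-- and each hash bucket is an 8-byte pointer.
-- 30,000,000 rounds up to 2^25 = 33,554,432 buckets,
-- so the empty index would pre-allocate roughly 256 MB of memory.
SELECT POWER(CAST(2 AS BIGINT), 25)                AS actual_bucket_count,
       POWER(CAST(2 AS BIGINT), 25) * 8 / 1048576  AS approx_memory_mb;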

Thanks in advance for your comments, suggestions and help.

I've provided the code below. I've tried googling what happens if the bucket count is high while the initial number of rows is low and the table grows over time, but found nothing to help me understand whether this will cause a performance issue.

CREATE TABLE [PRD].[CTRL_IN_MEM]
(
    [FILE_LOAD_ID] INT NOT NULL,
    [RECORD_IDENTIFIER] BIGINT NOT NULL,
    [FLAG_KEY] SMALLINT NOT NULL,
    [APP_LEVEL_PART] BIT NOT NULL,

--Line I'm not sure about
    CONSTRAINT [pk_CTRL_IN_MEM] PRIMARY KEY NONCLUSTERED HASH ([FILE_LOAD_ID], [RECORD_IDENTIFIER]) WITH (BUCKET_COUNT = 30000000),
    INDEX cci_CTRL_IN_MEM CLUSTERED COLUMNSTORE

) WITH (MEMORY_OPTIMIZED = ON, DURABILITY=SCHEMA_AND_DATA)
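In case it helps anyone answering: once the table is loaded I was planning to sanity-check the choice with the hash index DMV, and (since this is SQL Server 2016) resize the bucket count with an index rebuild if it turns out wrong. A sketch of what I had in mind, assuming the table and index names above:

```sql
-- Inspect actual bucket usage: lots of empty buckets suggests the count is
-- oversized (wasted memory); long average chains suggest it is undersized.
SELECT  i.name                        AS index_name,
        hs.total_bucket_count,
        hs.empty_bucket_count,
        hs.avg_chain_length,
        hs.max_chain_length
FROM    sys.dm_db_xtp_hash_index_stats AS hs
JOIN    sys.indexes                    AS i
        ON i.object_id = hs.object_id AND i.index_id = hs.index_id
WHERE   hs.object_id = OBJECT_ID(N'PRD.CTRL_IN_MEM');

-- SQL Server 2016 allows resizing the bucket count in place if needed:
ALTER TABLE [PRD].[CTRL_IN_MEM]
    ALTER INDEX [pk_CTRL_IN_MEM]
    REBUILD WITH (BUCKET_COUNT = 33554432);
```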
