What are the circumstances which make NOLOCK risky?


I've read this and this.

My question is: What are the circumstances which make NOLOCK risky?

  1. Large rows with enough data that they can cross a page boundary. This requires a large number of columns, or varchar(max), nvarchar(max), varbinary(max), xml, text, ntext, or image columns.

  2. A non-clustered index PK lookup.

  3. The row was involved in a transaction which was subsequently rolled back.

  4. Database maintenance can still cause data movement and errors.

  5. If you have a variable-width column and you expand it with an update, the row could get moved to a new page. Anything that causes data to move to a new page can be a problem, and data movement between pages can cause "Error 601: Could not continue scan with NOLOCK due to SQL Server data movement." [3]

Points 1 and 2 can be found here under "torn reads".

Are there any other circumstances which can cause partial/incorrect rows returned?
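For concreteness, the hint under discussion looks like this (table and column names are hypothetical):

```sql
-- NOLOCK is a synonym for READUNCOMMITTED: the scan takes no shared
-- locks, so it can observe uncommitted, duplicated, or missed rows.
SELECT OrderId, Total
FROM dbo.Orders WITH (NOLOCK)
WHERE CustomerId = 42;
```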

1 Answer

Charlieface On

Firstly:

large rows that can cross a page. This requires a large number of columns, or varchar(max), nvarchar(max), varbinary(max), xml, text, ntext, or image columns.

No, you can get that even with a single one of those columns, because the data is stored off-row in any case where the row exceeds 8 KB.
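A sketch of that case (table and column names hypothetical): one LOB column is enough once the value is pushed off-row.

```sql
-- A single varchar(max) column whose value exceeds the ~8 KB in-row
-- limit is stored off-row; a NOLOCK reader can then see the in-row
-- portion and the off-row portion at inconsistent points in time.
CREATE TABLE dbo.Documents (
    DocId int PRIMARY KEY,
    Body  varchar(max) NOT NULL
);

INSERT dbo.Documents (DocId, Body)
VALUES (1, REPLICATE(CAST('x' AS varchar(max)), 20000)); -- ~20 KB, stored off-row
```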

The row was involved in a transaction which was subsequently rolled back.

No, you can get problems even if the transaction wasn't rolled back. If a page split occurs, or the row moves to a different page because its position in the index sort order changed, then the row could be read twice, not at all, or read differently each time.

And it doesn't have to be that row, it just needs to be on a page which is being changed. It could even be an index or table rebuild.
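For example (hypothetical table), both of the following move rows between pages and can trip up a concurrent NOLOCK scan:

```sql
-- Growing a variable-width column can relocate the row to a new page:
UPDATE dbo.Documents
SET Body = Body + REPLICATE(CAST('y' AS varchar(max)), 5000)
WHERE DocId = 1;

-- Routine maintenance moves rows too, with no user transaction involved:
ALTER INDEX ALL ON dbo.Documents REBUILD;
```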


Other possible problems:

  • If you get an allocation-order scan, then you may be subject to error 601:

Could not continue scan with NOLOCK due to data movement

  • It would also apply an extra BULK_OPERATION lock, which may cause more blocking on other queries.
  • You can end up with worse blocking if there is a Sch-M schema modification lock ongoing.
  • Most importantly, in my opinion: it's all very well to say "I don't care if I read a row twice, or not at all", but what happens if the row is part of an INNER JOIN query, where that row is expected to exist exactly once? You could double your entire resultset, or zero it out, because of the join condition.
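A sketch of that join hazard (table names hypothetical):

```sql
-- If the NOLOCK scan of Customers returns a row twice (page split) or
-- not at all (row moved behind the scan), every matching Orders row is
-- duplicated or dropped with it, corrupting the whole resultset.
SELECT o.OrderId, c.Name
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c WITH (NOLOCK)
    ON c.CustomerId = o.CustomerId;
```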

Bottom line, there is almost never a reason to use NOLOCK.

If you want performance and think no-one is changing the table anyway, then just use WITH (TABLOCK) and be done.
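For example:

```sql
-- One shared table lock for the whole scan: consistent reads, and
-- other readers are still not blocked (shared locks are compatible).
SELECT OrderId, Total
FROM dbo.Orders WITH (TABLOCK);
```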

Or if you are sure no-one is ever going to change anything in your database then set it to READ_ONLY and no locks will get taken in the first place.
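For example (database name hypothetical):

```sql
-- A read-only database takes no locks for reads in the first place:
ALTER DATABASE MyDb SET READ_ONLY WITH ROLLBACK IMMEDIATE;
```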

And if you are worried about blocking then use SNAPSHOT or READ COMMITTED SNAPSHOT isolation and avoid the whole issue.
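For example (database name hypothetical):

```sql
-- Readers see a consistent row-versioned snapshot and neither block
-- writers nor are blocked by them.
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Or opt in per transaction:
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```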

The only time you should ever use NOLOCK is either in diagnostic queries (to investigate blocking or currently running queries) or when querying system tables. And if you really do need to use it, set the isolation level explicitly instead.
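For example, in a diagnostic session:

```sql
-- Setting the isolation level once is clearer than scattering NOLOCK
-- hints; for troubleshooting like this, dirty reads are acceptable.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT session_id, blocking_session_id, wait_type
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```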