How can I read dirty values in SQL UPDATE statement WHERE clause


Let's assume I have the following query in two separate SSMS query windows:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRANSACTION
UPDATE dbo.Jobs
SET [status] = 'Running'
OUTPUT Inserted.*
WHERE [status] = 'Waiting'
--I'm NOT committing yet
--Commit Transaction

I run query window 1 (but do not commit), and then I run query window 2.

I want query window 2 to immediately update only the rows that were inserted after I started query 1 (all new records come in with a status of 'Waiting'). However, SQL Server waits for the first query to finish, because the UPDATE statement does not read dirty values, even with the isolation level set to READ UNCOMMITTED.

Is there a way to overcome this?

In my application I will have 2 (or more) processes running this query. I want process 2 to be able to pick up the rows that process 1 has not picked up; I don't want process 2 to have to wait until process 1 is finished.

1 Answer

What you are asking for is simply impossible.

Even at the lowest isolation level of READ UNCOMMITTED (aka NOLOCK), an X-Lock (exclusive) must be taken in order to make modifications. In other words, writes are always locked, even if the reads that fetched those rows were not locked.

So even though session 2 is running under READ UNCOMMITTED also, if it wants to do a modification it must also take an X-Lock, which is incompatible with the first X-Lock.
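
While both sessions are open, you can observe the blocking from a third query window. This is just a diagnostic sketch using the sys.dm_exec_requests DMV; it will typically show query window 2 waiting on a lock (for an UPDATE, usually an LCK_M_U wait) with query window 1's session id as the blocker.

SELECT session_id,
       blocking_session_id,
       wait_type,
       wait_resource,
       command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;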

The solution here is either to do this in one session, or to commit immediately. In any case, do not hold locks for any length of time, as that can cause massive blocking chains and even deadlocks.
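
As a sketch of the "commit immediately" approach (assuming a JobId column, which is not shown in the question): claim the rows, capture them with OUTPUT, commit right away to release the locks, and then do the actual work outside the transaction.

DECLARE @claimed TABLE (JobId int);

BEGIN TRANSACTION;

UPDATE dbo.Jobs
SET [status] = 'Running'
OUTPUT Inserted.JobId INTO @claimed
WHERE [status] = 'Waiting';

COMMIT TRANSACTION;  -- locks are released here, so other sessions are no longer blocked

-- Do the long-running work afterwards, outside the transaction
SELECT JobId FROM @claimed;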

If you just want to skip over the rows that the other session has locked, rather than wait for them, you could use the WITH (READPAST) hint, as sketched below.
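
A sketch against the same dbo.Jobs table (TOP (10) is just an arbitrary batch size): each worker claims a batch of waiting rows and skips any rows another worker has already locked, instead of blocking behind them.

UPDATE TOP (10) j
SET [status] = 'Running'
OUTPUT Inserted.*
FROM dbo.Jobs AS j WITH (READPAST, ROWLOCK, UPDLOCK)
WHERE [status] = 'Waiting';

Every process runs the same statement; READPAST makes each one skip rows locked by the others, so two workers neither wait on each other nor claim the same row.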


READ UNCOMMITTED as an isolation level or as a hint has huge issues.

It can cause anything from deadlocks to completely incorrect results. For example, you could read a row twice, or not at all, when by the logical definition of the schema there should have been exactly one row. You could read entire pages twice or not at all.

You can get deadlocks due to U-Locks not being taken in UPDATE and DELETE statements.

And you still take schema locks, so you can still get stuck behind a synchronous statistics update or an index rebuild.