I'm updating records of various entities in a database and am actively considering whether it makes sense to create tailored UPDATE requests to prolong SSD life.
Say, if I have an object with 20 properties and a table with corresponding columns for it.
Will it make any difference in the number of bytes written (sectors rewritten) if, say, instead of executing
UPDATE MyTable
SET
Field1 = @Value1,
Field2 = @Value2,
Field3 = @Value3,
.....
Field18 = @Value18,
Field19 = @Value19,
Field20 = @Value20
WHERE
ID = @ID
I execute only
UPDATE MyTable
SET
Field3 = @Value3,
Field17 = @Value17,
Field20 = @Value20
WHERE
ID = @ID
if only values 3, 17 and 20 were changed?
Will it have any benefit?
It will make a difference, but not in the way you think. It will primarily affect reads, not writes. And SSDs are not really affected by large amounts of reads.
SQL Server already optimizes for "non-updating" updates: it reads the value from the existing row and only writes where it needs to. If nothing in the row changed (and you weren't using RCSI isolation, and there are no rowversion or GENERATED ALWAYS columns), then the row isn't updated at all.

But all updates write whole pages, and a row cannot span more than one page, so either way you are writing a whole page. The only time this wouldn't happen is when you have a LOB max type column, which stores its data off-row if it's too large.

So the write almost always happens, but there is also the read to take into account, because SQL Server needs to check whether the value has changed. This requires it to use an index that actually contains those columns, which may detrimentally affect the execution plan.
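If you want to observe the non-updating-update behavior yourself, here is a rough sketch. It assumes a throwaway SQL Server database in SIMPLE recovery; fn_dblog is an undocumented function, so treat its output as indicative only and never run this against a production database.

```sql
-- Sketch (assumption: scratch database, SIMPLE recovery model).
CREATE TABLE MyTable (ID int PRIMARY KEY, Field3 int, Field17 int, Field20 int);
INSERT INTO MyTable (ID, Field3, Field17, Field20) VALUES (1, 10, 20, 30);

CHECKPOINT;  -- flush prior log activity so the next count is meaningful

-- Update with identical values: SQL Server detects nothing changed.
UPDATE MyTable SET Field3 = 10, Field17 = 20, Field20 = 30 WHERE ID = 1;

-- Count log records generated since the checkpoint; a non-updating
-- update should add little beyond the checkpoint records themselves.
-- Compare against a run where the values actually change.
SELECT COUNT(*) AS LogRecords FROM fn_dblog(NULL, NULL);
```

Running the same experiment with genuinely changed values shows additional LOP_MODIFY_ROW-style log records, which is the row-level write the answer describes.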
So, if you know you aren't changing those columns, don't add them to the UPDATE. Otherwise, I wouldn't worry about it; it's already mostly optimized.