SQL: updating a large number of rows

So one test would be to perform the delete as a single one-shot statement. I know this is going to require a massive scan and take a huge toll on the transaction log. :-) While that was running, I put together a different script that performs the same delete in chunks: 25,000, 50,000, 75,000 and 100,000 rows at a time.
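
As a sketch, a one-shot delete of this kind might look like the following; the table name dbo.SalesHistory and the date predicate are placeholders assumed for illustration, not the original script:

    -- One-shot delete: a single transaction that must log every
    -- affected row before any of the work is committed.
    DELETE dbo.SalesHistory              -- hypothetical table
    WHERE OrderDate < '20100101';        -- hypothetical purge predicate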

Each chunk is committed in its own transaction (so that if you need to stop the script, you can, and all previous chunks will already be committed instead of having to start over), and, depending on the recovery model, each commit is followed by either a CHECKPOINT or a log backup to minimize the ongoing impact on the transaction log.
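
A minimal sketch of the chunked approach, assuming the same hypothetical dbo.SalesHistory table and a 25,000-row chunk size (the CHECKPOINT applies under the simple recovery model; under the full recovery model you would take a log backup instead):

    DECLARE @rc INT = 1;

    WHILE @rc > 0
    BEGIN
        BEGIN TRANSACTION;

        -- Delete one chunk; TOP caps the rows touched in this transaction.
        DELETE TOP (25000) dbo.SalesHistory   -- hypothetical table
        WHERE OrderDate < '20100101';         -- hypothetical predicate

        SET @rc = @@ROWCOUNT;

        COMMIT TRANSACTION;

        -- Simple recovery model: let the log space be reused.
        CHECKPOINT;
        -- Full recovery model: back up the log instead, for example:
        -- BACKUP LOG MyDatabase TO DISK = N'C:\Backups\MyDatabase.trn';
    END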

A customer was running an UPDATE query affecting 100 million rows.

This query was the last step in an end-user application upgrade from SQL Server 2000 to SQL Server 2008.

However, table statistics were not updated following the RESTORE.

Initially, statistics were updated for all tables using: EXEC sp_MSforeachtable 'UPDATE STATISTICS ?'
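
For reference, that command written out, plus a FULLSCAN variant (the WITH FULLSCAN option is an addition here, not necessarily what was run in this case):

    -- Update statistics on every user table; '?' is sp_MSforeachtable's placeholder.
    EXEC sp_MSforeachtable 'UPDATE STATISTICS ?';

    -- Optional variant: scan every row for more accurate statistics on large tables.
    EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN';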

As for the results for duration and log impact: in general, while log size is significantly reduced, duration is increased.

Many times it turns out that the problem was a large delete operation, such as purging or archiving data, performed in one large transaction.

I wanted to run some tests to show the impact, on both duration and the transaction log, of performing the same data operation in chunks versus a single transaction.
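
One way to capture both measurements, as a rough sketch (the statement being timed is a placeholder):

    -- Log file size and percent used for every database, before and after each run.
    DBCC SQLPERF(LOGSPACE);

    -- Bracket the operation with timestamps to capture duration.
    DECLARE @start DATETIME2 = SYSDATETIME();
    -- ... run the one-shot or chunked delete here ...
    SELECT DATEDIFF(SECOND, @start, SYSDATETIME()) AS duration_seconds;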

Note that I did not try any of these tests with compression enabled (possibly a future test!), and I left the log autogrow settings at the terrible defaults (10%), partly out of laziness and partly because many environments out there have retained this awful setting.