Monitoring SQL Log File


After deploying a project on the client's machine, the SQL database log file has grown to about 450 GB, although the database itself is less than 100 MB. The recovery model is set to Simple, and the transactions come from a Windows service that sends insert and update statements every 30 seconds. My question is: how do I find the reason for the log file growth? I would like to know how to monitor the log file so I can see exactly which transactions are causing the problem. Should I debug the front end, or is there a way to expose the transactions that cause the log file to grow? Thank you.

3 Answers

Accepted Answer

Note that the Simple recovery model does not allow log backups: it keeps the least amount of log information and relies on CHECKPOINT to truncate the log. If this is a critical database, consider protecting the client by switching to the FULL recovery model. Yes, it uses more disk space, but disk space is cheap, and you gain point-in-time recovery and much better control over your log files. Trying to be concise:

  • A) Under the Simple recovery model, the transaction log is only truncated when a CHECKPOINT occurs.

  • B) Unfortunately, large or long-running open transactions, as well as operations such as BACKUP, snapshot creation, and log scans, will prevent that truncation, and the log will keep growing until those transactions complete.

  • C) Your current recovery plan relies on having the right version of your .bak file, which, depending on backup frequency, may mean hours of potential data loss.

In other words, the log is that ridiculous size because your database has not been able to truncate committed transactions at CHECKPOINT often enough.
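
As a first sanity check, it is worth confirming the recovery model and forcing a checkpoint manually. A minimal sketch, assuming a hypothetical database named MyAppDb and a hypothetical backup path (substitute your own), and assuming you decide to move to FULL recovery as suggested above:

-- Check the current recovery model (MyAppDb is a placeholder name).
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'MyAppDb';

-- Switch to FULL recovery; a full backup must follow to start the log chain,
-- after which log backups become possible.
ALTER DATABASE MyAppDb SET RECOVERY FULL;
BACKUP DATABASE MyAppDb TO DISK = N'D:\Backups\MyAppDb_full.bak';

-- While still under SIMPLE recovery, you can also force a checkpoint manually
-- to see whether the log can be truncated at all.
USE MyAppDb;
CHECKPOINT;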

A little note on log files

Foremost, log files are not automatically truncated every time a transaction commits (otherwise you would only have the last committed transaction to go back to). Under the FULL recovery model, taking frequent log backups ensures pertinent changes are kept for point-in-time recovery, and DBCC SHRINKFILE will squeeze the log file down to the smallest available/specified size.
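
For example, a sketch of that backup-then-shrink routine, again with hypothetical database, path, and logical file names:

-- Back up the log (requires FULL recovery and a prior full backup).
BACKUP LOG MyAppDb TO DISK = N'D:\Backups\MyAppDb_log.trn';

-- Find the logical name of the log file, then shrink it (target size in MB).
USE MyAppDb;
SELECT name, type_desc FROM sys.database_files;
DBCC SHRINKFILE (MyAppDb_log, 1024);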

Use DBCC SQLPERF(LOGSPACE) to see how large your log file is and how much of it is in use. Once you take a log backup (under FULL recovery), the log is truncated back to the remaining uncommitted/active transactions. (Do not confuse truncation with shrinking the file.)
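
A quick way to capture that output for sorting or trending, assuming nothing beyond the documented DBCC SQLPERF(LOGSPACE) result set (database name, log size in MB, percent used, status):

-- Report log size (MB) and percent in use for every database on the instance.
DBCC SQLPERF (LOGSPACE);

-- Optionally capture the output into a temp table.
CREATE TABLE #logspace (
    database_name sysname,
    log_size_mb   float,
    pct_used      float,
    status        int
);
INSERT INTO #logspace EXEC ('DBCC SQLPERF (LOGSPACE)');
SELECT * FROM #logspace ORDER BY log_size_mb DESC;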

Some suggestions on researching your transactions:

  • You can use the system DMVs (e.g. sys.dm_exec_query_stats) to see the most expensive, most frequent, and currently active plans.
  • You can search the log file itself using an undocumented table-valued function, fn_dblog; see the sketches after this list.
  • Pinal Dave has great info on this topic: Beginning Reading Transaction Log.
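
First, a sketch of the DMV route mentioned above, ranking cached plans by logical writes as a rough proxy for the statements generating the most log activity (standard, documented DMVs; no other assumptions):

SELECT TOP (10)
       qs.total_logical_writes,
       qs.execution_count,
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                 (CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER  BY qs.total_logical_writes DESC;

And a sketch of the fn_dblog route. It is undocumented and unsupported, so treat the column names as subject to change between versions, and prefer running it against a test copy:

-- NULL, NULL scans the whole active portion of the current database's log.
SELECT TOP (100)
       [Current LSN], Operation, Context, [Transaction ID], [Transaction Name]
FROM   fn_dblog(NULL, NULL);

-- Which operations dominate the active log?
SELECT Operation,
       COUNT(*) AS record_count,
       SUM(CAST([Log Record Length] AS bigint)) AS total_bytes
FROM   fn_dblog(NULL, NULL)
GROUP  BY Operation
ORDER  BY total_bytes DESC;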

Answer 2

A log file is text, and depending on your log levels and how many errors and messages you receive, these files can grow very quickly.

You need to rotate your logs with something like logrotate, although from your question it sounds like you're using Windows, so I'm not sure what the solution for that would be.

The basics of log rotation are taking daily/weekly versions of the logs, compressing them with gzip or similar, and discarding the uncompressed versions.

As it is text with a lot of repetition, this will make the files very small in comparison and should solve your storage issue.

Answer 3

Log file space won't be reused if there is an open transaction. You can verify the reason the log space cannot be reused using the DMV below:

SELECT log_reuse_wait_desc, database_id FROM sys.databases;

In your case, the database is set to Simple recovery and is only 100 MB, but the log has grown up to 450 GB, which is huge.

My theory is that there may be some open transactions that prevented log space reuse; a log file won't shrink back on its own once it has grown.

As of now, you can run the DMV above to see what is preventing log space reuse at this point; you can't go back in time to know what prevented it originally.
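
To go one step further, a minimal sketch combining that DMV with DBCC OPENTRAN, again assuming a hypothetical database name MyAppDb:

-- What is the log currently waiting on? (NOTHING means it can be reused.)
SELECT name, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = 'MyAppDb';

-- Report the oldest active transaction in the current database, if any.
USE MyAppDb;
DBCC OPENTRAN;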