SQL Server 2008 transaction logs are getting way too big

leeland

Just started using SQL Server 2008 and the transaction logs are ridiculously large. Does truncating them work differently in this version?

Are you backing them up? What recovery model is the database in question set to?

The following script will give you the information regarding all your databases and their respective recovery models.

Code:
SET NOCOUNT ON

-- Lists every database and its recovery model (FULL, SIMPLE, or BULK_LOGGED).
-- sysdatabases still works in 2008 but is deprecated; sys.databases is the newer view.
SELECT  CAST(name AS CHAR(65))                                  AS [DB Name],
        CAST(DATABASEPROPERTYEX(name, 'Recovery') AS CHAR(10))  AS [Recovery Model]
FROM    master.dbo.sysdatabases
ORDER BY
        name


If the logs are growing out of control, my gut says the database is set to the FULL recovery model with autogrow enabled on the log file.

You should have some level of backups running (e.g. full database backups followed by transaction log backups if the database is in the FULL recovery model).

The reason I say this is that backing up the log file clears out the committed transactions stored in the file, which decreases the amount of space used.

The 'physical' file won't shrink in size, but the logical (used) size will decrease once the log gets backed up. (It is kind of hard to get your head wrapped around the concept; if you need an example I can find one for you.)

Once you have the log 'cleared' out you can actually shrink the 'physical' file to a respectable size.
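Something like this, roughly — the database name MyDB and the logical log file name MyDB_log below are made-up placeholders, so substitute your own (you can find the logical log name with SELECT name FROM sys.database_files WHERE type_desc = 'LOG', run inside that database):

Code:
-- Back up the log so the inactive portion is cleared and can be reused
BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn'

-- Then shrink the physical log file down to a sane size (target in MB)
USE MyDB
DBCC SHRINKFILE (MyDB_log, 512)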

Then setting up some level of regular maintenance is recommended...

If the database is in SIMPLE mode, some schedule of backups still applies, such as daily fulls, or a weekly full with daily differentials...
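For example (names and paths are made up, just to show the pattern):

Code:
-- Weekly full backup
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_full.bak' WITH INIT

-- Daily differential (everything changed since the last full)
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_diff.bak' WITH DIFFERENTIAL, INIT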

If you need more help / info just post the question
 

Emulex

Yah, change the database to the SIMPLE recovery model — in that mode the log truncates itself automatically at checkpoints, so it stays small. Then back up the database regularly for your restore points; the backup file — compressed! — will be your saviour.
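Roughly like this (MyDB is a placeholder; note that in SQL Server 2008 backup compression requires Enterprise Edition — 2008 R2 added it to Standard):

Code:
-- Switch to SIMPLE recovery: the log truncates itself at checkpoints
ALTER DATABASE MyDB SET RECOVERY SIMPLE

-- Compressed full backup as your restore point
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB.bak'
    WITH COMPRESSION, INIT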

Be sure to use bulk-insert mode (minimally logged loads) for large ETLs. Also disable as many keys/FKs as you can during large ETLs to reduce time spent; see the sketch below.
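A rough sketch of that pattern — the table, constraint, and file names are all hypothetical:

Code:
-- Disable the FK for the duration of the load
ALTER TABLE dbo.OrderDetail NOCHECK CONSTRAINT FK_OrderDetail_Order

-- Minimally logged load (needs SIMPLE or BULK_LOGGED recovery plus TABLOCK)
BULK INSERT dbo.OrderDetail
FROM 'D:\ETL\order_detail.dat'
WITH (TABLOCK, BATCHSIZE = 100000)

-- Re-enable AND re-validate the constraint afterwards
ALTER TABLE dbo.OrderDetail WITH CHECK CHECK CONSTRAINT FK_OrderDetail_Order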

You do want that log file, though — if the machine crashes, SQL Server uses the log during recovery to roll back uncommitted transactions and replay committed ones, keeping data integrity as intact as possible.
 

Emulex

Yeah, I mean keep FULL recovery if you are a big company that requires mission-critical, zero data loss (as opposed to tolerating point-in-time snapshot loss).

Ideally, IMO, you want the log file sized so that it doesn't need to grow very often. If most of the time you are using 2GB of log, but on Sundays you grow that to 6GB, I'd force the minimum log file size to somewhere in between. You don't want to lag out the SQL Server with ginormous log file growths.

Shrinking isn't the smartest move either. If you shrink the log (back it up too!) too small, then the moment you need that 2GB of log you can lag out or time out your application while the server gets busy growing the file — usually during Sunday-night batch processing, when nobody is around to see the errors fire off after a backup. :)
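If you want to pre-size it, something along these lines (sizes and names are made up for illustration):

Code:
-- Pre-size the log so routine work never triggers autogrow,
-- and use a fixed growth increment instead of a percentage
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 4096MB, FILEGROWTH = 512MB)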