I have a job that runs every night and updates a table from a file that is FTP'd to me. The key thing is that all rows except about 2,000 are deleted (rows whose ID starts with a certain letter don't get deleted). I've noticed my transaction log fills up quickly because the job is deleting ~500k rows every night, and I know TRUNCATE is minimally logged, so it uses far less of the transaction log.
So my question is:
Would it be more efficient to copy the 2000 rows to a different table, truncate the table, then copy them back?
The job pretty much looks like this (rough sketch after the list):
1. Get file
2. Delete from <table> where id not like 'p%'
3. Load from txt file
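For reference, this is roughly what the delete step boils down to today. It's only a sketch, assuming SQL Server T-SQL; the table name, column names, file path, and delimiters are placeholders I made up, and the load is shown as a BULK INSERT even though the real job might use something else:

    -- Current approach (sketch): a fully logged DELETE of ~500k rows,
    -- which is what keeps filling up the transaction log.
    DELETE FROM dbo.MyTable
    WHERE id NOT LIKE 'p%';

    -- Reload from the FTP'd text file (hypothetical path and delimiters).
    BULK INSERT dbo.MyTable
    FROM 'C:\ftp\nightly_feed.txt'
    WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');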
I think it might be better to do something like this (sketch after the list):
1. Get File
2. Insert into <TempTable> (select <cols> from <sourcetable> where id like 'p%')
3. Truncate <sourcetable>
4. Load from txt file
5. Insert into <sourcetable> (select <cols> from <TempTable> where id like 'p%')
It's pretty situational, but I'm just curious; I might have to try it out myself. I know it's bad design and the 2,000 rows that don't get deleted should really be in a different table, but that wasn't my decision.
Thanks for the input!
