SQL index question

ochadd

Senior member
May 27, 2004
408
0
76
I'm not a good SQL admin. With that said, some questions.

I've got two tables holding production inventory histories. Each has just over 7.2 million rows. Table 1 has 3.4GB reserved, 1GB data, 2.4GB index. Table 2 has 6GB reserved, 2.4 data, 3.6GB index. 240KB unused for both.

Is this a problem? They aren't fragmented beyond what I've read is acceptable, and trying to defrag the indexes times out anyway. I was thinking of rebuilding/dropping the index on both, but I'm clueless about how long that would take or whether it would do anything beneficial. Might destroy data and get me thrown outta here for all I know.

edit: Total database size is 26GB. Server is a quad-core with 4GB of RAM and 4x 15k RPM drives.
 

KB

Diamond Member
Nov 8, 1999
5,406
389
126
I don't see a problem with your setup. 240KB unused will probably be filled up pretty quickly, so you could enlarge the databases by 10% at night to avoid the delay of growing the files while the database is under heavy use. I assume you allow the database to automatically increase its size?
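If you do pre-grow at night, the command itself is simple; a minimal sketch, assuming MSSQL and hypothetical database and logical file names:

```sql
-- Pre-grow the data file during off-hours so autogrow doesn't stall
-- daytime inserts (database name, logical file name, and size are hypothetical)
ALTER DATABASE Production
MODIFY FILE (NAME = Production_Data, SIZE = 30000MB)
```

You can look up the logical file names for a database with sp_helpfile.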

What database platform are you on? MSSQL, MySQL, DB2? Speaking for MSSQL, I have read that indexes with more than 10% fragmentation should be reorganized and more than 30% should be rebuilt. With a large database this can take 10+ minutes. A rebuild works by growing the database enough to hold two copies of the index: it builds a fresh copy in the added space and then deletes the old one. The freed space won't be returned to the filesystem afterwards, though, unless you shrink the database or have auto-shrink enabled (which you shouldn't).
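On SQL 2000 those two maintenance levels map onto two DBCC commands; a sketch, with hypothetical table and index names:

```sql
-- Roughly 10-30% fragmented: reorganize. Runs online, works in small
-- transactions, and can be cancelled without losing the progress made so far.
-- The 0 means "current database".
DBCC INDEXDEFRAG (0, 'dbo.InventoryHistory', 'IX_InventoryHistory_ShipDate')

-- Over ~30% fragmented: full rebuild. Takes locks on the table,
-- so schedule it off-hours. The 90 is the fill factor.
DBCC DBREINDEX ('dbo.InventoryHistory', 'IX_InventoryHistory_ShipDate', 90)
```

INDEXDEFRAG is usually the safer first try on a busy server, since it can be stopped at any point if it runs long.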

Rebuilding or dropping an index won't delete user data. Depending on the query and the amount of fragmentation, a properly defragmented index can speed a query up by 1-30%. The difference between no index at all and even a fragmented one can be 10-500%, so keep the index if it is useful.
 

LittleNemoNES

Diamond Member
Oct 7, 2005
4,142
0
0
I forget the explanation but yes, it can happen (index > data itself)

We have that @ our business.
 

ochadd

Senior member
May 27, 2004
408
0
76
Originally posted by: KB
I don't see a problem with your setup. 240KB unused will probably be filled up pretty quickly, so you could enlarge the databases by 10% at night to avoid the delay of growing the files while the database is under heavy use. I assume you allow the database to automatically increase its size?

What database platform are you on? MSSQL, MySQL, DB2? Speaking for MSSQL, I have read that indexes with more than 10% fragmentation should be reorganized and more than 30% should be rebuilt. With a large database this can take 10+ minutes. A rebuild works by growing the database enough to hold two copies of the index: it builds a fresh copy in the added space and then deletes the old one. The freed space won't be returned to the filesystem afterwards, though, unless you shrink the database or have auto-shrink enabled (which you shouldn't).

Rebuilding or dropping an index won't delete user data. Depending on the query and the amount of fragmentation, a properly defragmented index can speed a query up by 1-30%. The difference between no index at all and even a fragmented one can be 10-500%, so keep the index if it is useful.

It is running on MSSQL 2000 and I've got it set to grow automatically. If I reindex, would that decrease the size from what I have now?

My understanding of indexes is that each time a record is accessed it creates a pointer in the index. This data is really only accessed one time: once an order is shipped it's not likely to be looked at again unless a recall happens. That isn't often, but the data needs to be there. The table has grown to its current size over the last 3.5 years, and if it's possible it would be nice to shrink that index up.
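If the data really is written once and rarely touched again, one approach worth testing first (table and file names hypothetical) is to rebuild with a 100% fill factor, which packs the index pages completely full instead of leaving free space for future inserts, then trim the freed space off the end of the file:

```sql
-- Rebuild every index on the history table ('' = all indexes) with pages
-- packed 100% full; suits data that is appended once and rarely updated
DBCC DBREINDEX ('dbo.InventoryHistory', '', 100)

-- Release unused space at the end of the data file back to the OS;
-- TRUNCATEONLY trims the file without moving any pages around
DBCC SHRINKFILE (Production_Data, TRUNCATEONLY)
```

One caveat: a full shrink (without TRUNCATEONLY) moves pages around and re-fragments the indexes you just rebuilt, so do it once, not as routine maintenance.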
 

leeland

Diamond Member
Dec 12, 2000
3,659
0
76
If you still need some help with this, send me a PM and I can try to lend a hand.

 

leeland

Diamond Member
Dec 12, 2000
3,659
0
76
Originally posted by: gersson
I forget the explanation but yes, it can happen (index > data itself)

We have that @ our business.


I am going out on a limb and assuming you are running something like sp_spaceused 'tableName' and getting the sizes of data, index and unused...

The index size can be larger than the total data size because the system stored procedure reports the size of ALL the indexes combined, so if you have 1 clustered index and 8 NC indexes you will see a larger number, since it has to account for each individual index.
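Comparing the total against a per-index breakdown makes this visible; a sketch, with a hypothetical table name:

```sql
-- Total reserved/data/index/unused for one table; index_size here
-- is the sum across the clustered and every nonclustered index
EXEC sp_spaceused 'dbo.InventoryHistory'

-- Per-index breakdown from the SQL 2000 sysindexes system table;
-- used is counted in 8KB pages, indid 1 = clustered, 2-250 = nonclustered
SELECT name, indid, used * 8 AS used_kb
FROM sysindexes
WHERE id = OBJECT_ID('dbo.InventoryHistory')
  AND indid BETWEEN 1 AND 250
```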

I have a couple of different scripts I use to gauge the size of each table in the database and see which of, say, the top 5 to 15 tables are the largest.

Then you can run DBCC SHOWCONTIG to evaluate the scan density and page count for each table and each of the indexes on that table...

There is a real science to understanding this report and I don't claim to have all the answers...

But generally if you evaluate the top 10% of the tables for fragmentation and re-index the table you will solve a large percentage of your speed issues and regain the most space.
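A sketch of that routine on SQL 2000, with a hypothetical table name for the rebuild step:

```sql
-- Report fragmentation for every table and index in the current database
-- as a result set (FAST skips the slower leaf-level statistics)
DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES, FAST

-- For any large table whose Scan Density comes back well below 100%,
-- rebuild all of its indexes
DBCC DBREINDEX ('dbo.SomeBigTable')
```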

Just my 2 cents

Lee