I plan to use 4 or 6 Kingston SSDs. Got two 128GB for $80 and a few 96GB for $90 shipped. They have garbage collection (GC), which will work well with my cheap RAID controllers.
With SQL Server Enterprise you get parallel indexing, and SQL Server dispatches more I/O when you split files across drives.
I could RAID-0 them on this quad-core box with 8 or 16GB of RAM.
Say one big array of 4 or 6 drives, or two arrays of 2-3 drives each.
Or JBOD.
Given the Enterprise edition of SQL Server, parallel indexing, and splitting files apart, the RAID/JBOD layout options would be (a file-placement sketch follows the two layouts):
1. DB on drives 0/1
2. Tempdb/master on drives 2/3
3. Log on drives 3/4

or

1. DB on drives 1/2/3
2. Tempdb on drive 4
3. Log on drives 5/6
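For the first layout, here is a minimal T-SQL sketch of how the files might be placed. The database name, drive letters, paths, and sizes are all placeholders for whichever drives end up as the data, tempdb, and log volumes:

-- Hypothetical file placement: data files on E:/F:, log on G:, tempdb on H:.
-- All names, paths, and sizes below are placeholders.
CREATE DATABASE CatalogCache
ON PRIMARY
    (NAME = catalog_data1, FILENAME = 'E:\SQLData\catalog_data1.mdf', SIZE = 40GB),
    (NAME = catalog_data2, FILENAME = 'F:\SQLData\catalog_data2.ndf', SIZE = 40GB)
LOG ON
    (NAME = catalog_log1, FILENAME = 'G:\SQLLog\catalog_log1.ldf', SIZE = 20GB);
GO

-- Tempdb can be pointed at its own drive the same way (takes effect after a restart).
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'H:\TempDB\tempdb.mdf');
GO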
I could use two RAID controllers (LSI) in JBOD mode so SQL Server could dispatch 3 threads to work on the DB, 1 thread on tempdb, and 2 threads for the log?
Or if I RAID them, I might get 1 thread for the DB, 1 for tempdb, and 1 for the log?
Or if I RAID them all together in one big RAID-0, I might get higher burst read/write but at the cost of fewer worker threads?
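Whichever layout I try, something like the per-file I/O stats DMV should show how reads, writes, and stalls actually spread across the drives, so the layouts can be compared under a real load:

-- Per-file I/O counters accumulated since the last SQL Server restart.
SELECT DB_NAME(vfs.database_id) AS db,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;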
Any thoughts? The primary goal is to use consumer throwaway SSDs to cache and handle huge catalogs of data and ETL (transformation of data formats).
The core work will be done on a normal disk server with 6 x 600GB 15K 3.5" SAS drives in RAID-10 (pretty fast), but that's not scalable since 1.8TB is about $4,200, and my working set is not that big at all. I need that number of spindles in RAID-10 for safety and disk IOPS.
With six 96GB SSDs (90GB formatted) I have 540GB at $1/GB, so I'll keep my writes down on this box (mostly dev, but also catalog full-text/parametric search). There's no way I can afford an Intel 710; the big ones are $10 grand each (no joke, look it up), and same with Fusion-io at $10 grand a pop.
So back to RAID-0 vs. JBOD: more volumes means more worker threads, so a quad-core or 6-core box could dispatch more I/O threads to run concurrently, and the Enterprise edition of SQL Server can divvy up index work across more than one thread (per index) to push even more IOPS.
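To actually get more than one thread per index operation on Enterprise, the degree of parallelism can be set per rebuild or server-wide. The table/index names and the MAXDOP value below are placeholders:

-- Enterprise edition: online, parallel index rebuild (names and MAXDOP are placeholders).
ALTER INDEX IX_Catalog_Search
    ON dbo.CatalogItems
    REBUILD WITH (ONLINE = ON, MAXDOP = 4, SORT_IN_TEMPDB = ON);

-- Optional server-wide cap on parallelism:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;

SORT_IN_TEMPDB pushes the rebuild's sort work onto tempdb, which is one more reason to give tempdb its own drives in the layouts above.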
Anyone have a dev setup with SQL Server and thoughts? I'm not worried about reliability: I make backups, and this is dev plus read caching, so I can build two boxes and if one fails just reroute at the model.
Bare-metal Windows Server 2008 R2 + SQL Server, no VMware or anything; I want raw max speed at the cheapest cost. I know there are faster SSDs, but there's no point torching a fancy SSD if a runaway routine burns up 100 terabytes of writes on a holiday weekend. That would not be nice.
I could move to RAID-1/10 to add reliability if this works out. Also, if I don't partition the whole drive, is it correct that the lifetime will increase? Or is that hardcoded into the firmware?
Intel 710 = Intel 320 plus custom firmware, and it uses 200 out of 320GB. So if I take my 96GB drives and chunk them down to 50GB by not creating a partition over the rest, would that be the same? Or do you have to have firmware hacks to change the reservation?