How to achieve super fast load times?


imported_wired247

Golden Member
Jan 18, 2008
1,184
0
0
As someone mentioned, budget is definitely a concern, whether you admit it or not.


I thought about SAS but decided it would be far too expensive to get the amount of storage I wanted.

SATA II is quite fast, and good raid setups can be had without selling kidneys.

Do some research before you buy, or you could end up spending $2,000-4,000 on something that still might not satisfy you completely.

A few Raptors in RAID 5 or RAID 0 sounds like it would work great for you. I have been impressed with the RAID controller in my signature, and it is not ridiculously expensive either.

Going with 15K RPM SAS hard drives is fun, but they are teeny tiny storage-wise. If you can live with that minuscule capacity then they may be appropriate for you.

 

Fallen Kell

Diamond Member
Oct 9, 1999
6,184
520
126
First thing I would do is get more RAM. If you are doing video editing and audio editing, you want enough RAM that you can load the entire file into memory, let the editor process the file in RAM, and not have to copy/save/load parts of it back to the disk so that it can continue processing the entire file.
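To put a very rough number on "enough RAM" (this is just back-of-the-envelope arithmetic in Python, and every figure in it is an assumption, not a measurement):

# Back-of-the-envelope: does the editing working set fit in RAM, or spill to disk?
ram_gb = 4.0          # installed RAM (assumed)
os_and_apps_gb = 1.5  # rough guess at what Windows + the editor keep resident
project_gb = 6.0      # source footage + previews + undo history (assumed)

spill_gb = max(0.0, project_gb - (ram_gb - os_and_apps_gb))
print(f"roughly {spill_gb:.1f} GB would spill to the page file / scratch disk")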

I would look into RAID controller cards. For a quick rundown on some semi-enterprise-level cards, start reading some of the following:

SAS RAID controllers
http://www.tomshardware.com/20...controllers_from_amcc/
http://www.tomshardware.com/20...pci_express/page2.html

RAID Scaling
http://www.tomshardware.com/20...2/raid_scaling_charts/
http://www.tomshardware.com/20...7/raid_scaling_charts/
http://www.tomshardware.com/20...7/raid_scaling_charts/

SAS Drives 10,000RPM vs 15,000RPM
http://www.tomshardware.com/2007/10/10/sas_hard_drives/

SATA RAID Controllers
http://www.tomshardware.com/20...llers-for-smb-servers/


Yes, it is a LOT of reading. But you are jumping head first into something here, and you need to read some of the above. A RAID setup in itself might not be good enough, and in fact the WRONG RAID setup WILL be WORSE than a non-RAID setup. You MUST understand RAID and how it works: the impact of different stripe sizes, the levels of RAID, how the controller handles generating the XORs (in the case of RAID 5 and RAID 6), the limits on total throughput to the controller card, the limitations of using different types of disks, and why certain features are HIGHLY recommended on disks that are in a RAID setup.

Just going out and picking up a RAID card (or worse, just slapping drives on a standard motherboard and using the built-in controller on a consumer-level device) with standard off-the-shelf consumer-level drives won't give you much of a performance increase. This isn't something you can just slap together and boom, instant speed. You must know how to properly configure the controller to give you the best overall performance for the type of applications you are trying to run.
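If the XOR/parity part sounds abstract, here is a toy Python sketch of the idea (purely illustrative; a real controller does this in its dedicated XOR engine, not in software like this):

# RAID 5 in miniature: the parity block is the XOR of the data blocks,
# and any one lost block can be rebuilt by XOR-ing everything that survived.

def xor_blocks(blocks):
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"  # three data "disks" in one stripe
parity = xor_blocks([d0, d1, d2])       # what gets written to the fourth disk

# Pretend the disk holding d1 died: rebuild it from the survivors plus parity.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1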


And these things will NOT be cheap. You are looking at $400-700 for a good RAID controller alone. And if you plan on going SAS and using SAS drives, well, the only real choices are the Seagate Savvio 10K.2 or Cheetah 15K.4. Both are 2.5" disks; the Savvio 73GB disks will set you back ~$350 each, and the Cheetah... well, let's just say you can find them used for $400 each... And you should be looking at a MINIMUM of 4 of these drives.

That was if money really didn't matter. If it does matter (and I suspect it does), I would recommend looking at AMCC's 3ware 9650SE-8LPML and getting a few WD Raptors (a minimum of 4, or 8 if you have the cash and space).
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,061
3,557
126
Originally posted by: Fallen Kell

That was if money really didn't matter. If it does matter (and I suspect it does), I would recommend looking at AMCC's 3ware 9650SE-8LPML and getting a few WD Raptors (a minimum of 4, or 8 if you have the cash and space).

Hey Fallen, any reason why you like this card more than the Areca 1220?

I know my next build will have its own controller card. Haven't decided between SAS or just keeping it SATA II.

I have 4 Raptors right now and I can just migrate them to the card.


But what's so nice about the AMCC vs. the Areca?
http://www.newegg.com/Product/...x?Item=N82E16816131004

 

Amaroque

Platinum Member
Jan 2, 2005
2,178
0
0
Originally posted by: aigomorla

I have 4 Raptors right now and I can just migrate them to the card.

Hey, do you have four 150's or 74's?

I have two 74's and two 150's. So it's not a great idea to RAID them all together.
 

perdomot

Golden Member
Dec 7, 2004
1,390
0
76
Platinumsteel,
My 3 HDDs are separate because, when it comes to encoding, RAID 0 didn't do anything for me (and I tried it). If you think about it, an HDD with 70MB/sec writes could have an encoded DivX movie of, say, 1.5GB written out in under 30 seconds. Encoding is strictly CPU limited, and a little memory limited. The best thing you can do is get a very overclockable quad core and max it out. Then load up the PC with the max RAM it can handle and set up 3 HDDs like I have. That won't be very expensive at all but will provide excellent performance. Then you can use the saved cash to invest in some water cooling and an aluminum case with good air circulation, as encoding rigs get hot very quickly.
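To put rough numbers on that (the one-hour encode time below is an assumption, purely for the sake of the comparison):

# Why the disk is not the bottleneck when encoding: writing the finished file
# takes seconds, while the CPU spends minutes-to-hours producing it.
output_mb = 1.5 * 1024   # ~1.5GB DivX output
write_speed_mb_s = 70.0  # a single drive with ~70MB/sec writes
encode_time_s = 60 * 60  # assume the CPU needs an hour for the encode

write_time_s = output_mb / write_speed_mb_s
print(f"time to just write the file: {write_time_s:.0f} s")               # ~22 s
print(f"share of the job spent on disk: {write_time_s / encode_time_s:.1%}")  # well under 1%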
 

TheOtherRizzo

Member
Jun 4, 2007
69
0
0
You'd have to be more specific about what you're doing exactly, but I'm quite sure that CPU power will help you a lot more than all the RAID and RAM recommendations people are giving here. Or to be more precise: encoding speed is completely a CPU thing, and loading speed depends on what type of file and what software you use. RAM size is more of an issue for image editing than for AV editing.
 

Fallen Kell

Diamond Member
Oct 9, 1999
6,184
520
126
Originally posted by: aigomorla
Originally posted by: Fallen Kell

That was if money really didn't matter. If it does matter (and I suspect it does), I would recommend looking at AMCC's 3ware 9650SE-8LPML and getting a few WD Raptors (a minimum of 4, or 8 if you have the cash and space).

Hey Fallen, any reason why you like this card more than the Areca 1220?

I know my next build will have its own controller card. Haven't decided between SAS or just keeping it SATA II.

I have 4 Raptors right now and I can just migrate them to the card.


But what's so nice about the AMCC vs. the Areca?
http://www.newegg.com/Product/...x?Item=N82E16816131004

The Areca seems to have a processing bottleneck for overall throughput. It also has a botched implementation of RAID 0+1, being anywhere from 33-50% slower (I/Os per second, webpages served to client requests, and database operations per second) than the AMCC. Its RAID 5 suffers from not having a fast enough XOR engine, so parity generation takes longer than it should, which also makes the card about 50% slower than the AMCC in RAID 5 and even slower in RAID 6. And if the RAID 5 array is operating in degraded mode (i.e. has 1 failed disk), it is about 60-70% slower than the AMCC under the same conditions.

Basically, overall it has some design flaws which simply leave it outclassed by the AMCC. Sure, you can get the Areca for around $430, but for $50-80 more you get something with double the performance at just about anything...

Now the Areca holds its own in RAID 0, so if you are only going to do RAID 0, then maybe it is worth it for you.

I personally picked up a Promise SuperTrak EX8350. It too suffers from a botched RAID 0+1 implementation, but since I know I will never use that, I didn't care. I got it because it tended to be faster than the AMCC in RAID 5 performance, which is what I was going to use it for. Overall it isn't as good a card as the AMCC, but it holds its own in RAID 5 performance, and it is available for $350-360, saving me over $100 versus the AMCC solution.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,061
3,557
126
Originally posted by: Fallen Kell

The Areca seems to have a processing bottleneck for overall throughput. It also has a botched implementation of RAID 0+1, being anywhere from 33-50% slower (I/Os per second, webpages served to client requests, and database operations per second) than the AMCC. Its RAID 5 suffers from not having a fast enough XOR engine, so parity generation takes longer than it should, which also makes the card about 50% slower than the AMCC in RAID 5 and even slower in RAID 6. And if the RAID 5 array is operating in degraded mode (i.e. has 1 failed disk), it is about 60-70% slower than the AMCC under the same conditions.

Basically, overall it has some design flaws which simply leave it outclassed by the AMCC. Sure, you can get the Areca for around $430, but for $50-80 more you get something with double the performance at just about anything...

Ahhhhh, thank you for this info.

Looks like I'm getting the AMCC for my next rig then. :T

What about the SAS option? Is the Areca card that bad in SAS?

The reason I keep saying Areca is because a lot of people I know use it, and they say it's absolutely the best. But I have little knowledge in this area... I'm not an IT guy. But my IT guy recommends Areca. :T

But my IT guy also asks me for advice on liquid cooling. :T
 

Fallen Kell

Diamond Member
Oct 9, 1999
6,184
520
126
SAS is Serial Attached SCSI. Neither the AMCC nor the Areca cards that have been discussed support SAS. You need to look at the Areca ARC-1680ML for SAS support, but it is a HORRIBLE card, even WORSE than the 1220 (if you can imagine that). The AMCC 9690SA (their SAS-level controller) is upwards of 5 times faster than the Areca in RAID 5, and the Areca hits a performance ceiling in RAID 0 where the AMCC is twice as fast on large file transfers (or any other similarly large number of operations).

I don't know why this is, but the Areca products seem to have a fundamental flaw in their design, or are simply outclassed and underpowered (or have really poor firmware with a fundamental mistake somewhere in the logic), compared to similarly priced options from other companies.

Don't just take my word for it; look at the numbers for yourself. Tom's Hardware benchmarked these cards:

http://www.tomshardware.com/20..._from_amcc/page12.html

While I don't always like Tom's Hardware for their single-product reviews/articles, I cannot see any fault in their tests of these controller cards in the recent storage articles they have been posting. I have also experienced some of this at work with the Areca controllers, where they are noticeably slower than similarly spec'ed servers with LSI controllers. The Arecas were used to save some on costs, but they perform so much slower that they should have spent the extra $50-100 for any of the other RAID solutions offered (it was cheapest for a reason, I guess).

At some point in the future Areca may have different firmware/software that improves its performance, but I haven't seen it yet for the SAS drives that we have at work.
 

imported_wired247

Golden Member
Jan 18, 2008
1,184
0
0
Originally posted by: Fallen Kell
SAS is Serial Attached SCSI. Neither the AMCC nor the Areca cards that have been discussed support SAS. You need to look at the Areca ARC-1680ML for SAS support, but it is a HORRIBLE card, even WORSE than the 1220 (if you can imagine that). The AMCC 9690SA (their SAS-level controller) is upwards of 5 times faster than the Areca in RAID 5, and the Areca hits a performance ceiling in RAID 0 where the AMCC is twice as fast on large file transfers (or any other similarly large number of operations).

I don't know why this is, but the Areca products seem to have a fundamental flaw in their design, or are simply outclassed and underpowered (or have really poor firmware with a fundamental mistake somewhere in the logic), compared to similarly priced options from other companies.

Don't just take my word for it; look at the numbers for yourself. Tom's Hardware benchmarked these cards:

http://www.tomshardware.com/20..._from_amcc/page12.html

While I don't always like Tom's Hardware for their single-product reviews/articles, I cannot see any fault in their tests of these controller cards in the recent storage articles they have been posting. I have also experienced some of this at work with the Areca controllers, where they are noticeably slower than similarly spec'ed servers with LSI controllers. The Arecas were used to save some on costs, but they perform so much slower that they should have spent the extra $50-100 for any of the other RAID solutions offered (it was cheapest for a reason, I guess).

At some point in the future Areca may have different firmware/software that improves its performance, but I haven't seen it yet for the SAS drives that we have at work.


Good info. I was debating between the 3ware 9650SE and the areca 1210 (in part) based on this
http://www.tomshardware.com/20...mb-servers/page12.html

and the price point.

At the time I wasn't convinced about vista 64 support on the 3ware card. Perhaps it was a poor buying decision, but then again I'm not exactly running a file server.

So I went with the Areca. Pros: it was so easy to use a caveman could do it, and it is slightly cheaper than the 3ware. Cons: I guess the I/O processor is not up to snuff for heavy queueing, too bad :( But it's still much, much better than onboard RAID. For now I'm still a happy customer, though now I will definitely be wondering if I should've gone with the 3ware.

=S


Also, I felt at the time that it was a much better purchase than the LSI MegaRAID card, which I had also considered. Although it seems that the LSI card is probably not much better than onboard RAID.

 

Fallen Kell

Diamond Member
Oct 9, 1999
6,184
520
126
Yeah, system support can be an issue with any card, and if you don't know whether it is fully supported on your system, well, you can scratch that card off the list. As I said before, there may be firmware/software fixes for the Areca that come out in the future. You might want to check on that, as I think it may be something they are just doing wrong (especially on the SAS card). They skimped a bit on the XOR processor in the 1220 by only using the Intel 332 chip (the Promise uses the 333 chip), but you are most definitely correct that it is much better than the on-board, especially since it at least off-loads the XOR operations to its own on-board chip and doesn't need to use your CPU the way almost all on-board RAIDs that support RAID 5 do.
 

imported_wired247

Golden Member
Jan 18, 2008
1,184
0
0
Originally posted by: Fallen Kell
Yeah, system support can be an issue with any card, and if you don't know whether it is fully supported on your system, well, you can scratch that card off the list. As I said before, there may be firmware/software fixes for the Areca that come out in the future. You might want to check on that, as I think it may be something they are just doing wrong (especially on the SAS card). They skimped a bit on the XOR processor in the 1220 by only using the Intel 332 chip (the Promise uses the 333 chip), but you are most definitely correct that it is much better than the on-board, especially since it at least off-loads the XOR operations to its own on-board chip and doesn't need to use your CPU the way almost all on-board RAIDs that support RAID 5 do.



Good point, Areca does have good support, and they respond to emails within 1 day (at least according to their website).

I have the latest firmware on disk ready to go, but I've had negative experiences with installing firmware which taught me the hard way that more often than not "if it ain't broke, don't fix it"

 

platinumsteel

Member
Jul 30, 2006
26
0
66
@ perdomot, TheOtherRizzo,
I think you guys are hitting the spot there. I also agree that it's really the CPU power that does most of the work... I tried encoding on different computer systems with almost the same specs but different CPUs, and I saw noticeable differences, especially with those new Core 2 Duos. I think I would go with perdomot's suggestion and get a Core 2 Duo, overclock it, and set up a RAID 0 with 4 hard drives and see how that works out... Thanks again for your support.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
RAID is overrated here for encoding performance... you should FIRST go for a faster CPU...
I would upgrade in this order:
1. Faster CPU (with as many cores as you can get. SSE4 capability is highly recommended, as it can double the speed of certain encodes)
2. More RAM
3. RAID arrays...

Separate drives seem to work out better though; 3 hard drives work best here:
1. Windows and programs run from drive C
2. Source file is located on drive D
3. Destination file is written on drive E

As long as C, D, and E are separate hard drives, not separate partitions.
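A tiny sketch of that layout (the paths and drive letters are just examples; it only compares letters, so it can't tell whether two letters live on the same physical disk):

import os

# One role per physical drive, as described above.
layout = {
    "os_and_programs": r"C:\Windows",
    "source_file":     r"D:\capture\raw_footage.avi",
    "destination":     r"E:\encodes\finished.avi",
}

# Warn if two roles ended up on the same drive letter -- separate partitions
# on one disk would defeat the whole point.
letters = {role: os.path.splitdrive(path)[0] for role, path in layout.items()}
if len(set(letters.values())) != len(letters):
    print("warning: two roles share a drive letter:", letters)
else:
    print("each role has its own drive:", letters)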
 

perdomot

Golden Member
Dec 7, 2004
1,390
0
76
platinumsteel,
If you really want to experiment with RAID 0, the best way is to make it your C drive. The best results I've seen are two 150GB Raptors in a RAID 0 array. Get that and two 750GB WD HDDs for your source and destination drives and you will be totally set.
 

heyheybooboo

Diamond Member
Jun 29, 2007
6,278
0
0
Stop the presses!

I actually agree with taltamir on something :D ... sorta

First and most important decision: software (and with software you must make a platform decision). If you select Final Cut Pro you go with a Mac. Other big dawgs (Premiere, Kulabyte, etc.) are Win-based.

And without starting a flame war, this is where it gets interesting. AMD/Intel quads are each highly competitive here - and (dare I say???) Mainconcept seems to favor AMD.

The best 'bang for the buck' is in the low-end quads. There is a pretty substantial point of diminishing returns on clock speed so go with a q6600 or the Phenoms.

Your 'ultimate rig' would be a 2p Phenom box running Kulabyte (which uses Mainconcept encodes and runs 8 threads in parallel).

When you edit a project, your changes are processed in RAM. When the available RAM isn't enough, hard disk space is used as an additional work area. We're talking 'page file' management here, so Vista with a dedicated fast drive for the page file looks good - at a minimum, with WinXP, moving the page file off the OS/Apps drive should increase performance.

I think Premiere will give you three scratch 'options'. 'Capture' on drive two. 'Video' on drive three. 'Audio' on drive four. Your final product would then be drive five. (Your OS/Apps is drive one.)

Place your project file on a different drive than the video/audio/capture files. You can place it on your OS/Apps drive if necessary. I've thought about using a thumb drive for project files with an auto-save to an external eSATA drive (I just got a new mobo with an eSATA port).

If you want to "RAID" all those drives it's up to you :)


 

Zap

Elite Member
Oct 13, 1999
22,377
7
81
How about distilling all this information down to the essentials?



You need a fast CPU with lots of cores to reduce the actual encoding time. This will probably be #1 priority.

You need a fast HDD subsystem to reduce actual read/write time, but this will still be limited by your encoding time. I think some people are focusing on maximum reads or maximum writes without taking the whole picture into account, which is that during editing/encoding there are times when you need to read AND write at the same time.

Now, I just read through this thread from start to finish, and it seems like the "fast computer" guys recommend RAID/SAS/etc. However, a couple of guys who specifically mentioned video editing software suggested using separate drives instead of RAID. If you think about it, this makes sense. Even if you have a RAID array that can sustain 200MB/sec transfers, if it has to read AND write at the same time, each process gets half the speed. Well, not strictly true, as there would be overhead and such, but "true enough" to make thinking this way easier to grok. So, with overhead you have (for the sake of argument) 70MB/sec "available" for reads or for writes.

So, instead of expensive SAS controllers and all, why not just use two cheap and modern 7200RPM drives separately, and assign reads to one and writes to the other? You'll get the full read/write bandwidth of each drive. You also need to reduce system paging to the drives you read/write to. This can be done by using a separate HDD for the operating system and applications, and by using a lot of RAM (if over 3GB, use a 64-bit OS).
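The halving argument as quick arithmetic (the MB/sec figures and the overhead factor are illustrative guesses, nothing more):

# Shared array doing reads AND writes at once vs. two cheap drives split by role.
array_mb_s = 200.0  # one fast RAID array, sustained, one direction at a time
overhead = 0.3      # extra seeking/contention when it must do both at once

per_direction = array_mb_s / 2 * (1 - overhead)
print(f"shared array, per direction: ~{per_direction:.0f} MB/sec")  # ~70 MB/sec

plain_drive_mb_s = 80.0  # a cheap modern 7200RPM drive, one direction
print(f"separate read/write drives:  ~{plain_drive_mb_s:.0f} MB/sec each way")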

Or, do a combination as some have suggested. One fast drive (maybe a Raptor 150) for operating system, application software and swap/page file. A 2 drive RAID0 for source and a 2 drive RAID0 for destination.

So, everyone's suggestions sound great and can be considered "fast." You (platinumsteel) just need to distill the suggestions and go for it! Let us know what you end up getting and how much faster it is than your current rig.
 

platinumsteel

Member
Jul 30, 2006
26
0
66
Taltamir,
Could you please explain what SSE4 capability means?... Yeah, well, I think I have my mind made up now, thanks to you guys; I will settle for separate hard drives. But taltamir, I think you will have to link me to a tutorial on how I should go about setting it up. I have never done a RAID array or any kind of setup, so I am a newbie in this area and would definitely appreciate a nice, easy-to-understand tutorial on setting up those 3 separate hard drives you mentioned... Thanks again.
 

platinumsteel

Member
Jul 30, 2006
26
0
66
Oh goodie, Zap, I am so sorry I saw your post kinda late... That combination sounds like what I dreamed of, lol... Well, I think I may give that a shot... But please, if it's possible, can you provide a tutorial on how to achieve that big combo?... That's what I call bang for your buck...
 
Dec 30, 2004
12,553
2
76
Originally posted by: platinumsteel
Well, thanks again Amaroque... Well, now I am a little confused... I have techs saying SAS controllers, RAID 0 with more than 3 drives... Oh man, I don't know who to believe. Is there anyone here who has experience with almost all of the suggestions given? Maybe they would be able to tell me what's best... But common sense is telling me that, according to perdomot and KenAF, reading the raw video source from one drive and encoding to a second drive improves performance over RAID 0... That sounds like the more logical option... But could anyone link me to the SAS controllers that support both SAS and SATA? I would really like to check them out... Meanwhile I am going to price up how much 4 Raptors will cost...

edit: I'm only on page 1, looks like people already thought of the budget thing. BTW, this is a good thread!
Thing is, everybody's right; it's just a matter of how much performance you want to spend money on. I think if you gave everyone a budget, that would help them recommend stuff.

Me, I think you could probably go with perdomot's setup (read from one and encode->write to another).

If you go with a dual core, I would overclock if I were you. Just get a Tuniq Tower cooler for the CPU, or a Thermalright Ultra-120 (I have the Ultra-120 and I would recommend the Tuniq; it is better for case airflow), and some AS5 paste to put on top of the CPU and below the cooler. I would also consider getting a quad core (the Q6600 is pretty popular) and overclocking that; seeing as you're doing encoding, those 4 cores are going to scale pretty well. However, I don't know all that much about encoding as I've only converted between codecs, but I found I didn't need a fast computer, because I would set up the files how I wanted them, click go, and then leave. I.e., a 1-hour encoding time vs. 2 hours does not matter that much to me, because I'll just set it up and go to bed.
 

bheiser

Junior Member
May 14, 2008
1
0
0
Originally posted by: Fallen Kell
The Areca seems to have a processing bottleneck for overall throughput. It also has a botched implementation of RAID 0+1, being anywhere from 33-50% slower (I/Os per second, webpages served to client requests, and database operations per second) than the AMCC. Its RAID 5 suffers from not having a fast enough XOR engine, so parity generation takes longer than it should, which also makes the card about 50% slower than the AMCC in RAID 5 and even slower in RAID 6. And if the RAID 5 array is operating in degraded mode (i.e. has 1 failed disk), it is about 60-70% slower than the AMCC under the same conditions.

Basically, overall it has some design flaws which simply leave it outclassed by the AMCC. Sure, you can get the Areca for around $430, but for $50-80 more you get something with double the performance at just about anything...

Hi, this is very interesting. I am researching an upgrade, having made a bad decision on an nVidia 680i board with on-board RAID 6 months ago (nothing but horrible performance and a litany of problems). I'm now deciding between the 9650SE and the 1220 ... and trying to determine which mobo will be most stable with either of them.

This is the first I'd heard of the 1220 performing poorly relative to the 9650SE. Did you perform benchmarks yourself, or do you have a reference to one?

I'm not arguing with your statement; rather, I am trying to home in on a decision Real Soon Now, so this would be very useful information to help in my selection.

Thanks!

 

Griswold

Senior member
Dec 24, 2004
630
0
0
Originally posted by: taltamir
1. Faster CPU (with as many cores as you can get. SSE4 capability is highly recommended, as it can double the speed of certain encodes)

SSE4 and encoding is mainly hype courtesy of Intel and DivX. When using an encoder such as x264, there is virtually no need for or gain from using SSE4.

Read more about it by those who should know:

http://forum.doom9.org/showthr...=133567&highlight=sse4

Granted, if you prefer DivX you'll probably need SSE4 to get on the same level as x264 in terms of speed at similar bitrates and profiles. :D

 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,061
3,557
126
Originally posted by: heyheybooboo
Stop the presses!

I actually agree with taltamir on something :D ... sorta

First and most important decision: software (and with software you must make a platform decision). If you select Final Cut Pro you go with a Mac. Other big dawgs (Premiere, Kulabyte, etc.) are Win-based.

And without starting a flame war, this is where it gets interesting. AMD/Intel quads are each highly competitive here - and (dare I say???) Mainconcept seems to favor AMD.

The best 'bang for the buck' is in the low-end quads. There is a pretty substantial point of diminishing returns on clock speed so go with a q6600 or the Phenoms.

Your 'ultimate rig' would be a 2p Phenom box running Kulabyte (which uses Mainconcept encodes and runs 8 threads in parallel).

When you edit a project, your changes are processed in RAM. When the available RAM isn't enough, hard disk space is used as an additional work area. We're talking 'page file' management here, so Vista with a dedicated fast drive for the page file looks good - at a minimum, with WinXP, moving the page file off the OS/Apps drive should increase performance.

I think Premiere will give you three scratch 'options'. 'Capture' on drive two. 'Video' on drive three. 'Audio' on drive four. Your final product would then be drive five. (Your OS/Apps is drive one.)

Place your project file on a different drive than the video/audio/capture files. You can place it on your OS/Apps drive if necessary. I've thought about using a thumb drive for project files with an auto-save to an external eSATA drive (I just got a new mobo with an eSATA port).

If you want to "RAID" all those drives it's up to you :)

Your idea is wonderful; however, the I/O speed on the ICH9R or even the NVIDIA 570/690 is very slow in comparison to an Areca or 3ware controller.

And I'm surprised ruby hasn't said anything yet. As far as I know, I don't have enough info to take this debate on with you, but I know full metal and ruby are more than qualified.

Originally posted by: Amaroque
Originally posted by: aigomorla

I have 4 Raptors right now and I can just migrate them to the card.

Hey, do you have four 150's or 74's?

I have two 74's and two 150's. So it's not a great idea to RAID them all together.

WOW, this is an old response; I have 4 x 74GB Raptors. But to answer your question: no, 4 Raptors > 2 Raptors, and it feels and acts and responds faster.

http://i125.photobucket.com/al...aigomorla/IMG_0885.jpg

I don't need backup or redundancy. I have a server with 4TB of backup/storage space already, and I'm in the middle of building this, which will do a weekly backup for me:

http://i125.photobucket.com/al...aigomorla/IMG_1079.jpg

 

jg0001

Member
Aug 8, 2006
69
0
0
Originally posted by: platinumsteel
@ perdomot, TheOtherRizzo,
I think you guys are hitting the spot there. I also agree that it's really the CPU power that does most of the work... I tried encoding on different computer systems with almost the same specs but different CPUs, and I saw noticeable differences, especially with those new Core 2 Duos. I think I would go with perdomot's suggestion and get a Core 2 Duo, overclock it, and set up a RAID 0 with 4 hard drives and see how that works out... Thanks again for your support.

No offense, but I think it's funny that you are being offered up all these ridiculously esoteric high-tech solutions (since you yourself said money was no problem) and yet you continue to talk about using a Core2Duo, and an E6600 at that. [Not that there's anything wrong with that, but you clearly are on an entirely different track than the hardcore responses would suggest.]

All of the RAID stuff is probably beyond your needs and ability and you should focus on the following instead:
(1) Core2QUAD on a newish mobo -- preferably get a Q9450 or, if money really is no object, a Q9770 on an X48 chipset mobo

(2) 8GB of DDR2 (4 x2GB) is probably fine; you can go DDR3 if you have the mobo for it

(3) MINIMUM of 3 HDDs -- 1 for the O/S, 1 for the SOURCE video, 1 for the OUTPUT;
Source & Output should be set up on 2 different controllers if possible (many mobos have a 2ndary controller for SATA drives in order to give you a lot of ports)

(4) on top of (3), you could RAID any of the 3 drives for performance, but you may find that it's not worth it, especially the first time a single drive in a 2 drive RAID setup dies, taking the set with it. If you're going for RAID for speed, and you don't care about safety, go right ahead. If you do, you're going to have to have a small pile of drives. The "return" on that investment may not be as exciting as you'd expect.

As others have said, unless you are doing some kind of super hardcore work, the HDD speed won't matter much in terms of ENCODE timings, assuming you at least separate the source and destination to different HDDs... again, it's not like you are going to be able to encode faster than your HDD's can read & write data.

In this equation, CPU speed (& core count) will be carrying 95% of the burden of how fast things are with HDD speed helping only a little.

If you wanted to get 'slightly' nuts, you could build a Skulltrail system with DUAL Quadcore processors... an entire second Quadcore would make a lot more difference than the craziest raid setup.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,061
3,557
126
Originally posted by: jg0001
Originally posted by: platinumsteel
@ perdomot, TheOtherRizzo,
I think you guys are hitting the spot there. I also agree that it's really the CPU power that does most of the work... I tried encoding on different computer systems with almost the same specs but different CPUs, and I saw noticeable differences, especially with those new Core 2 Duos. I think I would go with perdomot's suggestion and get a Core 2 Duo, overclock it, and set up a RAID 0 with 4 hard drives and see how that works out... Thanks again for your support.

No offense, but I think it's funny that you are being offered up all these ridiculously esoteric high-tech solutions (since you yourself said money was no problem) and yet you continue to talk about using a Core2Duo, and an E6600 at that. [Not that there's anything wrong with that, but you clearly are on an entirely different track than the hardcore responses would suggest.]

All of the RAID stuff is probably beyond your needs and ability and you should focus on the following instead:
(1) Core2QUAD on a newish mobo -- preferably get a Q9450 or, if money really is no object, a Q9770 on an X48 chipset mobo

(2) 8GB of DDR2 (4 x2GB) is probably fine; you can go DDR3 if you have the mobo for it

(3) MINIMUM of 3 HDDs -- 1 for the O/S, 1 for the SOURCE video, 1 for the OUTPUT;
Source & Output should be set up on 2 different controllers if possible (many mobos have a 2ndary controller for SATA drives in order to give you a lot of ports)

(4) on top of (3), you could RAID any of the 3 drives for performance, but you may find that it's not worth it, especially the first time a single drive in a 2 drive RAID setup dies, taking the set with it. If you're going for RAID for speed, and you don't care about safety, go right ahead. If you do, you're going to have to have a small pile of drives. The "return" on that investment may not be as exciting as you'd expect.

As others have said, unless you are doing some kind of super hardcore work, the HDD speed won't matter much in terms of ENCODE timings, assuming you at least separate the source and destination to different HDDs... again, it's not like you are going to be able to encode faster than your HDD's can read & write data.

In this equation, CPU speed (& core count) will be carrying 95% of the burden of how fast things are with HDD speed helping only a little.

If you wanted to get 'slightly' nuts, you could build a Skulltrail system with DUAL Quadcore processors... an entire second Quadcore would make a lot more difference than the craziest raid setup.

Very nicely put.