
Okay now i REALLY want an SSD lmao

Heh, I love when he piles up the parts, then slowly slides a fire extinguisher towards the pile of parts.
 
That video is very very old.
The "really fast, really fast" is still amusing though. With a sufficient controller you could get 2GB/s reads/writes from 8-9 X25-E drives, though without the 6TB capacity.
 
Originally posted by: ilkhan
That video is very very old.
The "really fast, really fast" is still amusing though. With a sufficient controller you could get 2GB/s reads/write from 8-9 X25-E drives, though without the 6TB capacity.

Yeah, the YouTube publish date is March 02, 2009, and I distinctly recall there being at least two prior threads specifically related to this same YouTube vid.

Not that I'm calling a "repost" or anything like that, I like seeing this material resurface every now and then because it is just so "out there" and cool in terms of what people can do with just a pile of off-the-shelf hardware.

And you're right, if bandwidth was solely the name of the game then they'd saturate two or three raid controllers operating in ganged mode with far fewer of even today's top of the line MLC-based drives (either the G2 or the Vertex Turbo) but it wouldn't have that awesome capacity number to go with it.

Get yourself one of these 4x PCIe mobos and throw in three of these ARC-1680IX-12-4G ganged in raid-0 mode along with six of these OCZSSD2-1VTX250G per controller and you'd have yourself a rather bandwidth robust 4.5TB disk-subsystem. (should hit 3GB/s sustained with no problems whatsoever, maybe even 4GB/s sustained)
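Quick back-of-the-envelope check on that build, assuming spec-sheet-ish numbers (250GB capacity and ~250 MB/s sequential per Vertex Turbo; real sustained throughput through three ganged controllers would land somewhere lower):

```python
# Rough sanity check of the proposed 3-controller, 18-drive RAID-0 build.
# Per-drive figures are assumptions: 250 GB and ~250 MB/s sequential
# per OCZ Vertex Turbo (spec-sheet ballpark, not measured).
controllers = 3
drives_per_controller = 6
drive_capacity_gb = 250
drive_throughput_mbps = 250  # MB/s, sequential

total_drives = controllers * drives_per_controller
capacity_tb = total_drives * drive_capacity_gb / 1000
aggregate_gbps = total_drives * drive_throughput_mbps / 1000

print(f"{total_drives} drives, {capacity_tb:.1f} TB, ~{aggregate_gbps:.1f} GB/s aggregate")
# → 18 drives, 4.5 TB, ~4.5 GB/s aggregate
```

So the raw drive-side numbers do support the "3GB/s sustained, maybe 4GB/s" guess, with controller and striping overhead eating the gap.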
 
My 10Gb/s Ethernet cables came in! 😀

You know what's better than bandwidth with large SSD raid arrays? They take n times longer before TRIM is required, where n is the number of member disks!
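That scaling claim is just stripe arithmetic: in a RAID-0 array each member absorbs roughly 1/n of the total writes, so (all else being equal) any one drive takes n times as long to chew through its pool of pre-erased blocks. A minimal sketch with made-up numbers (80GB of free blocks per drive, 100 MB/s of writes hitting the array):

```python
# Illustration of the n-times-longer-before-TRIM claim for an n-disk stripe.
# Numbers are hypothetical: 80 GB of erased blocks per drive, 100 MB/s of
# sustained writes across the whole array.
free_blocks_gb_per_drive = 80
array_write_rate_mbps = 100  # MB/s hitting the array as a whole

def hours_until_trim_needed(n_disks):
    # Striping spreads the writes, so each member sees 1/n of the rate.
    per_drive_rate = array_write_rate_mbps / n_disks
    seconds = free_blocks_gb_per_drive * 1000 / per_drive_rate
    return seconds / 3600

print(f"1 disk:  {hours_until_trim_needed(1):.2f} h")
print(f"8 disks: {hours_until_trim_needed(8):.2f} h")
```

Same total data written, eight times longer before any member runs dry.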

Future IOP hosts will probably be like GPUs to deal with the additional strain, particularly with parity levels. If someone made a SAS6 controller that used CUDA, it would probably be able to keep up with a substantial number of drives across expanders. Of course, PCIe x16 bandwidth limits are reached very quickly, so we're back to multiplexing hosts - again. 🙁
 
Kinda kills the usefulness of it once you run into the PCIe barrier lol. We need a board with 4 PCIe 5 16x slots just to keep up with today's large RAIDs of SSDs lol. I'm betting they double the pin density of the slots next and add another 2 layers to the mobo PCB just for PCIe lanes lol
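For a feel of where that barrier sits, here's a rough saturation count under assumed figures (~500 MB/s usable per PCIe 2.0 lane after encoding overhead, ~250 MB/s per fast MLC SSD streaming sequentially):

```python
# Rough estimate of how many sequential-streaming SSDs saturate a PCIe slot.
# Assumed figures: ~500 MB/s usable per PCIe 2.0 lane, ~250 MB/s per drive.
lane_mbps = 500   # PCIe 2.0, per lane, ballpark after encoding overhead
drive_mbps = 250  # one fast MLC SSD, sequential

def drives_to_saturate(lanes):
    """How many drives it takes to fill a slot with this many lanes."""
    return (lanes * lane_mbps) // drive_mbps

for lanes in (4, 8, 16):
    slot_gbps = lanes * lane_mbps / 1000
    print(f"x{lanes}: ~{slot_gbps:.0f} GB/s slot, saturated by ~{drives_to_saturate(lanes)} drives")
```

By this math a full x16 slot tops out around 32 such drives, which is why big arrays end up spread across multiple host adapters.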
 
Originally posted by: Rubycon
My 10Gb/S Ethernet cables came in! 😀

You know what's better than bandwidth with large SSD raid arrays? They take n times longer before trim is required where n is the number of member disks!

Future IOP hosts will probably be like GPUs to deal with the additional strain particularly with parity levels. If someone made a SAS6 controller that used CUDA it would probably be able to keep up with a substantial number of drives across expanders. Of course PCI-E 16X bandwidth limits are reached very quickly so we're back to multiplexing hosts - again. 🙁

What do you need 10Gb/s ethernet for?

I can see it now, the next gen of Intel's IOP processors will be Larrabee-based :laugh: You'll hook your hard drives into your dual-use discrete GPU card.
 
Originally posted by: Idontcare

What do you need 10Gb/s ethernet for?

I can see it now, the next-gen of Intel's IOP processors will be Larrabee-based :laugh: You'll hook your hard-drives into the your dual-use discrete gpu card.

2GB/s SAN storage.

CUDA would rule for IOP, especially compound parity arrays. :Q
 