
Fibre Channel vs. SCSI?

Kniteman77

Platinum Member
So I see Fibre Channel drives, and I see SCSI drives, and they're both used in server applications where high bandwidth and low failure rates are required... but what is the difference?

What do the connections look like ?

Can I hook them both up using SCA and a SCSI raid card ?

Anyway, if anyone wants to leave anything about this here it'd be much appreciated.
 
No, FC drives don't use 80-pin SCA connectors; they use an edge connector, something that looks like a VHD LVD plug.

What's the application you're going to use this for? FC is too expensive for desktop use.
 
Forget Fibre Channel. That's a technology for SAN/NAS applications, where huge numbers of drives move huge amounts of data. Fibre Channel speeds are measured in Gbit/s, where SCSI speeds are measured in MB/s.
And Fibre Channel technology is HUGELY expensive. Fibre Channel uses copper or fiber-optic connectors and is not compatible with normal SCSI... and NO, you cannot use Fibre Channel drives with your normal SCSI card/SCA interface.

Main things in Fibre Channel vs. SCSI:

speed (interface): 2 Gbit/s vs. 320 MB/s
cable distances: fibre up to 10 km vs. ~25 m (differential)
max # of devices: 126 nodes / 16 million devices (fabric) vs. 16 devices

The Fibre Channel connector and cables are much like your standard network cable/RJ45 connector... just to give you the idea.
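As a rough sanity check on the numbers above: a 2 Gbit/s FC link uses 8b/10b encoding (8 data bits per 10 line bits), which works out to about 200 MB/s of payload, in the same ballpark as Ultra320 SCSI's 320 MB/s. A minimal back-of-envelope sketch (the function name and the simplified encoding model are assumptions, and real-world throughput is lower):

```python
# Rough comparison of interface rates in MB/s.
# Assumes 8b/10b encoding for FC and ignores protocol overhead.

def fc_throughput_mbs(line_rate_gbps):
    """Approximate usable payload rate of a Fibre Channel link in MB/s."""
    line_rate_bps = line_rate_gbps * 1_000_000_000
    data_bps = line_rate_bps * 8 / 10    # 8b/10b encoding: 80% of line rate
    return data_bps / 8 / 1_000_000      # bits -> bytes -> MB

scsi_u320_mbs = 320                      # Ultra320 SCSI: 320 MB/s shared bus

print(f"2 Gbit/s FC : {fc_throughput_mbs(2):.0f} MB/s")  # ~200 MB/s
print(f"Ultra320    : {scsi_u320_mbs} MB/s")
```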
 
Cool thanks 🙂

I'm just looking for some cheap SCSI drives to make a RAID 5 array using a recently acquired controller. I've had problems with stuff going out on me lately and I want data redundancy.

I saw some Fibre Channel drives that had both Fibre Channel and SCSI in the name, and I couldn't tell from the pictures what kind of connector they used.

Also, I was thinking of putting 2 drives in RAID 0 for my new OS partition; anyone have any recommendations for that?

The machine is going to be a dual 3.3 GHz Xeon with a gig of RAM, mostly used for everyday use and gaming. I just want it to be quick on the draw when it comes to loading things.
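The trade-offs being weighed in this thread (2 drives in RAID 0 for speed vs. 3+ drives in RAID 5 for redundancy) can be roughed out with a quick helper. This is just a sketch; `raid_summary` is a hypothetical name and the capacities are nominal:

```python
# Quick sketch of usable capacity vs. fault tolerance for the RAID
# levels discussed in the thread, assuming identical drives.

def raid_summary(level, n_drives, drive_gb):
    """Return (usable GB, number of drives that may fail) for RAID 0/1/5."""
    if level == 0:                            # striping: all capacity, no redundancy
        return n_drives * drive_gb, 0
    if level == 1:                            # mirroring: one drive's capacity
        return drive_gb, n_drives - 1
    if level == 5:                            # striping + parity: lose one drive's worth
        if n_drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (n_drives - 1) * drive_gb, 1
    raise ValueError("unsupported RAID level")

print(raid_summary(0, 2, 74))   # two 74 GB Raptors in RAID 0 -> (148, 0)
print(raid_summary(5, 3, 73))   # three 73 GB SCSI drives in RAID 5 -> (146, 1)
```

RAID 0 doubles capacity and sequential speed but any single drive failure loses the whole array, which is why the thread pairs it with a separate redundant array for data.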
 
Fibre Channel is just a different interface. Most SCSI drive lines have FC-AL models available.

LVD cables have a max length of 1.5 meters between devices and 12 meters for the whole SCSI chain, but the actual reliable length is generally less than that. SCSI is for drives inside a server.

FC is a pretty broad serial topology that allows a nearly infinite number of devices and cables up to 10km in length.

FC is 4 Gbit/s (but usually operates at 1 or 2 Gbit/s, I think) and SCSI is 320 MB/s, so the bandwidth is about the same. The drives all perform similarly and there's no reason to get FC unless you need it (and you'd know if you need it).

FC drives have a small flat 40-pin FC-SCA connector, whereas SCSI is a two-row 68-pin LVD connector or an 80-pin SCA-2 connector.

FC-AL optical:
http://www.rbarkerphoto.com/co...al/fc-al-conn1-400.jpg

Obviously, FC requires a FC controller card:
http://hypermicro.com/product....CTAD405&dept_id=13-003

Most controller cards have internal HSSDC connections then external copper or optical connections. The HSSDC cable connects the controller to the backplane which has FC-SCA connections.

 
That is true... many tests show dual-CPU setups take a 1-5% hit...

I have a dual Xeon and use Quantum 73 GB 10k drives in RAID 1 for boot and swap + VM storage, and a set of three 146 GB drives in RAID 5 for data.

Windows 2003 sits on it, running DC + DFS to the computers in the house... it's pretty good that I can log in from any PC at home, download my profile, and work/surf as usual... and I have 6 PCs total in addition to the server.
 
I jumped into the SCSI arena when used UW HVDs were cheaper than IDE drives. Unless you have a lot of money and a lot of time to spend, just stick with IDE. Just get some Raptors and be done with it. There are RAID 5 SATA controllers if you really want. If you are not extremely careful how you build your system, those SCSI drives will fail a lot quicker than IDE drives would.
 
Originally posted by: cubby1223
If you are not extremely careful how you build your system, those SCSI drives will fail a lot quicker than IDE drives would.

Care to explain that statement?

You're wrong, unless by "fail" you simply mean "not work until you hook them up right."
 
I'm talking about the massive amounts of heat SCSI drives generate; if they're installed the way a normal IDE hard drive is installed in most desktop systems, they will die quicker. Plus he's talking about setting up a RAID 5 array, which means a minimum of 3 SCSI hard drives going into the case.
 
I just want a REALLY fast OS partition, and I'm wondering whether 2 15k SCSI drives or 2 74 GB Raptors (either one in RAID 0) would give me better performance.
 
A general rule here is that FC-AL is for high-demand servers only and SCSI is for workstations and light server duty.

FC-AL connects to its drives externally via two optical cables.

Originally posted by: Kniteman77
I just want a REALLY fast OS partition, and I'm wondering whether 2 15k SCSI drives or 2 74 GB Raptors (either one in RAID 0) would give me better performance.


Two 15k SCSI drives are better than two Raptors. And don't forget that if you are going with SCSI, both ends of the bus need to be terminated.

http://www.granitedigital.com/ has some of the best terminators and cables. Two active terminators should give you the best performance. Call them for advice and custom options.
 
Soooooooooo, is there a cheap way to have two computers ten feet apart share the same RAID array of IDE drives? (As opposed to buying new drives for a second computer, with each computer booting from a different partition in the array)
 
If you really, really, really need fast throughput, get FC (it's also the one to use for remote NAS or centralized HDD storage -- like when your primary computer and secondary computer are 50 feet apart and they're sharing the same disk array). You won't face the loss-of-throughput problems you'd get with a l-o-n-g patch cable with IDE, SATA, and even SCSI.

BTW, eBay sometimes has smaller FC HDDs for sale pretty cheap (16 GB drives -- I've seen a 12-drive box of refurbs for less than $200). What gets you in the pocketbook is the FC controller and disk array enclosure. But if you're moving gigabyte files over the LAN, it might be what you want.
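The "gigabyte files over LAN" point can be roughed out with simple arithmetic. This sketch assumes nominal line rates and ignores protocol overhead, so real transfers will be slower; `transfer_seconds` is a made-up name for illustration:

```python
# Rough transfer-time estimate for moving large files over different
# links; speeds are nominal line rates, overhead is ignored.

def transfer_seconds(file_gb, link_mbit):
    """Seconds to move file_gb gigabytes over a link_mbit Mbit/s link."""
    megabits = file_gb * 8 * 1000   # GB -> Mbit (decimal units)
    return megabits / link_mbit

for name, mbit in [("100 Mbit LAN", 100), ("1 Gbit LAN", 1000), ("2 Gbit FC", 2000)]:
    print(f"{name}: {transfer_seconds(1, mbit):.0f} s per GB")
```

On a nominal 100 Mbit LAN a gigabyte takes on the order of 80 seconds, which is why a faster interconnect matters if you're shuttling large files constantly.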
 
Originally posted by: Terumo
You won't face the problems with loss of throughput like with a l-o-n-g patch cable with IDE, SATA and even SCSI.

Are you saying that it is possible for two or more computers to share and boot off the same drive array with long IDE, SATA, or SCSI patch cables?? Please tell me more.

Sorry to the original poster for posting my question here; the original poster only wanted to know if FC is a reasonable option to speed up a home system. I'm more interested in looking for a cheaper alternative to FC for home systems to boot off a shared storage array (i.e., one drive array for multiple computers).
 
Originally posted by: mapen
Are you saying that it is possible for two or more computers to share and boot off the same drive array with long IDE, SATA, or SCSI patch cables?? Please tell me more.

Sorry to the original poster for posting my question here; the original poster only wanted to know if FC is a reasonable option to speed up a home system. I'm more interested in looking for a cheaper alternative to FC for home systems to boot off a shared storage array (i.e., one drive array for multiple computers).

If you're talking about a diskless workstation, that's a whole other ball of wax. It can be done to run applications and all (and even games), but there's still a problem with standard copper cabling and loss of throughput -- it's not going to match FC speeds. Can't get around that, and that's why fiber is used for long distances.

This is an easier NAS setup, and if you have a spare computer system around....

Background: Tom's Hardware review... 5-minute NAS setup...
http://www6.tomshardware.com/s...29/nas-storage-04.html

Open-E NAS 2.0 - $599 (P4, 256 MB of memory or higher)
https://www.usa-assmann.com/sh...m.asp?itemid=3&catid=3

Or the desktop edition (2 HDD).....

Open-E NAS SOHO - $399 (P3 or higher, 128 MB of memory or higher)
https://www.usa-assmann.com/sh...m.asp?itemid=2&catid=3
 
Originally posted by: mapen
Originally posted by: Terumo
You won't face the problems with loss of throughput like with a l-o-n-g patch cable with IDE, SATA and even SCSI.

Are you saying that it is possible for two or more computers to share and boot off the same drive array with long IDE, SATA, or SCSI patch cables?? Please tell me more.

Sorry to the original poster for posting my question here; the original poster only wanted to know if FC is a reasonable option to speed up a home system. I'm more interested in looking for a cheaper alternative to FC for home systems to boot off a shared storage array (i.e., one drive array for multiple computers).


You can boot over a network. There are network cards that are bootable and will act similar to an HDD controller card when the PC starts. My 3Com 3CR990SVR97 does this; it has its own BIOS. It needs to be connected to an enterprise OS like Window$ 2000 $erver or similar ($$$$). Your computer then becomes what is called a dumb terminal.
 
I was thinking of doing that before but declined because of the issue of refresh rates and bus saturation. That was before Windows 2003 Server came around, which apparently can do more than serve simple apps; it can even serve games over the network.

If the kinks can be worked out, folks would just need one main server and networked dumb terminals. The cost savings would be tremendous.
 