When will 10GBase-T reach the consumer level?

drebo

Diamond Member
Feb 24, 2006
7,034
1
81
My use case is on a high-end NAS (2 SSD + 12 spinners) that serves essentially as storage for everything that isn't installed to the local disk on my desktop. Profiles, home directories, etc. are all mounted there. Additionally, that thing routes my internet, provides a few services and should in theory also work as a DVR, live-streaming HDTV.

That's not a high-end NAS. That's a consumer-level computer, and you won't get 10Gb out of it.

SFP+ just isn't all that great.

I don't think you know what you're talking about.

If you're talking about direct-attach copper SFP+ cables (Cisco calls them twinax), then those are meant to be used within a rack to connect equipment to the top-of-rack switch. 10m is plenty.

SFP+ itself is extremely flexible and with the correct modules can go extreme distances.
 

_Rick_

Diamond Member
Apr 20, 2012
3,983
74
91
That's not a high-end NAS. That's a consumer-level computer, and you won't get 10Gb out of it.



I don't think you know what you're talking about.

If you're talking about direct-attach copper SFP+ cables (Cisco calls them twinax), then those are meant to be used within a rack to connect equipment to the top-of-rack switch. 10m is plenty.

SFP+ itself is extremely flexible and with the correct modules can go extreme distances.

Ah, then I guess I need separate modules and cables when exceeding the magic 10m - thanks for clearing that up. I was mostly looking at direct-attach cables, which are available in the retail channel and make the connection quick and easy.
And, no, I probably won't hit anything near 10Gb from the start, but I don't want to run into 1Gb limitations either. Given the price drop for SSDs, I might eventually put another tier of storage onto that server (provided NFS and/or Samba can cope with it; if not, I guess I have to go the way of iSCSI et al.) by RAIDing a few 512GB SSDs.
Also, who knows, there may be other clients eventually too, so server-side, with a decent caching FS, there may well be peaks in excess of 1Gb.
My key point was that, price-wise, 10GbE has reached a point where it can now be considered an option, especially for back-haul links between GbE switches and to nodes that carry a lot of traffic.


...my interest may also be motivated by the fact that my GbE Realtek chip keeps falling back to 100M mode on my server for no discernible reason. It works for a few days, then drops. This is frustrating, and once you consider proper server-grade GbE hardware, 10GbE isn't that much worse - only around three times the cost per port, once you include an SFP module.
 

BirdDad

Golden Member
Nov 25, 2004
1,131
0
71
My home is wired with 450MHz Cat6, not Cat6a. Would I still be able to use 10Gb Ethernet in the future without rewiring everything?
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
InfiniBand is a fast networking system optimised for very high bandwidth. Each card typically has a number of lanes (usually 4, sometimes 12) that can be run in parallel to form a single higher-bandwidth link. This bonding happens at the hardware layer, whereas Ethernet link aggregation is problematic: TCP copes badly with packets being spread across multiple links, so most Ethernet bonding solutions are deliberately constrained (typically keeping each flow on a single link) to prevent TCP from going horribly wrong.

The InfiniBand protocol is much more sophisticated than Ethernet (with loads of features designed for supercomputing, etc.), but like Ethernet you can run IP over it. InfiniBand's native transport is actually more flexible than TCP, but using those features requires application support. However, there is nothing stopping you from running TCP over InfiniBand (IPoIB) and treating it just like Ethernet.

The only problem you might run into with InfiniBand is that most Ethernet NICs have TCP offload hardware, whereas most InfiniBand cards don't, so the TCP packet processing must be done in software by the CPU. You may find that CPU usage gets very high at high data rates.
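If you're curious what that overhead looks like on your own hardware, here is a rough Python sketch (the host, port, and transfer size are just placeholders) that pushes 1 GiB through a TCP socket and reports throughput along with CPU-seconds burned per GB. It's only an illustration of the software TCP processing cost, not a proper benchmark - run it against a real IPoIB or 10GbE peer to get meaningful numbers.

```python
# Rough sketch: measure TCP throughput and CPU time for a bulk transfer.
# Loopback by default; point HOST at an IPoIB or 10GbE peer for real numbers.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007   # placeholder endpoint
TOTAL_BYTES = 1 << 30             # move 1 GiB
CHUNK = 1 << 20                   # 1 MiB per send

def serve():
    # Minimal sink: accept one connection and discard everything it sends.
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

threading.Thread(target=serve, daemon=True).start()
time.sleep(0.2)                   # give the listener a moment to start

buf = b"\0" * CHUNK
cpu_start = time.process_time()   # CPU time used by this process (all threads)
wall_start = time.monotonic()

with socket.create_connection((HOST, PORT)) as sock:
    sent = 0
    while sent < TOTAL_BYTES:
        sock.sendall(buf)
        sent += CHUNK

wall = time.monotonic() - wall_start
cpu = time.process_time() - cpu_start
print(f"{sent / wall / 1e6:.0f} MB/s, {cpu / (sent / 1e9):.2f} CPU-s per GB")
```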
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
My home is wired with 450MHz Cat6, not Cat6a. Would I still be able to use 10Gb Ethernet in the future without rewiring everything?

Probably. The main benefit of Cat6a is that it has bumpers on the cables, so they can't be packed closely together.

10GBase-T is sensitive to interference from one Cat6 cable to a neighboring one (alien crosstalk), so for long runs (>150 feet) you should use Cat6a, because the bumpers keep the cables a few mm apart and reduce that interference.

Obviously, this type of interference is only an issue with long runs of tightly packed cables (like a neatly wired datacenter). If your wiring is untidy and the cables are kept slightly separated from each other, you're unlikely to have problems.
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
There are no bumpers on Cat6a that are normal/standard. It operates at up to 500MHz and cross-talk begins at 350MHz. It's got better insulation to resist crosstalk and thus can go longer distances.

It's heavier, bulkier and more unwieldy though, so unless you really need it, it's a PITA.

For 10G and longer distances, fiber to an aggregation closet makes more sense.
 

Red Squirrel

No Lifer
May 24, 2003
70,671
13,835
126
www.anyf.ca
Woah that is tempting. Are those SFPs standard? Wonder if that would work in my Dell switch too. Would work great to setup a 10g link from the switch to the file server. Maybe get a card for the VM server too.

ch33zw1z

Lifer
Nov 4, 2004
39,795
20,390
146
Woah that is tempting. Are those SFPs standard? Wonder if that would work in my Dell switch too. Would work great to setup a 10g link from the switch to the file server. Maybe get a card for the VM server too.

1GbE SFPs over glass aren't in as much demand anymore IMXP, so they're cheaper; a 10GbE SFP+ will be more $.

As far as the InfiniBand thing, research wisely. I haven't seen InfiniBand anything in a small/medium business environment. IMXP, it's still isolated to the enterprise market. (That doesn't mean it isn't out there; I just haven't seen it yet.)

InfiniBand is likely to cost you as much as 10GbE.
 

ch33zw1z

Lifer
Nov 4, 2004
39,795
20,390
146
There are no bumpers on Cat6a that are normal/standard. It operates at up to 500MHz and cross-talk begins at 350MHz. It's got better insulation to resist crosstalk and thus can go longer distances.

It's heavier, bulkier and more unwieldy though, so unless you really need it, it's a PITA.

For 10G and longer distances, fiber to an aggregation closet makes more sense.

I agree, and most customers I service do as well. 10GbE links are typically over glass.
 

Red Squirrel

No Lifer
May 24, 2003
70,671
13,835
126
www.anyf.ca
Yeah, I'll probably wait it out before I do anything 10GbE. I don't really NEED it; it's more that I WANT it. Speed is probably not as much of an issue as latency is, as NFS has a bit of latency to it compared to local direct-attached storage. I could use iSCSI, but the nature of my system requires the same data to be accessible from multiple servers. So I'll live with what I've got; it works well enough.
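If you want to sanity-check whether latency rather than raw throughput is the bottleneck, something like this quick Python sketch works - the two paths are placeholders for a local file and an NFS-mounted copy, and client-side attribute caching will absorb some of the round trips, so treat the numbers as rough.

```python
# Rough sketch: compare per-operation latency on a local file vs. an NFS mount.
# Paths are placeholders; both files must already exist.
import os
import time

PATHS = {
    "local": "/home/user/testfile",   # hypothetical local file
    "nfs": "/mnt/nas/testfile",       # hypothetical file on the NFS mount
}
ITERATIONS = 1000

for label, path in PATHS.items():
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        os.stat(path)                 # metadata lookup
        with open(path, "rb") as f:   # open + small read
            f.read(4096)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed / ITERATIONS * 1e3:.2f} ms per stat+open+read")
```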
 

code65536

Golden Member
Mar 7, 2006
1,006
0
76
About 10 years ago, I got my first GbE-capable computer. It was a cheap $300 Dell tower with a Broadcom GbE chip embedded on the motherboard and connected via PCIe (which was new back then). And it was a small, bare chip, with no heat sink or other form of special cooling. And yes, it really does hit gigabit speeds (over 90MB/s over Windows SMB).

What I find striking about 10GbE is that all the NICs are big hulking cards with large heatsinks, and the switches all have loud, powerful fans. Yes, I know, we're pushing an order of magnitude more data and so we need an order of magnitude more processing power. But hell, it's been 10 years! Haven't silicon processes improved enough in the past decade that we should be able to handle this load with roughly the same power profile?
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
About 10 years ago, I got my first GbE-capable computer. It was a cheap $300 Dell tower with a Broadcom GbE chip embedded on the motherboard and connected via PCIe (which was new back then). And it was a small, bare chip, with no heat sink or other form of special cooling. And yes, it really does hit gigabit speeds (over 90MB/s over Windows SMB).

What I find striking about 10GbE is that all the NICs are big hulking cards with large heatsinks, and the switches all have loud, powerful fans. Yes, I know, we're pushing an order of magnitude more data and so we need an order of magnitude more processing power. But hell, it's been 10 years! Haven't silicon processes improved enough in the past decade that we should be able to handle this load with roughly the same power profile?

Your first stop should be noting that 'real' servers, especially 1U ones, are still very loud too. They really aren't meant for home use in quiet environments.

Cooling: LOUD / COOLS / EXPENSIVE [Choose two]
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
Your first stop should be noting that 'real' servers, especially 1U ones, are still very loud too. They really aren't meant for home use in quiet environments.

Cooling: LOUD / COOLS / EXPENSIVE [Choose two]
Wut?

1U servers exist because of density. It's often cheaper to get 2U and 4U designs in the same power density because of reduced thermal constraints. Your 3 choices can be "all of the above" as long as you ignore size.
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
Wut?

1U servers exist because of density. It's often cheaper to get 2U and 4U designs in the same power density because of reduced thermal constraints. Your 3 choices can be "all of the above" as long as you ignore size.

You are trolling now.

A true server, serving up 10G or more, totally quiet, and CHEAP?
 

heymrdj

Diamond Member
May 28, 2007
3,999
63
91
What's your gaming desktop? Jesus, this isn't OT. Provide details esp. your advantage running this.

Advantage? How about you post why you haven't exactly run into any quiet servers? Hell, our 2U boxes are as quiet as a desktop: HP DL380 G8, 2x 8-core Xeons, 256GB of RAM, 2 SAS drives, and 2 Quadro K4000s.

Gaming desktop is: Antec 900 chassis, 1x AMD 1090T, 2x Radeon 6970 in CrossFire, 32GB of RAM, MSI FX mainboard, Antec EarthWatts 750.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
You are trolling now.

A true server, serving up 10G or more, totally quiet, and CHEAP?

I'm trolling? You put yourself in a hole. You said in reference to servers that you can have LOUD / COOL / EXPENSIVE, and that the market can pick two of them.

As I already stated, that isn't the case at all. The market is more dynamic than the constraints you've provided, and I listed exactly how your constraints are incorrect.

As usual, instead of taking something from it, you'll spend all day trying to weasel your way around it. Fortunately, the comment wasn't for you, but for people new to this segment, so they don't take your comments on the matter as necessarily the only truth. Chill. :rolleyes:
 

Red Squirrel

No Lifer
May 24, 2003
70,671
13,835
126
www.anyf.ca
A lot of the newer server/network equipment is actually very quiet. The "newer" (it's been 4+ years since I played with them) Dell 1Us are ridiculously quiet. One was sitting in the office and I kept thinking it was off. The last two servers I bought (Supermicros) for home are also very quiet, but not as quiet as that Dell one. My last server is a 2U and it's about as loud as a typical workstation, so not quiet but not a jet engine either.

Either way, noise is not really an issue as server stuff is typically in its own room. My server room is fairly loud, but it won't be too bad once I drywall and insulate it. At work (a telco CO) some of the walls have like 3 layers of drywall; I think that's more a fire code thing, but wow does that ever block sound.
 

ch33zw1z

Lifer
Nov 4, 2004
39,795
20,390
146
Many of the new x86 servers I work on are very quiet after POST. During POST the fans rev full and then slowly go down.

It's all relative though. There are racks of equipment I'm around often that require a yearly hearing exam and hearing protection whenever I'm near them.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I could use iSCSI, but the nature of my system requires the same data to be accessible from multiple servers. So I'll live with what I've got; it works well enough.


You're in luck: iSCSI works great with multiple hosts/servers.