When will 10GBase-T reach the consumer level?

lagokc

Senior member
Mar 27, 2013
808
1
41
10Gigabit ethernet PCIe cards are only $350 each... if for some reason you absolutely needed a 10Gb connection in your home it isn't that out of reach to get a pair and a crossover cable.

Does the 10GBase-T standard still support crossover cables?

My guess is it'll be a few years before it's nearly as common as 1Gb is now; most people are perfectly happy with 54Mb 802.11g...
 

lagokc

Senior member
Mar 27, 2013
808
1
41
You only need 400Mb/s to stream at 4k resolution. What sort of "large files" are you moving around your house that need more than 1Gb/s? Entire backups of multi-TB drives?
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
Yup, technologies are available now for high-bandwidth transfers in a point-to-point topology. Mesh networks will not be viable at >1Gb capacities for some time.

First-generation Intel NetEffect 10Gb Ethernet cards (over CX4 or SFP) are available for about $150 on the used market.

For storage there are even more options. SAS is available, of course, at the best price/performance. $350 is enough to get you a used 9201-16i, which is good for 16 SAS2 links, or a whopping 96Gb/s of half-duplex bandwidth (192Gb/s full duplex). The problem is that these are direct-attached technologies that only multiplex at the backplane level.

For more network-ready storage technologies, the standardization of 8Gb FC in the enterprise unified-storage market has left a glut of 4Gb FC equipment on the used market. 4Gb FC cards are available for $20-$30 a plug, and are easily multipathed on most systems (like VMware). This means you can easily get 8Gb of multipathed bandwidth per card.

There is also Infiniband, which is one of my favorite technologies as it can be used for storage, networking, or both. 10Gb hardware is available for around $100, with costs increasing to about $200 for 20Gb hardware, and then going up exponentially for the recent 56Gb/s FDR Dual Port cards.

The real costs are in the meshing. 10GBase-T Ethernet switches are still around $1,000 per port, with FC switches around a tenth of that at roughly $100 per port. Infiniband switches fluctuate wildly on the used market but average around $250 per port. SAS switches are also around $150 a port but lose the benefits of virtualized storage.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
True but with a much greater emphasis on "media streaming" and sharing other large files, it would definitely be a huge improvement.

Most people with large files have them on mechanical drives which can't do much more than 100MB/s sustained.
1Gbit gives you pretty much enough bandwidth to max out your typical single mechanical drive for transfers A->B
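That math is easy to sanity-check. Here is a rough sketch in Python; the link-efficiency and drive figures are nominal assumptions, not measurements:

```python
# Rough comparison of a 1GbE link against a single mechanical drive.
# Assumed figures: ~94% usable link efficiency (Ethernet/IP/TCP overhead),
# ~100 MB/s sustained for a typical mechanical drive.

LINK_GBPS = 1.0
EFFICIENCY = 0.94          # payload fraction after protocol overhead (assumption)
HDD_MBPS = 100             # sustained MB/s for a typical mechanical drive (assumption)

usable_mbs = LINK_GBPS * 1000 / 8 * EFFICIENCY   # megabytes per second of payload
print(f"usable 1GbE throughput: ~{usable_mbs:.0f} MB/s")
print(f"drive is the bottleneck: {HDD_MBPS < usable_mbs}")
```

So a single mechanical drive at ~100MB/s sits just below what one gigabit link can actually deliver, which is why 1GbE feels "good enough" for that workload.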
 

kevnich2

Platinum Member
Apr 10, 2004
2,465
8
76
Most people with large files have them on mechanical drives which can't do much more than 100MB/s sustained.
1Gbit gives you pretty much enough bandwidth to max out your typical single mechanical drive for transfers A->B

Most non-SSD drives I've seen only give around 300-400Mb/s transfer rates, which is still under half the speed of gig. Put in an SSD, or a bunch of drives in RAID, and your speed goes up. I've really only seen 10Gb be necessary (enough to justify the cost) in business environments, in SAN applications where the array is serving several servers.
 

Unoid

Senior member
Dec 20, 2012
461
0
76
Reviving old thread.

When can we start seeing 10Gbit Ethernet routers and motherboard chipsets with integrated 10Gb Ethernet?

Gigabit internet is becoming more prevalent, and home users are transferring large amounts of data on home networks (NAS systems).

I can saturate my home 1Gbit Ethernet network plenty when transferring back and forth with my NAS, and also streaming Plex, etc.
 

Eeqmcsq

Senior member
Jan 6, 2009
407
1
0
Reviving old thread.

When can we start seeing 10Gbit Ethernet routers and motherboard chipsets with integrated 10Gb Ethernet?

Gigabit internet is becoming more prevalent, and home users are transferring large amounts of data on home networks (NAS systems).

I can saturate my home 1Gbit Ethernet network plenty when transferring back and forth with my NAS, and also streaming Plex, etc.

I guess when the demand to go faster than 1Gb Ethernet is great enough for manufacturers to start designing and pumping out cheapo 10Gb Ethernet NICs and switches. The problem is that the mass market just doesn't seem to be asking for faster-than-1Gb Ethernet speeds. Sure, there are some home and business cases that need more than 1Gb Ethernet, but there's just not enough demand to justify manufacturers switching to 10Gb Ethernet.

Basically, we're stuck at 1Gb Ethernet for quite a long time.
 
Feb 25, 2011
16,994
1,622
126
Don't underestimate the inertia of all the cabling out there that won't support >1Gb.

That's a lot of buildings to rewire.
 

azazel1024

Senior member
Jan 6, 2014
901
2
76
Windows 8/8.1. Multiple GbE NICs. Call it a day. SMB3 + SMB Multichannel works a treat.

I have a pair of NICs in my machines, when not disk limited by my RAID array (the arrays are getting pretty filled), I hit 237MB/sec between my server and my desktop.
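That 237MB/sec figure lines up almost exactly with what two GbE links can carry once frame and TCP/IP overhead is subtracted. A back-of-the-envelope sketch, assuming standard 1500-byte frames and no TCP options:

```python
# Back-of-the-envelope: usable throughput of N aggregated GbE links with
# SMB Multichannel, assuming standard 1500-byte Ethernet frames over TCP/IPv4.

MTU = 1500                 # IP packet size
TCP_IP_HEADERS = 40        # IPv4 (20) + TCP (20), no options (assumption)
WIRE_OVERHEAD = 38         # preamble 8 + Ethernet header 14 + FCS 4 + interframe gap 12

payload_per_frame = MTU - TCP_IP_HEADERS          # 1460 bytes of file data per frame
wire_bytes_per_frame = MTU + WIRE_OVERHEAD        # 1538 bytes on the wire per frame
efficiency = payload_per_frame / wire_bytes_per_frame

links = 2
mb_per_s = links * 125 * efficiency               # 125 MB/s raw per GbE link
print(f"efficiency per link: {efficiency:.1%}")
print(f"{links}x GbE usable: ~{mb_per_s:.0f} MB/s")
```

With two links that works out to roughly 237MB/s of payload, so the dual-NIC setup described here is running at essentially wire speed.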

10GbE is probably at LEAST 3 years away from "consumer penetration". I suspect we'll start seeing some enthusiast motherboards with 10GbE onboard NICs somewhere in the 2016-2017 time range and it'll probably quickly move downstream over the following 2-3 years.

Routers are close with 11ac. Once MU-MIMO and 160MHz channels come along, manufacturers will either have to add 10GbE ports or allow link aggregation if they want to take full advantage of the radios in them, since a single GbE link is going to be a limiting factor in multi-device scenarios (and maybe even on a really good link with a 3x3 client and 160MHz; that is in theory around 2.7Gbps, of which a good link might deliver around 1.4-1.8Gbps of actual payload over the air). I suspect they'll do link aggregation first (it's just firmware) before we see 10GbE, but it is coming.
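The over-the-air arithmetic checks out against the published 802.11ac PHY rates. A sketch, assuming the VHT MCS 9 per-stream rate of 866.7Mbps at 160MHz with a short guard interval; it lands just over 2.6Gbps, in line with the rough figure above:

```python
# 802.11ac peak PHY rate: spatial streams x per-stream rate at 160MHz, MCS 9, short GI.
PER_STREAM_MBPS = 866.7    # published VHT MCS 9 rate at 160MHz, short GI

def phy_rate_gbps(spatial_streams: int) -> float:
    """Peak PHY rate in Gbps for the given stream count at these assumptions."""
    return spatial_streams * PER_STREAM_MBPS / 1000

print(f"3x3 client @ 160MHz: {phy_rate_gbps(3):.1f} Gbps PHY")
# Real payload throughput over the air is far lower than PHY; roughly
# 50-65% of PHY is a common rule of thumb for a good link (assumption).
print(f"estimated payload: {phy_rate_gbps(3)*0.5:.1f}-{phy_rate_gbps(3)*0.65:.1f} Gbps")
```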

Crap wiring aside, Cat5e supports 10GbE. Basically anything that'll do 1GbE will do 10GbE, at least over modest distances. With well-terminated cabling and low interference (i.e. not running alongside a bunch of power lines, not in trunks of cables, etc.), Cat5e can support 10GbE up to 45m and Cat6 up to 55m. It takes Cat6a to do 100m.
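The distance limits above fit in a small lookup. This is a sketch using the figures quoted in this post; real-world limits depend on termination quality and interference:

```python
# Max 10GBase-T run length by cable category, per the figures quoted above.
# Cat5e is not officially rated for 10GBase-T, but works over short, clean runs.
MAX_10GBE_RUN_M = {
    "cat5e": 45,
    "cat6": 55,
    "cat6a": 100,   # the only category here rated for a full 100m run
}

def cable_ok(category: str, run_m: float) -> bool:
    """True if a run of run_m meters should carry 10GBase-T on this cable."""
    return run_m <= MAX_10GBE_RUN_M[category.lower()]

print(cable_ok("Cat5e", 30))   # a typical in-house run
print(cable_ok("Cat6", 80))    # too long for Cat6 at 10GbE
```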

In a general residential setup, Cat5e is going to handle 10GbE just fine, crappy termination aside. Even then you'll probably be able to do 10GbE; you just might get a bit less than wire speed, with some packet loss due to bad termination or a bit of interference.

Oh noes, 8000Mbps instead of 10000Mbps.

10GbE is coming in part because storage is taking a quantum leap. SSDs are becoming standard in laptops. They are also becoming standard in desktops, even if they aren't necessarily primary storage there. Even regular spinning disks can easily push well over 1Gbps; a lot of the newer 7200rpm drives can comfortably sustain >160MB/sec. That is no 10Gbps Ethernet, but it is well over the capability of a single 1Gbps link.

I think what is going to push it this time around is 11ac routers. Once they are comfortably able to do more than 1Gbps of aggregate over-the-air bandwidth, manufacturers are going to have to do SOMETHING. Otherwise there are going to be a lot of outcries of "I bought this super fast router... but I can't use all of it because your Ethernet port is bottlenecking me". Maybe not at first, but eventually.

Prices are coming down. You can buy some server boards in the ~$300-400 price range that have integrated 10GbE NICs (a few of them have TWO). Switch prices are coming down too; Netgear has a 10GbE switch that runs around $80 per port. It's still going to take time, but I wouldn't be surprised if in another couple of years you could get a 16-port 10GbE switch for $300-400, a NIC for under $200, and some enthusiast motherboards with integrated 10GbE NICs.

The time is coming. Just slowly.
 

yinan

Golden Member
Jan 12, 2007
1,801
2
71
It is here. There is a nice Netgear 10GbE switch for only about $100 a port.
 

azazel1024

Senior member
Jan 6, 2014
901
2
76
It is here. There is a nice Netgear 10GbE switch for only about $100 a port.

Yeah...talk to me when the price is down to $20 per port and a 10GbE NIC is less than $100.

Or heck, I'd even take $30 a port and less than $150 for a 10GbE NIC.
 

Red Squirrel

No Lifer
May 24, 2003
70,658
13,833
126
www.anyf.ca
I'd love to see 10gb become affordable. Not just nics, but switches. They are still in the $10k+ range though last I checked. Would be nice to use 10g for servers so NFS speeds can be improved for VMs and such. gig is good enough for the rest of the stuff for now.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
Yeah...talk to me when the price is down to $20 per port and a 10GbE NIC is less than $100.

Or heck, I'd even take $30 a port and less than $150 for a 10GbE NIC.

New-pull Mellanox MNPA19 cards (single-port 10Gb Ethernet) are available on eBay for <$60 :) For direct connect this is a great option, as you can build a direct link between two boxes for less than $150 (two 10Gb cards and some twinax between them).

10Gb switches are definitely still a while away from affordable to the masses though :)
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
I would really like the next step in network throughput as well; I have been pegged at the 1Gbit/s limit with my NAS for a long time. It's not that it's necessarily slow at 90-100MB/s, but when I know the drives can push more like 200+ it's a little annoying waiting for it. I even find myself pushing files through compressed pipes just to squeeze out more performance. Alas, the NAS only has one NIC port, so I can't bond the connection despite the desktop being capable of doing so.

The price is just not right yet.
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
Most people with large files have them on mechanical drives which can't do much more than 100MB/s sustained.
1Gbit gives you pretty much enough bandwidth to max out your typical single mechanical drive for transfers A->B

I realize this post is over a year old, but I would like to point out that with NAS becoming more and more common, and with HDDs with 1TB platters being pretty standard, we're seeing 150+MB/s from HDDs; it isn't particularly difficult to max out 1GbE with just a single HDD, let alone a half dozen in a NAS.

And at the rate we've seen 10GbE crawl along, we very well might have SSD technology evolve to the point where it can supply affordable mass storage, and by then 10GbE will likely be a bottleneck.

Windows 8/8.1. Multiple GbE NICs. Call it a day. SMB3 + SMB Multichannel works a treat.

It's a solution, but certainly not an ideal one. It can be great for a server that sees frequent, simultaneous requests from multiple systems on the network, but if you want each client machine to achieve those speeds, that means additional NICs for all those machines, additional wiring, and a managed switch... and while it's certainly a good deal more affordable than 10GbE, it can easily add up to a hefty amount, plus come with a lot more hassle.

I'd love to see 10gb become affordable. Not just nics, but switches. They are still in the $10k+ range though last I checked. Would be nice to use 10g for servers so NFS speeds can be improved for VMs and such. gig is good enough for the rest of the stuff for now.

12-port 10GbE switch @ ~$1600:
http://www.newegg.com/Product/Produc...9SIA24G15Y1174

But even with the relatively affordable ~$150 10GbE NICs you can scrounge up on eBay, it's still priced well outside the range of the average consumer.
 

Red Squirrel

No Lifer
May 24, 2003
70,658
13,833
126
www.anyf.ca
I realize this post is over a year old, but I would like to point out that with NAS becoming more and more common, and with HDDs with 1TB platters being pretty standard, we're seeing 150+MB/s from HDDs; it isn't particularly difficult to max out 1GbE with just a single HDD, let alone a half dozen in a NAS.

And at the rate we've seen 10GbE crawl along, we very well might have SSD technology evolve to the point where it can supply affordable mass storage, and by then 10GbE will likely be a bottleneck.



It's a solution, but certainly not an ideal one. It can be great for a server that sees frequent, simultaneous requests from multiple systems on the network, but if you want each client machine to achieve those speeds, that means additional NICs for all those machines, additional wiring, and a managed switch... and while it's certainly a good deal more affordable than 10GbE, it can easily add up to a hefty amount, plus come with a lot more hassle.



12-port 10GbE switch @ ~$1600:
http://www.newegg.com/Product/Produc...9SIA24G15Y1174

But even with the relatively affordable ~$150 10GbE NICs you can scrounge up on eBay, it's still priced well outside the range of the average consumer.

Not available in Canada. And... it's a Netgear; it's going to die in a few months. :p
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
Not available in Canada. And... it's a netgear. it's going to die in a few months. :p

lol, well, it's an example of how things are changing. There were a few $5K options I could have used as an example, but beggars can't be choosers! When it dies in a few months, just replace it with a second, and a third if/when needed; you'll still come in under that $5K, and well under $10K ;)
 

azazel1024

Senior member
Jan 6, 2014
901
2
76
New-pull Mellanox MNPA19 cards (single-port 10Gb Ethernet) are available on eBay for <$60 :) For direct connect this is a great option, as you can build a direct link between two boxes for less than $150 (two 10Gb cards and some twinax between them).

10Gb switches are definitely still a while away from affordable to the masses though :)

Wow, I didn't realize they were that cheap. You'd still need a couple of SFP+ adapters to go in them, though I know those are fairly cheap. I wonder what kind of power budget that Mellanox card has?

At any rate, it wouldn't work with my current setup, as I have no fiber run between my desktop and server. Just a pair of Cat5e runs (which should support 10GbE just fine; it isn't an excessively long run, but it IS through closed-up walls).

Though maybe an SFP+ 10GBase-T adapter in those cards would let me do it over copper. Hmmm. Though it looks like that ups the cost to about $140 per machine. Still not bad, though. I might have to give it some thought. Though I don't know if I have the spare slots in my desktop (I do in my server). Too bad there's no 4x PCIe 2.0 version; I've got one of those slots free in my desktop.
 

azazel1024

Senior member
Jan 6, 2014
901
2
76
I realize this post is over a year old, but I would like to point out that with NAS becoming more and more common, and with HDDs with 1TB platters being pretty standard, we're seeing 150+MB/s from HDDs; it isn't particularly difficult to max out 1GbE with just a single HDD, let alone a half dozen in a NAS.

And at the rate we've seen 10GbE crawl along, we very well might have SSD technology evolve to the point where it can supply affordable mass storage, and by then 10GbE will likely be a bottleneck.



It's a solution, but certainly not an ideal one. It can be great for a server that sees frequent, simultaneous requests from multiple systems on the network, but if you want each client machine to achieve those speeds, that means additional NICs for all those machines, additional wiring, and a managed switch... and while it's certainly a good deal more affordable than 10GbE, it can easily add up to a hefty amount, plus come with a lot more hassle.



12-port 10GbE switch @ ~$1600:
http://www.newegg.com/Product/Produc...9SIA24G15Y1174

But even with the relatively affordable ~$150 10GbE NICs you can scrounge up on eBay, it's still priced well outside the range of the average consumer.

A managed switch is not needed. You can do it as a direct connection or through an unmanaged switch. In fact, if you do link aggregation on the switch (and thus teaming on the server/client), SMB Multichannel will NOT work, at least not on Windows 8/8.1 (it may work on Server 2012; not sure).

I agree it is a less-than-ideal solution, but it is generally a cheap one in a low-client-count environment, or one where only one or two clients need to move a lot of data to a server and everything else's requirements are sparse. It also does a lot of what teaming will do, so it IS a solution for a server that needs to handle multiple requests if you don't want to get a managed switch (and semi-managed switches are stupid cheap, so that shouldn't really be an issue).

In my setup that is very true; it's really just my desktop that I care about having a very fat pipe to. Everything else is fine with GbE or wireless right now, and my RAID arrays have trouble pushing 2x GbE to its limits these days since they are getting rather full.

My replacement setup could probably push 3x GbE near its limits, though, and I don't have the runs to do that.

10GbE would be VERY nice. However, I'm just pointing out that if you need/want more than 1GbE right now and can't afford 10GbE, there is a workable solution to exceed 1GbE, at least as long as you don't mind working in a Windows environment (for SMB/CIFS, anyway).
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
A managed switch is not needed. You can do it as a direct connection or through an unmanaged switch. In fact, if you do link aggregation on the switch (and thus teaming on the server/client), SMB Multichannel will NOT work, at least not on Windows 8/8.1 (it may work on Server 2012; not sure).

I agree it is a less-than-ideal solution, but it is generally a cheap one in a low-client-count environment, or one where only one or two clients need to move a lot of data to a server and everything else's requirements are sparse. It also does a lot of what teaming will do, so it IS a solution for a server that needs to handle multiple requests if you don't want to get a managed switch (and semi-managed switches are stupid cheap, so that shouldn't really be an issue).

In my setup that is very true; it's really just my desktop that I care about having a very fat pipe to. Everything else is fine with GbE or wireless right now, and my RAID arrays have trouble pushing 2x GbE to its limits these days since they are getting rather full.

My replacement setup could probably push 3x GbE near its limits, though, and I don't have the runs to do that.

10GbE would be VERY nice. However, I'm just pointing out that if you need/want more than 1GbE right now and can't afford 10GbE, there is a workable solution to exceed 1GbE, at least as long as you don't mind working in a Windows environment (for SMB/CIFS, anyway).

You don't sound like the average "consumer." I mean, I'm sure my backup from my SSD gaming station over to my NAS could easily exceed gig, but I never bothered channel-grouping anything because it doesn't bother me that it takes a while.

Heck, you don't see a lot of 10GbE in business except in storage and backhaul. It will get here sooner or later, but we are not quite there.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
Wow, I didn't realize they were that cheap. You'd still need a couple of SFP+ adapters to go in them, though I know those are fairly cheap. I wonder what kind of power budget that Mellanox card has?

At any rate, it wouldn't work with my current setup, as I have no fiber run between my desktop and server. Just a pair of Cat5e runs (which should support 10GbE just fine; it isn't an excessively long run, but it IS through closed-up walls).

Though maybe an SFP+ 10GBase-T adapter in those cards would let me do it over copper. Hmmm. Though it looks like that ups the cost to about $140 per machine. Still not bad, though. I might have to give it some thought. Though I don't know if I have the spare slots in my desktop (I do in my server). Too bad there's no 4x PCIe 2.0 version; I've got one of those slots free in my desktop.

Power consumption is minimal. Those single-port Ethernet-only cards use a little less than 6 watts apiece with an SFP+ loaded. And you don't need fiber if the systems are within 30 feet (total cable run length); you could just use TwinAx direct-attach cabling like I referenced in my first post. SFP+ modules are already attached to the cable, and whole assemblies are down to $1/foot at this point. Most cables below 5-7 meters are passive and very cheap to obtain; above that they are active cables, and distance tops out around 15 meters. In exchange you get a connection with far less latency (0.1us vs 2us) and far less power usage (1W vs 4-6W for 10GbE transceivers).

If you need more than that, I'd still personally go with 10Gb SFP+ optics at $20 apiece and then whatever fiber length I needed. Cat6a still seems to have a long way to go to get into the home for 10Gb, while fiber and twinax reach those distances more easily, and therefore used-equipment prices are falling to reachable costs much more quickly than Cat6a.