10GBase-T has been a very long wait, and I understand that much of that is down to PHY power and the economies of scale in cloud data centers.
On the other hand, now that NBase-T switches are getting to $50/port or lower, I'd like to see more options on the NIC side of things.
But for some reason every 10GBase-T and NBase-T NIC (single- or dual-port) seems to be stuck at x4 lanes, as if PCIe v1 were still ruling the world, while a single v4 lane (~16 Gbit/s usable) can already feed a 10 Gbit port with headroom and two lanes comfortably cover a dual-port card. How did we wind up in a situation where modern motherboards have plenty of 10 Gbit USB 3.2 ports, but the same data rate on Ethernet is both a $150 premium on X570 motherboards and ties up four lanes for 20 Gbit of bandwidth?
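To put rough numbers on that, here is a quick back-of-the-envelope sketch (my own figures, counting only line-coding overhead and ignoring PCIe protocol overhead, so treat them as approximations):

```python
# Usable PCIe bandwidth per lane vs. what a 10GBase-T port needs.
# Figures account for line coding only (no TLP/DLLP overhead) and are
# approximations, not vendor specs.

PCIE_GENS = {
    # generation: (raw GT/s per lane, encoding efficiency)
    1: (2.5, 8 / 10),      # PCIe 1.x: 8b/10b encoding
    2: (5.0, 8 / 10),      # PCIe 2.x: 8b/10b encoding
    3: (8.0, 128 / 130),   # PCIe 3.x: 128b/130b encoding
    4: (16.0, 128 / 130),  # PCIe 4.x: 128b/130b encoding
}

PORT_GBPS = 10.0  # one 10GBase-T / NBase-T port at full rate

for gen, (gts, eff) in sorted(PCIE_GENS.items()):
    per_lane = gts * eff                   # usable Gbit/s per lane
    lanes_dual = 2 * PORT_GBPS / per_lane  # lanes needed for a dual-port NIC
    print(f"PCIe {gen}.0: ~{per_lane:4.1f} Gbit/s per lane -> "
          f"dual 10G NIC needs ~{lanes_dual:.1f} lanes")
```

By that math an x4 link only earns its keep on Gen2-era silicon; a Gen3 controller needs only about two and a half lanes for a dual-port card, and at Gen4 a single lane covers one port with headroom while x2 covers two.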
Makes me want to run TCP/IP over USB, which for some reason seems more difficult (and more expensive) to get than SCSI over USB (just try getting FCoE or iSCSI on a budget!).
We have HDMI with an Ethernet side channel, DisplayPort which seems to be essentially PCIe, and Thunderbolt aka USB4 which tunnels everything except Ethernet, while all I really want is to put half a dozen PCs under my desk onto a fabric that is as fast as the bus they connect to. I don't really mind if it's not Ethernet: I'll take a USB switch or, better yet, an Infinity Fabric switch, perhaps with an Ethernet uplink or router. I'd use RDMA before Ethernet, or accept SCSI over NFS any day. Ethernet was once thought to be the economical alternative to InfiniBand for HPC... today I wonder why.
I used to run networks over parallel-port cables and even ran TCP/IP over a serial port and a modem in the old days. IP over Fibre Channel wasn't a hit, but it was possible. A Mellanox ConnectX-5 will do InfiniBand, NVMe-oF/PCIe and Ethernet, yet for some reason there are no switches that support all three, even though the silicon on both sides is just distinct multiples of the same base architecture... binary is binary, and protocol translation is done at wire speed these days...
Actually, somewhere on this site I believe I have read that Infinity Fabric natively supports Ethernet as a protocol, besides RAM and PCIe, and that it's just a matter of adding PHYs to deliver as many Ethernet ports as you're willing to give up USB 3.2 ports or PCIe x1 v4 slots for. So why aren't we getting these options?
Am I really the only one asking for this?