Originally posted by: alpineranger
The task seems trivial to me. HDMI (TMDS) transmitters/receivers are cheap and readily available, otherwise HDMI wouldn't be used. They've been around for years, and you used to see them as small extra ICs on any (older) graphics card with a DVI port. I glanced at the traces a few times and it looks like they use some basic serial interface. Buy a few off-the-shelf parts and use them as modems. That's what they are anyway, and they won't care what sort of data they're transporting.
Yes, you're right about the interface chips (sort of), but that neglects the rest of the puzzle, which is the most important part. Have you looked at the cost of FPGAs that can handle this kind of data rate with decent buffering, and the complexity of the PCB design needed to use them? I have.
Have you noticed that FPGAs in general are not easily or cheaply interfaced to a high-bandwidth PCI Express bus? Most of them aren't electrically compatible, and the ones that are either need elaborate and expensive soft-IP libraries to handle the higher-layer timings and protocols involved, or have dedicated hard interfaces and are vastly expensive. There are only a handful of external PCIe transceiver chips out there, and they're not exactly easy to come by or cheap.
We're comparing this to the cost of what, a $17 retail 1GbE NIC? You could probably load four of them into slots on top of the two on your motherboard, for a total cost less than just the PCB of a high-end HDMI + PCIe + FPGA solution in low quantities.
Of course, for a few hundred dollars or so you can get 10GbE NICs, and even one of those will be cheaper than the FPGA + HDMI + PCIe board you could probably make at quantity-5000 prices.
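To put numbers on the comparison above, here's a back-of-envelope cost-per-bandwidth sketch. The $300 figure for a 10GbE NIC is my assumed midpoint of the "few hundred dollars" quoted; all numbers are rough retail estimates, not real quotes.

```python
# Rough cost-per-bandwidth comparison using the figures from the post.
# ASSUMPTION: $300 stands in for the "few hundred dollars" 10GbE price.

GBE_NIC_PRICE = 17.0    # $ per add-in 1GbE NIC, retail
TEN_GBE_PRICE = 300.0   # $ assumed per 10GbE NIC

def dollars_per_gbps(price, gbps):
    """Cost per gigabit/second of link bandwidth."""
    return price / gbps

# Four add-in NICs plus the two onboard ports: 6 Gbit/s aggregate,
# paying only for the four cards.
aggregate_gbps = 4 + 2
aggregate_cost = 4 * GBE_NIC_PRICE

print(dollars_per_gbps(GBE_NIC_PRICE, 1))   # 17.0 $/Gbps for 1GbE
print(dollars_per_gbps(TEN_GBE_PRICE, 10))  # 30.0 $/Gbps for 10GbE
print(aggregate_cost, "dollars buys", aggregate_gbps, "Gbit/s aggregate")
```

Either way you land around tens of dollars per Gbit/s, which is hard to beat with a low-volume FPGA board whose bare PCB alone can cost more than all of the NICs combined.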
The only 'cheap' way to do it would be a PCIe + HDMI ASIC, which would probably cost something like $10 apiece in 10,000-unit quantities, like... for instance... oh... a cheap GPU card such as the 8500GT's ASIC does. But for that you NEED to order in 10,000-unit-and-up quantities, pay something like $100,000 in NRE fees to the fab, license the PCIe and HDMI interface IP, get FCC approval of the board, etc. etc. etc.
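The NRE amortization is what kills the ASIC route at low volume. A minimal sketch of the arithmetic, using only the figures from the post ($100,000 NRE, ~$10/chip, 10,000-unit minimum):

```python
# Amortized per-unit cost for the hypothetical PCIe + HDMI ASIC:
# the one-time NRE fee is spread across however many units you order.
# All figures come from the post itself; they are illustrative only.

NRE = 100_000        # $ one-time mask/fab/IP-license fees (assumed lump sum)
UNIT_COST = 10       # $ per chip at volume
MIN_ORDER = 10_000   # minimum economical order quantity

def amortized_cost(units):
    """Per-unit cost including each unit's share of the NRE."""
    return UNIT_COST + NRE / units

print(amortized_cost(MIN_ORDER))   # 20.0 -> $20/chip at the 10k minimum
print(amortized_cost(100_000))     # 11.0 -> approaches $10 only at high volume
print(amortized_cost(1_000))       # 110.0 -> hopeless at hobbyist quantities
```

At 1,000 units you're paying more per chip than a whole 10GbE NIC costs, which is why only parts that ship in GPU-like volumes ever get this cheap.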
The RIGHT solution, as the poster below me suggests, is for normal GPU cards to start supporting things like 10GbE, InfiniBand links, bidirectional / multi-party bus HDMI/DVI, etc., so you can actually route your GPU video around easily AND start to do some interesting high-speed data transfer between boards if you're doing GPGPU work or distributed rendering or whatever.
There is no good reason current high-end GPU cards don't have things like open-access InfiniBand links, bidirectional HDMI, and 1GbE/10GbE, other than a perceived lack of "consumer demand" and the desire to shave every $0.001 (literally) off the cost of each chip by cutting all capabilities they don't consider critical. Hopefully this will start to change as the GPU + CPU fusion happens and we start to see high-bandwidth interfaces like HDMI / QuickPath / InfiniBand under CPU control, with multi-GB/s "northbridge"-style interconnects between those I/O interfaces and main memory, video memory, the CPU, and GPU coprocessors.