Using HDMI 1.3 for data transfers

her209

No Lifer
Oct 11, 2000
56,336
11
0
http://www.hdmi.org/press/pr/pr_20060622.aspx

Higher speed: HDMI 1.3 increases its single-link bandwidth from 165 MHz (4.95 gigabits per second) to 340 MHz (10.2 Gbps) to support the demands of future high definition display devices, such as higher resolutions, Deep Color™ and high frame rates. In addition, built into the HDMI 1.3 specification is the technical foundation that will let future versions of HDMI reach significantly higher speeds.
How difficult would it be to do?
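(For reference, the 4.95 and 10.2 Gbps figures are just the TMDS pixel clock times three data channels times 10 bits per clock; a quick back-of-the-envelope check in Python, with the usable payload after the 8b/10b-style coding shown as well:)

```python
# HDMI single-link TMDS: 3 data channels, 10 bits per channel per pixel clock
# (8 payload bits are encoded into 10 transmitted bits).
def tmds_bandwidth_gbps(pixel_clock_mhz, channels=3, bits_per_channel=10):
    return pixel_clock_mhz * 1e6 * channels * bits_per_channel / 1e9

print(tmds_bandwidth_gbps(165))           # 4.95 Gbps (HDMI 1.0-1.2)
print(tmds_bandwidth_gbps(340))           # 10.2 Gbps (HDMI 1.3)
print(tmds_bandwidth_gbps(340) * 8 / 10)  # ~8.16 Gbps of actual payload
```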
 

QuixoticOne

Golden Member
Nov 4, 2005
1,855
0
0
Not difficult if you have about $100,000 to spend designing and manufacturing the custom integrated circuits needed to do it.

There is no (AFAIK) "standard" way to do this with off-the-shelf parts for data transfer purposes.

GPU boards OUTPUT somewhat arbitrary data (repetitively, and with weird timing) from their frame-buffers to HDMI.

The only major circuits that INPUT data from HDMI are basically chips that drive pixels on LCD monitors and the like, and they're not particularly well suited to capturing data at high rates and feeding it into a computer.

Typically, to do high speed data TRANSFER like this you'd be looking at Infiniband products, 10 Gb/s Ethernet NICs, and similar sorts of SAN technologies.

 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
I'm sure you could program an FPGA to do the trick, but I have no idea if it could keep up with that kind of bandwidth.
 

QuixoticOne

Golden Member
Nov 4, 2005
1,855
0
0
Originally posted by: Crusty
I'm sure you could program an FPGA to do the trick, but I have no idea if it could keep up with that kind of bandwidth.

Yes, you could read and write the data with some commercially available FPGAs and interface chips, and even keep up with the data rate given some fairly expensive / fast FPGAs.

But then what? What do the FPGAs plug into that gets the data into the computer? PCI-Express? Not likely; most FPGAs are difficult or impossible to interface to it without a further level of complex / expensive / speed-limited interface chip.
Ethernet? Too slow; that defeats the whole purpose.
etc.

Basically you end up spending more money building the interface card ($500+ worth of FPGAs, PCI-Express interface parts, etc.) than if you'd just bought an Infiniband or 10GbE system in the first place.

The only way around that is to order about 50,000+ semi-custom chips that do the job you specifically want done and probably don't do anything else.

I'm not saying it's a GOOD state of affairs that it's so hard / expensive to get high bandwidth data into a PC, but that's the reality. Unless you can get some semi-custom chips made that speak PCI-Express, QuickPath, Infiniband, etc. you're going nowhere fast and expensively.

 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: her209
http://www.hdmi.org/press/pr/pr_20060622.aspx

Higher speed: HDMI 1.3 increases its single-link bandwidth from 165 MHz (4.95 gigabits per second) to 340 MHz (10.2 Gbps) to support the demands of future high definition display devices, such as higher resolutions, Deep Color™ and high frame rates. In addition, built into the HDMI 1.3 specification is the technical foundation that will let future versions of HDMI reach significantly higher speeds.
How difficult would it be to do?

It's possible.
But as others have said, it's expensive.
You have to ask yourself what bandwidth the data actually requires.
Then you pick the interface.
Using HDMI to transfer MP3 files would just be a waste.

It would be far cheaper to use something like 802.3ak
http://standards.ieee.org/announcements/pr_8023ak.html
It can handle 10Gbps.
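A rough illustration of the "match the pipe to the data" point; nominal link rates only, protocol overhead ignored, and the 5 MB MP3 size is just an assumed typical figure:

```python
# Transfer time for a typical ~5 MB MP3 over various nominal link rates
# (protocol overhead ignored; the point is the scale mismatch).
file_bits = 5 * 8e6  # ~5 MB file
links_gbps = {
    "100 Mb Ethernet": 0.1,
    "1 GbE": 1.0,
    "10 GbE (802.3ak)": 10.0,
    "HDMI 1.3 single link": 10.2,
}
for name, gbps in links_gbps.items():
    print(f"{name}: {file_bits / (gbps * 1e9) * 1000:.1f} ms")
```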
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
Transferring data at high speed is easy. Transferring data at high speed over great distance is difficult. Doing the latter is valuable. Most of these interfaces are going to be very distance limited.
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
I thought the point of HDMI was to transmit data at high speeds :)

In all seriousness, the tech to transfer data at that kind of speed is already available; it's just a matter of need. Fiber optic links have practically unlimited bandwidth at very high speeds. Of course, so does sending two filled 1 TB HDDs by UPS (or FedEx).
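The shipped-drives comparison is easy to put rough numbers on (assuming the two 1 TB drives are full and spend a day or two in transit):

```python
# Effective bit rate of shipping two full 1 TB drives vs. a fast link.
payload_bits = 2 * 1e12 * 8  # 2 TB in bits
for hours in (24, 48):
    gbps = payload_bits / (hours * 3600) / 1e9
    print(f"{hours} h in transit: ~{gbps:.2f} Gbps effective")
# ~0.19 Gbps at 24 h, ~0.09 Gbps at 48 h; for only 2 TB a 10 Gbps link
# wins easily, so the courier only pays off at much larger volumes.
```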
 

Markbnj

Elite Member / Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
The phrase "data transfer" isn't absolutely as ambiguous as possible, but it's close. What data? Transferred to where?

As QuixoticOne notes, if you're thinking data transfer between PCs, then the limitation is finding a cheap off-the-shelf way to get 10 Gbps from the transport into memory.

If you're thinking data transfer between high speed switches point to point over arbitrary distances, then you have two problems: what kind of cable run lengths can HDMI handle, and how do you interface it to the switches?

Gigabit Ethernet is as fast as you can economically get between two PCs. In high-speed point-to-point applications, the switches don't need help from HDMI.
 

alpineranger

Senior member
Feb 3, 2001
701
0
76
The task seems trivial to me. HDMI (TMDS) transmitters/receivers are cheap and readily available, otherwise HDMI wouldn't be used. They've been around for years, and you used to see them as small extra ICs on any (older) graphics card with a DVI port. I glanced at the traces a few times and it looks like they use some basic serial interface. Buy a few off-the-shelf parts and use them as modems. That's what they are anyway, and they won't care what sort of data they're transporting.
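A minimal sketch of what "using them as modems" would mean on the framing side, under the assumption that you can feed arbitrary 24-bit words into the transmitter's pixel inputs every clock; the real parts only expose pixel/sync pins, and this ignores blanking intervals and error handling entirely:

```python
# Hypothetical framing: pack an arbitrary byte stream into the 24-bit
# "pixel" words a TMDS transmitter serializes (3 channels x 8 data bits).
def bytes_to_pixel_words(data: bytes):
    padded = data + b"\x00" * (-len(data) % 3)  # pad to whole 3-byte words
    for i in range(0, len(padded), 3):
        r, g, b = padded[i], padded[i + 1], padded[i + 2]
        yield (r << 16) | (g << 8) | b  # one word per pixel clock

words = list(bytes_to_pixel_words(b"hello, HDMI"))
print([hex(w) for w in words])
```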
 

mvrx

Junior Member
Feb 23, 2008
19
0
0
I think you are going in the wrong direction here. :)

We shouldn't be pushing for HDMI to become a data interconnection technology; we should be pushing for HDMI (or a similar AV communication protocol) to run over 10GbE (10 Gigabit Ethernet). It won't be long now before 10GbE becomes the off-the-shelf technology that 1GbE is today.

Having AV data running over a local network means your content is routable, displayable on an unlimited number of devices, universal between PC, media player, TV, and your garbage disposal... wait, what was I talking about... oh yeah...

All this needs is a company to come up with a highly integrated 10GbE/HDMI transceiver, add in an HDCP stripper (because I hate you, RIAA and MPAA), and bring a cheap device to market. It would sell like hot cakes. This would actually work over 1GbE if the transceiver implemented moderate real-time lossless compression/decompression to keep 1080i and 1080p under 800 Mbit/s.
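For what it's worth, here's the compression ratio that the 800 Mbit target implies for uncompressed 1080p, assuming 60 fps and 24 bits per pixel:

```python
# Raw 1080p60 at 24 bits/pixel vs. an 800 Mbit/s budget on 1 GbE.
width, height, bpp, fps = 1920, 1080, 24, 60
raw_bps = width * height * bpp * fps  # ~2.99 Gbps uncompressed
budget_bps = 800e6
print(f"raw: {raw_bps / 1e9:.2f} Gbps, "
      f"needs ~{raw_bps / budget_bps:.1f}:1 lossless compression")
```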

If anyone reading this works for a development company in this field, contact me. I have some people interested in funding development of an HDMI->ethernet adaptor.

p.s. For those of you about to send me links to HDMI/component -> cat5/6 adaptors, this is not what I'm talking about.
 

QuixoticOne

Golden Member
Nov 4, 2005
1,855
0
0
Originally posted by: alpineranger
The task seems trivial to me. HDMI (TMDS) transmitters/receivers are cheap and readily available, otherwise HDMI wouldn't be used. They've been around for years, and you used to see them as small extra ICs on any (older) graphics card with a DVI port. I glanced at the traces a few times and it looks like they use some basic serial interface. Buy a few off-the-shelf parts and use them as modems. That's what they are anyway, and they won't care what sort of data they're transporting.

Yes, you're right about the interface chips (sort of), but that neglects the rest of the puzzle, which is the most important part. Have you looked at the cost of FPGAs that can handle this kind of data rate with good buffering, and at the complexity of the PCB design needed to use them? I have.
Have you noticed that FPGAs in general are not easily / cheaply interfaced to a high bandwidth PCI-Express bus? Most of them aren't electrically compatible, and the ones that are either need elaborate and expensive soft IP libraries to handle the higher-layer timings/protocols involved, or they have dedicated hard interfaces and are vastly expensive. There are only a handful of external PCIe transceiver chips out there, and they're not exactly easy to come by, cheap, etc.

We're comparing this to the cost of what, a $17 retail 1GbE NIC? You could probably load four of them into slots on top of the two on your motherboard for a total cost less than just the PCB of a high-end HDMI + PCIe + FPGA solution in low quantities.

Of course, for a few hundred dollars or so you can get 10GbE NICs, and even the cost of one of those will be less than the FPGA + HDMI + PCIe board you could probably make at quantity-5000 prices.
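Rough dollars-per-usable-Gbps using the figures above; the $1,500 for a low-quantity FPGA board and the $300 10GbE NIC price are my own ballpark guesses, not quotes:

```python
# Cost per usable Gbps; FPGA-board and 10GbE NIC prices are ballpark guesses.
options = {
    "4x $17 1GbE NICs + 2 onboard ports": (4 * 17, 6 * 1.0),
    "10GbE NIC (a few hundred dollars)": (300, 10.0),
    "Low-quantity FPGA + HDMI + PCIe board": (1500, 8.16),  # 10.2 Gbps raw, 8/10 coding
}
for name, (cost, gbps) in options.items():
    print(f"{name}: ~${cost / gbps:.0f} per Gbps")
```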

The only 'cheap' way to do it would be a PCIe + HDMI ASIC, which would probably cost something like $10 apiece in 10,000-unit quantities, like... for instance... the ASIC on a cheap GPU card such as the 8500GT does. But for that you NEED to order in 10,000+ unit quantities, pay something like $100,000 in NRE fees to the fab plus IP license fees for the PCIe and HDMI interface designs, get FCC approval of the board, etc. etc.
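The quantity math is the whole problem. A rough per-unit estimate under those assumptions (the IP license figure is a pure guess):

```python
# Ballpark per-unit cost of a semi-custom PCIe + HDMI ASIC run.
nre_fees = 100_000       # mask/NRE charges to the fab (figure from above)
ip_licenses = 150_000    # PCIe + HDMI interface IP (pure guess)
die_cost = 10            # per-chip silicon cost at volume
for volume in (10_000, 50_000, 250_000):
    per_unit = die_cost + (nre_fees + ip_licenses) / volume
    print(f"{volume:>7} units: ~${per_unit:.2f} each, before board, test, FCC, margin")
```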

The RIGHT solution, as mvrx suggests, is to have normal GPU cards start supporting things like 10GbE, Infiniband links, bidirectional / multi-party bus HDMI/DVI, etc., so you can actually route your GPU video around easily AND start to do some interesting high speed data transfer between boards if you're doing GPGPU-type work or distributed rendering or whatever.

There is no good reason current high end GPU cards don't have things like open-access Infiniband links, bidirectional HDMI, and 1GbE / 10GbE, other than a perceived lack of "consumer demand" and the desire to shave every $0.001 (literally) off the cost of each chip by cutting out all capabilities they don't consider critical. Hopefully this will start to change as the GPU + CPU fusion happens and we start to see high bandwidth interfaces like HDMI / QuickPath / Infiniband under CPU control, with multi-GB/s "northbridge"-style interconnects between those I/O interfaces and main memory, video memory, the CPU, GPU coprocessors, etc.
 

QuixoticOne

Golden Member
Nov 4, 2005
1,855
0
0
I agree with all of your sentiments and aspirations.

I think we're likely to see DisplayPort start to eclipse DVI (and HDMI, to an extent), though, since DisplayPort is apparently cheaper for manufacturers to implement in devices as well as being technically better in a few noteworthy ways.

Current HDCP-protected video is transported by things like digital set-top cable boxes over open-standards FireWire-based links, for instance, to HDCP-enabled monitors. So just opening up the HDMI interface to 10GbE or whatever doesn't really help you defeat HDCP. And the keys used for any stripper that acted like a clone of an HDCP monitor would likely be blacklisted pretty quickly, though I'm sure the whole implementation could/should be defeated in the interest of interoperability / electronic product cost / fair use / etc. I wouldn't be surprised if they used legal pressure to try to keep such strippers off the market even if they existed for the legitimate purpose of promoting interoperability with other hardware.

I imagine you could design one of these kinds of devices moderately easily at the IP-core level for an ASIC or whatever, though you'd need to get it produced in large quantities (N × 10,000) with substantial start-up costs to reach the most economical per-unit chip prices.
Check into opencollector et al. if you want to try the IP-core design route.

You could probably also talk to a semi-defunct, minor-player company in the GPU / northbridge / Ethernet scene, maybe Matrox or VIA or SMSC or SiS, and see if you could talk them into putting a bidirectional HDMI interface and a 10GbE MAC on their next northbridge or whatever.

Originally posted by: mvrx
I think you are going in the wrong direction here. :)

We shouldn't be pushing for HDMI to become a data interconnection technology; we should be pushing for HDMI (or a similar AV communication protocol) to run over 10GbE (10 Gigabit Ethernet). It won't be long now before 10GbE becomes the off-the-shelf technology that 1GbE is today.

Having AV data running over a local network means your content is routable, displayable on an unlimited number of devices, universal between PC, media player, TV, and your garbage disposal... wait, what was I talking about... oh yeah...

All this needs is a company to come up with a highly integrated 10GbE/HDMI transceiver, add in an HDCP stripper (because I hate you, RIAA and MPAA), and bring a cheap device to market. It would sell like hot cakes. This would actually work over 1GbE if the transceiver implemented moderate real-time lossless compression/decompression to keep 1080i and 1080p under 800 Mbit/s.

If anyone reading this works for a development company in this field, contact me. I have some people interested in funding development of an HDMI->ethernet adaptor.

p.s. For those of you about to send me links to HDMI/component -> cat5/6 adaptors, this is not what I'm talking about.

 

Googer

Lifer
Nov 11, 2004
12,576
7
81
Originally posted by: her209
http://www.hdmi.org/press/pr/pr_20060622.aspx

Higher speed: HDMI 1.3 increases its single-link bandwidth from 165 MHz (4.95 gigabits per second) to 340 MHz (10.2 Gbps) to support the demands of future high definition display devices, such as higher resolutions, Deep Color™ and high frame rates. In addition, built into the HDMI 1.3 specification is the technical foundation that will let future versions of HDMI reach significantly higher speeds.
How difficult would it be to do?

You're better off using Fibre Channel if you need that kind of bandwidth; it's capable of up to 12+ Gb/s. For me, FireWire 800 and Gigabit Ethernet will do just fine. Anything faster and my HDD won't be able to keep up.
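The "HDD won't keep up" point is easy to quantify; assuming a sustained ~80 MB/s for a single drive of that era:

```python
# Sustained single-drive throughput vs. nominal link rates.
hdd_mb_s = 80                   # assumed sustained sequential rate
hdd_gbps = hdd_mb_s * 8 / 1000  # ~0.64 Gbps
links = [("FireWire 800", 0.8), ("1 GbE", 1.0), ("10 GbE / HDMI 1.3", 10.2)]
for name, link_gbps in links:
    print(f"{name}: {link_gbps} Gbps link, drive fills {hdd_gbps / link_gbps:.0%} of it")
```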
 

NeoPTLD

Platinum Member
Nov 23, 2001
2,544
2
81
When you extract a 74-minute audio CD, the resulting files are larger than 650 MB, which is the data-mode capacity of a CD. Audio mode holds more information because it uses less error correction, at the expense of less-than-100% data accuracy.

For audio/video use, 99.999% transfer accuracy may be good enough, but for data, anything below 100% isn't acceptable, so adding error correction would probably slow things down considerably.
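The capacity difference falls straight out of the sector formats (2,352 payload bytes per audio sector vs. 2,048 per Mode 1 data sector):

```python
# 74-minute disc: 75 sectors per second, different payload per sector by mode.
sectors = 74 * 60 * 75          # 333,000 sectors
audio_bytes = sectors * 2352    # CD-DA: the whole sector is payload
data_bytes = sectors * 2048     # Mode 1: ~300 bytes/sector go to sync, header, EDC/ECC
print(f"audio: {audio_bytes / 1e6:.0f} MB, "
      f"data: {data_bytes / 1e6:.0f} MB ({data_bytes / 1024**2:.0f} MiB)")
```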