Ati's R500 . . . *Update* X-b0xNext info*


Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Originally posted by: BFG10K
Light is both a wave and a particle, and waves interfere with each other.
AFAIK it shouldn't matter in the grand scheme of things, because all we need to know is whether there's a signal or not. Shine two torches across each other and both will still hit their respective opposing walls. Just pick wavelengths to ensure the signal isn't cancelled and it should be OK.

If the leakage current at small transistor feature-sizes could be significantly reduced, then silicon-based devices could continue to scale down. It would be interesting to use atomic scale CVD or something similar, to build semi-conductors on a crystalline-carbon (aka diamond) substrate base, instead of crystalline silicon.
Now I understand the context of your diamond comment but I'd like to add that reduced leakage doesn't help the heat/power issue.

Umm, how so?
You need 100w power (random numbers), so you have 100w which will be made into heat.
Say you have 30w leakage, you actually need 130w to give the chip the 100w it needs, as 30w is wasted.
You're producing 130w heat and using 130w power.

Say you now reduce leakage. It only leaks 10w.
You now need 110w and produce 110 watts heat.

How would reducing leakage NOT reduce power consumption/heat production? Have I oversimplified it? (I know the difference may not be that great, but it would help, unless you mean that reducing leakage by itself wouldn't be enough to solve the issue, since the difference would only be small.)
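The arithmetic in the post above can be sketched directly (using the same made-up figures: 100 W of useful power, leakage reduced from 30 W to 10 W):

```python
# Hypothetical numbers from the post above: the chip needs 100 W of
# useful switching power, and leakage wastes extra power on top of that.
def total_power(useful_w: float, leakage_w: float) -> float:
    """Total draw = useful power + leakage; all of it ends up as heat."""
    return useful_w + leakage_w

before = total_power(100, 30)   # 130 W drawn, 130 W dissipated as heat
after = total_power(100, 10)    # 110 W after cutting leakage to 10 W
saving = before - after         # 20 W less power drawn AND less heat made
```

This is of course a toy model; real leakage also scales with voltage and temperature, but it shows why cutting leakage reduces both power draw and heat output by the same amount.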
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Originally posted by: BFG10K
Light is both a wave and a particle, and waves interfere with each other.
AFAIK it shouldn't matter in the grand scheme of things, because all we need to know is whether there's a signal or not. Shine two torches across each other and both will still hit their respective opposing walls. Just pick wavelengths to ensure the signal isn't cancelled and it should be OK.

I think that you should read up some more on optical computing (as should I). Just one more minor comment on this subject, at the limit of my knowledge: I do know that as light enters a different transmission medium it slows down slightly, so shifting between wavelengths will necessarily add some propagation delay (latency). So changing wavelengths around isn't the fastest way to handle things. But I don't know for certain that constructive/destructive wave interference is how "optical semiconductors" actually work, so I will read up on this subject some more as well.

Btw, "torches" are broad-spectrum light sources. Semiconductor-based lasers are very narrow-spectrum light sources, and their interactions tend to be very different: two beams at the same frequency interact (interfere) much more directly than beams from broad-spectrum light sources do.
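That same-frequency point can be illustrated with a toy superposition calculation (this is just wave arithmetic, not a claim about how optical logic devices actually work): two unit-amplitude waves at the same frequency and opposite phase cancel everywhere, while waves at different frequencies never stay cancelled.

```python
import math

def superpose(f1: float, f2: float, phase: float, t: float) -> float:
    """Sum of two unit-amplitude sine waves at time t; the second wave
    is offset by `phase` radians (pi = fully out of phase)."""
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t + phase)

# Same frequency, opposite phase: total destructive interference,
# so the peak amplitude over a full cycle is ~0.
same_freq = max(abs(superpose(5.0, 5.0, math.pi, t / 1000)) for t in range(1000))

# Different frequencies: no steady cancellation; the peak amplitude
# still approaches 2 (the sum of the two amplitudes).
diff_freq = max(abs(superpose(5.0, 7.0, math.pi, t / 1000)) for t in range(1000))
```

This is why a narrow-spectrum laser pair can interfere destructively in a sustained way, while the mixed wavelengths of a torch beam cannot.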

If the leakage current at small transistor feature-sizes could be significantly reduced, then silicon-based devices could continue to scale down. It would be interesting to use atomic scale CVD or something similar, to build semi-conductors on a crystalline-carbon (aka diamond) substrate base, instead of crystalline silicon.
Now I understand the context of your diamond comment but I'd like to add that reduced leakage doesn't help the heat/power issue.

Yes it does; leakage is the primary cause of Prescott's increased power/thermal output, and largely the primary obstacle to moving to smaller transistor feature-sizes right now. (They've got the lithography issues mostly solved.)
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Oh yeah, relating to the topic of this thread, just came across this article:

However, the killer part of what's said is that the next-next-generation ATI GPU, which he confirms is 'R5xx', will have a totally different architecture from both the near-to-release R420 and Nvidia's NV40, and that architecture will address the poor performance of doing branching in Shaders.

It seems that what is planned for ATI's 'R5xx' is a fully unified Shader architecture, a single engine so to speak, as opposed to the current architectures, which have independent Pixel and Vertex Shaders.