R600 to be 80nm


apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Gstanfor
You posted while I was writing my post. Anyway, Xenos is pretty much irrelevant to PC based DX10, no matter how much ATi/AMD may wish it otherwise.

phew ... for a brief instant i was wondering if i wasn't making myself clear to anyone :p

:Q


and yeah, Josh was right ... i missed it . ... . Sharma DID say DX10 ... and not simply "unified shaders" as i thought he said ... Sharma IS confused

then i went "hunting" to see 'where' he went wrong

there goes the fanboy theory that AMD PR actually "know anything" about r600
--other than to *sell* their product


:p

 

allies

Platinum Member
Jun 18, 2002
2,572
0
71
Originally posted by: apoppin
Originally posted by: TecHNooB
apoppin, is this how you act in real life? I remember you used to make quality posts. Then the smiley thing came along. Or did I miss it the first time?
i still do and that 'emoticon thing' has always been there

so ... what is the problem ?

really

i get a little testy with dreddfunk who dissed me ... we trade a few public barbs and then 'settle it' with mutual apologies and a new understanding

how is it that anyone else should be involved?
:confused:

Because you're posting on an open forum?

 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: apoppin
Originally posted by: Creig
Originally posted by: apoppin

what *other people* ? ... just josh

and see from my PoV that IS the *difference*

--there is *only ONE* person who had an issue with me but *resolved it* - rather amicably, i might add ... the others are simply bystanders ... and trolls, encouraging a forum fight

... and that is *why* i am not gonna read another one of Josh's lame arguments on this subject - because he should not be involved at all and certainly not anymore

it's the principle of it and i certainly don't expect you guys to understand that at all

i am not whining that others are picking on me ... i am just saying i do not have to, nor will i, respond to anyone's base trolling

i also understand completely why fanboys would want to derail this thread with OT bickering

that is beyond obvious

edited

So you feel it's perfectly acceptable to call other people "troll" and "fanboy" simply because they have a viewpoint that differs from your own? I have news for you, apoppin. That is called trolling, and you're the one who has been doing the majority of it in this thread.

And this is a public forum, therefore EVERYONE who reads the thread is involved. There are no bystanders. If you want to ensure that nobody intrudes on your conversation, keep it to PMs. Otherwise, anything and everything you say is open to comment.

absolutely not!


you said it ...

and i DO understand why you - a self-admitted fanboy - would want to take this thread way off topic

feel free to comment all you want ... remembering that i also feel perfectly free to comment on your posts ... and will defend myself

in this particular case, i said it was my principle and i will not discuss that issue further as it is closed and involves one other poster.

i think you are doing all the *trolling* here ... even though it won't be discussed, you continue to pick at it as though you might somehow "prove something" ...

i *already* know what you think of me and frankly i feel about the same about you ... that this "issue" somehow added more negative things in your own mind toward me doesn't bother me whatsoever

it is a badge of honor to be despised by a troll


I don't know where you came up with "self-admitted fanboy" from. I've claimed to be a fan of BOTH ATI and Nvidia, but not a fanboy for one or the other. I have a nice mix of both Nvidia and ATI products in my house. I personally think that ATI dropped the ball again by failing to get the R600 out when they had a chance to pull ahead of Nvidia, the same way they did with the R520. The G80 is a very nice piece of hardware that is going through the normal driver teething that you'd expect of any new architecture. About the only thing I DON'T like about Nvidia is their corporate ethics towards consumers. The R600 is late and AMD is claiming that they're holding it back for marketing purposes. Is it true? Maybe, maybe not. I personally think that it was delayed at first because of technical reasons and that they decided, since it was already delayed, to just hold it back a little more and do a full lineup release.
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
This reminds me of the arguments I used to have with an ex-girlfriend. We were probably the two most stubborn people to ever date. Arguments lasted hours, sometimes continued for days because we both needed the last word.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: Matt2
This reminds me of the arguments I used to have with an ex-girlfriend. We were probably the two most stubborn people to ever date. Arguments lasted hours, sometimes continued for days because we both needed the last word.

I don't particularly enjoy being called "troll" and "fanboy" over and over by apoppin. Would you? So I'm going to keep refuting him every single time until he quits. This is an open forum and I don't want people to get the wrong impression of me just because apoppin doesn't like me. And if he doesn't quit with the name calling, I'm going to take it up with the Mods as I consider it a personal attack.

If he wants to debate my viewpoint, that's fine. But this non-stop string of personal insults is not okay with me.
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
Originally posted by: Creig
Originally posted by: Matt2
This reminds me of the arguments I used to have with an ex-girlfriend. We were probably the two most stubborn people to ever date. Arguments lasted hours, sometimes continued for days because we both needed the last word.

I don't particularly enjoy being called "troll" and "fanboy" over and over by apoppin. Would you? So I'm going to keep refuting him every single time until he quits. This is an open forum and I don't want people to get the wrong impression of me just because apoppin doesn't like me. And if he doesn't quit with the name calling, I'm going to take it up with the Mods as I consider it a personal attack.

If he wants to debate my viewpoint, that's fine. But this non-stop string of personal insults is not okay with me.

I'm not trying to come down on you or apoppin or anyone else who gets involved in these pissing matches.

I was simply trying to point out that this could go on for months.

:D
 

allies

Platinum Member
Jun 18, 2002
2,572
0
71
Originally posted by: Matt2
Originally posted by: Creig
Originally posted by: Matt2
This reminds me of the arguments I used to have with an ex-girlfriend. We were probably the two most stubborn people to ever date. Arguments lasted hours, sometimes continued for days because we both needed the last word.

I don't particularly enjoy being called "troll" and "fanboy" over and over by apoppin. Would you? So I'm going to keep refuting him every single time until he quits. This is an open forum and I don't want people to get the wrong impression of me just because apoppin doesn't like me. And if he doesn't quit with the name calling, I'm going to take it up with the Mods as I consider it a personal attack.

If he wants to debate my viewpoint, that's fine. But this non-stop string of personal insults is not okay with me.

I'm not trying to come down on you or apoppin or anyone else who gets involved in these pissing matches.

I was simply trying to point out that this could go on for months.

:D

As proven by some threads that involved Rollo (and a few others) vs. the entire ATV back in the day :) Let's just stop the animosity and name calling!
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Matt2
This reminds me of the arguments I used to have with an ex-girlfriend. We were probably the two most stubborn people to ever date. Arguments lasted hours, sometimes continued for days because we both needed the last word.

i bet the make-up sex was extra fun :p
:Q

:D


and Creig .. when you stop calling *me* "names" and attempting to insult me at almost every opportunity, i will return the courtesy. ;)

until then i will give you back what i get from you ...

you tone down the rhetoric ... and i will respond positively

i am glad to stick to the issues


but i can call names right back at you


EDIT ... i just ReRead your last two posts - twice!

*hold on a minute*

:shocked:

we have exactly the same complaints about each other :p


do you "want" a truce?

i am *offering*

... for the children
:D


seriously .... for forum harmony
[less disharmony]

i'd be glad to take this to PMs ... dislike you?
... i don't know you
 

eno

Senior member
Jan 29, 2002
864
1
81
Wow, came in here looking for any new info on R600/8900. Turns out this thread is lame, 11 pages of crap. Enjoy your time arguing online.
 

kobymu

Senior member
Mar 21, 2005
576
0
0
Originally posted by: Matt2
If AMD is concentrating that hard on Fusion then I still say that they are going to regret it.
Why?

I find it laughable that people think that next year, AMD is going to bring Fusion to the table and it's going to perform on par with high end discrete graphics.
Again, why?

C/GPUs are going to go through the same evolutionary process as every other PC component. We're going to see slow, low end C/GPUs first. As time goes on the performance will get better and Nvidia will eventually be squeezed out of the market. But that won't happen until 2010 or 2011, you can take that one to the bank.
You sound very confident, and to tell you the truth, I completely disagree.

Spending resources on the current non-integrated (discrete) 3D hardware solution is a waste of precious (and expensive) R&D resources. Even you agree that it (discrete 3D hardware) will eventually be obsolete; if so, then why spend R&D resources on technology that will be obsolete in a year or two?

Additionally, why are you so sure that Fusion or any similar project will take so much time to reach high performance?

Do you disagree with what seems to be the general consensus that modern GPUs and CPUs exist on the same axis, in the sense that they are both complex processing units with different focuses (central/graphics), and that they are heading toward each other, i.e. CPUs become more and more parallel in nature (superscalar) and in focus (SIMD instructions) with every generation, and GPUs become more and more programmable (more general purpose) with every generation?

If you agree, then what, in your opinion, is preventing it (the C/GPU) from reaching the high-performance segment?


This is why I have a very hard time believing that ATI's drivers are going to be any better than Nvidia's. Most of the people on this board claim that ATI is going to have superior drivers because of their "experience" with DX10 via Xbox 360. These are the same folks who claimed that ATI also had a head start on the actual hardware because of the Xbox 360.
I agree that AMD's GPU drivers will be better, but for different reasons.

I stipulate that companies that design/manufacture general-purpose PUs have more experience in (and are better equipped for) producing higher-quality software; you can read a more detailed post from me on this matter here.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: apoppin
you tone down the rhetoric ... and i will respond positively

i am glad to stick to the issues


I'll be more than happy to do so, as long as you reciprocate.



Peace.
 

Vinnybcfc

Senior member
Nov 9, 2005
216
0
0
Originally posted by: eno
Wow, came in here looking for any new info on R600/8900. Turns out this thread is lame, 11 pages of crap. Enjoy your time arguing online.

Qft

Got bored of trying to find some useful info after a couple of pages
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Creig
Originally posted by: apoppin
you tone down the rhetoric ... and i will respond positively

i am glad to stick to the issues


I'll be more than happy to do so, as long as you reciprocate.



Peace.

Ok, you got it
:thumbsup:

*agreed*

i will "tone it down" toward you

--and IF you EVER feel really "wronged" or "offended" by my comments or feel they are "personal" ... fire off a quick PM to me FIRST and i am pretty sure we can resolve it like adults without continually bringing our personal differences and bickering into the public forum


and thank-you :)
=====================


... as to the *topic* - this thread would be *nothing* without the controversy ... :p
sadly :(

there is NO new *useful info* on r600 - except that it *can* be AGP ... and Sharma is factually wrong on at least DX10 in that interview
:thumbsdown:

if r600 was here we could argue about *important* things
:D
 

gsellis

Diamond Member
Dec 4, 2003
6,061
0
0
Originally posted by: Creig
I don't particularly enjoy being called "troll" and "fanboy" over and over by apoppin. Would you?
Thanks for the offer, but I have other things to do. :p

Eno said it, "Where's the beef?"

Special Olympics and arguing on the internet scenario in play.... (all involved - not directed specifically at Creig - Just quoted him as it read funny.)

 

SexyK

Golden Member
Jul 30, 2001
1,343
4
76
C/GPUs are going to go through the same evolutionary process as every other PC component. We're going to see slow, low end C/GPUs first. As time goes on the performance will get better and Nvidia will eventually be squeezed out of the market. But that won't happen until 2010 or 2011, you can take that one to the bank.
You sound very confident, and to tell you the truth, I completely disagree.

Spending resources on the current non-integrated (discrete) 3D hardware solution is a waste of precious (and expensive) R&D resources. Even you agree that it (discrete 3D hardware) will eventually be obsolete; if so, then why spend R&D resources on technology that will be obsolete in a year or two?

Additionally, why are you so sure that Fusion or any similar project will take so much time to reach high performance?

Do you disagree with what seems to be the general consensus that modern GPUs and CPUs exist on the same axis, in the sense that they are both complex processing units with different focuses (central/graphics), and that they are heading toward each other, i.e. CPUs become more and more parallel in nature (superscalar) and in focus (SIMD instructions) with every generation, and GPUs become more and more programmable (more general purpose) with every generation?

If you agree, then what, in your opinion, is preventing it (the C/GPU) from reaching the high-performance segment?

The key factor holding back Fusion-type graphics is memory bandwidth. People in the 8600 thread are crucifying the card because it has a 128-bit memory bus at ~2GHz memory clock, which works out to around 32GB/s of memory bandwidth. Currently, top of the line DDR2 modules offer around 8.5GB/s of memory bandwidth. How do you foresee a Fusion-type integrated GPU achieving anywhere near the 86+GB/s memory bandwidth of an 8800GTX, let alone the 32GB/s of an 8600GTS? The only option would be to integrate high-speed GDDR4 into the motherboard, which would drive up costs and leave you stuck with the same memory speed/type even if you threw in a new GPU. For now, expansion slots for graphics memory are out of the question because the noise introduced by the slot interface prohibits such high-speed operation. Additionally, the cost of integrating 384 or 512 bit memory slots onto a motherboard would be prohibitive even if 2GHz operation were possible.

Where do you see the memory bandwidth coming from for a top-end integrated graphics solution?
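(For reference, the figures in this post all follow from the standard peak-bandwidth formula: bus width in bytes times the effective transfer rate. A minimal sketch in Python; the 1.8 GT/s effective rate for the 8800GTX's GDDR3 is inferred from the 86+GB/s figure quoted above, not stated in the thread.)

```python
# Peak memory bandwidth = bus width (bytes) * effective transfer rate (GT/s).
def bandwidth_gb_s(bus_width_bits: int, transfers_gt_s: float) -> float:
    return bus_width_bits / 8 * transfers_gt_s

print(bandwidth_gb_s(128, 2.0))    # 8600GTS-class: 128-bit @ 2.0 GT/s -> 32.0 GB/s
print(bandwidth_gb_s(384, 1.8))    # 8800GTX: 384-bit @ 1.8 GT/s      -> 86.4 GB/s
print(bandwidth_gb_s(64, 1.066))   # one DDR2-1066 DIMM: 64-bit       -> ~8.5 GB/s
```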
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: SexyK
C/GPUs are going to go through the same evolutionary process as every other PC component. We're going to see slow, low end C/GPUs first. As time goes on the performance will get better and Nvidia will eventually be squeezed out of the market. But that won't happen until 2010 or 2011, you can take that one to the bank.
You sound very confident, and to tell you the truth, I completely disagree.

Spending resources on the current non-integrated (discrete) 3D hardware solution is a waste of precious (and expensive) R&D resources. Even you agree that it (discrete 3D hardware) will eventually be obsolete; if so, then why spend R&D resources on technology that will be obsolete in a year or two?

Additionally, why are you so sure that Fusion or any similar project will take so much time to reach high performance?

Do you disagree with what seems to be the general consensus that modern GPUs and CPUs exist on the same axis, in the sense that they are both complex processing units with different focuses (central/graphics), and that they are heading toward each other, i.e. CPUs become more and more parallel in nature (superscalar) and in focus (SIMD instructions) with every generation, and GPUs become more and more programmable (more general purpose) with every generation?

If you agree, then what, in your opinion, is preventing it (the C/GPU) from reaching the high-performance segment?

The key factor holding back Fusion-type graphics is memory bandwidth. People in the 8600 thread are crucifying the card because it has a 128-bit memory bus at ~2GHz memory clock, which works out to around 32GB/s of memory bandwidth. Currently, top of the line DDR2 modules offer around 8.5GB/s of memory bandwidth. How do you foresee a Fusion-type integrated GPU achieving anywhere near the 86+GB/s memory bandwidth of an 8800GTX, let alone the 32GB/s of an 8600GTS? The only option would be to integrate high-speed GDDR4 into the motherboard, which would drive up costs and leave you stuck with the same memory speed/type even if you threw in a new GPU. For now, expansion slots for graphics memory are out of the question because the noise introduced by the slot interface prohibits such high-speed operation. Additionally, the cost of integrating 384 or 512 bit memory slots onto a motherboard would be prohibitive even if 2GHz operation were possible.

Where do you see the memory bandwidth coming from for a top-end integrated graphics solution?
If they can stick a 512-bit memory controller on a graphics card that costs $500, it can't be *that* expensive. Heck, an X800 or 9700 even goes for less than $100, and that includes the GPU and the memory. My guess is that a 256-bit controller on the motherboard would be $20 *at most*. 512-bit would maybe be $50 *at most*. I would personally be willing to pay that much for the massive memory bandwidth boost it would give my entire system.

Not only that, but I am pretty sure they can implement the memory controller onto the die like they did with the A64...

I am certain that the added memory speed and bandwidth will benefit a quad-core CPU.
 

kobymu

Senior member
Mar 21, 2005
576
0
0
Originally posted by: SexyK
The key factor holding back Fusion-type graphics is memory bandwidth. People in the 8600 thread are crucifying the card because it has a 128-bit memory bus at ~2GHz memory clock, which works out to around 32GB/s of memory bandwidth. Currently, top of the line DDR2 modules offer around 8.5GB/s of memory bandwidth. How do you foresee a Fusion-type integrated GPU achieving anywhere near the 86+GB/s memory bandwidth of an 8800GTX, let alone the 32GB/s of an 8600GTS? The only option would be to integrate high-speed GDDR4 into the motherboard, which would drive up costs and leave you stuck with the same memory speed/type even if you threw in a new GPU. For now, expansion slots for graphics memory are out of the question because the noise introduced by the slot interface prohibits such high-speed operation. Additionally, the cost of integrating 384 or 512 bit memory slots onto a motherboard would be prohibitive even if 2GHz operation were possible.

The thing is, you don't see that kind of bandwidth between the CPU and main memory (the memory subsystem / FSB) because CPUs don't need that kind of bandwidth; in 9 out of 10 scenarios a CPU needs low latencies.

Nothing is stopping Intel/AMD from providing such bandwidth, nothing! It was just never needed before. Look at quad-core, for crying out loud; even it only reaches a bandwidth bottleneck in certain server applications. That's 4 cores using the same pipeline to the memory subsystem, and the only reason Intel/AMD don't provide such bandwidth in today's CPUs is that it would add cost to the CPUs even when it is not needed in 90%+ of their cycles.

Latency, well, that is a different story; there are hard technology limitations there. Bandwidth, not so much.

Since you brought up cost as an argument, then please indulge me: how much does a hardcore gamer system cost now, and how much do you think a high-performance Fusion system will cost?

Where do you see the memory bandwidth coming from for a top-end integrated graphics solution?
http://www.sun.com/processors/UltraSPARC-T1/details.xml
Integration

* Up to 8 cores, 4 threads per core
* *4* 144-bit DDR2-533 SDRAM interfaces
- Quad error correct, octal error detect, chipkill ECC
* 4 DIMMS per controller - 16 DIMMS total
* Optional 2-channel operation mode
* JBUS Interface
- 3.1 GB/sec peak effective bandwidth
- 128 bit address/data bus
- 150 - 200 MHz operation

Take the underlying parts and let math do the rest of the job (replace DDR2-533 with DDR3-1066 and 150 - 200 MHz with 2 - 2.5 GHz).

This is just to provide you with a train of thought.
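(Doing that math explicitly, a minimal sketch: it assumes each 144-bit ECC interface carries 128 data bits plus 16 check bits, the usual ECC layout.)

```python
# Aggregate peak bandwidth of the T1-style memory subsystem, before and
# after kobymu's suggested DDR2-533 -> DDR3-1066 substitution.
def aggregate_gb_s(n_interfaces: int, data_bits: int, transfers_gt_s: float) -> float:
    return n_interfaces * data_bits / 8 * transfers_gt_s

print(aggregate_gb_s(4, 128, 0.533))   # 4x DDR2-533  -> ~34 GB/s aggregate
print(aggregate_gb_s(4, 128, 1.066))   # 4x DDR3-1066 -> ~68 GB/s aggregate
```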
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
isn't that the whole point of Fusion ... to implement the memory controller on die?

probably sometime after Barcelona ... last i heard was '08 ... next year

i don't think AMD is aiming for "low end" at all
 

SexyK

Golden Member
Jul 30, 2001
1,343
4
76
Originally posted by: SickBeast
Originally posted by: SexyK
C/GPUs are going to go through the same evolutionary process as every other PC component. We're going to see slow, low end C/GPUs first. As time goes on the performance will get better and Nvidia will eventually be squeezed out of the market. But that won't happen until 2010 or 2011, you can take that one to the bank.
You sound very confident, and to tell you the truth, I completely disagree.

Spending resources on the current non-integrated (discrete) 3D hardware solution is a waste of precious (and expensive) R&D resources. Even you agree that it (discrete 3D hardware) will eventually be obsolete; if so, then why spend R&D resources on technology that will be obsolete in a year or two?

Additionally, why are you so sure that Fusion or any similar project will take so much time to reach high performance?

Do you disagree with what seems to be the general consensus that modern GPUs and CPUs exist on the same axis, in the sense that they are both complex processing units with different focuses (central/graphics), and that they are heading toward each other, i.e. CPUs become more and more parallel in nature (superscalar) and in focus (SIMD instructions) with every generation, and GPUs become more and more programmable (more general purpose) with every generation?

If you agree, then what, in your opinion, is preventing it (the C/GPU) from reaching the high-performance segment?

The key factor holding back Fusion-type graphics is memory bandwidth. People in the 8600 thread are crucifying the card because it has a 128-bit memory bus at ~2GHz memory clock, which works out to around 32GB/s of memory bandwidth. Currently, top of the line DDR2 modules offer around 8.5GB/s of memory bandwidth. How do you foresee a Fusion-type integrated GPU achieving anywhere near the 86+GB/s memory bandwidth of an 8800GTX, let alone the 32GB/s of an 8600GTS? The only option would be to integrate high-speed GDDR4 into the motherboard, which would drive up costs and leave you stuck with the same memory speed/type even if you threw in a new GPU. For now, expansion slots for graphics memory are out of the question because the noise introduced by the slot interface prohibits such high-speed operation. Additionally, the cost of integrating 384 or 512 bit memory slots onto a motherboard would be prohibitive even if 2GHz operation were possible.

Where do you see the memory bandwidth coming from for a top-end integrated graphics solution?
If they can stick a 512-bit memory controller on a graphics card that costs $500, it can't be *that* expensive. Heck, an X800 or 9700 even goes for less than $100, and that includes the GPU and the memory. My guess is that a 256-bit controller on the motherboard would be $20 *at most*. 512-bit would maybe be $50 *at most*. I would personally be willing to pay that much for the massive memory bandwidth boost it would give my entire system.

Not only that, but I am pretty sure they can implement the memory controller onto the die like they did with the A64...

I am certain that the added memory speed and bandwidth will benefit a quad-core CPU.

Such a motherboard would be way more expensive than current boards. Right now we have 2x 64-bit memory interfaces. If you go to a 2x 256-bit configuration you are quadrupling the number of traces on the board, which is going to mean more layers and more cost. Then, assuming you can even get DIMMs to function at clock speeds equivalent to GDDR4 speeds (which from what I've read seems very, very unlikely due to the longer traces used on motherboards as opposed to graphics cards, and the noise introduced by the slot interface), you're going to need to either 1) use 8 64-bit DIMMs in every system to saturate a 512-bit interface, or 2) create 256-bit DIMMs and use pairs, which would vastly increase the cost of memory modules. Considering the hurdles this type of implementation brings with it, the only realistic option is to integrate the memory into the motherboard itself, which leads to the lock-in problem whereby you can upgrade the core but not the memory. Either way, there's no way a Fusion-type system will offer the same performance as a discrete add-in card before 2010 at the absolute earliest - there are just too many hurdles to overcome right now.
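(A sketch to make the channel arithmetic concrete; the dual-channel DDR2-800 baseline is an assumption for illustration, not a figure from the thread.)

```python
# Number of standard 64-bit DIMM channels needed to saturate a 512-bit
# interface, plus the bandwidth gap between desktop DRAM and an 8800GTX.
GPU_BUS_BITS = 512                        # hypothetical integrated 512-bit bus
DIMM_CHANNEL_BITS = 64                    # one standard DDR2 channel

print(GPU_BUS_BITS // DIMM_CHANNEL_BITS)  # -> 8 channels (8 DIMMs per system)

ddr2_800_dual = 2 * 64 / 8 * 0.8          # dual-channel DDR2-800 -> 12.8 GB/s
gtx_8800 = 384 / 8 * 1.8                  # 8800GTX               -> 86.4 GB/s
print(ddr2_800_dual, gtx_8800)
```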
 

kobymu

Senior member
Mar 21, 2005
576
0
0
I forgot to mention cache and prefetchers: these solutions are there to nullify the latency problem that exists in modern CPUs. CPUs don't have a bandwidth problem, they have a latency problem; that is the only reason you see the current (low) bandwidth in CPU memory subsystems.
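(A rough, unscientific sketch of the latency point: walking a contiguous buffer in order is cache/prefetcher friendly, while random access is latency bound. The effect is muted in an interpreted language but usually still measurable.)

```python
# Compare sequential vs. random access over the same contiguous buffer.
import random
import time
from array import array

N = 5_000_000
data = array("q", range(N))        # contiguous buffer of 64-bit ints
seq = list(range(N))
rnd = seq[:]
random.shuffle(rnd)

for name, order in (("sequential", seq), ("random", rnd)):
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    print(f"{name}: {time.perf_counter() - start:.2f}s")
```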
 

SexyK

Golden Member
Jul 30, 2001
1,343
4
76
Originally posted by: kobymu
Originally posted by: SexyK
The key factor holding back Fusion-type graphics is memory bandwidth. People in the 8600 thread are crucifying the card because it has a 128-bit memory bus at ~2GHz memory clock, which works out to around 32GB/s of memory bandwidth. Currently, top of the line DDR2 modules offer around 8.5GB/s of memory bandwidth. How do you foresee a Fusion-type integrated GPU achieving anywhere near the 86+GB/s memory bandwidth of an 8800GTX, let alone the 32GB/s of an 8600GTS? The only option would be to integrate high-speed GDDR4 into the motherboard, which would drive up costs and leave you stuck with the same memory speed/type even if you threw in a new GPU. For now, expansion slots for graphics memory are out of the question because the noise introduced by the slot interface prohibits such high-speed operation. Additionally, the cost of integrating 384 or 512 bit memory slots onto a motherboard would be prohibitive even if 2GHz operation were possible.

The thing is, you don't see that kind of bandwidth between the CPU and main memory (the memory subsystem / FSB) because CPUs don't need that kind of bandwidth; in 9 out of 10 scenarios a CPU needs low latencies.

Nothing is stopping Intel/AMD from providing such bandwidth, nothing! It was just never needed before. Look at quad-core, for crying out loud; even it only reaches a bandwidth bottleneck in certain server applications. That's 4 cores using the same pipeline to the memory subsystem, and the only reason Intel/AMD don't provide such bandwidth in today's CPUs is that it would add cost to the CPUs even when it is not needed in 90%+ of their cycles.

Latency, well, that is a different story; there are hard technology limitations there. Bandwidth, not so much.

Since you brought up cost as an argument, then please indulge me: how much does a hardcore gamer system cost now, and how much do you think a high-performance Fusion system will cost?

Where do you see the memory bandwidth coming from for a top-end integrated graphics solution?
http://www.sun.com/processors/UltraSPARC-T1/details.xml
Integration

* Up to 8 cores, 4 threads per core
* *4* 144-bit DDR2-533 SDRAM interfaces
- Quad error correct, octal error detect, chipkill ECC
* 4 DIMMS per controller - 16 DIMMS total
* Optional 2-channel operation mode
* JBUS Interface
- 3.1 GB/sec peak effective bandwidth
- 128 bit address/data bus
- 150 - 200 MHz operation

Take the underlying parts and let math do the rest of the job (replace DDR2-533 with DDR3-1066 and 150 - 200 MHz with 2 - 2.5 GHz).

This is just to provide you with a train of thought.

I'm sorry but your argument makes no sense to me. You are talking about CPUs not needing huge memory bandwidth, but we are talking about integrated GPUs here. It is well documented that GPUs require huge amounts of memory bandwidth and in fact will use pretty much all the bandwidth we can throw at them right now. A low-bandwidth, low-latency memory interface may work well for CPUs, but it will completely cripple a GPU.

As for your point about the UltraSPARC-T1's memory interface, need I point out that an entry-level system based on the T1 (with a single 1GHz CPU) costs $9,995? That hardly seems like technology that will be available in affordable desktop systems anytime soon. As I noted in my previous post, right now this type of implementation is cost prohibitive, if not technically unfeasible.
 

SexyK

Golden Member
Jul 30, 2001
1,343
4
76
Originally posted by: kobymu
I forgot to mention cache and Prefetcher: these solutions are there to nullify the latency problem that exist in modern CPU. CPU don?t have a bandwidth problem they have a latency problem, that is the only reason you see current (low) bandwidth in CPU memory subsystem.

If you think a cache/prefetch system will alleviate the latency/bandwidth problems in a high-end graphics subsystem, then please check out the performance of Nvidia TurboCache or ATI/AMD HyperMemory add-in cards. Again, we are talking about an integrated GPU, not a CPU.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
isn't it all really speculation - at this point?
:confused:

there is really very little hard info available ... i expect everything to be much clearer by the end of this year

 

SexyK

Golden Member
Jul 30, 2001
1,343
4
76
Originally posted by: apoppin
isn't it all really speculation - at this point?
:confused:

there is really very little hard info available ... i expect everything to be much clearer by the end of this year

Agreed apoppin - it is all speculation, but I think people predicting that Fusion-based GPUs will outperform discrete add-in boards, or even predicting the demise of discrete add-in GPUs entirely, are off-base. Neither of those scenarios will be happening any time soon.