R600 to be 80nm


SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: SexyK
Such a motherboard would be way more expensive than current boards. Right now we have 2x 64-bit memory interfaces. If you go to a 2x256-bit configuration you are quadrupling the number of traces on the board, which is going to mean more layers and more cost. Then, assuming you can even get DIMMs to function at clock speeds equivalent to GDDR4 speeds (which from what I've read seems very very unlikely due to the longer traces used on motherboards as opposed to graphics cards, and the noise introduced by the slot interface) you're going to need to either 1) use 8 64-bit DIMMs in every system to saturate a 512-bit interface, or 2) create 256-bit DIMMs and use pairs, which would vastly increase the cost of memory modules. Considering the hurdles this type of implementation brings with it, the only realistic option is to integrate the memory into the motherboard itself, which leads to the lock-in problem whereby you can upgrade the core but not the memory. Either way, there's no way a Fusion-type system will offer the same performance as a discrete add-in card before 2010 at the absolute earliest - there are just too many hurdles to overcome right now.
That's silly. My 8800GTS is around the same size as an MATX motherboard.

There is no way they will solder the memory to the motherboard. That's only done due to space constraints in devices like consoles. They do it on graphics cards because there is usually no need for the user to upgrade the memory (and it's not like they sell GDDR4 in stores).

I'm certain that the memory bus width is very 'low tech' at this point and will be one of the simplest issues to overcome. The only 'cost issue' that I foresee is having to buy GDDR4 for your entire rig, not just the graphics card. That should be offset by the fact that you're sharing the memory and not having an add-in board. IMO high end systems will be $100 to $200 *cheaper* than current high end setups because of this type of implementation.

You still have not refuted my comment re: the Xbox 360. That's an integrated GPU, and it was faster than any graphics card on the market....in 2006.
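For reference, the bus-width arithmetic behind the quoted 512-bit / 8-DIMM point works out roughly as follows (a minimal sketch with an assumed GDDR4-class data rate of ~2 GT/s; the numbers are illustrative, not from either poster):

gpu_bus_bits = 512        # hypothetical wide, graphics-card-class memory interface
dimm_channel_bits = 64    # width of one standard DIMM channel
data_rate = 2.0e9         # assumed GDDR4-class transfer rate, ~2 GT/s

# Ordinary 64-bit DIMM channels needed just to match the bus width.
print(gpu_bus_bits // dimm_channel_bits)         # 8
# Peak bandwidth those 8 channels would have to sustain together.
print(gpu_bus_bits / 8 * data_rate / 1e9)        # ~128 GB/s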
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: SexyK
Originally posted by: apoppin
isn't it all really speculation - at this point?
:confused:

there is really very little hard info available ... i expect everything to be much clearer by the end of this year

Agreed apoppin - it is all speculation, but I think people predicting that Fusion-based GPUs will outperform discrete add-in boards, or even the demise of discrete add-in GPUs entirely, are off-base. Neither of those scenarios will be happening any time soon.

define 'soon' so we have a common base

i am saying that within 5 years integrated solutions will start to eliminate discrete GPUs as they surpass the current PCIe2 technology
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: SexyK
Originally posted by: apoppin
isn't it all really speculation - at this point?
:confused:

there is really very little hard info available ... i expect everything to be much clearer by the end of this year

Agreed apoppin - it is all speculation, but I think people predicting that Fusion-based GPUs will outperform discrete add-in boards, or even the demise of discrete add-in GPUs entirely, are off-base. Neither of those scenarios will be happening any time soon.
That's fair, but I think it's more likely that nVidia will either die or be absorbed by a CPU company than that AMD will go out of business within the next 5 years.
 

kobymu

Senior member
Mar 21, 2005
576
0
0
My point is that the argument "i don't see any CPU with a high-bandwidth memory subsystem NOW" is flawed, because CPUs don't NEED it.

If CPUs NEEDED high bandwidth, you would have seen high-bandwidth memory subsystems.

Saying that it doesn't exist NOW is a moot point. It isn't needed NOW.

When it is needed, you will see it. It is that simple.

Because integrated GPUs NEED it, you will see it happen.

As for your point about the UltraSPARC-T1's memory interface, need I point out that an entry-level system based on the T1 (with a single, 1GHz CPU) costs $9,995? That hardly seems like technology that will be available in affordable desktop systems anytime soon. As I noted in my previous post, right now this type of implementation is cost prohibitive, if not technically unfeasible.
It was, like I said, just an example.
It is technically possible!

You don't see that in desktop systems with a $1K price tag on them because it isn't needed there.
 

SexyK

Golden Member
Jul 30, 2001
1,343
4
76
Originally posted by: SickBeast
Originally posted by: SexyK
Such a motherboard would be way more expensive than current boards. Right now we have 2x 64-bit memory interfaces. If you go to a 2x256-bit configuration you are quadrupling the number of traces on the board, which is going to mean more layers and more cost. Then, assuming you can even get DIMMs to function at clock speeds equivalent to GDDR4 speeds (which from what I've read seems very very unlikely due to the longer traces used on motherboards as opposed to graphics cards, and the noise introduced by the slot interface) you're going to need to either 1) use 8 64-bit DIMMs in every system to saturate a 512-bit interface, or 2) create 256-bit DIMMs and use pairs, which would vastly increase the cost of memory modules. Considering the hurdles this type of implementation brings with it, the only realistic option is to integrate the memory into the motherboard itself, which leads to the lock-in problem whereby you can upgrade the core but not the memory. Either way, there's no way a Fusion-type system will offer the same performance as a discrete add-in card before 2010 at the absolute earliest - there are just too many hurdles to overcome right now.
That's silly. My 8800GTS is around the same size as an MATX motherboard.

There is no way they will solder the memory to the motherboard. That's only done due to space constraints in devices like consoles. They do it on graphics cards because there is usually no need for the user to upgrade the memory (and it's not like they sell GDDR4 in stores).

I'm certain that the memory bus width is very 'low tech' at this point and will be one of the simplest issues to overcome. The only 'cost issue' that I foresee is having to buy GDDR4 for your entire rig, not just the graphics card. That should be offset by the fact that you're sharing the memory and not having an add-in board. IMO high end systems will be $100 to $200 *cheaper* than current high end setups because of this type of implementation.

You still have not refuted my comment re: the Xbox 360. That's an integrated GPU, and it was faster than any graphics card on the market....in 2006.

As for the xbox 360, the GPU has access to ~25GB/s of memory bandwidth; however, the memory is not in DIMM form - it is attached directly to the motherboard, making the xbox360 graphics solution more like an add-in card than an integrated solution.

I also am not disagreeing that in 5 years we may have a fusion-type product that can compete on the high end. However, many people around here seem to think that the first-gen fusion products will make discrete add-in cards obsolete overnight, which will not be the case.
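For comparison with that ~25GB/s figure, a minimal sketch of the peak-bandwidth arithmetic (my own round numbers: a 128-bit unified GDDR3 pool at ~1.4 GT/s effective versus a dual-channel DDR2-800 desktop):

def peak_bandwidth_gb_s(bus_width_bits, transfers_per_sec):
    # Peak theoretical bandwidth: bytes per transfer times transfer rate.
    return bus_width_bits / 8 * transfers_per_sec / 1e9

# Xbox 360-style unified pool: 128-bit bus, GDDR3 at ~1.4 GT/s effective (assumed).
print(peak_bandwidth_gb_s(128, 1.4e9))      # ~22.4 GB/s, in the ballpark of the figure above
# Typical dual-channel DDR2-800 desktop: 2 x 64-bit channels at 0.8 GT/s.
print(peak_bandwidth_gb_s(2 * 64, 0.8e9))   # ~12.8 GB/s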
 

kobymu

Senior member
Mar 21, 2005
576
0
0
Originally posted by: SexyK
If you think a cache/prefetch system will alleviate the latency/bandwidth problems in a high-end graphics subsystem, then please check out the performance of nVidia Turbocache or ATI/AMD Hypermemory add-in cards. Again, we are talking about an integrated GPU, not a CPU.
Cache and prefetch are examples of the creative solutions that engineers come up with when they hit hard technological limitations (latency).
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: SexyK
Originally posted by: SickBeast
Originally posted by: SexyK
Such a motherboard would be way more expensive than current boards. Right now we have 2x 64-bit memory interfaces. If you go to a 2x256-bit configuration you are quadrupling the number of traces on the board, which is going to mean more layers and more cost. Then, assuming you can even get DIMMs to function at clock speeds equivalent to GDDR4 speeds (which from what I've read seems very very unlikely due to the longer traces used on motherboards as opposed to graphics cards, and the noise introduced by the slot interface) you're going to need to either 1) use 8 64-bit DIMMs in every system to saturate a 512-bit interface, or 2) create 256-bit DIMMs and use pairs, which would vastly increase the cost of memory modules. Considering the hurdles this type of implementation brings with it, the only realistic option is to integrate the memory into the motherboard itself, which leads to the lock-in problem whereby you can upgrade the core but not the memory. Either way, there's no way a Fusion-type system will offer the same performance as a discrete add-in card before 2010 at the absolute earliest - there are just too many hurdles to overcome right now.
That's silly. My 8800GTS is around the same size as an MATX motherboard.

There is no way they will solder the memory to the motherboard. That's only done due to space constraints in devices like consoles. They do it on graphics cards because there is usually no need for the user to upgrade the memory (and it's not like they sell GDDR4 in stores).

I'm certain that the memory bus width is very 'low tech' at this point and will be one of the simplest issues to overcome. The only 'cost issue' that I foresee is having to buy GDDR4 for your entire rig, not just the graphics card. That should be offset by the fact that you're sharing the memory and not having an add-in board. IMO high end systems will be $100 to $200 *cheaper* than current high end setups because of this type of implementation.

You still have not refuted my comment re: the Xbox 360. That's an integrated GPU, and it was faster than any graphics card on the market....in 2006.

As for the xbox 360, the GPU has access to ~25GB/s of memory bandwidth; however, the memory is not in DIMM form - it is attached directly to the motherboard, making the xbox360 graphics solution more like an add-in card than an integrated solution.

I also am not disagreeing that in 5 years we may have a fusion-type product that can compete on the high end. However, many people around here seem to think that the first-gen fusion products will make discrete add-in cards obsolete overnight, which will not be the case.
Well, I was just reading that AMD is working on 4 versions of Fusion:

- data centric
- graphics centric
- media centric
- general use

What I take from this is that the 'graphics centric' version will probably be a pretty good performer.

They're saying that AMD's main ambition for making Fusion was to create laptops with longer battery life and much better graphics performance, so you may well be correct to a degree.

I'm thinking Fusion will be better than most midrange graphics cards when it is released. That's not to say that it won't be possible to beat out the high-end; it's just that very few people require that level of graphics horsepower.
 

SexyK

Golden Member
Jul 30, 2001
1,343
4
76
Originally posted by: SickBeast
Originally posted by: SexyK
Originally posted by: SickBeast
Originally posted by: SexyK
Such a motherboard would be way more expensive than current boards. Right now we have 2x 64-bit memory interfaces. If you go to a 2x256-bit configuration you are quadrupling the number of traces on the board, which is going to mean more layers and more cost. Then, assuming you can even get DIMMs to function at clock speeds equivalent to GDDR4 speeds (which from what I've read seems very very unlikely due to the longer traces used on motherboards as opposed to graphics cards, and the noise introduced by the slot interface) you're going to need to either 1) use 8 64-bit DIMMs in every system to saturate a 512-bit interface, or 2) create 256-bit DIMMs and use pairs, which would vastly increase the cost of memory modules. Considering the hurdles this type of implementation brings with it, the only realistic option is to integrate the memory into the motherboard itself, which leads to the lock-in problem whereby you can upgrade the core but not the memory. Either way, there's no way a Fusion-type system will offer the same performance as a discrete add-in card before 2010 at the absolute earliest - there are just too many hurdles to overcome right now.
That's silly. My 8800GTS is around the same size as an MATX motherboard.

There is no way they will solder the memory to the motherboard. That's only done due to space constraints in devices like consoles. They do it on graphics cards because there is usually no need for the user to upgrade the memory (and it's not like they sell GDDR4 in stores).

I'm certain that the memory bus width is very 'low tech' at this point and will be one of the simplest issues to overcome. The only 'cost issue' that I foresee is having to buy GDDR4 for your entire rig, not just the graphics card. That should be offset by the fact that you're sharing the memory and not having an add-in board. IMO high end systems will be $100 to $200 *cheaper* than current high end setups because of this type of implementation.

You still have not refuted my comment re: the Xbox 360. That's an integrated GPU, and it was faster than any graphics card on the market....in 2006.

As for the xbox 360, the GPU has access to ~25GB/s of memory bandwidth; however, the memory is not in DIMM form - it is attached directly to the motherboard, making the xbox360 graphics solution more like an add-in card than an integrated solution.

I also am not disagreeing that in 5 years we may have a fusion-type product that can compete on the high end. However, many people around here seem to think that the first-gen fusion products will make discrete add-in cards obsolete overnight, which will not be the case.
Well, I was just reading that AMD is working on 4 versions of Fusion:

- data centric
- graphics centric
- media centric
- general use

What I take from this is that the 'graphics centric' version will probably be a pretty good performer.

They're saying that AMD's main ambition for making Fusion was to create laptops with longer battery life and much better graphics performance, so you may well be correct to a degree.

I'm thinking Fusion will be better than most midrange graphics cards when it is released. That's not to say that it won't be possible to beat out the high-end; it's just that very few people require that level of graphics horsepower.

That's interesting information on Fusion - do you have a link? Anyway, you may very well be right about what performance to expect. Considering the amount of information available now, it's really impossible to know. I am just thinking out loud about the hurdles I foresee AMD/ATI having to overcome in order to make Fusion work. Trust me, I would love to see them succeed. Although I still think initial system cost would probably be higher with Fusion, the cost of upgrading a socket-based GPU would presumably be lower than purchasing a whole new add-in card.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
You still have not refuted my comment re: the Xbox 360. That's an integrated GPU, and it was faster than any graphics card on the market....in 2006.
LoL!!

Sorry, but the 360's Xenos GPU was not faster than any card on the market in 2006. Hell, the G80 launched in 2006.

Even before the G80, the X19k series and G7 series could play some of the same games at higher resolution and higher IQ and still get playable framerates.

I still have yet to see a single Xbox 360 game that actually uses decent levels of AA and AF. (Granted, there could be one out now. I haven't played the 360 for a few months)

Part of the reason why they probably don't use higher IQ in their games is the bandwidth constraints.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: josh6079
You still have not refuted my comment re: the Xbox 360. That's an integrated GPU, and it was faster than any graphics card on the market....in 2006.
LoL!!

Sorry, but the 360's Xenos GPU was not faster than any card on the market in 2006. Hell, the G80 launched in 2006.

Even before the G80, the X19k series and G7 series could play some of the same games at higher resolution and higher IQ and still get playable framerates.

I still have yet to see a single Xbox 360 game that actually uses decent levels of AA and AF. (Granted, there could be one out now. I haven't played the 360 for a few months)

Part of the reason why they probably don't use higher IQ in their games is the bandwidth constraints.

The 360 does not even come close to matching the image quality or resolution of a separate video card. Not to mention that all it has to do is run games customized just for the GPU and a minimal operating system.

I don't think a CGPU or an integrated GPU will ever replace a separate graphics card. Why else would the R600 be so huge if they were moving in the direction of making them smaller?

The only benefit of a combined CPU\GPU is cost.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: Wreckage
Originally posted by: josh6079
You still have not refuted my comment re: the Xbox 360. That's an integrated GPU, and it was faster than any graphics card on the market....in 2006.
LoL!!

Sorry, but the 360's Xenos GPU was not faster than any card on the market in 2006. Hell, the G80 launched in 2006.

Even before the G80, the X19k series and G7 series could play some of the same games at higher resolution and higher IQ and still get playable framerates.

I still have yet to see a single Xbox 360 game that actually uses decent levels of AA and AF. (Granted, there could be one out now. I haven't played the 360 for a few months)

Part of the reason why they probably don't use higher IQ in their games is the bandwidth constraints.

The 360 does not even come close to matching the image quality or resolution of a separate video card. Not to mention that all it has to do is run games customized just for the GPU and a minimal operating system.

I don't think a CGPU or an integrated GPU will ever replace a separate graphics card. Why else would the R600 be so huge if they were moving in the direction of making them smaller?

The only benefit of a combined CPU\GPU is cost.
Yeah, TODAY there are better graphics cards than the Xbox 360's GPU. When it was released, there were not.

The X1900 cards weren't out yet, and I'm not certain if even the X1800 cards were.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: SickBeast

Yeah, TODAY there are better graphics cards than the Xbox 360's GPU. When it was released, there were not.

The X1900 cards weren't out yet, and I'm not certain if even the X1800 cards were.

AFAIK the 360 runs games at 720p with little or no AA. Plenty of cards at that time could run new games at 1280x720 with 2xAA.
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
Xenos is nowhere near as capable as you people make it out to be.

You guys are fooled into thinking it performs better than it does because it uses its own custom API that is tailor-made for it, without any OS overhead.

Look at the freakin RSX GPU in PS3. That's nothing but a 7800GTX and as far as I'm concerned, PS3's graphics easily match the 360's (I'm a 360 user and I love it).

As for the whole C/GPU issue, I'm not going to get sucked into a flame war at the moment. I have a couple more classes today and I don't want to start something I can't finish. Besides, this thread is already off topic enough.

kobymu - It's obvious that I'm not going to change your mind and you won't change mine. I'm sticking to my opinion that C/GPUs won't overtake discrete graphics till 2011. Your opinion is just as valid as mine because they're both based off of pure speculation.

On a side note, if AMD is going to shock the world by giving us a C/GPU with greater than R600 performance next year, how in the hell is Intel planning on competing?
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
Originally posted by: apoppin
intel has their own solution

even the OP is OFF topic

:roll:

:D

what's that about "derail" again, Wreckage?
:confused:

I know Intel has their own solution; I can't think of its name right now, but they're working on it.

However, who here thinks that Intel can come up with anything other than a low-mid range solution in their C/GPU?

If Intel really thought AMD was going to give us a C/GPU that was equal to a high-end discrete graphics card, as some of you do, they would be crapping their pants right now and might have made a push to buy Nvidia.
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
Originally posted by: SickBeast
I'm certain that the memory bus width is very 'low tech' at this point and will be one of the simplest issues to overcome.
Having a higher bit width would require a motherboard with more layers. That would add cost. For comparison, an 8800GTX has 12 PCB layers and a motherboard has 4-8, depending on how fancy and how compact the board is.

 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Originally posted by: kobymu
My point is that the argument "i don't see any CPU with a high-bandwidth memory subsystem NOW" is flawed, because CPUs don't NEED it.

If CPUs NEEDED high bandwidth, you would have seen high-bandwidth memory subsystems.

Saying that it doesn't exist NOW is a moot point. It isn't needed NOW.
...

CPUs NEED high bandwidth! It's just not realistic (meaning much more expensive) to get it. That's why there are tricks to work around that (cache, prefetch, ...). The ultimate ideal state would be the whole RAM in the form of cache.

Some tasks don't need more than 1MB of memory. They run from cache and they are fast, but there are many tasks that need to go to main memory, and they would benefit from high bandwidth.

GPUs need high bandwidth because, by the nature of the task at hand, they need more than 1MB (2, 4, 8, ... whatever cache size would be possible) of memory.

CPUs are a little bit different. They execute different types of tasks, and many of them fit into cache; that's why it appears that they don't need high bandwidth. But if there were no cache, they would starve to death for bandwidth.
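A toy model of that cache argument (a sketch with made-up numbers, only to show why the need is hidden rather than absent):

def dram_traffic_gb_s(raw_demand_gb_s, cache_hit_rate):
    # Off-chip traffic is whatever the cache hierarchy fails to absorb.
    return raw_demand_gb_s * (1.0 - cache_hit_rate)

raw_demand = 50.0  # hypothetical demand if every access went to main memory

for hit_rate in (0.0, 0.90, 0.99):
    print(hit_rate, dram_traffic_gb_s(raw_demand, hit_rate))

# With no cache the CPU would need GPU-class bandwidth (50 GB/s here);
# at a 99% hit rate a narrow DIMM interface is plenty, so the need never
# shows up in ordinary benchmarks.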
 

kobymu

Senior member
Mar 21, 2005
576
0
0
Originally posted by: Janooo
Originally posted by: kobymu
My point is the argument "i don?t see any CPU with high bandwidth to memory subsystem NOW" is flowed because CPU don?t NEED it.

If CPU NEEDED high bandwidth, you would have seen high bandwidth memory subsystem.

Saying that it doesn't exist NOW is a moot point. It isn?t needed NOW.
...

CPU NEEDS high bandwidth! It's just not realistic (meaning much more expensive) to get it. That's why there are tricks around that (cache, prefetch, ...). The ultimate ideal state would be the whole RAM in a form of cache.

Some tasks don't need more than 1MB of memory. They run from cache and they are fast but there are many tasks that need to go main memory and they would benefit from high bandwidth.

GPUs need high bandwidth because by nature of a task at hand they need more than 1MB(2, 4, 8,... what ever cache size would be possible) of memory.

CPUs are a little bit different. They execute different type of tasks and many of them fit into cache and that's the reason why they appear that they don't need high bandwidth. But if there was no cache they would starve to death for high bandwidth.

From Anand's latest article:

http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=2963&p=6

All the Core 2 Duo varieties had the SAME bandwidth.
All the Athlon 64 X2 varieties had the SAME bandwidth.

And that is from the 3D RENDERING Performance page.

Even when you look at 2-core CPUs only, you see a delta of 50% if not more.

What does that tell you?

Have you ever programmed any real-world application? Do you have any idea what the hell you are talking about?

For every application you can find that is bottlenecked by bandwidth I can find you 10 that are bottlenecked by other subsystems, 10!
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
http://www.theinquirer.net/default.aspx?article=38802
AMD JUST ANNOUNCED lower earnings for its first financial quarter; it is now set to earn $1.225 billion.

It blames the revenue shortfall on the price war with Intel, although it won't directly say it. Since the quarter is already over, one would guess this number is pretty accurate.

In any case, to make up for it, it will lower capex by about $500M in 2007, but said this will not change any numbers for the current year. AMD is also going to stop hiring in all but critical positions, and slow discretionary spending.

and

8800 Ultra to arrive on May Day

for $999
:shocked:
 

SexyK

Golden Member
Jul 30, 2001
1,343
4
76
Originally posted by: kobymu
Originally posted by: Janooo
Originally posted by: kobymu
My point is that the argument "i don't see any CPU with a high-bandwidth memory subsystem NOW" is flawed, because CPUs don't NEED it.

If CPUs NEEDED high bandwidth, you would have seen high-bandwidth memory subsystems.

Saying that it doesn't exist NOW is a moot point. It isn't needed NOW.
...

CPUs NEED high bandwidth! It's just not realistic (meaning much more expensive) to get it. That's why there are tricks to work around that (cache, prefetch, ...). The ultimate ideal state would be the whole RAM in the form of cache.

Some tasks don't need more than 1MB of memory. They run from cache and they are fast, but there are many tasks that need to go to main memory, and they would benefit from high bandwidth.

GPUs need high bandwidth because, by the nature of the task at hand, they need more than 1MB (2, 4, 8, ... whatever cache size would be possible) of memory.

CPUs are a little bit different. They execute different types of tasks, and many of them fit into cache; that's why it appears that they don't need high bandwidth. But if there were no cache, they would starve to death for bandwidth.

From Anand's latest article:

http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=2963&p=6

All the Core 2 Duo varieties had the SAME bandwidth.
All the Athlon 64 X2 varieties had the SAME bandwidth.

And that is from the 3D RENDERING Performance page.

Even when you look at 2-core CPUs only, you see a delta of 50% if not more.

What does that tell you?

Have you ever programmed any real-world application? Do you have any idea what the hell you are talking about?

For every application you can find that is bottlenecked by bandwidth I can find you 10 that are bottlenecked by other subsystems, 10!

The point he is making is that without on-die cache every application would be bandwidth limited. Thus the fact that most PC applications work on small data sets, paired with powerful prefetchers and the presence of an extremely high-speed on-die cache, lessens the impact of lower bandwidth to system memory. Try turning off your L1 and L2 cache and see if applications are limited by anything other than memory bandwidth. The vast majority will struggle mightily. That is why Janooo is arguing that CPUs do need massive bandwidth - they need it, but only for a smaller data set which can be predicted and cached in L1 and L2 most of the time.

This is in contrast to GPUs which work with much larger data sets. 1-2MB of high-speed cache on a GPU would be insufficient to hold all the data required to render even one frame at a decent resolution. One approach to mitigating this issue with integrated GPUs is found in the xbox 360 where there is 10MB of high-speed eDRAM integrated into the GPU die. Note however that the 360 only renders in one resolution all the time, and all 360's have the same amount of memory, so developers can target one set of specifications and tailor their applications to fit into the eDRAM. This approach would have a much harder time working on the PC platform because people expect to be able to use extreme resolutions with their high-end GPU, so the size of the eDRAM would have to increase significantly. Creating a much large eDRAM block would make the die huge and is probably cost-prohibitive. Note also that the even with the inclusion of the eDRAM, the 360 still uses comparatively high-speed system memory (4x the bandwidth of current PC system memory) that is soldered directly to the system board to complement the cache. As others have pointed out, getting this kind of bandwidth onto a consumer-level motherboard with traditional DIMMs would increase the complexity of motherboards many times over, and would most likely increase the cost of DIMMs significantly because the DIMM interface would most likely have to be increased from 64-bits/channel to 128 or 256 bits per channel. Even then the trace length would still have to be addressed in order to allow the memory clockspeeds necessary. We are a long way off from having a fusion GPU being the top dog. Midrage? Possible. But high-end is still a ways off.