*OFFICIAL* R700 thread

JPB

Diamond Member
Jul 4, 2005
4,064
89
91
I love posting updates and whatnot, because I love reading everyone's opinions on the subject. :)
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
You know, I think a better term for it (if it is really going the way we think) is not a multi-core module... but a modular core model... where one core is split across several dies, with specific parts (like shader clusters) being stackable.
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
@taltamir - 'modular core model' is probably about as accurate and brief as I can imagine, given the rumors we've heard about the card's design. Maybe they should just call it MCM, for short.

@JPB - my only additional comment is that I would be thrilled if the industry heads in this direction. With modular core design, neither NVIDIA nor ATI/AMD would have an excuse *not* to fill a price point with a semi-competitive product.

In the last round, NVIDIA couldn't just conjure a chip that stood between the 32-shader 86xx and the 96/128-shader 88xx out of thin air. They needed another die specifically designed for that. And while they may have had one (the 88xx mobility parts), they could at least pretend that they didn't if competition didn't require its release.

It will be a lot harder for any company to justify not releasing an entire family of cards if its cores are modular. There would be no logical reason why 2x and 3x products couldn't launch along with the 1x and 4x (or, if 2x and 4x products launch, why not 1x and 3x).

In other words, if this really happens, maybe we'll see fewer of the "Goodness gracious, you say that there is a larger performance difference between the HD26xx and the HD29xx? Gosh, what can we do about that? We don't have another die ready!" responses. Modular design should allow both companies to bring more competitive products to market faster--and that's good for us.

The big caveat is working out the technical issues. I've been around long enough to have heard any number of complaints about Xfire & SLI. If this system is to work, it has to be utterly transparent to the end user.
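To make the modular-lineup idea concrete, here's a quick sketch. The 64-shader base module and the four tiers are made-up numbers, purely for illustration, not leaked specs:

# Hypothetical modular lineup: every tier is N copies of one base module.
# The 64-shader figure is invented for illustration.
BASE_SHADERS = 64

for n in range(1, 5):
    print(f"{n}x module: {n * BASE_SHADERS} shaders")
# 1x: 64, 2x: 128, 3x: 192, 4x: 256 -- a full family from one die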
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Modular design. Very smart. R700 will be value, midrange, and high end, all depending on how many cores are placed on the card. They don't have to waste time on different GPU designs. One size fits all; it's just how many cores on board that determines the performance bracket/price. This of course all depends on how well the driver is designed (CrossfireX) and how efficient they can make it. I mean, you could have 4 R700 cores on one card, but does it offer the performance of 4 R700s? Or is it maybe 3.1x the performance of a single R700? There is a percentage of overhead cost (for both Crossfire and SLI) with each additional GPU. Unless that is all changed.
This should be the most interesting thing the graphics world has had to offer in a long time if this bears fruit.

Looking forward to it.
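To put numbers on that overhead question, a minimal sketch, assuming (purely as a guess, not a measurement) that each extra GPU contributes about 70% of a full GPU's worth of performance:

# The 0.7 marginal efficiency per extra GPU is an assumed figure.
def effective_speedup(num_gpus, marginal_efficiency=0.7):
    # First GPU counts fully; each additional GPU adds only a fraction
    # due to CrossfireX/SLI driver and synchronization overhead.
    return 1.0 + (num_gpus - 1) * marginal_efficiency

for n in (1, 2, 3, 4):
    print(f"{n} GPU(s): {effective_speedup(n):.1f}x")
# 4 GPUs -> 3.1x, the figure guessed above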
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: bryanW1995
lol

the problem is that you would need a multi-core monitor to go with the others...

And a multi-core brain to play 2 games at once.
 

wanderer27

Platinum Member
Aug 6, 2005
2,173
15
81
Originally posted by: keysplayr2003
Modular design. Very smart. R700 will be value, midrange, and high end, all depending on how many cores are placed on the card. They don't have to waste time on different GPU designs. One size fits all; it's just how many cores on board that determines the performance bracket/price. This of course all depends on how well the driver is designed (CrossfireX) and how efficient they can make it. I mean, you could have 4 R700 cores on one card, but does it offer the performance of 4 R700s? Or is it maybe 3.1x the performance of a single R700? There is a percentage of overhead cost (for both Crossfire and SLI) with each additional GPU. Unless that is all changed.
This should be the most interesting thing the graphics world has had to offer in a long time if this bears fruit.

Looking forward to it.

If you take AMD's X2 (dual-core) CPUs as an example, I'm betting you'll get 2.x times the performance with four cores. There's going to be quite a bit lost to all the overhead management.

I kinda hate to see GPUs going this route, as it means more power & heat. GPUs use too much power and put out too much heat as it is . . .

I do realize that this is probably the only way we're going to keep ramping up GPU processing power, though.

 

Mech0z

Senior member
Oct 11, 2007
270
1
81
Will these cards require PCIe 2.0? Because I am buying a new computer very soonish and it would be nice to be compatible with those cards when they come out!
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: Mech0z
Will these cards require PCIe 2.0? Because I am buying a new computer very soonish and it would be nice to be compatible with those cards when they come out!

I think they might, just for power usage. I see two extra power connectors on board, so 150W from the slot and an additional 150W from the two power connectors. If a single RV670 uses about 130W to 140W, then the PCIe 2.0 slot @ 150W plus a single connector @ 75W wouldn't be enough (225W total). They'll need the 2nd 75W connector to meet the (supposed on my part) 260W to 280W draw, cutting it close against the 300W total available.

So yes, I think it will need PCIe 2.0 and two power connectors. No less.
Either that, or a power brick if using a PCIe 1.x slot.
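Spelling out that budget arithmetic, a minimal sketch using the per-source limits assumed in the post above (the 150W PCIe 2.0 slot figure is this thread's assumption, not a verified spec number):

# Per-source limits as assumed above, in watts.
PCIE2_SLOT = 150
PCIE1_SLOT = 75
SIX_PIN = 75

print(PCIE2_SLOT + SIX_PIN)        # 225 W: short of a guessed 260-280 W draw
print(PCIE2_SLOT + 2 * SIX_PIN)    # 300 W: covers it, but cutting it close
print(PCIE1_SLOT + 2 * SIX_PIN)    # 225 W: hence the power-brick remark for PCIe 1.x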
 

nullpointerus

Golden Member
Apr 17, 2003
1,326
0
0
Originally posted by: keysplayr2003
Originally posted by: Mech0z
Will these cards require PCIe 2.0? Because I am buying a new computer very soonish and it would be nice to be compatible with those cards when they come out!

I think they might, just for power usage. I see two extra power connectors on board, so 150W from the slot and an additional 150W from the two power connectors. If a single RV670 uses about 130W to 140W, then the PCIe 2.0 slot @ 150W plus a single connector @ 75W wouldn't be enough (225W total). They'll need the 2nd 75W connector to meet the (supposed on my part) 260W to 280W draw, cutting it close against the 300W total available.

So yes, I think it will need PCIe 2.0 and two power connectors. No less.
Either that, or a power brick if using a PCIe 1.x slot.

Maximum power that can be supplied to video cards by various sources:

A. PCI-E (1.x) 16x Slot: 75W
B. PCI-E (2.0) 32x Slot: 150W

C. PCI-E (1.x) 6-pin Connector: 75W
D. PCI-E (2.0) 8-pin Connector: 150W

B + C + D = 375W, but most people will only manage 2x6-pin connectors off the PSU

I thought RV670's power usage under load was about half the HD2900XT (i.e. 66W avg. vs. 133W avg.) due to the die shrink, and the HD2900XT is PCI-E 1.x w/ the same two power connectors as this 3870 X2 card, for total supported maximum power draw of 225W (at stock clocks only).

I also wonder whether clocks might be slightly lower than stock 3870 single-GPU cards due to cooling issues. Remember the 7950GX2 used mobile derivatives of the 7900GTX core w/ lower clocks than the 7900GTX. Lower 3D clocks means less heat, but more importantly lower power consumption under load.

My guess is the 3870 X2 will work fine with the same 225W total (PCI-E 1.x slot plus 2x PCI-E 1.x connectors) at stock clocks.

Regardless, buying PCI-E 2.0 is probably a sensible decision right now if not cost prohibitive (i.e. < $40 premium).
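On the downclocking point: dynamic power scales roughly with frequency times voltage squared, so even a small clock/voltage drop buys real headroom. A rough sketch; the baseline and downclocked figures below are invented for illustration:

# P ~ f * V^2 for dynamic power (leakage ignored); all numbers invented.
def scaled_power(p_base, f_base, v_base, f_new, v_new):
    return p_base * (f_new / f_base) * (v_new / v_base) ** 2

# e.g. 110 W per GPU at 775 MHz / 1.20 V, downclocked to 700 MHz / 1.15 V:
print(round(scaled_power(110, 775, 1.20, 700, 1.15)))  # ~91 W per GPU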
 

Mech0z

Senior member
Oct 11, 2007
270
1
81
Right now the only 2.0 chipset is X38, right? Or does the 790FX have it as well? I really don't know where to read up on this kind of thing :/

EDIT: "Quad PCI-E 2.0 x16 graphics interfaces with ATI CrossFireX™ support for extreme gaming performance"
 

OneOfTheseDays

Diamond Member
Jan 15, 2000
7,052
0
0
Seems like the bulk of the work is gonna have to go into the software drivers, making sure that all the cores can communicate with each other efficiently. Otherwise, kudos to AMD, because it sounds like they are heading in the right direction.
 

s44

Diamond Member
Oct 13, 2006
9,427
16
81
If all the cores are going to be on a single die, what's the point? Adding intermediate versions, sure. But at the high end, just adding more blocks of shaders/texture units will scale better.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
1. 790FX is PCIe 2.0.

2. Calling 'modular core model' MCM would be confusing, since MCM already stands for multi-chip module (the quad-core C2Q is an MCM: two Conroe dies).

3. Modular design is different from multi-core design in that it shouldn't require any special drivers; instead it should be implemented in the hardware and card BIOS, meaning that to the user it is completely transparent. Think of L3 cache: it is currently modular, RAM on the package but not part of the die. That is why there are different versions with different amounts of L3 cache. If they did the same with stream-processor arrays or cache on a video card...
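To illustrate that transparency idea, a toy sketch (all names and numbers invented): the card's BIOS would just report totals for whatever cluster dies are present, the way a CPU reports whatever L3 happens to be on the package.

# Toy model, invented numbers: the BIOS sums up whatever shader-cluster
# dies are populated and reports one logical GPU; the driver never
# needs to know the die count.
CLUSTERS_PER_DIE = 4
SHADERS_PER_CLUSTER = 16

def report_gpu(num_dies):
    clusters = num_dies * CLUSTERS_PER_DIE
    return {"shader_clusters": clusters,
            "stream_processors": clusters * SHADERS_PER_CLUSTER}

print(report_gpu(1))  # value part
print(report_gpu(4))  # high end: same driver, just a bigger logical GPU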


EDIT: I don't think there is any chance that the R700 is modular, though.
1. There are pictures showing two module connections (i.e., two squares containing dies, etc.)
2. There would have been more mention of something like that.
3. The 38xx series would have been modular.
etc. etc...

The best we can hope for is two modules, each with one die, working on a single card instead of two cards glued together.
They are all planning on Fusion anyway, where some of the CPU cores would be GPU cores instead.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: nullpointerus
Originally posted by: keysplayr2003
Originally posted by: Mech0z
Will these cards require PCIe 2.0? Because I am buying a new computer very soonish and it would be nice to be compatible with those cards when they come out!

I think they might, just for power usage. I see two extra power connectors on board, so 150W from the slot and an additional 150W from the two power connectors. If a single RV670 uses about 130W to 140W, then the PCIe 2.0 slot @ 150W plus a single connector @ 75W wouldn't be enough (225W total). They'll need the 2nd 75W connector to meet the (supposed on my part) 260W to 280W draw, cutting it close against the 300W total available.

So yes, I think it will need PCIe 2.0 and two power connectors. No less.
Either that, or a power brick if using a PCIe 1.x slot.

Maximum power that can be supplied to video cards by various sources:

A. PCI-E (1.x) 16x Slot: 75W
B. PCI-E (2.0) 32x Slot: 150W

C. PCI-E (1.x) 6-pin Connector: 75W
D. PCI-E (2.0) 8-pin Connector: 150W

B + C + D = 375W, but most people will only manage 2x6-pin connectors off the PSU

I thought RV670's power usage under load was about half the HD2900XT (i.e. 66W avg. vs. 133W avg.) due to the die shrink, and the HD2900XT is PCI-E 1.x w/ the same two power connectors as this 3870 X2 card, for total supported maximum power draw of 225W (at stock clocks only).

I also wonder whether clocks might be slightly lower than stock 3870 single-GPU cards due to cooling issues. Remember the 7950GX2 used mobile derivatives of the 7900GTX core w/ lower clocks than the 7900GTX. Lower 3D clocks means less heat, but more importantly lower power consumption under load.

My guess is the 3870 X2 will work fine with the same 225W total (PCI-E 1.x slot plus 2x PCI-E 1.x connectors) at stock clocks.

Regardless, buying PCI-E 2.0 is probably a sensible decision right now if not cost prohibitive (i.e. < $40 premium).
I read somewhere the other day that the R680 is supposed to have slightly lower clocks than the 3870. Keys, I thought you told me that the 3870 was only 105W? Maybe it was john... all you mods run together... ;)

 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Well, really, 8800 series cards already use this with their clusters, right? 4 for the 8800GT, 5 for the 8800GTS 640, and 6 for the GTX/Ultra. Why couldn't AMD do something similar with R700?
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: bryanW1995

I read somewhere the other day that the R680 is supposed to have slightly lower clocks than the 3870. Keys, I thought you told me that the 3870 was only 105W? Maybe it was john... all you mods run together... ;)

Hmmm. If that were the case, then let's overkill and double it for the 3870 X2 to 210W. You'd still be OK with a PCIe 1.x slot and two connectors.

 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: bryanW1995
Well, really, 8800 series cards already use this with their clusters, right? 4 for the 8800GT, 5 for the 8800GTS 640, and 6 for the GTX/Ultra. Why couldn't AMD do something similar with R700?

I think because transistor counts are getting really high. More wafer real estate is being taken up by each core (die shrinks help a lot). But why increase the complexity even further by adding even more transistors to a single core when a modular design is possible?
 

thilanliyan

Lifer
Jun 21, 2005
12,064
2,277
126
Originally posted by: taltamir
EDIT: I don't think there is any chance that the R700 is modular though.
1. There are pictures showing two module connections (ie, two squares containing die etc)

You've seen pics of the R700? Link? Are you sure you're not mistaking it for the R680?

I know pics of the R680 have been shown, but I've never seen the R700.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Exactly, keysplayr... yes, they have a modular die, but not a modular module. They can add more clusters, but doing so makes the die bigger, lowering yields, increasing the heat generated (in a very small area, compared to splitting the die), etc.
With full modularity, instead of having 6/7/8 clusters (it's not 4/5/6 but 6/7/8), you could have X dies of 6 clusters each. So you could have, say, 5 dies of 6 clusters.
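On the yield point, a minimal sketch with a simple Poisson defect model; the defect density and die areas are invented figures, purely for illustration:

import math

# Fraction of good dies ~ exp(-defect_density * area).
def die_yield(area_cm2, d0=0.5):
    return math.exp(-d0 * area_cm2)

# Wafer area spent per good "4 clusters' worth" of silicon:
monolithic = 4.0 / die_yield(4.0)       # one big 4 cm^2 die: ~29.6 cm^2
modular = 4 * 1.0 / die_yield(1.0)      # any four good 1 cm^2 dies: ~6.6 cm^2
print(round(monolithic, 1), round(modular, 1))

The small dies win because any four good ones can be paired up, while a single defect in the big die scraps the whole thing.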


@thilanliyan: that is a possibility... is the R680 supposed to be an MCM?
 

thilanliyan

Lifer
Jun 21, 2005
12,064
2,277
126
Originally posted by: taltamir
@thilanliyan: that is a possibility... is the R680 supposed to be an MCM?

The R680 is supposed to be 2 dies on one PCB... more like a Crossfire setup on one card.

Were the pics you saw 2 separate dies?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
For it to be Crossfire on one card, it would have to have two physical boards... the pictures I have seen show two... "squares" on one plastic board (not dies, but the modules that go in a socket). So, a dual-socket board... which means it COULD potentially use a unified RAM architecture or some other advanced stuff, and potentially not Crossfire.
 

thilanliyan

Lifer
Jun 21, 2005
12,064
2,277
126
Originally posted by: taltamir
For it to be Crossfire on one card, it would have to have two physical boards... the pictures I have seen show two... "squares" on one plastic board (not dies, but the modules that go in a socket). So, a dual-socket board... which means it COULD potentially use a unified RAM architecture or some other advanced stuff, and potentially not Crossfire.

Not necessarily... there have been a bunch of 2600XT DUAL cards, where 2 dies were on one board, each with its own separate memory. They were basically Crossfire on a board. So I guess the pics you saw were the R680?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Maybe they were.

Thanks for informing me about the two-dies-on-one-board Crossfire... that is just stupid, though. I can't believe they actually did that.