
Rumor Section: About the new GPUs

Originally posted by: SlowSpyder
The 4890 uses less power on the whole than the 4870. The 4890 is really a refined, improved 4870. While it's hardly revolutionary, it is a step forward from the previous design.

Depends on which review you look at, it seems.
Eg, this review:
http://www.tweaktown.com/artic...hics_card/index17.html
Here you see the 4890 using less power than the 4870, but both use more power than the GTX285, both in idle and load.

But then this review:
http://www.bit-tech.net/hardwa...0-1gb-atomic-review/13
Here the 4890 takes quite a bit more than the 4870... However, they magically both stay well below the GTX285 (except for the Sapphire 'Atomic' version).

I don't see how the 4890 could be a refined, improved 4870 though.
As far as I know, it's just a higher binning of the same GPU.
 
Originally posted by: Scali
Originally posted by: SlowSpyder
The 4890 uses less power on the whole than the 4870. The 4890 is really a refined, improved 4870. While it's hardly revolutionary, it is a step forward from the previous design.

Depends on which review you look at, it seems.
Eg, this review:
http://www.tweaktown.com/artic...hics_card/index17.html
Here you see the 4890 using less power than the 4870, but both use more power than the GTX285, both in idle and load.

But then this review:
http://www.bit-tech.net/hardwa...0-1gb-atomic-review/13
Here the 4890 takes quite a bit more than the 4870... However, they magically both stay well below the GTX285 (except for the Sapphire 'Atomic' version).

I don't see how the 4890 could be a refined, improved 4870 though.
As far as I know, it's just a higher binning of the same GPU.

It's not. X-bit has the best explanation of the differences:

At first glance Radeon HD 4890 only differs from the predecessor, Radeon HD 4870, by the GPU and memory clock frequencies. However, if you take a closer look at the technical specifications above, you will see that RV790 has a few features distinguishing it well from the RV770 core. At least, it has 3 million more transistors and hence a 22 sq. mm larger die. So what have they done?

Nvidia is not the only one who knows that the best is the enemy of the good. The successful graphics architecture hasn't been changed since RV770. However, Advanced Micro Devices developers and engineers put a lot of effort into making RV790 work stably at 850MHz. They also did their best to lower the power consumption of the new GPU, at least in idle mode.

When they designed RV790, they revised the entire RV770 internal structure quite significantly, rebalancing internal chip timings and optimizing its internal power circuitry. The memory controller now supports burst read, which was earlier implemented in RV710 and RV730. But most importantly, they added a decoupling capacitor ring (Decap Ring) along the entire RV790 perimeter, which should improve signal quality by lowering parasitic noise. As the clock frequency increases, the influence of this noise on GPU and memory stability becomes more and more serious. All these measures called for more transistors within the new core. However, in fact, RV790 became only 0.3% more complex, which is a really small price to pay for the ability to work at 850MHz+ frequencies.
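
X-bit's "0.3% more complex" figure checks out against the transistor counts mentioned in this thread. A quick back-of-the-envelope sketch; the ~956M RV770 baseline is inferred from the 959M RV790 count cited later in the thread, not stated by X-bit:

# Sanity check of X-bit's "3 million more transistors" / "0.3% more complex" claim.
# Assumed baseline: RV790 ~= 959M transistors (the figure quoted later in this
# thread), so RV770 ~= 959M - 3M = 956M.
rv790_transistors = 959e6
extra_transistors = 3e6  # X-bit: "3 million more transistors"
rv770_transistors = rv790_transistors - extra_transistors

complexity_increase = extra_transistors / rv770_transistors
print(f"RV790 complexity increase: {complexity_increase:.2%}")  # ~0.31%, i.e. X-bit's "0.3%"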
 
I'll bet that the CPU division at AMD helped 'ATi' to re-vamp the 4890 GPU. What Xbit is describing reminds me of the Thunderbird Athlons. AMD is very good at that stuff.
 
Originally posted by: Scali
I don't see how the 4890 could be a refined, improved 4870 though.
As far as I know, it's just a higher binning of the same GPU.
It's not. Additional on-chip decoupling caps and beefier power section in the reference design.
That's what's netting you the vastly better overclocks.
 
Originally posted by: Jabbernyx
Originally posted by: Scali
I don't see how the 4890 could be a refined, improved 4870 though.
As far as I know, it's just a higher binning of the same GPU.
It's not. Additional on-chip decoupling caps and beefier power section in the reference design.
That's what's netting you the vastly better overclocks.

It added an additional 3M transistors along with some hand-tuned optimizations inside its internal datapath, which gives the HD 4890 the ability to overclock like hell and be competitive against the GTX 285, but with a realistic price tag.

Originally posted by: Scali
Depends on which review you look at, it seems.
Eg, this review:
http://www.tweaktown.com/artic...hics_card/index17.html
Here you see the 4890 using less power than the 4870, but both use more power than the GTX285, both in idle and load.

But then this review:
http://www.bit-tech.net/hardwa...0-1gb-atomic-review/13
Here the 4890 takes quite a bit more than the 4870... However, they magically both stay well below the GTX285 (except for the Sapphire 'Atomic' version).

I don't see how the 4890 could be a refined, improved 4870 though.
As far as I know, it's just a higher binning of the same GPU.

In the end, the HD 4890 consumes less power at idle compared to the HD 4870 and slightly more power at full load, which is good considering that the HD 4890 can run at much higher core clock speeds.

Originally posted by: Scali
It's not a sandwich card anymore. Besides, I don't think the GTX295 ever ran hotter than a 4870X2.

It's not even an nVidia reference design at all; it was a custom card made by an OEM like Asus, the same type of SKU Sapphire made with the HD 4850 X2: non-reference cards which offer more options to the buyer.

The GTX280 has been on the market for quite a while though. It's about time that AMD closes the gap. nVidia is overdue for a refresh.
The 4890 is just AMD pushing its technology as hard as it can, making it an incredibly powerhungry and hot card. If you want to talk about elegance, 4890 isn't it.
I'm more interested in whether or not nVidia can make yet another leap forward in performance, like they did with the 8800 series a few years back. Or perhaps AMD can repeat the success of the Radeon 9700.

Long overdue. The GT200 is just a rehashed and beefed-up 9800GTX, which is itself a rehashed 8800GTX, and let's not talk about the GTS 250, a terrible push of old technology by nVidia with the tired G92. At least the HD 4890 brought something new to the table: better thermal management (the RV790 runs cooler than the RV770, so I don't see your point about a powerhungry and hot card), lower power consumption at idle, and incredible overclockability. All nVidia DX10 cards currently share the same features and brought nothing new besides performance improvements. (G80, G92, G92b, GT200 and GT200b) share the same old features.

The HD 2900XT was a flop, but it was ATi's first DX10 introduction, so let's count it out, the same as the 8800GTX. The HD 3800 series brought much better power and thermal management, better Crossfire scaling, DX10.1, and better video features via UVD, like VC-1 acceleration. Then the HD 4800 series brought much better anti-aliasing performance and an updated UVD for PIP in Blu-rays, making it pretty much a beefed-up HD 3800 series. But nVidia has been beefing up the 8800 GPU for ages with no new features; nVidia is long overdue for a new product.
 
Originally posted by: Scali
Originally posted by: bryanW1995
ati has built their cards from the ground up with dual-gpu in mind. if you want to delude yourself into thinking that nvidia likes expensive sandwiches then have fun.

Who is deluding who here? Even for AMD a dual-GPU solution isn't exactly cheap, and AMD needs dual-GPU to go up against single-GPU nVidia cards.
And how are nVidia's GPUs not designed with multi-GPU in mind? Even their single-GPU cards have supported SLI for years, longer than AMD has had CrossFire.

um, ok, so ati's gpu is much smaller (so cheaper to manufacture) and for the past few gens has only required one pcb. nvidia's gpu, otoh, has been significantly larger and has required an actual "sandwich" board. So nvidia pays for the extra board plus the added cost of the much larger gpu (a problem exacerbated until very recently by a half-step process disadvantage suffered by nvidia). Let's face it, ati's "small ball" play lends itself much better to multi gpu than nvidia's strategy. If nvidia and ati keep to the same strategy for the dx 11 cards then the same issues will come up again. I think that even if they do, however, there will be a twist because nvidia will not be taken by surprise this year like they were last time and will probably devote more time/effort to maximizing gaming performance (what a novel idea...)

Nvidia has had so many other advantages for the past several years that it's probably no exaggeration to say that 4xxx saved ati from getting the ax. Even hector ruiz would have been hard-pressed to be so stupid that...uh, never mind... There is nothing too stupid for hector to attempt.
 
Originally posted by: SickBeast
I'll bet that the CPU division at AMD helped 'ATi' to re-vamp the 4890 GPU. What Xbit is describing reminds me of the Thunderbird Athlons. AMD is very good at that stuff.

too bad they haven't spent more time on k9/k10/etc...
 
Originally posted by: Scali
Originally posted by: akugami
So what is new about AMD's dual GPU solutions? Instead of being the odd SKU, AMD has made dual GPU cards the centerpiece of their GPU strategy. Whereas previously the odd dual GPU card was made to fill a very high end niche (nVidia GTX 295), AMD is now using it to create a more "top to bottom" release of GPUs.

Yea, that's a bit funny really.
People are on about how ATi uses smaller, cheaper GPUs...
But as soon as you start using a dual-GPU card, the whole thing blows out of proportion. Your PCB becomes much more expensive, you're 'wasting' half your memory, so you need to put twice as much on, and the power consumption goes way over the top.
And then there's the problem that performance relies a lot on how well the drivers and the games get on. In some games you just get the performance of one GPU.

So even if you're going to argue that small, cheap GPUs are a good strategy... The X2 strategy doesn't fit into those advantages AT ALL. But I never hear anyone mention that. It's as if they think 'single GPU is good, so two GPUs is twice as good'.

I'm well aware of the fact that the high end is out of whack. High end cards are all about performance first and are usually a bad deal in terms of bang for the buck, power draw, and heat output. Power consumption is still in line with previous high end video cards, whether it be from nVidia or ATI. Both companies have developed their cards within certain parameters, and time and again we see that for the most part, new cards draw power on roughly the same scale as their high end predecessors.

However, all of this is nitpicking. The main concern for any high end part is performance bar none. Second on that list is cost. Lower on the totem pole is power draw and heat output. In that respect the GTX 295 trumps the Radeon 4870 X2. The good thing for AMD/ATI is that the GTX 295, while the winner, is not a winner by such a high margin that they are not at least a worthy alternative. Especially considering it is a pretty good value (relatively speaking).

Bottom line, mid range and low range is about value. AMD's current strategy provides very good bang for the buck in this department. I don't understand why you're trying to apply this to the high end when the high end has always been about performance regardless of any other factors (within reason).

Originally posted by: Scali
Originally posted by: akugami
Let's face it, while nVidia has held the top spot for quite a while now, their lower mid-range and low-range GPUs are still running on previous-gen technologies, and only the upper mid-range and high end parts use the newer GPUs. Not that there is anything wrong with a 8800/9800. But I'd like to see some of the newfangled tech trickle down to the $100-$150 price range instead of repackaging the same old GPU they've been using for the last two years as a new part.

There is very little difference between G92 and GT200, and both are made on 55 nm now, so for all intents and purposes, a G92 *is* a scaled-down version of the 'latest tech'.

You're trying to tell me that the G92 at 754m transistors and the GT200 at 1.4b transistors have very little difference between them? If you want to argue this about the difference between the G80 core and the G92 core, I can see it, since they have 690m and 754m transistors respectively and the G92 was really a tweaked G80. However, when you're nearly doubling the transistor count on a new GPU core, I don't think you can really say there is very little difference between them. And if you're arguing that the G92, an older GPU design, is a scaled-down GT200, then we might as well say the AMD Athlons are scaled-down Phenoms.

Originally posted by: Scali
Originally posted by: akugami
Though the bottom line is performance, more important would be how much bang can I get for my dollars. At this point, AMD/ATI's strategy seems to make a little more sense. Especially with the economy the way it is. AMD can produce video cards based on their new smaller GPU designs and get them out to the public faster than with nVidia's monolithic GPU approach.

That depends though. Just because AMD's current 4000-series is successful doesn't mean that the smaller GPU is ALWAYS the more successful one.
I'd like to point out that AMD's current 'smaller cheaper' strategy was started with the 3000-series.
They were an improvement over the 2000-series in the sense that you got the same performance with a smaller chip, so AMD could actually make some profit... and the power consumption was a bit more realistic than before...
But the REAL star of that generation was the nVidia G92 (8800GT/GTS 512). THAT was the GPU that completely redefined price/performance. It was faster than anything AMD offered, and it was also REALLY cheap. They forced AMD's prices down.
As a result, the 3000-series still weren't a big success, even though they were decent cards in their own right. The G92-based cards were just too good to ignore.

We'll just have to see where the balance lies in this generation.

The Radeon 3xx0 GPUs were really process-shrunk Radeon 2xx0 GPUs and were mostly a cost-cutting measure. The 4xx0 series GPUs were where they really put the "small, sleek, but still a good performer" idea into play. Really, the Radeon 2xx0 and 3xx0 series were creamed by nVidia's G80 and G92. And further in my post, I already stated that past performance does not equate to future performance.

The G80 and G92 were price-gouging GPUs. In their respective performance tiers, the G80 and G92 GPUs were overpriced compared to previous generations. I owned an 8800 GTS and that sucker cost me $400 at the time and was only considered upper mid-end. That was horrible in terms of pricing in each tier. This is as opposed to when ATI was competitive (especially the Radeon 9x00s), when we had great value. I don't blame nVidia for overpricing their GPUs. From a business perspective I applaud them. But don't try to feed me that line about what great values the G80 and G92 based video cards were.

The G80 and G92 were the best GPUs at the time. After all, I bought an 8800 GTS. I have always said that I will buy what makes sense, and the G80 and G92 were simply the obvious choices from a performance standpoint. But great values they were not. nVidia probably raised the price for the mid and upper range of cards by an average of $100. That's not good value. One of the reasons I swore never to pay more than $350-ish for a video card was how high the costs were for an nVidia card (the only real gamer's choice) at that time.

Originally posted by: Scali
I agree. So far AMD has only talked about physics and GPGPU, but actual tools and software have failed to materialize. We'll have to see if AMD's next-gen gets improved GPGPU capabilities.
As I said before, I think the current generation of nVidia GPUs has an advantage over AMD's GPUs in GPGPU tasks. Aside from the fact that all nVidia's GPUs since the 8-series can run OpenCL, whereas only the 4000-series from AMD supports it, I also think that nVidia's architecture is considerably more efficient for OpenCL-style code.
So I wonder if AMD will close that gap a bit with the next generation... Of course nVidia isn't resting on its laurels either, and probably has some new tricks up its sleeve for Cuda (current Cuda is already ahead of OpenCL/DX11 Compute in terms of features anyway).

At least we agree on something. LOL.
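
The OpenCL support point argued above is easy to check on your own machine. A minimal enumeration sketch, assuming the third-party pyopencl package and a vendor OpenCL driver are installed:

# Enumerate OpenCL platforms and devices. A GPU only appears here if the
# vendor's driver actually exposes OpenCL, which is the support question above.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name}")
    for device in platform.get_devices():
        print(f"  Device: {device.name} ({cl.device_type.to_string(device.type)})")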
 
Originally posted by: evolucion8
In the end, the HD 4890 consumes less power at idle compared to the HD 4870 and slightly more power at full load, which is good considering that the HD 4890 can run at much higher core clock speeds.

Regardless, neither the 4870 nor the 4890 has very good idle consumption (extremely poor is more like it), and at load they seem to be very close to the GTX285, while the GTX285 is generally faster, in some games (e.g. Crysis) by quite a margin.
So I'm not impressed with either card's performance-per-watt, especially since it's a smaller GPU than the GTX285.
And while people may praise the overclocking potential of the 4890, you see the power consumption going WAY overboard when you overclock it, such as with that Sapphire Atomic. Which is a sign that the GPU is really being pushed to the extreme.
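
Performance-per-watt here just means average framerate divided by board power under load. A trivial sketch; the figures below are hypothetical placeholders, since the reviews linked above disagree on the actual measurements:

# Performance-per-watt = average FPS / measured board power under load.
# The numbers below are hypothetical placeholders, not measurements.
def perf_per_watt(avg_fps: float, load_watts: float) -> float:
    return avg_fps / load_watts

cards = {"HD 4890": (60.0, 190.0), "GTX 285": (65.0, 200.0)}  # hypothetical (fps, watts)
for name, (fps, watts) in cards.items():
    print(f"{name}: {perf_per_watt(fps, watts):.3f} fps/W")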

Originally posted by: evolucion8
It's not even an nVidia reference design at all; it was a custom card made by an OEM like Asus, the same type of SKU Sapphire made with the HD 4850 X2: non-reference cards which offer more options to the buyer.

I think you have it confused with Asus' double GTX285?
The single-PCB GTX295 is an nVidia reference design:
http://en.expreview.com/2009/0...-gtx295-unearthed.html

Originally posted by: evolucion8
Long overdue. The GT200 is just a rehashed and beefed-up 9800GTX, which is itself a rehashed 8800GTX

It's just a different strategy. AMD puts two GPUs on a single card; nVidia instead redesigns the GPU itself to get about the same increase in performance.


Originally posted by: evolucion8
All nVidia DX10 cards currently share the same features and brought nothing new besides performance improvements. (G80, G92, G92b, GT200 and GT200b) share the same old features.

That's not entirely true.
The differences are mostly with Cuda.
Eg, G80 doesn't support atomics or double precision; later models do.

Originally posted by: evolucion8
The HD 2900XT was a flop, but it was ATi's first DX10 introduction, so let's count it out

I don't see why. nVidia's first DX10 series was a huge success. It's no excuse that it was their first DX10 series. ATi just screwed up.

Originally posted by: evolucion8
But nVidia has been beefing up the 8800 GPU for ages with no new features; nVidia is long overdue for a new product.

The painful part is that it took AMD up to the 4000-series to finally catch up with those 8800 GPUs. And even then they still lag behind with things like GPGPU, physics and video encoding. nVidia may be overdue for a new product, but AMD hasn't exactly been pushing them to come up with something new either.
 
Originally posted by: bryanW1995
um, ok, so ati's gpu is much smaller (so cheaper to manufacture) and for the past few gens has only required one pcb. nvidia's gpu, otoh, has been significantly larger and has required an actual "sandwich" board.

I don't think you get my point.
AMD required a dual GPU card to compete with nVidia's single GPU cards.
Now the 4870X2 is actually a decent performer, but the 3870X2 was a joke.
It wasn't quite able to keep up with the 8800GTS512 and 8800GTX, while requiring two GPUs and twice the memory. It also consumed much more power than either nVidia card, and was more noisy. Surely the 3870X2 wasn't cheaper to build than the simple and elegant 8800GTS512.
 
Originally posted by: akugami
I'm well aware of the fact that the high end is out of whack. High end cards are all about performance first and are usually a bad deal in the bang for the buck department, power draw, and heat output. Power consumption is still in line with previous high end video cards, whether it be from nVidia or ATI. Both companies have developed their cards within certain parameters and time and again we see that for the most part, new cards draw power in roughly the same scale as their high end predecessors.

The danger is with cards like the 3870X2. It wasn't faster than the 8800GTX, and then nVidia came up with the 8800GT and 8800GTS512. Cards with about the same performance as the 8800GTX, but with much lower power consumption and WAY lower prices.
Suddenly the 3870X2 wasn't high-end anymore. It was pushed into the mainstream because the 8800s dictated the prices into the sub-$200 range. And there AMD was caught out with an underperforming mega-expensive powerhungry solution.
The small GPU strategy doesn't always work.

Originally posted by: akugami
You're trying to tell me that the G92 at 754m transistors and the GT200 at 1.4b transistors has very little difference?

Well obviously, if you take the full specs of both chips, you'll see that the GT200 has much more of everything, 240 stream processors instead of 128, 512 bit memory interface instead of 256 bit, etc.
So it's nearly two G92s on a single chip. Which makes perfect sense looking at the transistor count.
Aside from that, the G92 and GT200 are virtually identical feature-wise.
So in essence the GT200 is a 'blown up' version of the G92, which is why there is no actual 'scaled down' GT200 on the market; the G92 is already that chip, the same technology in a smaller package. (I don't see why you start on transistor count in the first place, when you asked for 'trickled down' technology. Did you forget your own line of argument?)
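
The "nearly two G92s" arithmetic is easy to verify. A quick sketch using only the figures cited in this exchange:

# Ratios of GT200 to G92, using the numbers quoted above.
g92 = {"transistors": 754e6, "stream_processors": 128, "bus_width_bits": 256}
gt200 = {"transistors": 1.4e9, "stream_processors": 240, "bus_width_bits": 512}

for key in g92:
    print(f"{key}: {gt200[key] / g92[key]:.2f}x")
# transistors: ~1.86x, stream_processors: ~1.88x, bus_width_bits: 2.00x;
# roughly a doubled G92, with the feature set essentially unchanged.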

Originally posted by: akugami
The Radeon 3xx0 GPUs were really process-shrunk Radeon 2xx0 GPUs and were mostly a cost-cutting measure. The 4xx0 series GPUs were where they really put the "small, sleek, but still a good performer" idea into play.

Not at all. The 2000-series was a big, bloated GPU with 512 bit memory interface and all.
The 3000-series was where they trimmed the fat, went down to 256 bit, and improved the GPU in general to be more efficient (better AA, added full DX10.1 support etc).
The 3870 is a VERY small and sleek GPU compared to the monster that is the 2900, and that can't be done with just a simple die-shrink.

This cut the production costs for AMD, the only problem they had left was the lack of performance. With the 4000-series they revamped the design again, added features for OpenCL/DX11 Compute at last, and finally got the performance where it should be.

Originally posted by: akugami
The G80 and G92 were price-gouging GPUs. In their respective performance tiers, the G80 and G92 GPUs were overpriced compared to previous generations. I owned an 8800 GTS and that sucker cost me $400 at the time and was only considered upper mid-end. That was horrible in terms of pricing in each tier. This is as opposed to when ATI was competitive (especially the Radeon 9x00s), when we had great value. I don't blame nVidia for overpricing their GPUs. From a business perspective I applaud them. But don't try to feed me that line about what great values the G80 and G92 based video cards were.

Obviously new products are always expensive, but prices drop.
The thing with the G92 is that it dropped prices at an amazing rate. When the 8800GT came along, you could get 8800GTX-like performance for less than half the price.
G92 was all about value.
I'm amazed, actually pissed off, that people don't remember this.
You get all this talk about "AMD revolutionized value with the 4000-series", when the 8800GT did at least as much a generation earlier.
Let me refresh your collective memories:
http://www.anandtech.com/video/showdoc.aspx?i=3140&p=14
The title alone says enough: "The only card that matters"
"It's really not often that we have the pleasure to review a product so impressively positioned. The 8800 GT is a terrific part, and it is hitting the street at a terrific price (provided NVIDIA's history of properly projecting street prices continues). The performance advantage and price utterly destroyed our perception of the GPU landscape. We liked the value of the 8800 GTS 320, and we were impressed when NVIDIA decided to go that route, providing such a high performance card for so little money. Upping the ante even more this time around really caught us off guard.

This launch really has the potential to introduce a card that could leave the same lasting impression on the computer industry that the Ti4200 left all those years ago. This kind of inflection point doesn't come along every year, or even every generation."

That level of performance (touching the high-end 8800GTX) at a price-point of $200-$250 was unheard of.
So yes, Derek Wilson is telling you that G92 was great value.
 
Originally posted by: Scali
The danger is with cards like the 3870X2. It wasn't faster than the 8800GTX, and then nVidia came up with the 8800GT and 8800GTS512. Cards with about the same performance as the 8800GTX, but with much lower power consumption and WAY lower prices.
Suddenly the 3870X2 wasn't high-end anymore. It was pushed into the mainstream because the 8800s dictated the prices into the sub-$200 range. And there AMD was caught out with an underperforming mega-expensive powerhungry solution.
The small GPU strategy doesn't always work.

What? The 3870X2 was clearly faster than the 8800GTX at release, where it was faster in 8/9 tests.

 
Originally posted by: Sylvanas
The danger is with cards like the 3870X2. It wasn't faster than the 8800GTX, and then nVidia came up with the 8800GT and 8800GTS512. Cards with about the same performance as the 8800GTX, but with much lower power consumption and WAY lower prices.
Suddenly the 3870X2 wasn't high-end anymore. It was pushed into the mainstream because the 8800s dictated the prices into the sub-$200 range. And there AMD was caught out with an underperforming mega-expensive powerhungry solution.
The small GPU strategy doesn't always work.

What? The 3870X2 was clearly faster than the 8800GTX at release, where it was faster in 8/9 tests.

Funny, these results are way different:
http://www.extremetech.com/art.../0,2845,2252575,00.asp

Anandtech's approach in those benchmarks seems a bit strange. They seem to use very high resolutions, but not very high graphics settings (no AA, doesn't seem to be DX10 content either).
Seems to deliberately favour the 3870X2, which doesn't perform that well with shader-heavy content or with AA. Funny, because that's exactly what you'd want to buy a high-end card for. Extremetech seems to paint a more balanced, more realistic picture.
 
Originally posted by: Scali
Originally posted by: Sylvanas
The danger is with cards like the 3870X2. It wasn't faster than the 8800GTX, and then nVidia came up with the 8800GT and 8800GTS512. Cards with about the same performance as the 8800GTX, but with much lower power consumption and WAY lower prices.
Suddenly the 3870X2 wasn't high-end anymore. It was pushed into the mainstream because the 8800s dictated the prices into the sub-$200 range. And there AMD was caught out with an underperforming mega-expensive powerhungry solution.
The small GPU strategy doesn't always work.

What? The 3870X2 was clearly faster than the 8800GTX at release, where it was faster in 8/9 tests.

Funny, these results are way different:
http://www.extremetech.com/art.../0,2845,2252575,00.asp

Anandtech's approach in those benchmarks seems a bit strange. They seem to use very high resolutions, but not very high graphics settings (no AA, doesn't seem to be DX10 content either).
Seems to deliberately favour the 3870X2, which doesn't perform that well with shader-heavy content or with AA. Funny, because that's exactly what you'd want to buy a high-end card for. Extremetech seems to paint a more balanced, more realistic picture.

I can find plenty of reviews with similar results to AT. All of AT's tests state they are at 'Highest quality settings', and high resolution is used to demonstrate the sheer pixel-pushing power of the cards reviewed. Some AA is used in 3 of the tests, where framerates are very playable; using AA on a game where you top out at an unplayable frame rate is pretty useless for consumers.
 
Can't wait for those cards to show up. Though 1200 ALUs... Hope that's another smoke screen (like the 4000-series that was supposed to have 480 ALUs - so the same +50%).

As for the HD3870x2, it was faster than the 8800GTX most of the time... Only when CrossFire didn't scale did it lose. However, I remember that there were a LOT of scaling issues at the start. Something the HD4870x2 doesn't suffer from. At least in my eyes, the HD3870x2, even though usually faster, was not a better buy.

And G92b and value? 😕 You mean the 9800GTX launching at $299 when the 8800GTS 512MB was going for a bit above $200 at that time, and it was the same card? The 8800GT was an instant hit though, offering extremely good performance for ~$250 when it launched. It still is a fine card. Not to mention it's very popular (if not the most popular card currently owned by PC gamers).
 
Originally posted by: Scali

The painful part is that it took AMD up to the 4000-series to finally catch up with those 8800 GPUs. And even then they still lag behind with things like GPGPU, physics and video encoding. nVidia may be overdue for a new product, but AMD hasn't exactly been pushing them to come up with something new either.

Wrong, GPGPU is a matter of optimizations, and I sent you a link which proved that MilkyWay@Home ran faster on ATi hardware; that's why they chose it. PhysX is a gimmick, and has been for years; yet less than 5 games currently use GPU PhysX, while DX10.1 has more games and has been on the market for much less time. Video encoding has always been faster on ATi hardware, and now with the introduction of Stream Video Encoding it's even faster. Thanks to the HD 3870X2, nVidia was forced to make the 8800 Ultra, which was expensive and barely faster than the 8800GTX.

Admit it, in the end, the smaller multi-GPU approach is much better than gluing lots of stream processors into the same die, making it bigger, more expensive, hotter and more power hungry. I don't understand your repetitive 'ATi's 2 GPUs competing with nVidia's 1 GPU' argument; 2 ATi GPUs kill 1 nVidia GPU in performance, they're not even in the same league. Is the GTX 285 in the same league as the HD 4870X2? NO! Only the sandwich GX2 is, which will eventually meet the same fate as the 9800GX2 and the 7950GX2. After all, ATi's video cards age better and aren't left behind. So now with the newer GPUs, the GPU market will be more entertaining than before.
 
Originally posted by: Sylvanas
I can find plenty of reviews with similar results to AT. All of AT's tests state they are at 'Highest quality settings', and high resolution is used to demonstrate the sheer pixel-pushing power of the cards reviewed. Some AA is used in 3 of the tests, where framerates are very playable; using AA on a game where you top out at an unplayable frame rate is pretty useless for consumers.

I don't see how those reviews are similar to Anandtech's. Many of them do include multiple AA settings, and also DX10 games.
In general the results are also closer to the 8800GTX than with Anandtech, the 8800GTX actually winning a few benchmarks.
Which is my point: the 3870X2 was competing with the 8800GTX/8800GTS, and wasn't in a league of its own. Which means the 3870X2 was an expensive and powerhungry solution for that price/performance class.
 
Originally posted by: Qbah
And G92b and value?

I never said G92b, I said G92.

Originally posted by: Qbah
The 8800GT was an instant hit though, offering extremely good performance for ~$250 when it launched. It still is a fine card. Not to mention it's very popular (if not the most popular card currently owned by PC gamers).

My point exactly, the original G92 parts were a revolution in price/performance, and were the final nail in the coffin of the 3000-series.
 
Originally posted by: evolucion8
Wrong, GPGPU is a matter of optimizations, and I sent you a link which proved that MilkyWay@Home ran faster on ATi hardware; that's why they chose it.

Good luck trying to argue that point home.

Originally posted by: evolucion8
PhysX is a gimmick, and has been for years; yet less than 5 games currently use GPU PhysX, while DX10.1 has more games and has been on the market for much less time. Video encoding has always been faster on ATi hardware, and now with the introduction of Stream Video Encoding it's even faster. Thanks to the HD 3870X2, nVidia was forced to make the 8800 Ultra, which was expensive and barely faster than the 8800GTX.

Uhhhhh... the 8800Ultra was on the market for about half a year before the 3870X2 was launched, so how did the 3870X2 force nVidia?

Originally posted by: evolucion8
Admit it, in the end, the smaller multi-GPU approach is much better than gluing lots of stream processors into the same die, making it bigger, more expensive, hotter and more power hungry.

I will admit nothing.
The most simple arguments are these:
1) A single GPU doesn't require you to 'clone' your memory, so a 1 GB card really is a 1 GB card, and doesn't behave like 512MB.
2) Two small GPUs still use more power than one large GPU, at least in the case of AMD vs nVidia. In fact, even two large GPUs (GTX295) use less power than two small GPUs (4870X2).
3) A single GPU doesn't have problems with scaling due to driver/game incompatibilities.

I think those are serious drawbacks of the multi-GPU approach, and I will always prefer a single-GPU if that gets me the same performance.
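
Point 1 above is easy to make concrete. A small illustrative sketch of why a dual-GPU card's mirrored memory adds no capacity under AFR-style rendering (the function name is just for illustration):

# Under AFR (SLI/CrossFire), each GPU renders complete frames, so textures and
# buffers are duplicated in both GPUs' memory pools: mirroring adds bandwidth,
# not capacity.
def effective_vram_mb(vram_per_gpu_mb: int, num_gpus: int, mirrored: bool = True) -> int:
    """Usable video memory as seen by the application."""
    return vram_per_gpu_mb if mirrored else vram_per_gpu_mb * num_gpus

print(effective_vram_mb(512, 2))                   # 512: a "1 GB" dual-GPU X2 card
print(effective_vram_mb(1024, 1, mirrored=False))  # 1024: a true 1 GB single-GPU card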

Originally posted by: evolucion8
I don't understand your repetitive 'ATi's 2 GPUs competing with nVidia's 1 GPU' argument; 2 ATi GPUs kill 1 nVidia GPU in performance, they're not even in the same league

That's because you didn't bother to read my post.
I was simply pointing out that the 3870X2 wasn't such a fast card, wasn't in a league of its own. In other words: a dual-GPU card is no guarantee that it will outperform single GPUs.
They finally got it right with the 4870X2, but that doesn't automatically mean that AMD will get it right again for this generation.
 
Originally posted by: Scali
Originally posted by: Sylvanas
The danger is with cards like the 3870X2. It wasn't faster than the 8800GTX, and then nVidia came up with the 8800GT and 8800GTS512. Cards with about the same performance as the 8800GTX, but with much lower power consumption and WAY lower prices.
Suddenly the 3870X2 wasn't high-end anymore. It was pushed into the mainstream because the 8800s dictated the prices into the sub-$200 range. And there AMD was caught out with an underperforming mega-expensive powerhungry solution.
The small GPU strategy doesn't always work.

What? The 3870X2 was clearly faster than the 8800GTX at release, where it was faster in 8/9 tests.

Funny, these results are way different:
http://www.extremetech.com/art.../0,2845,2252575,00.asp

Anandtech's approach in those benchmarks seems a bit strange. They seem to use very high resolutions, but not very high graphics settings (no AA, doesn't seem to be DX10 content either).
Seems to deliberately favour the 3870X2, which doesn't perform that well with shader-heavy content or with AA. Funny, because that's exactly what you'd want to buy a high-end card for. Extremetech seems to paint a more balanced, more realistic picture.

You don't get invited for an exclusive backstory on RV770 by beating up the chip designer in your reviews. A little back-scratching goes a long way in the world of advertising and marketing.

That said, I much enjoyed reading about the RV770 backstory, so if it took a little "let's be somewhat selective in our evaluation procedures" wink-wink nod-nod to grease the skids for an eventual article like that then I have no issue with how this industry operates. It all comes with the territory I suppose.
 
Originally posted by: evolucion8

Admit it, in the end, the smaller multi-GPU approach is much better than gluing lots of stream processors into the same die, making it bigger, more expensive, hotter and more power hungry.
I would disagree with this. Multi-GPU will never be as robust as a single GPU because it'll always have problems inherent to its design, problems a single GPU is immune to.

Not to mention that adding another GPU does exactly the same thing that 'gluing' stream processors does; more so in high-end configurations.
 
Originally posted by: Scali
They finally got it right with the 4870X2, but that doesn't automatically mean that AMD will get it right again for this generation.

They did!! Welcome to 2009!! The HD 4870X2 is the fastest single-PCB video card on the planet; no single nVidia GPU can beat it, and the only card that barely outperforms it is the sandwich GX2, which uses 2 PCBs!! You may say that ATi needs 2 GPUs to compete with 1 GPU from nVidia, but I say that nVidia needs a 1.4B-transistor chip which is twice as big to be competitive with the 959M ATi chip. How can that be?
 
Originally posted by: BFG10K
Originally posted by: evolucion8

Admit it, in the end, the smaller multi-GPU approach is much better than gluing lots of stream processors into the same die, making it bigger, more expensive, hotter and more power hungry.
I would disagree with this. Multi-GPU will never be as robust as a single GPU because it'll always have problems inherent to its design, problems a single GPU is immune to.

Not to mention that adding another GPU does exactly the same thing that 'gluing' stream processors does; more so in high-end configurations.

Yeah, but the same thing could be said of processors, which reached a point where they couldn't get higher performance no matter how many optimizations were made to increase the IPC. The same issue will eventually happen to GPUs, which will reach a point where they become so big and power hungry that they will be too expensive to manufacture at reasonable yields. In multi-GPU/CPU environments there will always be issues, but it's just a matter of time to iron them out; the multi-CPU environment is moving fast, and the same should happen to the GPU environment.
 
Originally posted by: evolucion8
Yeah, but the same thing could be said of processors, which reached a point where they couldn't get higher performance no matter how many optimizations were made to increase the IPC.

The difference is that CPUs have little parallelism by themselves, whereas GPUs are embarrassingly parallel.
With CPUs you actually add something by having two or more cores running side-by-side. This enables you to run multiple threads in parallel.
A single GPU already has hundreds of parallel processing units. It's just more efficient to add more processing power to a single GPU, since adding a second GPU has lots of overhead and limitations with the current SLI/CrossFire schemes.

Aside from that, I don't think we've quite reached a point where GPUs can't get higher performance from a single GPU. The GT200 may not look all that flashy anymore, but it's getting rather dated. We'll just have to wait and see what the next generation brings (which the thread is about). I'm quite sure that both AMD and nVidia can still improve greatly on the efficiency of their single GPUs. GPUs are a moving target after all, unlike CPUs which have been running the same basic x86 code for ages. GPUs today are capable of completely different things than GPUs of just 4-5 years ago... whereas a 5 year old CPU is pretty much the same thing, only a bit slower.

I think we'll need something considerably better than current SLI/CrossFire AFR approaches to multi-GPU before multi-GPU becomes anywhere near as efficient as a single GPU. Besides, even with CPUs we're still struggling with the efficiency issues. A quadcore is rarely twice as fast as a dualcore. We never had that problem when we had single-core processors which scaled in IPC and clockspeed. They had much better performance gains overall, and didn't require software to be completely rewritten to take advantage of them. Hence in many cases people still pick a high-clockspeed dualcore over a lower clockspeed quadcore, because it suits their use better.
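
The "quadcore is rarely twice as fast as a dualcore" observation is Amdahl's law in action. A sketch with a hypothetical workload that is 80% parallelizable:

# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the work and n the number of cores (or GPUs under ideal AFR).
def amdahl_speedup(parallel_fraction: float, n_units: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_units)

p = 0.8  # hypothetical: 80% of the workload parallelizes
for n in (1, 2, 4):
    print(f"{n} units: {amdahl_speedup(p, n):.2f}x")
# 2 units: 1.67x; 4 units: 2.50x. The quadcore is only 1.5x the dualcore here,
# not 2x, which is the scaling-efficiency point made above.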
 
Originally posted by: evolucion8
Originally posted by: Scali
They finally got it right with the 4870X2, but that doesn't automatically mean that AMD will get it right again for this generation.

They did!! Welcome to 2009!! The HD 4870X2 is the fastest single-PCB card on the planet; no single nVidia GPU can beat it, and the only card that barely outperforms it is the sandwich GX2, which uses 2 PCBs!! You may say that ATi needs 2 GPUs to compete with 1 GPU from nVidia, but I say that nVidia needs a 1.4B-transistor chip which is twice as big to be competitive with the 959M ATi chip. How can that be?


Fastest "single pcb" :roll:


And Phenom was the fastest quad for a while as well by your standards.
 