Rumor Section: About the new GPUs

akugami

Diamond Member
Bit-tech.net news blurb

COMPUTEX 2009: Taiwanese sources close to both AMD and Nvidia have confirmed that AMD will be first to market with DirectX 11 capable graphics cards, and they're currently expected to arrive in October of this year, right in line with Windows 7's expected release.

The sources said that AMD is "essentially ready" to release the new family of GPUs, and has been for some time, but the company is waiting for the problems with TSMC's 40nm process to be ironed out.

The new family of GPUs, with the flagship rumoured to be called RV870 (although not confirmed by our sources), will follow the same strategy that AMD employed with the Radeon HD 3000 and 4000 series. This means we can expect AMD to double up and use a pair of its fastest GPUs to create a dual-GPU flagship product of a similar ilk to the Radeon HD 4870 X2, which dominated the high-end for many months.

Nvidia, on the other hand, is expected to release another big GPU, but it is unlikely to be a conservative effort like GT200; we're told the focus will very much be on maximising performance and efficiency when switching between graphics and general computing tasks (i.e. using the Compute Shader). It's unclear whether it'll be enough of a brute to match the performance of two smaller Radeon GPUs on a single board though.

With that said, our sources said that GT300 had taped out, but Nvidia is being quite cagey about a release timeframe. It has been manufactured on TSMC's 40nm node, which AMD has been having a lot of trouble with, as RV740 chips are in "very short supply." If the problems with the process aren't ironed out, it could affect both companies, which wouldn't be good for us consumers.

So it seems whoever Bit-tech's sources are, they have given a general timeline for the release of both the new AMD and nVidia GPUs, and it looks like AMD will be out first. Or at least first, provided TSMC doesn't have further problems with its 40nm process.

Also of note is that AMD is "essentially ready" to release the new GPUs and that nVidia's GT300 has taped out. Assuming TSMC doesn't have problems, it seems highly probable that both AMD and nVidia will be releasing new GPUs to the market this year. AMD also seems slightly ahead on its release schedule, but again, TSMC could impact this schedule and allow nVidia to catch up.

While I do believe the emphasis should still be on performance for games and other 3D apps, I also believe that the GPU market is heading towards GPGPU. This is an area where nVidia seems to be comfortably ahead of AMD's efforts at this point. AMD did recently put out a press release touting its ATI STREAM technology. It seems nVidia is making a heavy push towards GPGPU with the GT300, and that the GT200 was considered conservative as far as GPGPU goes.
 
I don't understand it when I read stuff like this...

we can expect AMD to double up and use a pair of its fastest GPUs to create a dual-GPU flagship product of a similar ilk to the Radeon HD 4870 X2, which dominated the high-end for many months.

Nvidia, on the other hand, is expected to release another big GPU

How many times does NVIDIA have to do it for them to see the pattern? Big chip (take the performance crown at launch), let ATI take the lead for a few months with their dual-GPU card, shrink, then put out an NVIDIA dual-GPU card (regain the performance crown)... It's not that $^$&*ing hard to figure out. The only time we'll really see a game changer is when someone releases a quad (or more) GPU card to compete against a single GPU. Both companies are pushing multi-GPU beyond dual, but the performance gains have yet to materialize. Beyond dual GPU, the returns are just not there for the investment.

This isn't an attack on either ATI or NV. It is more about how these sites always go on about how NVIDIA is focused on big, hot chips when in fact they are eyeing the dual-GPU route the entire time. Both of these companies have to stay at least one step ahead of their actual retail products (AT's interview with the ATI engineers on RV7xx proves this), and I don't understand why these guys that are supposedly in the know don't seem to acknowledge that.
 
Well, we can all agree that ATI's dual-GPU cards are much better designed. nVidia is still selling cards with dual PCBs + an internal SLI bridge. Where's the innovation? ATI went from dual cards to dual chips on a single PCB.
 
Originally posted by: LOUISSSSS
Well, we can all agree that ATI's dual-GPU cards are much better designed. nVidia is still selling cards with dual PCBs + an internal SLI bridge. Where's the innovation? ATI went from dual cards to dual chips on a single PCB.

And what does it matter when you plug it in and play a game? 😕

That is like the zoners who complained that a Q6600 was really just two E6600s super-glued together.


I like how the article says GT300 might not be a match for the X2 card. Do they forget that nV has had an X2 card for the last few generations as well? One that has held the performance crown?
 
Originally posted by: OCguy
Originally posted by: LOUISSSSS
Well, we can all agree that ATI's dual-GPU cards are much better designed. nVidia is still selling cards with dual PCBs + an internal SLI bridge. Where's the innovation? ATI went from dual cards to dual chips on a single PCB.

And what does it matter when you plug it in and play a game? 😕

That is like the zoners who complained that a Q6600 was really just two E6600s super-glued together.


I like how the article says GT300 might not be a match for the X2 card. Do they forget that nV has had an X2 card for the last few generations as well? One that has held the performance crown?

Yeah, NV has only had a dual-GPU card for the last three generations, yet somehow it's ATI's strategy. I just don't get it.
 
Originally posted by: OCguy
Originally posted by: LOUISSSSS
Well, we can all agree that ATI's dual-GPU cards are much better designed. nVidia is still selling cards with dual PCBs + an internal SLI bridge. Where's the innovation? ATI went from dual cards to dual chips on a single PCB.

And what does it matter when you plug it in and play a game? 😕

That is like the zoners who complained that a Q6600 was really just two E6600s super-glued together.


I like how the article says GT300 might not be a match for the X2 card. Do they forget that nV has had an X2 card for the last few generations as well? One that has held the performance crown?

I think what they are getting at is that Nvidia isn't likely to launch with a dual-GPU card, whereas AMD will probably have one at launch or very close to it. Nvidia's dual-GPU card tends to come later. If Nvidia's single larger GPU is faster than AMD's two smaller GPUs on a card, we might not even see a dual-GPU card from Nvidia, but that's just a guess on my part.
 
Originally posted by: nitromullet
I don't understand it when I read stuff like this...

we can expect AMD to double up and use a pair of its fastest GPUs to create a dual-GPU flagship product of a similar ilk to the Radeon HD 4870 X2, which dominated the high-end for many months.

Nvidia, on the other hand, is expected to release another big GPU

How many times does NVIDIA have to do it for them to see the pattern? Big chip (take the performance crown at launch), let ATI take the lead for a few months with their dual-GPU card, shrink, then put out an NVIDIA dual-GPU card (regain the performance crown)... It's not that $^$&*ing hard to figure out. The only time we'll really see a game changer is when someone releases a quad (or more) GPU card to compete against a single GPU. Both companies are pushing multi-GPU beyond dual, but the performance gains have yet to materialize. Beyond dual GPU, the returns are just not there for the investment.

This isn't an attack on either ATI or NV. It is more about how these sites always go on about how NVIDIA is focused on big, hot chips when in fact they are eyeing the dual-GPU route the entire time. Both of these companies have to stay at least one step ahead of their actual retail products (AT's interview with the ATI engineers on RV7xx proves this), and I don't understand why these guys that are supposedly in the know don't seem to acknowledge that.

Maybe ATI thinks getting better yields from smaller dies makes more sense? Isn't it easier to manufacture a smaller GPU than a larger one?
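
For what it's worth, the standard back-of-the-envelope argument says yes: if defects land randomly on a wafer, the fraction of good dies drops off exponentially with die area. Here's a toy calculation in C using the classic Poisson yield model; the defect density is a made-up illustrative number, not a real TSMC figure, and the die areas are just the commonly cited ballpark sizes for RV770 and GT200:

/* Toy Poisson yield model: yield = exp(-D * A).
 * D (defects per mm^2) is an illustrative guess, not a real fab number. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double defects_per_mm2 = 0.003;  /* assumed defect density */
    const double small_die_mm2   = 256.0;  /* roughly RV770-class area */
    const double big_die_mm2     = 576.0;  /* roughly GT200-class area */

    double y_small = exp(-defects_per_mm2 * small_die_mm2);
    double y_big   = exp(-defects_per_mm2 * big_die_mm2);

    printf("small die yield: %.0f%%\n", 100.0 * y_small); /* ~46%% */
    printf("big die yield:   %.0f%%\n", 100.0 * y_big);   /* ~18%% */
    return 0;
}

Twice the area costs you a lot more than twice the good dies per wafer, which is the whole "small die" argument in a nutshell.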
 
Would Intel's manufacturing approach of using hafnium in its current CPUs be of any help with TSMC's 40nm problems?
Or would Intel not be inclined to license whatever patents are involved to allow for this?
 
The real game changer for dual cards will be when the inter-chip communication is direct chip-to-chip, with the PCIe switch removed, and each chip is able to access ALL onboard memory.

Say 2, 3, or 4 RV870s on a 512-bit GDDR5 ring bus should do the trick.
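
To make the topology concrete, here's a toy C model (purely illustrative, not any actual RV870 design) of chips on a bidirectional ring sharing one flat address space; the cost of a remote access is just the hop count around the ring:

/* Toy model of N GPUs on a bidirectional ring bus. Each chip owns a
 * slice of one flat address space; a request travels the shorter way
 * around the ring. Slice size and chip count are assumptions. */
#include <stdio.h>

#define NUM_CHIPS 4

/* shortest distance between two chips on a bidirectional ring */
static int ring_hops(int from, int to)
{
    int fwd  = (to - from + NUM_CHIPS) % NUM_CHIPS;
    int back = NUM_CHIPS - fwd;
    return fwd < back ? fwd : back;
}

int main(void)
{
    const unsigned slice = 256 * 1024 * 1024; /* 256 MB owned per chip */
    unsigned addr = 700 * 1024 * 1024;        /* some address in the pool */
    int owner = (int)((addr / slice) % NUM_CHIPS);

    for (int chip = 0; chip < NUM_CHIPS; chip++)
        printf("chip %d -> addr owned by chip %d: %d hop(s)\n",
               chip, owner, ring_hops(chip, owner));
    return 0;
}

The point of the sketch: every chip can reach every byte (unified memory), but the latency is non-uniform, so where data lands still matters.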
 
Originally posted by: Paratus
The real game changer for dual cards will be when the inter-chip communication is direct chip-to-chip, with the PCIe switch removed, and each chip is able to access ALL onboard memory.

Say 2, 3, or 4 RV870s on a 512-bit GDDR5 ring bus should do the trick.

Is "ring bus" the name for the technology that allows unified memory?

If (whoever) can perfect this, then I think we might see smaller and smaller GPUs (that can be released earlier, a la the RV740). Then these companies can just increase total computing power by using this "ring bus" instead of making different-size GPUs.

P.S. Who owns the "ring bus" patent?
 
http://www.patents.com/High-sp...ngbus/US7280549/en-US/

Looks like Micron owns a ring bus patent. Not too long ago I was reading that one of the GPU companies was considering using Micron memory... though at the time I didn't realize Micron owned this ring bus patent.

Maybe we will see RV870 with the ring bus?

Three or four 40nm RV740 cores on a 512-bit GDDR5 ring bus would make a pretty strong card that is very cheap to make, I would think.

Or maybe it is Nvidia that will use a "ring bus" if these rumors about GT300 being a large chip are false.
 
Originally posted by: nitromullet
Originally posted by: OCguy
Originally posted by: LOUISSSSS
Well, we can all agree that ATI's dual-GPU cards are much better designed. nVidia is still selling cards with dual PCBs + an internal SLI bridge. Where's the innovation? ATI went from dual cards to dual chips on a single PCB.

And what does it matter when you plug it in and play a game? 😕

That is like the zoners who complained that a Q6600 was really just two E6600s super-glued together.


I like how the article says GT300 might not be a match for the X2 card. Do they forget that nV has had an X2 card for the last few generations as well? One that has held the performance crown?

Yeah, NV has only had a dual-GPU card for the last three generations, yet somehow it's ATI's strategy. I just don't get it.

OCguy does have a point. I don't think dual GPUs on one card is an original idea either. It's basically an extension of the dual-GPU (though on separate cards) idea from the 3dfx days. So it's not exactly a new idea, whether it comes from AMD or nVidia. Both GPU companies have had dual-GPU single-card solutions in the past, even prior to the current generation of video cards.

Using two mid-range GPUs to create a high-performance single-card part is also not a new idea. Both AMD and nVidia have had past products utilizing lower-end or mid-range GPUs for a high-performance part.

So what is new about AMD's dual-GPU solutions? Instead of being the odd SKU, AMD has made dual-GPU cards the centerpiece of its GPU strategy. Whereas previously the odd dual-GPU card was made to fill a very high-end niche (nVidia GTX 295), AMD is now using it to create a more "top to bottom" release of GPUs.

Let's face it, while nVidia has held the top spot for quite a while now, its lower mid-range and low-end GPUs are still running on previous-gen technology and only the upper mid-range and high-end parts use the newer GPUs. Not that there is anything wrong with an 8800/9800. But I'd like to see some of that newfangled tech trickle down to the $100-$150 price range instead of seeing the same old GPU they've been using for the last two years repackaged as a new part. TSMC's problems with its 40nm process have also likely hindered nVidia's efforts at producing a lower-end GT200-based card.

Though the bottom line is performance, more important is how much bang I can get for my dollars. At this point, AMD/ATI's strategy seems to make a little more sense, especially with the economy the way it is. AMD can produce video cards based on its new, smaller GPU designs and get them out to the public faster than nVidia can with its monolithic GPU approach.

While nVidia may take the top-performer crown, it's probably more important to be the mid-range king, where most discrete GPUs are sold. AMD is taking aggressive aim at this segment of the market and seems to have mostly conceded the top end to nVidia. In previous years, it was very important to be the top performer, as it created a halo effect where being the top dog pulled up your entire product line in terms of sales. GPUs have now gotten good enough that even a $100-150 GPU is a very good performer and likely good enough for Joe Consumer.

AMD is basically going for the "bang for the buck" crown and is doing a good job of it. It has forced nVidia to lower prices (benefiting all consumers) in response. Now AMD has to follow up on that success, and following up has historically been a huge problem for it, as evidenced by what came after the well-regarded Radeon 9700 series and the Athlon CPUs.

One huge area AMD will need to beef up is the GPGPU segment. While AMD makes very good GPUs, nVidia is not standing still and its GPUs are very high performers. nVidia is also much further ahead on things like physics acceleration and using the GPU for video encoding. So even though products like Badaboom and RapiHD are not ready for prime time, they will get better and better. This creates positive buzz for nVidia that AMD can't ignore.

AMD needs to do better in those areas. The markets are still young enough that nVidia, while in the lead, is not guaranteed any long-term victory. So AMD can still make a splash and, if not win, at least be in the race. But it has to get off its rear end and start showing something.
 
Originally posted by: OCguy
Originally posted by: LOUISSSSS
Well, we can all agree that ATI's dual-GPU cards are much better designed. nVidia is still selling cards with dual PCBs + an internal SLI bridge. Where's the innovation? ATI went from dual cards to dual chips on a single PCB.

And what does it matter when you plug it in and play a game? 😕

It saves ATI money, for one. Running a single card doesn't block your airflow as much. Less heat in your case, etc.
 
Something that's been puzzling me for a while concerning TSMC providing chips for both Nvidia and AMD is: who gets priority? Take the current situation, for example. Both the red and green teams are trying to get their new graphics cards out on TSMC's 40nm process. From all the rumors floating around, it appears that TSMC is having a difficult time getting the 40nm process node to work. So when supply is scant, who gets first priority? Is it whoever pays TSMC the most money, or are the contracts such that whoever signed first is first? Does TSMC dedicate different sections of its foundry to Nvidia and AMD so it doesn't matter who signed first? Any industry insiders know how that works?
 
Originally posted by: nitromullet
It is more about how these sites always go on about how NVIDIA is focused on big, hot chips when in fact they are eyeing the dual-GPU route the entire time.

Yea, I think that's started to take on a life of its own.
Yes, nVidia's chips are larger than AMD's... But they don't actually run hotter. They generally give slightly better performance per watt than AMD, and idle power in particular seems to be better on nVidia cards. Reviewers also generally conclude that nVidia's cards are less noisy.
So somehow people have this idea that nVidia's chips are the 'Pentium 4' of GPUs, when that simply isn't true.
Larger chips may be more expensive to manufacture, but as long as nVidia keeps prices competitive for the end-user, why should they care? And as long as nVidia stays in business, why should even nVidia care? nVidia has had this strategy for years now. I don't think their engineering team is a bunch of drunken monkeys, so if this were really a stupid thing to do, they would have changed their strategy a long time ago.

As for dual cards... the GTX295 was recently redesigned as a single PCB.
It would surprise me if the next-generation dual-GPU card were NOT a single PCB as well, since they can probably 'recycle' this PCB design for the most part. That was probably the idea in the first place; why else would they invest in redesigning the GTX295's PCB only months before it is replaced by the newer lineup?
 
Originally posted by: akugami
So what is new about AMD's dual-GPU solutions? Instead of being the odd SKU, AMD has made dual-GPU cards the centerpiece of its GPU strategy. Whereas previously the odd dual-GPU card was made to fill a very high-end niche (nVidia GTX 295), AMD is now using it to create a more "top to bottom" release of GPUs.

Yea, that's a bit funny really.
People are on about how ATi uses smaller, cheaper GPUs...
But as soon as you start using a dual-GPU card, the whole thing blows out of proportion. Your PCB becomes much more expensive, you're 'wasting' half your memory (so you need to put twice as much on), and the power consumption goes way over the top.
And then there's the problem that performance relies a lot on how well the drivers and the games get on. In some games you just get the performance of one GPU.

So even if you're going to argue that small, cheap GPUs are a good strategy... the X2 strategy doesn't fit those advantages AT ALL. But I never hear anyone mention that. It's as if they think 'a single GPU is good, so two GPUs are twice as good'.
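
To spell out the memory point, since it's easy to gloss over: under alternate frame rendering each GPU needs its own complete copy of every texture and buffer, so the usable memory is the per-GPU amount, not the total on the box. A trivial C illustration (the 1024 MB figure is just an example, not any particular card):

/* Why an "X2" board 'wastes' memory under AFR: each GPU mirrors all
 * data, so usable memory doesn't grow with GPU count. */
#include <stdio.h>

int main(void)
{
    const int per_gpu_mb = 1024;  /* memory attached to each GPU (example) */

    for (int gpus = 1; gpus <= 4; gpus++) {
        int installed = gpus * per_gpu_mb;  /* what you pay for */
        int usable    = per_gpu_mb;         /* what applications see */
        printf("%d GPU(s): %4d MB installed, %4d MB usable (%d%% mirrored)\n",
               gpus, installed, usable,
               100 * (installed - usable) / installed);
    }
    return 0;
}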

Originally posted by: akugami
Let's face it, while nVidia has held the top spot for quite a while now, its lower mid-range and low-end GPUs are still running on previous-gen technology and only the upper mid-range and high-end parts use the newer GPUs. Not that there is anything wrong with an 8800/9800. But I'd like to see some of that newfangled tech trickle down to the $100-$150 price range instead of seeing the same old GPU they've been using for the last two years repackaged as a new part.

There is very little difference between G92 and GT200, and both are made on 55 nm now, so for all intents and purposes, a G92 *is* a scaled-down version of the 'latest tech'.

Originally posted by: akugami
Though the bottom line is performance, more important is how much bang I can get for my dollars. At this point, AMD/ATI's strategy seems to make a little more sense, especially with the economy the way it is. AMD can produce video cards based on its new, smaller GPU designs and get them out to the public faster than nVidia can with its monolithic GPU approach.

That depends though. Just because AMD's current 4000-series is successful doesn't mean that the smaller GPU is ALWAYS the more successful one.
I'd like to point out that AMD's current 'smaller, cheaper' strategy started with the 3000-series.
They were an improvement over the 2000-series in the sense that you got the same performance with a smaller chip, so AMD could actually make some profit... and the power consumption was a bit more realistic than before...
But the REAL star of that generation was the nVidia G92 (8800GT/GTS 512). THAT was the GPU that completely redefined price/performance. It was faster than anything AMD offered, and it was also REALLY cheap. They forced AMD's prices down.
As a result, the 3000-series still weren't a big success, even though they were decent cards in their own right. The G92-based cards were just too good to ignore.

We'll just have to see where the balance lies in this generation.

Originally posted by: akugami
One huge area AMD will need to beef up is the GPGPU segment. While AMD makes very good GPUs, nVidia is not standing still and its GPUs are very high performers. nVidia is also much further ahead on things like physics acceleration and using the GPU for video encoding. So even though products like Badaboom and RapiHD are not ready for prime time, they will get better and better. This creates positive buzz for nVidia that AMD can't ignore.

AMD needs to do better in those areas. The markets are still young enough that nVidia, while in the lead, is not guaranteed any long-term victory. So AMD can still make a splash and, if not win, at least be in the race. But it has to get off its rear end and start showing something.

I agree. So far AMD has only talked about physics and GPGPU, but actual tools and software have failed to materialize. We'll have to see if AMD's next-gen gets improved GPGPU capabilities.
As I said before, I think the current generation of nVidia GPUs has an advantage over AMD's GPUs in GPGPU tasks. Aside from the fact that all of nVidia's GPUs since the 8-series can run OpenCL, whereas only the 4000-series from AMD supports it, I also think that nVidia's architecture is considerably more efficient for OpenCL-style code.
So I wonder if AMD will close that gap a bit with the next generation... Of course, nVidia isn't resting on its laurels either, and probably has some new tricks up its sleeve for CUDA (current CUDA is already ahead of OpenCL/DX11 Compute in terms of features anyway).
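
For anyone wondering what "can run OpenCL" boils down to in practice: the first thing any OpenCL host program does is enumerate platforms and GPU devices, and a card only "supports OpenCL" if the vendor's driver exposes it there. A minimal C query using the standard OpenCL 1.0 host API (assuming a vendor SDK is installed):

/* Minimal OpenCL device enumeration (OpenCL 1.0 host API).
 * Whether a given Radeon or GeForce shows up depends entirely on the
 * vendor's driver, which is the support question being argued above. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[4];
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(4, platforms, &num_platforms) != CL_SUCCESS) {
        printf("no OpenCL platforms found\n");
        return 1;
    }

    for (cl_uint p = 0; p < num_platforms; p++) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                           8, devices, &num_devices) != CL_SUCCESS)
            continue;  /* this platform exposes no GPU devices */

        for (cl_uint d = 0; d < num_devices; d++) {
            char name[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            printf("GPU device: %s\n", name);
        }
    }
    return 0;
}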
 
The i5 CPUs will have an on-die PCIe controller. This gives me the impression that they will be incredibly optimized for Larrabee. It apparently offers half the bandwidth, which will probably cripple AMD/NV GPUs but won't affect the GPGPU-oriented Larrabee. The drastically reduced latency will most certainly boost Larrabee's performance a great deal.
 
Originally posted by: nitromullet
Originally posted by: OCguy
Originally posted by: LOUISSSSS
Well, we can all agree that ATI's dual-GPU cards are much better designed. nVidia is still selling cards with dual PCBs + an internal SLI bridge. Where's the innovation? ATI went from dual cards to dual chips on a single PCB.

And what does it matter when you plug it in and play a game? 😕

That is like the zoners who complained that a Q6600 was really just two E6600s super-glued together.


I like how the article says GT300 might not be a match for the X2 card. Do they forget that nV has had an X2 card for the last few generations as well? One that has held the performance crown?

Yeah, NV has only had a dual-GPU card for the last three generations, yet somehow it's ATI's strategy. I just don't get it.

nVidia has pasted together two very large chips for the past several generations because it's had to if it wanted to maintain the performance crown. ATI has built its cards from the ground up with dual-GPU in mind. If you want to delude yourself into thinking that nVidia likes expensive sandwiches, then have fun.
 
Originally posted by: bryanW1995
ATI has built its cards from the ground up with dual-GPU in mind. If you want to delude yourself into thinking that nVidia likes expensive sandwiches, then have fun.

Who is deluding whom here? Even for AMD a dual-GPU solution isn't exactly cheap, and AMD needs dual-GPU to go up against single-GPU nVidia cards.
And how are nVidia's GPUs not designed with multi-GPU in mind? Even their single-GPU cards have supported SLI for years, longer than AMD has had CrossFire.
 
All GPUs inherently lend themselves to multi-GPU because they are so massively parallel. It's not by design. If it were, we would have them sharing a pool of memory by now.
 
Originally posted by: SickBeast
All GPUs inherently lend themselves to multi-GPU because they are so massively parallel. It's not by design. If it were, we would have them sharing a pool of memory by now.

The parallelism has little to do with it.
CPUs aren't very parallel, yet we've had multi-CPU systems for far longer than we've had multi-GPU.

You do have to have SOME design in there for multi-GPU, because there has to be some communication/synchronization between the GPUs in the system.
But indeed, neither SLI nor CrossFire is a very elegant solution to multi-GPU. They are just the most basic way to do it:
take two video cards, let one render the even frames, and let the other render the odd frames (and extend the concept to 3 and 4 GPUs). A sketch of that dispatch follows below.
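
In code terms, that "most basic way" is literally a modulo over the frame counter; all the hard parts of SLI/CrossFire (synchronization, shuffling finished frames to the display GPU) hide behind it. A bare-bones C sketch, with render_frame() as a stand-in name rather than any real driver API:

/* Alternate Frame Rendering in its most basic form: frame i goes to
 * GPU i % N. render_frame() is a hypothetical stand-in, not a real API. */
#include <stdio.h>

#define NUM_GPUS 2

static void render_frame(int gpu, int frame)
{
    /* in a real driver this would queue the whole frame's work on one GPU */
    printf("GPU %d renders frame %d\n", gpu, frame);
}

int main(void)
{
    for (int frame = 0; frame < 8; frame++) {
        int gpu = frame % NUM_GPUS;  /* even frames -> GPU 0, odd -> GPU 1 */
        render_frame(gpu, frame);
        /* the catch: frame N+1 often depends on frame N (render-to-texture
         * and the like), which is why scaling is never a clean 2x */
    }
    return 0;
}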

Intel has been hinting at very good scaling with multiple Larrabees. It will be interesting to see how they solve the multi-GPU problem. I hope it's a bit less brute-force and a bit more elegant.
 
Originally posted by: bryanW1995
nVidia has pasted together two very large chips for the past several generations because it's had to if it wanted to maintain the performance crown. ATI has built its cards from the ground up with dual-GPU in mind. If you want to delude yourself into thinking that nVidia likes expensive sandwiches, then have fun.

This was Jen-Hsun Huang from August 2008 regarding what turned out to be the GTX 295...

http://www.guru3d.com/news/nvi...ips-gtx-280-gx2-rumor/

We've got nothing against GX2s and recently, we just had another GX2 with the 9800 GX2. It has its advantages and disadvantages and so I don't know that there's any particular philosophical approach that we take here. We just have to look at the market and build the right product.

So you just have to take it case by case and -- but we think our approach is the right approach. The best approach is to do both. If we could offer a single chip solution at 399, it certainly doesn't preclude us from building a two-chip solution at something higher. So I think that having the right price, right product at each price point and the best-performing product at each price point is the most important thing.
 
Originally posted by: Scali
Originally posted by: bryanW1995
ATI has built its cards from the ground up with dual-GPU in mind. If you want to delude yourself into thinking that nVidia likes expensive sandwiches, then have fun.

Who is deluding whom here? Even for AMD a dual-GPU solution isn't exactly cheap, and AMD needs dual-GPU to go up against single-GPU nVidia cards.
And how are nVidia's GPUs not designed with multi-GPU in mind? Even their single-GPU cards have supported SLI for years, longer than AMD has had CrossFire.

Yet it kept the performance crown for a while and forced nVidia to make a sandwich card that barely outperforms the X2 while being more expensive, hotter, and inelegant. After all, an HD 4870 is slightly slower than the GTX 280, so the X2 is in a different league where the GTX 280 is no match, and currently the HD 4890, when overclocked, nips at the heels of the GTX 285 and often outperforms it.
 
Originally posted by: evolucion8
Yet it kept the performance crown for a while and forced nVidia to make a sandwich card that barely outperforms the X2 while being more expensive, hotter, and inelegant.

It's not a sandwich card anymore. Besides, I don't think the GTX295 ever ran hotter than a 4870X2.

Originally posted by: evolucion8
After all, an HD 4870 is slightly slower than the GTX 280, so the X2 is in a different league where the GTX 280 is no match, and currently the HD 4890, when overclocked, nips at the heels of the GTX 285 and often outperforms it.

The GTX280 has been on the market for quite a while though. It's about time that AMD closes the gap. nVidia is overdue for a refresh.
The 4890 is just AMD pushing its technology as hard as it can, making it an incredibly power-hungry and hot card. If you want to talk about elegance, the 4890 isn't it.
I'm more interested in whether or not nVidia can make yet another leap forward in performance, like it did with the 8800 series a few years back. Or perhaps AMD can repeat the success of the Radeon 9700.
 
Originally posted by: Scali
Originally posted by: evolucion8
Yet it kept the performance crown for a while and forced nVidia to make a sandwich card that barely outperforms the X2 while being more expensive, hotter, and inelegant.

It's not a sandwich card anymore. Besides, I don't think the GTX295 ever ran hotter than a 4870X2.

Originally posted by: evolucion8
After all, an HD 4870 is slightly slower than the GTX 280, so the X2 is in a different league where the GTX 280 is no match, and currently the HD 4890, when overclocked, nips at the heels of the GTX 285 and often outperforms it.

The GTX280 has been on the market for quite a while though. It's about time that AMD closes the gap. nVidia is overdue for a refresh.
The 4890 is just AMD pushing its technology as hard as it can, making it an incredibly power-hungry and hot card. If you want to talk about elegance, the 4890 isn't it.
I'm more interested in whether or not nVidia can make yet another leap forward in performance, like it did with the 8800 series a few years back. Or perhaps AMD can repeat the success of the Radeon 9700.

The 4890 uses less power on the whole than the 4870. The 4890 is really a refined, improved 4870. While it's hardly revolutionary, it is a step forward from the previous design.
 