Nvidia to AMD: Bring On Antilles

Jdawg84

Senior member
Feb 6, 2010
256
0
0
Nvidia's dual chip card waits for Antilles


Nvidia has a dual-GF110 card that is supposed to launch very soon. However, since AMD has postponed its dual-chip card to Q1 2011, it looks like Nvidia will do the same.

Sources close to the company say that Nvidia's partners are more or less ready and that the company could pull off a quick launch even in 2010, but it looks like that will only happen after AMD's Antilles Radeon HD 6990 dual-chip card comes out. Nvidia badly wants the two-chip performance crown and wants to see and study Antilles before launching its own card. The rumoured name for the dual-GF110 card is Geforce GTX 590.

We still have to see the Cayman Radeon HD 6970 and HD 6950 cards in late 2010, and whether they can beat the Geforce GTX 580 and the soon-to-launch Geforce GTX 570. The challenge for AMD is that making a big chip and clocking it high is not that easy.

The last time ATI did this was with R600, more than three years ago, and things didn't go as smoothly as planned. The chip ran quite hot and didn't perform that well. After that, graphics chief Rick Bergman said ATI was changing course to make RV770 and, a year later, RV870, two chips that chased AMD's performance-per-watt dream.

Now that dream is apparently over, as Cayman, a massive chip, is powering Antilles, the Radeon HD 6990. Let's hope it all goes well for AMD.


http://www.fudzilla.com/graphics/item/20924-nvidias-dual-chip-card-waits-for-antilles


God I can't wait. :twisted::twisted::twisted:
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
So how big and hot is Cayman? I know the AMD fans have been tossing around 360mm². But isn't it supposed to be close to 3 billion transistors? I don't see how AMD can have a chip with that many transistors be that much smaller than a 580.
 

dug777

Lifer
Oct 13, 2004
24,778
4
0
So how big and hot is Cayman? I know the AMD fans have been tossing around 360mm². But isn't it supposed to be close to 3 billion transistors? I don't see how AMD can have a chip with that many transistors be that much smaller than a 580.

Perhaps more importantly, how have we decided to define 'big and hot'? Maybe Fermi in its initial forms, or the legendary FX 5800? ;)

Quite aside from that, why would *anyone* care how 'big and hot' it was if it walked the walk? Surely that has been the argument from a certain side of the game ever since Fermi popped out? ;)

Heck, maybe AMD have invented smaller transistors, at the same node size. Or maybe Darth Vader, Michael J. Fox and Professor Farnsworth have combined to make these AMD chips into doomsday devices ;)

Those who know can't say, which leaves pretty poor pickings for the rumour mill. Wait a pretty short period relative to an average human lifespan, get on with the amazing collection of things there are to occupy ourselves with in real life, and all will be revealed. If it's sh1t, then the knives can (and will, quite rightly) come out :)
 

biostud

Lifer
Feb 27, 2003
19,938
7,041
136
So how big and hot is Cayman? I know the AMD fans have been tossing around 360mm². But isn't it supposed to be close to 3 billion transistors? I don't see how AMD can have a chip with that many transistors be that much smaller than a 580.

The 68xx are 1.7B transistors @ 255mm²
GTX 460 is 1.95B transistors (+14%) @ 332mm² (+31%)
GTX 580 is 3.01B transistors (+77%) @ 520mm² (+104%)

If they can pack the transistors as densely in the 69xx series as in the 68xx, and the transistor count is 2.8B, then the die size should be about 420mm² (+64%)

But that's all speculation of course :)
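For anyone who wants to redo the arithmetic, the same scaling estimate works out in a few lines of Python. Note the 2.8B transistor count is just the rumour floating around this thread, not an official spec:

```python
# Die-size extrapolation: assume the 69xx packs transistors at the same
# density as the 68xx (Barts). All figures are the thread's numbers.
barts_transistors = 1.70e9   # Radeon HD 68xx transistor count
barts_die_mm2 = 255.0        # Radeon HD 68xx die size

density = barts_transistors / barts_die_mm2   # transistors per mm^2

cayman_transistors = 2.8e9   # rumoured 69xx transistor count
cayman_die_mm2 = cayman_transistors / density

print(f"estimated Cayman die: {cayman_die_mm2:.0f} mm^2")  # 420 mm^2
```

Same answer: at Barts density, a 2.8B-transistor chip lands right around 420mm², still well short of the GTX 580's 520mm².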
 

Jdawg84

Senior member
Feb 6, 2010
256
0
0
Wonder why it'll be called the 590 and not the 595.

590 - 570 cores
595 - 580 cores

Would be pretty awesome
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Heck, maybe AMD have invented smaller transistors, at the same node size.

Transistors are 3-dimensional, and designers are allowed to vary a transistor's dimensions along all three axes.

There are design rule restrictions, but I think the primary reason people get hung up on the xtor density argument is that they are fixated on the node label and aren't really versed in even the most rudimentary concepts of how an IC functions.

AMD doesn't/didn't invent smaller transistors in order to get higher xtor density; TSMC had already invented them and they were always available.

In fact, I can guarantee you that AMD is not using the smallest ones available at 40nm even now. If they did, their chips would probably be another 30-40% smaller still and would only clock to around 200MHz tops.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
To be honest guys, I wouldn't take a word of what FUD says. Especially not after FUDzilla's two most recent false doozies. Sure, I expect Nvidia to release a dual GPU card. When? I have no idea.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
To be honest guys, I wouldn't take a word of what FUD says. Especially not after FUDzilla's two most recent false doozies. Sure, I expect Nvidia to release a dual GPU card. When? I have no idea.

Apoppin is right, as are you...I have confirmed story after story from FUDz regarding AMD stuff as being patently false. They really do just literally make crap up and post it.
 

dug777

Lifer
Oct 13, 2004
24,778
4
0
Transistors are 3-dimensional, and designers are allowed to vary a transistor's dimensions along all three axes.

There are design rule restrictions, but I think the primary reason people get hung up on the xtor density argument is that they are fixated on the node label and aren't really versed in even the most rudimentary concepts of how an IC functions.

AMD doesn't/didn't invent smaller transistors in order to get higher xtor density; TSMC had already invented them and they were always available.

In fact, I can guarantee you that AMD is not using the smallest ones available at 40nm even now. If they did, their chips would probably be another 30-40% smaller still and would only clock to around 200MHz tops.

I have to admit that I was playing the fool :oops: It just struck me as a possibility (on the very dopey thought process that perhaps transistors could vary in their complexity, with the less complex ones exhibiting a lesser tendency to hog atomic space if well designed)... :)

An entirely ignorant fool at that, so at the very least I have learnt something from my shenanigans :)
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
I would expect Nvidia to bring out a dual-GPU card this time around. A dual GF104 might not have been enough to beat a 5970 and probably would have come pretty late compared to it anyway, so they put their resources elsewhere. Now, with new Fermis coming out and being a bit more well-mannered in terms of power use, I don't see why they wouldn't bring one out.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I have to admit that I was playing the fool :oops: It just struck me as a possibility (on the very dopey thought process that perhaps transistors could vary in their complexity, with the less complex ones exhibiting a lesser tendency to hog atomic space if well designed)... :)

An entirely ignorant fool at that, so at the very least I have learnt something from my shenanigans :)

My post wasn't intended to impugn or denigrate you in any fashion; I was just trying to communicate that the topic of xtor density involves less ground-up unicorn horn and fewer eyes of newt than some people are led to believe, owing to an understandable gap in their knowledge base.

Trust me, I understand where you are coming from. I was pretty clueless about the details until I started working in the field... even the stuff they taught in college was strikingly limited insofar as painting a picture of the actual landscape.

What AMD does in GPUs is not "nothing short of a miracle", just as what Intel does in CPUs is not "nothing short of a miracle". These guys prioritize their budgets to focus on the key metrics of interest to them.

For AMD at 40nm that has netted them an architecture that delivers higher performance/watt than Nvidia as well as an IC that delivers higher xtor density.

If they did this while spending equal or less development money than Nvidia, then they did good; if they spent considerably more than Nvidia, then it should come as no surprise that the product reflects the higher initial investment.

We won't ever know, though, not unless a high-level project manager who knows the budget numbers elects to make it public (at the risk of losing their job).
 

maddie

Diamond Member
Jul 18, 2010
5,158
5,545
136
Apoppin is right, as are you...I have confirmed story after story from FUDz regarding AMD stuff as being patently false. They really do just literally make crap up and post it.

Why do you think we, myself included, still read them (Fudzilla)? I have no idea.
 

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
Still waiting for that dual-GPU GF100/104 card Fudzilla announced a loooong time ago.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Dual GPU is much more important to NVDA than to AMD, due to Nvidia's lack of single-GPU Surround. But we're close enough to 28nm now, and I hate multi-GPU for various reasons (software, power/heat/noise, potential microstutter); otherwise I'd at least CONSIDER getting a dual-Fermi card.

Nevertheless NV would go a long way to closing off that gaping hole they have in their feature set (single-card Surround). Now for single-GPU Surround... then they would have Surround, 3D, CUDA, and PhysX in their feature set, which simply outclasses what AMD can put up for their feature set.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Dual GPU is much more important to NVDA than to AMD, due to Nvidia's lack of single-GPU Surround. But we're close enough to 28nm now, and I hate multi-GPU for various reasons (software, power/heat/noise, potential microstutter); otherwise I'd at least CONSIDER getting a dual-Fermi card.

Nevertheless NV would go a long way to closing off that gaping hole they have in their feature set (single-card Surround). Now for single-GPU Surround... then they would have Surround, 3D, CUDA, and PhysX in their feature set, which simply outclasses what AMD can put up for their feature set.
We are still probably at least 9-10 months away from having 28nm cards available.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Nevertheless NV would go a long way to closing off that gaping hole they have in their feature set (single-card Surround). Now for single-GPU Surround... then they would have Surround, 3D, CUDA, and PhysX in their feature set, which simply outclasses what AMD can put up for their feature set.

I always thought the reason NV needed dual cards for Surround was that each card could only output to two displays. Although the cards have dual DVI and mini-HDMI on them, you can only use two of the outputs at a time, because each DVI/HDMI output needs its own dedicated pixel clock and these GPUs only provide two.

AMD got around this by adding DisplayPort, which is packet-based and doesn't need a dedicated pixel clock per output, whereas NV opted to forgo DisplayPort and require SLI instead. I guess their rationale was that if someone was running three monitors for gaming, they would probably also be willing to buy two cards. Granted, they also had the benefit of watching AMD introduce Eyefinity with DisplayPort first, which didn't get the best feedback, mainly because of the DisplayPort requirement.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
Fudzilla, huh? Right, I don't believe a word they say.

Still, I doubt nV would work to AMD's timeline rather than release when THEY are ready, not wait for AMD. If the 6990 ended up faster than the 590, they wouldn't hold the title of fastest card, even if only for a month or so.

This will probably come down to who has the best scaling plus performance/watt, as these cards are likely to be limited to 300W.
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
Nevertheless NV would go a long way to closing off that gaping hole they have in their feature set (single-card Surround). Now for single-GPU Surround... then they would have Surround, 3D, CUDA, and PhysX in their feature set, which simply matches what AMD can put up for their feature set.


There, FIFY.
 

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
Why do you think we, myself included, still read them (Fudzilla)? I have no idea.

He knows how to sensationalize, and people are impatient and try to get as much info as possible. His site is so rank-amateur, though, that I am afraid to even navigate to it for fear I will get some sort of malware on my machine.

I have learned never to trust anything Fuad says. I am disappointed that so many members post his stories on our forum, though. His lack of integrity has not only not hurt him, it has helped him immensely, judging by the traffic he gets for it. The vast number of video threads in this forum based on his made-up stories is representative of that.