nVidia GT300's Fermi architecture unveiled: 512 cores, up to 6GB GDDR5


Kuzi

Senior member
Sep 16, 2007
572
0
0
Originally posted by: dguy6789
It's probably safe to say that this will be faster than the 5870 but slower than the 5870x2. I hope for Nvidia's sake that it will be sold at a lower MSRP than the 5870x2. I'm also willing to bet that you will hear from the same people that said you can compare the GTX295 to the 5870 that you can't compare the Fermi to the 5870x2.

Yes, in all likelihood Fermi will perform between an HD5870 and 5870x2, which should be fine as long as the price is right.
 

Kuzi

Senior member
Sep 16, 2007
572
0
0
After reading Anand's article, I liked the fact that nV is taking a different approach for their future GPUs and working hard to make them more general purpose. While this takes up precious die space (and more time/resources), it might be the right move for nV right now, especially considering they don't produce their own CPUs.

Looking at the last few years, GPUs have been more or less doubling in performance while CPUs are only marginally improving and/or adding more cores, which doesn't always mean higher performance. At least not for the average home and office user.

If this keeps up without better software optimization for multi-core CPUs, the CPU will become a bottleneck for next-gen GPUs like Fermi/RV870. Hopefully Sandy Bridge/Bulldozer will help with that when released :)
 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
So let's consider what we've got here.

5870 = exactly 2 x 4870, plus a small improvement in memory bandwidth
(256/8 * 900*4 = 115.2GB/s vs 256/8 * 1200*4 = 153.6GB/s or 33% increase)

GTX 385 = more than 2 x GTX 285 (512 cores vs 240 cores) and a significant improvement in memory bandwidth (512/8*1000*2 = 128GB/s vs 384/8*1200*4 = 230.4GB/s or 80% increase)

So if the clockspeeds of the new GTX are even just equal to those on the current GTX cards, these new ones are gonna rock. 2.13 times the processing power and 80% more memory bandwidth? Yes, please...

But yeah, I can definitely see a $500+ price tag in the future for these.
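
The arithmetic behind those figures, spelled out as a quick C++ sketch (bus width / 8 bytes x memory clock x transfers per clock, using the clocks as quoted above):

```cpp
#include <cstdio>

// Peak memory bandwidth in GB/s:
// (bus width in bits / 8) bytes per transfer * clock in MHz * transfers per clock, / 1000.
double bandwidth_gbps(int bus_bits, double clock_mhz, int transfers_per_clock) {
    return bus_bits / 8.0 * clock_mhz * transfers_per_clock / 1000.0;
}

int main() {
    printf("HD 4870: %6.1f GB/s\n", bandwidth_gbps(256,  900, 4)); // 115.2, GDDR5
    printf("HD 5870: %6.1f GB/s\n", bandwidth_gbps(256, 1200, 4)); // 153.6, GDDR5
    printf("GTX 285: %6.1f GB/s\n", bandwidth_gbps(512, 1000, 2)); // 128.0, GDDR3 at the clock quoted here
    printf("Fermi:   %6.1f GB/s\n", bandwidth_gbps(384, 1200, 4)); // 230.4, rumored GDDR5 config
    return 0;
}
```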
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Is it just me, or does ATI seem to be competing with NV in terms of GPU while NV is on a whole other level? I'm thinking that NVIDIA has actually conceded that PC gaming is on the way out over the next few years and is adjusting its engineering accordingly. Fermi not only looks like a great Tesla chip, but it pretty much looks like MS or Sony could design a console around a single Fermi chip and some RAM and call it a day. Granted, NV hasn't exactly provided a truly compelling reason to wait for Fermi instead of opting for a 5870 today. From a straight-up GPU perspective, it looks like the NV vs. ATI price/performance ratio in 2010 will be about the same as it was in 2008-2009.
 

lopri

Elite Member
Jul 27, 2002
13,314
690
126
From page 7

Originally posted by: Anand Lal Shimpi
NVIDIA's architecture is designed to address its primary deficiency: the company's lack of a general purpose microprocessor. As such, Fermi's enhancements over GT200 address that issue. While Fermi will play games, and NVIDIA claims it will do so better than the Radeon HD 5870, it is designed to be a general purpose compute machine.

I think this is the gist of NV's dilemma. On the flip side, it was the reason why AMD paid a fortune to acquire ATI. I sure hope Fermi succeeds at what it aims for, opens a can of whoop-ass, and puts Intel back in its place. :D

Oh, and did anyone see this?

Originally posted by: Anand Lal Shimpi
Larrabee is in rough shape right now. The chip is buggy; the first time we met it, it wasn't healthy enough to even run a 3D game. Intel has 6 - 9 months to get it ready for launch. By then, the Radeon HD 5870 will be priced between $299 - $349, and Larrabee will most likely slot in $100 - $150 cheaper. Fermi is going to be aiming for the top of the price brackets.

So Larrabee = $200? Interesting.

And page 8

Originally posted by: Anand Lal Shimpi
Will 2010 be the beginning of good enough performance in PC games? Display resolutions have pretty much stagnated, and PC games are first developed on consoles, which have inferior hardware and thus don't have GPU requirements as high. The fact that NVIDIA is looking to Tegra and Tesla to grow the company is very telling. Then again, perhaps a brand new approach to graphics is what we'll need for the re-invigoration of PC game development. Larrabee.
:laugh:
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
So it's going to be a nice card, though late.
When can we expect a scaled-down version? Q2/Q3 2010?
 

guy93

Senior member
Aug 2, 2008
341
3
81
I wonder how many fps you would get in Counter-Strike: Source with an Nvidia GT300 + Core i7... o_O
 

Forumpanda

Member
Apr 8, 2009
181
0
0
The specs make me fairly excited; I hope someone sneaks out an actual benchmark.

As far as pure gaming goes, I think it's really hard to tell which card will be best. It almost feels like both camps are starting to hit some harsh diminishing returns in 3D performance (FF hardware not scaling as well?) and are aiming towards the GPGPU market (which I am more interested in anyway).
 

imported_Shaq

Senior member
Sep 24, 2004
731
0
0
It looks about 10.5" to me. Two of these or a 5870 X2 is going to be CPU limited, so tri and quad-SLI/XFire will only be for breaking benchmark records with exotic cooling. A top-of-the-line gaming rig will be under $2000 until a new console generation is released, which shouldn't be too far away since consoles are at $300 now with full features. That gives Nvidia a year or two to build up the GPGPU side of their business.

Edit: I noticed in the reviews that physics will be accelerated by the new architecture. It is possible that PhysX will be processed fast enough on the GF100 that it won't require a second card.
 

wlee15

Senior member
Jan 7, 2009
313
31
91
Originally posted by: Denithor
So let's consider what we've got here.

5870 = exactly 2 x 4870, plus a small improvement in memory bandwidth
(256/8 * 900*4 = 115.2GB/s vs 256/8 * 1200*4 = 153.6GB/s or 33% increase)

GTX 385 = more than 2 x GTX 285 (512 cores vs 240 cores) and a significant improvement in memory bandwidth (512/8*1000*2 = 128GB/s vs 384/8*1200*4 = 230.4GB/s or 80% increase)

So if the clockspeeds of the new GTX are even just equal to those on the current GTX cards, these new ones are gonna rock. 2.13 times the processing power and 80% more memory bandwidth? Yes, please...

But yeah, I can definitely see a $500+ price tag in the future for these.

The GTX 285 uses 1242 MHz GDDR3 memory, so it has 159 GB/s of bandwidth; the improvement is only about 45% if Fermi does use 1200 MHz memory.
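
Plugging the corrected 1242 MHz GDDR3 clock into the same formula:

```cpp
#include <cstdio>

int main() {
    // GTX 285: 512-bit bus, 1242 MHz GDDR3 (double data rate)
    double gtx285 = 512 / 8.0 * 1242 * 2 / 1000.0; // ~159.0 GB/s
    // Fermi (rumored): 384-bit bus, 1200 MHz GDDR5 (quad data rate)
    double fermi = 384 / 8.0 * 1200 * 4 / 1000.0;  // 230.4 GB/s
    printf("GTX 285: %.1f GB/s\n", gtx285);
    printf("Fermi:   %.1f GB/s (+%.0f%%)\n", fermi, (fermi / gtx285 - 1) * 100); // ~+45%
    return 0;
}
```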
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Originally posted by: nitromullet
Is it just me, or does ATI seem to be competing with NV in terms of GPU while NV is on a whole other level? I'm thinking that NVIDIA has actually conceded that PC gaming is on the way out over the next few years and is adjusting its engineering accordingly. Fermi not only looks like a great Tesla chip, but it pretty much looks like MS or Sony could design a console around a single Fermi chip and some RAM and call it a day. Granted, NV hasn't exactly provided a truly compelling reason to wait for Fermi instead of opting for a 5870 today. From a straight-up GPU perspective, it looks like the NV vs. ATI price/performance ratio in 2010 will be about the same as it was in 2008-2009.

I think the writing has been on the wall since the G80, but especially since AMD bought ATI. Nvidia needs to find new revenue streams because they are locked out of the CPU market, and integrated graphics rule the roost. PC gaming is flatlining in a big way. They can't focus on gaming GPUs and expect to survive.

There is a huge market in the HPC world that is just waiting to be tapped. Anand's example of cost cutting and power reduction will surely get the attention of many project managers. And with their improvements to their dev tools, it is obvious Nvidia is aiming at two markets right now.

I am curious: will it eventually be possible to have one of these GPUs run an OS?
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
Originally posted by: Kuzi
After reading Anand's article, I liked the fact that nV is taking a different approach for their future GPUs and working hard to make them more general purpose. While this takes up precious die space (and more time/resources), it might be the right move for nV right now, especially considering they don't produce their own CPUs.

Looking at the last few years, GPUs have been more or less doubling in performance while CPUs are only marginally improving and/or adding more cores, which doesn't always mean higher performance. At least not for the average home and office user.

If this keeps up without better software optimization for multi-core CPUs, the CPU will become a bottleneck for next-gen GPUs like Fermi/RV870. Hopefully Sandy Bridge/Bulldozer will help with that when released :)

Not to nitpick (or call you out in particular), but isn't the 5870 called Cypress and not RV870? I think for the sake of accuracy, we should call Nvidia's next-gen GPU Fermi rather than GT300 and refer to the 5870 as Cypress rather than RV870.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: Genx87
I am curious: will it eventually be possible to have one of these GPUs run an OS?

Well, Anand confirmed C++ and Visual Studio support, so the tools to develop for the GPU are starting to arrive.

That is partly why I'm leaning towards NVIDIA working on a sort of "out of the box" console with this chip. I think a closed, proprietary console OS would be a good place to start. Plus, I'm sure a console manufacturer would be interested in a single-chip solution that handles CPU and GPU tasks effectively.
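
For anyone wondering what "C++ on the GPU" looks like in practice, here's a minimal sketch in today's CUDA style (a toy kernel, purely illustrative, not anything NVIDIA has shown for Fermi specifically):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Toy CUDA kernel: each GPU thread scales one array element.
__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float* dev = NULL;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // 256 threads per block, enough blocks to cover all n elements.
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    printf("host[0] = %.1f\n", host[0]); // 2.0
    return 0;
}
```

Fermi's pitch, going by the coverage, is that more CPU-style C++ becomes usable in device code like this.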
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Is it Fermi, or is it GF100?

I agree, getting a consensus on what the forum community calls it (and Cypress) would be great :thumbsup:

So is Fermi the Tesla-based product and GF100 the GPU-based product, both being derived from a GT300 chip?
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Originally posted by: Kakkoii
Well I can't now, apparently you've somehow taken control of the topic, so only you can edit the title lol.

Crap, I'm sorry, I just got home. What do you want me to change?
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
I programmed quite a bit when I was in college, but since then my career path has taken me away from the IT field in an occupational sense. My life-long PC enthusiast hobby has, for the most part, been relegated to mostly just game playing, web surfing, fixing friends' and neighbors' machines, shopping online, DVD encoding, music, building my own PCs, etc. I won't lie either, I spend most of my time on my PC reading tech-related articles or gaming. Since the beginning of 3D acceleration, I have owned a Voodoo 1, Banshee, Ti4200, 6600GT, 7800GTX, and currently a GTX260 Core 216.

Sooo, my question is: aside from gaming performance, will a cGPU like what Nvidia is bringing out provide a user like me with any more benefits than the current Radeon 5800 series?
 

Kakkoii

Senior member
Jun 5, 2009
379
0
0
Originally posted by: dguy6789
Originally posted by: Kuzi
After reading Anand's article, I liked the fact that nV is taking a different approach for their future GPUs and working hard to make them more general purpose. While this takes up precious die space (and more time/resources), it might be the right move for nV right now, especially considering they don't produce their own CPUs.

Looking at the last few years, GPUs have been more or less doubling in performance while CPUs are only marginally improving and/or adding more cores, which doesn't always mean higher performance. At least not for the average home and office user.

If this keeps up without better software optimization for multi-core CPUs, the CPU will become a bottleneck for next-gen GPUs like Fermi/RV870. Hopefully Sandy Bridge/Bulldozer will help with that when released :)

Not to nitpick (or call you out in particular), but isn't the 5870 called Cypress and not RV870? I think for the sake of accuracy, we should call Nvidia's next-gen GPU Fermi rather than GT300 and refer to the 5870 as Cypress rather than RV870.


Cypress = press codename
RV870 = manufacturing codename
HD 5870 = retail name when put together as a video card.

Fermi = press codename
GF180 = manufacturing codename
GTX 380 = retail name when put together as a video card.


People were throwing around the term "GT300" because the manufacturing codename of the 2XX series was "GT200".
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Originally posted by: tviceman


Sooo, my question is: aside from gaming performance, will a cGPU like what Nvidia is bringing out provide a user like me with any more benefits than the current Radeon 5800 series?

You're very likely to see acceleration in mainstream applications. The only relevant ones you listed were gaming and DVD encoding, however.

It would be very interesting to see Larrabee at $200, since it also boasts a 2:1 single:double FP performance ratio, so we're probably looking at >1 DP TFLOP from both chips. Except Larrabee is x86, so it should work right out of the box on some stuff.
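
For scale: peak throughput works out to cores x 2 (multiply-add) x shader clock, and Fermi's DP rate is half its SP rate, so hitting >1 DP TFLOP implies roughly a 2 GHz shader clock. A sketch, with the clocks purely assumed since NVIDIA hasn't announced them:

```cpp
#include <cstdio>

int main() {
    // Peak FLOPS = cores * 2 (fused multiply-add per cycle) * shader clock.
    // Fermi's double-precision rate is half its single-precision rate;
    // the clocks below are placeholders, not announced figures.
    const int cores = 512;
    const double clocks_ghz[] = {1.5, 2.0};
    for (int i = 0; i < 2; ++i) {
        double sp_tflops = cores * 2 * clocks_ghz[i] / 1000.0;
        printf("%.1f GHz: %.2f SP TFLOPS, %.2f DP TFLOPS\n",
               clocks_ghz[i], sp_tflops, sp_tflops / 2.0);
    }
    return 0;
}
```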
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Originally posted by: alyarb
Originally posted by: tviceman


Sooo, my question is: aside from gaming performance, will a cGPU like what Nvidia is bringing out provide a user like me with any more benefits than the current Radeon 5800 series?

You're very likely to see acceleration in mainstream applications. The only relevant ones you listed were gaming and DVD encoding, however.

It would be very interesting to see Larrabee at $200, since it also boasts a 2:1 single:double FP performance ratio, so we're probably looking at >1 DP TFLOP from both chips. Except Larrabee is x86, so it should work right out of the box on some stuff.

Originally posted by: Kakkoii
I really don't like the idea of our CPUs also being our GPUs. A GPU advances a lot quicker than a CPU does. I don't want to have to upgrade my CPU/GPU every time I want better graphics performance.

Part of the article addressed the fact that display resolutions are stagnant, CPU performance growth is comparatively slow, and the growth of graphical complexity in games is beginning to slow down as well. Nvidia and AMD have shown that they can move into general-purpose compute while also increasing graphics performance, so what's the big deal? It's just a coprocessor. When you want better graphics performance, you upgrade the card on which the calculations are performed, just like today. However, if game development doesn't experience a revolution soon, it's likely that a 4890/285 will be "fast enough" for quite a while.
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
Originally posted by: Kakkoii
Originally posted by: dguy6789
Originally posted by: Kuzi
After reading Anand's article, I liked the fact that nV is taking a different approach for their future GPUs and working hard to make them more general purpose. While this takes up precious die space (and more time/resources), it might be the right move for nV right now, especially considering they don't produce their own CPUs.

Looking at the last few years, GPUs have been more or less doubling in performance while CPUs are only marginally improving and/or adding more cores, which doesn't always mean higher performance. At least not for the average home and office user.

If this keeps up without better software optimization for multi-core CPUs, the CPU will become a bottleneck for next-gen GPUs like Fermi/RV870. Hopefully Sandy Bridge/Bulldozer will help with that when released :)

Not to nitpick (or call you out in particular), but isn't the 5870 called Cypress and not RV870? I think for the sake of accuracy, we should call Nvidia's next-gen GPU Fermi rather than GT300 and refer to the 5870 as Cypress rather than RV870.


Cypress = press codename
RV870 = manufacturing codename
HD 5870 = retail name when put together as a video card.

Fermi = press codename
GF180 = manufacturing codename
GTX 380 = retail name when put together as a video card.


People were throwing around the term "GT300" because the manufacturing codename of the 2XX series was "GT200".

Ah I see. Thanks for clearing that up.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: OCguy
So are Intel and nV trying to meet in the middle here?

LOL. There is going to be a meeting in the middle, I wholeheartedly agree with that.

But the marriage will remain as it is: Intel/ATI/Larrabee with Hydra chip/Havok ... AMD/ATI/Larrabee/Hydra/Havok. This is the marriage, it's as plain as day.