nVidia GT300's Fermi architecture unveiled: 512 cores, up to 6GB GDDR5


lopri

Elite Member
Jul 27, 2002
13,314
690
126
These rumored specs are more believable than previous ones. (except the 6GB.. that's for Quadros)
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Originally posted by: OCguy
That sounds more credible than anything Charlie has ever written, but I just want benchmarks at this point.

1.5GB memory standard? Sweet!

Guess we can speculate the lower end cards (GT350) will have 768MB memory? Maybe the 3GB card is the GTX395?

 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Originally posted by: dguy6789
Only 384-bit memory? What happened to the 512-bit + GDDR5?

My guess is it's not needed. At 384-bit, bandwidth is already extreme, so why waste the extra money on a 512-bit board?

Looks good so far, but it would be nice to get some more info on performance. Considering this is a new architecture, it's tough to say what the GPU will be capable of.
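
For the curious, the bandwidth math behind that 384-bit point is simple: peak throughput is bus width times per-pin data rate. A quick back-of-the-envelope sketch; the 4.8 Gbps per-pin rate is an illustrative guess, not a confirmed Fermi memory clock.

// Back-of-the-envelope GDDR5 bandwidth math. The 4.8 Gbps per-pin
// data rate is hypothetical; Fermi's memory clocks weren't public yet.
#include <stdio.h>

static double bandwidth_gb_s(int bus_width_bits, double gbps_per_pin) {
    // Bits per second across the whole bus, divided by 8 to get bytes.
    return bus_width_bits * gbps_per_pin / 8.0;
}

int main(void) {
    printf("384-bit: %.1f GB/s\n", bandwidth_gb_s(384, 4.8)); // 230.4
    printf("512-bit: %.1f GB/s\n", bandwidth_gb_s(512, 4.8)); // 307.2
    return 0;
}

So the jump to 512-bit buys roughly a third more bandwidth at a real cost in board complexity, which is presumably the trade-off being weighed.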
 

Stoneburner

Diamond Member
May 29, 2003
3,491
0
76
So, Nvidia is releasing one product that'll keep the 5800 series and Larrabee at bay in separate fields. That'd be a truly impressive feat if accomplished.
 

imported_Shaq

Senior member
Sep 24, 2004
731
0
0
Is the cache usable when playing games, or is it only for the CPU features of the chip? Can anyone hazard a guess?
 

Red Storm

Lifer
Oct 2, 2005
14,233
234
106
Originally posted by: Stoneburner
So, Nvidia is releasing one product that'll keep 5800 and larrabee at bay in separate fields. That'd be a truly impressive feat if accomplished.

If it retails close to the 5870's price, definitely. However I think we'll be seeing a very high performance card (higher than 5870 or there's no point to it) with an equally very high price tag. Price/performance ratio is what it's all about. Very few people are going to spend over $500 on a single GPU.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Originally posted by: toyota


:D
I asked two people at NVIDIA why Fermi is late; NVIDIA's VP of Product Marketing, Ujesh Desai and NVIDIA's VP of GPU Engineering, Jonah Alben. Ujesh responded: because designing GPUs this big is "fucking hard".



That really says nothing new about the timing. He says "widespread" Q1, which leaves him wiggle-room for parts available for Xmas.

The 5870 is barely even available a week after launch, but you can still get them. That is probably what we will see with this card as well.

Oh, and @ the "hard" quote :laugh:
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: OCguy
Originally posted by: toyota


:D
I asked two people at NVIDIA why Fermi is late; NVIDIA's VP of Product Marketing, Ujesh Desai and NVIDIA's VP of GPU Engineering, Jonah Alben. Ujesh responded: because designing GPUs this big is "fucking hard".



That really says nothing new about the timing. He says "widespread" Q1, which leaves him wiggle-room for parts available for Xmas.

The 5870 is barely even available a week after launch, but you can still get them. That is probably what we will see with this card as well.

Oh, and @ the "hard" quote :laugh:

The 5870 is readily available. They had some parts on the day of the launch, and a week later they are easy to get. This was a good launch for AMD. Dell must not have gotten 90% of the cards, or else AMD is pumping out a LOT of cards. It's looking more and more like Kyle and the other rumors pointing to a later rather than sooner launch are correct... The 5870 (and even more so in my opinion, the 5850) will be 'the' high-end card(s) to get for a few months at least, if all this is correct.
 

Kakkoii

Senior member
Jun 5, 2009
379
0
0
Originally posted by: wlee15
Fermi's dual warp scheduler selects two warps, and issues one instruction from each warp to a group of sixteen cores, sixteen load/store units, or four SFUs.

I'm not sure you can really call this MIMD.

http://www.nvidia.com/content/...itectureWhitepaper.pdf

Oh awesome find.

Found this interesting:

o Full IEEE 754-2008 32-bit and 64-bit precision
o Full 32-bit integer path with 64-bit extensions
o Memory access instructions to support transition to 64-bit addressing

• NVIDIA GigaThread™ Engine
o 10x faster application context switching
o Concurrent kernel execution
o Out of Order thread block execution
o Dual overlapped memory transfer engines
Hardware Execution
CUDA's hierarchy of threads maps to a hierarchy of processors on the GPU; a GPU executes
one or more kernel grids; a streaming multiprocessor (SM) executes one or more thread blocks;
and CUDA cores and other execution units in the SM execute threads. The SM executes
threads in groups of 32 threads called a warp. While programmers can generally ignore warp
execution for functional correctness and think of programming one thread, they can greatly
improve performance by having threads in a warp execute the same code path and access
memory in nearby addresses.
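
For anyone who hasn't touched CUDA, here's a minimal sketch of how that grid/block/warp hierarchy looks in code. The kernel and launch sizes are invented for illustration; only the warp size of 32 comes from the whitepaper.

// Minimal CUDA sketch of the thread hierarchy described above.
#include <cuda_runtime.h>

__global__ void scale(float *data, float k, int n) {
    // Grid of blocks -> block of threads; each thread handles one element.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // Consecutive threads in a warp (groups of 32) touch consecutive
    // addresses here, which is the coalesced pattern the paper recommends.
    if (i < n) data[i] *= k;
}

int main() {
    const int n = 1 << 20;
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    // 256 threads per block = 8 warps; enough blocks to cover n elements.
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}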
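
And since the feature list above mentions concurrent kernel execution, this is roughly how that gets exposed through CUDA streams. Again just a sketch with a made-up kernel; whether the two launches actually overlap depends on the hardware (Fermi was the first GPU that could do it).

// Sketch: two kernels launched into separate CUDA streams, which the
// GPU may run concurrently if resources allow. Kernel is illustrative.
#include <cuda_runtime.h>

__global__ void busy(float *p, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] = p[i] * 0.5f + 1.0f;
}

int main() {
    const int n = 1 << 16;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);
    // Different streams, so these launches are independent of each other.
    busy<<<(n + 255) / 256, 256, 0, s1>>>(a, n);
    busy<<<(n + 255) / 256, 256, 0, s2>>>(b, n);
    cudaDeviceSynchronize();
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    return 0;
}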
 

wlee15

Senior member
Jan 7, 2009
313
31
91
Originally posted by: Kakkoii
Originally posted by: wlee15
Fermi's dual warp scheduler selects two warps, and issues one instruction from each warp to a group of sixteen cores, sixteen load/store units, or four SFUs.

I'm not sure you can really call this MIMD.

http://www.nvidia.com/content/...itectureWhitepaper.pdf

Oh awesome find.

Found this interesting:

o Full IEEE 754-2008 32-bit and 64-bit precision
o Full 32-bit integer path with 64-bit extensions
o Memory access instructions to support transition to 64-bit addressing

Here's the home page for it.

http://www.nvidia.com/object/fermi_architecture.html
 

cusideabelincoln

Diamond Member
Aug 3, 2008
3,275
46
91
Anand wrote:
Fermi will support DirectX 11 and NVIDIA believes it'll be faster than the Radeon HD 5870 in 3D games. With 3 billion transistors, it had better be.

Damn straight. This should outperform the GTX295 (by how much, I'm not sure); if it doesn't, it will be considered a disappointment (for gaming).

Ujesh is willing to take total blame for GT200. As manager of GeForce at the time, Ujesh admitted that he priced GT200 wrong. NVIDIA looked at RV670 (Radeon HD 3870) and extrapolated from that to predict what RV770's performance would be. Obviously, RV770 caught NVIDIA off guard and GT200 was priced much too high.

Well that's nice. I would hope they wouldn't make the same mistake again, because this time ATI's card has come out first.

I asked Jonah if that meant Fermi would take a while to move down to more mainstream pricepoints. Ujesh stepped in and said that he thought I'd be pleasantly surprised once NVIDIA is ready to announce Fermi configurations and price points. If you were NVIDIA, would you say anything else?

Jonah did step in to clarify. He believes that AMD's strategy simply boils down to targeting a different price point. He believes that the correct answer isn't to target a lower price point first, but rather build big chips efficiently. And build them so that you can scale to different sizes/configurations without having to redo a bunch of stuff. Putting on his marketing hat for a bit, Jonah said that NVIDIA is actively making investments in that direction. Perhaps Fermi will be different and it'll scale down to $199 and $299 price points with little effort? It seems doubtful, but we'll find out next year.

It would be nice to see Nvidia bring their new architecture to all price points, unlike what has happened with the GeForce GT, GTS, and GTX, where the GT and GTS are re-branded GPUs from the "last generation". Considering the much bigger changes in Fermi, they would have to scale the GPU down for the lower price brackets.
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
It's probably safe to say that this will be faster than the 5870 but slower than the 5870X2. I hope for Nvidia's sake that it will be sold at a lower MSRP than the 5870X2. I'm also willing to bet that the same people who said you can compare the GTX295 to the 5870 will now say you can't compare Fermi to the 5870X2.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
Originally posted by: her209
Jesus, what kind of power supply do you need to run that?

Supposedly they've improved power consumption considerably, to the point where (I read this a few weeks ago, so I can't remember where) the top-end Fermi card will have a lower TDP than a GTX 280/285.
 

nyker96

Diamond Member
Apr 19, 2005
5,630
2
81
The more I read about Fermi, the more I believe NV is moving towards the CPU business with a general-purpose solution that can do both, much like Larrabee. This just means AMD, Intel, and NV are all combining the GPU into the CPU for the future compute model. I wonder which company will dominate the landscape in 5-10 years. Anyways, such a big chip (3 billion transistors) must need some hefty cooling solution. It's about 40% more transistors than the HD 58xx, so assuming the same process, I'd guess roughly 40% faster overall when it comes out. How much they will charge for this monster is another thing.
 

Kakkoii

Senior member
Jun 5, 2009
379
0
0
I really don't like the idea of our CPUs also being our GPUs. GPUs advance a lot quicker than CPUs do. I don't want to have to upgrade my CPU/GPU every time I want better graphics performance.
 

nyker96

Diamond Member
Apr 19, 2005
5,630
2
81
Originally posted by: Kakkoii
I really don't like the idea of our CPUs also being our GPUs. GPUs advance a lot quicker than CPUs do. I don't want to have to upgrade my CPU/GPU every time I want better graphics performance.

Good point, I didn't think of the trend like that. But hey, it's coming soon; I guess if you want a better graphics card, you'll get a better CPU. Look at it this way, though: if you need more memory, you just upgrade the main memory and be done with it, since both will use the same memory in the future.