D10U-30 from nVidia

AzN

Banned
Nov 26, 2001
4,112
2
0
What he really meant to say is that the cards are bottlenecked by one thing or another, so adding SP power doesn't show bigger gains.
 

superbooga

Senior member
Jun 16, 2001
333
0
0
The flops per pixel in current games haven't reached the point where increasing SP power leads to noticeable performance gains. The current SP/TMU/ROP ratio is slanted too far towards SP, but as expected, future games will require more and more SP power.
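For anyone who wants to put a rough number on "flops per pixel", here is a minimal back-of-the-envelope sketch; the shader throughput, resolution, and frame rate used below are illustrative assumptions, not the specs of any particular card.

```cuda
// Rough back-of-the-envelope estimate of flops available per pixel per frame.
// Host-only sketch; all numbers are assumptions for illustration.
#include <cstdio>

int main() {
    const double shader_gflops    = 500.0;            // assumed shader throughput
    const double pixels_per_frame = 1920.0 * 1200.0;  // assumed render resolution
    const double frames_per_sec   = 60.0;             // assumed target frame rate

    // Shader work the card could spend on each pixel every frame.
    const double flops_per_pixel =
        (shader_gflops * 1e9) / (pixels_per_frame * frames_per_sec);

    printf("~%.0f flops available per pixel per frame\n", flops_per_pixel);
    return 0;
}
```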
 

Piuc2020

Golden Member
Nov 4, 2005
1,716
0
0
Originally posted by: Cookie Monster
I'm thinking it has to do with better memory management or some new compression method/algorithm behind the G94 vs the G92 when it comes to performing AA. G94s seem to take less of a hit when doing AA compared to their G92 counterparts. Could also be because of driver issues.

ROPs are hardly the bottleneck (G92 vs G80). What it comes down to is two things: poor memory management (ALL G92-based cards take a nose dive in performance when they run out of memory compared to ATi cards, which can only mean that nV has a rather inefficient way of handling its memory) and rather inefficient AA (even with the G80 Ultra you still see large AA performance hits, meaning that the AA isn't too efficient either).

GT200 (NV55) will probably be a more refined G80 (NV50) with a 512-bit memory interface, something like what the G70/NV47 was to NV40. By the time we get to 2009~10, we will see nVIDIA's true next gen (NV60) in time for Larrabee, which is when the whole Intel vs nVIDIA battle will take place. Quite looking forward to this.

I don't think the G94 is better optimized for AA than the G92; they have the same memory bandwidth and framebuffer, so they are obviously going to be limited in the same way. The 8800GT won't be able to flex its shader muscles, which apparently makes it look like the 9600GT has better AA. These same kinds of things happen with Crysis.
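A rough sketch of why AA eats into the framebuffer so quickly: under multisampling, the color and depth buffers are stored per sample, so the render-target footprint scales roughly with the sample count. The sizes below are simplified assumptions that ignore compression and driver overhead.

```cuda
// Simplified estimate of render-target memory under 4x MSAA (host-only sketch).
// Ignores color/Z compression and driver overhead; numbers are assumptions.
#include <cstdio>

int main() {
    const double width = 1920.0, height = 1200.0;
    const double bytes_color = 4.0;   // RGBA8 color
    const double bytes_depth = 4.0;   // 24-bit depth + 8-bit stencil
    const int    samples     = 4;     // 4x MSAA

    // Color and depth are kept per sample with multisampling.
    const double bytes = width * height * (bytes_color + bytes_depth) * samples;
    printf("~%.0f MB for a 4x MSAA color+depth target\n",
           bytes / (1024.0 * 1024.0));
    return 0;
}
```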
 

MXD TESTL4B

Junior Member
Apr 7, 2008
15
0
0
FYI, don't know if anyone posted it yet, but my contact told me that a PhysX chip will be integrated on the new high-end cards ^^
Hardware PhysX support on a VGA card, FINALLY ;-)
Got a presentation next week, can't wait to see it and to see, smell and feel the new high-end cards 8)
 

angry hampster

Diamond Member
Dec 15, 2007
4,232
0
0
www.lexaphoto.com
Originally posted by: MXD TESTL4B
FYI, don't know if anyone posted it yet, but my contact told me that a PhysX chip will be integrated on the new high-end cards ^^
Hardware PhysX support on a VGA card, FINALLY ;-)
Got a presentation next week, can't wait to see it and to see, smell and feel the new high-end cards 8)

And we should believe you because...?
 

MXD TESTL4B

Junior Member
Apr 7, 2008
15
0
0
LOL -.-

You will see, hampster....best if I stop posting here....if no one believes me :p

WHY SHOULD I LIE ? gosh....

My Nvidia Contact (Field Engineer) told me THE NEWS! Shall I ask him too: WHY SHOULD I BELIEVE YOU....?

Everyone believed my first post in this thread about:

D10U-30 -> 1024MB GDDR3 -> 512Bit -> 225Watt - 250Watt power consumption [connectors: 2x4 (8pin) & 2x3 (6pin)]
D10U-20 -> 896MB GDDR3 -> 448Bit -> 225Watt - 250Watt power consumption [connectors: 2x4 (8pin) & 2x3 (6pin)]
Supporting Hybrid Power & Hybrid Boost.....get your power supply prepared for the 8pin connectors in the future ###

You think I made this up too?? C'mon, what a question....


EDIT: If you google a bit, you'll find similar announcements....go ahead and don't believe them either ;-)
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
The news was that nVidia is going to integrate hardware physics acceleration into the GPU design rather than package a separate PPU alongside the GPU (while not the best solution for maximum performance, it does make the most sense, by far, from a business standpoint).

If that's what you're trying to say, then yes, it has been posted before. If you're trying to tell us that our new high-end cards will also feature a dedicated PPU for physics processing, then forgive us if we call shens and disbelieve.
 

DwayneZ

Junior Member
Jun 5, 2008
11
0
0
Originally posted by: Azn
What he really meant to say is that the cards are bottlenecked by one thing or another, so adding SP power doesn't show bigger gains.
You're totally right about that.
 

MXD TESTL4B

Junior Member
Apr 7, 2008
15
0
0
Hi,

No, I'm not talking about a single PPU....that would be too much of a bottleneck ^^ Sry for the misunderstanding ;)
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
There is no "PPU" in terms of hardware. Basically, what nVIDIA has done with PhysX is adapt the PhysX API to run on the GPU hardware that already exists. GPUs are already massively parallel computing beasts, and that's exactly what you want for physics (since it involves a lot of calculations).
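To illustrate why a GPU maps so well onto this kind of work, here is a minimal CUDA-style sketch; it is not PhysX's actual code, and the kernel name, data layout, and launch parameters are assumptions for illustration. Each particle gets its own thread, so thousands of bodies are integrated in parallel.

```cuda
// Minimal CUDA sketch of a per-particle physics step: one thread per particle.
// NOT PhysX source code; it only illustrates the parallel pattern.
#include <cuda_runtime.h>

__global__ void integrate(float3* pos, float3* vel, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    const float gravity_y = -9.81f;   // simple constant gravity

    // Semi-implicit Euler: update velocity, then position, independently per particle.
    vel[i].y += gravity_y * dt;
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

// Host-side launch (d_pos, d_vel, and n are assumed to already exist on the device):
//   int threads = 256;
//   int blocks  = (n + threads - 1) / threads;
//   integrate<<<blocks, threads>>>(d_pos, d_vel, n, 1.0f / 60.0f);
```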