
D10U-30 from nVidia

What he really meant to say is that cards are bottlenecked by one or the other, so that adding SP power doesn't show bigger gains.
 
The flops per pixel in current games haven't reached a point where increasing SP power leads to noticeable performance gains. The current SP/TMU/ROP ratio is already slanted towards SP, but as expected, future games will require more and more SP power.
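The "flops per pixel" idea above can be put in back-of-envelope terms: divide shader throughput by pixels drawn per second. All figures below are illustrative assumptions (not specs of any actual card), just to show the arithmetic.

```python
# Back-of-envelope sketch of the SP-vs-fillrate balance discussed above.
# The numbers are hypothetical, chosen only to illustrate the ratio.

def flops_per_pixel(shader_gflops, pixels_per_frame, fps):
    """Shader FLOPs available per pixel drawn, at a given resolution and frame rate."""
    pixels_per_second = pixels_per_frame * fps
    return (shader_gflops * 1e9) / pixels_per_second

# Hypothetical card: ~500 GFLOPs of SP throughput at 1920x1200, 60 fps.
budget = flops_per_pixel(500, 1920 * 1200, 60)
print(f"~{budget:.0f} flops available per pixel")
```

If a game's shaders use far fewer flops per pixel than that budget, the card is fill- or bandwidth-limited and extra SPs sit idle, which is the bottleneck argument being made.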
 
Originally posted by: Cookie Monster
I'm thinking it has to do with better memory management or some new compression method/algorithm in G94 vs. G92 when it comes to performing AA. G94s seem to take less of a hit when doing AA compared to their G92 counterparts. Could also be because of driver issues.

ROPs are hardly the bottleneck (G92 vs. G80). What it comes down to is two things: poor memory management (ALL G92-based cards take a nosedive in performance when they run out of memory, compared to ATi cards, which can only mean that nV has a rather inefficient way of handling its memory) and rather inefficient AA (even with the G80 Ultra you still see large AA performance hits, meaning that the AA isn't too efficient either).

GT200 (NV55) will probably be a more refined G80 (NV50) with a 512-bit memory interface, something like NV40 was to G70/NV47. By the time we get to 2009~10, we will see nVIDIA's true next gen (NV60), in time for Larrabee, when the whole Intel vs. nVIDIA battle will take place. Quite looking forward to this.

I don't think the G94 is better optimized for AA than G92. They have the same memory bandwidth and framebuffer, so they are obviously going to be limited in the same way: the 8800GT won't be able to flex its shader muscles, which apparently makes it look like the 9600GT has better AA. The same kinds of things happen with Crysis.
 
FYI, don't know if anyone posted it yet, but my contact told me that a PhysX chip will be integrated on the new high-end cards ^^
Hardware PhysX support on a VGA card, FINALLY ;-)
Got a presentation next week, can't wait to see it and to see, smell and feel the new high-end cards 8)
 
Originally posted by: MXD TESTL4B
FYI, don't know if anyone posted it yet, but my contact told me that a PhysX chip will be integrated on the new high-end cards ^^
Hardware PhysX support on a VGA card, FINALLY ;-)
Got a presentation next week, can't wait to see it and to see, smell and feel the new high-end cards 8)

And we should believe you because...?
 
LOL -.-

You will see, hamster....best is if I stop posting here....if no one believes me 😛

WHY SHOULD I LIE? gosh....

My Nvidia Contact (Field Engineer) told me THE NEWS! Shall I ask him too: WHY SHOULD I BELIEVE YOU....?

Everyone believed my first post in this thread about:

D10U-30 -> 1024MB GDDR3 -> 512Bit -> 225Watt - 250Watt power consumption [connectors: 2x4 (8pin) & 2x3 (6pin)]
D10U-20 -> 896MB GDDR3 -> 448Bit -> 225Watt - 250Watt power consumption [connectors: 2x4 (8pin) & 2x3 (6pin)]
Supporting Hybrid Power & Hybrid Boost.....get your power supply prepared for the 8pin connectors in the future

You think I made this up too?? C'mon, what a question....


EDIT: If you google a bit, you'll find similar announcements....go and don't believe them too ;-)
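For what the rumored bus widths above would mean in practice: peak memory bandwidth is just bytes per transfer times the effective transfer rate. The memory clock below is an assumption (GDDR3 effective rates around 2.0 GHz were typical for the era), not part of the leaked specs.

```python
# Rough bandwidth math for the rumored 512-bit and 448-bit bus widths.
# The 2000 MHz effective GDDR3 rate is a hypothetical, era-typical value.

def bandwidth_gbps(bus_bits, effective_mhz):
    """Peak memory bandwidth in GB/s: (bytes per transfer) * (transfers per second)."""
    return (bus_bits / 8) * effective_mhz * 1e6 / 1e9

wide = bandwidth_gbps(512, 2000)  # hypothetical D10U-30-style part
cut = bandwidth_gbps(448, 2000)   # hypothetical D10U-20-style part
print(f"512-bit: {wide:.0f} GB/s, 448-bit: {cut:.0f} GB/s")
```

This is why a 448-bit part with the same memory clock lands at exactly 7/8 of the full part's bandwidth, mirroring the 896MB vs. 1024MB framebuffer split.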
 
News was that nVidia is to integrate hardware physics acceleration into the GPU design and not package a separate PPU to go along with the GPU (while not the best solution for maximum performance, it does make the most sense, by far, from a business standpoint).

If that's what you're trying to say, then yes, it has been posted before. If you're trying to tell us that our new high-end cards will also feature a unique PPU for physics processing, then forgive us if we call shens and disbelieve.
 
Originally posted by: Azn
What he really meant to say is that cards are just bottlenecked by one or the other that SP doesn't show bigger gains.
You're totally right about that.
 
There is no "PPU" in terms of hardware. Basically what nVIDIA has done with PhysX is that theyve incorporated the PhysX API to be compatible with the GPU hardware that already exists. GPUs are already massive parallel computing beasts, and thats exactly what you want for physics (since they involve alot of calculations).
 