
NVIDIA G70 - What in the blue hell?...

Both companies are being very tight lipped about their next gen products. I think they're both going to try to catch the other with their pants down, which means awesome products from both, which means a win for us, the consumers.
 
Originally posted by: Insomniak
Both companies are being very tight lipped about their next gen products. I think they're both going to try to catch the other with their pants down, which means awesome products from both, which means a win for us, the consumers.

But hopefully, last year's scenario won't happen again (availability).

Originally posted by: geforcetony
Originally posted by: Jeff7181
Screw that... let's just jump to 128 pipelines. 😀

ROFL :laugh:

Yeah, and then we'd get 230FPS in DooM III @ 1600x1200 w/ 8xAA 16xAF

But this card already does that. 😉

Originally posted by: otispunkmeyer
I heard that this chip would have 32 pipelines, 24 of which were real

Yes, this confused the hell out of me, but I dunno, maybe that's true. Maybe they have a sort of Hyper-Threading for graphics cards, with 8 virtual pipes, just like Intel has a virtual CPU.

I like the 24-pipe idea, because this will mean the upcoming midrange cards will be 16 pipes minimum. Imagine a 16-pipe GeForce 7600GT for $180 a year from now :drools:
 
Originally posted by: geforcetony
This has been bugging me since I first saw this G70. What in the hell? I saw that NV50 had been cancelled, but then NVIDIA said that it wasn't? This is all really confusing to me. I have searched and searched on the net, and all I could come up with was some rumored specs:

600MHz Core, 650-700MHz GDDR3/4
24/32 Pixel Processing Pipelines
90nm process @ TSMC
512MB RAM
Possible (FULL) WGF support

That's about it, and if anyone has updated specs or details, post them here. I am guessing that it's gonna be called the GeForce 7-Series, but that could change. NVIDIA, will you PLEASE get off this GeForce thing. It's even older than Voodoo was back in the day. I'd like to see something other than GeForce from them, as it could possibly help their marketing. Also, no flaming. This is just a simple information query, nothing more. Definitely not a flame war. Hey, I can rime! 😛

Edit: Forgot to add the part about the 24/32 pixel pipelines. Seems NVIDIA is really trying to up the ante on ATI (with R520's 24 pipelines).
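Purely as a back-of-the-envelope check on these rumors, the raw numbers can be compared against NV40's. This is a sketch only: the G70 figures are the rumored specs above, not confirmed numbers; the NV40 reference is the shipping 6800 Ultra (16 pipes at 400MHz).

```python
# Peak theoretical pixel fillrate = pipelines x core clock.
# All G70 figures are rumors from this thread; NV40 figures are
# the shipping 6800 Ultra's.

def fillrate_gpix(pipes, core_mhz):
    """Peak fillrate in Gpixels/s."""
    return pipes * core_mhz / 1000.0

nv40   = fillrate_gpix(16, 400)  # 6.4 Gpix/s (6800 Ultra)
g70_24 = fillrate_gpix(24, 600)  # 14.4 Gpix/s (rumored 24-pipe G70)
g70_32 = fillrate_gpix(32, 600)  # 19.2 Gpix/s (rumored 32-pipe G70)

print(nv40, g70_24, g70_32)
```

Even the 24-pipe rumor would more than double NV40's theoretical fillrate, which is why the 600MHz core clock is the part that looks optimistic.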

Yep, but u sure can't spell! (rhyme...)😉
 
Originally posted by: jim1976
Originally posted by: slash196
My GT is starting to feel small...

*plays FarCry with level 7 HDR*

Ahh...thats better.

:disgust: Yeah SLI is essential because I PLAY everything @16x12 with 8xS

The only game that brings my GT to its knees and makes it unplayable is Chronicles of Riddick @ greater than 12x10 with SM2.0+ enabled...


Ever played EQ2 😛
 
any indication that the next-gen chips are on track for a release one year after the 6x00 series launched a year ago? Because that's like... 2 months away. 🙁

<<angry that his 6800gt, which he bought in july, has been in STORAGE, UNUSABLE, for the last 6 months 🙁 bleh.
 
Originally posted by: Genx87
It seems odd they would only add two more quads over the NV4.x

You aren't significantly increasing your pixel-pushing power, unless they think they can attain a much higher clock than the NV4.x.

I don't buy into the 32 virtual/24 real stuff.
What could they mean by that? 24 pipes dedicated to pixel, with 8 configurable to perform pixel or vertex operations?


Actually, from what I read, the chip will have 24 real pipes, and the reason there's a slash there is that there have also been speculations that it may include 32 real pipes! Obviously it is still unknown, but this would be kick-ass!
 
Originally posted by: geforcetony
Originally posted by: Genx87
It seems odd they would only add two more quads over the NV4.x

You aren't significantly increasing your pixel-pushing power, unless they think they can attain a much higher clock than the NV4.x.

I don't buy into the 32 virtual/24 real stuff.
What could they mean by that? 24 pipes dedicated to pixel, with 8 configurable to perform pixel or vertex operations?


Actually, from what I read, the chip will have 24 real pipes, and the reason there's a slash there is that there have also been speculations that it may include 32 real pipes! Obviously it is still unknown, but this would be kick-ass!

I forget why, but IIRC the max they can have for this generation is 24. I can't remember what forum thread I saw it on, but they had support for it.

-Kevin
 
Originally posted by: Gamingphreek
I forget why, but IIRC the max they can have for this generation is 24. I can't remember what forum thread I saw it on, but they had support for it.

-Kevin

Who knows. Maybe only 24 pipes, but I saw somewhere, not sure where, that there is a possibility of 32 pipelines. I personally think it is much more likely that there will be 24 pipes, but it would be awesome-sauce to see the chip with 32 pipes, huh?
 
Anyway, a 50% larger core at 600MHz seems a bit optimistic, even considering the move to 90nm.

It could well be that they are targeting 65nm. Not saying they are going to, but that is what Sony is aiming for with the PS3, and they are tooling up their foundries for it.

I don't buy into the 32 virtual/24 real stuff.
What could they mean by that? 24 pipes dedicated to pixel, with 8 configurable to perform pixel or vertex operations?

Trying to think of a good way to explain this......

Say you are in a store and there are 32 registers, but only 24 people are allowed to leave at a time. Because of the amount of time spent at the registers, the bottleneck is still obviously going to be there and not at the doors to get out. Moving forward, as shader complexity starts to become real, having an insane number of pixel pipes isn't going to do you much good if they are sitting around waiting for a shader computation to complete. The more shader-heavy games get, the more we will likely see architectures start to move in this direction. For a 256-bit memory bus, going over 16 pipes is pushing it pretty far; when games like U3 hit with a decent shader load, it is likely that a part with 48 "virtual" and 16 actual pixel pipes would trounce a traditional straight 24-pipe part, all else being equal.
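The store analogy can be turned into a toy throughput model (the numbers are purely illustrative, not any real chip's): per-clock throughput is capped by whichever stage is scarcer, shader ALUs divided by per-pixel shader cost, or real output pipes.

```python
# Toy model of "virtual" vs real pixel pipes.
# Each pixel needs `shader_cycles` of ALU time before it can use
# one of the real output pipes, so per-clock throughput is the
# smaller of the two stages.

def pixels_per_clock(alus, output_pipes, shader_cycles):
    return min(alus / shader_cycles, float(output_pipes))

# Shader-light work (1 cycle/pixel): the straight 24-pipe part wins.
print(pixels_per_clock(48, 16, 1))  # 16.0 (capped by the real pipes)
print(pixels_per_clock(24, 24, 1))  # 24.0

# Shader-heavy work (4 cycles/pixel): the 48-virtual/16-real part wins.
print(pixels_per_clock(48, 16, 4))  # 12.0
print(pixels_per_clock(24, 24, 4))  # 6.0
```

The crossover is the whole argument: once shader cost per pixel dominates, ALU count matters more than how many pixels can exit per clock.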
 
Originally posted by: BenSkywalker
Trying to think of a good way to explain this......

Say you are in a store and there are 32 registers, but only 24 people are allowed to leave at a time. Because of the amount of time spent at the registers, the bottleneck is still obviously going to be there and not at the doors to get out. Moving forward, as shader complexity starts to become real, having an insane number of pixel pipes isn't going to do you much good if they are sitting around waiting for a shader computation to complete. The more shader-heavy games get, the more we will likely see architectures start to move in this direction. For a 256-bit memory bus, going over 16 pipes is pushing it pretty far; when games like U3 hit with a decent shader load, it is likely that a part with 48 "virtual" and 16 actual pixel pipes would trounce a traditional straight 24-pipe part, all else being equal.

Never thought about it like that, but what you said, frankly, makes perfect sense. I am starting to wonder: since, as you said, 16 pipes is already pushing a 256-bit memory bus, when are we going to see a 512-bit bus? That would alleviate the "strain" on the memory bus altogether, wouldn't it?
 
Originally posted by: geforcetony
Originally posted by: Gamingphreek
I forget why, but IIRC the max they can have for this generation is 24. I can't remember what forum thread I saw it on, but they had support for it.

-Kevin

Who knows. Maybe only 24 pipes, but I saw somewhere, not sure where, that there is a possibility of 32 pipelines. I personally think it is much more likely that there will be 24 pipes, but it would be awesome-sauce to see the chip with 32 pipes, huh?

What I have heard is that for NVIDIA we are talking about non-low-k 110nm at TSMC and roughly 11% more die space than NV40. Room enough for 6 quads, 16 ROPs and 8 VS. It seems logical to me for the core/mem clocks to remain at approximately the same speeds as NV40.
As for R5xx, it might have 4 quads. R520 should be >= 600MHz.

What we say here is pure speculation, but here are some thoughts I have. IHVs can't ignore production costs. Yes, 32 SIMD channels at 90nm are of course possible, BUT at what core/mem speed? Over 250-300MHz? And if so, what power consumption and cooling systems would these monsters need? And most importantly, how rare would this model be, and how much would it have to cost to cover the production costs? If today $500+ GPUs represent 1-1.5% of the total market, then for a GPU like this the price would reach the limits of madness!!

Also, people should realise that 16 ROPs is more than enough for today, so I wouldn't expect a simultaneous increase in quads/ROPs in the near future.

With WGF 2.0, an increase in the number of ROPs isn't necessarily needed once unified shaders take over. For example, instead of 24 PS + 8 VS = 32 units, the same work can automatically be achieved with 24 unified units + a geometry shader (and a tessellation unit if needed).

Just my 2 cents. Don't expect monstrous implementations in the near future

 
Originally posted by: Jeff7181
Originally posted by: geforcetony
Originally posted by: Jeff7181
Screw that... let's just jump to 128 pipelines. 😀

ROFL :laugh:

Yeah, and then we'd get 230FPS in DooM III @ 1600x1200 w/ 8xAA 16xAF

I want a 1024-bit memory bus to go with it... with 2 GB of memory onboard.

Hehe... the GPU core would be 3 inches x 3 inches, lol.


You need to talk to Bitboyz, they'll fab a great gfx card for you!
 
Originally posted by: BenSkywalker

Trying to think of a good way to explain this......

Say you are in a store and there are 32 registers, but only 24 people are allowed to leave at a time. Because of the amount of time spent at the registers, the bottleneck is still obviously going to be there and not at the doors to get out. Moving forward, as shader complexity starts to become real, having an insane number of pixel pipes isn't going to do you much good if they are sitting around waiting for a shader computation to complete. The more shader-heavy games get, the more we will likely see architectures start to move in this direction. For a 256-bit memory bus, going over 16 pipes is pushing it pretty far; when games like U3 hit with a decent shader load, it is likely that a part with 48 "virtual" and 16 actual pixel pipes would trounce a traditional straight 24-pipe part, all else being equal.

So what you are saying is the output will actually be something like 24 pipelines, but it will have the ability to process more shader operations at a time? So the 32 virtual is just more execution units that don't have the ability to directly output?

It makes more sense then, if the computational part will become a heavier burden than the pixel-pushing ability.
 
i dunno...i really doubt that those are real specs...

oh yeah, i heard that Nvidia was making a new card, and that they said it would blow the new ATI cards out of the water...those specs say otherwise...
 
Originally posted by: hans030390
i dunno...i really doubt that those are real specs...

oh yeah, i heard that Nvidia was making a new card, and that they said it would blow the new ATI cards out of the water...those specs say otherwise...

What do you mean? With specs like those (assuming that G70's specs are those), G70 would blow R520 out of the water, clear shot, done deal. However, I don't think those will be the specs, but NVIDIA may still have something up its sleeve that we still don't know about (512-bit memory bus anybody?) 😉
 
No, they are not going to have a 512-bit memory bus. Get it out of your head 😉.

Also, with those specs there isn't going to be any blowing out of the water. Why don't we reserve judgement and the flames until these cards actually come out.

-Kevin
 
Originally posted by: Gamingphreek
No, they are not going to have a 512-bit memory bus. Get it out of your head 😉.

Also, with those specs there isn't going to be any blowing out of the water. Why don't we reserve judgement and the flames until these cards actually come out.

-Kevin

No no no Kevin! We MUST flame, it is a stipulation for anyone talking about GPUs in the Video Forums 😉 [::cough:: PVP debacle, Rollo's SLI, etc etc ::cough::]
 
Originally posted by: Gamingphreek
No, they are not going to have a 512-bit memory bus. Get it out of your head 😉.

Also, with those specs there isn't going to be any blowing out of the water. Why don't we reserve judgement and the flames until these cards actually come out.

-Kevin

Yeah, I know there isn't gonna be a 512-bit bus, but here's to hoping... :beer: .
 
Originally posted by: fbrdphreak
No no no Kevin! We MUST flame, it is a stipulation for anyone talking about GPU's in the Video Forums 😉 [::cough:: PVP debacle, Rollo's SLI, etc etc ::cough::]

ROFL ROFL ROFL :laugh: :laugh: :laugh:
 
Originally posted by: geforcetony
Originally posted by: imverygifted
That mobile 6800 Ultra did pretty damn well in the AnandTech benchmarks, so I'm guessing NVIDIA might release a 6900 or something using that chip, or something like it

No, the "6900" would be based on the soon-to-be-released NV47, which is supposed to be a 24-pipe part, but again, nothing's concrete. G70 would be a new-generation chip, like going from NV30 to NV40 was. There would be nothing based on this chip in the 6-Series family, as it is rumored that it will become the GeForce 7-Series.

a 24-pipe part would not be a GeForce 6-named part, unless its performance is too piss-poor to be named anything better. A 6900 would simply be an updated/faster 6800 Ultra, say with a 500MHz core like the Go 6800U, only with 16 pipes instead of 12.
 
Originally posted by: bunnyfubbles
a 24-pipe part would not be a GeForce 6-named part, unless its performance is too piss-poor to be named anything better. A 6900 would simply be an updated/faster 6800 Ultra, say with a 500MHz core like the Go 6800U, only with 16 pipes instead of 12.

No, I did hear that, since R520 will be coming out before NVIDIA's "real" new silicon, they are going to release NV47 as the initial competitor to R520, which will essentially be an NV40 with 24 pipes instead of 16. Who knows, it's still not out yet, so it's not certain.
 
With WGF 2.0, an increase in the number of ROPs isn't necessarily needed once unified shaders take over. For example, instead of 24 PS + 8 VS = 32 units, the same work can automatically be achieved with 24 unified units + a geometry shader (and a tessellation unit if needed).

While I had been expecting unified shader hardware for some time, nVidia is now giving every indication that they won't be moving in that direction for anything currently in the design process (which should cover out to NV60 at least). It appears that their upcoming parts will function under WGF as if they had unified shader hardware, but they will be sticking to dedicated PS and VS hardware.

So what you are saying is the output will actually be something like 24 pipelines, but it will have the ability to process more shader operations at a time? So the 32 virtual is just more execution units that don't have the ability to directly output?

Yes.

I am starting to wonder: since, as you said, 16 pipes is already pushing a 256-bit memory bus, when are we going to see a 512-bit bus? That would alleviate the "strain" on the memory bus altogether, wouldn't it?

We won't see a 512-bit bus for quite some time, more than likely (in the consumer space anyway). The current trend is for increased computational complexity more than bandwidth increases (in relative terms; obviously bandwidth needs are going to increase with FP framebuffers etc.). The PCB complexity would be too prohibitive to make it viable in the near term.
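For a sense of scale, here is the bandwidth arithmetic behind the 256-bit vs 512-bit question (a sketch; the 600MHz memory clock is the rumored figure from earlier in the thread, and GDDR3 transfers data twice per clock):

```python
# Peak memory bandwidth = bus width in bytes x effective data rate.
# GDDR3 is double data rate: effective rate = 2 x memory clock.
# The 600MHz memory clock is the rumored spec, not a confirmed one.

def bandwidth_gbs(bus_bits, mem_clock_mhz):
    return (bus_bits / 8) * (mem_clock_mhz * 2) / 1000.0

print(bandwidth_gbs(256, 600))  # 38.4 GB/s on a 256-bit bus
print(bandwidth_gbs(512, 600))  # 76.8 GB/s, at roughly double the PCB traces
```

Doubling the bus doubles peak bandwidth, but every extra bit is another trace to route on the board, which is the PCB-complexity objection above.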
 
Originally posted by: BenSkywalker
With WGF 2.0, an increase in the number of ROPs isn't necessarily needed once unified shaders take over. For example, instead of 24 PS + 8 VS = 32 units, the same work can automatically be achieved with 24 unified units + a geometry shader (and a tessellation unit if needed).

While I had been expecting unified shader hardware for some time, nVidia is now giving every indication that they won't be moving in that direction for anything currently in the design process (which should cover out to NV60 at least). It appears that their upcoming parts will function under WGF as if they had unified shader hardware, but they will be sticking to dedicated PS and VS hardware.

Yeah, I know, Ben, this seems to be the case with NVIDIA, in contrast with ATI's plans. I mentioned it as an example because I hear outrageous things about 512-bit bus GPUs or 32 ROPs etc. Time will verify all this, and whether ATI or NVIDIA follows the right road.
Of course it's safer for NVIDIA; after all, they aren't the ones that are going to make this big architecture step first. But if ATI succeeds with unified shaders, then things will be tough for them. They will be a step behind, and they will have to do it some time in the future anyway. Unified shaders are the future, even if that is a "macro" case.
As I said, time will tell. But I'm surely not expecting anything monstrous in the near future.
 
Unified shaders are the future, even if that is a "macro" case.

I don't see that as being necessarily true. Having shader hardware unified can offer some potential savings in terms of die space; however, the more general-purpose units will be slower than dedicated units for both vertex and fragment shaders. Besides that, fragment shaders are going to need to increase in terms of accuracy when we start looking at more complex shading routines, particularly when we start approaching the point where some level of basic radiosity is plausible in real time. The additional accuracy that these levels of fragment shaders will require won't translate into benefits for vertex-based shader ops, and at that point it becomes questionable whether the total die space sitting idle will be greater on a unified shader architecture or one with dedicated units.

I mentioned it as an example because I hear outrageous things about 512-bit bus GPUs or 32 ROPs etc. Time will verify all this, and whether ATI or NVIDIA follows the right road.

I think it is likely we will see 32 ROPs long before we see 512-bit memory buses on consumer add-in graphics cards. I wouldn't be shocked to see the PS3's GPU with 32 ROPs given their 65nm process target and what they are looking for, although whether that will translate over into the consumer market quickly is something else entirely.
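The dedicated-vs-unified trade-off being debated here can be sketched with a toy utilization model. The 24 PS + 8 VS split is the example quoted earlier in the thread; the workload fractions are made up purely for illustration.

```python
# Toy model: dedicated PS/VS pools vs one unified shader pool.
# `pixel_frac` is the fraction of a frame's shader work that is
# pixel work; the rest is vertex work.

def dedicated_throughput(ps_units, vs_units, pixel_frac):
    # Each pool only runs its own kind of work, so whichever pool
    # is undersized for the current mix limits the whole frame.
    vertex_frac = 1.0 - pixel_frac
    return min(ps_units / pixel_frac, vs_units / vertex_frac)

def unified_throughput(units):
    # A unified pool keeps every unit busy regardless of the mix.
    return float(units)

# Mix matching the hardware split (75% pixel work): both designs tie.
print(dedicated_throughput(24, 8, 0.75), unified_throughput(32))  # 32.0 32.0

# Vertex-heavy mix (50/50): the dedicated design leaves PS units idle.
print(dedicated_throughput(24, 8, 0.50), unified_throughput(32))  # 16.0 32.0
```

This is the two sides of the argument in one picture: the unified pool only pulls ahead when the workload mix drifts away from the ratio the dedicated units were sized for, which is exactly the die-space-sitting-idle question raised above.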
 
Ok, I normally understand all the video card talk, but this highly technical stuff is above me.

I understood the concept of virtual pipelines, but is there any way you can point me to a site or post an explanation of unified shaders and ROPs; basically the more advanced parts of Ben's post.

Thanks, some of this microarchitecture stuff is a bit above me 🙂

-Kevin
 