
Nvidia reveals Specifications of GT300

Originally posted by: MarcVenice
The point being that GT200 ended up so big because, by your reasoning, its GPGPU capabilities are taking up extra die space. And I'm saying that's incorrect. You forget easily, it seems.

Don't stress yourself, he's like Wreckage: nVidia = the gods of video cards,
ATi = teh suck. No reasoning, no understanding, no impartiality, I bet their neurons have the nVidia logo inside.

Since CUDA and OpenCL are tailored to the GPU's stream processors for computing, the only thing really needed at the GPU level, besides flexibility in register management, is cache to keep data flowing through the stream processors, especially when not-very-parallel code is used, like branchy code with jumps, hops, and subroutines, or general-purpose code, and that cache uses very little die space (256K in total). ATi did the same: a GPU's stream processors are massively parallel, and sacrificing that for multi-purpose GPGPU performance would also hurt performance in games, so leave that kind of work to the CPUs. So like you stated, I find it doubtful that nVidia's huge die size is because of its GPGPU capabilities; look at the GT200 diagram and you'll see that the stream processors are identical in complexity and size to those in the G92 GPU.
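For what it's worth, the penalty branchy code pays on lockstep stream processors can be sketched in a few lines. This is a toy model with made-up cycle counts, not any vendor's actual scheduler: on SIMT hardware a group of lanes executes in lockstep, so when lanes disagree on a branch, the group runs both paths serially.

```python
# Toy model of why branchy code hurts lockstep stream processors.
# Hypothetical numbers; real GPU warp schedulers are far more complex.

WARP_SIZE = 32

def warp_cycles(lane_takes_branch, cycles_if, cycles_else):
    """Cycles one warp spends on a branch, given each lane's decision."""
    cycles = 0
    if any(lane_takes_branch):          # at least one lane runs the 'if' path
        cycles += cycles_if
    if not all(lane_takes_branch):      # at least one lane runs the 'else' path
        cycles += cycles_else
    return cycles

# Uniform branch: every lane agrees, only one path is executed.
print(warp_cycles([True] * WARP_SIZE, 10, 40))      # 10

# Divergent branch: one dissenting lane and the warp pays for both paths.
print(warp_cycles([True] * 31 + [False], 10, 40))   # 50
```

With uniform data the branch is nearly free; with divergent data the whole warp eats the cost of both sides, which is why GPGPU-friendly code avoids data-dependent branching.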

Both companies are necessary to avoid a monopoly and to get better pricing, more innovative technologies, and more choices. The 8800/9800 series is what made ATi wake up and release the HD 4800 series, which is why the GTX series dropped to almost competitive prices; heck, now you can get a nice GTX 260+ for less than $190 when it was released for more than $400. Now, with the next generation of cards like the GT300, we just have to wait and see how this is gonna end.
 
Originally posted by: ronnn
Without reading 17 pages of PR FUD, are the posted specifications accurate?

Of course they are. Would you like to place a pre-order on the new GTX367.4 graphics card straight away?
 
Originally posted by: dreddfunk
IDC - it's really not like your comparisons of memory or CPUs. What he's saying is more like this: if we design a truck to haul lumber, with a large, flat bed behind the cab, it would be no surprise that it's also good at hauling bricks. After all, it's good at hauling anything that fits into a large, flat bed.

That would be a general purpose trailer then, right? 😛 😉

What I want to know though is if they designed them shorter than usual, but daisy-chained two of them together, would it get better "tons hauled per mpg" than those CSX trains? :laugh:
 
Originally posted by: evolucion8
Originally posted by: MarcVenice
The point being that GT200 ended up so big because, by your reasoning, its GPGPU capabilities are taking up extra die space. And I'm saying that's incorrect. You forget easily, it seems.

Don't stress yourself, he's like Wreckage: nVidia = the gods of video cards,
ATi = teh suck. No reasoning, no understanding, no impartiality, I bet their neurons have the nVidia logo inside.

Since CUDA and OpenCL are tailored to the GPU's stream processors for computing, the only thing really needed at the GPU level, besides flexibility in register management, is cache to keep data flowing through the stream processors, especially when not-very-parallel code is used, like branchy code with jumps, hops, and subroutines, or general-purpose code, and that cache uses very little die space (256K in total). ATi did the same: a GPU's stream processors are massively parallel, and sacrificing that for multi-purpose GPGPU performance would also hurt performance in games, so leave that kind of work to the CPUs. So like you stated, I find it doubtful that nVidia's huge die size is because of its GPGPU capabilities; look at the GT200 diagram and you'll see that the stream processors are identical in complexity and size to those in the G92 GPU.

Both companies are necessary to avoid a monopoly and to get better pricing, more innovative technologies, and more choices. The 8800/9800 series is what made ATi wake up and release the HD 4800 series, which is why the GTX series dropped to almost competitive prices; heck, now you can get a nice GTX 260+ for less than $190 when it was released for more than $400. Now, with the next generation of cards like the GT300, we just have to wait and see how this is gonna end.

Now that you've gotten your daily personal digs in, something you can't seem to have a conversation without, can we continue with the discussion in a respectful and civil manner?
 
What purpose does DP support provide for gaming accelerators?

What purpose does IEEE 754 compliance serve for gaming accelerators?

Is denormalized double precision useful in games?

There is absolutely no doubt whatsoever that both ATi and nVidia were taking GPGPU functionality into consideration in the design phase of these parts. We don't have to guess at this or use a crystal ball; simple functionality that is available on the chips, and very well documented, tells us rather explicitly that GPGPU functionality was most certainly a consideration during the design phase. Precisely how much die space it takes up in the end is something we could speculate about quite a bit; that it was an intended design goal isn't in doubt.
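For context, the denormal behavior being asked about is easy to demonstrate on any IEEE 754 compliant CPU: values below the smallest normal double don't flush to zero, they underflow gradually through the subnormal range. The question above is whether a gaming part should spend transistors supporting this in hardware.

```python
import sys

# Smallest positive *normal* IEEE 754 double.
min_normal = sys.float_info.min          # about 2.2e-308

# Halving it does not flush to zero: the value enters the denormal
# (subnormal) range, which IEEE 754 requires to be representable.
denormal = min_normal / 2
print(denormal > 0.0)                    # True
print(denormal < min_normal)             # True

# The smallest subnormal double is 2**-1074; halving *that* underflows.
tiniest = 2.0 ** -1074
print(tiniest > 0.0, tiniest / 2 == 0.0) # True True
```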
 
Originally posted by: Keysplayr
Now that you've gotten your daily personal digs in, something you can't seem to have a conversation without, can we continue with the discussion in a respectful and civil manner?

Meh, tell that to yourself and your business partner Wreckage. The "Am I asking you" reply wasn't really civil at all. So can we continue the discussion in a civilized way, without nVidia marketing propaganda? This is a forum, not a TV/ad broadcast.
 
I'm really looking forward to what this card will do with PhysX and other CUDA applications. There probably won't be any DirectX 11 games for a while, or even games that could stress the GT300. But physics, Folding@home, and video transcoding could hit a level miles above where they are now.
 
I'm very interested in the new nvidia part.

If the quoted specs are true, it'll be a monster. I'm really hoping for a repeat of the GTX 280/260 and 4870/4850 situation, with nvidia having the high end locked up and ATi bringing the competition that forces nvidia to lower their prices. I'll probably be purchasing a card around then and wouldn't mind having some options 🙂
 
My wild guess:

If GT300 is going to have 512-bit memory interface,

1) NV will stick to GDDR3, or
2) GT300 may be an external unit. (like QuadroPlex)

There is no factual basis for this guess.
 
I can't see how nV will go for any of this. If they really want to make life miserable for ATI, they will most certainly opt for something faster than GDDR3. Of course this will raise the price, but I don't think that will stop them. As for the external unit, nV will probably release an external GT300-based QuadroPlex, but this won't be targeted towards us "normal" users.
 
I am looking forward to this chip. But of course it will require me to play something other than WoW or Call of Duty W@W to take advantage of it 😀
 
Originally posted by: evolucion8
Originally posted by: Keysplayr
Now that you've gotten your daily personal digs in, something you can't seem to have a conversation without, can we continue with the discussion in a respectful and civil manner?

Meh, tell that to yourself and your business partner Wreckage. The "Am I asking you" reply wasn't really civil at all. So can we continue the discussion in a civilized way, without nVidia marketing propaganda? This is a forum, not a TV/ad broadcast.

Ok, I've got to laugh at this one. My "Why am I asking you" comment toward DF was because I was asking DF what Marc thinks, when I could have just asked Marc!
Get it? How in the name of all that's holy wasn't that civil? Oh, I know: you took it out of context and made it mean something else. Really, my friend, if you are just here to sling insults and slurs, just go away. We don't want any. We gave at the office. The check is in the mail, etc., etc. All this happens when you have nothing left to argue with. Well, find something if you feel so strongly about your point of view. If you can't find anything, then maybe there wasn't much to back up your point of view in the first place.

Though, I am telling you right now: any further insults, slurs, whatever, coming from anybody at all, are going to be forwarded to the mods. No ifs, ands, or buts. I suggest you do the same. It will clean up this bullshit that is present in the forum and deter others from following suit.
 
Originally posted by: Genx87
I am looking forward to this chip. But of course it will require me to play something other than WoW or Call of Duty W@W to take advantage of it 😀

Yeah this is where I am at. I will wait for a $200 GTX360.

Hey then I can tell everyone that I play WoW on my 360.
 
Originally posted by: lopri
My wild guess:

If GT300 is going to have 512-bit memory interface,

1) NV will stick to GDDR3, or
2) GT300 may be an external unit. (like QuadroPlex)

There is no factual basis for this guess.

I'm not certain I agree with NV still using GDDR3 for higher-end cards. GDDR5 has been out for a good while now, and I think its pricing isn't as cost-prohibitive as it once was, which may be a contributing reason why the 4870 and up have been coming down in price. Also look at the 4770 utilizing GDDR5 now.

I'm thinking GDDR5 will be the standard for high-end cards. Whether or not Nvidia will continue to use a 512-bit memory controller is a mystery, although if history repeats itself, as G80 to G92 went from 384-bit to 256-bit, we might be seeing a return of the 256-bit bus. But then again, this is a new architecture and not a core revision as G80 to G92 was. Very tough to speculate on what's going to go down this time around.

If GT300 is GDDR5 with a 512-bit memory controller, the bandwidth would be off the chain. GDDR5 prices coming down means using it isn't that big of a deal anymore, but the PCB design for 512-bit may be just as pricey as current GT200 boards. Yeeeaarrrgghh.. Brain.... hurts..... 🙂
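The bandwidth arithmetic behind the brain-hurt is simple: bus width in bytes times the effective data rate. The clock figures below are ballpark numbers for period parts, not official specs:

```python
def bandwidth_gbs(bus_width_bits, effective_gtps):
    """Peak memory bandwidth in GB/s = (bus width / 8 bytes) * data rate (GT/s)."""
    return bus_width_bits / 8 * effective_gtps

# GTX 280-style: 512-bit GDDR3 at roughly 2.2 GT/s effective.
print(bandwidth_gbs(512, 2.2))    # ~140.8 GB/s

# HD 4870-style: 256-bit GDDR5 at roughly 3.6 GT/s effective.
print(bandwidth_gbs(256, 3.6))    # ~115.2 GB/s

# Hypothetical GT300: 512 bits of GDDR5 at 4.0 GT/s -- "off the chain".
print(bandwidth_gbs(512, 4.0))    # 256.0 GB/s
```

Which is the whole trade-off in one line: a wide bus with slow memory and a narrow bus with fast memory land in the same ballpark, but the wide bus pays for it in PCB complexity while the fast memory pays for it per chip.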
 
Originally posted by: Keysplayr


I'm not certain I agree with NV still using GDDR3 for higher-end cards. GDDR5 has been out for a good while now, and I think its pricing isn't as cost-prohibitive as it once was, which may be a contributing reason why the 4870 and up have been coming down in price. Also look at the 4770 utilizing GDDR5 now.

Thank god for ATi, because if it wasn't for them, we would still be seeing GDDR3 on Nvidia high-end cards, even in 2012. :laugh:
 
Originally posted by: Keysplayr


Though, I am telling you right now: any further insults, slurs, whatever, coming from anybody at all, are going to be forwarded to the mods. No ifs, ands, or buts. I suggest you do the same. It will clean up this bullshit that is present in the forum and deter others from following suit.

/in

Oh no it won't, not until they perma-ban a certain member who has already been temp-banned several times for baiting, misquoting, and being a general troll 😀

/out
 
Originally posted by: ShadowOfMyself
Originally posted by: Keysplayr


Though, I am telling you right now: any further insults, slurs, whatever, coming from anybody at all, are going to be forwarded to the mods. No ifs, ands, or buts. I suggest you do the same. It will clean up this bullshit that is present in the forum and deter others from following suit.

/in

Oh no it won't, not until they perma-ban a certain member who has already been temp-banned several times for baiting, misquoting, and being a general troll 😀

/out

dadach was just given a vacation as far as I know. And there are quite a few people here whose bliss is to provoke one another. If you see something you don't like that breaks the TOS or guidelines, report it. If it's just a heated conversation without getting personal, then there's no point. You should be able to tell the difference.
 
Originally posted by: error8
Originally posted by: Keysplayr


I'm not certain I agree with NV still using GDDR3 for higher-end cards. GDDR5 has been out for a good while now, and I think its pricing isn't as cost-prohibitive as it once was, which may be a contributing reason why the 4870 and up have been coming down in price. Also look at the 4770 utilizing GDDR5 now.

Thank god for ATi, because if it wasn't for them, we would still be seeing GDDR3 on Nvidia high-end cards, even in 2012. :laugh:

AFAICT, GDDR3 seems to be doing fine. Time to move on? For sure.
 
Originally posted by: Keysplayr
Originally posted by: error8
Originally posted by: Keysplayr


I'm not certain I agree with NV still using GDDR3 for higher-end cards. GDDR5 has been out for a good while now, and I think its pricing isn't as cost-prohibitive as it once was, which may be a contributing reason why the 4870 and up have been coming down in price. Also look at the 4770 utilizing GDDR5 now.

Thank god for ATi, because if it wasn't for them, we would still be seeing GDDR3 on Nvidia high-end cards, even in 2012. :laugh:

AFAICT, GDDR3 seems to be doing fine. Time to move on? For sure.

Yeah, Nvidia squeezed everything out of GDDR3. To further improve memory bandwidth, they would probably need to use a 1024-bit interface, which I don't think will happen this year. GDDR5 is the only way to go. 🙂
 
Originally posted by: BenSkywalker
What purpose does DP support provide for gaming accelerators?

What purpose does IEEE 754 compliance serve for gaming accelerators?

Is denormalized double precision useful in games?

There is absolutely no doubt whatsoever that both ATi and nVidia were taking GPGPU functionality into consideration in the design phase of these parts. We don't have to guess at this or use a crystal ball; simple functionality that is available on the chips, and very well documented, tells us rather explicitly that GPGPU functionality was most certainly a consideration during the design phase. Precisely how much die space it takes up in the end is something we could speculate about quite a bit; that it was an intended design goal isn't in doubt.

Beat me to it. Nvidia has dedicated HW in the G200 architecture for the purpose of GPGPU applications: HW which takes up space and serves no purpose in gaming. Along with Nvidia's marketing pimping the GPU as a faster alternative to a CPU, it doesn't take much genius to see where they're going with that strategy.
 
Originally posted by: ibex333
Good thing I didn't get a new video card. Thanks to the GT300, I should be able to buy a GTX 295 for less than $100, less than a year from now.


You'll prolly be able to get a GTX 260 192 stream processor card for $100; maybe wait 2 or 3 yrs to get a 295 for less than $100... tard :roll: lol, what makes you think a GTX 295 will be $100 in 1 yr... -_-
 
Originally posted by: roid450
Originally posted by: ibex333
Good thing I didn't get a new video card. Thanks to the GT300, I should be able to buy a GTX 295 for less than $100, less than a year from now.


You'll prolly be able to get a GTX 260 192 stream processor card for $100; maybe wait 2 or 3 yrs to get a 295 for less than $100... tard :roll: lol, what makes you think a GTX 295 will be $100 in 1 yr... -_-

Sounds about right. In 2-3 years there will probably be some mid-level card (built on a smaller process) that runs cooler and uses less power than the GTX 295.

What is today's equivalent of the 7950 GX2? <----Doesn't the 9600 GT come close to this?
 
Originally posted by: roid450
Originally posted by: ibex333
Good thing I didn't get a new video card. Thanks to the GT300, I should be able to buy a GTX 295 for less than $100, less than a year from now.


You'll prolly be able to get a GTX 260 192 stream processor card for $100; maybe wait 2 or 3 yrs to get a 295 for less than $100... tard :roll: lol, what makes you think a GTX 295 will be $100 in 1 yr... -_-

There're so few 295s out there that a year from now the price may reach a new high due to their antique nature. 😛
 
Originally posted by: error8
Originally posted by: Keysplayr
Originally posted by: error8
Originally posted by: Keysplayr


I'm not certain I agree with NV still using GDDR3 for higher-end cards. GDDR5 has been out for a good while now, and I think its pricing isn't as cost-prohibitive as it once was, which may be a contributing reason why the 4870 and up have been coming down in price. Also look at the 4770 utilizing GDDR5 now.

Thank god for ATi, because if it wasn't for them, we would still be seeing GDDR3 on Nvidia high-end cards, even in 2012. :laugh:

AFAICT, GDDR3 seems to be doing fine. Time to move on? For sure.

Yeah, Nvidia squeezed everything out of GDDR3. To further improve memory bandwidth, they would probably need to use a 1024-bit interface, which I don't think will happen this year. GDDR5 is the only way to go. 🙂

It's just a shame ATi went the wrong way and narrowed the bus on their 4770, cranking up the clock speed to make up for it, instead of running a cooler, lower-wattage part with a wider bus and lower clock speeds. 256-bit GDDR5 would have been a winner, at least according to my understanding of how that would perform compared to 128-bit with higher clocks.
 