
GeForce GTX 580

How about a dual GTX 4xx core chip on a single die?

I anticipated a GTX 495 months ago (though not single die), if only because those with single PCIe slots are frozen out of NV Surround without such a card. Unless such mobo owners go way back to certain GTX 295 variants that could do Surround single-handedly, but who would want to do that?
 
What do you mean, single die? Isn't that basically a massive single chip, i.e. very hard to make? A dual 4xx isn't really worth it, as the 5970 beats it, unless it's the 384 variant, which would be hard to make dual and very hard to make on a single die.
 
What do you mean, single die? Isn't that basically a massive single chip, i.e. very hard to make? A dual 4xx isn't really worth it, as the 5970 beats it, unless it's the 384 variant, which would be hard to make dual and very hard to make on a single die.

Yes, and it was a question. I heard rumors about it on some rumor site. I think it was a GF110 dual-core chip.
 
How about a dual GTX 4xx core chip on a single die?

Why do people keep saying "dual-core CPUs" as if it makes more sense than simply adding the extra functional units to a single core, getting all the performance while avoiding core-to-core latency, heat, and redundancy issues?

I can honestly see Nvidia launching a new line based purely on their GF104 architecture for a half-time refresh. Just as the 6xxx series is looking to shore up the weakest parts of AMD's game (DX11 performance, CrossFire scaling) without really blowing the lid off the 5xxx series, Nvidia might not be targeting more performance necessarily, simply a smaller, cooler core. You might see slight performance bumps over the existing line, but a lot of the GF100 "fat", if you will, can be trimmed off and replaced with enhancements that matter to most gamers.

This would also allow Nvidia to sell every GF100 it makes as a Tesla product with fat margins while satisfying their core consumers with the refresh line.
 
Why do people keep saying "dual-core CPUs" as if it makes more sense than simply adding the extra functional units to a single core, getting all the performance while avoiding core-to-core latency, heat, and redundancy issues?

I can honestly see Nvidia launching a new line based purely on their GF104 architecture for a half-time refresh. Just as the 6xxx series is looking to shore up the weakest parts of AMD's game (DX11 performance, CrossFire scaling) without really blowing the lid off the 5xxx series, Nvidia might not be targeting more performance necessarily, simply a smaller, cooler core. You might see slight performance bumps over the existing line, but a lot of the GF100 "fat", if you will, can be trimmed off and replaced with enhancements that matter to most gamers.

This would also allow Nvidia to sell every GF100 it makes as a Tesla product with fat margins while satisfying their core consumers with the refresh line.
That's basically what Charlie is implying is going to happen.
I don't get the resistance to believing Nvidia can and will innovate and compete.

http://www.semiaccurate.com/2010/10/18/nvidia-gtx580-paper-launch-next-month/
Easy: the GF104/6/8 line is not substantially more efficient than its big brother, just a little more efficient in some specific parts. If the GF100 uses the same 'new' architecture with all the 'updates' the GF104 has, it will more likely than not get bigger, but it will use less power. At idle. It will not, however, get smaller, and it will not be a 10-billion-shader part that blows ATI back to the stone age. If Nvidia can squeeze 512 shaders that actually work out of a piece of silicon, and also up the clock by 10%, you are looking at about a 17% increase in speed over a GTX 480.
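For what it's worth, that 17% figure falls out of simple scaling, assuming (optimistically) that performance scales linearly with both shader count and clock. A minimal sketch:

```python
# Rough upper-bound estimate, assuming performance scales linearly with
# both shader count and core clock (real-world scaling is usually lower).
shaders_gtx480 = 480   # GTX 480 ships with 480 of GF100's 512 shaders enabled
shaders_full   = 512   # a fully enabled part
clock_bump     = 1.10  # the 10% clock increase mentioned above

speedup = (shaders_full / shaders_gtx480) * clock_bump
print(f"Estimated speedup over GTX 480: {speedup:.3f}x")  # ~1.173x, i.e. ~17%
```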
 
I could see a GPU like this from Nvidia, I think. How much space would these changes add? Remember, the 512 SPs are already there, so enabling them doesn't add die area. And assuming the kinks are worked out of 40nm, maybe, just maybe, enabling them won't blow through the 300-watt wall. Fermi's weak point is texture power, right? It would make sense that they address that as the only addition that adds power use and physical size (other than enabling the additional SPs). The one part that doesn't make sense to me is the 512-bit memory bus; keeping the 384-bit connection but using faster GDDR5, which is readily available, makes more sense to me. I guess the only question would be clock speeds. Something like this would make sense, seeing as they are stuck on 40nm for a while yet. Assuming all these specs are true, I still don't see them beating the 5970 (unless clock speeds are insane, which I doubt, seeing as power and heat are probably not going to allow for it), much less the upcoming AMD dual-GPU part.

Don't get me wrong, I'm not saying I believe the article or that I expect something like this from Nvidia. I am just saying the specs don't sound *that* unreasonable.
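As a side note on the memory-bus point above: peak bandwidth is just bus width times effective data rate, so a narrower bus with faster GDDR5 can match a wider bus on slower memory. The data rates in this sketch are illustrative assumptions, not actual card specs:

```python
# Peak memory bandwidth = (bus width in bits / 8 bits per byte) * data rate.
# The data rates below are illustrative assumptions, not actual card specs.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and effective data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# A 384-bit bus with faster GDDR5 can match a 512-bit bus on slower memory:
print(bandwidth_gb_s(384, 4.0))  # 192.0 GB/s
print(bandwidth_gb_s(512, 3.0))  # 192.0 GB/s
```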
 
I recall no one was ready for the GT200b either, though it's a moot point for me as I can't afford any new cards!

GT200b was a die shrink from 65nm to 55nm. So unless nVidia was somehow able to cook up some totally unexpected 32nm shrink (the general consensus is that GPUs will go straight to 28nm) in this amount of time, without any word of it leaking out (incredibly unlikely), we won't see anything analogous to that release, which wasn't totally unexpected the way a new Fermi flagship would be.
 
At least they are doing a hard launch with this new wonder card... oh wait :wub:
http://vr-zone.com/forums/893390/report-nvidia-to-paper-launch-geforce-gtx-580-in-november.html
That is actually referenced from SemiAccurate, by Charlie Demerjian.

We knew that 2 of the 16 SMs were disabled on the 480. Regardless of what Nvidia calls it, Nvidia should release a card that doesn't have anything cut out.

Based on Charlie's assumption, the new card will be approximately 17% faster than the fastest single-GPU card, the GTX 480. I honestly don't think it will be under 600 USD. Electricity use is not a problem, but it will probably generate more heat than a mini-heater. It would be wise to release it before this winter, as I heard it will be a cold one.

From the official screenshots of the 6850/6870, the 6850 only needs one 6-pin power connector, and both cards have only one CrossFire connector. IMO it will not be aiming for the fastest-card title. However, it will probably have better performance per watt compared to the previous release.

But then again, I don't think Nvidia is currently capable of making a 512-core card anytime soon (within 6 months).
 
No, Nvidia knew they were going to be behind and late to the DX11 party, so they decided to skip a generation.

Ugh, no. You do realize that there are actual 300-series GPUs?

Granted, they're just rebadged lower-end 200-series parts, but still, that was the whole point of the sub-topic question Davidh373 brought up.
 
You cannot get to 512 CUDA cores in multiples of 48 (there are 48 CUDA cores per SM in GF104).

GF104 has 8 x 48 = 384

Either you’ll have 10 x 48 = 480 (not likely) or 12 x 48 = 576
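A trivial check of that arithmetic, taking the 48-cores-per-SM figure for GF104 as given:

```python
# GF104 groups CUDA cores 48 to an SM, so valid totals are multiples of 48.
cores_per_sm = 48
for total in (384, 480, 512, 576):
    sms, remainder = divmod(total, cores_per_sm)
    verdict = "OK" if remainder == 0 else "not possible"
    print(f"{total} cores -> {sms} SMs, remainder {remainder}: {verdict}")
```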
 
Maybe they found a way to fix the problems with the wires in the GF100, as was explained in the video interview with the Nvidia CEO.
 
I don't know why so many people have a hard time getting this aspect of the tech industry: all the major technology companies have multiple teams.

The team that made the 4870 did not make the 5870.

The team that made the GTX 280 did not make the GTX 480.

If part X from company A gets delayed, in no way does that mean part Y is also going to be delayed. Different teams, different parts of the design process.

I'm not going to guess exactly what they are going to do, but a couple of easy adjustments they could make would be to reduce DP support/cache and dedicate more transistors to raw fill, which would significantly change what the masses on these forums consider performance/watt. That would be more of a business directive separating Tesla and GeForce, and I honestly don't think it is terribly likely given how well their current business model has been working for them, but it is within the realm of possibility. There are many things nV can do to significantly increase their gaming performance using the same die space/power use compared to the GF100; that doesn't mean it makes sense from a financial perspective to do so.
 
Wow, it looks like Charlie got GF100's die size right for a change. It only took 7 months to accidentally admit the chip is smaller than he claimed it was before, during, and after its launch. Did he ever get GF104's die size right, or is he still saying it's 367mm^2?

Anyway, he did point out one thing that is correct: GF104 uses less die space and is more efficient per core in power draw and performance than GF100. Even so, extrapolating GF104 out to a 512-core part would not mean the die size increases in proportion to the current shader-to-die-size ratio. If ROPs and TMUs are reorganized, that will affect die size. Memory controllers and cache also take up die space.

On a strictly blown-up scale, GF104 uses 0.86mm^2 per core (384 cores = 331mm^2), so even with Charlie's fuzzy math of just linearly scaling die size with shader count, a 512-core part based on the GF104 architecture would be 440mm^2, or 17% smaller than GF100. 17% smaller and (according to Charlie's always pessimistic views of Nvidia) 17% faster. Again, this is horribly fuzzy math, because if all other components were to remain the same, and cores were the only thing tacked on, the die size would be smaller than 440mm^2. However, with all the hearsay rumors going around about Nvidia adding more TMUs and ROPs per cluster, who knows what the die size will end up at, and Charlie's 17% performance improvement would then probably be too low, even for his lowball prediction.
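Spelling out that back-of-the-envelope scaling (the GF100 die size below is an assumed value, chosen to be consistent with the 17% comparison above; as noted, linear scaling with core count is deliberately crude):

```python
# Crude linear extrapolation of die area with shader count, as described
# above. Real dies wouldn't scale this way: ROPs, memory controllers, and
# cache don't grow with the shader count, so this overestimates the area.
gf104_cores    = 384
gf104_area_mm2 = 331.0  # GF104 die size used in the paragraph above
gf100_area_mm2 = 529.0  # assumed GF100 die size, consistent with the 17% figure

area_per_core = gf104_area_mm2 / gf104_cores   # ~0.86 mm^2 per core
scaled_area   = 512 * area_per_core            # ~441 mm^2
print(f"512-core GF104-style part: ~{scaled_area:.0f} mm^2")
print(f"vs. GF100: {1 - scaled_area / gf100_area_mm2:.0%} smaller")  # ~17%
```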

The point of all this, though, is that Nvidia is quite capable and will still compete for the high end GPU space on 40nm all the way up until 28nm comes out.
 
I don't know why so many people have a hard time getting this aspect of the tech industry: all the major technology companies have multiple teams.

And in truth it is actually a mode of operation that is shared by virtually all industries - be it technology, refrigerators, autos, ship-building, road-repairs, etc.

You'd be hard-pressed to find a business that operates profitably without having embraced parallelism in virtually every aspect of their business units (from manufacturing to R&D).

The reason I suspect we encounter a fair number of forum members who don't comprehend this reality is, and this is just a guess, that these folks simply haven't been exposed to or experienced the aspects of life that would bring them to realize the reality of today's business.

I'm not making it an age or maturity thing, but they do tend to be correlated.

If you don't work as a professional, having lived long enough to accumulate the education and experience that requires, then you probably haven't been exposed to the realities of how businesses operate in pretty much any industry, and as such your preconceptions about how they operate are limited by your life experiences, which may be little more than the environment at high school or working the floor at Fry's.

The point being that it isn't fair to your fellow forum members to expect them to know things they have no way of knowing about, just as it isn't fair to yourself to expect these things to go without saying and then find yourself frustrated having to repeat them for each new "generation" of forum members.

Think about how often we have the "it's Econ 101, people, simple supply vs. demand" threads and discussions.

That discussion will happen every year as new members join, until the forums no longer exist. So too, I suspect, will the "product development happens in parallel, not serially" discussions. It may be old news, and frequently discussed news, to you, but I'd be willing to bet it's the first time they've heard of it or spent any time really contemplating it.
 