Discussion in 'Video Cards and Graphics' started by taltamir, Dec 23, 2011.
No you don't.
It's all about the market and competition; if the market accepts the pricing with no competition, that could be a reality again -- back to trickle-down strategies with extreme premium pricing like in the past.
The ultimate irony would be AMD targets premium pricing and nVidia targets sweet spots this generation.
I'm not really okay with a $500+ flagship card; I think it's a bit excessive. But it's not a deal breaker for me either. And I fully expect Nvidia to do exactly that, although I believe the pricing will be lower. It really depends on what AMD has to counter Kepler, and how the timing of that works out. AMD might not be able to respond right away, so Nvidia could very easily have the single card crown for several months, or perhaps until AMD's next major refresh. That is probably more likely, given the respective strategies.
Is it not very clear to everyone that AMD does not set out to make the fastest single GPU card? That strategy has worked out very well the last 4 years, AMD has gained back a good chunk of market share. But don't think for a second that AMD will not, if it aligns with their methodologies, have the fastest single GPU card. I just don't think they are going to do it damn the torpedoes like Nvidia.
Not gonna happen. The 7970 is already a sweet spot card, if it wasn't, it would be a ~275watt beast and 30% faster. IMO.
I don't see what you guys have against a very expensive flagship card... let them have it.
As long as the 200$ cards continue to impress they can do whatever they want for those rich enough to pay for it.
I wouldn't count too much on the HD7790 to become an impressive card. The whole sad thing with the prices is that you usually got 30 to 50% more FPS for the same money, or maybe 10-20% more. Now you get ~40% more FPS (over the 6970) for 60% more money.
I recall the 5870 being a lot more expensive than the 4890. It was sold above its release price for a lot of its life as well, which means it was priced too low.
Am I happy with pricing? Hell no. I wish it was cheaper. Bottom line though, AMD is going to sell everyone they make when they release them.
They have gone with the GPGPU format as they intend to totally merge the CPU and GPU. Bulldozer already has reduced FPU compute ability; eventually there will be a CPU with no dedicated FPU core, just integer units plus a block of GCN cores to do any FPU work needed.
This actually sounds like it's one of the best posts in this thread. At least as far as keen speculation is concerned. I could definitely see this exact thing happening.
Whining about the 7970's price is like complaining about Lamborghini's pricing...
When you offer the best stuff in demand, you can set the price. No one forces you to buy one, and there will be more affordable options for tight-pursed consumers.
The HD 7990 will keep the GTX 780's prices in check.
What price do you expect the 7990 to be? What will get prices "inline" is competition from nVidia. For now we wait.
I thought they had already listed the 7990 at $849 MSRP. The only thing that price is keeping in check is my ability to purchase it.
I really think this is the "end goal" for AMD. Let the CPU do what it does best, and the GPU do what it does best. And for us gamers who require more GPU power than the APU provides, AMD has been putting a lot of resources into the X-Fire drivers so we can add a discrete card that works with the APU.
Concerning the title of the thread, calling this AMD's Fermi moment is not really fair. Fermi was six months behind AMD's product, while the 7000 series is actually ahead of nVidia's. Once again, AMD has "shown up for the fight" while nVidia is late. I am confident Kepler will be faster than the 7970 when it gets here, but until then AMD owns the high end. By the time Kepler shows up, AMD may also have a 7975 ready for release that is a bit faster, plus has the advantage of optimized drivers. I bet Kepler will still be faster, but not incredibly so.
How are people, after 6 pages of this thread, still not getting that the comparison to Fermi is about the major shift towards GPGPU? Really, how is that so hard to swallow?
That is like comparing the Millennium Falcon with the Titanic because both are craft made of metal.
When something (Fermi) has an incredibly bad reputation due to a specific reason (heat, power usage, cost, lack of fully functional cores) and you use that thing in a headline, everyone is going to assume the headline refers to the well known reputation, rather than some minor sub details most people don't even care about.
Fermi is a GPU architecture, GCN is a GPU architecture.
Just because you associate Fermi with bad things, doesn't mean normal people do. Just because the term "Fermi" was used to compare to GCN, doesn't mean the OP is an Nvidia fanboy and we should all jump down his throat. Get a grip on yourselves...
I'm not expecting much from the GTX 780. But I don't think Nvidia will be able to sell it for more than $500. I just don't see AMD dropping the price of the HD 7970 much at all unless Nvidia seriously pushes them to.
So my guess would be the HD 7990 being $600-$700 depending on the market prices when released.
You are confusing comparing "products" and comparing "architectures." The comment was specifically related to "efficiency of the architectural design", not efficiency of the product.
Please re-read this part of my post more carefully.
"The only way to fairly compare the efficiency of both architectures is to place them on equal nodes."
Notice, you keep talking about comparing products, while my post was discussing architectures, not products. You can compare efficiency of SKUs/products vs. each other, even if they are on different nodes. However, the only way to compare the "architecture" itself is to compare it on the same node.
Think about it: if the GCN architecture were designed on a 90nm node, it would get blown away by a 40nm Cayman. So you'd conclude that the Cayman architecture is way more efficient? If the Sandy Bridge architecture were designed on a 65nm node, would you compare it against the Nehalem architecture on a 32nm node? You see how absurd it is to compare architectures across different nodes and try to derive any meaningful information from that about the efficiency of the architecture? If you took the Pentium 4 architecture, put it on a 22nm node, and clocked it to 20 GHz (because a smaller node allows for higher transistor switching speeds), it would suddenly appear far better than an Ivy Bridge architecture on 180nm. Etc. etc.
You can't directly conclude which architecture is actually more efficient across different nodes because the node differences affect:
1) Transistor density (performance/transistor);
2) Transistor power consumption (performance/watt);
3) Transistor switching speed.
Here is another way of understanding this principle. If someone suddenly put a 40nm Fermi architecture against a 28nm Fermi architecture, without any changes to the Fermi architecture by itself, we would see a dramatic improvement in performance/transistor, performance/watt and performance per clock. But in fact, the efficiency of the Fermi architecture would be EXACTLY the same.
* 28nm transistors offer up to 60% higher performance than 40nm at comparable leakage with up to 50% lower energy per switch and 50% lower static power. ~ Global Foundries
So using your premise that you can compare architectures across 2 different nodes, the 28nm Fermi architecture would be "more efficient" than a 40nm Fermi architecture. Since this conclusion is false as the Fermi architecture is actually constant in efficiency and performance, that means your premise that you can compare architectures across 2 different nodes is incorrect. If you can't isolate the architecture as the only variable, the comparison is not conclusive since the node shrink itself brings 3 major advantages listed above. Comparison across 2 different nodes can tell you about the efficiency of 2 products, but not much about the architectures themselves.
Yet another way of looking at it: For instance, Product A's architecture might be 20% more efficient than Product B's architecture. But if Product B's architecture is on a 28nm node (which brings 60% higher performance with 50% lower power consumption vs. a 40nm product), then architecture B will suddenly appear to be more efficient, while in fact it was the node that made it more efficient. We can't know for sure since we are discussing 2 variables (Variable 1: node, Variable 2: architecture).
The only way to compare something to see clear causation is to keep all the variables constant and change the variable you are trying to compare. In this case, if you are comparing architectures, the changing variable has to be the architecture.
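The confound described above can be sketched with a quick back-of-envelope calculation (hypothetical baseline numbers; the scaling factors are the Global Foundries figures quoted earlier in the thread):

```python
# Sketch: an UNCHANGED architecture moved to a smaller node looks
# "more efficient" even though the design itself is identical.
# Baseline perf/power values are arbitrary illustrative numbers.

def apparent_efficiency(perf, power):
    """Performance per watt -- the metric people compare across nodes."""
    return perf / power

# Identical architecture on two nodes.
perf_40nm, power_40nm = 100.0, 250.0
perf_28nm = perf_40nm * 1.60    # "up to 60% higher performance" at 28nm
power_28nm = power_40nm * 0.50  # "up to 50% lower energy per switch"

eff_40 = apparent_efficiency(perf_40nm, power_40nm)
eff_28 = apparent_efficiency(perf_28nm, power_28nm)

# Same architecture, yet perf/W roughly triples -- the entire gain
# comes from the node, so the cross-node comparison says nothing
# about which architecture is better.
print(eff_28 / eff_40)  # roughly 3.2x
```

The two variables (node and architecture) are entangled exactly as the post argues: only holding the node constant isolates the architecture.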
This is BS. How would a straight die-shrink improve performance per clock?
Of course, if you just do a straight die shrink from 40nm to 28nm while keeping everything else the same, performance per clock is the same. But I said a node shrink brings 3 key advantages at the same time, one of them being transistor density. You didn't read my post carefully. I didn't say a 28nm GTX580 would have higher performance per clock than a 40nm GTX580. I said the 28nm Fermi architecture would have higher performance/clock than the same architecture on 40nm.
Here is what I meant when I said performance per clock improves with a node shrink:
With a die shrink, transistor density rises, which lets you increase your functional units in the same space. With a 28nm node shrink, at the same GPU clock speed, you can fit more TMUs, more ROPs and more SPs, since transistor density improves by 60% from 40nm to 28nm. Therefore, a hypothetical 772MHz 28nm Fermi chip with 768 CUDA cores, 96 TMUs and 48 ROPs built on the 28nm TSMC process would have superior performance to a GTX580 at the same GPU clock speed. However, the efficiency of the Fermi architecture itself would still be identical. This is because the node shrink brings higher performance/clock as a result of higher transistor density, which lets you pack more functional units.
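The hypothetical above amounts to scaling the shader and texture unit counts by the density gain. A rough sketch (GTX 580 baseline counts are the real card's specs; the 1.5x factor is an assumed conservative slice of the quoted ~60% density figure):

```python
# Back-of-envelope for a hypothetical 28nm Fermi at unchanged clocks.
# GTX 580 (40nm Fermi) baseline unit counts.
gtx580 = {"cuda_cores": 512, "tmus": 64, "rops": 48, "clock_mhz": 772}

density_gain = 1.5  # assumed; conservative vs. the quoted ~60% density gain

fermi_28nm = {
    "cuda_cores": int(gtx580["cuda_cores"] * density_gain),  # 768
    "tmus": int(gtx580["tmus"] * density_gain),              # 96
    "rops": gtx580["rops"],            # ROPs held constant in the example
    "clock_mhz": gtx580["clock_mhz"],  # same clock: the gain is pure density
}
print(fermi_28nm)
# {'cuda_cores': 768, 'tmus': 96, 'rops': 48, 'clock_mhz': 772}
```

More units at the same clock means higher performance per clock for the chip, while per-unit efficiency of the architecture is unchanged -- which is the distinction the post is drawing.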
Therefore, you cannot conclusively compare architecture efficiency across nodes. The node itself brings major advantages. By ignoring the node transistor density advantage, you are giving one architecture an advantage of fitting a lot more functional units in the same space, which makes the comparison unfair if trying to gauge the actual efficiency of the architecture itself.
Now translate this back to the HD7970's GCN architecture. AMD was able to fit a lot more functional units at 28nm than it would have been able to at 40nm. Similarly, NV would have been able to fit a lot more Fermi functional units at 28nm than it did at 40nm. Therefore, how can you say which architecture is actually more efficient? We are comparing apples to oranges.
You can conclude that GCN on 28nm is more efficient than Fermi is on 40nm. You can also say that HD7970 is more efficient than GTX580. Both of those are valid statements.
That was my concern, and is what motivated my initial response in this thread.
According to Anandtech....
BF3 is catered towards NVIDIA, not both.
From HotHardware on Crysis 2 tessellation...
Doesn't Metro also have heavy tessellation?
Metro 2033 only tessellates characters... but it has some evil shadow calculations.
1. Fermi doesn't have an incredibly bad reputation. It was the undisputed champion of GPGPU. And the other issues you brought up were issues with the GTX400 series, and are not present in the GTX500 series, which is ALSO Fermi.
2. You assume what "everyone" else assumes. Stop assuming things.
3. The architectural structure is not "some minor detail nobody cares about". Everyone who reads anandtech cares about it else they would be reading some other site, like HardOCP, which doesn't go as much in depth into architectures.
4. The architectures are in actuality similar.
These are all things I addressed already yet you completely ignored when repeating the original accusation.
Which I have addressed, so why bring it up again like that?
If you want to argue that GCN is not architecturally similar to fermi, go ahead and if you are right I will concede that I was wrong.
I really don't see why you would want to spend all this effort to prove I had the worst possible of intentions when I already clarified what I actually meant.
I also wish more people here would discuss the actual architecture of GCN and Fermi instead of their fellow posters.