I never said die size alone determines cost. The GTX580's die is 520mm^2; the HD7870's die is 212mm^2. Rumored estimates put 28nm wafers at 20-30% more expensive. The GTX580's die is 145% larger.
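To put a number on that, a quick sketch using only the figures quoted above (die size alone, which, as stated, isn't the whole cost picture):

```python
# Die sizes as quoted above, in mm^2
gtx580 = 520
hd7870 = 212

# "X% larger" = (bigger - smaller) / smaller * 100
pct_larger = (gtx580 - hd7870) / hd7870 * 100
print(f"GTX 580 die is {pct_larger:.0f}% larger")  # prints 145
```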
Yes, you did. You implied it very strongly. Why are we supposed to care about it having a larger die now? It was you that made an argument about this initially, and it was debunked.
Also, AMD is dropping MSRP for HD7950 to $399.
There goes your cost argument. If the HD7950 is selling for $399 with a 365mm^2 die, the HD7870 at $349 with a 212mm^2 die is a rip-off.
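For illustration only, here's the naive dollars-per-mm^2 math behind that claim, using the prices and die sizes quoted in this thread (it ignores wafer cost differences, yields, and everything else that goes into pricing):

```python
# (price in USD, die size in mm^2) as quoted in the thread
cards = {"HD 7950": (399, 365), "HD 7870": (349, 212)}

for name, (price, die) in cards.items():
    print(f"{name}: ${price / die:.2f} per mm^2")
# HD 7950 -> $1.09/mm^2, HD 7870 -> $1.65/mm^2
```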
Except no, because it was you that started this arguing about die sizes. And the HD 7870 will get a price drop when the GTX 660 or GTX 670 (non-Ti) are released. Funny how you go from arguing about prices to die sizes suddenly. Fact is, the GTX 580 has a 145% larger die size yet it's only AS fast.
Great logic. Next stop, HD8790 for $699 because it's the fastest single GPU.
Yet you conveniently forgot that in the past NVIDIA has had cards at outrageous prices, like the GTX 280 which was... what again... $650. But that doesn't matter, because it's NVIDIA. It also doesn't matter that it was NVIDIA that set the price for the GTX 580 at $500 because it was the fastest single-GPU card, right? But since it's AMD we're talking about now, they couldn't price the faster HD 7970 at $50 more than the GTX 580, right? If NVIDIA decides to release a GTX 685/GTX 780 at $700 will you be here complaining? Probably not. What if AMD releases an HD 8970 that's faster and prices it at $750? You bet.
Like I said, from a business perspective they can price it at $999. If consumers start buying GPUs at $999, is that reasonable pricing? It would be. That doesn't change the fact that technology is different: performance gets cheaper and/or faster over time. The HD7870 replaces the HD6870, so it's not cheaper. Similarly, if you compare it to the HD6970, given the time span, an 8-10% performance increase isn't sufficient at $349. Again, it's overpriced. The fact that someone out there is willing to pay $349 doesn't mean it's great value.
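As a rough check on that claim, here's the perf-per-dollar math using the launch MSRPs mentioned in this thread ($369 for the HD6970, $349 for the HD7870) and the midpoint of the 8-10% figure; street prices at the time were lower, so this is illustrative only:

```python
hd6970_price, hd7870_price = 369, 349  # launch MSRPs per the thread
perf_gain = 1.09  # midpoint of the 8-10% performance gap cited above

perf_per_dollar_gain = perf_gain * hd6970_price / hd7870_price
print(f"~{(perf_per_dollar_gain - 1) * 100:.0f}% better perf per dollar")  # ~15%
```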
Citation needed. Also, who are you to impose what's a necessary performance increase for a card at a given price point? Do you control the economy? Do you know about supply and demand? Did you forget again that 28nm wafers cost more than their 40nm counterparts? Did you forget again that 28nm is immature and therefore yields are much lower than 40nm? Finally, do you think AMD is a charity that HAS to give you more performance at lower prices with each new card even when there's little if any cost savings for AMD from a new gen, and why do you think it's fair for NVIDIA to price a card at high prices yet not AMD?
It does matter. If 1 year from now we can't get faster performance in technology for a similar price (barring inflation), then something has changed about the industry. But this is a moot point now since HD7950 dropped to $399. That just supports the view many have held that HD7900 series was overpriced.
I guess one thing has changed: AMD saw that giving cards away wasn't a very smart business strategy, so they stopped offering bargain-basement prices for high-end cards. Also, the GTX 580 dropped to $390, and AMD is pricing the HD 7950 to keep the competition going. Since the GTX 580's price dropped before the HD 7950's, does that mean the GTX 580 was even more overpriced? Speaking of overpricing, the HD 7950 competed with the GTX 580 and initially had a lower price. In comparison to its competition it was priced more aggressively, so in comparison to its competition it was not overpriced. In comparison to the rest of the market, namely the GTX 570 and HD 6970, it was overpriced. But that's simply diminishing returns at work: the further up the Enthusiast tier you go, the smaller the performance gains and the higher the prices.
Really, so if there is no competition, companies should be able to price products as high as they want? Then Intel could be selling the 2500K for $299, because it destroys every Bulldozer chip. Why isn't Intel using AMD's predatory pricing strategy? I understand that companies feel pressure to lower prices when there is more competition. But the HD7870's pricing didn't make sense from a consumer perspective from day 1, regardless of competition, because it replaced the HD6870. A $350 card is not a replacement for a $239 card. Just because a company raises prices unreasonably doesn't make those prices rational to consumers.
Yes, that is exactly right. Corporations price their products at however much they want. If they price too high, the admittedly-flawed free market takes care of it because consumers will not purchase said product.
And you've still got to get me that quote from a reputable source saying the HD 7870 replaces the HD 6870, because AMD has never said that. It's what you conveniently make up to then use as an argument. The HD 6850 and HD 6870 didn't replace the HD 5850 and HD 5870, but rather the HD 5830 and HD 5850 respectively. Your price argument is therefore invalid, because you pulled the argument of the HD 7870 being the replacement for the HD 6870 right out of nowhere. Pro tip: what replaces the HD 6870 is probably a yet-to-be-announced HD 7790 or HD 7830 because the HD 7770 replaces the HD 6850. And the HD 7870's pricing makes a great deal of sense. You're getting a card that's as good, or better than, the GTX 580 for $50 less. Unfortunately, AMD shot themselves in the foot by putting the HD 7870 and 7950's stock performance so close. Regardless, since the HD 7950 has more compute units and memory bandwidth, it has higher IPC than the HD 7870 and it can also overclock to a higher degree. At $10 more than the GTX 580 it's still a great choice in comparison.
Not exactly. The GTX680 needs only 1536 SPs and 70GB/sec less memory bandwidth to accomplish what takes a 2048 SP chip. But IPC for GPUs is a moot point. GPU workloads are parallel in nature, so the comparison is too complex to be meaningful; comparing IPC for GPUs is largely irrelevant. Still, the HD7970 at $470 is too expensive: the reference blower is loud, it consumes more power, and it requires a huge overclock just to hang with a stock 680. In other words, still overpriced.
Yes, exactly. Turn the argument around and we land at the fact that the GTX 680 needs an effective clock speed of around 1050-1100MHz, courtesy of GPU Boost, to be 5-10% faster than the HD 7970. And an HD 7970 at 1GHz is already as fast as a GTX 680, and that's an overclock 99% of HD 7970s will be able to do on stock voltage. Then there's also a fact you didn't point out: Tahiti gains significantly more than GK104 from increases in clock speed. So yes, they are very much alike when it comes to IPC. At 1GHz GK104 has higher IPC because it needs fewer cores to do the same work, but at 1.1 or 1.2GHz Tahiti has higher IPC because of better clock scaling. Simple as that.
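For what the per-shader comparison is worth, here's a back-of-the-envelope version of it at stock clocks. It assumes the two cards deliver roughly equal performance at stock (as argued earlier in the thread) and uses the commonly listed clocks of 1006MHz (GTX 680 base) and 925MHz (HD 7970); the numbers are illustrative, not measured:

```python
def per_shader_mhz(relative_perf, shaders, clock_mhz):
    """Throughput normalized per shader per MHz (arbitrary units)."""
    return relative_perf / (shaders * clock_mhz)

# Assume roughly equal stock performance (relative_perf = 1.0 for both)
gk104 = per_shader_mhz(1.0, 1536, 1006)   # GTX 680
tahiti = per_shader_mhz(1.0, 2048, 925)   # HD 7970
print(f"GK104 / Tahiti per-shader-MHz ratio: {gk104 / tahiti:.2f}")  # ~1.23
```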
You had an argument about performance earlier, and now you have an argument about noise? How atypical of you. The reference cooler does make more noise than the GTX 680's, but you've yet to explain how that equates to it needing to be priced $40 lower at $430. You've also yet to mention the downside of a central internal exhaust cooler: all the heat gets dumped into your case, raising temperatures for all the other components nearby. There's also the fact that it makes a multi-GPU configuration a lot worse: with one card right next to the other, the card on top can't breathe in air, which raises its temperature, AND the card on the bottom receives all the heat exhausted by the top card. Unless you have a motherboard with three slots of spacing between PCIe slots, it's a lose-lose situation for CF/SLI.
Yes, the gap can be closed and you can even surpass the 680. For that, you'd want a quiet aftermarket 7970. Once those cost $499, they start to make sense. Good luck running your reference 7970 at 1200MHz, though.
Right, except you only need 1-1.05GHz to match a stock GTX 680. At 1.2GHz you're already ahead by a noticeable margin. And since, like I mentioned before, cards with third-party heatsinks, fans, and PCBs (boards) are typically $20-30 more expensive than reference and the HD 7970 will be priced at $470, then those cards being at $500 is realistic.
You can believe what you want. It's your choice. The information is there and it shows otherwise.
A 15-16% gap supports the notion that it takes a 1050-1070MHz 7970 to keep up.
Except no, because Hardware Canucks' review uses titles that are mainly biased in favor of NVIDIA, and too few of them to get a broad picture of performance. TechPowerUp tests 14 games, some NVIDIA-biased, some AMD-biased, and some neutral, and what they get is that, both at stock, the GTX 680 is 8% faster than the HD 7970. They're the most unbiased when it comes to selecting titles, so I'll trust them more.
More misunderstanding of how the GTX680's GPU Boost works. Yes, because 1 card did it in 1 review in 1 game, most 680s must be able to do 1200MHz out of the box.
It still doesn't change the fact that it's stupid shenanigans by NVIDIA. Different reviews got different GPU Boost clock speeds, and even if the difference is on average only 50MHz between reviews, it makes comparisons more difficult. Why couldn't NVIDIA make it work like Intel's and AMD's Turbo, where the chip boosts to a specific clock speed as long as it stays within thermal parameters? Furthermore, why are enthusiasts obligated to use an offset mode for overclocking, and why did they remove the ability to fine-tune voltage, instead making the card decide for itself what voltage it needs? Given who this card is targeted at, it's complete BS and NVIDIA deserves to get called out on it.
And you conveniently forgot that those are sales figures for ALL platforms. I'm sorry, but almost no one plays Batman: AC on the PC, just like almost no one plays Metro 2033 and DiRT 3.
You can look at any popular modern games, SKYRIM, BF3, Witcher 2, Batman, etc. GTX680 wins in all of them.
Starcraft II is more popular than all of those, and Batman: AC isn't popular on the PC at all. Crysis 2 and many other titles I can't recall right now are popular on the PC as well.
Metro 2033 is a good benchmark. Sure it runs faster on AMD cards, but it's still unplayable on any single GPU with everything maxed out. It's also a worse game than SKYRIM, BF3, Witcher 2 and the Batman games.
Now you've stepped into making completely subjective arguments. Not many people play it, so it's not that good of a benchmark UNLESS you're looking to compare everything to give a broad picture of what you can expect in every situation.
It's not just cost that determines the product's pricing. If AMD can't design and sell a product that's profitable and performs as well as the competition, they should work harder. You keep looking at it from a business' perspective. Consumers don't care if it costs AMD $x to make the 7970. If they can buy a faster, quieter card for $30 more, it makes more sense for most people.
That first sentence looks completely redundant. Barring that, people that want a card that's more widely available, prefer AMD, care about compute performance, or know that with a small overclock it will match the GTX 680 will go for the HD 7970.
Why would I? No one brought up Nvidia's GPGPU superiority when they were recommending HD4870/4890/5850/5870/6950/6970 cards for 3 full generations. Suddenly GPGPU is at the forefront of discussions? :sneaky:
Except you're conveniently taking things out of context for your own arguments. I mentioned compute performance as the reason why Tahiti is bigger than GK104, which it is, because you initially brought up the die size argument. Less than 1% of people buying these cards care about compute performance, but the difference is that, unlike NVIDIA's previous efforts, AMD strikes a reasonable balance between the two. Tahiti is good both in gaming AND compute while having a medium-sized die and decent power consumption. Compute performance isn't important for most gamers, yes.
If you need GPGPU compute, that's different. But since most people buy cards to play games, I think the crowd that cares for GPGPU would buy HD7970 even at $550. So should AMD price the 7970 to cater to 0.1% of these customers?
Answered above.
With overclocking, yes. Otherwise, no.
13% slower at 1080P @ ComputerBase.
It's 8% faster at 1920x1200, as I discussed above.
I was discussing MSRP. It was $369.
Doesn't change much, if anything. The 6970 eventually became $30 cheaper than MSRP, and the 7870 is now at $20 lower than MSRP. And it's still a great value, especially in comparison to the GTX 580 as I've discussed before.
So wait, you include overclocking which is luck of the draw for 7850 but ignore overclocking or unlocking for 6950? :sneaky:
Except no, because unlocking does not equal overclocking. Overclocking is luck of the draw when it comes to the maximum clocks a given card can reach, but all HD 7850s can overclock. The vast majority of reviewers and users are getting 1100MHz on stock voltage, so it's not luck of the draw unless you're chasing maximum achievable clocks. 1100MHz is a typical overclock; 1200MHz is something some may not reach.

Unlocking isn't in the same ballpark because, unlike overclocking, it isn't guaranteed (and yes, all 7850s will overclock from their stock 860MHz, so don't come back with phony arguments about how overclocking is "luck of the draw" when I mentioned an average overclock; that's like saying it's "luck of the draw" to get 4.2GHz on a 2500K/2600K when 99% of samples can achieve it). By not guaranteed, I mean that even with the "right" hardware (a reference 2GB card) some people weren't able to unlock. In fact, only the initial 6950s were able to unlock, because after some months all cards started to come with cheaper, non-6970 PCBs, and then there was the introduction of a 1GB version which had even less success unlocking. Another obvious thing is that unlocking involves a lot more risk to the card than an average overclock.

So no, they're not similar in any way, shape, or form. Probably less than 50% of all HD 6950s sold were able to unlock, while 99% of HD 7850s are gonna easily get 1.05-1.1GHz on stock voltage because that's an average overclock.
Ya, there have been deals for $180-$210 on the GTX 560 Ti 448. That's better value than the HD7850. So what's your point exactly? The HD7850 needs a 20% overclock to beat an HD6950 @ 6970 by 5%, 15 months later? If that impresses you, we just have different standards. Not to mention the GTX570 offered all of this as well for $250-270 for months before the 7850 launched.
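Quick sanity check on the "20% overclock" figure, assuming the HD7850's 860MHz stock clock and the roughly 1050MHz stock-voltage overclock discussed earlier in the thread:

```python
stock = 860    # HD 7850 stock core clock, MHz
target = 1050  # commonly reported stock-voltage overclock, MHz

oc_pct = (target - stock) / stock * 100
print(f"{oc_pct:.0f}% overclock")  # prints 22
```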
Well, since it's now been established that an unlock is not the same as an average overclock, and that even if you CAN unlock a 6950 you're still stuck with lower-rated memory chips, your whole argument falls apart. Also interesting how you jump around from MSRP to street pricing to deals whenever it suits your argument. Very objective and unbiased.

Deals are not the same as street pricing; like rebates, they're not what the vast majority of people will pay, and therefore they're invalid. That's why they're called deals: they're rare, available for a limited time, and often at one specific shop only. Wanna keep it fair? Use street pricing that everyone in the US will pay, and that means sticking to Newegg, Amazon, and any other reputable retailers that ship to all US states. Similarly, and as I've mentioned several times in the forum, quoting a rebate as the final price is deceptive and inaccurate: it's not the upfront price (you still pay full price when you buy), not everyone gets the rebate back, and others don't even bother with them.
Oh, the HD7850 also performs worse than the HD6870 in certain games. Glad you mentioned that.
The GTX 680 also performs worse than the HD 7970 in certain games.
Look guyz, the HD 7970 is faster than the GTX 680. OMG!!!