
[WCCFtech] Rumor: AMD Hawaii benchmarked in 3DMark11/Fire Strike, AMD attacks Titan

The safest way for retaking the crown would be going for a gaming chip stripped of HPC features, something like 3 x Bonaire.
So of course it's doable, but I don't see them doing it with the big HPC chip (imho Kepler is simply more efficient),
or in a convincing way with the big gaming chip.

Sorry for such a huge snip, but I want to nit-pick something. Isn't Bonaire an improved version of GCN in terms of HPC?
http://www.anandtech.com/show/6837/...feat-sapphire-the-first-desktop-sea-islands/2
As a further change to the frontend, the number of geometry engines and command processors (ACEs) has been doubled compared to Cape Verde from 1 to 2 each, giving Bonaire the ability to process up to 2 primitives per clock instead of 1.
 
Pitcairn with memory boosts, not just a relaunch.

Tahiti will not be EOL for a while. AMD is very happy with how Tahiti has done.

I disagree that Tahiti is good for AMD. The GPU division just broke even last quarter, and that includes GPU licensing revenue for the Xbox 360 and Wii U, which is almost all profit, so basically AMD's GPU division lost money last quarter. I think it's ridiculous to sell a 365 mm² chip at low margins. The clearance-sale pricing on the HD 7950/HD 7970 is not good for AMD; this kind of pricing is definitely not sustainable.
 
YES!
As it clearly shows on their record margins ^_^

Sorry Char-lie, not everyone is privy to your wealth of information.
So no, I did not forget NV had GK110 canned, I simply have no idea what you are talking about.

BTW did you mean GK100?

I meant GK100, and considering how keen you were to call out a typo, I assume you don't have anything to say about my point, which still stands btw.

Unless you've already entered denial mode, insisting there never was a GK100 to begin with, because for some weird reason a company that relies heavily on the HPC market didn't continue its traditionally big-die, compute-oriented top GPU product, only to retake that route, what, less than a year later?
 
Pitcairn with memory boosts, not just a relaunch.

Tahiti will not be EOL for a while. AMD is very happy with how Tahiti has done.

So Curacao is not that much different from Pitcairn in terms of the core itself (number of CUs etc.)?

I meant GK100, and considering how keen you were to call out a typo, I assume you don't have anything to say about my point, which still stands btw.

Unless you've already entered denial mode, insisting there never was a GK100 to begin with, because for some weird reason a company that relies heavily on the HPC market didn't continue its traditionally big-die, compute-oriented top GPU product, only to retake that route, what, less than a year later?

That 1 year makes a lot of difference in terms of yields and thus profitability. Nvidia simply went with the more profitable market first. Only a fool would have believed that they would go for the gaming market with a GK100 first and repeat their mistakes. There was no GK100.
 
Knowing the GCN 1.1 substitutes were canned, I speculate that VI will be a great leap forward in architectural efficiency, in both perf/mm² and perf/watt, while still having better compute efficiency per watt/mm² than Bonaire. The 9970 will NOT be a tweaked Tahiti.
I think Maxwell will be relatively more efficient, and AMD will be preparing PI on 20nm to compete against GM104.

I don't have any idea about future performance or die sizes of mid-range cards... My very optimistic speculation for the 9970:

Die size 30-45% bigger than Tahiti, with efficiency per die area improved;
40-50% more SPs than Tahiti (Bonaire has 40% more SPs than the 7770 at the same power consumption/slightly higher TDP);
Clock speeds around 925-1000MHz;
TDP ~250W (or a little more), with power consumption around/slightly higher than the GTX 770 and an 8+6-pin power connection; it will have a bit less overclocking headroom (proportionally) than the 780/Titan (a max-OC Titan will win or match against a max-OC 9970, at a higher power cost, of course);
Card 1.5 inches longer than the 7970, with far more efficient cooling, making it run a little cooler and quieter than the original 7970;
Crossfire improved over current GCN Crossfire (13.8 with frame pacing), with 8-14% better scaling/performance and better frame times, but it still won't come with hardware frame metering; Nvidia's SLI solution will still be smoother;
The most important thing: the increase in SPs vs. TMUs/ROPs/geometry units for Hawaii vs. Tahiti will not be proportional. VI will be a more balanced architecture, fixing all of Tahiti's bottlenecks;
Compute performance per watt increased by a ratio of 1.15:1 against Bonaire;
Performance for Hawaii: 5-15% (10% generally) more performance than Titan in games. In compute, Hawaii will crush everything;
Starting price for the 9970: $550.
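Back of the envelope, the SP and clock guesses above imply roughly this much raw shader throughput over Tahiti. A quick sketch in Python; every input here is a speculative figure from this post, not a confirmed spec:

```python
# Rough sanity check of the speculation above.
# All numbers are this post's guesses, not confirmed specs.
tahiti_sps, tahiti_clock = 2048, 925      # HD 7970 reference: shaders, MHz
hawaii_sps = int(tahiti_sps * 1.45)       # assume +45%, midpoint of the 40-50% guess
hawaii_clock = 1000                       # upper end of the guessed 925-1000MHz range
raw_ratio = (hawaii_sps * hawaii_clock) / (tahiti_sps * tahiti_clock)
print(f"~{raw_ratio:.2f}x Tahiti's raw shader throughput")  # ~1.57x
```

Raw ALU throughput obviously isn't game performance, which is exactly why the balance fixes listed above would matter.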


If AMD doesn't make big improvements like these, they will be doomed against Maxwell.
 
Interesting how trolls function.

It's almost as interesting as how so many other participants here function, which is to respond to them, and give them the exact reaction they're looking for. Think about that the next time you click "Quote".
-- stahlhart
 
Man... I'm still surprised as to how AMD has been holding their secrets so well.

Still having a hard time deciding between a $350 7970 DirectCU II or a $550 9970. Thoughts?

I really doubt the 7970 will drop further than that anytime soon, and if the 9970 is only 20-30% faster but is more than twice as expensive at launch, I might just pull the trigger on a 7970...

I guess I'll wait till it launches. If I can get a 7970 DirectCU II for around $250-$300, I'd probably get that before getting the 9970 if it really is just around Titan performance.

I paid $400 for my first 5870, and it was the only part of my computer that I sort of regret buying (the second one cost $150). I don't think I can bring myself to buy another card for >$400.


That being said, today is the last day of my quote for the $347 7970 (after tax, rebate and free shipping) so please help me make up my mind fast!
 
No, not at all. But I could see $600 or $650 for a card on parity or even better once both are overclocked.

I only see this happening though if AMD truly is going with a large die, something they have not done in a long time. If it's just a bit bigger than Tahiti it's not going to happen. 500mm2 or very near to it, at least if it's going to take on 780/Titan.
Even on 28nm, a die of about 440mm²-450mm² would suffice for this level of performance:
Tahiti: 365mm²
+20% die area ~ 438mm²
+10% transistor density by better handling the process
+10% architectural efficiency
Combining these (i.e. multiplying the factors) would, if done properly, yield a chip at Titan-level performance, yet far smaller in area.
2560 shaders and 4 command engines seem a good estimate at this level. But this is all speculation within the frame of current GCN 1.1 tech, whereas GCN 2.0 smells a lot like HSA and compute.
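Multiplying those factors out looks like this (a quick sketch; the 20%, 10%, and 10% figures are the estimate above, nothing official):

```python
# Die-size and throughput estimate from the factors above (all speculative).
tahiti_area = 365            # mm², Tahiti die size
area_factor = 1.20           # +20% die area
density_factor = 1.10        # +10% transistor density from better process handling
arch_factor = 1.10           # +10% architectural efficiency
new_area = tahiti_area * area_factor
effective_gain = area_factor * density_factor * arch_factor
print(f"~{new_area:.0f} mm² die, ~{effective_gain:.2f}x Tahiti's effective throughput")
# ~438 mm² die, ~1.45x Tahiti's effective throughput
```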


http://i.imgur.com/L9FOaBO.png
TSMC 20nm SoC is the equivalent to GlobalFoundries 20nm LPM. Both nodes are able to fit into all markets.
It really depends on the relationship of AMD to its Foundries.
It's not so much "able to fit into all markets" as a lack of variation on the 20nm node. With their work on FinFET still in progress, there simply isn't enough R&D to provide a high-performance process yet. Plus, there's more business in the mobile market.

Knowing the GCN 1.1 substitutes were canned, I speculate that VI will be a great leap forward in architectural efficiency, in both perf/mm² and perf/watt, while still having better compute efficiency per watt/mm² than Bonaire. The 9970 will NOT be a tweaked Tahiti.
One thing that made Kepler more efficient than GCN 1.0 is its ability to clock down its compute units separately when they're not needed. At some point, AMD should follow suit with its next GCN architecture.
 
You take a different position. I welcome it. All I'm saying is I fully expect 2GB will be exceeded when maxed out @ 1080p.
I'm going off of experience, that's all. Experience must not be worth anything anymore compared to slides with colored bars on them. My GTX 570s had 1.2GB of RAM. BF3 maxed out at 1.5GB of usage. That caused hitching and skips, not poor FPS. It's like it took a second to load textures because I didn't have enough VRAM.
Hopefully our 670s won't have any issues. Maybe 2GB will be enough. I'm hoping like you are (you should be hoping, based on your sig).

I have confidence...not hope 😀
 
It's also interesting how people think releasing GPUs for cheap is a bad idea, like from the perspective of a shareholder and not just a consumer.

I would have expected most people here to just look at it from an immediate consumer perspective.
 
It's also interesting how people think releasing GPUs for cheap is a bad idea, like from the perspective of a shareholder and not just a consumer.

I would have expected most people here to just look at it from an immediate consumer perspective.

Yeah, but I think what some people see is pricing = performance. If it's cheaper, it's slower, or something like that. Or they feel that taking a loss is not a good idea for a company that is already losing money.
 
That 1 year makes a lot of difference in terms of yields and thus profitability. Nvidia simply went with the more profitable market first. Only a fool would have believed that they would go for the gaming market with a GK100 first and repeat their mistakes. There was no GK100.

Sorry, but the most profitable market for them is HPC. If your HPC-oriented part, the GK110, comes 6 months later than your gaming-oriented ones, and the one before that, and GK100 magically disappeared from existence :awe:, when there were in fact a GF100 and a GF110 before them (even with horribly low yields at first, they still hit the market eventually), it shows you that something happened that made them rush GK110 and thus shorten their refresh cycle on the same node.

If things didn't go wrong with GK100, why did NV first show a 2x GK104 part (the K10), a gaming dual-GPU dressed up as an HPC part? Because the big-die, compute-oriented chip, the one really suited for that market in that generation, was nowhere to be seen, and its successor was already being rushed in to make up for it.

That K10 was their Richland, a product recycled from existing ones just to fill the gap for another that was running late because its first iteration was broken or didn't meet expectations, and was thus discarded.

Even the codename shows you that GK110 is a revamped Kepler on the same node, just as GF110 was of GF100. Only this time the original didn't even make it to market: either it yielded horribly worse than its predecessor did at 40nm, or had yields as bad as GF100's at the time, and with NV now paying per wafer (instead of per good die) the numbers didn't add up, so they canned it and rushed GK110 while waiting for 28nm to yield a little better.
 
Sorry, but the most profitable market for them is HPC. If your HPC-oriented part, the GK110, comes 6 months later than your gaming-oriented ones, and the one before that, and GK100 magically disappeared from existence :awe:, when there were in fact a GF100 and a GF110 before them (even with horribly low yields at first, they still hit the market eventually), it shows you that something happened that made them rush GK110 and thus shorten their refresh cycle on the same node.

If things didn't go wrong with GK100, why did NV first show a 2x GK104 part (the K10), a gaming dual-GPU dressed up as an HPC part? Because the big-die, compute-oriented chip, the one really suited for that market in that generation, was nowhere to be seen, and its successor was already being rushed in to make up for it.

That K10 was their Richland, a product recycled from existing ones just to fill the gap for another that was running late because its first iteration was broken or didn't meet expectations, and was thus discarded.

Even the codename shows you that GK110 is a revamped Kepler on the same node, just as GF110 was of GF100. Only this time the original didn't even make it to market: either it yielded horribly worse than its predecessor did at 40nm, or had yields as bad as GF100's at the time, and with NV now paying per wafer (instead of per good die) the numbers didn't add up, so they canned it and rushed GK110 while waiting for 28nm to yield a little better.

It is naive to believe that a 550mm2 die would have been ready early in the 28nm game. No way in hell. But for gaming they needed something, hence GK104.

GF100 came out in March 2010, but the first 40nm GPU was the HD4770 about a year earlier. You have to look at the process: there was no 28nm GPU before Tahiti; the process was brand new. Thus, GK110 was no later into 28nm than GF100 was into 40nm. The only difference is that this time around Nvidia released the GK104 during that "waiting time".

About the nomenclature:
The different names (10x vs 11x) stem from different compute capability afaik. I was told this by a knowledgeable source.

What is true is that GK110 yields were very bad in the beginning (about 15% iirc). What is also true is that they could not have released it any further in HPC. What is not true is that there was a GK100 that got canned.
 
It is naive to believe that a 550mm2 die would have been ready early in the 28nm game. No way in hell. But for gaming they needed something, hence GK104.

Not so naive; they did it just 2 years before (see next quote).

GF100 came out in March 2010, but the first 40nm GPU was the HD4770 about a year earlier.

Not relevant: the 4770 wasn't a new architecture on a new process, and it wasn't even made by NV. Naive is thinking both companies share the same expertise in bringing up a new node. (Hint: they don't.)
The HD 4770 was a pipe-cleaner for a new node using the same architecture as the rest of the 4xxx series.

GK100 was going to be a new architecture on a new process, as was GF100. Why couldn't they do it with Kepler when they did with Fermi? :hmm:

You have to look at the process: there was no 28nm GPU before Tahiti; the process was brand new. Thus, GK110 was no later into 28nm than GF100 was into 40nm. The only difference is that this time around Nvidia released the GK104 during that "waiting time".

Adding examples to my point isn't doing you any good. Tahiti was also a new architecture on a new node, as was GF100, and both made it to market. GK100, not so much. Something's up, don't you think? :hmm:

And I repeat myself, but again: you can't compare different companies' expertise entering a new node. The process is the same for both; the ability to handle it with a new architecture and still ship a good-yielding product is, as we've seen lately, nowhere near the same.

About the nomenclature:
The different names (10x vs 11x) stem from different compute capability afaik. I was told this by a knowledgeable source.

What is true is that GK110 yields were very bad in the beginning (about 15% iirc). What is also true is that they could not have released it any further in HPC. What is not true is that there was a GK100 that got canned.

No need to cite mysterious sources; it's pretty simple:

GK -> first letters of the architecture codename (G plus the family letter, e.g. K for Kepler).
1xx -> distinguishes the different dies. From biggest to smallest it runs from 100 down to 107, or whatever NV wants to call its smallest die that generation. A refresh follows the same rule, only the middle digit changes: then you have 110, 114, 116, etc.

Just recently, NV added a new number to distinguish fully fledged dies (with no shaders or other units disabled) from cut-down parts. This showed up with the 7xx Kepler refresh series.
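To make the scheme concrete, here's a tiny illustrative decoder in Python. This is a hypothetical helper that just encodes the naming rules as described in this post, not anything from official NVIDIA documentation:

```python
def decode_codename(name: str) -> dict:
    """Split an NVIDIA GPU codename, e.g. 'GK110', per the scheme above."""
    family = name[:2]          # 'GF' = Fermi, 'GK' = Kepler, ...
    die = name[2:]             # '100' biggest die ... '107' smallest
    refresh = die[1] != '0'    # non-zero middle digit marks the refresh
    return {"family": family, "die": die, "refresh": refresh}

print(decode_codename("GK104"))  # {'family': 'GK', 'die': '104', 'refresh': False}
print(decode_codename("GK110"))  # {'family': 'GK', 'die': '110', 'refresh': True}
```

By this reading, GK110 sits in the refresh slot that GK100 would normally precede, which is the whole argument here.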

If you want to feel good about yourself thinking there was no GK100, then so be it, even though time-to-market, NV's own history with new architectures on new nodes and its big-die strategy, and the need to release a dual-GK104 GPU into the HPC market so it wouldn't lose to the existing single-die GF110-based parts all show exactly the opposite. No GK100 ever existed, nope, not at all :awe:
 
Do you guys think this card will carry a premium price of $600+? And will Nvidia just respond by cutting prices on their line of cards?

Do you have kids? They accept boys under the age of 2 if you don't have the hard cash. Seriously though, don't get your hopes up for a good price if this card competes with the 780. You can expect 780 prices.
 
Do you guys think this card will carry a premium price of $600+? And will Nvidia just respond by cutting prices on their line of cards?

Most people are saying it will be $500-$550.

From what I've seen and what seems most logical, it will have similar performance to the Titan, if not a bit higher. If that is the case, I would expect the Titan's price to drop, but the 780's not as much.
 
Most people are saying it will be $500-$550.

From what I've seen and what seems most logical, it will have similar performance to the Titan, if not a bit higher. If that is the case, I would expect the Titan's price to drop, but the 780's not as much.

Thinking about it, NV probably won't drop the price on Titan. It's a luxury card and not designed to compete on price/performance.

The higher clocked 780s basically match the Titan anyway. I could see NV keeping the Titan at $1k just to funnel more money out from the diehard fans that simply want the prestige of having the card.
 
Oh, for crying out loud. When we replace our 670s in a few months, we should buy four of them at once for the both of us. Maybe the egg will give us a group discount.

I'm not replacing anything until at least next year cause there's absolutely no need to.

Need better game engines, maybe when UE4 games start showing up.
 
I'm not replacing anything until at least next year cause there's absolutely no need to.

Need better game engines, maybe when UE4 games start showing up.


Looking forward to that right there. If there are any issues with BF4, they will be minor, so maybe I'll just stick it out as well. Haswell-E would be a nice time for a complete new build, perhaps. It's just that even the slightest glitch really gets under my skin, especially if it's preventable.
 
Still having a hard time deciding between a $350 7970 DirectCU II or a $550 9970. Thoughts?

I really doubt the 7970 will drop further than that anytime soon, and if the 9970 is only 20-30% faster but is more than twice as expensive at launch, I might just pull the trigger on a 7970...

I guess I'll wait till it launches. If I can get a 7970 DirectCU II for around $250-$300, I'd probably get that before getting the 9970 if it really is just around Titan performance.

I paid $400 for my first 5870, and it was the only part of my computer that I sort of regret buying (the second one cost $150). I don't think I can bring myself to buy another card for >$400.

Then go ahead and get a 7970. The 9970 will be way over your comfort zone ($$).
 
Also, regarding the "2GB is enough forever" crowd, I'm sure you're well aware of this old picture by now. BF4, 1080p, 2GB demolished.

 
The problem with BF3 (and I assume it will carry over to BF4) is that it isn't useful for the "2GB will run short soon" argument, because the game downscales texture quality whenever it sees you are memory-limited; only if you go even lower than 512MB or so will it crash from time to time (that happened to me a lot on my old 5670).


Although it might be a problem in games that don't implement this feature.
 