Pitcairn with memory boosts, not just a relaunch.
Tahiti will not be EOL for a while. AMD is very happy with how Tahiti has done.
The safest way to retake the crown would be going for a gaming chip stripped of HPC features, something like 3 x Bonaire.
So of course it's doable, but I don't see them doing it with the big HPC chip (imho Kepler is simply more efficient), or in a convincing way with the big gaming chip.
As a further change to the frontend, the number of geometry engines and command processors (ACEs) has been doubled compared to Cape Verde, from 1 to 2 each, giving Bonaire the ability to process up to 2 primitives per clock instead of 1.
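To put that doubling in raw numbers, here's a minimal back-of-the-envelope sketch; the 1000 MHz clock is a hypothetical placeholder, and it assumes one primitive per engine per clock:

```python
# Peak primitive (triangle setup) throughput from the frontend change above.
# The clock speed is an assumed placeholder, not a real product spec.

def primitive_rate(geometry_engines: int, clock_mhz: float) -> float:
    """Peak primitives per second, assuming 1 primitive/engine/clock."""
    return geometry_engines * clock_mhz * 1e6

cape_verde_like = primitive_rate(geometry_engines=1, clock_mhz=1000)
bonaire_like = primitive_rate(geometry_engines=2, clock_mhz=1000)

print(f"1 engine : {cape_verde_like / 1e9:.1f} Gprims/s")  # 1.0
print(f"2 engines: {bonaire_like / 1e9:.1f} Gprims/s")     # 2.0, doubled
```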
YES!
As it clearly shows on their record margins ^_^
Sorry Char-lie, not everyone is privy to your wealth of information.
So no, I did not forget NV had GK110 canned, I simply have no idea what you are talking about.
BTW did you mean GK100?
I meant GK100, and considering how keen you were to call out a typo, I assume you don't have anything to say about my point, which still stands, btw.
Unless you've already entered denial mode about there never having been a GK100 to begin with, because for some weird reason a company that relies heavily on the HPC market didn't continue its traditionally big-die, compute-oriented top GPU product, only to go back to that route what, less than a year later?
Interesting how trolls function.
Even on 28nm, a die of about 440mm²-450mm² would suffice for this level of performance.
No, not at all. But I could see $600 or $650 for a card on parity or even better once both are overclocked.
I only see this happening, though, if AMD truly is going with a large die, something they have not done in a long time. If it's just a bit bigger than Tahiti, it's not going to happen. It would need to be 500mm², or very near it, at least if it's going to take on the 780/Titan.
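For what it's worth, here's the rough arithmetic behind both die-size camps, assuming performance scales linearly with area at constant perf/mm² (a strong simplification) and using speculative speedup targets:

```python
# Naive die-size estimates under constant perf/mm². All numbers illustrative;
# only Tahiti's die area is a published figure.

TAHITI_AREA_MM2 = 365  # Tahiti's die size

for speedup in (1.25, 1.35):  # assumed performance gap to a Titan-class part
    print(f"{speedup:.0%} of Tahiti -> ~{TAHITI_AREA_MM2 * speedup:.0f} mm^2")

# 125% -> ~456 mm^2 (the 440-450mm² camp above)
# 135% -> ~493 mm^2 (closer to the ~500mm² camp, before pricing in the
# sub-linear scaling of frontend and memory, which pushes it higher still)
```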
It's not a matter of being "able to fit into all markets" but a lack of variation on the 20nm node. With their work on FinFET still in progress, there's simply not enough R&D to provide a high-performance process yet. Plus, there's more business in the mobile market.
http://i.imgur.com/L9FOaBO.png
TSMC's 20nm SoC is the equivalent of GlobalFoundries' 20nm LPM. Both nodes are able to fit into all markets.
It really depends on the relationship of AMD to its foundries.
One thing that made Kepler more efficient than GCN1 is its ability to clock down its compute units separately when not needed. At some point, AMD should follow with its next GCN architecture. Knowing the GCN1.1 substitutes were canned, I speculate that VI will be a great leap forward in terms of architectural efficiency, in both perf/mm² and perf/watt, while still having better compute efficiency per watt and per mm² than Bonaire. The 9970 will NOT be a tweaked Tahiti.
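To illustrate why per-unit clock gating matters, a toy power model; the wattage figures and the linear model are assumptions for illustration, not measured numbers for Kepler or any GCN part:

```python
# Toy model: power draw with and without per-unit clock gating.
# IDLE/ACTIVE figures are made-up placeholders.

IDLE_W_PER_CU = 1.5    # assumed draw of a clocked-but-idle compute unit
ACTIVE_W_PER_CU = 6.0  # assumed draw of a busy compute unit

def chip_power(total_cus: int, busy_cus: int, per_unit_gating: bool) -> float:
    if per_unit_gating:
        # Idle units are gated and draw (approximately) nothing.
        return busy_cus * ACTIVE_W_PER_CU
    # All units stay clocked, busy or not.
    return busy_cus * ACTIVE_W_PER_CU + (total_cus - busy_cus) * IDLE_W_PER_CU

# A lightly loaded game keeping 8 of 32 units busy:
print(chip_power(32, 8, per_unit_gating=False))  # 84.0 W
print(chip_power(32, 8, per_unit_gating=True))   # 48.0 W
```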
You take a different position. I welcome it. All I'm saying is that I fully expect 2GB to be exceeded when maxed @ 1080p.
I'm going off of experience, that's all. Experience must not be worth anything anymore compared to slides with colored bars on them. My GTX 570s had 1.2GB of RAM. BF3 maxed out at 1.5GB of RAM usage. That caused hitching and skips, not poor FPS. It's like it took a second to load textures or something because I didn't have enough VRAM.
Hopefully our 670s will not have any issues. Maybe 2GB will be enough. I am hoping like you are (you should be hoping, based on your sig).
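For a feel of where VRAM goes at 1080p, a hedged sketch of render-target memory alone; the RGBA8 format, 4x MSAA, and five-buffer deferred setup are illustrative assumptions, not BF3's actual configuration:

```python
# Rough render-target footprint at 1080p. Real engines allocate far more
# than this (textures, meshes, shadow maps), which is the point: targets
# alone already eat a visible chunk of a 1.2GB card.

WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 4  # e.g., an RGBA8 target

def target_mb(msaa: int = 1, targets: int = 1) -> float:
    return WIDTH * HEIGHT * BYTES_PER_PIXEL * msaa * targets / 2**20

# Assumed deferred setup: 4 G-buffer targets plus depth, at 4x MSAA.
print(f"~{target_mb(msaa=4, targets=5):.0f} MB")  # ~158 MB
```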
It's also interesting how people think releasing GPUs for cheap is a bad idea, as if from the perspective of a shareholder and not just a consumer.
I would have expected most people here to just look at it from an immediate consumer perspective.
That 1 year makes a lot of difference in terms of yields and thus profitability. Nvidia simply went with the more profitable market first. Only a fool would have believed that they would go for the gaming market with a GK100 first and repeat their mistakes. There was no GK100.
Sorry, but the most profitable market for them is HPC. If your HPC-oriented part, the GK110, comes 6 months later than your gaming-oriented ones, and the one before that, and the GK100 magically disappeared from existence :awe:, when there were in fact a GF100 and a GF110 before them (even at horribly low yields at first, they still hit the market eventually), it shows you that something did happen to make them rush GK110 and thus shorten their refresh cycle on the same node.
If things didn't go wrong with GK100, why did NV first show a 2xGK104 part (the K10), a dual-GPU gaming card dressed up as an HPC part? Because the big-die, compute-oriented chip, the one really suited for that market in that generation, was nowhere to be seen, and its successor was already being rushed in to make up for it.
That K10 was their Richland: a product recycled from existing ones just to fill the gap left by another that was coming late because its first iteration was simply broken or didn't meet expectations, and thus was discarded.
Even the codename is telling you that GK110 is a revamped Kepler on the same node, just as GF110 was of GF100. Only this time it didn't even make it to market, as it either yielded horribly worse than its predecessor did at 40nm, or had yields as bad as GF100's at the time; but now, with NV having to pay per wafer (instead of paying per good die), the numbers didn't add up, so they canned it and rushed GK110 while waiting for 28nm to yield a little better.
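Whatever one believes about GK100, the pay-per-wafer vs pay-per-good-die point is simple arithmetic; the wafer price and die counts below are placeholder assumptions:

```python
# Who eats the yield risk changes the cost per chip dramatically.
# All figures are made up for illustration.

WAFER_COST = 5000.0          # assumed price of one 300mm wafer, in dollars
GOOD_DIES_HEALTHY_YIELD = 80
GOOD_DIES_BAD_YIELD = 15

# Pay-per-good-die: the foundry eats the yield risk, so the price per die
# is effectively fixed at the healthy-yield level.
print(f"pay-per-good-die: ${WAFER_COST / GOOD_DIES_HEALTHY_YIELD:.0f} per die")

# Pay-per-wafer: the customer eats the yield risk.
for good in (GOOD_DIES_HEALTHY_YIELD, GOOD_DIES_BAD_YIELD):
    print(f"pay-per-wafer, {good} good dies: ${WAFER_COST / good:.0f} per die")

# At 15 good dies per wafer, each die costs over 5x as much -- the
# "numbers didn't add up" scenario described above.
```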
It is naive to believe that a 550mm² die would have been ready early in the 28nm game. No way in hell. But for gaming they needed something, hence GK104.
GF100 came out in March 2010, but the first 40nm GPU, the HD 4770, arrived about a year earlier.
You have to look at the process: there was no 28nm GPU before Tahiti; the process was brand new. Thus, GK110 was not later into 28nm than GF100 was into 40nm. The only difference is that this time around Nvidia released GK104 during that "waiting time".
About the nomenclature:
The different names (10x vs 11x) stem from different compute capability afaik. I was told this by a knowledgeable source.
What is true is that GK110 yields were very bad in the beginning (about 15%, iirc). What is also true is that they could not have released it any sooner, even for HPC. What is not true is that there was a GK100 that got canned.
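For scale, a rough dies-per-wafer estimate for a 550mm² die, using a common approximation (gross dies minus an edge-loss term) together with the ~15% yield figure quoted above; treat the result strictly as a ballpark:

```python
import math

WAFER_DIAMETER_MM = 300
DIE_AREA_MM2 = 550
YIELD = 0.15  # the early-GK110 figure quoted above

# Common approximation: wafer area over die area, minus an edge-loss term.
radius = WAFER_DIAMETER_MM / 2
dies_per_wafer = (math.pi * radius**2) / DIE_AREA_MM2 \
    - (math.pi * WAFER_DIAMETER_MM) / math.sqrt(2 * DIE_AREA_MM2)

print(f"~{dies_per_wafer:.0f} die candidates per wafer")     # ~100
print(f"~{dies_per_wafer * YIELD:.0f} good dies per wafer")  # ~15
```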
I have confidence...not hope 😀
Do you guys think this card will carry a premium price of $600+? And will Nvidia just respond by cutting prices on their line of cards?
Most people are saying it will be $500-$550.
From what I've seen and what seems most logical, it will have similar performance to the Titan, if not a bit higher. If that is the case, I would expect the Titan's price to drop, but the 780's not as much.
Oh, for crying out loud. When we replace our 670s in a few months, we should buy 4 of them at once for the both of us. Maybe the egg will give us a group discount.
I'm not replacing anything until at least next year because there's absolutely no need to.
Need better game engines, maybe when UE4 games start showing up.
Still having a hard time deciding between a $350 7970 DirectCU II and a $550 9970. Thoughts?
I really doubt the 7970 will drop further than that anytime soon, and if the 9970 is only 20-30% faster but is more than twice as expensive at launch, I might just pull the trigger on a 7970...
I guess I'll wait till it launches. If I can get a 7970 DirectCU II for around $250-$300, I'd probably get that before getting the 9970 if it really is just around Titan performance.
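The value math being weighed here is straightforward; in this sketch the prices are the ones quoted in the thread, while the 9970's performance uplift is purely speculative:

```python
# Perf-per-dollar comparison. Relative performance of the unreleased 9970
# is an assumed +25% over the 7970, not a known figure.

cards = {
    "7970 DirectCU II": {"price": 350, "relative_perf": 1.00},
    "9970 (rumored)": {"price": 550, "relative_perf": 1.25},
}

for name, c in cards.items():
    value = c["relative_perf"] / c["price"] * 1000
    print(f"{name}: {value:.2f} perf per $1000")
# 7970: 2.86 vs 9970: 2.27 -- at these assumed numbers the cheaper card
# wins on raw value, which is the pull-the-trigger logic above.
```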
I paid $400 for my first 5870, and it was the only part of my computer that I sort of regret buying (the second one cost $150). I don't think I can bring myself to buy another card for >$400.