What is not so great is the lack of competition from AMD; they still have just the old 6000 series in this segment. What are they doing???
That won't stop it from choking at 4K, and eventually 1440p just as quickly, once more games come out that push the VRAM buffer.
It will be fine for a while, just like the 970 was, and then it won't be. It won't choke as fast as the 8GB cards (the 960 2GB back then) did, but it won't last like the 980 Ti (today's 4090) did either.
Trying to unload RDNA2 stock.
Cool. A lot of people who want 4070-level performance will buy this new card instead of the old and power-hungry RX 6000 cards, and that's just because they have nothing to buy from AMD.
As I said before, I tuned my 6800XT to use less than 200W and still get stock performance, while having 33% more VRAM, and I still paid less than the $600 MSRP of the 4070. Yes, it would be nice if AMD had new cards to compete, but the fact that last-gen cards still compete says quite a bit about this release from Nvidia.
The 6950 XT should be 10-15% faster in raster than the 4070, and you get more VRAM. I doubt AMD is happy about selling the 6950 XT at $599 or so, but that's what they are doing.
Where are you finding it at $599? The best I can find is $649 at Microcenter.
The 6950XT is a power hungry, breaker-tripping monster from the past compared to the 4070.
I don't know why they would be that mad, though. It's got essentially the same BoM as the 6800XT, and that launched with a $650 MSRP, which certainly had some profit margin baked in. Given how long the chip has been in production, I would expect yields to be solid by now. It still commands a small premium over the 6800XT as well. I am guessing AMD is "fine" with ~$600-$700 for the 6950XT, for a somewhat grudging value of "fine" 😉
No doubt a similarly performing 7800XT would be more efficient out of the box and carry a better margin (or maybe we would doubt it?), but it's not like they are totally MIA from the segment. nvidia has pushed the pricing models up high enough that you can imagine some at AMD wondering why they would pay to develop a new $600 part at all, when what they have is "fine" and they continue to "win" at prices below $600, where surely many sales are being made in the retail channel. By "winning" I mean their products are already highly competitive on all fronts and they are probably wooing as many GPU-curious buyers as they can.
Sure, it's no EPYC or even a FireGL chip, but it's not like the 7nm TSMC chips are the hot stuff they want to be using for those products anyway.
With regard to the 4070, the real sin is that the RDNA 2 parts are old, not that they are too slow or really too power hungry with even a modicum of tuning. I am certain the efficiency talk is mostly lip service at this point, given how expansive the power budgets for CPUs and GPUs have been in the last few years, and anyone who really cares about power consumption is going to be limiting frame rates, etc. to keep their power bill in check.
What I would like to see (for grins) is a power usage graph at, say, a capped 90 fps at 1440p high settings on these cards. Everyone would argue about what those settings should be, I get it, but running uncapped FPS and measuring power usage is silly given how trivial it is to trim it down. I suppose it's like every test being run at "Ultra" settings when we know there is likely something in that "ultra" that kneecaps frame rates and is nearly imperceptible in game.
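If anyone wants to try the measurement half at home, here is a minimal sketch that just averages the card's reported power draw while you play with the cap on. It assumes an NVIDIA card with nvidia-smi on the PATH and a single GPU; the 90 fps cap itself would be set in the driver or with an external limiter beforehand.

```python
# Sample GPU power draw once a second and report the average.
# Assumes a single NVIDIA GPU and nvidia-smi available on the PATH.
import subprocess
import time

SAMPLES = 120      # ~2 minutes at one reading per second
INTERVAL_S = 1.0

readings = []
for _ in range(SAMPLES):
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    # One line per GPU; this sketch assumes a single-GPU system.
    readings.append(float(out.stdout.strip()))
    time.sleep(INTERVAL_S)

print(f"average draw: {sum(readings) / len(readings):.1f} W "
      f"over {len(readings)} samples")
```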
So you mean, AMD RDNA2 is like a classic muscle car from the 60s...
Anecdotal thing here, but I was running a 3DMark stress test in a window so I could monitor temps, etc. in another window. I guess it locked me to 165 FPS, likely due to a windowed-versus-fullscreen setting in the drivers. Anyway, I was getting 165 FPS at 27% GPU utilization on my 4090, and doing it all within about a 200W power budget.
Nice - even anecdotal, that is the kind of thing I am talking about. With Radeon Chill my 6800 is constantly flirting with ~100W (I'll have to check it again) while I game at my preferred settings.
If you were happy with "just" 165 fps, then you are running effortlessly (given the extra ~200W of potential power draw?) inside the max power budget of the 4070. And if the 4070 has to run in full bonkers mode to get 165 fps and use the same 200W, what kind of efficiency increase are we even seeing?
So I guess I am saying, normalized for some performance level, what is the power usage?
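Put concretely, the normalization I have in mind is just energy per frame. A toy sketch with illustrative numbers pulled from this thread (guesses, not measurements):

```python
# Normalizing power to performance: joules per frame = watts / fps.
# The wattages below are illustrative guesses from this thread,
# not measurements.
def joules_per_frame(watts: float, fps: float) -> float:
    return watts / fps

FPS_CAP = 165  # the frame cap from the 3DMark anecdote above

scenarios = {
    "4090 at ~27% utilization (anecdote)": 200.0,  # watts, rough
    "4070 near full tilt (hypothetical)": 200.0,   # watts, assumed
}

for name, watts in scenarios.items():
    print(f"{name}: {joules_per_frame(watts, FPS_CAP):.2f} J/frame")
# Both come out to ~1.21 J/frame: same energy per frame, so no real
# efficiency win at this normalized performance level.
```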
If you shoot for settings both cards can run a game at 165 FPS at, then the power budgets would be nearly identical, but % utilization for the 4070 would be around 75% (a rough estimate) compared to 27% on my 4090 in this 3DMark stress test example. They're the same architecture, and the only advantage the 4090 gives is the ability to run at settings the 4070 isn't capable of running, but it will use more power overall when doing so.
I have done this with an RX 6400 and an RX 6600XT. The 6600XT can run the same game, same settings, same frame cap, and use LESS power than the 6400.
The problem is that the 4070 is using the third largest die this generation, when normally the xx70 uses the second largest die. Most people wouldn't notice since it's still a 104 die, but this time the xx80 GPU is using AD103, and historically the xx70 GPU was just a more cut-down version of that same die.
In any previous generation this would have been an xx60 card. Over time the xx70 branding has really slid down the stack. If you go back far enough, the xx70 cards were the second-best Nvidia card you could buy, but now the tier has been shoved so far down the stack that it's about as close to the bottom as it is to the top.
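A quick back-of-the-envelope makes the point. These are the commonly reported Ada die sizes, so treat the exact figures as approximate:

```python
# Approximate die sizes for the Ada lineup, as commonly reported (mm^2).
ada_dies = {
    "AD102 (RTX 4090)": 608.4,
    "AD103 (RTX 4080)": 378.6,
    "AD104 (RTX 4070 Ti / 4070)": 294.5,
    "AD106": 187.8,
    "AD107": 158.7,
}

flagship = max(ada_dies.values())
for name, area in sorted(ada_dies.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {area:6.1f} mm^2  ({area / flagship:.0%} of AD102)")
# AD104 lands at roughly 48% of the flagship die: third largest of
# five, i.e. about as close to the bottom of the stack as the top.
```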
Cool explanation, thanks for taking the time.
Jensen tries to tell us that Moore's law is dead, but it really seems like Nvidia has just shifted the dies down the stack. So if this were a 60-series card it would have previous-gen 80-series performance, just like it did before. On top of that, they raised the price point. So it's really a 60-series card for $600. Yikes.
Just marketing mind games. The numbers mean nothing, really, except what nvidia wants them to mean.
The marketing department figured out the "70" number was really valuable and had already launched the "90" number, so now it's time to milk it. Think of the shareholder value. 😉
Yep, when the top dog is now a 90-class card where it used to be an 80-class card, the 70-class card is basically what used to be a 60-class card. A 60-class die should be roughly middle of the pack, at prices affordable and accessible to most people. $600 is not exactly affordable and accessible to most.