
AMD Radeon 6970/6950 Retail Pictured, released Dec 13-17, $500/$375 pricing

Save on the electricity bill in games where you don't need 200-300 fps? So every time it goes over 100 fps or so, it'll take it a bit easier? So you get a smooth 100 fps experience at lower power usage?

Exactly. I don't want my card still running as fast as it can when I'm playing CSS that's already running at 250fps and making a racket.
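The fps-capping idea above can be sketched in a few lines: a frame loop that sleeps away the leftover time budget so the hardware idles instead of rendering frames nobody sees. This is a minimal toy model, not how the driver actually does it.

```python
import time

def run_frame_loop(render_frame, fps_cap=100, frames=5):
    """Toy frame loop that sleeps to cap the frame rate, so the
    GPU/CPU idle instead of burning power on unseen frames."""
    target = 1.0 / fps_cap  # minimum time budget per frame
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < target:               # frame finished early:
            time.sleep(target - elapsed)   # idle out the rest of the budget

def fake_render():
    pass  # stands in for actual draw calls

t0 = time.perf_counter()
run_frame_loop(fake_render, fps_cap=100, frames=5)
duration = time.perf_counter() - t0
print(f"5 frames took {duration:.3f}s (at least 0.05s with a 100 fps cap)")
```

With a 100 fps cap, 5 trivial frames cannot complete in less than about 50 ms, because the loop sleeps out each frame's unused budget.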
 
Isn't that what vsync buys you?
Triple buffering uses the full power of the video card.

http://www.anandtech.com/show/2794/2
The name gives a lot away: triple buffering uses three buffers instead of two. This additional buffer gives the computer enough space to keep a buffer locked while it is being sent to the monitor (to avoid tearing) while also not preventing the software from drawing as fast as it possibly can (even with one locked buffer there are still two that the software can bounce back and forth between). The software draws back and forth between the two back buffers and (at best) once every refresh the front buffer is swapped for the back buffer containing the most recently completed fully rendered frame. This does take up some extra space in memory on the graphics card (about 15 to 25MB), but with modern graphics card dropping at least 512MB on board this extra space is no longer a real issue.

In other words, with triple buffering we get the same high actual performance and similar decreased input lag of a vsync disabled setup while achieving the visual quality and smoothness of leaving vsync enabled.

Now, it is important to note, that when you look at the "frame rate" of a triple buffered game, you will not see the actual "performance." This is because frame counters like FRAPS only count the number of times the front buffer (the one currently being sent to the monitor) is swapped out. In double buffering, this happens with every frame even if the next frame is done after the monitor is finished receiving and drawing the current frame (meaning that it might not be displayed at all if another frame is completed before the next refresh). With triple buffering, front buffer swaps only happen at most once per vsync.
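The frame-counter point in the quoted article can be shown with a toy model: the game renders far more frames than the monitor can show, but a counter that only tallies front-buffer swaps can never read above the refresh rate. This is an illustrative sketch, not FRAPS's actual accounting.

```python
# Toy model of why frame counters under-report triple buffering:
# the game renders at `render_fps`, but the front buffer is swapped
# at most once per monitor refresh, so a swap counter tops out at
# the refresh rate even though far more frames were actually drawn.

def count_front_buffer_swaps(render_fps, refresh_hz, seconds=1.0):
    frames_rendered = int(render_fps * seconds)  # back-buffer work done
    refreshes = int(refresh_hz * seconds)
    # Each refresh swaps in the newest completed frame (if any);
    # extra frames rendered between refreshes are simply dropped.
    swaps = min(frames_rendered, refreshes)
    return frames_rendered, swaps

rendered, swaps = count_front_buffer_swaps(render_fps=250, refresh_hz=60)
print(rendered, swaps)  # 250 frames drawn, but only 60 swaps counted
```

So a game like CSS running at 250fps internally would still report only 60 "fps" on a 60 Hz display with triple buffering enabled.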

Now I wonder how this power feature works with V-sync.
 
To be honest, I'd find that quite disappointing. It leaves people who don't want a dual gpu performance solution with only one option at the high end: the 580.

Why do you say that? Let's say it's within 10% of a 580, it's still a high end option.

That would be like saying if the 6970 beat a 580 then the 580 wouldn't be high end anymore.
 
I would think that purposely lowering the power a card uses could be done very easily through software; I don't see AMD expecting us to open our PC case and flip that little switch every time we exit Torchlight to fire up Crysis. We can already overvolt/undervolt most reference cards. We can already adjust the clock speed of the memory and GPU via software; it's even included in CCC. I can't see a little physical switch being there for that reason.
 
If the 6970 is beating the 570 by ~25% at a 190-watt cap limit... man oh man... imagine what happens when you remove the 190-watt cap and overclock these cards? This would def. kill 580 sales, if the 6970 is selling cheaper and beating it at lower power usage.

Then there's people on forums saying they OC like champs. I'm looking forward to the reviews a bit more now 🙂

We (you or I) can look at the PowerTune slider almost like the OC slider in CCC now.
And I ask you: would you review a card with that slider under-clocked? So why speculate that, just because there is a slider there, benchmarks might be run at less than full power? So we can dream of ever more performance, lol.
I fear endless addenda in debates over whether a reviewer really had the slider on full power when comparing results.
Why wouldn't they?

edit: I LIKE these new vapor chambers; the next card I get (a long way away) is going to have one of these, methinks.
 
If the 6970 is beating the 570 by ~25% at a 190-watt cap limit... man oh man... imagine what happens when you remove the 190-watt cap and overclock these cards? This would def. kill 580 sales, if the 6970 is selling cheaper and beating it at lower power usage.

Then there's people on forums saying they OC like champs. I'm looking forward to the reviews a bit more now 🙂

Your post does not make any sense. If the 6970 is beating the 570 by 25%, then it's already beating the 580. And further, if AMD could release a Cayman-based chip that beats the GTX580 while using less power AND being faster overall, why wouldn't they do that? Purposefully limiting their own chip's potential vs. its competition makes absolutely no sense whatsoever, especially if it can win in every given measurement.

Think.
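The arithmetic behind that objection is simple to check. Assuming the GTX580 is roughly 15-20% faster than the GTX570 (figures quoted elsewhere in this thread), a card 25% ahead of the 570 lands ahead of the 580 too. The numbers below are illustrative, not benchmarks:

```python
# Rough sanity check of the claim, assuming the GTX580 is ~15-20%
# faster than the GTX570 (the gap quoted elsewhere in this thread).
gtx570 = 100.0             # normalize GTX570 performance to 100
gtx580 = gtx570 * 1.20     # 580 at the high end of the gap: 120
hd6970 = gtx570 * 1.25     # rumored 25% over the 570: 125
print(hd6970 > gtx580)     # True: 25% over the 570 implies beating the 580
```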
 
I would think that purposely lowering the power a card uses could be done very easily through software; I don't see AMD expecting us to open our PC case and flip that little switch every time we exit Torchlight to fire up Crysis. We can already overvolt/undervolt most reference cards. We can already adjust the clock speed of the memory and GPU via software; it's even included in CCC. I can't see a little physical switch being there for that reason.

The switch is a dual BIOS switch

The actual TDP thing is done with sliders in AMD OverDrive:

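A TDP cap set by slider suggests a governor along these lines: if estimated board power exceeds the cap, the engine clock is scaled down until it fits. This is a hypothetical sketch of how such a scheme could work; the function name, the linear power model, and all numbers are illustrative, not AMD's actual implementation.

```python
def throttle_clock(base_mhz, est_power_w, tdp_cap_w):
    """Hypothetical PowerTune-style governor: if estimated board power
    exceeds the TDP cap, scale the engine clock down proportionally;
    otherwise run at full clock. Names and model are illustrative only."""
    if est_power_w <= tdp_cap_w:
        return base_mhz
    # Assume power scales roughly linearly with clock at fixed voltage.
    return int(base_mhz * (tdp_cap_w / est_power_w))

# A light load stays at full clock; a power-hungry load gets clamped.
print(throttle_clock(880, est_power_w=150, tdp_cap_w=190))  # 880
print(throttle_clock(880, est_power_w=250, tdp_cap_w=190))  # 668
```

Moving the slider up would raise `tdp_cap_w`, which is why reviewers arguing over slider positions (as above) matters: the same card at two cap settings clocks differently under heavy load.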
 
And this is the latest pricing from Gibbo at OCUK:

GTX 460 768MB = £105 - £115
5830 1024MB = £115-£130 **EOL** ***Practically 5850 performance - BARGAIN***
6850 1024MB = £130-£140
GTX 460 1024MB = £140-£160
5850 1024MB = £130-£150 **EOL** ***Stock Non-Existent***
6870 1024MB = £170-£190
5870 1024MB = £180-£200 **EOL** ***Nothing sub £200 beats this***
GTX 470 1280MB = £190-£220 **EOL**
6950 2048MB = £220-£230 ***Expect £10 price increase in January***
GTX 480 1536MB = £250-£270 **EOL**
GTX 570 1280MB = £250-£290
6970 2048MB = £285 - £320 ***Expect £20 price increase in January***
GTX 580 1536MB = £350-£450 **Supply & Demand will keep this high**
6990 4096MB = £450-£500

If true, and 6970 is close to the 580, then it will sell really damn well. FWIW, those are basically 58xx launch prices, without the supply issue, so that's very nicely priced.

5-10% slower than the 580 but $100 cheaper is a-okay in my book
 
Hopefully for AMD it beats the 580, because right now at the egg they only have one brand in stock. Those things are selling well I guess. Not much time left to get them holiday $$$. Looking forward to the reviews tomorrow.

I think they usually go by 12:00AM PST?
 
I read the NDA does not lift until 12:00 AM Dec 15 Eastern,
9 PM PST Dec 14 for the West Coast. So about 24 hours?
 
Kitguru is concurring with a new Fudzilla web article.
AMD HD6970 slower than nVidia GTX580, confirmed by Fudzilla

KitGuru says: So there you have it, HD6970 is markedly slower than the GTX580, although it should also be significantly cheaper.

:thumbsdown: 3dMark11 is not a real game.

Hopefully for AMD it beats the 580

Remember this?

June 16, 2008
GTX280 - $649 MSRP
GTX260 - $399 MSRP

June 25, 2008
HD4870 - $299 MSRP

Performance is only 1 part of the equation - we still need price. If HD6970 arrives at $349 with performance between GTX570 and 580, it's game over for both of those NV cards.
 
:thumbsdown: 3dMark11 is not a real game.



Remember this?

June 16, 2008
GTX280 - $649 MSRP
GTX260 - $399 MSRP

June 25, 2008
HD4870 - $299 MSRP

Performance is only 1 part of the equation - we still need price. If HD6970 arrives at $349 with performance between GTX570 and 580, it's game over for both of those NV cards.

I paid less than the cost of a 4870 for my 260 (at the time). Don't forget that the 5xxx series actually went up in price after release. NVIDIA has room to adjust prices.
 
I paid less than the cost of a 4870 for my 260 (at the time). Don't forget that the 5xxx series actually went up in price after release. NVIDIA has room to adjust prices.

Ya that's true. I am just saying price is very important. If HD6970 is 5-10% slower than GTX580 but costs $449, that's totally different than if it launches at $349 or even $399. Perhaps AMD will force NV to lower prices once again as was the case with GT200. Although a part of me would still be mildly disappointed that after 14 months since HD5870, we are still stuck with only a 35% performance increase for the top single-GPU GTX580 if HD6970 doesn't beat that. Right now NV is charging $150 extra over a GTX570 for another 15-20% performance, which is way too steep. We need more competition.
 
I paid less than the cost of a 4870 for my 260 (at the time). Don't forget that the 5xxx series actually went up in price after release. NVIDIA has room to adjust prices.
Yeah, I paid $50 less for my gtx260 than what I could get a 4870 1GB for at the time, but those 260/280 release prices were laughably bad and IMO a bit arrogant.
 
I paid less than the cost of a 4870 for my 260 (at the time). Don't forget that the 5xxx series actually went up in price after release. NVIDIA has room to adjust prices.

NV will have to drop prices if the 6970 is anywhere near or better than the GTX580.
Can they do it? Sure. Making a huge profit in HPC helps a lot.
 
Yeah, I paid $50 less for my gtx260 than what I could get a 4870 1GB for at the time, but those 260/280 release prices were laughably bad and IMO a bit arrogant.

Oh, I agree. Until the price drop the 260 was out of my range. Just like the 470 I have now. Good things come to those who wait. ^_^
 