[ArsTechnica] Next-gen consoles and impact on VGA market


Keysplayr

Elite Member
Jan 16, 2003
I hope they give the new consoles, especially whatever Nintendo is going to offer next (the Wii successor), much more capable hardware. My kids' Wii barely copes with some games. They play Skylanders and you can really see it taxing the system with low framerates.
 

railven

Diamond Member
Mar 25, 2010
I hope they give the new consoles, especially whatever Nintendo is going to offer next (the Wii successor), much more capable hardware. My kids' Wii barely copes with some games. They play Skylanders and you can really see it taxing the system with low framerates.

While probably not an option for you, you can look into the Dolphin emulator. It does wonders for poorly performing Wii games. I can't play Wii games on an actual Wii anymore.

I use it for the image quality, but it fixes low framerate issues too.
 

pelov

Diamond Member
Dec 6, 2011
Do you have proof the hd6670 is slower than the hd4770? Also does nvidia have a better alternative to the hd6670 (i.e. faster for the same thermal/wattage envelope, or cheaper for the same speed)?

He doesn't, and he doesn't, because neither is true. The 4770 had higher GFLOPS but was only DX10.1, meaning it was never an option in the first place. He's talking out of his... you know what =P

28nm was never an option either. In case you guys haven't noticed, neither nVidia nor AMD has made any sub-$100 28nm GPUs, and neither has any planned. With the way 28nm has turned out at TSMC (significant ramp-ups won't happen until Q3 and Q4, far too late for the console makers and far too late to bring prices down significantly), the console makers decided to stick with 40nm: high volume, capable hardware, and lower price tags. They learned the first time around that pricey cutting-edge hardware sounds great on paper, but the $500 you lose every time you sell a console is $500 of profit gone. That's just not going to happen anymore.

The 6670 has roughly 3x the power of the current PS3, and for some people that's not enough. It's plenty for a console. If you don't mind paying $1000 for a console with a GTX 680 and a 2600K, then you're in the minority and doing it wrong: build yourself a PC.

The most shocking thing to me about this thread is seeing how vehemently people are defending such idiotic reasoning... Kepler doesn't exist for consoles (it doesn't exist at all outside of one ~$500 GPU that is rarely in stock), and there was never a chance in hell of 28nm making it into the next Xbox or the PS4 (pricing and volume). Both companies wanted cheaper hardware (we knew this years ago...) and they wanted nothing to do with nVidia (we also knew this years ago).
 

blckgrffn

Diamond Member
May 1, 2003
So, put me down as skeptical, but I doubt that either MS or Sony will use *purely* off-the-shelf hardware.

AMD is likely going to create custom silicon that incorporates custom DRM for both companies (you know, to prevent piracy straight to PCs - how cool would a "hackinPS4" be?), and AMD knows that tens of millions of units will ship.

Furthermore, who is paying to shrink VLIW - unless it is in an APU? We've already covered a) that both companies want cheap hardware and b) that getting 28nm spun up is not a trivial task, so if MS/Sony want small, power-efficient chips, it seems they'll either be using a modified Trinity (more cores? more memory channels? XDR support? more GPU cores?), a PD/GCN discrete combination, or a custom APU that incorporates GCN.

Furthermore, it makes way, way, way too much sense to implement an APU in a console and share the memory - especially if some compute is being performed on the GPU. I would think 1GB is a realistic amount of memory for them to use, especially if they use something custom or exotic; 2GB of combined RAM is my upper-bound expectation.
It would seem that AMD would really want GCN adoption, as it would allow them to really build out the compute infrastructure in a cohesive way across many different platforms.

As to tessellation, it would likely become a pretty standard game feature, even if it isn't cranked to insane levels and applied to every concrete surface in the game. This is still good for PC game visual fidelity.

I hope that we will see some amount of SLC flash (4GB?) for I/O caching. With full control of the hardware, this could do wonders for loading times, etc., especially with modern CPUs that can handle decryption much more quickly - which is one of the overlooked "features" of moving to less geriatric CPUs. What did that Braid dev say? That levels which loaded nearly instantly on his dev kit took something like 30 seconds once MS added all their encryption? Something like that...
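(For illustration only: a minimal sketch of the read-through flash-cache idea above. The directory, the 4GB budget, and the eviction policy are my assumptions, not any console's actual API.)

```python
import os
import shutil

# Sketch of a read-through flash cache in front of slow optical storage.
# CACHE_DIR and the 4GB budget are illustrative assumptions, not real console APIs.
CACHE_DIR = "/flash/cache"
CACHE_BUDGET = 4 * 1024**3  # ~4GB of SLC flash, as speculated above

os.makedirs(CACHE_DIR, exist_ok=True)

def read_asset(disc_path: str) -> bytes:
    """Return asset bytes, copying from the disc to flash on first access."""
    cached = os.path.join(CACHE_DIR, disc_path.strip("/").replace("/", "_"))
    if not os.path.exists(cached):
        _evict_until_fits(os.path.getsize(disc_path))
        shutil.copyfile(disc_path, cached)  # the slow optical read happens once
    with open(cached, "rb") as f:           # later loads hit fast flash instead
        return f.read()

def _evict_until_fits(incoming: int) -> None:
    """Drop least-recently-accessed cached files until the new asset fits."""
    files = [os.path.join(CACHE_DIR, n) for n in os.listdir(CACHE_DIR)]
    files.sort(key=os.path.getatime)  # oldest access first
    used = sum(os.path.getsize(f) for f in files)
    while files and used + incoming > CACHE_BUDGET:
        victim = files.pop(0)
        used -= os.path.getsize(victim)
        os.remove(victim)
```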

Anyway, there is a lot of room for the next-gen consoles to be a lot better - and on the cheap. Console gamers, on the whole, are going to be pretty excited about 1080p 30 FPS and 720p 60 FPS (with the odd 1080p 60 FPS title thrown in there) and, holy hell, the textures a 512/768/1024MB frame buffer can provide! w00t! :p Honestly, when I think about gaming on my 360/1080p projector/90" screen, those improvements even make me a little excited about a new console. Which is a little sad.

Another "cost cutting" feature not really mentioned so far is that by using more generic hardware with more traditional memory coherency and such, building and optimizing game engines and assets is going to require less specialized knowledge and time (an easy way to blow lots of $$$). This is a "win" for devs as well.

@ Pelov - if they are putting consoles together starting a year from now, is 40nm still going to be the go-to process? Hmm... I suppose those are really going to be some cheap parts, but they aren't going to get APUs on that process. I still think that APUs are going to be the real engines of the next consoles, mainly so that one pool of memory can be used for all console purposes (much of which is no longer gaming in the traditional sense...)

Or are they going to have an APU (where the GPU is available for compute/CF) during "gaming" and then a discrete GPU that is powered off during other periods? That would make some sense to me, I guess... although it would seem that the Trinity APU is going to be about as powerful as a 6670, especially if they did something like add an EDRAM chip or move to a much wider (256-bit? higher?) memory interface for the APU.
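(A quick back-of-envelope on what a wider interface would buy; the bus widths and transfer rates below are illustrative guesses, not leaked specs.)

```python
def bandwidth_gb_s(bus_width_bits: int, transfer_rate_mt_s: int) -> float:
    """Peak theoretical bandwidth: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * transfer_rate_mt_s / 1000

# Illustrative configurations (assumptions, not real console specs):
print(bandwidth_gb_s(128, 1866))  # ~29.9 GB/s - 128-bit DDR3-1866, Llano/Trinity-like
print(bandwidth_gb_s(256, 1866))  # ~59.7 GB/s - hypothetical 256-bit DDR3-1866 APU
print(bandwidth_gb_s(128, 4000))  # ~64.0 GB/s - 128-bit GDDR5 @ 4Gbps, HD 6670-class
```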
 

Bobisuruncle54

Senior member
Oct 19, 2011
Another point I think people are missing: the reason our current console generation has lasted so long is that it's taken this long for manufacturing capabilities to catch up and for Sony and Microsoft to recoup some of the losses from the initial years. By making cheaper and more power-efficient consoles, the next generation might be much shorter, since the companies won't have to stretch it out just to reach profitability.

No, not at all. You can bet your ass that, unless forced to by dwindling sales, Microsoft and Sony will stick to a more profitable sales model for consoles if the next generation sets that bar. No way will they go "OK, we'll make a loss on the next one again guys, it's only fair." If there's a profit to be made, they'll make it.

So if the next gen is a proportionately pathetic increase, be prepared for this to happen to every generation hence unless market pressure forces them to change back to the old model.
 

blckgrffn

Diamond Member
May 1, 2003
No, not at all. You can bet your ass that, unless forced to by dwindling sales, Microsoft and Sony will stick to a more profitable sales model for consoles if the next generation sets that bar. No way will they go "OK, we'll make a loss on the next one again guys, it's only fair." If there's a profit to be made, they'll make it.

So if the next gen is a proportionately pathetic increase, be prepared for this to happen to every generation hence unless market pressure forces them to change back to the old model.

Right.

As hardware (compute and rendering) has diminished as a differentiating factor and more focus is placed on the "online experience", application ecosystems, and input hardware, the one-upmanship has moved into different areas... IMHO.
 

Bobisuruncle54

Senior member
Oct 19, 2011
Right.

As hardware (compute and rendering) has diminished as a differentiating factor and more focus is placed on the "online experience", application ecosystems, and input hardware, the one-upmanship has moved into different areas... IMHO.

Agreed, but it's a fluid thing, isn't it? Sony and Microsoft concentrated on online capability and media options, while Nintendo focused on control methods and "party"-based games. We've already seen Sony and Microsoft's answers to Nintendo's success, which raises the question: why release new consoles at all? Usually the answer would be obvious - more power and bigger, better games (in principle at least, just by virtue of more powerful hardware). Now the motivation seems different, as many consumers argue that they see no need for a more powerful console. This leads me to believe there are two options both companies could take:

First, the strong-arm option:
Force development onto a new, more powerful platform that is profitable from release, enticing developers by giving them more resources to play with (but not a huge leap over the last generation). Lock down the platform to maximize profits wherever possible - more DLC, limit second-hand game sales or eradicate them altogether, don't allow people to borrow or rent games, require an always-on connection to their servers. Centralize all these services under their control to maximize profits and gather more information to "tailor" games to the market and help maximize the investments each company makes (first-party developers, etc.).

The second, much less likely scenario:
Realise that many consumers are happy with the current capability of consoles, and that both Microsoft and Sony already have options for every part of the market, from casual to hardcore, to party (motion controllers), to online. Therefore a big leap, comparable to past jumps, will be needed to entice consumers to the new platform; it will need that "wow" factor. So both companies aim for the Unreal Engine Samaritan demo to be the de facto standard of next-gen capability.


The demands are the same, but the mindset is different. The first sees console gaming purely from a business point of view - "how much money can we make?" The second is the more traditional view: still wanting to make money, but accepting that consumers demand more from a new generation.

In short, we can blame the average consumer for the next generation of consoles. We may all disagree with it, but we're not average consumers. They don't understand hardware, chip yields, manufacturing processes, or game development. So how will they know if they're getting less than they should based on the existing model? Simple: they won't.
 

pelov

Diamond Member
Dec 6, 2011
@ Pelov - if they are putting consoles together starting a year from now, is 40nm still going to be the go-to process? Hmm... I suppose those are really going to be some cheap parts, but they aren't going to get APUs on that process. I still think that APUs are going to be the real engines of the next consoles, mainly so that one pool of memory can be used for all console purposes (much of which is no longer gaming in the traditional sense...)

Or are they going to have an APU (where the GPU is available for compute/CF) during "gaming" and then a discrete GPU that is powered off during other periods? That would make some sense to me, I guess... although it would seem that the Trinity APU is going to be about as powerful as a 6670, especially if they did something like add an EDRAM chip or move to a much wider (256-bit? higher?) memory interface for the APU.

28nm feasibility depends on how TSMC manages. Given the lower price of the consoles and the rumored hardware, you'd have to figure that volume is going to be a significant issue. If they can't produce X number of viable chips by Y date, they'll opt for the safer, more reliable node. MS/Sony are expecting to sell more consoles on day one this time around, and far more down the line. The consoles are cheaper and the hardware isn't as advanced this generation.

http://www.techpowerup.com/163691/TSMC-Faces-Acute-28-nm-Capacity-Shortage.html

To add to this, apparently nVidia has seen more yield/supply issues than the rest. At the moment 28nm isn't even an option.

The 32nm process at GloFo has improved drastically since summer of last year and by the end of the year GloFo beat their 32nm expectations for 2011. It seems like they'll probably have enough APUs to fit into consoles regardless of the architecture, be it Llano or Trinity. Fab 8 in Fishkill here in NY has also started running as of earlier this year.

The node isn't too big of an issue, though. So long as it's on a process that promises low cost and high volume, the console makers are content. Currently that's the 40nm half-node for GPUs. Node shrinks can and do happen after a product is released: the Xenos used in the Xbox 360 came in at 90nm, 65nm or 45nm depending on when the console was manufactured. That leaves room to drop 40nm for 28nm later, when it makes sense for the fab as well as for MS/Sony. You don't want to make the jump early and have supply issues bite you in the ass, but you don't want to sit on an ancient node that's being kept on life support either, because that also costs TSMC - and in turn you - money. Picking a safe process is always the best bet.

I'm not sure what APU/CPU the consoles will use, but an x86 APU seems to make the most sense. Coupled with a discrete GPU like the 6670 (or two of them for the Xbox 720), it's capable of driving 1080p at good framerates. A wider memory interface for the APU would make sense, but it also drives the cost up, just as it would with a GPU. Frankly I just don't know, and it's all assumptions, but an off-the-shelf APU like Trinity or Llano could potentially work; still, a customized design seems like the best bet. If the rumors about the Xbox 720 using a 6-core CPU (or APU) are true, that would point to a customized part, as the desktop and mobile Trinity models only go up to 4 cores/2 modules, and PowerPC or ARM would have to be customized as well - there are no "drop-in" Power/ARM cores. An x86 APU would likely require the least amount of tinkering so long as you drop backwards compatibility, which is apparently what's happened. That's the biggest indication that MS and Sony have both settled on an x86 APU for their consoles.
 

blckgrffn

Diamond Member
May 1, 2003
Ironic that as gaming has gone "mainstream" the hardware is going to become more pedestrian, isn't it?

When geeks and enthusiasts were the demographic (and let's face it, the folks at the companies designing these consoles and heading up those teams were very much those types of people as well - you'd have to be, to dream of building a game console...), they had to impress us.

Now they don't. Our wallets are in the minority.
 

blckgrffn

Diamond Member
May 1, 2003
28nm feasibility depends on how TSMC manages. Given the lower price of the consoles and the rumored hardware, you'd have to figure that volume is going to be a significant issue. If they can't produce X number of viable chips by Y date, they'll opt for the safer, more reliable node. MS/Sony are expecting to sell more consoles on day one this time around, and far more down the line. The consoles are cheaper and the hardware isn't as advanced this generation.

http://www.techpowerup.com/163691/TSMC-Faces-Acute-28-nm-Capacity-Shortage.html

RIGHT! I had totally forgotten about Glo-Fo and 32 nm somehow... Wow. Maybe I was trying to forget? :p I would think that they would have at least the theoretical capacity to produce the number of units needed. It will be a lot, but compared to PCs? It has to be in the same order of magnitude, if not less.

I see your point, and I agree.

A 6670-type GPU is still going to be a big leap for the consoles.

Why would they do two of them if they could just use one larger GPU and save themselves from having redundant frame buffers, etc? That doesn't sound very realistic, IMHO.

Like you said, assumptions :)
 

pelov

Diamond Member
Dec 6, 2011
Well, cheaper hardware = cheaper consoles = bigger audience. MS/Sony made most of their money off the other stuff, not the hardware, last generation. They were selling each console (at launch) at a loss of hundreds of dollars. It just makes more business sense to make them cheaper, because that way you actually make more money (again, not from the hardware but from everything else).

And, if they're brazen enough, we might even see more consoles more frequently. That would be the best news for PC gamers. I'd rather have 3 cheaper consoles released in a span of 10 years than 1 or 2.
 

railven

Diamond Member
Mar 25, 2010
Ironic that as gaming has gone "mainstream" the hardware is going to become more pedestrian, isn't it?

When geeks and enthusiasts were the demographic (and let's face it, the folks at the companies designing these consoles and heading up those teams were very much those types of people as well - you'd have to be, to dream of building a game console...), they had to impress us.

Now they don't. Our wallets are in the minority.

That's not irony at all - that's how everything works. The bigger your demographic, the less quality you have to invest in your product.

Before, when only 10 of us knew what Game A was, that company needed all 10 of us to buy it. Now that 1,000,000 of us know what Game A is, the company would be more than happy just getting 200,000 of us to buy it.

Exhibit A - Hollywood.

Eventually it will all come full circle and the industry will collapse on itself. We've already seen great devs close their doors because their games weren't mainstream enough. Anyone play Ninja Gaiden 3 yet? Don't. Trust me.
 

toyota

Lifer
Apr 15, 2001
Do you have proof the hd6670 is slower than the hd4770? Also does nvidia have a better alternative to the hd6670 (i.e. faster for the same thermal/wattage envelope, or cheaper for the same speed)?

He doesn't, and he doesn't, because neither is true. The 4770 had higher GFLOPS but was only DX10.1, meaning it was never an option in the first place. He's talking out of his... you know what =P
so both of you actually think the 6670 is faster than the 4770? sorry but you are wrong.
 

RussianSensation

Elite Member
Sep 5, 2003
Do you have proof the hd6670 is slower than the hd4770? Also does nvidia have a better alternative to the hd6670 (i.e. faster for the same thermal/wattage envelope, or cheaper for the same speed)?

1. For starters, here is a review of HD5670 vs. HD6670
and here is HD5670 vs. HD4770.

I'll let you put the 2 together.

2. If you don't care to look at reviews, look at this chart that summarizes reviews all over the internet. HD4770 > HD6670.

OR if you want a more professional looking chart, here is one at Tom's Hardware. HD4770 > HD6670.

He doesn't, and he doesn't, because neither is true. The 4770 had higher GFLOPS but was only DX10.1, meaning it was never an option in the first place. He's talking out of his... you know what =P

And since the HD6670 supports DX11, under what codepath will it run next-generation games? The DX11 codepath.
What's the performance hit of a DX11 codepath that uses DX11-specific features (bokeh depth of field, tessellation)?

The DX11 path and its differentiating graphical features cost, on average, about one GPU generation of performance relative to DX9/10. In other words, to play DX11 games at the same framerates the HD4770 manages in DX9/10 mode, it takes a GPU at minimum ~2x faster than the HD4770 - so roughly HD6850 level.

To make it as fair as possible, I'll even use the most recent popular DX11 game where switching to DX11 hardly provides any better visuals other than Tessellation. The performance hit of going to DX11 is massive on HD6000 hardware. I'll even focus on a GPU with 2x the power of HD4770 to make my point:

1080P DX9
GTX460 = 66 fps
HD6850 = 66 fps
HD4890 = 68 fps (notice similar performance of HD4890 vs. HD6850 in DX9)
[1920x1080 DX9 benchmark chart]


1080P DX11
GTX460 = 25 fps
HD6850 = 25 fps
(just lost 2.6x performance)
[1920x1080 DX11 benchmark chart]
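(Quick arithmetic on the numbers above, just to make the ~2.6x figure explicit.)

```python
# FPS figures quoted from the benchmark charts above.
dx9_fps  = {"GTX460": 66, "HD6850": 66, "HD4890": 68}
dx11_fps = {"GTX460": 25, "HD6850": 25}

for gpu in dx11_fps:
    hit = dx9_fps[gpu] / dx11_fps[gpu]
    print(f"{gpu}: {hit:.1f}x slower with the DX11 feature set enabled")
# Both cards lose ~2.6x, which is the basis for the claim that it takes a GPU
# roughly 2x an HD4770 just to hold DX9-era framerates under DX11.
```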


Tessellation adds massive levels of geometry which are extremely taxing on current hardware:
[Batman: Arkham City tessellation comparison image]


Now please explain how an HD6670 that's slower in DX9/10 games than an HD4770 will play next-generation DX11 games with tessellation. You can't have it both ways. If you are going to run games in DX11 with all its bells and whistles, the HD6670 will be 2-2.5x slower than the HD4770 is in DX9/10. But even if you run the HD6670 in DX9/10, it's still slower than the HD4770.

Essentially this GPU combo is the worst thing that could have happened for PC gaming. They would have been better off taking Kaveri off the assembly line in 2013 and just plopping it inside the PS4.

If they are pairing Llano with an outdated HD6670 for hybrid CF, it just shows that it was bean counters making the decision, not people passionate about next-generation games. That type of setup will at best achieve Kaveri-level GPU performance, but be plagued with micro-stuttering. Furthermore, the HD6550 (Llano's integrated GPU) has no dedicated RAM and has to use system RAM, potentially creating yet another bottleneck down the line for what probably won't be a system with 8GB of DDR3-1866. And on top of that, this setup would still consume more power than a single Kaveri chip...
 

VulgarDisplay

Diamond Member
Apr 3, 2009
As I said earlier, I truly hope this new console hardware trend leads us to getting a new console generation every 3-5 years.

MrK is absolutely right when he says the current generation has been milked for so long largely to recoup the R&D costs of building the consoles. If they use premade solutions from AMD, they pretty much just pay for the parts, and their research costs are drastically reduced.

Also, we should be comparing other GPU and console specs to get a better idea of what performance will be like. How does the 6670 perform compared to the 7800 series that was in the PS3?

Also, the large increase in VRAM is going to play a huge role in how new games look. I don't think the next generation is going to be all that spectacular, but I don't think it's all doom and gloom either. Either way there is going to be a noticeable leap in graphics and in the sheer amount of resources a developer can put into a game world.
 

aaksheytalwar

Diamond Member
Feb 17, 2012
A 6670 CF setup with 100% scaling is like 6-8 times as powerful as a 7800GTX. And assuming a 50% DX11 hit, it is still 3+ times as fast.
 

njdevilsfan87

Platinum Member
Apr 19, 2007
A 6670 CF setup with 100% scaling is like 6-8 times as powerful as a 7800GTX. And assuming a 50% DX11 hit, it is still 3+ times as fast.

100% scaling does not even come close to a perceived 100% increase in performance, thanks to microstutter.
 

toyota

Lifer
Apr 15, 2001
A 6670 CF setup with 100% scaling is like 6-8 times as powerful as a 7800GTX. And assuming a 50% DX11 hit, it is still 3+ times as fast.
a single 6670 is not 3-4 times faster than a 7800gtx. at best, a 6670 is barely over 2x faster than that level of gpu. so at best you are looking at maybe a tad over 4x the performance of the current consoles using 6670 crossfire. that is a joke considering current consoles struggle to maintain 30 fps in some newer games even at resolutions usually well below 1280x720. there would be no way they could run higher graphics with demanding features like tessellation and be playable at 1920x1080.
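(To make the two competing estimates in this exchange explicit, here is the back-of-envelope math each poster is doing. The multipliers are their claims, not measurements.)

```python
def cf_estimate(single_vs_7800gtx: float, cf_scaling: float, dx11_hit: float) -> float:
    """Effective speed vs a single 7800 GTX for a CrossFire pair."""
    return single_vs_7800gtx * (1 + cf_scaling) * dx11_hit

# aaksheytalwar: one 6670 ~ 3-4x a 7800 GTX, perfect CF scaling, then a 50% DX11 hit.
print(cf_estimate(3.5, cf_scaling=1.0, dx11_hit=0.5))  # ~3.5x

# toyota: one 6670 is "barely over 2x" a 7800 GTX, so CF tops out around 4x
# before any DX11-feature cost is paid.
print(cf_estimate(2.1, cf_scaling=1.0, dx11_hit=1.0))  # ~4.2x
```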
 

blckgrffn

Diamond Member
May 1, 2003
a single 6670 is not 3-4 times faster than a 7800gtx. at best, a 6670 is barely over 2x faster than that level of gpu. so at best you are looking at maybe a tad over 4x the performance of the current consoles using 6670 crossfire. that is a joke considering current consoles struggle to maintain 30 fps in some newer games even at resolutions usually well below 1280x720. there would be no way they could run higher graphics with demanding features like tessellation and be playable at 1920x1080.

Really? In today's games, a 7800GTX is going to be a pooch. That was why I had to part ways with my 7900GTO - it could no longer game well @ 1440x900. That was... jeeze... almost four years ago? When I moved up to a 3870 and could actually run Company of Heroes at a respectable clip...

The 6570 that I actually have right now is much more capable at that resolution. A 6670 will handily beat that 3870, I'd wager, given a GDDR5 frame buffer.

I suppose it really matters if the game is bound by fill rate or texture performance.

It's hard to compare a DX9 against a DX10+ card because there are so many games that you simply cannot run where the performance discrepancy would be interesting...

TL;DR: I think you are giving the 7800GTX more credit than it deserves.

http://www.anandtech.com/show/4278/amds-radeon-hd-6670-radeon-hd-6570/4 <-- 6670 is faster than an 8800GT...

http://www.anandtech.com/show/2365/11 <-- Which is handily faster than a 7950GT, by 2-3x, depending on resolution.

Somewhat sadly, that would be a huge improvement over current gen consoles.
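(Chaining the two links above together, roughly; the 1.1x is my loose reading of "faster than an 8800GT", not a measured number.)

```python
# Transitive comparison from the two AnandTech reviews linked above.
r_6670_vs_8800gt   = 1.1  # "6670 is faster than an 8800GT" (assumed: slightly)
r_8800gt_vs_7950gt = 2.5  # "handily faster... by 2-3x, depending on resolution"
print(r_6670_vs_8800gt * r_8800gt_vs_7950gt)  # ~2.75x a 7950 GT
# The PS3's RSX is 7900-class, so even a single 6670 is a sizeable step up.
```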

Haha, 2007 says hi! http://forums.anandtech.com/showthread.php?t=110912 :p
 

toyota

Lifer
Apr 15, 2001
Really? In today's games, a 7800GTX is going to be a pooch. That was why I had to part ways with my 7900GTO - it could no longer game well @ 1440x900. That was... jeeze... almost four years ago? When I moved up to a 3870 and could actually run Company of Heroes at a respectable clip...

The 6570 that I actually have right now is much more capable at that resolution. A 6670 will handily beat that 3870, I'd wager, given a GDDR5 frame buffer.

I suppose it really matters if the game is bound by fill rate or texture performance.

It's hard to compare a DX9 against a DX10+ card because there are so many games that you simply cannot run where the performance discrepancy would be interesting...

TL;DR: I think you are giving the 7800GTX more credit than it deserves.
yes, direct comparisons are a bit hard. you can look through benchmarks to get an idea though. a 6670 is just over twice as fast as a 9500gt. a 9500gt is basically identical to an 8600gt, which was easily beaten by a 7800gt in most cases. even if games got no more demanding than now, a 6670 would not be able to run games at decent framerates with most DX10 and DX11 features. so in games that would be "playable", there would only be about 2-2.5 times more performance going from a 7800gtx to a 6670.

EDIT: your own link proves my point, which is that a 6670 will not really deliver a playable experience in newer demanding games with DX10 or DX11 features.
 

Vesku

Diamond Member
Aug 25, 2005
I'm thinking at least one of the MS and Sony duo will opt for being conservative on the specs but throw in some sort of feature they can leverage in advertising, perhaps 3D support.
 

taltamir

Lifer
Mar 21, 2004
They didn't choose Kepler for a very simple reason:

nVidia makes them. They aren't the easiest people to get along with. nVidia also doesn't make x86 and unifying console>desktop also means more money for everyone.

I agree
Intel:
-Expensive.
-GPU sucks.
+CPU is amazing.
+Has a strong APU.
-APU has a weak GPU.

nVidia:
-Hard to work with.
-Last gen GPU too big & hot and does unwanted things (GPGPU); don't want current gen, too expensive.
-No CPU on offer.
-No APU on offer (an ARM-based one is under development).

AMD:
+Cheap (AMD got a dose of humility and charges the least of those 3 companies).
+Last gen GPU is small, low power, and fast (in games).
+Has an APU on offer.
+Already has asymmetrical Xfire.
+APU has a much better GPU than the competition (although still underpowered and requiring a secondary dedicated GPU).
+Buying both an APU and a GPU from the same company means a better deal.
 

Blitzvogel

Platinum Member
Oct 17, 2010
Do you have proof the hd6670 is slower than the hd4770? Also does nvidia have a better alternative to the hd6670 (i.e. faster for the same thermal/wattage envelope, or cheaper for the same speed)?

The 4770 has a big per-clock advantage in raw throughput: it's 640:32:16 in configuration vs 480:24:8 for the 6670 (Turks). Given the same starting clock speeds, yes, you could boost the 6670's clock by 33% to match the 4770's GFLOPS and texture output, but you'd still be lacking the 4770's pixel output. You would, however, be getting a better tessellation scheme and of course DX11 and higher-end OpenGL support. Though I wonder how much extra power consumption that 33 percent boost really means. In a console it would be a tough decision, considering the 6670's overall feature set, but the 4770 in raw capability would certainly be pretty awesome, and should be better in GFLOPS/mm² since there are no transistors spent on DX11 capability (which would be irrelevant for any non-MS console). The 4770 also makes true 1080p output an easier proposition, and boosting the 6670's clock might push its power consumption and thermals too high for what it can do. Seeing where the 6670 sits just makes me want to opt for a Juniper GPU (Radeon 5770/6770).
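(The ratio argument in numbers; unit counts are the configurations cited above, with equal clocks assumed as stated.)

```python
# Per-clock throughput ratios behind the HD 4770 (RV740) vs HD 6670 (Turks) comparison.
rv740 = {"shaders": 640, "tmus": 32, "rops": 16}
turks = {"shaders": 480, "tmus": 24, "rops": 8}

for unit in ("shaders", "tmus", "rops"):
    print(unit, round(rv740[unit] / turks[unit], 2))
# shaders 1.33, tmus 1.33, rops 2.0
# A ~33% clock bump on Turks closes the shader (GFLOPS) and texture gaps,
# but it would still have only half the per-clock pixel (ROP) output.
```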
 