AMD Bulldozer in PS4 - rumor surfaces


RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Sure the 5770 may not be the top end, but you must remember that the power consumption of current cards is vastly different from cards of 5 or so years ago. Looking at Xbit Labs' power consumption figures, high-end cards from the 7xxx and X19xx era consumed around 100W, in comparison to recent cards that consume 200-300W. You can't just shove a 200W GPU into a console.

HD6850 uses around 100W at load.
HD6870 uses around 130W in a game.
Source

If they shrink either of those designs to 28nm, you could easily get a GPU way more powerful than the low-end HD5770 and still keep GPU power consumption under 100W.

RV770 is such an old design. I put the chances of a custom RV770 being put into the Wii U at 0%.

At 0%, really?

Every single source has cited that Wii U will have an R700-RV770 style GPU. They are all wrong?

Modern GPUs such as the GTX570/580/HD6970 are only 2x faster than the HD4870. In contrast, the HD4870 is 4-5x faster than the RSX GPU in the PS3 (slower than a 7900GTX on the desktop). This is plenty fast for a Wii U GPU. It probably would have made more sense for Wii U to use an HD5770 instead, but it's possible that AMD was able to offer an RV770 GPU for a lower price.
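Putting those ratios together as a quick back-of-the-envelope sketch in Python (the 2x and 4-5x figures are the rough approximations above, not benchmarks):

Code:
# Rough relative-performance chain using the approximate ratios cited above.
rsx = 1.0                     # PS3's RSX as the baseline
hd4870 = 4.5 * rsx            # RV770 is ~4-5x the RSX (midpoint assumed)
gtx580_class = 2.0 * hd4870   # GTX570/580/HD6970 are ~2x the HD4870

print("HD4870 vs RSX:       %.1fx" % (hd4870 / rsx))        # ~4.5x
print("GTX580-class vs RSX: %.1fx" % (gtx580_class / rsx))  # ~9.0x
# Even an unmodified RV770-class part is a ~4-5x jump over the PS3's GPU,
# which is the sense in which it's "plenty fast" for a Wii U.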
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Die shrink. Also, if they were going to use BD, it wouldn't be the same BD that goes into normal computers. It would be optimized for console use.

That's wishful thinking. Bulldozer hasn't even launched on 32nm. AMD has had way too many problems trying to get a Bulldozer core out on its own. So suddenly you think they'll be able to add a GPU inside a Bulldozer in 2012 on 32nm? Btw, Trinity is still a ways out (it probably won't launch until 2013), and that's if it doesn't get delayed.

Bulldozer is a 32nm CPU. Next-generation GPUs are going to be 28nm. How do you do a die shrink on a 32nm CPU and a 28nm GPU? You'd need to move both to 22nm. That's not happening until 2013, if you take both into account.

Because a console needs to play games at resolutions greater than 1080p? Something with the performance of a 5770 would be fairly ideal.

You missed my point entirely. A $500 console costs that much because the components inside of it are expensive. If they put a $50 GPU into the console, what exactly is going to make up the other $450? Blu-Ray drives no longer cost $230 (the price of a Blu-Ray drive when the PS3 debuted). They can squeeze a $120-150 GPU design into that price. So it makes no sense to put such a cheap GPU into a PS4 or Xbox3 in 2012. Also, please explain why I wouldn't put an HD6800M into a PS4 instead. My $400-500 console budget easily suffices for that, and that GPU doesn't break any power consumption limits.

AMD could easily pair a 5770-class GPU with Llano, but they would need to alleviate the existing memory bottleneck before it became useful to do so. Also consider that it's their first part on the 32 nm process, which means that smaller chips will have better yields.

If AMD could have easily put an HD5770 GPU inside Llano, then Llano wouldn't have debuted with a 400 SP 6620 GPU. It's obviously very expensive and not economically feasible on the 32nm process. In fact, AMD wasn't even able to fit a full-fledged Phenom II X4 core with L3 cache together with a GPU. They had to use a small Athlon II X4 core to make sure the die size didn't balloon.

It's merely an engineering problem. It might be costly, but it's worth it.

How is it worth it? It'd be more cost effective to design a CPU + GPU on an embedded 2-die package instead. The cost of increasing die size with a modern GPU + CPU grows exponentially. Did you not see that the Xbox360 went through 5 revisions, starting from the 90nm process, before it even became cost effective to shrink the 2 components onto 1 package? If you start off with an APU design, you severely limit both your CPU and GPU performance right off the bat (not to mention your TDP envelope is not more than 150W). There is no reason console makers need to be constrained by these limits. Consoles are sold at a loss, with game and accessory sales making up the difference over time. From that perspective, both MS and Sony will want to put the fastest possible CPU+GPU into the console. If you start with an APU design, you literally start with the slowest possible GPU you can put in. Sure, one of the consoles may choose to go that route, but it would be a huge mistake imo.

You are also forgetting that Bulldozer and IBM Cell are 32nm designs. How are you going to magically shrink them to 28nm in 2012?

Let's not forget game developers have a say in this too. They have been learning how to utilize IBM's CPUs for the last 4-5 years on PS3 and Xbox360. They'd likely want a more powerful IBM CPU, but they can already leverage what they learned. If you start with Bulldozer, you are starting from scratch all over again.
 

iCyborg

Golden Member
Aug 8, 2008
1,367
73
91
That's wishful thinking. Bulldozer hasn't even launched on 32nm. AMD has had way too many problems trying to get a Bulldozer core out on its own. So suddenly you think they'll be able to add a GPU inside a Bulldozer in 2012 on 32nm? Btw, Trinity is still a ways out (it probably won't launch until 2013), and that's if it doesn't get delayed.

Bulldozer is a 32nm CPU. Next-generation GPUs are going to be 28nm. How do you do a die shrink on a 32nm CPU and a 28nm GPU? You'd need to move both to 22nm. That's not happening until 2013, if you take both into account.
I believe you had already said something like that and people pointed out that it's wrong. Trinity has already been taped out and samples were presented at Computex. It looks on track for Q1/2012, certainly H1. So where do you get this 2013 date?

Trinity will use Northern Islands, probably with some elements from SI like UVD, better power gating, etc. NI is 40nm, just like Redwood. And Trinity will be 32nm, just like Llano. So there's no need to wait until 22nm.
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
I think it's highly unlikely Sony switches from an IBM PowerPC RISC architecture back to an AMD legacy x86 architecture.

Console manufacturers purposely want to use a proprietary architecture or custom chipset to not only own it from top to bottom and move away from a commodity PC, but also to curb any potential piracy.

They would essentially be undoing what Microsoft did in moving away from the Intel Pentium 3 in the original XBOX.

We've seen Sony has gone from a hardware to a software layer to handle playing PS2 games on the PS3 before finally removing it altogether. Microsoft went to quite a bit of effort to write an x86 to PowerPC emulator to run as many original XBOX games on the 360.

I don't know if Sony is willing to sacrifice backwards compatibility or write a software layer to run PS3 games on a PS4 with some new CPU.
Microsoft wanted a proprietary design because it's cheaper and they can shop production around. Microsoft will never pick a design they don't own the production of again after the issues with Nvidia. But that doesn't mean that Sony or Nintendo hold the same feelings. Nintendo used an off-the-shelf G5 for the GameCube and has been using die-shrunk versions of it since, for backwards compatibility and reduced design costs. Sony has been designing and manufacturing their own hardware this whole time, but while they had marginal success pushing Blu-Ray (it won, but will never turn a true profit), Cell has been a flop, and they later sold that branch and its production over to Toshiba. They couldn't design a competitive GPU for this generation.

It's entirely possible that Sony, since they have never been one to keep hardware legacy support in their new designs (the PS2 contained a mini PS1, and the same applies to the original PS3 with the PS2), would be willing to use an off-the-shelf product. AMD would have to hit certain performance expectations, especially on the GPU end, but considering that the PS3/Cell/Blu-Ray development almost killed the company (they were hoping other departments would pick up the tab, but sales in every sector went stale or dropped badly for them), I wouldn't be surprised if they try to go the GameCube/Wii route and piggyback on someone else's R&D.
 

Jionix

Senior member
Jan 12, 2011
238
0
0
Let's not forget game developers have a say in this too. They have been learning how to utilize IBM's CPUs for the last 4-5 years on PS3 and Xbox360. They'd likely want a more powerful IBM CPU, but they can already leverage what they learned. If you start with Bulldozer, you are starting from scratch all over again.

What? Game developers have a minimal say in what CPU goes into the consoles. If they did, Sony wouldn't always push custom designs that no one ever understands and that take developers 2+ years to even begin to code correctly for.

Besides, it doesn't matter what processor this generation of consoles uses, just like it didn't matter what processor was used last generation! Both the XBox360 and PS3 use different processors than what was used in the XBox and PS2.

If it didn't matter then, why would it matter now?
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
I guess that explains why the original Xbox, which used a Pentium III CPU, had so many more overheating and power consumption problems than the Xbox 360, which used a more dedicated PowerPC CPU.

Oh, hrm... wait....

(1)- CPUs waaaaaaaaaaay back in the midrange socket 370 era didn't need large heatsinks/fans.

(2)- That was sort of a hybrid between a P3 and a Celeron, at mediocre clock speeds.

(3)- It's basically the only time x86 was ever used in a game console, and there were indeed some drawbacks. It competed fairly evenly with the older PS2, but the PS2 only needed a 300MHz chip to perform similarly.

(4)- XB360's heat issues came from trying to cram in a relatively high end GPU for the time with inadequate cooling. They were in a race to get it out there, and kind of released a rushed product imho. Aftermarket cooling on original X360 fixes most issues beyond the horrendous power draw.

(5)- Back to x86, it's just not efficient for gaming consoles. General purpose CPUs are not a great fit, any more than a ULV x86 chip makes sense for your phone. If anything, we're more likely to see multicore slightly beefed-up ARM stuff take off for consoles.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I believe you had already said something like that and people pointed out that it's wrong. Trinity has already been taped out and samples were presented at Computex. It looks on track for Q1/2012, certainly H1. So where do you get this 2013 date?

Trinity will use Northern Islands, probably with some elements from SI like UVD, better power gating, etc. NI is 40nm, just like Redwood. And Trinity will be 32nm, just like Llano. So there's no need to wait until 22nm.

Even if it launches in 2012, it won't be any faster than a dedicated HD5770. Either way, you can always put a faster GPU + CPU in 2 separate packages. We'll have even faster discrete GPUs in 2012. Honestly, look at how much hype Llano got. The GPU inside is laughable (but it's understandable given the space constraints and TDP limitations). The bottom line is that even today a $50 HD5670 whoops Llano's GPU. So can you please address my comment: why wouldn't a console maker put a $120-150 mobile GPU inside a console? Why do they need to constrain themselves to some slow $150 APU setup in a $400-500 console?

If they do go with a single APU design, they will have thrown in the towel on trying to extract reasonable maximum GPU performance out of a console - in which case it will be extremely disappointing. If you don't want future consoles to handicap PC gaming once again for another 5 years, then the last thing you'd want is an APU in a console. :thumbsdown:

Most importantly, no working APU exists that can play current games at 1920x1080, never mind future games for the next 5-6 years. They can try putting a Trinity APU + a discrete GPU onboard for some CrossFire action. Now that type of setup I can understand.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
^^ RS has it written in cold hard fact. An APU is a cool concept for low-power, simple-purpose devices, but it's never going to be able to compete with contemporary configurations of superior CPU and GPU elements.
 

BD231

Lifer
Feb 26, 2001
10,568
138
106
2005 has come and gone. What's so far-fetched about what we've had since around '07?

As someone else pointed out, CPU subsystems within consoles have been pretty abysmal despite better options. It just depends on what they define as "cutting corners" this time around. Could that be BD? We'll see.
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,402
8,574
126
(5)- Back to x86, it's just not efficient for gaming consoles. General purpose CPUs are not a great fit, any more than a ULV x86 chip makes sense for your phone. If anything, we're more likely to see multicore slightly beefed-up ARM stuff take off for consoles.

i don't think arm is inherently any better than powerpc for power draw if the part is engineered for it. both are properly risc so don't have decoders taking up a lot of the power/xtor budget.


arm is used in the second best selling console of all time so it'd be hard for it to 'take off'
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
i don't think arm is inherently any better than powerpc for power draw if the part is engineered for it. both are properly risc so don't have decoders taking up a lot of the power/xtor budget.


arm is used in the second best selling console of all time so it'd be hard for it to 'take off'

Oh I agree, I'm just saying that either makes a lot more sense than trying to adapt x86 technology to a console that runs much simpler/more dedicated tasks.

I do think ARM will grow in this area for a number of reasons, but that's my personal guess, and it remains to be seen.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
As someone else pointed out, CPU subsystems within consoles have been pretty abysmal despite better options. It just depends on what they define as "cutting corners" this time around. Could that be BD? We'll see.

It hasn't really mattered much to be honest. GPU and memory limitations have hindered PS3 and X360 far more than any potential lack of CPU power.
 

cusideabelincoln

Diamond Member
Aug 3, 2008
3,275
46
91
So can you please address my comment: why wouldn't a console maker put a $120-150 mobile GPU inside a console? Why do they need to constrain themselves to some slow $150 APU setup in a $400-500 console?

-Console may not be $500.

-Sony and MS might actually want to turn a profit off of console sales rather than a loss.

-They could use a custom designed APU. No need for arbitrary constraints.

-They might be working on other features of the console to fit within the price.
 

IlllI

Diamond Member
Feb 12, 2002
4,929
11
81
Pretty sure they will go with a new Cell processor, for backward compatibility.

Don't want to see another debacle involving PS2 backward compatibility, or how they 'cut costs' by removing the hardware for it.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Microsoft wanted a proprietary design because it's cheaper and they can shop production around. Microsoft will never pick a design they don't own the production of again after the issues with Nvidia. But that doesn't mean that Sony or Nintendo hold the same feelings. Nintendo used an off-the-shelf G5 for the GameCube
Nope, Nintendo used a customized G3.
 

iCyborg

Golden Member
Aug 8, 2008
1,367
73
91
Even if it launches in 2012, it won't be any faster than a dedicated HD5770. Either way, you can always put a faster GPU + CPU in 2 separate packages. We'll have even faster discrete GPUs in 2012. Honestly, look at how much hype Llano got. The GPU inside is laughable (but it's understandable given the space constraints and TDP limitations). The bottom line is that even today a $50 HD5670 whoops Llano's GPU. So can you please address my comment: why wouldn't a console maker put a $120-150 mobile GPU inside a console? Why do they need to constrain themselves to some slow $150 APU setup in a $400-500 console?
What hype? You mean the people who expected to max Crysis 2 at 1080p on an IGP with a ~50W TDP? Llano's CPU is somewhat underwhelming, but the GPU is pretty much what should have been expected: 2x better than Intel's without dirty IQ tricks, beating entry-level and reaching lower-midrange discrete cards. It wasn't meant to match a 5770/6770.

I was only commenting on the Trinity release date; I don't really have an argument with the rest. I'd like to see a ~6970/580 or better in a console, and I don't even care about consoles - it's just for the sake of PC gaming. Whatever they put in, I don't see much change in console vs PC specs in the coming years unless:
A) console crowd is willing to fork out more than $400-$500 for it
and B) consoles are refreshed every 2-3 years instead of 5+
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
At 0%, really?

Every single source has cited that Wii U will have an R700-RV770 style GPU. They are all wrong?

Modern GPUs such as the GTX570/580/HD6970 are only 2x faster than the HD4870. In contrast, the HD4870 is 4-5x faster than the RSX GPU in the PS3 (slower than a 7900GTX on the desktop). This is plenty fast for a Wii U GPU. It probably would have made more sense for Wii U to use an HD5770 instead, but it's possible that AMD was able to offer an RV770 GPU for a lower price.
It's not about speed. I don't expect Nintendo to use something fast, because I'm sure they will try to meet as low a price point for the Wii U as possible, just like they did with the Wii.

It's about age & economics. It is a mistake to think that just because something is older, it's also cheaper to manufacture. Manufacturing cost is determined by die size, R&D costs, and buildout costs. The RV770 is OLD. It's already 2 generations old. By the time the Wii U comes out, it will be 3 generations old. The newer generations of cores from AMD aren't just faster, they are also faster on a per-transistor basis. The 6870 packs 1.7B transistors but is faster than the 5850, which packs 2.15B transistors. That means the marginal cost of manufacturing newer-gen stuff is lower for a given performance point compared to older stuff.

The only reason older stuff is cheaper in the market is because the cost of building out the assembly lines and fabs has already been sunk in the past, and because of what economists call price discrimination. These two factors will not come into play to make a modified RV770 cheaper, because the chip Nintendo will use would be a customized design and require capital to build out its manufacturing (even if outsourced, which it certainly will be). Plus, the RV770 was never manufactured on 32nm or whatever process they are going to use, so extra work would be needed to update it for a modern process node. It costs more to modernize an old design than to just customize a modern design. So basically, they will use a newer design because new designs are cheaper. I expect an HD7000-series or HD6000-series derivative (with a small die size, of course).
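To put a quick number on the per-transistor point, here's a rough sketch (transistor counts are the published figures; the relative performance is an assumed ballpark, not a benchmark):

Code:
# Performance per transistor: HD6870 (Barts) vs HD5850 (Cypress).
# Transistor counts are published figures; relative performance is an
# assumed rough figure (the 6870 roughly matches or slightly beats the 5850).
cards = {
    "HD5850": {"transistors_bn": 2.15, "rel_perf": 1.00},
    "HD6870": {"transistors_bn": 1.70, "rel_perf": 1.05},  # assumed ~5% faster
}

for name, c in cards.items():
    print("%s: %.2f relative perf per billion transistors"
          % (name, c["rel_perf"] / c["transistors_bn"]))
# The newer design comes out roughly 25-35% ahead per transistor (depending
# on the assumed performance gap), which is what drives the marginal-cost
# argument above.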

As far as me contradicting all the sources - so what? There was actually only one supposed source for the leaked info, and every other website just copy-pasted that info. The rumor mill is an echo chamber. Lots of leaked info has been wrong before.
 

Mopetar

Diamond Member
Jan 31, 2011
8,510
7,766
136
HD6850 uses around 100W at load.
HD6870 uses around 130W in a game.
Source

If they shrink either of those designs to 28nm, you could easily get a GPU way more powerful than the low-end HD5770 and still keep GPU power consumption under 100W.

They don't need anything that powerful. When you program a game for a console, you know that every single console will have the exact same hardware. This lets you eke out a lot of additional performance because you can target very specific hardware. Look at a game like Uncharted 3. It looks amazing. Now look at the hardware that's running it.


That's wishful thinking. Bulldozer hasn't even launched on 32nm. AMD has had way too many problems trying to get a Bulldozer core out on its own. So suddenly you think they'll be able to add a GPU inside a Bulldozer in 2012 on 32nm? Btw, Trinity is still a ways out (it probably won't launch until 2013), and that's if it doesn't get delayed.

Trinity is scheduled for H1 2012. Whether it slips remains to be seen. The next generation of consoles from Microsoft and Sony is a ways from being launched. Next year at E3 is probably the first time we'll hear about either, and they won't ship for another year after that. Plenty of time for GF's process to mature, assuming that they use GlobalFoundries to make their chips.

Bulldozer is a 32nm CPU. Next-generation GPUs are going to be 28nm. How do you do a die shrink on a 32nm CPU and a 28nm GPU? You'd need to move both to 22nm. That's not happening until 2013, if you take both into account.

If they did build an APU, it would be from the ground up. Anything that they make will be a custom design. It could even be some kind of strange design where it's a BD module that has 4 BD cores fed by a single front end. It's not going to be an existing product, and whatever they make will be designed for a single process, most likely TSMC's 28nm process, because that will be the most mature at the time.

You missed my point entirely. A $500 console costs that much because the components inside of it are expensive. If they put a $50 GPU into the console, what exactly is going to make up the other $450? Blu-Ray drives no longer cost $230 (the price of a Blu-Ray drive when the PS3 debuted). They can squeeze a $120-150 GPU design into that price. So it makes no sense to put such a cheap GPU into a PS4 or Xbox3 in 2012. Also, please explain why I wouldn't put an HD6800M into a PS4 instead. My $400-500 console budget easily suffices for that, and that GPU doesn't break any power consumption limits.

As Sony found out, having a $600 console kills sales quite a bit, and as Microsoft found out, trying to cut corners can also blow up in your face. These consoles won't need high-end graphics cards. They're going to be used to run games at 60 fps at 1080p. That doesn't take a lot of power.
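For a sense of scale, the raw pixel budget that target implies is pretty modest (a quick illustrative calculation; the per-pixel work factor is an assumption, not a measurement):

Code:
# Raw output-pixel throughput for a 1080p / 60 fps target.
width, height, fps = 1920, 1080, 60
pixels_per_second = width * height * fps
print("%.0f Mpixels/s" % (pixels_per_second / 1e6))  # ~124 Mpixels/s

# Even with an assumed 25x of shading work per output pixel (overdraw,
# post-processing, AA), the budget stays around a few Gsamples/s, which is
# well within reach of a mid-range GPU of the era.
overdraw_factor = 25  # assumption for illustration only
print("%.1f Gsamples/s" % (pixels_per_second * overdraw_factor / 1e9))  # ~3.1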

If AMD could have easily put an HD5770 GPU inside Llano, then Llano wouldn't have debuted with a 400 SP 6620 GPU. It's obviously very expensive and not economically feasible on the 32nm process. In fact, AMD wasn't even able to fit a full-fledged Phenom II X4 core with L3 cache together with a GPU. They had to use a small Athlon II X4 core to make sure the die size didn't balloon.

It turns out that putting in anything more than they did would have resulted in a system so bottlenecked that the extra SPs would have provided minimal performance gains. Additionally, you don't want to design big chips on a new process if you can help it. It really eats into the yields.
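For reference, the bandwidth gap behind that bottleneck is easy to put a rough number on (the memory configurations below are typical setups, assumed for illustration):

Code:
# Peak memory bandwidth in GB/s: bus width (bits) / 8 * transfer rate (MT/s) / 1000.
def bandwidth_gb_s(bus_width_bits, transfer_mt_s):
    return bus_width_bits / 8 * transfer_mt_s / 1000

llano  = bandwidth_gb_s(128, 1600)  # dual-channel DDR3-1600, shared with the CPU
hd5770 = bandwidth_gb_s(128, 4800)  # 128-bit GDDR5 at 4800 MT/s effective, dedicated

print("Llano (DDR3-1600, shared): %.1f GB/s" % llano)   # ~25.6 GB/s
print("HD5770 (GDDR5, dedicated): %.1f GB/s" % hd5770)  # ~76.8 GB/s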

However, by the time Sony and MS are going to start building their new consoles, the processes will be more mature and the design constraints that limited Llano won't be in place.

How is it worth it? It'd be more cost effective to design a CPU + GPU on an embedded 2-die package instead. The cost of increasing die size with a modern GPU + CPU grows exponentially. Did you not see that the Xbox360 went through 5 revisions, starting from the 90nm process, before it even became cost effective to shrink the 2 components onto 1 package? If you start off with an APU design, you severely limit both your CPU and GPU performance right off the bat (not to mention your TDP envelope is not more than 150W). There is no reason console makers need to be constrained by these limits. Consoles are sold at a loss, with game and accessory sales making up the difference over time. From that perspective, both MS and Sony will want to put the fastest possible CPU+GPU into the console. If you start with an APU design, you literally start with the slowest possible GPU you can put in. Sure, one of the consoles may choose to go that route, but it would be a huge mistake imo.

Not really. Manufacturing silicon isn't terribly expensive once you get the ball rolling. Also, since these are chips for a console, they don't need to be as large or complicated as the ones driving PC gaming rigs. A single chip makes the board design simpler and the cooling system simpler. Considering that the chip can be designed differently than current APUs and make assumptions such as a wide-ass memory bus connected to GDDR5 memory, there's no reason the current problems with APUs need to exist.

Microsoft and Sony sell their consoles at a loss, and it really hasn't worked out that well for them. Nintendo has beaten both and probably made money on each console sale. Until we move to larger televisions, the performance target for consoles isn't moving beyond a certain point, and the longer MS or Sony wait, the cheaper it will be to deliver that performance.

I don't expect Microsoft or Sony to release anything above $400. I think that they've both learned that there's a price ceiling for consoles and that going above it can really hurt sales. The only reason to surpass that point is because the initial stock of devices is extremely limited.

Let's not forget game developers have a say in this too. They have been learning how to utilize IBM's CPUs for the last 4-5 years on PS3 and Xbox360. They'd likely want a more powerful IBM CPU, but they can already leverage what they learned. If you start with Bulldozer, you are starting from scratch all over again.

Yes, just like they worked on an Intel chip for the Xbox and the Emotion Engine for the PS2 before that. They'll learn new architectures and like every console generation before them, they'll get better at utilizing the newest generation as time goes on. It's not as though they haven't gone through this before.
 

psoomah

Senior member
May 13, 2010
416
0
0
Bulldozer is a 32nm CPU. Next-generation GPUs are going to be 28nm. How do you do a die shrink on a 32nm CPU and a 28nm GPU? You'd need to move both to 22nm. That's not happening until 2013, if you take both into account.

You are also forgetting that Bulldozer and IBM Cell are 32nm designs. How are you going to magically shrink them to 28nm in 2012?

Let's not forget game developers have a say in this too. They have been learning how to utilize IBM's CPUs for the last 4-5 years on PS3 and Xbox360. They'd likely want a more powerful IBM CPU, but they can already leverage what they learned. If you start with Bulldozer, you are starting from scratch all over again.

Llano integrated a GPU designed on 40nm into a 32nm-process chip, so AMD NOW KNOWS HOW to fabricate GPUs on the 32nm process. It also now knows how to fabricate Bulldozer at 32nm. The hard preliminary work for a Trinity chip has already been done, and GloFo's 32nm process is sufficiently mature to be mass-producing Llano chips, so all the pieces of the puzzle are in place to produce Trinity in 2012.

All the major PS3 and Xbox 360 game developers already have the knowledge and tools to develop for or port to PCs, and THAT isn't going to change. What COULD change is the consoles getting sufficiently on the same page as PCs that developers could develop games for all three platforms in far less time and much more cheaply than they do now. Game developers don't WANT to have to develop for both x86 AND IBM CPU architectures, and for both AMD and nVidia GPU architectures; they WANT to develop for a single CPU and a single GPU architecture. This would slash both their costs and time to market while allowing them to maximize the market for each game they develop.
 

Edgy

Senior member
Sep 21, 2000
366
20
81
Assuming that Trinity and beyond will finally be the beginning of AMD's realization of their M-Space modular chip design philosophy/methodology/manufacturing strategy, it is quite conceivable - assuming AMD finds the ideal balance between CPU parts and GPU parts within 1 Fusion chip targeting the next-gen console space - that such a chip will have enough appeal for consideration for the next PS in terms of performance/thermals/cost.

The M-Space thingy was conceived with such flexibility in mind from the beginning - the ability to produce a chip with a broad range of quite different capabilities, catering to specific requirements, without having to change the core design/manufacturing methodologies.
 

tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
Yes, just like they worked on an Intel chip for the Xbox and the Emotion Engine for the PS2 before that. They'll learn new architectures and like every console generation before them, they'll get better at utilizing the newest generation as time goes on. It's not as though they haven't gone through this before.

Games weren't as expensive to produce before as they are now. Some break 400 million, don't they? It's very possible that, to save dev costs, a game will be made with 1 console in mind and get a cheap-ass port to another. Neither Sony/MS/Nintendo want to be on the short end of that stick.

Anyhoo, what I am hoping for is a console that can give me games rendered at 1080p (not lame upscales of 600p) @ 60fps for fluid motion, ideally with graphics comparable to Crysis or better :eek:
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
Correct, it was an enhanced G3. A G5 would have been waaaaaaaaaaaaaaaay too large/hot for a console, unless it was a heavily underclocked variant.

After I wrote that, I immediately thought I was wrong and that it was the G4; I was only thinking G5 because it was DC by default (maybe it's the Wii that used a die-shrunk G5?). I had a GameCube and have a Wii, but for the life of me I just can't stay interested in the hardware in those long enough to actually remember the specs, whereas I am pretty confident about the PS2/PS3 and Xbox/Xbox360 configurations.