[TechSpot] - AC: Unity "too much" for console CPUs


ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Yeah.... Right.....

I also remember all the developers complaining about programming for the Emotion Engine. That clearly led to the PlayStation 2 being a complete sales flop, right?

Where do sales come into it? And game developers aren't part of the buyer segment, last I checked. The regular Average Joe is.
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
22,931
13,011
136
I think most of us had hoped it would be AMD's big-core series at least. A 2M/4T CPU at 3GHz+.

An A8-7600 would have been pretty sweet as a console APU. Had the target TDP been software-adjustable during operation, developers could have pulled some pretty fancy tricks with that.

Not possible within the given TDP limit of the PS4/XBONE. More, smaller cores are more TDP-efficient.

TDP limits are bars that can be moved. In all likelihood, both Sony and MS were averse to risking more hardware failure fiascos à la the Red Ring of Death, so they overcompensated by setting the target TDP too low. Fear is a compelling motive.

This is not even wasting taxpayers' money. This is actually trying to hinder competition in the market - using taxpayers' money. How pathetic can it get?

Typical "chamber of commerce"-style thinking. Help the local company at all costs. When all municipal and provincial/state governments everywhere do the same thing, all you get is a bunch of crony capitalism. Larger corporations expect subsidy wherever they go, in various different forms.

What's funny is that they list the % of subsidy from the Quebec government for games produced in English. What do you suppose the % is for games produced in French? How do they handle bilingual products (which very many are, Ubisoft or otherwise)?
 

NTMBK

Lifer
Nov 14, 2011
10,448
5,831
136
TDP limits are bars that can be moved. In all likelihood, both Sony and MS were averse to risking more hardware failure fiascos à la the Red Ring of Death, so they overcompensated by setting the target TDP too low. Fear is a compelling motive.

The 360 fiasco definitely played a big part in this. It clearly showed the highest TDP they can reliably fit into a console-sized package, namely slightly lower than the launch Xbox 360's. Remember that even the hottest original incarnation of the 360 only drew 180W while gaming - tiny compared to a modern PC PSU!

While console power consumption has fallen since last generation, gaming PC power consumption has massively increased. The entire PS4 consumes 140W at most, less than the TDP of a single HD 7870 - and less than half the power consumption of an R9 290X.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
An A8-7600 would have been pretty sweet as a console APU. Had the target TDP been software-adjustable during operation, developers could have pulled some pretty fancy tricks with that.



TDP limits are bars that can be moved. In all likelihood, both Sony and MS were averse to risking more hardware failure fiascos à la the Red Ring of Death, so they overcompensated by setting the target TDP too low. Fear is a compelling motive.

TDP was never an issue. Cooling has also improved dramatically since then, plus there's throttle behaviour if it should overheat.

I have a feeling they went with the 8 slower cores due to PR. A 2M/4T chip just wouldn't look good on paper when you had a 3C/6T and an "8 core" Cell before. The cores are also so weak that we see a whole 2 cores dedicated to non-gaming tasks.

It's a real shame though, because it impacts games in such a negative manner.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
While console power consumption has fallen since last generation, gaming PC power consumption has massively increased. The entire PS4 consumes 140W at most, less than the TDP of a single HD 7870 - and less than half the power consumption of an R9 290X.

A GTX970M 17" notebook with a quadcore Intel CPU with HT and a GPU performance level of a GTX680 consumes ~150W - with the LCD display.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
A GTX 970M 17" notebook with a quad-core Intel CPU with HT and the GPU performance of a GTX 680 consumes ~150W - including the LCD display.

Yeah, but I doubt that Intel and Nvidia would sell those at 15-17% gross margins.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
A GTX 970M 17" notebook with a quad-core Intel CPU with HT and the GPU performance of a GTX 680 consumes ~150W - including the LCD display.

Quad-core Intel is 170mm2 - 37% for the iGPU = ~100-110mm2 at 22nm

GM204 at 28nm = 400mm2

Total ~500mm2 vs ~300mm2 for the console SoCs.
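
Spelling that sum out (a rough back-of-the-envelope sketch; the 37% iGPU share and the die figures are simply the estimates quoted above):

```python
# Back-of-the-envelope die-area comparison using the figures quoted above.
intel_quad_die = 170                           # mm^2 at 22nm, including the iGPU
igpu_share = 0.37                              # estimated iGPU share of that die
cpu_only = intel_quad_die * (1 - igpu_share)   # ~107 mm^2
gm204 = 400                                    # mm^2 at 28nm
print(f"CPU-only: ~{cpu_only:.0f} mm^2, CPU + GM204: ~{cpu_only + gm204:.0f} mm^2, console SoC: ~300 mm^2")
```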

I don't believe Microsoft or Sony would want something like that, even if both Intel and NVIDIA allowed 15-20% margins.

Not to mention Maxwell was not ready at the time.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
TDP was never an issue. Cooling has also improved dramatically since then, plus there's throttle behaviour if it should overheat.

I have a feeling they went with the 8 slower cores due to PR. A 2M/4T chip just wouldn't look good on paper when you had a 3C/6T and an "8 core" Cell before. The cores are also so weak that we see a whole 2 cores dedicated to non-gaming tasks.

It's a real shame though, because it impacts games in such a negative manner.

The last thing you want on a console is variable performance. That's the whole point of consoles: the exact same performance for millions of users. You can pinpoint your target.

Doing some fishy boost technology that helps in short benchmark bursts to artificially inflate scores is, to put it mildly, unnecessary. ;)
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
The last thing you want on a console is variable performance. That's the whole point of consoles: the exact same performance for millions of users. You can pinpoint your target.

Doing some fishy boost technology that helps in short benchmark bursts to artificially inflate scores is, to put it mildly, unnecessary. ;)

The current consoles throttle/shut down as well if they overheat. So there is absolutely zero difference.
Example:
http://www.geek.com/games/xbox-one-will-get-louder-and-slower-instead-of-overheating-1567107/

Nobody is talking about boost either.
 
Last edited:

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
TDP was never an issue. Cooling has also improved dramatically since then, plus there's throttle behaviour if it should overheat.

It sounded like you were talking about the dynamic clocks seen on current PC hardware, which adjust speed according to load, power and temperature.

There is a dramatic difference between current console thermal design:
(from your link)
According to Castillo, the Xbox One has been designed to never run at its full performance capacity under normal conditions. That way there’s always some headroom if things start to go awry
So unless you block the cooling vents on the machine, or sit something else that runs hot on top of the Xbox One, it should never overheat. In the worst case your games may slow down and the internal fan becomes audible.

and PC hardware that runs on constantly self-adjusting clocks based on many variables.

I bet it slows/shuts down. I don't think anyone would like to see smoke in their living room. But I don't think it's anything like the constant throttling/boosting (in the end it's the same thing) we see in desktop GPUs.
 
Last edited:

NTMBK

Lifer
Nov 14, 2011
10,448
5,831
136
A GTX 970M 17" notebook with a quad-core Intel CPU with HT and the GPU performance of a GTX 680 consumes ~150W - including the LCD display.

Mobile parts only get such low power consumption by binning heavily for low voltages. If you tried that for a console you would be left with millions of unsellable parts.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
They have no business running AI on the CPU. That's just bad programming.

Exactly. And it's old news.

"This is what makes AI ripe for the picking when it comes to GPGPU technology. Almost every PC gamer has a graphics card, and providing it’s compatible with Nvidia’s CUDA or AMD’s Stream technology (or a cross-platform GPU API such as OpenCL), it could be used to take some of the load from the CPU when it comes to repetitive AI processing.

AMD’s head of developer relations, Richard Huddy, explains that the most common AI tasks involve visibility queries and path finding queries. "Our recent research into AI suggests that it isn’t uncommon for gaming AI to spend more than 90 per cent of its time resolving these two simple questions," says Huddy. He adds that these two queries are "almost perfect for GPU implementation", since they "make excellent use of the GPU’s inherently parallel architecture and typically aren’t memory-bound".

Nvidia agrees with this. Director of product management for PhysX, Nadeem Mohammad, explained that "the simple, complex operations" involved with pathfinding and collision detection "are all very repetitive, so pathfinding is one of the algorithms that works very well on CUDA". Mohammad adds that ray tracing via CUDA could play a useful part in AI when it comes to visibility queries. We aren’t talking about graphical ray tracing, but tracing a ray from a bot in order to work out what it can see. "You have to shoot rays from point A to point B to see if they hit anything," says Mohammad. "We do the same calibration in PhysX for operations such as collision detection."

http://www.bit-tech.net/gaming/2009/03/05/how-ai-in-games-works/7

(this is from 2009)
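
To illustrate why those visibility queries suit data-parallel hardware so well, here is a minimal sketch (NumPy on the CPU standing in for a GPU kernel; the scene representation and all names are made up for the example, not anything from the article): every bot-to-target ray test is independent, so thousands of them can be evaluated in one batched operation.

```python
import numpy as np

def visible(bot_pos, target_pos, obstacle_centers, obstacle_radii):
    """Batched line-of-sight test: for each bot, does the segment from the bot
    to its target pass through any obstacle sphere? Each test is independent,
    which is why this kind of query maps so well to a GPU."""
    d = target_pos - bot_pos                            # (N, 3) segment vectors
    seg_len = np.linalg.norm(d, axis=1, keepdims=True)  # (N, 1) segment lengths
    d = d / seg_len                                     # unit directions
    # Vector from each bot to each obstacle centre: shape (N, M, 3)
    to_obs = obstacle_centers[None, :, :] - bot_pos[:, None, :]
    # Project onto the ray, clamp to the segment, then find the closest point
    t = np.clip(np.einsum('nmk,nk->nm', to_obs, d), 0.0, seg_len)
    closest = bot_pos[:, None, :] + t[:, :, None] * d[:, None, :]
    dist = np.linalg.norm(obstacle_centers[None, :, :] - closest, axis=2)
    blocked = (dist < obstacle_radii[None, :]).any(axis=1)
    return ~blocked

# Example: 10,000 bots checking line of sight to the player, 200 sphere obstacles.
rng = np.random.default_rng(0)
bots = rng.uniform(-50, 50, size=(10_000, 3))
player = np.zeros((10_000, 3))
obstacles = rng.uniform(-50, 50, size=(200, 3))
radii = rng.uniform(1.0, 3.0, size=200)
print(visible(bots, player, obstacles, radii).sum(), "bots can see the player")
```

The same structure - one thread per ray, no dependencies between rays - is what a CUDA/OpenCL version would exploit.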
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
To the degree that, e.g., AI cannot be run on the GPU for Ubisoft, what is the purpose of complaining? The specs have been known for years; it's a little bit late in the game for that.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Quad-core Intel is 170mm2 - 37% for the iGPU = ~100-110mm2 at 22nm

GM204 at 28nm = 400mm2

Total ~500mm2 vs ~300mm2 for the console SoCs.

Nope. GM204 is a 2048-core chip; the 970M only has 1280 cores. The die size needed to implement it would be significantly smaller (1280 cores and a 192-bit memory bus).

It would still be larger, but a GM206-type implementation would have been used.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
80-100W worth of Maxwell GPU performance would probably walk all over the X1/PS4's graphics capabilities. Alternatively, M$/Sony could have achieved Radeon HD 7850-like performance with lower power consumption (perhaps smaller die area too) and invested a little bit more on the CPU side (higher-clocked Jaguar, 2M/4C Piledriver/Kaveri, Intel?). If only Maxwell had been ready by late 2013. :p
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Nope. GM204 is a 2048-core chip; the 970M only has 1280 cores. The die size needed to implement it would be significantly smaller (1280 cores and a 192-bit memory bus).

It would still be larger, but a GM206-type implementation would have been used.

Even if GM206 is close to 300mm2, it is still very large to be paired with a quad big-core Intel CPU. And then again, you wouldn't need that much CPU performance for that GPU.

2-3 years ago, when MS and Sony were looking for their next consoles, the best possible implementation at the best price was AMD Jaguar + GCN. What we have today or what will come next year is irrelevant; we have to look at what was available 2-3 years ago from Intel and NVIDIA. And it seems neither of them could provide something better at the time.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Even if GM206 is close to 300mm2, it is still very large to be paired with a quad big-core Intel CPU. And then again, you wouldn't need that much CPU performance for that GPU.

2-3 years ago, when MS and Sony were looking for their next consoles, the best possible implementation at the best price was AMD Jaguar + GCN. What we have today or what will come next year is irrelevant; we have to look at what was available 2-3 years ago from Intel and NVIDIA. And it seems neither of them could provide something better at the time.

Except for the fact that it would absolutely destroy the consoles. I understand what you are saying: an APU was the way to go, but for similar performance the comparison (if you are using Maxwell, which wasn't out then) would be an i3-type CPU and a 750 Ti.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Except for the fact that it would absolutely destroy the consoles. I understand what you are saying: an APU was the way to go, but for similar performance the comparison (if you are using Maxwell, which wasn't out then) would be an i3-type CPU and a 750 Ti.

Which is worse than what the PS4 has.

The 750 Ti is quite a bit slower than the 7850, which is slower than the PS4's APU.
8 Jaguar cores are great for consoles. 2 cores are relegated to background tasks, while the rest are 100% dedicated to the game. Would you sacrifice 1 of the 2 cores of an i3 to background tasks? It would be a waste.

Two heatsinks or heatpipes, two memory pools without hUMA, less of the compute performance that developers are starting to use, and no dedicated integrated sound processor. This all adds cost, and the logistics of two dies are a hassle in themselves.

I think the consoles are great for AMD. Possibly AMD pushed for more Jaguar cores rather than fewer BD cores to force developers to use more cores, which will affect the competitiveness of their CPUs - FX-8350 (8T) vs i5 (4T) and FX-6300 (6T) vs i3 (4T).

Also, putting more emphasis on the GPU than the CPU will hurt Intel more than AMD. People will not have to upgrade to new CPUs so soon, and if they do, even the low-end stuff where AMD competes is sufficient for gaming.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
1) Why do the OSes take 2 cores? What can they possibly be doing in a bare-metal programming model that needs 2 cores? Give the game 7 of 8 cores, and 7 of 8 GB of RAM. Given how little RAM and CPU the 360 and PS3 OS worked with, surely 4-5x more RAM (1GB) and 4-5x more CPU (1 Jaguar core @ 1.6-1.7GHz) is enough. Potential throughput with 7 cores instead of 6 obviously increases.

2) Why did they downclock Jaguar from its design target of 2.0GHz? They have a thermal budget, yes, but AMD clearly already did the optimization math of speed vs. TDP and settled around 2.0GHz for the top models of Jaguar. It's already a very low-power CPU... Maybe the process they fab the console chips on has an inferior power curve?
 
Last edited:

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Did you ever think that they could actually have exceeded the CPU capacities of both console chips?

This ^^^^.

People seem to forget that Assassin's Creed was the first implementation of many-actor crowd AI, which is about as hard on the CPU as any game mechanic can get. And that was on 360 and PS3.

Unlike graphics, simulating crowds in buildings and areas around the player isn't reduced to nearly nil by occlusion; the calculations must still run even when the non-player actors are not in view, at least within some defined radius around the player. CPU usage climbs steeply as you increase that radius, since the number of simulated actors grows roughly with the area covered. They could easily max out the CPU's capabilities. Game engine designers are no idiots; they're some of the smartest and sharpest people in computer science, to be honest... The problem is they come with a corresponding price tag, which studios try to minimize.
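
To make that scaling concrete, here's a toy sketch (made-up world size, density and radii; nothing from Ubisoft's actual engine): actors inside the simulation radius have to be ticked whether or not they are visible, so the per-frame AI workload grows roughly with the square of that radius.

```python
import math
import random

# Toy crowd-AI cost model: actors are simulated only within a radius around the
# player, so the number of AI updates per frame grows roughly with the area of
# that circle (radius squared), independent of what the GPU has to draw.
WORLD_SIZE = 1000.0     # metres, square world (made-up figure)
ACTOR_DENSITY = 0.02    # actors per square metre (made-up figure)

random.seed(1)
n_actors = int(ACTOR_DENSITY * WORLD_SIZE ** 2)
actors = [(random.uniform(0, WORLD_SIZE), random.uniform(0, WORLD_SIZE))
          for _ in range(n_actors)]
player = (WORLD_SIZE / 2, WORLD_SIZE / 2)

def updates_per_frame(sim_radius):
    """Count actors whose AI must run this frame, regardless of visibility."""
    px, py = player
    return sum(1 for x, y in actors if math.hypot(x - px, y - py) <= sim_radius)

for r in (25, 50, 100, 200):
    print(f"simulation radius {r:>3} m -> {updates_per_frame(r):>6} AI updates per frame")
```

Doubling the radius roughly quadruples the work, which is why that radius is such an expensive knob on a CPU-limited platform.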
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
We've all seen devs do a lot with so few resources. What makes you think the cores can't cut it? What is enough? Where does it end?

I don't believe the CPUs are too slow; I believe they can't optimize their designs to the specifics of the consoles. This is just clever marketing and a way to shift the blame from their lack of innovation to a lack of performance.

AI especially can be shifted to compute on the GPU; I am just too ignorant of how AI would affect the resolution to the point where they couldn't hit 1080p30 instead of 900p30.

This is the correct response. Take a look at what games look like on consoles as developers become more familiar with the hardware. I'm impressed by how good Gran Turismo looks on the PS1.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
The problem is they are already familiar with the hardware from the PC side.

Even the aspect ratio is now being chopped down to 2.35:1, along with the same 30FPS nonsense.
 

NTMBK

Lifer
Nov 14, 2011
10,448
5,831
136
The problem is they are already familiar with the hardware from the PC side.

Are they familiar with the unified memory pool? With fine-grained compute? With the quirks of the Jaguar core? With the DMA offload engines? With the audio coprocessor? With the SRAM memory pool in the XBone?