Nvidia: Not Enough Money in a PS4 GPU for us to bother


ams23

Senior member
Feb 18, 2013
907
0
0
Dumb Nvidia, dumb for not using that awesome A9 R4 core in the Tegra 4 chip. Better perf per watt and perf per mm^2. Beats me why they haven't included it in their high end chip. Oh, yeah... maybe cuz it's nowhere near as powerful as an A15 is.

While the R4 Cortex A9 CPU has higher performance per watt and performance per mm^2, the Cortex A15 CPU has higher IPC and higher maximum performance, so there is a tradeoff involved here. Also, it will be quite a few months before R4 Cortex A9 is available, while Cortex A15 will be available much sooner (and already has been commercially available in an SoC since late last year).
 

Imouto

Golden Member
Jul 6, 2011
1,241
2
81
Yeah, yeah. My awesomesauce chip will beat yours... when I launch it in 6-9 months.

The industry will stay still for Nvidia.
 

ams23

Senior member
Feb 18, 2013
907
0
0
T4i first production silicon with R4 Cortex A9 is already available, and carrier certification will start in Q2 2013. The primary reason for the long wait time is the six month lag between carrier certification and device production. S800 will be available sooner, but not before second half of 2013. S800 will be targeting high end smartphones, while T4i will be targeting mainstream smartphones.
 
Last edited:

Imouto

Golden Member
Jul 6, 2011
1,241
2
81
The frigging S600 will launch next month with the Galaxy S4 and the LG Optimus G. WTF are you talking about?
 

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
Did you even bother to read this thread? T4i doesn't use a standard Cortex A9, it uses a new and not-yet-released R4 rearchitected variant that has many features in common with the Cortex A15. S800 uses the Krait 400 CPU, which is Qualcomm's own design. Since neither of these CPUs has been released yet, it is hard to say for sure exactly how they would stack up. The only thing we can probably conclude is that neither of these CPUs will quite match the performance of the Cortex A15.

Krait has a wider front end and beefed up back end with an extra level of cache. I bet the S4 Pro will be faster than the A9r4 cores clock/clock.

T4i first production silicon with R4 Cortex A9 is already available, and carrier certification will start in Q2 2013. The primary reason for the long wait time is the six month lag between carrier certification and device production. S800 will be available sooner, but not before second half of 2013. S800 will be targeting high end smartphones, while T4i will be targeting mainstream smartphones.

T4i has been sampled? Source? Phonearena reports sampling to start in Q2 and be available to consumers Q3/4
 
Last edited:

ams23

Senior member
Feb 18, 2013
907
0
0
Krait has a wider front end and beefed up back end with an extra level of cache. I bet the S4 Pro will be faster than the A9r4 cores clock/clock.

Based on SPECInt, NVIDIA believes that R4 Cortex A9 has higher IPC (i.e. will be faster clock for clock) than Krait 400 in S800 (let alone Krait from S4 Pro). Anyway, not worth debating further until we see some real results from R4 Cortex A9 and from Krait 400.

T4i has been sampled? Source? Phonearena reports sampling to start in Q3 and be available to consumers Q1 '14.

Yes, it has already started sampling. Source is Mobile World Congress 2013.
 
Last edited:

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
The frigging S600 will launch next month with the Galaxy S4 and the LG Optimus G. WTF are you talking about?

You must mean like AMD's creditors are going to wait for console revenue 6-9 months from now? lol Let's hope they sell, or who knows what's going up for sale next.
 

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
Based on SPECInt, NVIDIA believes that R4 Cortex A9 has higher IPC (ie. will be faster clock for clock) than S800, let alone S4 Pro. Anyway, not worth debating further until we see some real results from R4 Cortex A9 and from Krait 400.

Cortex A9r4 with 15-30% higher IPC than the A9 is going to have higher IPC than S800? Yeah, sounds legit.

EDIT: Hell, Nvidia claims 0-25% performance increase

[Image: Tegra 4 Deep Dive, Slide 29]
 
Last edited:

ams23

Senior member
Feb 18, 2013
907
0
0
Cortex A9r4 with 15-30% higher IPC than the A9 is going to have higher IPC than S800? Yeah, sounds legit.

EDIT: Hell, Nvidia claims 0-25% performance increase

I suggest that you actually read their CPU whitepaper to learn about all the enhancements made to the R4 variant, and to see why they don't consider Dhrystone MIPS to be a reliable benchmark for modern workloads.
 

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
I suggest that you actually read their CPU whitepaper to learn about all the enhancements made to the R4 variant, and to see why they don't consider Dhrystone MIPS to be a reliable benchmark for modern workloads.

So they put in a marketing slide with claimed IPC improvements that are actually less than they'll be in real life? Why would they do that? If they get more than 25% IPC increase, wouldn't they promote it?
 

ams23

Senior member
Feb 18, 2013
907
0
0
So they put in a marketing slide with claimed IPC improvements that are actually less than they'll be in real life? Why would they do that? If they get more than 25% IPC increase, wouldn't they promote it?

They compared SPECInt IPC of R4 Cortex A9 to what is expected with Krait 400, and R4 Cortex A9 had higher IPC. For a different benchmark or different workload, that may or may not be the case, but that still doesn't refute the fact that R4 Cortex A9 is using several design features from Cortex A15 that are designed to improve real world performance with modern day workloads.
 

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
They compared SPECInt IPC of R4 Cortex A9 to what is expected with Krait 400, and R4 Cortex A9 had higher IPC. For a different benchmark or different workload, that may or may not be the case, but that still doesn't refute the fact that R4 Cortex A9 is using several design features from Cortex A15 that are designed to improve real world performance with modern day workloads.

You should probably take some time to look into Krait architectural differences compared to A9 and A15. Look at the front and back end.

http://www.anandtech.com/show/5559/...mance-preview-msm8960-adreno-225-benchmarks/2

20-80% faster than Cortex A9. And that's the old S4 Krait cores. What Nvidia claims will have higher IPC than the Krait 400 cores will probably be about as fast as what Qualcomm had last year.
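To put rough numbers on that argument: a purely illustrative back-of-the-envelope sketch, using only the percentages quoted in this thread (Nvidia's claimed 15-30% uplift, AnandTech's measured 20-80% for the old S4 Krait), not real benchmark data:

```python
# Rough per-clock performance comparison, normalized to the original
# Cortex A9 (= 1.0). Ranges are the figures quoted in this thread and
# are purely illustrative, not measured results.

a9 = 1.0
a9r4 = (1.15, 1.30)      # NVIDIA's claimed 15-30% IPC uplift over A9
krait_s4 = (1.20, 1.80)  # AnandTech's 20-80% result for the old S4 Krait

def midpoint(lo_hi):
    """Midpoint of a (low, high) range."""
    lo, hi = lo_hi
    return (lo + hi) / 2

print(f"A9r4 midpoint:     {midpoint(a9r4):.2f}x A9")
print(f"S4 Krait midpoint: {midpoint(krait_s4):.2f}x A9")
# A9r4's best case (1.30x) sits near the bottom of the old Krait's
# measured range, and Krait 400 is supposed to be faster still.
```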
 

ams23

Senior member
Feb 18, 2013
907
0
0
You should probably take some time to look into Krait architectural differences compared to A9 and A15. Look at the front and back end.

Those tests do not show the difference in IPC between R4 Cortex A9 and Krait. Linpack tests the memory controller, which is radically improved in newer ARM SoC designs. Sunspider and Browsermark test the JavaScript browser speed, which again is radically improved in newer ARM SoC designs. Vellamo is Qualcomm's own benchmark to test the system/browser/CPU. Basemark tests I/O performance, which yet again is radically improved in newer ARM SoC designs. Even Cortex A9 has been improved several times since it first came out, with R4 being the latest and greatest fourth revision.

NVIDIA showed some slides at MWC 2013 comparing Tegra 4 (with Cortex A15) to quad-core S4 Pro (with Krait) (http://forums.anandtech.com/showpost.php?p=34689618&postcount=230). Tegra 4 apparently is more than 2x faster than S4 Pro in SPECint2000, Sunspider, Web page load, WebGL Aquarium (50 fish), Quadrant Pro 2.0, and Vellamo Metal; between 1.5-2x faster in Geekbench, AndEBench Native, CFBench Native, Linpack MT (4T-Market), Antutu, DMIPS, and Vellamo HTML5; and only slightly faster in Coremark.

Strangely enough, NVIDIA also estimated the performance of Snapdragon 800 on a case-by-case basis. Take this with a huge grain of salt, but NVIDIA expects Tegra 4 to have a significant performance advantage vs. S800 in SPECint2000, Sunspider, Web page load, WebGL Aquarium (50 fish), Quadrant Pro 2.0, Geekbench, CFBench Native, Linpack MT (4T-Market), Antutu, Vellamo HTML5, and Vellamo Metal, while Snapdragon 800 will have a slight advantage in AndEBench Native and a significant advantage in Coremark and DMIPS. Tegra 4i is supposed to have 80% of the CPU performance of Tegra 4, but we will have to wait and see if that is truly the case across the board.

Finally, here is a more recent chart comparing Tegra 4 (with Cortex A15) to S600 (with Krait 300) in SPECInt2000, Sunspider, and Web page load, where Tegra 4 is again more than 2x faster: http://www.notebookcheck.net/typo3temp/pics/64b5dababd.png . Like it or not, the Cortex A15 is a very fast CPU (relatively speaking). If R4 Cortex A9 comes anywhere even remotely close to the performance of Cortex A15 (which is not totally out of the question, given that it shares many features with the A15 and will run at a significantly higher operating frequency), then it will be quite fast too. Obviously Krait 400 will be quite fast as well; there is no denying that.
 
Last edited:

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
You keep running the same mantra "it has A15 features so it must be faster!" while completely disregarding the fact that so do the Krait cores.

Wall, meet head. Commence banging.
 

ams23

Senior member
Feb 18, 2013
907
0
0
You keep running the same mantra "it has A15 features so it must be faster!" while completely disregarding the fact that so do the Krait cores.

Right, like how the Cortex A15 has: a 128 micro-op lookup window (compared to 40 in Krait); 32KB L1 data cache (compared to 16KB in Krait); 2MB unified L2 cache (compared to 512KB per core in Krait); Variable Symmetric Multiprocessing (compared to Asynchronous Symmetric Multiprocessing in Krait); not to mention differences in pipeline depth, branch prediction, out-of-order execution, etc. Let's face it, Krait is a custom design that is significantly different than both Cortex A15 and R4 Cortex A9.
 
Last edited:

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Right, like how the Cortex A15 has: 128 micro-op lookup window (compared to 40 in Krait); 32KB L1 data cache (compared to 16KB in Krait); 2MB unified L2 cache (compared to 512KB per core in Krait), Variable Symmetric Multiprocessing (compared to Asynchronous Symmetric Multiprocessing in Krait), not to mention differences in pipeline depth, branch prediction, Out of Order execution, etc. too. Let's face it, Krait is a custom design that is significantly different than both Cortex A15 and R4 Cortex A9.

So how much time have you devoted to reading white papers on Tegra 4 from nvidia? I've seen at least 10 references to this.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Why do people keep diverting from the thread topic? If you want an Nvidia Tegra discussion, start another thread. As for the statement from Nvidia, which is the subject of this thread: Nvidia does not have the x86 license needed to deliver a single-chip x86 APU. So Nvidia can say what they want, but it's clear that Sony and MS chose AMD because of the x86 technology (which developers prefer for ease of development) and the ability to design a single-chip APU with a very high level of integration, both architecturally (a unified memory address space with fully coherent memory between CPU and GPU) and physically (a single chip with low latency between CPU and GPU, plus power, die size, and cost benefits).
 

NTMBK

Lifer
Nov 14, 2011
10,411
5,677
136
Why do people keep diverting from the thread topic? If you want an Nvidia Tegra discussion, start another thread. As for the statement from Nvidia, which is the subject of this thread: Nvidia does not have the x86 license needed to deliver a single-chip x86 APU. So Nvidia can say what they want, but it's clear that Sony and MS chose AMD because of the x86 technology (which developers prefer for ease of development) and the ability to design a single-chip APU with a very high level of integration, both architecturally (a unified memory address space with fully coherent memory between CPU and GPU) and physically (a single chip with low latency between CPU and GPU, plus power, die size, and cost benefits).

I agree with a lot of that, but I disagree that x86 was a necessity. PowerPC can be just as easy to develop for as x86, so long as it is in a sensibly designed processor. The PowerPC cores in the PS3 and 360 were a pain to develop for because they were slow, in-order designs. A well designed out of order PowerPC chip (without any of the Cell silliness) would be just as easy to develop for as Jaguar.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
I agree with a lot of that, but I disagree that x86 was a necessity. PowerPC can be just as easy to develop for as x86, so long as it is in a sensibly designed processor. The PowerPC cores in the PS3 and 360 were a pain to develop for because they were slow, in-order designs. A well designed out of order PowerPC chip (without any of the Cell silliness) would be just as easy to develop for as Jaguar.

Developing on an x86-based console is going to be easier because porting to PC will be very easy. In fact, developers would save time and could focus their efforts on the creative aspects of game development rather than the purely technical ones.

Also, in this case AMD is not selling CPU or GPU IP; they are selling a fully integrated APU (SoC) with CPU, GPU, a high-speed, low-latency communication bus between CPU and GPU, memory controller, video decoders, etc. So with this kind of design it's all or nothing. AMD is selling manufactured chips because the x86 license prohibits licensing to other companies. So it's either AMD, Intel, or IBM (PowerPC with some other GPU tech). Given that only AMD can provide good x86 performance with industry-leading graphics and GPU compute performance, it's easy to see why Sony and Microsoft picked AMD.
 
Last edited:

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
I agree with a lot of that, but I disagree that x86 was a necessity. PowerPC can be just as easy to develop for as x86, so long as it is in a sensibly designed processor. The PowerPC cores in the PS3 and 360 were a pain to develop for because they were slow, in-order designs. A well designed out of order PowerPC chip (without any of the Cell silliness) would be just as easy to develop for as Jaguar.

As raghu already stated, the reason for going x86 wasn't that it is necessarily easier to code for than PowerPC or Cell, but that it would allow devs to port games between the consoles and PC far more easily, which would save money and help offset the increased cost of developing games for next-gen consoles.

They are going to need more level designers, artists, programmers, etc. to build games for the increased capabilities of the new consoles. Games and game worlds are going to get bigger and require more of everything. The console makers are going to push any approach that eases game development and reduces costs for their content creators.
 

cplusplus

Member
Apr 28, 2005
91
0
0
Why do people keep diverting from the thread topic? If you want an Nvidia Tegra discussion, start another thread. As for the statement from Nvidia, which is the subject of this thread: Nvidia does not have the x86 license needed to deliver a single-chip x86 APU. So Nvidia can say what they want, but it's clear that Sony and MS chose AMD because of the x86 technology (which developers prefer for ease of development) and the ability to design a single-chip APU with a very high level of integration, both architecturally (a unified memory address space with fully coherent memory between CPU and GPU) and physically (a single chip with low latency between CPU and GPU, plus power, die size, and cost benefits).

I would argue that Tegra isn't as off-topic as you think, as it was more than likely what nVidia pitched to MS and Sony to use in their consoles. Because while they probably didn't just want to put a GPU in there, you can be damned sure that they would have if they could have gotten Tegra in there as well.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
I would argue that Tegra isn't as off-topic as you think, as it was more than likely what nVidia pitched to MS and Sony to use in their consoles. Because while they probably didn't just want to put a GPU in there, you can be damned sure that they would have if they could have gotten Tegra in there as well.

It's not off topic because it goes to show that Nvidia is in the market for low margin deals, and shows that their comments about their disinterest in the console contracts are all a load of marketing spin.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Ok, so let's assume Tegra is low margin.

What else is it?

Is it a possible replacement for servers using their graphics cards mainly?

Is it a possible replacement for x86 or at least a contender in the foreseeable future?

Can volume change the dynamics, meaning can the volume of devices sold with ARM chips currently produce better revenue than is possible with a low margin console deal (assuming only Sony was possible, not both)?

What other possible benefits are there for Nvidia to push ARM outside of its direct revenue that are not present with a console contract?