2013: Console market pillaged by Kaveri.


Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
lol, consoles are weak by today's standards and are going to be replaced soon

btw, even Llano is already more powerful
 

happysmiles

Senior member
May 1, 2012
340
0
0
it's the same story

"AMD is going to be bees knees next year"

next year comes and it's an improvement but no one is buying

"We will have to wait if AMD really brings it next year"

at the same time

"AMD is going broke! everyone jump the sinking ship"

and then

"Intel has too much market share and could squash AMD if they really wanted to"


over and over and over again.

I want AMD to succeed, I want them to make the products we all know deep down inside they are capable of making.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
They don't want to. In fact they are as scared as AMD fanboys that AMD might go bust.

You don't even have to be a fan of AMD to worry here. We have already seen a tremendous slow-down on the CPU side as a result of a non-competitive AMD. I definitely do not want this to happen on the GPU side. Also, price increases and stagnation will inevitably come from both Intel and NV if AMD goes bust. Intel will just continue on its miserable path of adding a 35-45% increase in CPU speed every 4-5 years, hiding price increases by delivering smallish dies and holding back 6- and 8-core CPUs from the $300 level. NV, I don't even want to imagine. They could easily prolong a generation from 1.5 to 3 years and we'll be buying sub-300mm^2 die chips for $500 for years to come. Not fun.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
No matter how many times you repeat it, die size is not going to magically become a metric of processor value across different nodes.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
They could easily prolong a generation from 1.5 to 3 years and we'll be buying sub-300mm^2 die chips for $500 for years to come. Not fun.

Instead of thinking of it as die size, think about what you are really buying. You're buying transistors. So your measurement should be transistors / $.

Go ahead and chart that out, and you will see massive improvements in price.
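Phynaz's transistors/$ metric is easy to chart. A minimal sketch, using approximate public transistor counts and launch prices (these figures are from memory, not from this thread, so treat them as illustrative):

```python
# Rough transistors-per-dollar across three Intel desktop generations.
# Transistor counts and launch prices below are approximate public figures;
# treat them as illustrative, not authoritative.
chips = [
    # (name, transistor count, launch price in USD)
    ("Core i7-920 (45nm, 2008)",   731e6, 284.0),
    ("Core i7-2600K (32nm, 2011)", 995e6, 317.0),
    ("Core i7-3770K (22nm, 2012)", 1.4e9, 313.0),
]

for name, transistors, price in chips:
    # millions of transistors per dollar spent
    print(f"{name}: {transistors / price / 1e6:.2f}M transistors/$")
```

Even with rough inputs, the trend is a steady climb in transistors per dollar, which is Phynaz's point.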
 

Abwx

Lifer
Apr 2, 2011
11,910
4,890
136
Instead of thinking of it as die size, think about what you are really buying. You're buying transistors. So your measurement should be transistors / $.

Go ahead and chart that out, and you will see massive improvements in price.

Transistors that are cheaper to produce, isn't it... :whistle:
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Transistors that are cheaper to produce, isn't it... :whistle:

But how much cheaper?

But I agree, using die size/$ is wrong. Using transistors/$ is equally wrong.

We aren't far from the point where a node shrink will cost close to the same per transistor to produce as the previous one.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
No matter how many times you repeat it, die size is not going to magically become a metric of processor value across different nodes.

No matter how much you deny it, the progress in CPUs on the Intel side has slowed down tremendously from historical levels. By now, at the very least, we should be at a 6-core CPU at $325. In the past, when AMD was competitive, CPU speed increased on average 2x every 2 to 2.5 years via IPC, clock speeds and more cores. Those gains were passed on to us for free over time as node shrinks allowed more transistors to be packed into similar die space, and Intel would share the cost savings of node shrinks with us by passing on the extra cores. This way we went from Pentium 4 to Core 2 Duo (2x the cores), and then to C2Q and 1st generation i7 (2x the cores again). Since that era, Intel fell asleep because they could.

If you are happy with a 35-45% performance increase in the last 4 years, good for you. If you can't see that Intel has slowed down progress in the CPU space in regard to price/performance, I can't help you. I've been building systems for a long time, and after Nehalem, Intel went into hibernation mode. Next year we'll still have a quad-core Haswell for $325 or so. Intel gets away with charging nearly $100 for HT and a small bump in cache/clocks. This would never have happened if AMD had a competitive product.

Instead of thinking of it as die size, think about what you are really buying. You're buying transistors. So your measurement should be transistors / $.

Well really, I am buying neither die space nor transistors. I am really buying performance. I look at what I had in summer of 2007 and it was a Q6600 @ 3.4ghz; then in September 2009 I got a Core i7 860 @ 3.9ghz. Since then, CPU speed has hardly improved at a similar price level. Why do I keep focusing on die size? Because a larger die means Intel can add more cores and give us a 6-core CPU at $325 (P4 -> C2D -> C2Q). We cannot use the argument that "well, it makes no sense to add more cores since very few programs take advantage of more than 4 threads". During the Core 2 Duo and Quad era, even fewer programs used 2-4 threads, and yet Intel doubled the number of cores over Pentium 4 and doubled it again over C2D. So lack of software that takes advantage of 6 cores cannot be the primary reason. The primary reason is that Intel is milking the consumer because the competition is lacking, and they can get away with charging extra $ for a K series SKU, extra $ for HT and a small bump in clocks (HT was free during the Pentium 4 C era, when A64 mopped it).

When AMD was competitive, we got IPC, clock speed and core increases and this was very frequent. Transistors by themselves are meaningless if they don't really contribute to extra CPU performance. Enlarging cache or adding a GPU inside Intel CPUs does nothing for me to improve my CPU performance tangibly. If you remove the GPU inside Intel's current CPUs, the amount of die space allocated for CPU centric functions becomes even smaller. Cost wise, nothing is stopping Intel from launching a performance based 6-core Haswell CPU for $325 without a GPU. But they will not do this because there is no need. Instead, they'll probably hold back Haswell-E by nearly a year and continue charging $500+ for a 6-core IVB-E.

I will still buy Intel CPUs because I love technology and upgrading for fun, but there is no question in my mind that Intel of today is not the Intel of yesterday because they are not being pushed, not even a little. IVB is actually the perfect example of Intel going towards maximizing margins by moving away from more expensive fluxless solder to the cheap TIM. As Anand noted in his Podcast, IVB was strictly a money making part for Intel. The focus was not really on performance or overclocking. I would even say other than GPU, IVB was the worst refresh in Intel's history. Even Q9550 over Q6600/6700 series was a much better part since on top of added IPC, Q9550 could overclock to 3.8ghz.
 

Abwx

Lifer
Apr 2, 2011
11,910
4,890
136
But how much cheaper?

But I agree, using die size/$ is wrong. Using transistors/$ is equally wrong.

We aren't far from the point where a node shrink will cost close to the same per transistor to produce as the previous one.

From one node to the next, cost per transistor collapses.

Cost per area is the metric that matters, and in this respect production cost + R&D cost is at most $30/cm2 for Intel and at least $35/cm2 for AMD.
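Those per-area figures translate directly into a per-die cost. A back-of-envelope sketch, taking the $30/cm2 and $35/cm2 numbers above at face value; the example die areas are approximate figures from memory, not from this thread:

```python
# Per-die cost implied by the cost-per-area figures in the post above.
# Ignores yield, packaging, and binning, so treat results as lower bounds.
def die_cost(area_mm2: float, cost_per_cm2: float) -> float:
    """Production + R&D cost for one die at a given cost per cm^2."""
    return (area_mm2 / 100.0) * cost_per_cm2  # 100 mm^2 = 1 cm^2

# Approximate die sizes: Ivy Bridge quad-core ~160 mm^2, Trinity ~246 mm^2
print(f"Intel, 160 mm^2 at $30/cm^2: ${die_cost(160, 30):.2f}")
print(f"AMD,   246 mm^2 at $35/cm^2: ${die_cost(246, 35):.2f}")
```

The larger die at the higher per-area cost roughly doubles the implied per-die cost, which is why a per-area cost gap matters so much for AMD's margins.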
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
No matter how much you deny it, the progress in CPUs on the Intel side has slowed down tremendously from historical levels. [snip] Why do I keep focusing on die size? Because a larger die means Intel can add more cores and give us a 6-core CPU at $325 (P4 -> C2D -> C2Q). [snip]


Did you read what I said?

I said that your equating die size with value is naive or disingenuous.

Instead of addressing that, you ignored it completely, pretended I said there was zero stagnation, and went off with a wall of text rebutting some made-up point that I didn't make.

It isn't as if I made a long post that is confusing. Is it so hard to actually address someone's point without going off on a tangent and pretending what you're posting is related to what they've said?
 
Aug 11, 2008
10,451
642
126
No matter how much you deny it, the progress in CPUs on the Intel side has slowed down tremendously from historical levels. By now, at the very least, we should be at a 6-core CPU at $325. [snip] IVB is actually the perfect example of Intel going towards maximizing margins by moving away from more expensive fluxless solder to the cheap TIM. [snip]

I agree that CPU progress has slowed markedly. That is partly due to lack of competition from AMD, but also due to the market maturing. In other words, when a product is new and not very powerful/efficient, it is easy to make improvements. As a product gets more powerful and efficient, it becomes increasingly difficult and expensive to increase performance. The place where I would agree with you most strongly is that Intel should have brought out a six-core chip in the 300 to 350 dollar price range. I know a lot of software will not use it, but it would still be great to have for those that want to use it.

And I think if Bulldozer had had better performance, it might have pushed Intel to bring out a mainstream hex core. You are also right about the TIM issue for IVB. They cheaped out badly on that one. But IVB was a good step up for the mobile market because of its lower power use and the IGP.

However, whether we like it or not (and I don't), the focus has switched from raw CPU power to improving performance/watt and the iGPU. This is great for the mobile sector, but lousy for those of us who are still fans of the desktop.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Well really I am neither buying die space not transistors. I am really buying performance.

Performance can be measured in many ways. The current improvements in performance are occurring in power consumption, not doohickies per second.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
But how much cheaper?

But I agree, using die size/$ is wrong. Using transistors/$ is equally wrong.

We aren't far from the point where a node shrink will cost close to the same per transistor to produce as the previous one.

Which is why foundry consolidation will be happening soon (as you've pointed out). The easiest way to offset the higher costs of a new node is higher volume.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
Excavator has been taped out already, the first version.

Wow, do you have a link for this? On what node (I'd expect 20nm)? That's incredible if true.

I didn't even know Steamroller was taped out (though that would make sense if it's around a year out from being released).

I know GF has run test shuttles on their 14XM process, so clearly GF is running forward with their hair on fire!*


*14XM + Finfet
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
They could easily prolong a generation from 1.5 to 3 years and we'll be buying sub-300mm^2 die chips for $500 for years to come. Not fun.


No matter how many times you repeat it, die size is not going to magically become a metric of processor value across different nodes.

Instead of thinking of it as die size, think about what you are really buying. You're buying transistors. So your measurement should be transistors / $.

Go ahead and chart that out, and you will see massive improvements in price.

I feel like Russian is having one conversation and the rest of the audience is having an entirely different conversation.

And I say this because I totally see where Russian is coming from, but judging by the responses his posts have engendered I don't think people are seeing where Russian is coming from.

Which isn't to say that the other side of the convo isn't right, it is. But one dude is saying "Some apples are red!" and the other side is shouting back "Everyone knows some bananas are yellow!".

Two different conversations, both sides are right, neither realizes they are talking to themselves IMHO.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
Excavator has been taped out already, the first version.
Wow, do you have a link for this? On what node (I'd expect 20nm)? That's incredible if true.

I didn't even know Steamroller was taped out (though that would make sense if it's around a year out from being released).

Hey are you still here? I'd really like more info on this.
 

tipoo

Senior member
Oct 4, 2012
245
7
81
I would guess that 70-95% of console people just want a box they can plug in and play their latest favorites on; it doesn't matter to them if PCs in the same form factor are better.

A Steam box might work, but only if it had other publishers' support and good online multiplayer. Plus, with ever-changing hardware, people may run into incompatibilities, which again would throw off most console people.

An APU IN a console on the other hand, that could do it, assuming one had adequate performance. For consoles a high end APU in 2014 might just do it. And it would mean code well optimised for PC chips too, a nice thing for us.
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
@RussianSensation

I am not disagreeing that Intel is no longer focusing on just speed, and that because of this we are barely faster than we were 4 years ago.

Instead of trying to get faster than they were 4 years ago, Intel is trying to get more efficient, with less power use than 4 years ago.

The work performed by an intel i7 920 desktop chip with a 130w tdp can now be performed by a 35w laptop chip (intel i7 3612qm) and the laptop chip is a good amount faster than the non overclocked desktop chip.

Intel i7 920 (130w tdp)
3846 Score in CB Marks (Higher is better) Cinebench R10 - Single Threaded Benchmark
16211 Score in CB Marks (Higher is better) Cinebench R10 - Multithreaded Benchmark

vs
Intel i7 3612QM (35w tdp)
5682 (47% faster) Score in CB Marks (Higher is better) Cinebench R10 - Single Threaded Benchmark
21434 (32% faster) Score in CB Marks (Higher is better) Cinebench R10 - Multithreaded Benchmark

So due to a more efficient boost, increases in IPC, die shrinks, and retooling for lower power chips, Intel can be roughly 30% faster with a laptop chip than a desktop chip of 4 years ago, all while cutting the TDP (which I agree is not power consumption) by 3.7 times. That's right: the maximum amount of heat the laptop chip has to dissipate (which is highly correlated with maximum power consumption) is 27% of that of the desktop i7 that is now 4 years old.

(To make matters even worse, the i7 920 has no integrated graphics, while on the laptop chip that 35w TDP takes into consideration the Intel HD 4000 graphics.)
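The percentages in the post above can be checked directly from the quoted Cinebench scores. A quick sketch using only the numbers from this post:

```python
# Verifying the speed-up and TDP claims from the Cinebench R10 scores
# quoted above (higher score is better).
i7_920    = {"single": 3846, "multi": 16211, "tdp_w": 130}
i7_3612qm = {"single": 5682, "multi": 21434, "tdp_w": 35}

single_gain = i7_3612qm["single"] / i7_920["single"] - 1  # ~0.48 (post rounds to 47%)
multi_gain  = i7_3612qm["multi"]  / i7_920["multi"]  - 1  # ~0.32
tdp_cut     = i7_920["tdp_w"] / i7_3612qm["tdp_w"]        # ~3.7x

print(f"single-thread: +{single_gain:.1%}, multi-thread: +{multi_gain:.1%}")
print(f"TDP cut {tdp_cut:.1f}x; laptop TDP is "
      f"{i7_3612qm['tdp_w'] / i7_920['tdp_w']:.0%} of the desktop's")
```

The arithmetic bears out the post: roughly 47% and 32% gains, with the 35W laptop TDP sitting at about 27% of the 130W desktop part.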
 

Abwx

Lifer
Apr 2, 2011
11,910
4,890
136
(To make matters even worse, the i7 920 has no integrated graphics, while on the laptop chip that 35w TDP takes into consideration the Intel HD 4000 graphics.)

If the HD 4000 is not loaded, the CPU will have the full TDP, and even beyond, at its disposal.
 

tipoo

Senior member
Oct 4, 2012
245
7
81
No matter how much you deny it, the progress in CPUs on the Intel side has slowed down tremendously since historical levels. By now at the very least we should be at 6-core CPU at $325. In the past when AMD was competitive, CPU speed increases on average 2x every 2 to 2.5 years via IPC, clock speeds and more cores.

I have to agree. From the Haswell article: "You can expect CPU performance to increase by around 5 - 15% at the same clock speed as Ivy Bridge."

10%. From a "tock". That's so puny that even Ivy Bridge, as a "tick", managed 10% in many cases.
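Compounding makes the gap stark. A small sketch comparing ~10% per yearly generation against the historical 2x-every-2.5-years pace described earlier in the thread (both rates are taken from the discussion, not measurements):

```python
# How per-generation CPU gains compound over time.
# +10%/generation is the Haswell-era estimate quoted above;
# 2x every 2.5 years is the historical pace RussianSensation describes.
years = 5
slow_pace = 1.10 ** years       # one +10% generation per year
fast_pace = 2 ** (years / 2.5)  # doubling every 2.5 years

print(f"+10%/year for {years} years: {slow_pace:.2f}x total speed-up")
print(f"2x every 2.5 years:          {fast_pace:.2f}x total speed-up")
```

Five generations of 10% gains compound to only about 1.6x, versus 4x at the old doubling pace, which is the stagnation the thread is arguing about.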
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
I feel like Russian is having one conversation and the rest of the audience is having an entirely different conversation.

And I say this because I totally see where Russian is coming from, but judging by the responses his posts have engendered I don't think people are seeing where Russian is coming from.

Which isn't to say that the other side of the convo isn't right, it is. But one dude is saying "Some apples are red!" and the other side is shouting back "Everyone knows some bananas are yellow!".

Two different conversations, both sides are right, neither realizes they are talking to themselves IMHO.

This is relatively common.

If you point out a flaw in something he says, he pretends that you're making claims you aren't and proceeds to write a novel rebutting some stance that you don't take, but one that he feels more comfortable addressing. It's kind of his thing, I guess (he does it all the time on the video card forum). Most people don't notice, because they don't realize that the length of content doesn't necessarily make it correct (or even on topic).

Nowhere did I agree or disagree regarding any potential stagnation. I am merely pointing out the pure folly in trying to assign any value or amount of progress based on die sizes and the assumption that the cost of a given unit of area is the same across multiple nodes. It's just a silly notion.

He's the king of straw man or something.
 