
Richland & Kabini rumours

I too would love to see more low-end 13.3" and 14" form factors. However, OEMs have been stonewalling on that for ages.

I would go even further and say I would like to see an 11" form factor with decent performance at a decent price. Something like the old HP dm1z with the E-450, but at a similar or lower price point with better performance. I was very intrigued by that one, but the performance did not seem quite up to the price, especially when you could get a full-size laptop with a Pentium for the same price.
 
No, but the i3 isn't a gaming chip capable of making 99% of customers happy. The 5800K is.

I assume you are talking about without a discrete card, because the i3 is very capable as a gaming chip with a discrete card. I just can't believe people keep pushing the 5800K as a "gaming" processor when a $50 to $100 discrete card will pretty much double the performance.
 
I assume you are talking about without a discrete card, because the i3 is very capable as a gaming chip with a discrete card. I just can't believe people keep pushing the 5800K as a "gaming" processor when a $50 to $100 discrete card will pretty much double the performance.

Yeah, and amazingly enough, $50 to $100 extra is more than most people want to pay for a PC.

http://www.youtube.com/watch?v=0k2znvfyfUs&feature=youtu.be

Watch that and tell me that the 5800K isn't a gaming chip. Now consider that less than 2% of the market will ever play a game as demanding as Far Cry 3.

The difference between the 5800K and the i3s is playability. Forget about doubling fps with a discrete card; we're talking playable fps, and the i3 will never cut it whereas the 5800K does. The majority of the desktop-PC-buying public buys Intel and ends up disappointed at how crap it is at gaming. So they then have to buy a discrete card and get a couple of fps more than if they had bought the 5800K? Had they bought the 5800K in the first place, they wouldn't have been looking for a discrete card, because they'd be satisfied with their PC's performance out of the box. These people are not enthusiasts; they just want games to run playably, and Intel graphics fail to deliver.

Intel is creating a legacy of being crap at gaming for themselves; that's why they are trying so hard to change it by doubling their graphics real estate every year.
 
For Apple, their stockholders are getting uneasy after seeing the stock drop from 700 to 500 in a short amount of time without any sign of slowing down. And seeing the continual onslaught from Android, with Samsung in the lead, isn't fun either. Apple simply put its entire business in three very volatile products: iPhone, iPad and iTunes. Apple had the chance to actually gain business traction with OS X, but dropped it all on the floor to focus on a very narrow selection of products.

MS, on the other hand, utterly flopped with Windows 8 and WinRT. The investors haven't seen the promised gold while the stock continues to fall, the second-in-command got unceremoniously fired, and people are starting to worry whether Google+Intel is the future rather than MS+Intel.

Both companies are going to pay in blood through massive, ever-increasing dividends to compensate stockholders:
http://online.wsj.com/article/SB10000872396390443816804578004312698030712.html

I saw an interesting article from 2010 where an ARM engineer said that once processes fell below 45nm, we would see diminishing returns from die shrinks. He seems to have been right.

I think what is happening is fundamental. The computer is now a mature technology, a commodity. No one cares about MS, or Intel, or even Apple; they only care about what the device does and how well it does it. The distinction between all these vendors has become minimal. That spells trouble for the old-guard industry leaders like MS, Intel, and yes, even Apple. I suspect Google is less affected, since it's an advertiser that thrives on all platforms.
 
Yeah, and amazingly enough, $50 to $100 extra is more than most people want to pay for a PC.

http://www.youtube.com/watch?v=0k2znvfyfUs&feature=youtu.be

Watch that and tell me that the 5800K isn't a gaming chip. Now consider that less than 2% of the market will ever play a game as demanding as Far Cry 3.

The difference between the 5800K and the i3s is playability. Forget about doubling fps with a discrete card; we're talking playable fps, and the i3 will never cut it whereas the 5800K does. The majority of the desktop-PC-buying public buys Intel and ends up disappointed at how crap it is at gaming. So they then have to buy a discrete card and get a couple of fps more than if they had bought the 5800K? Had they bought the 5800K in the first place, they wouldn't have been looking for a discrete card, because they'd be satisfied with their PC's performance out of the box. These people are not enthusiasts; they just want games to run playably, and Intel graphics fail to deliver.

Intel is creating a legacy of being crap at gaming for themselves; that's why they are trying so hard to change it by doubling their graphics real estate every year.

Yeah, gaming at 800p has always been a dream of mine. I notice he didn't show any framerates either. Also, it says in the comments that the iGP is overclocked. I am sure the target audience that, according to you, doesn't want to add a discrete card is going to be into overclocking.
 
I don't really see why it is not fair to compare 32nm Vishera with 22nm IVB, though. Those two processors are what is on the market now, so it seems valid to me to compare them. It is AMD's fault they sold off their foundries and are a node or more behind. Actually, the whole point is that they trail badly in process technology; you can't deny that, and whatever the reason is, it's irrelevant. What they produce is the bottom line.

We were talking about Sandy Bridge at 32nm vs. Ivy Bridge at 22nm.
 
I would go even further and say I would like to see an 11" form factor with decent performance at a decent price. Something like the old HP dm1z with the E-450, but at a similar or lower price point with better performance. I was very intrigued by that one, but the performance did not seem quite up to the price, especially when you could get a full-size laptop with a Pentium for the same price.

I was initially very happy with my $202+tax Acer Aspire One 722 C-60 11.6" netbook.

But I'm even happier with my $300+tax Asus X401A SB Pentium B970 14" notebook.
 
Yeah, gaming at 800p has always been a dream of mine. I notice he didn't show any framerates either. Also, it says in the comments that the iGP is overclocked. I am sure the target audience that, according to you, doesn't want to add a discrete card is going to be into overclocking.

A gamer looking for 1080p will go dedicated, but there are millions of people who might occasionally indulge in a game but want a cheap PC mainly for productivity. It's definitely going to reduce the number of people who upgrade to a dedicated GPU for that one game they really like.
 
Yeah, and amazingly enough, $50 to $100 extra is more than most people want to pay for a PC.

Interesting, because a lot of the 15% NVDA gained in the last quarter came from attach rates of cards costing around $50-100.
 
Interesting, because a lot of the 15% NVDA gained in the last quarter came from attach rates of cards costing around $50-100.

That's from the OEM orders where they use dual-core Intel CPUs with discrete GPUs due to Intel's horrible iGPU performance. Also, those $50-100 GPUs are purchased as upgrades over older, obsolete GPUs or iGPUs.

For a new system setup, Trinity is the better choice (over CPU + discrete) for OEMs (fewer parts, less power consumption, less heat, etc.) and for DIY users. You can also install a better GPU later on if you need it.
 
Or they misread the market, or both.

I'd say both. It is pretty clear by now that AMD's vision for the APU (anemic CPU + good GPU) didn't pan out. They get bashed for the poor performance of the CPU and can't charge a premium for their better graphics performance. But this doesn't change the fact that Bulldozer is a very inefficient architecture in performance per area and performance per watt.
 
That's from the OEM orders where they use dual-core Intel CPUs with discrete GPUs due to Intel's horrible iGPU performance. Also, those $50-100 GPUs are purchased as upgrades over older, obsolete GPUs or iGPUs.

Still have no clue, do you? Attach rate, as measured by OEMs and research companies, is the ratio of *new* systems sold with at least one dGPU to the total number of systems sold. Upgrades have *nothing* to do with attach rates.
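To put numbers on it, here is a minimal sketch of the calculation; the shipment figures are made up purely to illustrate the definition:

```python
# Attach rate = new systems shipped with at least one dGPU / total new systems shipped.
# Both figures below are hypothetical, just to show how the ratio works.
systems_shipped = 1_000_000    # total new systems in the quarter (made-up)
systems_with_dgpu = 150_000    # of those, shipped with a discrete GPU (made-up)

attach_rate = systems_with_dgpu / systems_shipped
print(f"dGPU attach rate: {attach_rate:.0%}")  # -> 15%

# Aftermarket upgrade cards sold at retail never enter this ratio,
# which is why upgrades have nothing to do with attach rates.
```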

For a new system setup, Trinity is the better choice (over CPU + discrete) for OEMs (fewer parts, less power consumption, less heat, etc.) and for DIY users. You can also install a better GPU later on if you need it.

Less power consumption? Less heat? Really? We are talking here about one of the most inefficient CPU architectures ever launched. If Larry Ellison were AMD's CEO, he would give Bulldozer remarks similar to those he gave Sun's Rock. And Kepler is no slouch here; it can take whatever AMD throws at it in terms of power consumption and performance.

I very much doubt that AMD can be competitive with their APU solution against Intel and Nvidia, both with smaller-die solutions than AMD, especially when we are dealing with big OEMs like Dell, HP or Asus. AMD's slumping market share seems to corroborate this theory.

IMO the main problem with AMD's APU concept is that regardless of the performance level you want, you are stuck with the same die size. While you may not give a "flying fart" about that, it is exactly this metric that defines AMD's break-even price.

At first I was appalled to see a big die like that selling for peanuts, and Llano being sold at a loss, until TechPowerUp brought up the Mercury Research numbers. When 2-core part volumes are around 1.5 million per quarter and 4-core parts are around 2.5 million, a dedicated die per segment shouldn't be worth the development and validation effort.
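To see why die size sets the break-even price, here is a back-of-the-envelope sketch; the wafer cost, die area and yield below are assumptions for illustration, not AMD's actual figures:

```python
import math

# All inputs are illustrative assumptions, not real foundry pricing.
WAFER_DIAMETER_MM = 300.0
WAFER_COST_USD = 5000.0   # assumed cost of one processed 300mm wafer
DIE_AREA_MM2 = 246.0      # roughly a Trinity-class die
YIELD_FRACTION = 0.80     # assumed fraction of good dies

# Classic dies-per-wafer approximation: wafer area over die area,
# minus an edge-loss term proportional to the wafer circumference.
radius = WAFER_DIAMETER_MM / 2
dies_per_wafer = ((math.pi * radius**2) / DIE_AREA_MM2
                  - (math.pi * WAFER_DIAMETER_MM) / math.sqrt(2 * DIE_AREA_MM2))

good_dies = dies_per_wafer * YIELD_FRACTION
cost_per_good_die = WAFER_COST_USD / good_dies
print(f"~{dies_per_wafer:.0f} dies/wafer, ~{good_dies:.0f} good dies, "
      f"~${cost_per_good_die:.0f} per good die before packaging and test")
```

Every chip cut from the same die costs roughly that much to make, whether it sells as a top A10 or a salvage A4, which is exactly why one big die across the whole range squeezes margins at the low end.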
 
I'd say both. It is pretty clear by now that AMD's vision for the APU (anemic CPU + good GPU) didn't pan out. They get bashed for the poor performance of the CPU and can't charge a premium for their better graphics performance. But this doesn't change the fact that Bulldozer is a very inefficient architecture in performance per area and performance per watt.

It was never meant to be an anemic CPU (not that Piledriver really is).

The problem isn't the architecture per se; nothing is wrong with the module design and nothing is wrong with Bulldozer's pipeline. The issues are components within the pipeline. Each module is 214 million transistors (about the same size as an SB core) and has roughly the same throughput despite all its issues.

Remember, IPC was meant to go up; it went down a fair way, and if anything it costs them even more power with things like L$I aliasing etc. Also, clock it at 3.0-3.5GHz and all of a sudden its power consumption looks a lot better (A10-5700 etc.). Clocks have been pushed to the limit because per-clock performance wasn't adequate.

Now here is the list of issues that, as far as I can see, played a major part in Bulldozer's problems. You will notice that pretty much none of them has anything to do with it being a module, or requires radical changes to the pipeline:

L$I cache aliasing (fixed in SR)

L$D write bandwidth/latency: it is 6:1 read:write, and I could understand 4:1 on both cores given an aggregate 2:1 to the L2. Most people blame the Write Coalescing Cache (improved L1D mentioned by Papermaster for SR)

Instruction decode: it's "vertical" multithreading, which hurts both threads, and it can't decode enough instructions to feed two cores (fixed in SR: dual decoders)

Branch mispredict latency: it's been noted in articles, even on Anand, that it hurts both power consumption and potentially throughput (loop buffer added for SR)

You gain on average 15% performance per core on BD/PD in threaded workloads by disabling one core in the module, and that only alleviates one of those issues. SR is supposed to fix all of them, plus bring additional benefits (load/store prefetchers/predictors etc.). Bulldozer's issues are largely in instruction throughput, which is sad, because the rest of the module appears to behave as expected.

There have been a bunch of things, like the 22-cycle L2 latency, that people have latched onto and blamed, but I have never seen anyone show that it is a performance bottleneck.
 
Remember, IPC was meant to go up; it went down a fair way, and if anything it costs them even more power with things like L$I aliasing etc. Also, clock it at 3.0-3.5GHz and all of a sudden its power consumption looks a lot better (A10-5700 etc.). Clocks have been pushed to the limit because per-clock performance wasn't adequate.

This notion that IPC was going to go up was John Fruehe's creation, quickly embraced by AMD drones in the forums; it was never in any official AMD statement. Quite the opposite; have a look at AMD's official slides:

http://www.anandtech.com/Gallery/Album/754#6

"Throughput advantages for multi-threaded workloads without significant loss on serial single-threaded workload components"

It was AMD stating that IPC would go down, albeit not by a significant amount.

The problem isn't the architecture per se; nothing is wrong with the module design and nothing is wrong with Bulldozer's pipeline. The issues are components within the pipeline. Each module is 214 million transistors (about the same size as an SB core) and has roughly the same throughput despite all its issues.

Throughput is not the same at the same frequency; only when you scale up clock (and power) is it the same. And you are still forgetting the uncore. Trinity's CPU portion is roughly the same size as 2 SNB cores + uncore + HD 2500, and SNB has an L3 cache.

On a more conceptual level, I don't see how sharing a few resources (CMT) can be more efficient than sharing all resources (SMT), except in a few special cases or in code highly optimized for the concept.
 
Still have no clue, do you?
I suggest you stop making it personal.


Less power consumption? Less heat? Really? We are talking here about one of the most inefficient CPU architectures ever launched. If Larry Ellison were AMD's CEO, he would give Bulldozer remarks similar to those he gave Sun's Rock. And Kepler is no slouch here; it can take whatever AMD throws at it in terms of power consumption and performance.

AMD Trinity 5800K = Quad Core + iGPU = 100W TDP, single heatsink
Price: $129.99

Intel Pentium G870 = Dual Core, 65W TDP + CPU heatsink
Price: $89.99
NVIDIA GT 440 = 65W TDP + GPU heatsink
Price: $50-$60

Total price = $139.99 to $149.99

So with Trinity we have lower overall TDP, fewer heatsinks, less hardware for RMAs and, above all, better performance (both CPU and iGPU) for a lower cost. OEM heaven.

http://www.anandtech.com/show/6332/amd-trinity-a10-5800k-a8-5600k-review-part-1

http://www.anandtech.com/show/6347/amd-a10-5800k-a8-5600k-review-trinity-on-the-desktop-part-2

* Prices are from Newegg.
 
On a more conceptual level, I don't see how sharing a few resources (CMT) can be more efficient than sharing all resources (SMT), except in a few special cases or in code highly optimized for the concept.

So when you thread and synchronize your code, you generally assume that all threads are created equal, which is not the case with SMT: you get one "real" thread, call it 100%, and a "hyper" thread that is ~30%. Putting the wrong workload on the 30%'er will be catastrophic for performance, as the slow thread will idle the fast one when synchronizing.
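A toy model of that effect, assuming one full-speed thread, an SMT sibling at ~30% throughput, and a barrier at the end of the work (all numbers are illustrative):

```python
# Two threads split a job and meet at a barrier; the barrier releases
# only when the slower thread finishes.
FAST, SLOW = 1.00, 0.30  # relative throughput: "real" thread vs. SMT sibling

def time_to_barrier(work_fast: float, work_slow: float) -> float:
    """Wall-clock time = the slower thread's completion time."""
    return max(work_fast / FAST, work_slow / SLOW)

total_work = 100.0

# Naive scheduler: assumes equal threads and splits the work 50/50.
naive = time_to_barrier(total_work / 2, total_work / 2)

# Throughput-aware scheduler: splits work in proportion to thread speed.
slow_share = total_work * SLOW / (FAST + SLOW)
balanced = time_to_barrier(total_work - slow_share, slow_share)

print(f"50/50 split: {naive:.1f} time units")   # 166.7: the fast thread sits idle
print(f"proportional split: {balanced:.1f}")    # 76.9: both finish together
```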
 
I suggest you stop making it personal.




AMD Trinity 5800K = Quad Core + iGPU = 100W TDP, single heatsink
Price: $129.99

Intel Pentium G870 = Dual Core, 65W TDP + CPU heatsink
Price: $89.99
NVIDIA GT 440 = 65W TDP + GPU heatsink
Price: $50-$60

Total price = $139.99 to $149.99

So with Trinity we have lower overall TDP, fewer heatsinks, less hardware for RMAs and, above all, better performance (both CPU and iGPU) for a lower cost. OEM heaven.

http://www.anandtech.com/show/6332/amd-trinity-a10-5800k-a8-5600k-review-part-1

http://www.anandtech.com/show/6347/amd-a10-5800k-a8-5600k-review-trinity-on-the-desktop-part-2

* Prices are from Newegg.

Not to mention, leaving out the discrete GPU means they can go for a smaller form factor: something this size, with a mini-ITX board and a half-height Wi-Fi card in it. A much smaller case means a lower BOM, and a machine that's easier to fit into a buyer's house.
 
I suggest you stop making it personal.

This isn't personal. If you don't want to be called clueless, don't make clueless comments.

Look at what you did before. In order to refute the power consumption gains from IVB, you brought... entire-system power consumption, not processor power consumption. And now this. In order to refute my claim about OEM prices, you bring... retail prices.
 
I suggest you stop making it personal.




AMD Trinity 5800K = Quad Core + iGPU = 100W TDP, single heatsink
Price: $129.99

Intel Pentium G870 = Dual Core, 65W TDP + CPU heatsink
Price: $89.99
NVIDIA GT 440 = 65W TDP + GPU heatsink
Price: $50-$60

Total price = $139.99 to $149.99

So with Trinity we have lower overall TDP, fewer heatsinks, less hardware for RMAs and, above all, better performance (both CPU and iGPU) for a lower cost. OEM heaven.

http://www.anandtech.com/show/6332/amd-trinity-a10-5800k-a8-5600k-review-part-1

http://www.anandtech.com/show/6347/amd-a10-5800k-a8-5600k-review-trinity-on-the-desktop-part-2

* Prices are from Newegg.

"Better performance" because you picked a POS graphics card. With a HD7750 and the pentium you would get better gaming performance than trinity.
 
This notion that IPC was going to go up was John Fruehe's creation, quickly embraced by AMD drones in the forums; it was never in any official AMD statement. Quite the opposite; have a look at AMD's official slides:

http://www.anandtech.com/Gallery/Album/754#6

"Throughput advantages for multi-threaded workloads without significant loss on serial single-threaded workload components"

It was AMD stating that IPC would go down, albeit not by a significant amount.
That is completely your misinterpretation of what is being said there, nothing else. You will also notice I said almost exactly the same thing in the post you quoted. AMD stated that Bulldozer's CMT brought a performance deficit per thread compared to two separate cores. That has nothing to do with IPC versus Hound.

You are also now completely ignoring the fact that things are broken/suboptimal in Bulldozer, and those things have a very real impact on performance.

Throughput is not the same at the same frequency. Only when you scale up clock (and power) it is the same.
I said roughly the same: at the same clock, on threaded workloads, the difference isn't that big.

And you are still forgetting the uncore. Trinity CPU portion is roughly the same of 2 SNB cores + uncore + HD2500, and SNB has L3 cache.
A few things here:
1. Bulldozer is a core microarchitecture and has nothing to do with the uncore or GPU, which are rather irrelevant when talking about Bulldozer's performance.
2. A BD/PD module is approximately the same size as an SB core.
3. BD/PD L2 and SB L3 serve roughly the same purpose, are the same size (per core/module), and have latencies in the same ballpark.
4. You're making the mistake of assuming performance per mm² actually matters in a direct comparison/absolute sense; the actual equation is far more complex and can't really be judged without knowing all the costs.

On a more conceptual level, I don't see how sharing a few resources (CMT) can be more efficient than sharing all resources (SMT), except in a few special cases or in code highly optimized for the concept.
CMT is far simpler; you don't have to worry about handling/switching registers etc. Look how many tries Intel has had at SMT; it took them a few attempts to get it right.

So, compared to two separate cores, all the "stuff" that is shared in CMT is largely prediction and fetch. These blocks use lots of power, and their job is to run as far ahead as possible to keep the rest of the core busy. Sharing them allows you to build more expensive, higher-accuracy predictors than having two separate ones would.

Then, when you get to the execution/address-generation/scheduling resources, it is far easier to design and implement a 2-ALU, 2-AGU core than a 3/4-ALU, 3/4-AGU core and all its connecting logic.

On the FPU side, one just has to look at the internal data-path sizes of both SB/IB and BD/PD. You have to remember that with vectors you have to gather all the data, which takes time; sharing the FPU allows higher utilization.
 
I saw an interesting article from 2010 where an ARM engineer said that once processes fell below 45nm, we would see diminishing returns from die shrinks. He seems to have been right.

I think what is happening is fundamental. The computer is now a mature technology, a commodity. No one cares about MS, or Intel, or even Apple; they only care about what the device does and how well it does it. The distinction between all these vendors has become minimal. That spells trouble for the old-guard industry leaders like MS, Intel, and yes, even Apple. I suspect Google is less affected, since it's an advertiser that thrives on all platforms.

I would say he was wrong. Just look at 32nm -> 22nm.
 
I would say he was wrong. Just look at 32nm -> 22nm.


From a power perspective he is completely correct. Go look at 180nm > 150nm > 110nm > 90nm > 65nm; those shrinks brought far bigger power reductions. Look at all the additional advances being made, not just transistor size, yet power reduction isn't scaling like it used to. It's still Intel's competitive edge, but it shouldn't be overstated. Also, the "cost" of that power reduction hasn't scaled at all from about 45nm onwards.
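A rough way to see the argument: dynamic power per transistor goes as C·V²·f, and the old Dennard-style shrinks cut supply voltage along with the dimensions; once voltage got stuck near ~1V, most of the per-node power win evaporated. A toy calculation with idealized scaling factors (assumptions, not measured data):

```python
# Dynamic power per transistor ~ C * V^2 * f.
K = 0.7  # idealized linear-dimension scaling per full node shrink

def power_scale_per_shrink(voltage_scales: bool) -> float:
    c_scale = K                              # capacitance shrinks with dimensions
    v_scale = K if voltage_scales else 1.0   # post-Dennard: Vdd barely moves
    return c_scale * v_scale**2              # at constant frequency

print(f"Dennard-era shrink: power x{power_scale_per_shrink(True):.2f}")    # ~0.34
print(f"Post-Dennard shrink: power x{power_scale_per_shrink(False):.2f}")  # ~0.70
```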
 
From a power perspective he is completely correct. Go look at 180nm > 150nm > 110nm > 90nm > 65nm; those shrinks brought far bigger power reductions. Look at all the additional advances being made, not just transistor size, yet power reduction isn't scaling like it used to. It's still Intel's competitive edge, but it shouldn't be overstated. Also, the "cost" of that power reduction hasn't scaled at all from about 45nm onwards.

Try comparing Lynnfield, SB and IB, then see if you still think the same. And remember the last two also got an IGP. 😛
 