Will Bobcat be the home run AMD is looking for?


Will Bobcat be the home run AMD is looking for?

  • Yes

  • No



Dark_Archonis

Member
Sep 19, 2010
88
1
0
Intel will be improving Atom; they won't be standing still. Also ULV Sandy Bridge and Ivy Bridge will give some serious competition for Ontario.

So to answer the topic, no I don't think Bobcat will be a "home run" for AMD. I think it will be a good product for them, but nothing more.

I'm fairly confident that everyone will want a Bobcat in their laptop for the next 2 years.

I'm fairly confident you're wrong. I'm getting a new laptop soon, and I strictly want an Intel Arrandale or Sandy Bridge combined with a discrete GPU. I have no interest in a Bobcat laptop.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
http://en.wikipedia.org/wiki/File:AmdahlsLaw.svg

I was referring to this graph.

If speedup reaches diminishing returns wouldn't a point be reached where electricity was better spent making less cores go faster?

Absolutely, not to mention that, depending on the communication fabric, you can reach a point where adding more cores and spawning more threads actually slows the calculation down!

[Attached image: impact of broadcast protocol on scaling]
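The diminishing-returns curve, and the slowdown from communication overhead, can be sketched with Amdahl's law plus a hypothetical linear communication-cost term (the 0.002-per-core overhead below is an illustrative assumption, not a measured value):

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p/n) for parallel fraction p.
# A per-core communication cost c*n shows how, past some core count,
# adding cores makes the run *slower*, as described above.

def speedup(n_cores, parallel_fraction, comm_cost=0.0):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores + comm_cost * n_cores)

for n in (1, 2, 4, 8, 16, 64):
    print(n, round(speedup(n, 0.95), 2), round(speedup(n, 0.95, 0.002), 2))
```

With a 95% parallel fraction and the assumed overhead, 64 cores come out slower than 16, which is exactly the effect in the graph.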
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
Intel will be improving Atom; they won't be standing still. Also ULV Sandy Bridge and Ivy Bridge will give some serious competition for Ontario.

Ivy Bridge maybe.
Sandy Bridge will be pretty large. Intel will price Sandy Bridge to compete, but indications are Sandy will be around 200mm2 while Zacate is looking to be around 75mm2. Even if AMD's costs are higher, they will be able to be very competitive.

The question remains how much GPU performance Intel will have to cut to reach an 18W TDP on such a large chip. Current platforms are also around 200mm2 (separate dies, with an 81mm2 CPU and a 114mm2 GPU / memory controller), and they are clocked down to around 1.3GHz to be competitive. We've all seen the GPU comparisons at that level, and Intel does not come out shining.

Now Sandy Bridge is supposed to be faster, but the desktop Sandy Bridge benches were showing competitive to what Zacate demonstrated... what will happen when they cut the clock speed for the CULV version? Currently Intel is pricing notebook i5s at $200ish. You think Zacate is going to be a $200ish processor? I'd be surprised. They might try it there, but I think it's more likely to be in the $100ish range. This will force Intel's hand to basically cut out entire processor lines (Celeron) to be competitive, or simply choose to not be competitive.

I'm not so confident SB will even be competitive here at any price. Early indications to me are that this will be a Netburst P4 vs. K8 scenario again. Intel will be close enough to compete and has the profit margin to sustain being cost competitive, but the products will likely be riding the Intel brand to sell parts, and I'm doubtful they'll be truly competitive. It shouldn't really be surprising, as Ontario is targeted specifically at the market it's going for, while Intel is overbuilding and cutting down capability. It's similar to the way nVidia created the GF104 to compete at the price point between the HD5850 and the HD5770. It competed well there because it was purpose-built for that market segment, where the competitor offering (HD5830) was a scaled-down high-end chip that was lackluster in many ways.

That being said I'll be buying whichever makes sense (read: is cheaper) and is available for purchase in some kind of desktop form factor for my file server / media server / router machine. I'm due for hard drive replacement for this machine next year, as they'll be approaching 6 years continuous on time (my limit), and if there is some low power platform availability, I'll be moving to it from my current ancient Socket 939 / single core.

And I'll be recommending to my friends who buy low-end notebooks whichever has the better GPU option for their notebooks, because the only thing any of them do that is at all taxing on any piece of hardware is light gaming, and in this scenario, who really cares what the CPU is doing when we all know that IGP performance is a huge bottleneck in that part of the market.

In summary, I think Zacate will perform well for its target segment, but I think AMD will face what it has always faced... people buying Intel because that's what they buy. We saw with the A64 that even when AMD has superior products, they have a really difficult time gaining marketshare. The A64 wasn't able to do so until the clearly superior Core series processors were almost out. A lot of AMD's success depends on the willingness of people to switch brands for performance reasons, and I think the percentage of people who will do so is fairly small. They'll have to fight for the marketshare they gain.
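As a rough back-of-the-envelope check on the die-size argument, here is a simple candidate-dies-per-wafer estimate (idealized square dies and the standard edge-loss approximation; defect yield and scribe lines are ignored, so these are illustrative numbers only):

```python
import math

# Rough dies-per-wafer estimate for a 300mm wafer. Die areas are the
# figures quoted above: ~200 mm^2 for Sandy Bridge, ~75 mm^2 for Zacate.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    radius = wafer_diameter_mm / 2
    # classic approximation: wafer area / die area, minus an edge-loss term
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(dies_per_wafer(200))  # Sandy Bridge-class die: ~306 candidates
print(dies_per_wafer(75))   # Zacate-class die: ~865 candidates
```

Even before yield differences, the smaller die gets AMD nearly 3x the candidates per wafer, which is why a higher per-wafer cost can still leave room to be price competitive.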
 
Last edited:

nyker96

Diamond Member
Apr 19, 2005
5,630
2
81
http://www.chip-architect.com/news/AMD_Ontario_Bobcat_vs_Intel_Pineview_Atom.jpg

So we have a 15.2 mm^2 40nm CPU that is 90% as fast as a same-clocked C2D at 45nm and 107 mm^2? And that is not even including the 80 SP GPU... *jaws drop*. This is far and away the most impressive product from AMD since the K7.

AMD can slap 6 Bobcat cores into one die and still come out smaller than a Wolfdale.

This type of spec actually worries me; it's just too good to be true. How can they cram so much functionality into such a small space? It's probably not true.
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
Assuming we can read anything from the Bobcat image posted earlier in this thread, and assuming it operates at a <20W TDP, then yes, I think AMD has a winner here. So long as they don't saddle this thing with a chipset that consumes more power than the CPU, this looks like the perfect competitor for any Atom situation.

Intel went with extreme power savings when they built the Atom and sacrificed a lot of performance along the way. It looks like AMD is taking a more balanced approach, getting more speed at a slightly higher power consumption.

Only time will tell, however, if this thing is a winner or a loser. I could see this being quite the nifty laptop and HTPC CPU.
 

Dekasa

Senior member
Mar 25, 2010
226
0
0
This type of spec actually worries me; it's just too good to be true. How can they cram so much functionality into such a small space? It's probably not true.

Likewise, the physical cores themselves look smaller than Atom's. But it's a crazy awesome design if it is. (Also, I heard it was roughly clock-for-clock even with Core 2, but mostly clocked at ~1GHz (1.2GHz single-core).)
 

jihe

Senior member
Nov 6, 2009
747
97
91
I sure hope so. I quite like netbooks except for the terrible atom performance.
 

Ares1214

Senior member
Sep 12, 2010
268
0
0
I don't know how we are supposed to guess with just about no hard facts out. My first guess would be not necessarily, as it's not as new an architecture as Bulldozer, and that's what AMD needs; but if the leaks and speculation are right, it could be quite successful. AMD should expand into super-mobile markets as well. This might help them gain some market share, but I'm not so sure it will be very profitable for them. I don't see an Intel screw-up until they really do a new architecture, maybe as far out as after Ivy Bridge. It will be hard for AMD to make up the half generation they are behind without a little help from an Intel Netburst. They need Llano to crush Sandy, and Bulldozer to compete with Ivy, to really catch up.
 

maddie

Diamond Member
Jul 18, 2010
5,172
5,565
136
I don't know how we are supposed to guess with just about no hard facts out. My first guess would be not necessarily, as it's not as new an architecture as Bulldozer, and that's what AMD needs; but if the leaks and speculation are right, it could be quite successful. AMD should expand into super-mobile markets as well. This might help them gain some market share, but I'm not so sure it will be very profitable for them. I don't see an Intel screw-up until they really do a new architecture, maybe as far out as after Ivy Bridge. It will be hard for AMD to make up the half generation they are behind without a little help from an Intel Netburst. They need Llano to crush Sandy, and Bulldozer to compete with Ivy, to really catch up.


I thought Bobcat was as advanced a design as Bulldozer, just for a different market segment. My thinking actually puts it as more advanced than Bulldozer.

What I don't understand is why no one is commenting on the application to servers. If this is as powerful and power-efficient as claimed, its use in some server farms (ones not reliant on much FPU code) will be unbeatable. Google and Facebook come to mind.
 

Ares1214

Senior member
Sep 12, 2010
268
0
0
I thought Bobcat was as advanced a design as Bulldozer, just for a different market segment. My thinking actually puts it as more advanced than Bulldozer.

What I don't understand is why no one is commenting on the application to servers. If this is as powerful and power-efficient as claimed, its use in some server farms (ones not reliant on much FPU code) will be unbeatable. Google and Facebook come to mind.

To my knowledge Bobcat is Stars + a 40nm shrink + APU, plus maybe a little tweaking.
 

maddie

Diamond Member
Jul 18, 2010
5,172
5,565
136
To my knowledge Bobcat is Stars + a 40nm shrink + APU, plus maybe a little tweaking.

That's Llano, not Bobcat.

Bobcat is the new CPU core.

Zacate is the Bobcat core with GPU shaders added (80?).

Llano is the Stars CPU + GPU + 40nm shrink.
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Ivy Bridge maybe.
Sandy bridge will be pretty large. Intel will price Sandy Bridge to compete, but indications are Sandy will be 200mm2 and Zacate is looking to be around 75mm2. Even if AMD costs are higher, they will be able to be very competitive.

Ehh...

Similar talk from an automobile salesman (say, Honda): our cars get 50mpg fuel efficiency, have a towing capacity of 10,000 lbs, and do 0-60 in 5 seconds.
 

velis

Senior member
Jul 28, 2005
600
14
81
If I see where this is going correctly:

1. A decent CPU together with a decent GPU + low power usage
This is exactly what I'm waiting for (for my HTPC)
No impact on servers, at least from the integrated-GPU perspective.
Also, this is exactly what pretty much every Joe Sixpack wants (a cheaper computer). This should sell well with the masses.

2. Strong int, shared FPU performance
Quite frankly, I think this is an insanely good idea. No, that's an understatement. It's a genius idea. FPU instructions don't exactly stack one after another; there's always int code in between, no matter how "HPC" the code is. My personal expectation from joined cores is only a minor FPU performance drop. A few percent, nothing more. In insanely HPC applications.
Looking from a server perspective, this is a super winner.

3. Increasing cores, not per core performance
This is a bad (in)decision in the short term. The problem is that they probably didn't have much of an option here. They can't possibly match Intel's resources unless a miracle (major breakthrough) happens again, like with Opteron. Obviously this hasn't happened.
On the upside: a vast majority of today's code can be converted to multi-threaded, one way or the other. There are many problems that can't be expressed in multi-threaded models, but most of them are theoretical when today's software is taken into account. Tomorrow this may change, but I don't see it happening so soon. Today's solutions convert well. Really well. Flame on :D

4. We're talking about undisclosed technology anyway
We have no idea what either single core or multi core performance will be. Same for GPU subsection. Both for BD and for SB. The entire discussion is moot.
We'll see in half a year, won't we?

Have fun,
Jure
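Point 3's claim that most of today's serial code converts to multi-threaded can be illustrated with a toy conversion (a hedged sketch of the pattern, not a benchmark; pure-Python arithmetic won't actually speed up under the GIL):

```python
from concurrent.futures import ThreadPoolExecutor

# A serial reduction, and the same work split into per-worker chunks.
def serial_sum_squares(data):
    return sum(x * x for x in data)

def parallel_sum_squares(data, workers=4):
    chunks = [data[i::workers] for i in range(workers)]  # strided split
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(serial_sum_squares, chunks))

data = list(range(10_000))
print(serial_sum_squares(data) == parallel_sum_squares(data))
```

The conversion works because the partial sums are independent; problems that resist this treatment are the ones with tight sequential dependencies.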
 

Ares1214

Senior member
Sep 12, 2010
268
0
0
If I see where this is going correctly:

1. A decent CPU together with a decent GPU + low power usage
This is exactly what I'm waiting for (for my HTPC)
No impact on servers, at least from the integrated-GPU perspective.
Also, this is exactly what pretty much every Joe Sixpack wants (a cheaper computer). This should sell well with the masses.

2. Strong int, shared FPU performance
Quite frankly, I think this is an insanely good idea. No, that's an understatement. It's a genius idea. FPU instructions don't exactly stack one after another; there's always int code in between, no matter how "HPC" the code is. My personal expectation from joined cores is only a minor FPU performance drop. A few percent, nothing more. In insanely HPC applications.
Looking from a server perspective, this is a super winner.

3. Increasing cores, not per core performance
This is a bad (in)decision in the short term. The problem is that they probably didn't have much of an option here. They can't possibly match Intel's resources unless a miracle (major breakthrough) happens again, like with Opteron. Obviously this hasn't happened.
On the upside: a vast majority of today's code can be converted to multi-threaded, one way or the other. There are many problems that can't be expressed in multi-threaded models, but most of them are theoretical when today's software is taken into account. Tomorrow this may change, but I don't see it happening so soon. Today's solutions convert well. Really well. Flame on :D

4. We're talking about undisclosed technology anyway
We have no idea what either single core or multi core performance will be. Same for GPU subsection. Both for BD and for SB. The entire discussion is moot.
We'll see in half a year, won't we?

Have fun,
Jure

I agree with 1, 2, and 4. However, while 3 does have some merit, I disagree in a sense. If AMD released a quad core, i.e. a Phenom II, that cost 20% less and was 20% slower than an Intel quad, i.e. an i7, hooray. Not bad. If AMD can release an 8-core that's still 10-20% cheaper, but now 20% faster, then that's a winner. Everything is relative, and I don't think we can look at things on a per-clock or per-core basis much anymore. If BD comes in at 3.8GHz and 16 cores, yet costs the same, uses the same amount of energy, produces the same amount of heat, and overclocks the same amount, who's to stop them? That's how I feel about this new AMD design. It might be terribly inefficient on a per-core or per-clock basis, yet might be extremely efficient just because it's different. Two ways of looking at things. We don't fault ATI for using 1600 shaders where Nvidia uses 448 anymore, do we? Of course not. It's just a different way of getting the same task done.
 

maddie

Diamond Member
Jul 18, 2010
5,172
5,565
136
I thought this was about Bobcat.

People seem to be confusing the Bobcat-cored Zacate, the Stars-cored Llano, and Bulldozer.

Zacate is supposed to be available in consumer products about 2 months from now, in January 2011.

Bulldozer and Llano several months later.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
The cost issues are purely economics. If any company needs additional competitiveness, they can adjust the prices accordingly.

If AMD released a quad core, i.e. a Phenom II, that cost 20% less and was 20% slower than an Intel quad, i.e. an i7, hooray.

But if the costs for an AMD Phenom II are really 1x (rather than the 0.8x they are selling at) compared to Core i7, then it's not completely favorable for the company either.
 

Ares1214

Senior member
Sep 12, 2010
268
0
0
The cost issues are purely economics. If any company needs additional competitiveness, they can adjust the prices accordingly.



But if the costs for an AMD Phenom II are really 1x (rather than the 0.8x they are selling at) compared to Core i7, then it's not completely favorable for the company either.

Well, cost for them is what I'm getting at. Look at ATI vs Nvidia, for example. The 5870 has 1600 shaders; the GTX 470 has 448, I believe. Does that mean ATI GPUs are 4x less efficient per "core"? Of course not, it's just a different way of doing things. If ATI can get those 1600 shaders, weak or strong as they may be, to cost the same, perform the same, use as much energy, and produce as much heat as the 448 shaders, I certainly wouldn't care.
 

velis

Senior member
Jul 28, 2005
600
14
81
I thought this was about Bobcat.

People seem to be confusing the Bobcat-cored Zacate, the Stars-cored Llano, and Bulldozer.

Zacate is supposed to be available in consumer products about 2 months from now, in January 2011.

Bulldozer and Llano several months later.

I guess I was the target here :)
Yes, sorry, I went a bit off-topic. In #1 I wasn't talking about BD or Zacate, but Fusion in general, whichever physical form it may take.
I am not confusing Bulldozer with Llano or whatever, but I do admit I'm waiting for Llano, not something with a BD core. :) That's assuming Llano will run MediaPortal fine, of course, with full HD vector deinterlacing. If not, I'm waiting for a BD Fusion part...
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Intel will be improving Atom; they won't be standing still.

In Spring 2011, I believe Intel will launch the first 32nm Atom (Cedar Trail). With 2.13GHz 45nm single-core Atoms listed in Intel's ARK, I just wonder if we could see some good clocks for the 32nm dual core I mentioned.

Furthermore, what kind of graphics core will Cedar Trail use? Genuine Intel HD Graphics, or PowerVR SGX running third-party drivers?

With regard to future Atom upgrades, I am pretty sure Anand mentioned a redesigned core featuring out-of-order execution, scheduled sometime around 2012/2013, in one of AnandTech's past Atom articles. (If someone could double-check and confirm, that would be great.)

Also ULV Sandy Bridge and Ivy Bridge will give some serious competition for Ontario.

Those should cost a good deal more money. Furthermore, Ivy Bridge won't feature dual core. According to Wikipedia, "quad core" is the default for those chips.
 
Last edited:

Dark_Archonis

Member
Sep 19, 2010
88
1
0
Ivy Bridge maybe.
Sandy Bridge will be pretty large. Intel will price Sandy Bridge to compete, but indications are Sandy will be around 200mm2 while Zacate is looking to be around 75mm2. Even if AMD's costs are higher, they will be able to be very competitive.

The question remains how much GPU performance Intel will have to cut to reach an 18W TDP on such a large chip. Current platforms are also around 200mm2 (separate dies, with an 81mm2 CPU and a 114mm2 GPU / memory controller), and they are clocked down to around 1.3GHz to be competitive. We've all seen the GPU comparisons at that level, and Intel does not come out shining.

Now Sandy Bridge is supposed to be faster, but the desktop Sandy Bridge benches were showing competitive to what Zacate demonstrated... what will happen when they cut the clock speed for the CULV version? Currently Intel is pricing notebook i5s at $200ish. You think Zacate is going to be a $200ish processor? I'd be surprised. They might try it there, but I think it's more likely to be in the $100ish range. This will force Intel's hand to basically cut out entire processor lines (Celeron) to be competitive, or simply choose to not be competitive.

I'm not so confident SB will even be competitive here at any price. Early indications to me are that this will be a Netburst P4 vs. K8 scenario again. Intel will be close enough to compete and has the profit margin to sustain being cost competitive, but the products will likely be riding the Intel brand to sell parts, and I'm doubtful they'll be truly competitive. It shouldn't really be surprising, as Ontario is targeted specifically at the market it's going for, while Intel is overbuilding and cutting down capability. It's similar to the way nVidia created the GF104 to compete at the price point between the HD5850 and the HD5770. It competed well there because it was purpose-built for that market segment, where the competitor offering (HD5830) was a scaled-down high-end chip that was lackluster in many ways.

That being said I'll be buying whichever makes sense (read: is cheaper) and is available for purchase in some kind of desktop form factor for my file server / media server / router machine. I'm due for hard drive replacement for this machine next year, as they'll be approaching 6 years continuous on time (my limit), and if there is some low power platform availability, I'll be moving to it from my current ancient Socket 939 / single core.

And I'll be recommending to my friends who buy low-end notebooks whichever has the better GPU option for their notebooks, because the only thing any of them do that is at all taxing on any piece of hardware is light gaming, and in this scenario, who really cares what the CPU is doing when we all know that IGP performance is a huge bottleneck in that part of the market.

In summary, I think Zacate will perform well for its target segment, but I think AMD will face what it has always faced... people buying Intel because that's what they buy. We saw with the A64 that even when AMD has superior products, they have a really difficult time gaining marketshare. The A64 wasn't able to do so until the clearly superior Core series processors were almost out. A lot of AMD's success depends on the willingness of people to switch brands for performance reasons, and I think the percentage of people who will do so is fairly small. They'll have to fight for the marketshare they gain.

Wow, a lot of assumptions there in your post. I will wait and see what the official die size figures are.

Also, if you really believe that simply a smaller die size for AMD will mean more profit, you are mistaken. There are many more factors than die size that determine profit.

You seriously think this is going to be a P4 vs K8 situation? Based on what exactly? SB will wipe the floor with Zacate in terms of CPU performance, this is clear. The only uncertainty will be SB's GPU performance vs Zacate's GPU performance.

Both the CPU and GPU on SB will be able to independently turbo, making more efficient use of the TDP.

You contradict yourself here. You stated that you will recommend to your friends who buy low end notebooks the option with the best GPU. If they buy low-end notebooks and only do light tasks, then there is *no point* in even trying to go for the best GPU option. The IGP in this situation is a bottleneck only if you do more than light gaming.

Even if Zacate is more competitive on the GPU side than SB and in terms of TDP, Ivy Bridge will erase whatever advantage would exist.

In Spring 2011, I believe Intel will launch the first 32nm Atom (Cedar Trail). With 2.13GHz 45nm single-core Atoms listed in Intel's ARK, I just wonder if we could see some good clocks for the 32nm dual core I mentioned.

Furthermore, what kind of graphics core will Cedar Trail use? Genuine Intel HD Graphics, or PowerVR SGX running third-party drivers?

Correct, Atoms next year are supposed to see increased clock speeds on 32nm. There is Cedar Trail, and also later there is Oak Trail coming as well. Oak Trail will be a system-on-a-chip design.

Intel's goal with Atom is to slowly increase performance while dramatically decreasing size and TDP of Atom. Future Atom designs will continue to see lower and lower TDPs. Intel is aiming to directly compete with ARM using Atom, not even AMD. Intel is aiming to get Atom small and cool enough so that it can be used in smartphones. No AMD design, including Bobcat, targets such a market. Meanwhile, netbooks and low-end laptops will very likely be covered by low-end SB and Ivy Bridge models.

I heard Oak Trail would have a separate GPU design, though I'm not sure from whom.

With regard to future Atom upgrades, I am pretty sure Anand mentioned a redesigned core featuring out-of-order execution, scheduled sometime around 2012/2013, in one of AnandTech's past Atom articles. (If someone could double-check and confirm, that would be great.)

Yes, the rumor is that in the future Atom will get redesigned to feature out of order execution. That would certainly improve performance, and the rumor is that Intel was going to do it without raising Atom's TDP.

Those should cost a good deal more money. Furthermore, Ivy Bridge won't feature dual core. According to Wikipedia, "quad core" is the default for those chips.

They are a good deal more money simply because Intel charges a nice premium on them. If Intel really wants to, they can slash pricing on some of the ULV CPUs and really squeeze AMD hard.

If Ivy Bridge will be standard quad-core, even better. Imagine a quad core Ivy with a TDP in the 18-25W range. That would absolutely destroy any Bobcat in terms of performance. The beauty with Ivy is that each core will be able to independently turbo, and the GPU will also be able to independently turbo. For low TDP situations, 3 cores could turn off and the remaining one core and GPU would produce only a low amount of heat, especially considering Ivy will be on 22nm.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Ehh...

Similar talk from an automobile salesman (say, Honda): our cars get 50mpg fuel efficiency, have a towing capacity of 10,000 lbs, and do 0-60 in 5 seconds.

:hmm: you mean the car can't do all three at the same time? :biggrin:
 

Eug

Lifer
Mar 11, 2000
24,165
1,809
126
So, is AMD still on target for a Zacate / Ontario January release? Or is that Ontario only?

I'm personally interested in an AMD SFF / nettop machine relatively soon and I'm guessing that's going to be Ontario, but I could see Zacate getting put in there too.

The way I see it, if it gets even halfway in-between Atom dual-core D525 and Core 2 Duo class speeds, with good H.264/VC-1 acceleration AND GOOD STABLE DRIVERS, then AMD has a winner.

Atom 330 with ION is already capable of Blu-ray playback but has trouble with HD Flash content and some heavier multitasking. I'm guessing that if Ontario is fast enough to handle full 1080p Flash then it's reached that entry-level threshold of "decent-enough-for-an-everyday-primary-machine". Atom hasn't quite reached that threshold yet IMO, even with ION.

IOW, I wouldn't necessarily recommend Atom/ION for a person's primary machine, but it's almost there. If Ontario achieves what I hope it will, I would be willing to recommend it as a low end primary machine if space and/or low power is an issue.

In my case, I will be keeping my Atom/ION machine for now as my VPN machine. If Ontario/Zacate does prove to be even close to what people are saying it will be, then I will buy one as my VPN machine and put the Atom/ION machine in the guest room. Why do I need improved performance for a VPN machine? Cuz I play H.264 HD video on a second monitor while I'm tunneling in sometimes. Atom / ION is OK for that, but it'd be nice to have more reserve.

I personally have no doubt Ontario/Zacate will be a performance improvement over Atom solutions. I am however skeptical it will really compete with Core 2 Duo in terms of raw performance. If so, great, but if not, that's fine too as long as it's a big improvement over Atom. What I'm really worried about though is the drivers, esp. for video acceleration.

P.S. I'd like to see an Ontario 11.6" netbook too for $399.
 
Last edited:

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
:hmm: you mean the car can't do all three at the same time? :biggrin:

I can think of a design to meet all three of those conditions. Although those arbitrary tasks probably aren't the greatest design drivers.
 

Eug

Lifer
Mar 11, 2000
24,165
1,809
126
http://www.chip-architect.com/news/AMD_Ontario_Bobcat_vs_Intel_Pineview_Atom.jpg

So we have a 15.2 mm^2 40nm CPU that is 90% as fast as a same-clocked C2D at 45nm and 107 mm^2? And that is not even including the 80 SP GPU... *jaws drop*. This is far and away the most impressive product from AMD since the K7.

AMD can slap 6 Bobcat cores into one die and still come out smaller than a Wolfdale.
Is that 74 mm2 spec accurate, or is that just an estimate?

Either way, much-better-than-Atom CPU performance, with an integrated GPU that doesn't suck would win them a LOT of sales if the price and power characteristics are right.

As a consumer, anything that says "integrated Intel graphics" is a big red flag to me. I trust it's the same for a lot of people.