The definition of a "core" is blurring, and the industry now has the unenviable job of accurately portraying performance to the consumer without being able to use the term "core" in an apples-to-apples way, just as GHz became a clearly inadequate measure of performance years earlier.
Will AMD and Intel have the honesty to come up with new metrics that the average consumer will understand, or will they exploit the blurred lines for less-than-100%-accurate marketing purposes? I sincerely hope both choose the former.
Whatever AMD is doing here, at least we can all agree it is definitely the start of something different, and different can be good if change is what is needed.
Will these Bulldozer CPUs be able to run on existing AM3 boards?
From what I have read, Bulldozer will be released with a new AM3 chipset that has thus far been referred to as Rev2. I think the plan is an intermediary update, like the move from AM2 to AM2+.
Bobcat to me looks like a slam dunk. I have not read anything from Intel about adding out-of-order processing, SSE extensions, or virtualization to any Atom derivative anywhere. Unless Intel can die-shrink a Core 2 Duo and achieve similar performance and power, Bobcat will be a vastly superior chip. Even at a 1 GHz clock speed it will thoroughly trounce 2.0 GHz Atoms in most cases, and AMD's IGPs are light-years ahead of Intel's in terms of raw power and efficiency.
What I wonder is whether the 2:1 ratio of integer cores to FP units is the right ratio, or whether 3:1, 4:1, or 3:2 would be more efficient. In this respect GPUs appear more advanced: their ratio of processor cores seems fixed to a specific expected outcome, while this ratio seems more arbitrary. Of course it only seems arbitrary because I don't know the reasoning behind it, but I do wonder where this ratio was conceived and what numbers were used to calculate the most efficient ratio. I think we will see more efficient ratios in the future after this has gone through a few cycles, just as the core ratios on GPUs have gotten far more efficient over the years.
So the answer is no?
What confuses me even more is this idea of an integrated GPU. Will this integrated GPU be coming with all Bulldozers or just some of them?
I am wondering if a non-integrated GPU Bulldozer will be compatible with existing AM3 mainboards.
Yes, the answer is no. My understanding is that AMD's mainstream Llano parts will all feature IGPs, similar to the IGP used in Intel's Clarkdale CPUs. This will effectively eliminate the northbridge completely (although I am not sure if they will route the PCIe bus through the chip as well, like Intel did with P55). I believe, and don't quote me, that the high-end Zambezi parts will not have IGPs and will instead have increased core counts and possibly larger caches. Maybe something similar to the Athlon II and Phenom II split, with the Athlon II having the IGP.
Intel has had ULV Core 2 Duo parts for some time, though I'm not sure if they have bothered producing anything sub-10 W yet. As with any post-Netburst s775 chip (or any derivative thereof), the real issue is going to be chipset power draw more so than processor power draw. Bobcat will, presumably, be able to do things on-die within its stated power envelope that a die-shrunk ULV Core 2 Duo could not, hence the need for an on-board northbridge and IGP with any Core 2 Duo laptop (and all the power draw that comes with such things).
If anything, I would expect Intel to fight back with a mobile i3 variant or . . . something. Atom is more intended to go head-to-head with ARM chips so I wouldn't expect Intel to use it against Bobcat.
Even Tegra is better than the HD4500 or any other Intel graphics.
deputc, given that we don't even make much of an attempt to characterize single-threaded IPC as a function of instruction-set mix (execution speed of some 700+ instructions!?) between any two competing microarchitectures, I doubt we will see much progress in the direction you want things to go once you further incorporate the effects of multiple threads operating simultaneously.
And really, why bother? We let price be our primary selection criterion anyway, and we rely on "performance by app category" (are you encoding? are you using MATLAB? are you big on Cinebench? etc.) to guide us toward a price/performance decision point from there.
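Just to illustrate the scale of that characterization problem, here is a toy Python sketch of what even a four-instruction-class IPC estimate would involve; every IPC figure and mix fraction below is invented for illustration, and a real chip has hundreds of classes whose mix shifts with every workload:

```python
# Toy illustration of why per-instruction characterization doesn't scale:
# even a crude model needs an IPC estimate for every instruction class,
# and the weights (the mix) change per workload. All numbers are made up.

def estimated_ipc(mix, ipc_by_class):
    """Mix-weighted IPC estimate.

    mix          -- {instruction_class: fraction of dynamic instructions}
    ipc_by_class -- {instruction_class: sustained IPC for that class}
    """
    # Weighted harmonic mean: each class contributes fraction/IPC cycles
    # per instruction; overall IPC is the reciprocal of the total.
    cycles_per_instr = sum(frac / ipc_by_class[cls] for cls, frac in mix.items())
    return 1.0 / cycles_per_instr

# Hypothetical per-class IPC for one fictional chip:
chip = {"int_alu": 3.0, "load_store": 1.5, "fp": 1.0, "branch": 2.0}

# Two invented workload mixes on that same chip:
encoder = {"int_alu": 0.45, "load_store": 0.30, "fp": 0.15, "branch": 0.10}
fp_heavy = {"int_alu": 0.25, "load_store": 0.30, "fp": 0.35, "branch": 0.10}

print(f"encoder IPC  ~ {estimated_ipc(encoder, chip):.2f}")   # ~1.82
print(f"FP-heavy IPC ~ {estimated_ipc(fp_heavy, chip):.2f}")  # ~1.46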
And let's be honest with ourselves: anything AMD (or Intel) did in the name of reducing the complex issue of performance to a single metric (as in performance ratings) would just be met with harsh criticism and abject skepticism anyway, as we are all cynics when it comes to gift horses offered to us by anything with a profit motive.
So really, AMD (and Intel) are just better off leaving it up to the reviewers (the ones we view as trustworthy third parties in this cabal) to evaluate the products and tell us how they perform within each given application category.
How can that be possible? None of the Tegra series, including the APX 2x00 series or the Tegra 6x0 series, has a very powerful dedicated GPU, because that would kill its purpose as an ultra-efficient technology with regard to power consumption. It doesn't even support full-fledged OpenGL or Direct3D, only their mobile counterparts. I couldn't find information on which GPU derivative it uses, but it surely isn't powerful enough to outperform even an HD 3200 or similar NVIDIA cards.
I fully agree with what you are saying; I didn't state my concern clearly, and in retrospect I was beating around the bush a bit.
When a company misrepresents, even a little, the performance of a product, it invariably profits in the short term but receives backlash from the reviewers (and opinion leaders, us), which has a long-term, very-difficult-to-quantify, but very real effect not only on sales but also on the reputation of the company and, by extension, the industry. I tried to be brand-agnostic in my first post, but I am worried that AMD will advertise "more cores than Intel," which will fool the average consumer into thinking that this means "better than Intel" (and maybe it will be), while really each Bulldozer "module" should *honestly* be referred to as just one really awesome core, or maybe 1.5 cores. I really do wish the best for AMD, and I don't want to see them lose credibility with short-sighted marketing.
Yep, this will transition from a "core war" to a "thread war"
...
Bobcat is 1-10 watts single core?
Well that definitely is low power consumption. What kind of products will this be going into?
But this vernacular is undoubtedly going to give way to the more descriptive label of "thread count" in place of "core count," IMO.
Yes, that is a very good question. In talking to customers, their applications typically fall into two camps. ~80-90% fall into the "mainly integer" camp, where they have little or no FP in their code, so 2:1 is a definite improvement because they save power. Going beyond that (like 3:1 or 4:1) *might* add too much latency, because the FPU could get more traffic lining up behind it. If the results for an integer thread rely on a pending result from the FPU, everything would slow down (but I am not a designer, I am guessing here).
The other applications (the 10-20% or so) are heavily FP, and those customers are actually more concerned about getting GPUs into their systems for massive FP calculations. In that case they are also thinking 2:1, but as 2 GPUs for every CPU.
Over time, as the software becomes more sophisticated, the desire for FPUs inside the CPU could shrink further in favor of the GPU doing that work. But we are still a while away from that trend becoming mainstream.
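A crude way to sanity-check that intuition is to treat the shared FPU as a single server and look at the offered load at each ratio. A minimal Python sketch, assuming made-up FP fractions of 10% for the integer camp and 40% for the FP camp:

```python
# Back-of-the-envelope check on integer-core : FPU ratios.
# If a fraction f of each core's instructions need the shared FPU,
# N integer cores offer the FPU roughly N*f units of work; once N*f
# exceeds 1.0 the FPU saturates and threads queue behind it.
# The workload fractions below are guesses, not AMD data.

def fpu_offered_load(cores_per_fpu, fp_fraction):
    """Offered load on one shared FPU (>1.0 means saturation)."""
    return cores_per_fpu * fp_fraction

for ratio in (2, 3, 4):
    for f, label in ((0.10, "mostly-integer code"), (0.40, "FP-heavy code")):
        load = fpu_offered_load(ratio, f)
        status = "saturated" if load > 1.0 else "ok"
        print(f"{ratio}:1 ratio, {label}: offered FPU load {load:.1f} ({status})")
```

On these invented numbers, every ratio is fine for mostly-integer code, but 3:1 and 4:1 saturate the FPU once code is FP-heavy, which matches the latency worry above.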
And again, I am a marketing guy, but that is the typical conversation that I have with customers.
Yep, this will transition from a "core war" to a "thread war"...which is all it ever really was to begin with, but since cores were synonymous with threads in the x86 world until the advent of hyperthreading, there was no penalty for failing to make the distinction.
What hyperthreading and modules are going to do is push the vernacular away from "core performance" toward "thread performance." The metrics remain the same; we just shift the labels appropriately so as to refer more correctly to what matters.
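One concrete way to see the core/thread distinction is just to ask the OS. A minimal Python sketch; os.cpu_count() is standard library, while the physical-core query assumes the third-party psutil package is installed:

```python
# Logical processors (hardware threads) vs physical cores.
import os
import psutil  # third-party; pip install psutil

logical = os.cpu_count()                    # hardware threads the OS schedules on
physical = psutil.cpu_count(logical=False)  # physical cores

print(f"{physical} cores exposing {logical} hardware threads")
# On a hyperthreaded chip this prints e.g. "2 cores exposing 4 hardware threads";
# "thread count" is what software actually sees.
```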
Then the question becomes: what is the computational power available per thread?
Something tells me it is more power-efficient to divide computational power across more and more threads (up to a point). If this is true, then practical per-thread performance (what matters for a light user like me) becomes low unless the processor is relatively small, with few threads each getting more of the power budget.
For this reason I almost wish AMD would release a dual-module 32 nm Bulldozer. With this 80%-scaling take on multithreading, such a CPU would almost act like a native quad core.
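The back-of-the-envelope arithmetic, taking the claimed ~80% second-core scaling at face value and assuming a two-module part:

```python
# Rough arithmetic only; 0.8 is one common reading of the scaling claim
# (the second integer core in a module adds ~80% of a full core).
module_scaling = 1.0 + 0.8   # first core + 80%-effective second core
modules = 2                  # hypothetical dual-module 32nm part

core_equivalents = modules * module_scaling
print(f"{modules} modules ~ {core_equivalents:.1f} conventional cores")
# -> "2 modules ~ 3.6 conventional cores", i.e. nearly a native quad core
```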