Rumour: Bulldozer 50% Faster than Core i7 and Phenom II.

Status
Not open for further replies.

JFAMD

Senior member
May 16, 2009
565
0
0
Nobody needs more than 640K.

Right?

I think there should be a requirement on this board that anyone who makes a statement like "we don't need more cores" should be required to revisit that statement in 18 months, when they are running a CPU with more cores than they are using today.

And they should not be allowed to complain about having "only" 8 cores in 24 months.

If you take a step back, logically, and look at the market, both AMD and intel are increasing core counts. That can mean only one of 2 things:

1. The future will be more multithreaded and software will be able to take advantage

2. Neither company, who spends hundreds of thousands of hours researching, talking to customers, talking to software vendors, studying trends and looking at technology, called it right.

I am putting all of my money on #1.

If you seriously believed that the future was not going to be more threaded than today, you'd build a bunch of single core processors and be done with it. You wouldn't spend tens of millions on R&D to get more cores into a processor unless you had a pretty clear understanding of how those cores were going to be used.

I guess a third option is that AMD is 100% wrong on this and intel is chasing AMD instead of paying attention to the needs of the market. But I tend to not believe that is even a viable choice.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
There are a ton of current games that benefit enormously from a quad core vs. a dual core.
You're saying back in H1 2010 there were lots of games that benefited enormously from a quad? Even now I can play 98% of all games just as well with my E8400 at 4.4 GHz as with a modern quad. The only interesting cases are games that are limited by CPU power, and then only if the dual core doesn't already deliver more than 60 fps (or whatever your personal threshold is; I can't notice much difference in an RTS between 40 and 60 fps, but that's just me). Which means I've used the CPU quite a bit longer than I planned, and by now it should have been replaced by a 2400K... ah well, that didn't turn out too well. It will have to do for another month or two ~


JFAMD said:
If you take a step back, logically, and look at the market, both AMD and intel are increasing core counts. That can mean only one of 2 things:

1. The future will be more multithreaded and software will be able to take advantage

2. Neither company, who spends hundreds of thousands of hours researching, talking to customers, talking to software vendors, studying trends and looking at technology, called it right.
Yeah, it's not as if one company invested billions in trying to get faster cores, which failed only because of physical limitations, then overhauled its architecture for much higher IPC before starting to focus on core counts.
You somehow forget option 3: neither AMD nor Intel can make much of a business selling old CPUs forever, and since much higher frequencies won't work they need something else. So what else should they do? Increase IPC? Both are working on that.
Integrate a GPU or something else new? Well, we've heard enough about that.
Seems to me like they're also looking into other things, but they more or less have to use the extra die size for something, and core count is pretty much the only thing available (or even more cache, but that's already a large portion of the die size... and doesn't sell that well, I'd wager).


Also, you work on the business side of things, where there are completely different problems; for lots of businesses more cores are extremely useful.
But sure, if we want to bet that in 2013 an overclocked 2400K will still run every game just fine (nobody's disputing that more cores are inherently useful for things like en/decoding), I'm all for it.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
Nobody needs more than 640K.

Right?

Using a debunked lie won't score you any points... it will just make you look uninformed... and a PR tool:

http://en.wikiquote.org/wiki/Bill_Gates#Misattributed
http://www.wired.com/politics/law/news/1997/01/1484

But it's kinda funny you should bring up the "Nobody needs more than...".
Because that is exactly what you (AMD) are doing in regards to tessellation, arguing NVIDIA has too "much" tessellation power compared to AMD o_O.

Don't use debunked myths to make a point... you get hit by a hard dose of reality blowback...
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
@Lonbjerg

Do you believe a 4 thread CPU is enough by 2013-2014?

There are many games that currently use up to 6 threads, and that's bound to go to 8 before 2013. Only having 4 threads when most games by then use 8... is going to make you CPU-bottlenecked compared to what others use.

That is what JFAMD is saying: in the computer world, you should be careful about saying "I'll never need more of this or that". Do you disagree with that sentiment?
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Don't feed him, he's known to hate AMD and ATi regardless of their technological prowess or failures.

I can say that I've been using this quad-core CPU for over 3 years and I couldn't be happier with the performance, but there are too few applications that can push it to its limits: some like Nero media encoding and fewer than 10 PC games like Mafia II, Mass Effect 1 and 2, etc. The future is definitely more cores, but I think that even an i5 750 is more than enough for today's software on the consumer market, unless you are working with professional workstation stuff like lots of VMs/management/3D rendering software.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
evolucion8 said:
but there are too few applications that can push it to its limits, some like Nero media encoding and fewer than 10 PC games like Mafia II, Mass Effect 1 and 2, etc.
Ok, the first one's encoding, which we don't have to argue about... you need a gigantic number of cores or a tiny sample before that runs into problems ;)

Never played Mafia II, so I can't argue with that, but I've played ME1 and 2 on an E8400, and if you're claiming they're bottlenecked by the CPU, then either they're completely unoptimized for more than 2 threads (which IIRC is not the case) or your quad is running at 2 GHz, since my good old E8400 gets over 60 fps there.
A great case where you may see a large percentage increase that isn't really interesting with a refresh rate of 60 Hz.


I really should look up some old posts from when the dual/quad debate was hot; should be quite funny in retrospect.
 

formulav8

Diamond Member
Sep 18, 2000
7,004
523
126
Lonbjerg said:
Using a debunked lie won't score you any points... it will just make you look uninformed... and a PR tool:

http://en.wikiquote.org/wiki/Bill_Gates#Misattributed
http://www.wired.com/politics/law/news/1997/01/1484

But it's kinda funny you should bring up the "Nobody needs more than...".
Because that is exactly what you (AMD) are doing in regards to tessellation, arguing NVIDIA has too "much" tessellation power compared to AMD o_O.

Don't use debunked myths to make a point... you get hit by a hard dose of reality blowback...

Grow up already :/


Anyway, I don't believe the 50% faster rumor personally, unless a higher-core BD is being compared to a lesser-core PII/i7.


I'm also feeling BD is going to be delayed :(
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Arkadrel said:
That is what JFAMD is saying: in the computer world, you should be careful about saying "I'll never need more of this or that". Do you disagree with that sentiment?

Remember 15 years ago? Everyone upgraded because they NEEDED better performance. For the past few years, upgrades have mostly happened for the majority of people only when something broke down. In the Pentium III and Pentium 4 days, even the early Core 2 days, I really wanted a more responsive system. When I moved from the Core 2 to the Core i5, I started feeling "this is enough".

formulav8 said:
Anyway, I don't believe the 50% faster rumor personally, unless a higher-core BD is being compared to a lesser-core PII/i7.

Maybe it's not that hard in very parallel apps, or in those that take advantage of the FMA instructions in the new CPU.
 

Soleron

Senior member
May 10, 2009
337
0
71
formulav8 said:
Anyway, I don't believe the 50% faster rumor personally, unless a higher-core BD is being compared to a lesser-core PII/i7.

I think that is the point. AMD will be selling desktop 8-cores while Intel will only go to 6 cores. AMD will also sell 16-cores in servers against 10-core Westmeres (no SB for high-end servers this year).

Since Westmere-EX only clocks to 2.4 GHz, if BD can match that but with 60% more cores, they win.

Same with SB. If AMD can get a 3.5 GHz 8-core BD (the architecture is officially capable of this) up against a 3.4 GHz 6-core SB (since the 4-cores clock to this), it should do very well.
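[Editor's sketch] Soleron's "60% more cores" arithmetic is just aggregate throughput treated as cores times clock. A toy Python check of that reasoning (the IPC term is a simplifying assumption; real chips lose to memory bandwidth, IPC gaps, and imperfect scaling):

```python
def relative_throughput(cores, clock_ghz, ipc=1.0):
    # Crude upper bound: aggregate throughput ~ cores * clock * per-core IPC.
    # Ignores memory bandwidth, Amdahl's law, and architectural differences.
    return cores * clock_ghz * ipc

bd = relative_throughput(16, 2.4)    # hypothetical 16-core BD at Westmere-EX clocks
wsm = relative_throughput(10, 2.4)   # 10-core Westmere-EX
print(f"Aggregate ratio: {bd / wsm:.2f}x")  # the "60% more cores" edge
```

This only holds for embarrassingly parallel server loads where all cores stay busy; a single IPC advantage on Intel's side shifts the whole comparison.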
 

Mopetar

Diamond Member
Jan 31, 2011
8,510
7,766
136
Lonbjerg said:
Using a debunked lie won't score you any points... it will just make you look uninformed... and a PR tool:

http://en.wikiquote.org/wiki/Bill_Gates#Misattributed
http://www.wired.com/politics/law/news/1997/01/1484

But it's kinda funny you should bring up the "Nobody needs more than...".
Because that is exactly what you (AMD) are doing in regards to tessellation, arguing NVIDIA has too "much" tessellation power compared to AMD o_O.

Don't use debunked myths to make a point... you get hit by a hard dose of reality blowback...

Don't recall him attributing the quote to Bill Gates. Sticking words in someone else's mouth won't score you any points either.
 

Mopetar

Diamond Member
Jan 31, 2011
8,510
7,766
136
IntelUser2000 said:
Remember 15 years ago? Everyone upgraded because they NEEDED better performance. For the past few years, upgrades have mostly happened for the majority of people only when something broke down. In the Pentium III and Pentium 4 days, even the early Core 2 days, I really wanted a more responsive system. When I moved from the Core 2 to the Core i5, I started feeling "this is enough".

The other side of Moore's law is that you'll be able to get a similar amount of performance from a smaller die. This has obvious benefits such as lower cost and lower power draw.

You may have enough power right now, but eventually you'll be able to get that same amount of power in smaller devices. Netbooks or something like the Macbook Air wouldn't have been possible ten years ago.

If all you do is browse the web and play the occasional game, you most likely have more than enough performance already. However, there are plenty of professionals who are always hungry for more computational power.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
Voo said:
Ok, the first one's encoding, which we don't have to argue about... you need a gigantic number of cores or a tiny sample before that runs into problems ;)

Never played Mafia II, so I can't argue with that, but I've played ME1 and 2 on an E8400, and if you're claiming they're bottlenecked by the CPU, then either they're completely unoptimized for more than 2 threads (which IIRC is not the case) or your quad is running at 2 GHz, since my good old E8400 gets over 60 fps there.
A great case where you may see a large percentage increase that isn't really interesting with a refresh rate of 60 Hz.

I really should look up some old posts from when the dual/quad debate was hot; should be quite funny in retrospect.

A quad was barely enough for me, but then again I do some weird stuff with my PC, like trying to run a PS2 emulator in the background along with a game (that didn't go well; needed more cores and RAM). There are a few other things I do, too.

Since I gave my brother my old rig, I'm going to be waiting for 2011 or BD. Sandy Bridge doesn't impress me enough, or have enough cores. I also want more SATA ports. At least 8. I keep running out.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Arkadrel said:
@Lonbjerg

Do you believe a 4-thread CPU is enough by 2013-2014?

There are many games that currently use up to 6 threads, and that's bound to go to 8 before 2013. Only having 4 threads when most games by then use 8... is going to make you CPU-bottlenecked compared to what others use.

Many?

I'll believe several, but not many.
 

drizek

Golden Member
Jul 7, 2005
1,410
0
71
JFAMD said:
1. The future will be more multithreaded and software will be able to take advantage

Here's a quote from ExtremeTech's Phenom review that I think sums up the situation:

Again, it's not really much of a contest, with Intel ruling the roost in these 3D rendering and applications tests.

There is one ray of sunshine here for the Phenom. The Cinebench 10 benchmark runs parts of the benchmark in both single threaded and multithreaded mode, then reports the overall efficiency of moving from single to multiple threads. Both Intel CPUs see multicore efficiencies of around 3.5-3.6 (3.53 for the Q6600.) The Phenom 9600's multicore speedup is 3.77. This is no doubt due to the integrated memory controller. But it's not enough to catch the Q6600.

This suggests that if AMD can get the clock frequencies up, it could be very competitive with Intel in multicore performance. Given what we've seen, that may not happen until AMD brings up its 45nm process, however.
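[Editor's sketch] The review's scaling numbers map directly onto Amdahl's law: a quick check (my own back-of-envelope, not from the review) of what parallel fraction each observed 4-core speedup implies:

```python
def amdahl_speedup(p, n):
    """Speedup on n cores if fraction p of the work parallelizes (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(speedup, n):
    """Invert Amdahl's law: what fraction p explains an observed n-core speedup?"""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n)

# Observed Cinebench 10 multicore scaling on 4 cores, from the quoted review:
for name, s in [("Q6600", 3.53), ("Phenom 9600", 3.77)]:
    p = parallel_fraction(s, 4)
    print(f"{name}: {s}x on 4 cores -> ~{p:.1%} of the work runs in parallel")
```

A 3.77x speedup implies roughly 98% of the workload parallelizes versus about 96% for the Q6600's 3.53x; a small efficiency edge, consistent with the review's integrated-memory-controller explanation, but not enough to offset a per-core performance deficit.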

In other words, Intel's "fake" quad core was better than AMD's, because Intel focused on single-threaded performance and clock speed first. Then they just glued some dual core CPUs together and called it a day. The fact that this gave them a better product is proof that AMD's bet was wrong. Phenom is now at the end of its life and there is still no "I told you so" moment. Even at 45nm, Bloomfield is better than Thuban for the vast majority of people. An i7 950 is better than an 1100 in virtually every single benchmark, and they are both the same price.

It seems to me that Bulldozer is actually going in the opposite direction. Obviously, I don't know too much about how it will actually perform at this point, but generally speaking it seems to be emphasizing greater single-threaded performance. If they expected all 8 cores to be fully utilized, then they would have made 8 "full" cores instead of having them share resources, and they wouldn't have bothered with the whole turbo mode thing. Turbo mode is only really useful for when you aren't using the extra cores. Or am I looking at it the wrong way?

Another consideration is GPUs. Isn't it true that workloads that can take advantage of multiple CPU cores are also usually possible to code for a GPU? Things like video encoding and simulations. So where does that leave the CPU if it is more efficient to run a lot of this code on a GPU anyway? What's the point of running F@H on a Bulldozer when you can run it twice as fast on a video card that costs half as much?
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,088
3,931
136
Using a debunked lie won't score you any points...it will just make you look uninformed..and a PR tool:

http://en.wikiquote.org/wiki/Bill_Gates#Misattributed
http://www.wired.com/politics/law/news/1997/01/1484

But kinda funny you should bring up the "Nobody needs more than...".
Becuae that is the what you (AMD) is doing in regards to tesselation, because NVIDA has too "much" tesselation power compared to AMD o_O.

Don't use debunked myths to make a point...you get hit by a hard dosis of reality blowback...

I signed up just because drivel like this post gives me the ****s.
The tessellation factor AMD recommends has nothing to do with NV's tessellation power; it has everything to do with rasterisation. Basically, as triangle size gets smaller, rasterisation efficiency goes down as well; it's about balancing increased geometry performance against decreased rasterisation performance. AMD believes the optimal point is around 16x tessellation. With the new Cat hotfix drivers AMD backs this up: with the user-controllable tessellation factor you can limit NV's tech demos to 16x and see that there is barely any recognisable difference while improving performance.

NV is pushing higher tessellation just to press an advantage vs AMD, not to actually produce better-looking graphics.
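[Editor's sketch] The rasterisation argument above is easy to model numerically: a tessellation factor f splits each patch edge roughly f ways, yielding on the order of f² triangles, so per-triangle pixel coverage collapses quadratically. A back-of-envelope illustration (the patch size and the f² triangle count are simplifying assumptions, not actual hardware behaviour):

```python
def pixels_per_triangle(patch_pixels, factor):
    # Tessellation factor f gives ~f^2 triangles per patch, so the screen
    # area covered by each triangle shrinks as 1/f^2.
    return patch_pixels / factor ** 2

# A patch covering 64x64 = 4096 pixels on screen:
for f in (1, 4, 16, 64):
    print(f"factor {f:2d}: ~{pixels_per_triangle(4096, f):7.1f} px per triangle")
```

In this toy model, at 16x a triangle covers about 16 pixels, still a sensible multiple of the 2x2 pixel quads rasterisers shade in; at 64x triangles drop to a pixel each and much of the rasteriser and shader work is wasted on quad overdraw, which is the balance point the post argues AMD is optimising for.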
 

Mopetar

Diamond Member
Jan 31, 2011
8,510
7,766
136
drizek said:
In other words, Intel's "fake" quad core was better than AMD's, because Intel focused on single-threaded performance and clock speed first. Then they just glued some dual core CPUs together and called it a day. The fact that this gave them a better product is proof that AMD's bet was wrong. Phenom is now at the end of its life and there is still no "I told you so" moment. Even at 45nm, Bloomfield is better than Thuban for the vast majority of people. An i7 950 is better than an 1100 in virtually every single benchmark, and they are both the same price.

The codebase for many applications has only recently been rewritten to take advantage of the changes in computing. There are applications such as Final Cut Studio that are still 32-bit despite the fact that 64-bit processors have been around for quite a while now. A lot of these programs were not written to take advantage of multiple threads, and of those that were, many weren't written to scale as the number of cores increased.

Intel has also worked significantly towards improving their IPC and chose to use technologies such as hyper-threading to increase performance rather than increasing their core count. Technologies like OpenCL that are designed to easily allow scaling across cores have only appeared recently and many applications have yet to implement them into their codebases. Applications that do implement them will be able to take advantage of CPUs regardless of how many cores they contain.
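[Editor's sketch] The "scale regardless of how many cores they contain" idea doesn't need OpenCL to demonstrate; here's a sketch using Python's standard library with a hypothetical prime-counting workload (my example, not anything from the thread), where the worker count adapts to whatever CPU it runs on:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    # Trial-division prime count over [lo, hi) -- an embarrassingly parallel chunk.
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    ncores = os.cpu_count()  # never hard-codes a core count
    chunks = [(i * 20000, (i + 1) * 20000) for i in range(ncores)]
    with ProcessPoolExecutor(max_workers=ncores) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(f"{total} primes found using {ncores} workers")
```

Code written this way is exactly what the post describes: it gets faster on a dual, quad, or 8-core chip with no changes, which is the property OpenCL-style frameworks promise for much less uniform workloads.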

drizek said:
It seems to me that Bulldozer is actually going in the opposite direction. Obviously, I don't know too much about how it will actually perform at this point, but generally speaking it seems to be emphasizing greater single-threaded performance. If they expected all 8 cores to be fully utilized, then they would have made 8 "full" cores instead of having them share resources, and they wouldn't have bothered with the whole turbo mode thing. Turbo mode is only really useful for when you aren't using the extra cores. Or am I looking at it the wrong way?

I'm not a chip designer, but power gating makes sense if you're not using all of the chip. If they're going to add that, it might not be much harder to add a turbo mode. Turbo is also quite useful for people mostly interested in web browsing or other light workloads where the chip isn't being stressed. Being able to increase frequency in the situations where the user needs something done is quite useful.

drizek said:
Another consideration is GPUs. Isn't it true that workloads that can take advantage of multiple CPU cores are also usually possible to code for a GPU? Things like video encoding and simulations. So where does that leave the CPU if it is more efficient to run a lot of this code on a GPU anyway? What's the point of running F@H on a Bulldozer when you can run it twice as fast on a video card that costs half as much?

It depends on the code. GPUs are insanely good at parallel processing, but horrible at general-purpose computing. I imagine that both Intel and AMD eventually plan to make a processor that doesn't so much contain an IGP as it integrates the best parts of both together.
 

drizek

Golden Member
Jul 7, 2005
1,410
0
71
Mopetar said:
It depends on the code. GPUs are insanely good at parallel processing, but horrible at general-purpose computing. I imagine that both Intel and AMD eventually plan to make a processor that doesn't so much contain an IGP as it integrates the best parts of both together.

My point is that an 8-core CPU is also pretty useless for general purpose computing. The applications that take advantage of 8 cores are going to be the ones that will benefit most from being offloaded to the GPU, so what role does an 8-core processor actually play in a desktop system?

Mopetar said:
power gating makes sense if you're not using all of the chip

So why go through all that trouble to implement power gating if you expect people to actually use all of the chip? Normally, a CPU is either idle or stressed. If applications truly were multithreaded as a matter of course, having individually gated modules would be pointless, since they would all be running at the same speed anyway. It only makes sense if the assumption is that people aren't actually going to be using all of the chip, even when they are stressing it.
 

classy

Lifer
Oct 12, 1999
15,219
1
81
I have seen these arguments since the dual-CPU-motherboard days and they are stupid. No one shops for a CPU by saying "gee, I don't need 8 threads" or the classic "which one has the higher IPC performance". 99.9% of all PCs and servers are bought based on price. Period. If a higher-performance CPU costs the same as a lesser-performing one, you will buy the higher-performing CPU even if you don't need all of its power.

Now, those of us who manage servers in real life at work may compare price to performance more carefully, but in the end most of the time we buy the "best deal". Comparing architectures is also somewhat dumb for the same reasons. Sandy Bridge or the upcoming Bulldozer can have the most perfect design, but in the end 99.9% are going to buy the best-performing CPU whether its design is inferior or not. In short, you buy a finished product and can only use the finished result of the product.

Now it's completely OK to discuss points of design and such, but the arguments here just aren't reality.
 

Mopetar

Diamond Member
Jan 31, 2011
8,510
7,766
136
drizek said:
My point is that an 8-core CPU is also pretty useless for general purpose computing. The applications that take advantage of 8 cores are going to be the ones that will benefit most from being offloaded to the GPU, so what role does an 8-core processor actually play in a desktop system?

Part of the problem is that GPGPU computing is fairly new and there aren't a lot of applications coded to take advantage of it. There's also the issue with different standards. Nvidia is promoting CUDA whereas AMD isn't using it. If things were to standardize behind OpenCL, developers wouldn't have to worry about having to recode it if CUDA didn't take off or code for multiple architectures.

You can also do a lot with 8 cores, such as running your browser in a virtual machine. At that point you don't have to worry about getting your system infected through security holes in the browser: once you're done browsing, you just throw away the VM, and any infections along with it. You could also use a few cores to encrypt data as it's written to disk, ensuring that even if your system is compromised it will be harder to steal your data. It also allows better multitasking. Flash videos in Linux usually peg an entire processor core; if I didn't have more than one, I wouldn't be able to watch videos with smooth playback while doing anything else. If I have 8 and only 4 are used while something is rendering, I can use the computer for something else without impacting the performance of the render. Just because you can't see any reason to have more cores doesn't mean that clever developers won't be able to take advantage of them.

drizek said:
So why go through all that trouble to implement power gating if you expect people to actually use all of the chip? Normally, a CPU is either idle or stressed. If applications truly were multithreaded as a matter of course, having individually gated modules would be pointless, since they would all be running at the same speed anyway. It only makes sense if the assumption is that people aren't actually going to be using all of the chip, even when they are stressing it.

Almost no system is going to operate at peak load all of the time. My processor had to render this webpage, but once it's done it doesn't have much work to do while I read, so why keep both cores running? That's why power gating is important. If the default clock rate were set at 3.8 GHz rather than 3.4, it might increase the TDP. However, if it's not doing anything taxing before or after, it can temporarily increase its clock to gain the same level of performance without a TDP increase. That's why turbo boost is important.

It's also important in server settings where the amount of processing power used will vary heavily throughout the course of the day. When it's not being used, being able to turn off cores saves massive amounts of electricity. When it's being used sporadically, being able to increase the clock rate to get things done faster is nice. However, eventually they will need the full power of the chip.

Some of the reasoning also probably lies in the fact that developing multiple types of chips is expensive. It's easier to have a few designs that can scale across different ranges. Intel's SB architecture is being used in everything from notebooks to high-end desktops, and eventually it will make its way into servers. It wouldn't be cost effective for Intel or AMD to customize chips for every conceivable usage scenario. Intel and AMD spend billions of dollars on chip development and design. I think they know what they're doing.
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
Mopetar said:
The other side of Moore's law is that you'll be able to get a similar amount of performance from a smaller die. This has obvious benefits such as lower cost and lower power draw.

You may have enough power right now, but eventually you'll be able to get that same amount of power in smaller devices. Netbooks or something like the Macbook Air wouldn't have been possible ten years ago.

If all you do is browse the web and play the occasional game, you most likely have more than enough performance already. However, there are plenty of professionals who are always hungry for more computational power.

drizek said:
In other words, Intel's "fake" quad core was better than AMD's, because Intel focused on single-threaded performance and clock speed first. Then they just glued some dual core CPUs together and called it a day. The fact that this gave them a better product is proof that AMD's bet was wrong. Phenom is now at the end of its life and there is still no "I told you so" moment. Even at 45nm, Bloomfield is better than Thuban for the vast majority of people. An i7 950 is better than an 1100 in virtually every single benchmark, and they are both the same price.

It seems to me that Bulldozer is actually going in the opposite direction. Obviously, I don't know too much about how it will actually perform at this point, but generally speaking it seems to be emphasizing greater single-threaded performance. If they expected all 8 cores to be fully utilized, then they would have made 8 "full" cores instead of having them share resources, and they wouldn't have bothered with the whole turbo mode thing. Turbo mode is only really useful for when you aren't using the extra cores. Or am I looking at it the wrong way?

Another consideration is GPUs. Isn't it true that workloads that can take advantage of multiple CPU cores are also usually possible to code for a GPU? Things like video encoding and simulations. So where does that leave the CPU if it is more efficient to run a lot of this code on a GPU anyway? What's the point of running F@H on a Bulldozer when you can run it twice as fast on a video card that costs half as much?

My point is that an 8-core CPU is also pretty useless for general purpose computing. The applications that take advantage of 8 cores are going to be the ones that will benefit most from being offloaded to the GPU, so what role does an 8-core processor actually play in a desktop system?

So why go through all that trouble to implement power gating if you expect people to actually use all of the chip? Normally, a CPU is either idle or stressed. If applications truly were multithreaded as a matter of course, having individually gated modules would be pointless, since they would all be running at the same speed anyway. It only makes sense if the assumption is that people aren't actually going to be using all of the chip, even when they are stressing it.


WTF, this thread is discussing Bulldozer performance, NOT CORE COUNT. It's all about PERFORMANCE. If you don't want/need 8 cores, then just buy the 4-core version with a higher clock, or go to freakin' Intel if AMD doesn't satisfy you. And with the new turbo boost it will surely help single-threaded performance.


And btw, that's why AMD is heading toward the Fusion chip: it will take GPGPU to the mainstream.
 

drizek

Golden Member
Jul 7, 2005
1,410
0
71
Mopetar said:
Part of the problem is that GPGPU computing is fairly new and there aren't a lot of applications coded to take advantage of it. There's also the issue of different standards. Nvidia is promoting CUDA whereas AMD isn't using it. If things were to standardize behind OpenCL, developers wouldn't have to worry about recoding if CUDA didn't take off, or about coding for multiple architectures.

You can also do a lot with 8 cores, such as running your browser in a virtual machine. At that point you don't have to worry about getting your system infected through security holes in the browser: once you're done browsing, you just throw away the VM, and any infections along with it. You could also use a few cores to encrypt data as it's written to disk, ensuring that even if your system is compromised it will be harder to steal your data. It also allows better multitasking. Flash videos in Linux usually peg an entire processor core; if I didn't have more than one, I wouldn't be able to watch videos with smooth playback while doing anything else. If I have 8 and only 4 are used while something is rendering, I can use the computer for something else without impacting the performance of the render. Just because you can't see any reason to have more cores doesn't mean that clever developers won't be able to take advantage of them.

- So does OpenCL run on both CPUs and GPUs?
- Virtualization on a desktop? Yeah, I do it, and it's kinda cool, but isn't that what servers are for? Why not just go whole hog and run a browser over a networked VM?
- Westmere/SB, and I think BD, have hardware AES encryption. Again, it is massively faster than any general-purpose CPU could ever hope to be.
- Flash video? I'm glad you brought it up. You can play 1080p YouTube now on a 1.4 GHz Core 2 Duo. How? GPU acceleration.

Again, the most demanding tasks we do on a CPU can be ported over to the GPU or other specialized hardware or instructions (like AES, for example). Encryption, decryption, video encoding, video decoding, 3D rendering, scientific modeling, and the user interface itself can all be offloaded to the GPU. There is no doubt that some types of applications will remain on the CPU, but those applications are likely not to be highly threaded, and they are likely to benefit more from higher clock speeds than from more cores.

Look at ARM. Their A9 chips can do hardware 1080p encoding in real time. I don't think you can do that even on a multi-core CPU. Sure, there is a quality hit, but it is still a pretty incredible achievement. Those chips also accelerate things like JPEG rendering. All this on a tiny little chip with a tiny little TDP. We are already seeing smartphones like the Motorola Atrix coming out that can function as full-fledged desktop computers. Sure, it is still a generation or two away from being truly viable, but the potential is there. If I had to bet money, I would say that we are more likely to see an ARM SoC desktop be the mainstream PC of the future rather than, say, a 24-core x86 chip.

Mopetar said:
Almost no system is going to operate at peak load all of the time. My processor had to render this webpage, but once it's done it doesn't have much work to do while I read, so why keep both cores running?

I'm not saying get rid of Cool'n'Quiet entirely, but if web page rendering were truly multithreaded, then your CPU would have maxed out all 8 cores for two seconds and then put them all back down to the lowest P-state. The reality, though, is that most things aren't multithreaded, so a Bulldozer would only max out one or two cores while rendering a page and leave the others volted down while they idle. This is why power gating is efficient. I'm not arguing that it is bad (it isn't, it's great), but what I am saying is that it is an implicit admission on AMD's part that multithreading is not the norm.

Mopetar said:
Some of the reasoning also probably lies in the fact that developing multiple types of chips is expensive. It's easier to have a few designs that can scale across different ranges. Intel's SB architecture is being used in everything from notebooks to high-end desktops, and eventually it will make its way into servers. It wouldn't be cost effective for Intel or AMD to customize chips for every conceivable usage scenario. Intel and AMD spend billions of dollars on chip development and design. I think they know what they're doing.

Yes, which is why AMD hasn't gone out of business. They know how to be competitive in the server world, where multithreading is easy and good scaling actually pays off. For desktops, though, they usually don't end up with the best chips. It seems to me that Bulldozer is more desktop-oriented than Thuban, but I could be wrong about that.
 

ShadowVVL

Senior member
May 1, 2010
758
0
71
Ok, I think we are done with the 6-and-8-vs-4-core debate.

I would like to know a little more about how tessellation works. From what I've read it's basically like subdivision or something like that. But I'm wondering if someone can explain how it works, why it's so special, and what it's all about.

In 3 or fewer paragraphs per post, please.
 

Dark Shroud

Golden Member
Mar 26, 2010
1,576
1
0
That link isn't going to work for us. You have to upload your photo to a photo-sharing service like ImageShack.
 

hamunaptra

Senior member
May 24, 2005
929
0
71
ShadowVVL said:
Ok, I think we are done with the 6-and-8-vs-4-core debate.

I would like to know a little more about how tessellation works. From what I've read it's basically like subdivision or something like that. But I'm wondering if someone can explain how it works, why it's so special, and what it's all about.

In 3 or fewer paragraphs per post, please.

Tessellation... um, aren't you in the wrong thread? LOL
 