AMD Thuban (6 core desktop) for Q2

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
According to Fuad...
http://www.fudzilla.com/content/view/17006/1/

"We don’t have many details, but we can confirm that AMD plans to launch two six-core desktop CPUs next year. This should happen in Q2 2010 and if AMD holds on to this date, it might come a bit later than Intel’s Core i7 980X.

AMD’s six-core 45nm part is codenamed Thuban and it comes with 6MB L3 cache, C-state performance boost as well as DDR3 1333MHz support. As we said before, it supports AM3 motherboards and it should work in most existing models.

We don’t have any specifics about the clocks or what will be difference between these two SKUs, but we can confirm that they are planned."
 

LoneNinja

Senior member
Jan 5, 2009
825
0
0
It'd be awesome if they manage 3.0GHz+ parts out of these; they need the clock speed if they're to even compete against the i7. Too bad that for most users the Phenom II X4 will probably offer better performance, due to higher clocks and the lack of software that can use 2+ threads, much less 4+.
 

BD231

Lifer
Feb 26, 2001
10,568
138
106
A six-core part is about the only thing AMD has to throw at Intel this year; should be interesting.
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
From the sound of it, C-state performance boost must be like Intel Turbo Boost.

That's possible... C-state is the power state (and P-state is the performance state). As a WAG, I'd say they are trying to dynamically reduce power on low-usage cores while cranking up those that are in high demand. If they can do this fast enough, it should have some very nice results.
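
Something like this toy sketch is roughly what I'm picturing (purely illustrative - the real logic would live in hardware/firmware, and the state names and clock speeds below are made up):

```python
# Toy sketch: park nearly idle cores in a deep C-state and spend the freed
# power headroom on a higher P-state for the cores that still have work.
def rebalance(core_loads, base_mhz=2800, boost_mhz=3300):
    plan = []
    for core, load in enumerate(core_loads):
        if load < 0.05:
            plan.append((core, "deep C-state (parked)", 0))    # power-gate the idle core
        elif load > 0.80:
            plan.append((core, "boosted P-state", boost_mhz))  # crank the busy core
        else:
            plan.append((core, "base P-state", base_mhz))
    return plan

# Two threads keeping two cores busy while four sit idle:
for core, state, mhz in rebalance([0.95, 0.90, 0.02, 0.01, 0.03, 0.00]):
    print(f"core {core}: {state} @ {mhz} MHz")
```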
 

MalVeauX

Senior member
Dec 19, 2008
653
176
116
Heya,

It's interesting that both Intel and AMD really are pushing more cores, more cores, more cores. Nothing mainstream has caught up to it yet. I understand it from a server/enterprise perspective. But from a desktop perspective, it's weird. People have quads and have no idea that they're not doing anything more than a dual would for them at home in their little premades. Good marketing, chaps.

Very best, :p
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
It's interesting that both Intel and AMD really are pushing more cores, more cores, more cores. Nothing mainstream has caught up to it yet. I understand it from a server/enterprise perspective. But from a desktop perspective, it's weird.
What else should they do? The times when you could crank up the frequency by x% every year are gone.

Static power has already caught up to dynamic power and keeps growing faster..

We can try to multi-thread as much as possible, but there are things where you can't just throw more CPUs at the problem to speed it up.. and I don't think anyone has a good idea what else to do with them.
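
Amdahl's law shows why pretty quickly. A back-of-the-envelope example (the 80% parallel fraction is just an assumption for illustration):

```python
# Amdahl's law: the serial part of a program caps the speedup,
# no matter how many cores you add.
def amdahl_speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A program that is 80% parallelizable (generous for desktop software):
for cores in (1, 2, 4, 6, 12):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.80, cores):.2f}x")
# 1.00x, 1.67x, 2.50x, 3.00x, 3.75x... even with infinite cores the ceiling is 1/0.20 = 5x.
```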
 

21stHermit

Senior member
Dec 16, 2003
927
1
81
Since the Intel hex-core is part of the Extreme series (translation: $1000+), if AMD can price its hex-core below $500 it should sell very well.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
The Opteron 6-core parts are already priced from under $500 (http://it.anandtech.com/IT/showdoc.aspx?i=3571&p=3), but that's for the low-end 2.2GHz part.
Certainly it might be expected that at least one six-core desktop chip would come in under $500, assuming they launch multiple SKUs, unlike Intel, who are only producing one six-core to sit at the very high end (since they can already do 8 threads with <$300 parts).
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Heya,

It's interesting that both Intel and AMD really are pushing more cores, more cores, more cores. Nothing mainstream has caught up to it yet. I understand it from a server/enterprise perspective. But from a desktop perspective, it's weird. People have quads and have no idea that they're not doing anything more than a dual would for them at home in their little premades. Good marketing, chaps.

Very best, :p

Yep. I would rather see Intel and AMD increase core size and IPC than increase core count.

However, I'll bet redesigning chips for increased IPC is much more complicated than just adding a few more cores after a die shrink.
 

manimal

Lifer
Mar 30, 2007
13,559
8
0
Am I the only one who would like to see the NetBurst architecture modernized and clocked at 10GHz!!!

Then when I overclocked it I could say my PC is turned up to 11~~~~!
 

Cattykit

Senior member
Nov 3, 2009
521
0
0
Heya,

It's interesting that both Intel and AMD really are pushing more cores, more cores, more cores. Nothing mainstream has caught up to it yet. I understand it from a server/enterprise perspective. But from a desktop perspective, it's weird. People have quads and have no idea that they're not doing anything more than a dual would for them at home in their little premades. Good marketing, chaps.

Very best, :p

But, as more people encode media files for their portable devices, it's great.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Bigger and slower clock speed cores would work for me.
If you want slower CPUs just buy one from 2006?

@manimal: No thanks, "modernizing" would mean getting a 40+ stage pipeline and a 200+W TDP, right? I think I'll pass.


I'd say the best performance gain would be a leaner instruction set and not the abomination we have now, but we all know that that won't happen.. backwards compatibility is too important.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Yep. I would rather see Intel and AMD increase core size and IPC than increase core count.

However, I'll bet redesigning chips for increased IPC is much more complicated than just adding a few more cores after a die shrink.

You'd bet right there, especially with all the effort that went into the 'glue' logic on AMD's and Intel's chips.

Computers are by their nature multi-tasking machines though, and Windows happens to be kind of bad at it, so there's use for the extra cores, even if it's just to make up for problems in Windows' task scheduler. Windows LOVES to lock a process to a core, and processes often wait for I/O or for some other task to complete, so the extra cores are useful just so Windows doesn't deadlock itself. (Linux doesn't have the same problem so much, since it has a more advanced, but possibly slower, task scheduler.)
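
For anyone who wants to poke at the pinning behaviour themselves, here's a rough illustration using the third-party psutil Python library (just my choice of tool, nothing the scheduler itself uses; under the hood it wraps SetProcessAffinityMask on Windows and sched_setaffinity on Linux):

```python
# Query and change which cores the current process may run on.
# Works on Windows and Linux; affinity isn't supported on macOS.
import psutil

me = psutil.Process()                       # the current process
print("allowed cores:", me.cpu_affinity())  # e.g. [0, 1, 2, 3, 4, 5]

me.cpu_affinity([0])                        # pin to core 0 only ("locked" to a core)
print("now pinned to:", me.cpu_affinity())

me.cpu_affinity(list(range(psutil.cpu_count())))  # restore: any core may run us
```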

You also probably have or will soon have at least a few apps that make use of quad cores, so if you can afford the extra initial cash, why not?
Six cores are almost certainly overdoing it though, unless you do some kind of professional work (image or video processing perhaps, which will soon move onto the GPU anyway), but even then, more cores without more memory seems like it might leave an unbalanced system.

I'd say the best performance gain would be a leaner instruction set and not the abomination we have now, but we all know that that won't happen.. backwards compatibility is too important.

I don't know about that. Atom would certainly benefit, but I think the out of order execution and all the code morphing stuff that modern x86 processors do eliminates most of the handicap for common tasks. Of course, if x86 processors didn't have to do all that, then we could have more atom like designs of good enough performance and the transistors and power budget could be spent on more cores, higher clock speeds, more cache, wider SIMD units, etc, allowing the cpu to be better at more specialized tasks, even if it's not as good at things current cpus are beastly at.
 

JFAMD

Senior member
May 16, 2009
565
0
0
The transition to cores today is akin to the transition to 32-bit in the mid-90's. At the time there were a lot of people talking about how 16-bit apps were good enough for them and didn't see the need to go to 32-bit. Where are they now? All comfortably running 32-bit apps. Eventually they get over the hump.

With certain limitations in design you just don't get to the 10GHz that people want. The real limitation (on the client side) has more to do with the OS than anything else. More cores are a great solution if your OS can take advantage of them. Pop into Task Manager: I have one program running (IE8) and 50 processes going (Win7). There is a need for multiple cores; there is always a lot going on in the PC. But often you don't see the benefit because the OS scheduler might not be doing a great job of handling the cores efficiently.
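
If you'd rather script it than eyeball Task Manager, a quick sketch with the psutil Python library (just one possible tool, not something built into the OS) paints the same picture:

```python
# Count how many processes and threads the OS is juggling right now.
import psutil

procs = list(psutil.process_iter(attrs=["num_threads"]))
threads = sum(p.info["num_threads"] or 0 for p in procs)  # None where access is denied
print(f"{len(procs)} processes / {threads} threads on {psutil.cpu_count()} logical cores")
```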

However, in my case, all the cores in the world won't help right now because I am typing in a forum. But when I start doing real work, the notebook's 2 cores become strained. It all depends on what you are doing.
 

RaistlinZ

Diamond Member
Oct 15, 2001
7,470
9
91
Anyone else think we've been going about faster computing completely backwards for the past 20 years?

It's like trying to get from point A to point B on a winding, bumpy road. Traditionally, we've been putting larger engines under the hood and bigger wheels on the frame - hoping that more power will let us plow our way to our destination faster.

What we should have been doing though is building a better road - a smarter way of computing. I think that will be the next revolutionary step to get us off this constant merry-go-round of upgrading for raw speed year after year.

I have a concept in mind, but I really need to flesh it out and get it on paper - I bet there are patents on it though.
 
Dec 30, 2004
12,553
2
76
The transition to cores today is akin to the transition to 32-bit in the mid-90's. At the time there were a lot of people talking about how 16-bit apps were good enough for them and didn't see the need to go to 32-bit. Where are they now? All comfortably running 32-bit apps. Eventually they get over the hump.

With certain limitations in design you just don't get to the 10GHz that people want. The real limitation (on the client side) has more to do with the OS than anything else. More cores are a great solution if your OS can take advantage of them. Pop into Task Manager: I have one program running (IE8) and 50 processes going (Win7). There is a need for multiple cores; there is always a lot going on in the PC. But often you don't see the benefit because the OS scheduler might not be doing a great job of handling the cores efficiently.

However, in my case, all the cores in the world won't help right now because I am typing in a forum. But when I start doing real work, the notebook's 2 cores become strained. It all depends on what you are doing.

Um, no, it's entirely different. The 16-to-32-bit move was as simple as changing some compiler settings and reworking your codebase to be 32-bit.
Going multicore with your code necessitates a complete rework of practically the entire architecture and framework of your program.
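
Even a trivial case hints at the difference. A toy sketch (purely illustrative; real applications have shared state, locks and ordering constraints that make this far messier):

```python
# Serial vs. multicore version of the same work. The serial code is the kind
# you could "just recompile"; the parallel version already forces decisions
# about how work is split up and how results come back.
from concurrent.futures import ProcessPoolExecutor

def heavy(n):
    return sum(i * i for i in range(n))

def serial(jobs):
    return [heavy(n) for n in jobs]

def parallel(jobs, workers=6):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(heavy, jobs))

if __name__ == "__main__":   # guard required for process pools on Windows
    jobs = [200_000] * 12
    assert serial(jobs) == parallel(jobs)
```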
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
I don't know about that. Atom would certainly benefit, but I think the out of order execution and all the code morphing stuff that modern x86 processors do eliminates most of the handicap for common tasks. Of course, if x86 processors didn't have to do all that, then we could have more atom like designs of good enough performance and the transistors and power budget could be spent on more cores, higher clock speeds, more cache, wider SIMD units, etc, allowing the cpu to be better at more specialized tasks, even if it's not as good at things current cpus are beastly at.
Well, out-of-order execution may help, but just think about how IA-32 instructions are encoded:
up to four 1-byte optional prefixes, a 1-3 byte opcode, a ModR/M byte, a SIB byte, a 1-4 byte displacement, and a 1-4 byte immediate. Theoretically you could construct 17-byte instructions (though I think the longest legal instruction is 15 bytes).
We have bits to distinguish 16-bit from 32-bit operations (there were no opcodes left for the 32-bit operations), there are different opcodes for instructions that differ only in the registers used, and many other really funny things.
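
Just adding up the maximum size of each field:

```python
# Worst-case IA-32 instruction length from the fields above.
fields = {
    "prefixes":     4,  # up to four 1-byte optional prefixes
    "opcode":       3,  # 1-3 bytes
    "ModR/M":       1,
    "SIB":          1,
    "displacement": 4,  # 1-4 bytes
    "immediate":    4,  # 1-4 bytes
}
print(sum(fields.values()))  # 17, though the longest legal instruction is capped at 15 bytes
```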

The decoding alone can be a bottleneck in some situations, and you need larger execution units to support all those instructions (even if you split them up into smaller ones).


Compare that to a RISC design like MIPS (only 32-bit instructions and only 3 different instruction types) and you see the advantages at first glance. x86 has its advantages, but all in all it's just a giant hodgepodge of instructions that must be kept for backwards compatibility, or stuff that gains a tiny fraction of the users some performance.


PS: And depending on the program it's rather hard to write multi-threaded code; that's completely different from the 16/32/64-bit transitions.. if you stuck to the documentation, it was not a lot of work (I knew one guy who ported a driver he'd written to 64-bit.. and by "ported" I mean he basically just recompiled it - I learned a lot from him).
There are some interesting ideas (transactional memory, ..) but at the moment it's still a lot of low-level work.. and finding bugs in concurrent programs is probably one of the most horrible things of all.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
If you want slower CPUs just buy one from 2006?

Yeah, but those cores aren't larger. They are just the same size (optically) and slower.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
With certain limitations in design you just don't get to the 10GHz that people want. The real limitation (on the client side) has more to do with the OS than anything else. More cores are a great solution if your OS can take advantage of them. Pop into Task Manager: I have one program running (IE8) and 50 processes going (Win7). There is a need for multiple cores; there is always a lot going on in the PC. But often you don't see the benefit because the OS scheduler might not be doing a great job of handling the cores efficiently.

Do we really need small cores doing 10GHz? Why not just increase the size of the cores? Larger cores should be capable of higher IPC, right?
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Yeah, but those cores aren't larger. They are just the same size (optically) and slower.

It may not be possible to make cores with much higher IPC. Unless they did what Voo said and switched instruction sets (maybe to a VLIW instruction set), I doubt you can squeeze much more implicit parallelism out of x86. (And x86 may have more implicit parallelism than just about any architecture, considering how many instructions do multiple things.)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
What we should have been doing though is building a better road - a smarter way of computing. I think that will be the next revolutionary step to get us off this constant merry-go-round of upgrading for raw speed year after year.

When Conroe was released back in 2006, it seemed the entry-level chips were purposely clocked low.

Why they did this, I don't know. Maybe it was because the IPC of Core 2 was already strong enough to beat AMD. Intel also probably wanted to keep some extra speed in reserve so they wouldn't be competing against themselves in the future.

Therefore, I don't think we will see another massive core redesign from Intel until either core speeds approach the point of diminishing returns and/or AMD comes out with something vastly superior from an IPC standpoint.

This makes me believe we may see smaller/budget mainboards proliferate (especially if a competing, very low-cost OS is able to make its way into enough machines).
 