Thoughts, Rumors, or Specs of AMD fx series steamroller cpu

Page 2 - AnandTech Forums

TakeNoPrisoners

Platinum Member
Jun 3, 2011
2,599
1
81
Who cares about AMD? They're going the moar cores route, which is just destined to fail.

I predict stupidly high power consumption and performance that flops.
 

Arzachel

Senior member
Apr 7, 2011
903
76
91
So what, OpenCL apps went from 2 to 3? :hmm:

CUDA/OpenCL/DirectCompute will never go much beyond where it is now. CUDA is the most advanced of them and has been out for ages now. How many CUDA apps are there, and for what?

Apples to interstellar particle fields. OpenCL already has a pretty big install base; it's an open standard, not limited to the x86 world, AND compatible with AVX2.

Who cares about AMD? They're going the moar cores route, which is just destined to fail.

I predict stupidly high power consumption and performance that flops.

Yeah, just like Trinity! Wait...
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
So what, OpenCL apps went from 2 to 3? :hmm:

CUDA/OpenCL/DirectCompute will never go much beyond where it is now. CUDA is the most advanced of them and has been out for ages now. How many CUDA apps are there, and for what?

I wouldn't dismiss it so quickly. Unlike CUDA, which is proprietary and built around HPC applications, OpenCL was developed as a universal platform that all hardware/software can benefit from. Basically, if you've got a chip that supports it, it can benefit :p meaning everything from the ARM A5 in your phone to the 3770K in your desktop.

AMD isn't the driving force here, but you're right in that if it were just AMD, then HSA/OpenCL would have been dead before it left the ground. Historically, AMD has never worked well with developers. Thankfully, Apple is behind it full force, and there's already a slew of supported applications, something that took CUDA years by comparison. As far as expansion rate goes, OpenCL is taking off far faster than CUDA did, mainly thanks to the companies backing it.

Apple wants small/slim/sexy PCs and the most computing power they can possibly squeeze out of that small form factor. To do that, you've got to get every piece of hardware capable of number crunching to chip in and help out. Considering GPUs have gotten bigger and more powerful over the years, using more and more valuable die space, OpenCL and GPGPU make a lot of sense. It's the same argument that counters AtenRa's "moar coars" garbage: why let hardware sit idle when you can use it to help? That's the point behind this.
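For what it's worth, the portability pitch boils down to this: the kernel is written once in OpenCL C and compiled at runtime for whatever device shows up. A minimal sketch, where the kernel string is standard OpenCL C and the Python function is just a CPU reference standing in for a device (no OpenCL runtime is assumed here):

```python
# An OpenCL kernel is written once in OpenCL C and compiled at runtime
# for whatever device is present (a phone SoC's GPU, a desktop GPU, a CPU).
SAXPY_KERNEL = """
__kernel void saxpy(const float a,
                    __global const float *x,
                    __global float *y)
{
    int i = get_global_id(0);   /* this work-item's index */
    y[i] = a * x[i] + y[i];
}
"""

def saxpy_reference(a, x, y):
    """Pure-Python reference for the kernel above: each loop iteration
    plays the role of one OpenCL work-item."""
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy_reference(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0]))
# [12.0, 14.0, 16.0]
```

On real hardware every work-item runs concurrently; the loop is only a stand-in for that.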
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
The problem is that the things you can use CUDA/DXC/OpenCL for are by nature very limited. Yet some seem to think they can basically be used universally.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
The problem is that the things you can use CUDA/DXC/OpenCL for are by nature very limited.

It's growing, and at a faster rate than you think. If this were an AMD project then, yeah, it would never go anywhere. Not with their market share and not with their CPUs, or lack thereof in the mobile sector. It's not, though. It's a joint initiative by the mobile crowd that benefits them greatly.

Essentially, any floating point task can be done via GPGPU. How much it helps and the exact ISA differ, but that was the point of OpenCL in the first place -- bypass the layers beneath it and compile a single way to leverage GPGPU regardless of platform.

I don't think it can be used universally, nor have I ever said that. It's limited (derp), but it can help tremendously, especially in applications that do lots of visual trickery and rely heavily on the user interface to provide a better experience, aka graphics.
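The "limited (derp)" caveat is easy to illustrate: independent element-wise FP work maps cleanly onto the GPGPU work-item model, while a serial recurrence does not, because each step needs the previous result. A toy sketch (illustrative values only):

```python
# Data-parallel FP work maps well to GPGPU: every element is independent,
# so each could be its own work-item.
def elementwise_square(xs):
    return [x * x for x in xs]

# A serial recurrence does not: step i needs the result of step i-1,
# so the "work-items" cannot all run at once.
def recurrence(xs):
    out = [xs[0]]
    for x in xs[1:]:
        out.append(0.5 * out[-1] + x)   # depends on the previous result
    return out

print(elementwise_square([1.0, 2.0, 3.0]))  # [1.0, 4.0, 9.0]
print(recurrence([1.0, 1.0, 1.0]))          # [1.0, 1.5, 1.75]
```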
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
It's growing, and at a faster rate than you think.

It is?

Here is a little task then: list the OpenCL applications. Don't worry, it's going to be a short list ;) And that's after 3 years of OpenCL. OpenCL also has mixed support: Microsoft prefers DirectCompute via DX; nVidia prefers CUDA and even does OpenCL via CUDA.

And no, not just ANY FP can be done via GPGPU.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
And no, not just ANY FP can be done via GPGPU.

Not any FP code, no, but that's limited by the architectures and ISAs. The amount of code that can run via GPGPU varies. You either forgot or neglected that I said this:

It's limited (derp)

Adobe (the full line, from flash to media converter), GIMP, Handbrake, Chrome, Firefox, Opera, ArcSoft, Sony Vegas, Cyberlink, Corel and there's more.

CUDA took significantly longer to get that kind of support, and some of the support it did get came from obscure developers who only wanted nVidia hardware/money and have since disappeared. It's far easier to draw attention to OpenCL when the world's biggest and most profitable company is behind it full force and it's not proprietary. nVidia is going to prefer CUDA, as that's their last bastion of survival in the market: HPC. Without CUDA, nVidia would make no money. Even so, nVidia supports OpenCL on its products.

As for Microsoft, I think we both know where that ship is headed :p
 

Don Karnage

Platinum Member
Oct 11, 2011
2,865
0
0
If Piledriver can increase IPC by 10-15% and still clock to around 5GHz, it should be a decent chip.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Not any FP code, no, but that's limited by the architectures and ISAs. The amount of code that can run via GPGPU varies. You either forgot or neglected that I said this:



Adobe (the full line, from flash to media converter), GIMP, Handbrake, Chrome, Firefox, Opera, ArcSoft, Sony Vegas, Cyberlink, Corel and there's more.

CUDA took significantly longer to get that kind of support, and some of the support it did get came from obscure developers who only wanted nVidia hardware/money and have since disappeared. It's far easier to draw attention to OpenCL when the world's biggest and most profitable company is behind it full force and it's not proprietary. nVidia is going to prefer CUDA, as that's their last bastion of survival in the market: HPC. Without CUDA, nVidia would make no money. Even so, nVidia supports OpenCL on its products.

As for Microsoft, I think we both know where that ship is headed :p

Flash doesn't use CUDA/DXC/OpenCL, does it? Neither do a few of the others on your list ;)

To make GPGPU more broadly useful, you would penalize the GPU so much that the performance delta would be gone.
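That tradeoff can be sketched with a toy Amdahl-style offload model; all the fractions and speedups below are made-up illustrative values:

```python
# Toy offload model: a task is only worth sending to the GPU if the
# speedup on the parallel part outweighs the serial part plus the
# fixed transfer/launch overhead.
def offload_speedup(parallel_frac, gpu_speedup, overhead_frac):
    """Overall speedup vs. CPU-only, Amdahl-style.

    parallel_frac: fraction of runtime that can run on the GPU
    gpu_speedup:   how much faster that fraction runs on the GPU
    overhead_frac: transfer/launch cost as a fraction of CPU runtime
    """
    serial = 1.0 - parallel_frac
    return 1.0 / (serial + parallel_frac / gpu_speedup + overhead_frac)

# Mostly-parallel task: big win.
print(round(offload_speedup(0.9, 10.0, 0.05), 2))   # 4.17
# Mostly-serial task: the delta is basically gone.
print(round(offload_speedup(0.3, 10.0, 0.05), 2))   # 1.28
```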
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
If you're referring to WebCL, then that's just nitpicking =P WebCL is just a web-based version from Khronos, the same people who brought us OpenCL.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
If you're referring to WebCL, then that's just nitpicking =P WebCL is just a web-based version from Khronos, the same people who brought us OpenCL.

Are you sure you're not thinking of WebGL in those apps? That runs via OpenGL.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
Intel might just buy AMD by 2014 if this next one is a flop.
Your knowledge of business is appalling.

As far as Steamroller goes, it will probably support DDR4, be built on a 28nm process, and make some significant improvements over Bulldozer. Still, Haswell seems quite intimidating, and I highly doubt we can hope for a situation better than PhII vs. Nehalem. AMD can't hope for anything other than being the budget alternative, at this point.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
As far as Steamroller goes, it will probably support DDR4, be built on a 28nm process, and make some significant improvements over Bulldozer. Still, Haswell seems quite intimidating, and I highly doubt we can hope for a situation better than PhII vs. Nehalem. AMD can't hope for anything other than being the budget alternative, at this point.

Seriously, a half node shrink? Where did you read/hear that?
 

Makaveli

Diamond Member
Feb 8, 2002
4,975
1,571
136
If piledriver can increase IPC by 10-15% and still clock to around 5Ghz it should be a decent chip.

It would have to do this and drop power consumption down by 100 watts minimum for me to even consider it.

Have you seen how much power BD uses at 4.8GHz at full load?
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
Seriously, a half node shrink? Where did you read/hear that?

http://phx.corporate-ir.net/External.File?item=UGFyZW50SUQ9MTI1Mjk1fENoaWxkSUQ9LTF8VHlwZT0z&t=1


Source Link: http://ir.amd.com/phoenix.zhtml?c=74093&p=irol-2012analystday

Edit:

It'll be interesting to see how much performance Steamroller adds, given that Piledriver will still be the high end in 2013. Of course, Intel has been doing this too lately. Very interested in seeing how different performance per clock will be between Trinity and Vishera. Hopefully the delta is on the order of 5~10%. Otherwise it's pretty clear they're killing off their high-end platform in the most painful (for us) way possible...
 
Last edited:

Don Karnage

Platinum Member
Oct 11, 2011
2,865
0
0
It would have to do this and drop power consumption down by 100 watts minimum for me to even consider it.

Have you seen how much power BD uses at 4.8Ghz at full load?

Too much, but if prices are low it wouldn't matter to me.
 

blckgrffn

Diamond Member
May 1, 2003
9,686
4,345
136
www.teamjuchems.com
It is?

Here is a little task then: list the OpenCL applications. Don't worry, it's going to be a short list ;) And that's after 3 years of OpenCL. OpenCL also has mixed support: Microsoft prefers DirectCompute via DX; nVidia prefers CUDA and even does OpenCL via CUDA.


And no, not just ANY FP can be done via GPGPU.

Ones I use daily:

F@H
POEM@Home
Milkyway@Home
Collatz Conjecture

Evidently more and more DC projects are gaining OpenCL support; there is one coming for WCG and one for Einstein@Home, which I have not tried.

I don't see any DirectCompute at all, or any new CUDA apps released this year, whereas there have been 3-4 on the OpenCL side.

On the DC side, it appears that OpenCL adoption is certainly accelerating, even as the project admins complain about how clunky it is :p

https://secure.worldcommunitygrid.org/forums/wcg/viewthread?thread=32687#366756

OpenCL/GPU compute is a crazy boon for science. Thank goodness there are some smart people out there.

One I will put a GPU in my WHS just to take advantage of:

Handbrake (beta - this is freaking huge IMHO)

An army of apps is not needed. The right ones are.
 
Last edited:

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
http://phx.corporate-ir.net/External.File?item=UGFyZW50SUQ9MTI1Mjk1fENoaWxkSUQ9LTF8VHlwZT0z&t=1


Source Link: http://ir.amd.com/phoenix.zhtml?c=74093&p=irol-2012analystday

Edit:

It'll be interesting to see how much performance Steamroller adds, given that Piledriver will still be the high end in 2013. Of course, Intel has been doing this too lately. Very interested in seeing how different performance per clock will be between Trinity and Vishera. Hopefully the delta is on the order of 5~10%. Otherwise it's pretty clear they're killing off their high-end platform in the most painful (for us) way possible...

Hmm, so the APU version of Steamroller will be out on 28nm; I guess that's the way it had to be. Last I saw, Piledriver on the desktop was all AMD showed for 2013. I wonder if Steamroller will come out on 20nm (which is supposed to be ready 4Q13?).
 

Cpus

Senior member
Apr 20, 2012
345
0
0
FX Steamroller is supposed to be out in early 2014. I personally think it will be on 28nm. I don't think it will be until mid/late 2015, which is when FX Excavator comes out, that we'll see an AMD processor on 20nm. But I hope FX Steamroller will be on 20nm. Just wondering: when AMD said FX Steamroller is going to be a major architecture update, does that mean they are getting rid of the modules?
 

Charles Kozierok

Elite Member
May 14, 2012
6,762
1
0
Just remember that whatever it is, it will be late.

I've read about all of AMD's chips from the K6 to Bulldozer, and I don't think a single one came out anywhere near when AMD originally planned it.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Microsoft prefers DirectCompute via DX. nVidia prefers CUDA and even does OpenCL via CUDA.
If languages other than very basic ones like Brook had existed when they were coming up with CUDA, they'd probably have developed a compiler that wasn't tied so much to one input language, but that wasn't the case, so this is how it has turned out. Source-to-source is a perfectly good thing to do when you already have a good source-to-assembly or source-to-IR compiler and the language being converted has only a subset of the features of the target (it's quite common for scripting languages, converters for now-defunct business languages, and functional languages).

NV doing it with CUDA is neither good nor bad; it just shows they made the CUDA implementation a high priority. If they can convert from OpenCL to CUDA, optimize a bit in the process, then compile good code from CUDA, why skip the CUDA stage? If there were significant downsides to it, sure, but I can't think of many, and I'm sure NV has some workarounds for corner cases as part of the conversion.
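To see why OpenCL-via-CUDA is plausible, here is a deliberately naive sketch of how close the two kernel dialects are for trivial kernels. A real translator does vastly more than keyword substitution; the mapping table here is illustrative only:

```python
# Toy mapping between OpenCL C and CUDA C kernel syntax for simple kernels.
OPENCL_TO_CUDA = {
    "__kernel": "__global__",
    "__global ": "",  # CUDA device-memory pointer args need no qualifier
    "get_global_id(0)": "blockIdx.x * blockDim.x + threadIdx.x",
}

def translate(opencl_src):
    """Naive textual OpenCL-to-CUDA translation (illustration only)."""
    for ocl, cuda in OPENCL_TO_CUDA.items():
        opencl_src = opencl_src.replace(ocl, cuda)
    return opencl_src

kernel = "__kernel void add(__global float *a) { int i = get_global_id(0); a[i] += 1.0f; }"
print(translate(kernel))
# __global__ void add(float *a) { int i = blockIdx.x * blockDim.x + threadIdx.x; a[i] += 1.0f; }
```

The hard parts a real implementation must handle (address spaces, work-group barriers, images, the runtime API) are exactly where NV's corner-case workarounds would come in.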

Just remember that whatever it is, it will be late.

I've read about all of AMD's chips from the K6 to Bulldozer, and I don't think a single one came out terribly near when AMD originally planned it.
Nope, not a one.

But when they have a nice chip, it still works out, even years late (K8). If they keep up improvements like Trinity, fix up the caches for desktop/mobile (4-way L1I, bigger L1D, maybe a uop cache, etc.), and reduce some major latencies (div and mispredict, FI), it ought to do OK. Today and in the future, they're stuck competing for good enough.
 
Last edited:

Makaveli

Diamond Member
Feb 8, 2002
4,975
1,571
136
Too much but if prices are low it wouldn't matter to me

I'm curious why you would even look at it. Whatever improvements they make won't help it catch up to the Ivy Bridge you are running.

Are you planning a second machine for a dedicated purpose?