
New Data for Piledriver

It's not "bad" -- 32nm is quite modern, and it looks like Intel's 22nm process is so immature that it sucks for desktop chips...

Would the jump from 32nm to the TSMC 28nm node make a noticeable difference in performance? A 4nm shrink doesn't seem like a worthy jump to me anyhow.
 
It's definitely not worth the cost; their process nodes are different enough that a lot of engineering work is required. Node alignment between TSMC and GF may come at 20nm, though I believe that's when GF is going gate-last.
 
I won't be surprised if GF's 32nm is better than TSMC's 28nm... when both are mature

(or even Intel's 22nm, but that would be surprising)
 
I need something to drop into this ASRock 990FX; my 1090T (4GHz) is showing its age. I like the route AMD is going. Their version of tick-tock is:

APU with new CPU revisions (sometimes including a die shrink) --> desktop-optimized.
 
Roughly equivalent to an i7-970 with an integrated Radeon 6570 at a 100W TDP; not too shabby if true.

I thought the IGP was already close to the performance of a 6570, and Trinity was supposed to be something like a 40% improvement over that. Maybe I was incorrect.

If it is only equal to a 6570, that is really not quite there yet for me in a desktop. Maybe OK for a laptop, but it is just so easy to put a discrete card in a desktop that will walk all over a 6570.
 
I think it should be around 6670 levels of performance. AMD claimed to have bumped up graphical performance over Llano by >50%. Though that's almost certainly in synthetic benchmarks, it should still translate to a heck of an increase in gaming. It's got fewer shaders, but the architecture is more efficient and the clocks are higher: VLIW4 at 800MHz. The 6670 is 768 GFLOPS and the Trinity APU is supposedly around that 700 mark as well.

I don't think it's going to replace a discrete card at 1080p, but for people playing at 1680x1050 and lower it would be enough. There's also the Crossfire performance: if the rumor is accurate, you can Crossfire Trinity with cards of a different architecture. Now if only they're on time with their Crossfire profiles 🙁
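The GFLOPS figures above can be sanity-checked with the usual peak-throughput formula (a rough sketch; the 6670's specs are public, but the Trinity shader count and clock used here are the rumored ones, not confirmed):

```python
# Rough peak-throughput check: for these VLIW parts,
# peak GFLOPS = shaders * 2 FLOPs/cycle (one MAD) * clock in GHz.
def peak_gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz

print(peak_gflops(480, 0.80))  # HD 6670: 480 shaders @ 800MHz -> 768.0
# Trinity's shader count and clock are only rumors at this point;
# 384 VLIW4 shaders somewhere in the 800-900MHz range would give:
print(peak_gflops(384, 0.80))
print(peak_gflops(384, 0.90))
```

So the rumored configuration does land in the rough neighborhood of the "700 mark" quoted above, a bit under the 6670's 768.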
 
Going to be starved by the shared DDR3 bandwidth though. Best case, I think it will be just a bit slower than the DDR3 model of the 6670; anyone have a review link for that (not the GDDR5 version)?
 
I thought the IGP was already close to the performance of a 6570, and Trinity was supposed to be something like a 40% improvement over that. Maybe I was incorrect.

If it is only equal to a 6570, that is really not quite there yet for me in a desktop. Maybe OK for a laptop, but it is just so easy to put a discrete card in a desktop that will walk all over a 6570.

The 6550D is around 10-20% slower than the 5570, which in turn is around 40% slower than the 5670. The 6570/6670 are a different chip (Turks, not Redwood) and perform slightly better than their previous-gen counterparts.
 
I really look forward to Trinity; my Llano rig serves me well at 3.5GHz with an 850MHz GPU core.

Trinity will be perfect for mid-range builds.
 
I'm looking forward to Trinity-based net-tops with Fusion setups. I'll get a few of those for myself and family members.

I won't be looking to upgrade my main rig until Intel has Haswell at the earliest. Mature 22nm process or go home.
 
I'm planning to build a thin mini-ITX PC to use as a combined gaming and HTPC box.

The cases I want won't accommodate a discrete card, so Trinity looks to be my saviour.
 
It's not "bad" -- 32nm is quite modern, and it looks like Intel's 22nm process is so immature that it sucks for desktop chips...

I don't think it's necessarily that. Since they've said they're changing the optimization point from 35W to 17W or less, that could have impacted top-line performance. A circuit designed for one point doesn't always work better at another, and the same might be true for the process. The 32nm circuits and/or process might have been optimized for 10nA/um and 100nA/um leakage currents, while at 22nm they might have moved to 1nA/um and 10nA/um. In that case, a 22nm chip with 100nA/um-leakage transistors might not be as optimal as it was at 32nm.
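To put those per-micron leakage targets in scale, here's a purely illustrative static-power estimate; the total gate width and supply voltage below are made-up assumptions for a hypothetical chip, not actual Intel figures:

```python
# Illustrative only: how a per-um leakage-current target translates to
# static power. Static power ~= I_leak * total transistor width * Vdd.
def leakage_watts(i_leak_na_per_um, total_width_m, vdd):
    # i_leak in nA/um, total gate width in meters, Vdd in volts
    width_um = total_width_m * 1e6
    return i_leak_na_per_um * 1e-9 * width_um * vdd

# Same hypothetical chip (100 m of total gate width, 1.0 V supply)
# built from transistors at two different leakage design points:
print(leakage_watts(100, 100, 1.0))  # 100 nA/um "performance" target -> 10 W
print(leakage_watts(10, 100, 1.0))   # 10 nA/um "low power" target   -> 1 W
```

An order of magnitude in leakage current is an order of magnitude in idle power, which is why a process re-targeted at 17W parts might give up something at the 77W desktop end.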

If you want real-world examples: they are claiming there will be greater gains for the lower-power 3770S SKU than for the standard 3770K versus the Sandy Bridge generation. More gain for the 3550S than the 3550, and the same for the 3450S and 3450.

Same for the mobile chips. The top 3920XM will probably end up only 5-15% faster than the 2960XM, but the lower-clocked 3720QM is 15-20% faster than the 2760QM.

Intel does seem to be having problems with Ivy Bridge; otherwise we would have seen it in retail already. So things can still change with the final retail steppings. But I don't think the issue is quite so simple.

Besides, even among us enthusiasts, how many of us already say a faster CPU isn't needed anymore?
 
Would the jump from 32nm to the TSMC 28nm node make a noticeable difference in performance? 4nm doesn't seem like a worthy jump to me anyhow.

I don't think there's any performance gain. The 32nm process used in current AMD processors is performance-optimized. If the 28nm process AMD is moving to is at all related to the 28nm process used by ARM chips and GPUs, it might even end up being slower.

What 28nm offers seems to be roughly a 10% gain in density. That allows AMD to bring a Trinity-like graphics gain without a full node change. Moving from Llano to Trinity, AMD gained space for better graphics using a slightly bigger die and a smaller CPU core; moving to 28nm would afford similar gains yet again.
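For contrast, the textbook area scaling for a 32nm-to-28nm move would be larger than that 10% figure; a quick sketch (idealized geometry only, since real layouts don't shrink every feature linearly):

```python
# Idealized density gain for a 32nm -> 28nm half-node move.
linear_shrink = 28 / 32
ideal_area = linear_shrink ** 2          # ~0.77x die area if everything scaled
ideal_density_gain = 1 / ideal_area - 1  # extra transistors per mm^2, in theory

print(round(ideal_density_gain * 100))   # ~31% ideal, vs the ~10% quoted above
```

The gap between the ideal ~31% and the ~10% actually on offer is typical of a half-node: only some layers and design rules shrink.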
 
 
Doesn't it seem like the mobile chips are benefiting the most from the architectural tweaks AMD has made with Piledriver, along with that licensed RCM tech? Supposedly there were poor returns from RCM past 4GHz. If so, it may be that the mobile parts are awesome (Trinity > Llano) whereas the desktop chips are in that merely "good" area.

Every supposed leak shows the Trinity mobile chips spanking equivalent-TDP Llanos, but the gains don't seem to scale up as well. Compare that 5800K to the Llano 3870, and the 4600M to the 3500M.
 
There is going to be an AM3+ version of PD/Trinity too, right? But without the IGP?

Will they come out in 8-core versions? Would it be worthwhile to upgrade from my 1045T @ 3.51GHz? (Running primarily distributed computing: PrimeGrid, WCG, etc., on BOINC. Gaming is a secondary consideration.)
 
There is going to be an AM3+ version of PD/Trinity too, right? But without the IGP?

Yeah, of course, but that's coming in Q3 as a direct successor to the FX chips. Its codename is "Vishera".

Every supposed leak shows the Trinity mobile chips spanking equivalent-TDP Llanos but the gains don't seem to scale up as well. Compare that 5800K to the Llano 3870 and the 4600m to the 3500m.
Before rushing to judgment, note that there are much better A8-3500M scores out there. Still, I believe AMD's claim of a 25% CPU gain over Llano is fairly accurate. The Trinity versions clock much higher, more than compensating for any loss in performance per clock. In Llano, multi-core Turbo Core doesn't really work, so Trinity, with better Turbo Core and higher default clocks, will see significant clock-speed gains.

In desktops, Llano isn't clocked so low, so the gain for Trinity is much smaller.

That seems to be uniform across segments though, whether Intel or AMD, graphics or CPU. The gains at the high end look to be "ehh".
 