AMD summit today; Kaveri cuts out the middle man in Trinity.

Status
Not open for further replies.

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
@Sef

HMC is still far away, isn't it? I have no doubt it's the future though.
Samsung + Micron are leading it, with IBM helping (along with a ton of others supporting it).

Yes, I saw they have working 128 GB/s bandwidth prototypes, which would fix the memory bandwidth issues IGPs have.

However, short term? Quad-channel DDR3 is probably the best/easiest/fastest/cheapest solution.

--------------------------------------------

@Lepton87

Intel has X79 motherboards with quad-channel memory.
The cheapest one I could find was ~$180 (ouchies).

So you're right, the cost factor for something like that is probably a lot higher than $5.

Meanwhile, the cheapest 1155 motherboard I could find was around $50.

However, I refuse to believe that's all because of memory lanes.
It's about quantity: when you make a small number of units of something that takes a lot of R&D,
you have to keep a higher price point to recoup the R&D costs.

I suspect if quad channel became the norm, it wouldn't add nearly that much to the total system cost.


 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Yeah, I wouldn't bet on a new memory type. We tried once and the DRAM cartel flexed its antitrust muscles.

Adding more channels is not really cost effective. Also, it's quite silly that today we need at least two 64-bit DIMMs.

So like last time, perhaps the next step is 128- or 256-bit DIMMs. And if we're lucky, with a serial interface.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
Rambus failed not because the "cartel flexed antitrust muscles", but because of outrageous prices.

Back then memory was a lot more expensive than it is today and made up a more significant percentage of a system's total cost than it does now. Despite this, Rambus RAM cost something like 4-5 times as much as DDR RAM did.

It was the fastest RAM technology at that point in time, yet it's not around today. Why? Because consumers didn't think the advantage was worth paying 5 times as much for; greed killed it.


The memory cube is going to succeed where Rambus RAM failed, because there is a "need" for it now, unlike then (it's all benefit vs. cost, and back then that wasn't a good ratio because the benefits were small).

That said... it's going to be a long time until commercial memory cubes are selling in low-end markets
(it's that way with all new tech).

I guess meanwhile we'll just have to hope the people behind DDR4 have their arses working overtime to spit out some DDR4-4233 (in dual channel, that's enough memory bandwidth for now).

It's just sad that that probably isn't going to happen until 2014.

from wiki:
RDRAM was initially expected to become the standard in PC memory, especially after Intel agreed to license the Rambus technology for use with its future chipsets. Further, RDRAM was expected to become a standard for VRAM. However, RDRAM got embroiled in a standards war with an alternative technology - DDR SDRAM, quickly losing out on grounds of price, and, later on, performance. By the early 2000s, RDRAM was no longer supported by any mainstream computing architecture.

RDRAM cost too much because Rambus were greedy and demanded too much in royalties, and the tech was more expensive to produce... thus it lost to a cheaper alternative (since there was hardly any performance difference to be gained).

IGPs present a "need" for higher memory bandwidth, which means a new memory technology that delivers it has a much better chance of surviving against a cheaper alternative this round. However, this "need" is mostly among low-end and mid-range consumers, so it can't be that much more expensive if it's to have a chance to take off.
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Rambus failed not because the "cartel flexed antitrust muscles", but because of outrageous prices.

RAMBUS didn't manufacture the memory. So who was to blame for the high cost? Who got execs put in jail and billions in total fines?
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
@ShintaiDK

Who's to blame? The people that designed it. Just like all GPUs don't cost the same amount to produce: it's about design. If one is bigger than another, it's usually faster, but it'll also cost more.

Rambus's design was probably just a lot more expensive to produce than DDR's was.
Who's to blame? The people that designed it, because design does impact production cost.

Anyway... the royalties charged by Rambus made it a LOT more expensive for consumers.
Again, who's to blame? Rambus; they were greedy (the benefit vs. cost ratio was too small for consumers to want it).

This was despite Intel backing it and throwing money at it.
In the end it doesn't matter how good a technology is if no one wants to buy it.
Price impacts sales, and despite what some think, consumers are not total fools.

A high price for very little practical use/result = failure on the consumer end.
This is the exact same reason why I think Thunderbolt will be a big flop, even though so many are throwing money at it.

USB already does all that matters on the consumer end; Thunderbolt is just useless extra cost (for the vast majority of consumers).
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Problem is, we didn't get to choose as consumers. Our hands were forced by the DRAM cartel.

And today we pay the price for it.

I know people hate RAMBUS because they are portrayed that way due to the lawsuits. But let's face it, the DRAM cartel was convicted of wrongdoing.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
@ShintaiDK

"And today we pay the price for it"

Huh? What? Then and now, we were SAVED from having to pay 5 times as much for RAM.


"Problem is, we didn't get to choose as consumers. Our hands were forced by the DRAM cartel."

The *only* thing that really impacted RDRAM's future was its price.
Consumers *DID* get to choose.

They chose to pay 1/5th the price for 95% of the performance of RDRAM.

You have a very different view on DRAM than I do. I see them as "heroes" that saved us a ton of money; you, for some reason, see them as villains.


I know people hate RAMBUS because they are portrayed that way due to the lawsuits.

Rambus are patent trolls that live on suing others for using any piece of technology that even "looks" like something they might once have had an idea of.

They're not part of the next new thing when it comes to memory... so they'll just stay patent trolls.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
The DRAM cartel did price fixing. And you never had a choice; they made that choice for you. So talking about how they saved you from spending more money is just plain silly nonsense. You paid more than you should have, and you got the inferior technology. Memory is the only place we still use a parallel bus for high-bandwidth connections.

They were convicted, they got fined, they got jailed.
 

Atreidin

Senior member
Mar 31, 2011
464
27
86
Rambus being awful and actual DRAM manufacturers being convicted of price fixing are completely different matters. One does not excuse the other.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
And Samsung. Yeah, this is a bad argument. HMC-style memories are the future, and everyone in the industry knows it.
Hynix has HBM (High Bandwidth Memory)... I'm pretty sure that regardless of what it's called, stacked memory is undoubtedly what we'll start to see take over computers as early as next year.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
And Samsung. Yeah, this is a bad argument. HMC-style memories are the future, and everyone in the industry knows it.

QFT!

Samsung = ~41% market share
Micron = ~25% market share

total = ~66% of the market.

Micron partnering with Samsung = big weight.

But wait... it doesn't stop there! Hynix is a supporter of the other two in this project.
Hynix has about ~24% market share.

= ~90% of total DRAM market share.

IBM, ARM, Microsoft, HP... and a bunch of others support it too.
Hybrid Memory Cube-like technology is the future, no doubt about it.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
HMC doesn't help if the bus won't get changed. That HMC is utterly fast within itself is fine, but that still won't change much with the current path fixed on DDR4 and the same 2x64-bit parallel buses. The question remains: when will that bottleneck be removed for iGPUs?

Even if they finally change the bus, it's not exactly a product around the corner. At best, for system memory, we're talking what, 2016-2017?

And looking at their prototypes: 8W for a 512MB module. Ouch, ouch.
One could worry it won't be system memory at all.
 
Last edited:

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
@ShintaiDK

We might see DDR4-4233 in 2014.
In "dual channel" that's about 68 GB/s.

So the answer is: sometime in 2014-2015 the bottleneck will probably be momentarily removed, until IGPs get much faster than current mid-range discrete GPUs.

The "idea" of an IGP that's faster than, say, a 5770 is still probably 2 years off from now.
 

sefsefsefsef

Senior member
Jun 21, 2007
218
1
71
HMC doesn't help if the bus won't get changed. That HMC is utterly fast within itself is fine, but that still won't change much with the current path fixed on DDR4 and the same 2x64-bit parallel buses. The question remains: when will that bottleneck be removed for iGPUs?

Even if they finally change the bus, it's not exactly a product around the corner. At best, for system memory, we're talking what, 2016-2017?

And looking at their prototypes: 8W for a 512MB module. Ouch, ouch.
One could worry it won't be system memory at all.

The whole point of HMC is to change the bus. Instead of having DRAM chips slowly drive bus wires at ~2Gbps, you have a logic layer underneath the DRAM layer(s) that drives the interconnect at >10Gbps. The prototype was built on 65nm (EDIT: 50nm) technology, which is obviously old. Capacities for production devices (which admittedly won't come out for some time), will probably be competitive with regular DDRx DIMMs.

EDIT: actually the DRAM in the prototype was built using 50nm, and the logic layer was built using 65nm. I admit I just pulled the 65nm number out of my memory, and it was wrong.
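To put rough numbers on the bus argument, here's a minimal sketch comparing a 64-bit parallel DDR3 channel against a narrow serial link, using the ~2 Gbps and >10 Gbps per-pin figures from the post above (the 16-lane link width is an illustrative assumption, not an HMC spec value):

```python
# Aggregate one-direction bandwidth of a set of lanes/pins, in GB/s.
def aggregate_gbs(lanes, gbps_per_lane):
    return lanes * gbps_per_lane / 8  # 8 bits per byte

ddr3_per_pin = 2   # ~2 Gb/s per pin on a parallel DDR3 bus (figure from the post)
hmc_per_lane = 10  # >10 Gb/s per serial lane driven by the HMC logic layer

# A 64-bit DDR3 channel vs. a hypothetical 16-lane serial link:
print(aggregate_gbs(64, ddr3_per_pin))  # 16.0 GB/s over 64 pins
print(aggregate_gbs(16, hmc_per_lane))  # 20.0 GB/s over just 16 lanes
```

The point being that faster per-pin signaling buys more bandwidth from far fewer pins, which is why changing the bus matters more than how fast the cube is internally.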
 
Last edited:

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Er, it's highly likely that both Kaveri and Haswell GT3 will surpass that.

Kaveri will have 384 GCN shaders with the same 14 GB/sec of bandwidth... and that is going to be faster than 800 VLIW5 shaders with 80 GB/sec bandwidth?
 
Last edited:

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
Kaveri will have 384 GCN shaders with the same 14 GB/sec of bandwidth... and that is going to be faster than 800 VLIW5 shaders with 80 GB/sec bandwidth?
Yes? If it ends up having access to L3 cache, which it likely will (4MB). If it doesn't end up passing the 5770, it'll be quite close. As the article in the OP states, Kaveri's GPU will have full memory coherence with the CPU.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
Kaveri will have 384 GCN shaders with the same 14 GB/sec of bandwidth... and that is going to be faster than 800 VLIW5 shaders with 80 GB/sec bandwidth?

Why will it only have 14 GB/s of bandwidth?

with AMD you have a few choices (in dual channel):
1) DDR3-1333 = 21.3 GB/s
2) DDR3-1600 = 25.6 GB/s
3) DDR3-1866 = 29.9 GB/s
4) DDR3-2133 = 34.1 GB/s
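Those figures all follow from the same formula: transfer rate x bus width x channels. A quick sketch to check them (DDR4-4233 included to show where the ~68 GB/s number earlier in the thread comes from):

```python
# Theoretical peak bandwidth in GB/s: MT/s x bytes per transfer x channels.
def peak_gbs(mt_per_s, channels=2, bus_bits=64):
    return mt_per_s * (bus_bits // 8) * channels / 1000

for name, rate in [("DDR3-1333", 1333), ("DDR3-1600", 1600),
                   ("DDR3-1866", 1866), ("DDR3-2133", 2133),
                   ("DDR4-4233", 4233)]:
    print(f"{name}: {peak_gbs(rate):.1f} GB/s")  # e.g. DDR3-1600: 25.6 GB/s
```

These are peak numbers for a 64-bit channel; real-world throughput, as measured below, is lower.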

Yes? If it ends up having access to L3 cache, which it likely will (4MB). If it doesn't end up passing the 5770, it'll be quite close. As the article in the OP states, Kaveri's GPU will have full memory coherence with the CPU.

You think the gains will be that big? o_O
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
Why will it only have 14 GB/s of bandwidth?

with AMD you have a few choices (in dual channel):
1) DDR3-1333 = 21.3 GB/s
2) DDR3-1600 = 25.6 GB/s
3) DDR3-1866 = 29.9 GB/s
4) DDR3-2133 = 34.1 GB/s



You think the gains will be that big? o_O

Those are peak GB/s numbers from memory to processor, including the GPU, so unless two slots of DDR3 are reserved for a separate bus to the GPU portion of the processor, actual memory-to-GPU bandwidth will indeed be notably less than the theoretical peak bandwidth for a particular dual-channel DDR3 specification.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
You think the gains will be that big? o_O
AMD's projections put it at over 1 TFLOP/s for the FPUs and GPU combined. Seeing as the HD 7750 is rated for 819 GFLOP/s and performs similarly to the HD 5770... yeah. The only thing of concern is the memory bandwidth, which could be addressed with stacked memory and L3. Stacked memory definitely has the potential to be out in 2013, but I'm thinking it'd be too expensive to use for the market that Kaveri will be targeting.
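The 819 GFLOP/s rating falls straight out of shader count x clock x 2 FLOPs per shader per cycle (multiply-add); a quick check using the HD 7750's stock configuration of 512 shaders at 800 MHz:

```python
# Peak single-precision throughput: shaders x clock x 2 FLOPs (multiply-add).
def peak_gflops(shaders, clock_mhz, flops_per_clock=2):
    return shaders * clock_mhz * flops_per_clock / 1000

print(peak_gflops(512, 800))  # 819.2 -- matches the HD 7750's rated 819 GFLOP/s
```

By the same math, 512 GCN shaders at around 1 GHz (a hypothetical Kaveri configuration, not a confirmed one) would give roughly 1024 GFLOP/s on their own, which makes the >1 TFLOP/s combined projection plausible.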
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Sandra and AIDA report 14 GB/s of throughput for Trinity. I'm not looking at the peak rating of the RAM module; I'm going by how their controller performs in real life. Kaveri is supposed to be FM2-compatible, so I can't imagine too many dramatic changes to the package if the interface is the same.

I definitely don't foresee any exotic memory or stacking in Kaveri but who knows.
 

Abwx

Lifer
Apr 2, 2011
11,885
4,873
136
Kaveri will have 384 GCN shaders with the same 14 GB/sec of bandwidth... and that is going to be faster than 800 VLIW5 shaders with 80 GB/sec bandwidth?

512 GCN shaders, and bandwidth will be used much more efficiently.

In Kaveri, hUMA takes away the hoops: the processor cores and integrated graphics have a shared address space, and they share both physical and virtual memory. Also, data is kept coherent between the CPU and IGP caches, so there are no cycles lost to synchronization like in current, NUMA-based solutions. All of this should translate into higher performance (and lower power utilization) in general-purpose GPU compute applications. Those applications tap into both the CPU cores and the IGP shaders and must pass data back and forth between them, which would require extra steps without hUMA. AMD said Kaveri's hUMA architecture has been implemented entirely in hardware, so it should support any operating systems and programming models. Virtualization is supported, as well.
[Images: huma-without.jpg, huma-with.jpg, huma-features.jpg]


http://techreport.com/news/24737/amd-sheds-light-on-kaveri-uniform-memory-architecture
 
Last edited: