I don't understand Nvidia's VRAM logic...

Beavermatic

Senior member
Oct 24, 2006
374
8
81
So I sit here looking at my Alienware 18 with SLI'd 780Ms (each with 4GB of VRAM) and my desktop with SLI'd 780 Tis (each with 3GB of VRAM), and I can't help but think that something is horribly bass-ackwards here. Going over the specs of the recently released 880M, it's just a 780M rebrand with double the VRAM. While it's clocked a bit faster, it apparently throttles like mad, causing equal or lesser performance than the 780M (not to mention a 780M with a custom vBIOS can hit the exact same clocks as the 880M). But that's not why I'm here. It's more about the 880M's VRAM... a whopping 8GB that makes even the Titan blush. Now, outside of a marketing gimmick, or an attempt to satisfy vendor demands for a yearly refresh and to justify a simple rebrand, I can't think of any other reason for 8GB of VRAM on that card.

I can understand the Titan's 6GB of VRAM... after all, let's be serious, those were more budget-workstation Quadro-series cards than anything and probably should never have been marketed as gamer cards. I had two Titans prior to my 780 Tis, and trust me when I say I never came close to hitting that 6GB limit (not even 3GB), and that was running max settings with a good amount of MSAA at 2560x1600 on every game I could throw at them.

I can barely understand the 780 Ti's 3GB of VRAM... sure, it's plenty (for now), but why they didn't bother with the extra 1GB to bring it to 4GB just as breathing room for Ultra HD resolutions is beyond me (though it's likely understandable: by the time Ultra HD is mainstream, the RAM in these cards won't make a difference, as their performance will be far surpassed by future cards, so this may in fact be self-explanatory).

But what I really don't get is the excessively high amount of VRAM in mobile cards. A 780M with 4GB? An 880M with 8GB? Serious overkill for the target resolution and the performance those cards are capable of.

You have to think... one of those cards strains at max settings in demanding games with any form of MSAA at just 1080p, and each is about half the power of its desktop equivalent. There's no way they'd be capable of 4K or Ultra HD gaming on a mobile GPU... 8GB of VRAM for these cards is a "but why?" head-scratcher, and even 4GB is kind of ridiculous for their actual performance and target resolution. To even start using MSAA in games like Crysis 3 or Metro: Last Light on mobile, you're going to need at least two 780Ms or 880Ms in SLI to get near a high-end desktop GPU's performance, and even then it still doesn't come as close as it should (playable, but nowhere near as good).

So I'm trying to figure out why dump that much VRAM into a mobile card when it would never reach resolutions or settings that come close to hitting that RAM limit to begin with. Unless someone can explain this awkward design decision to me, I'm gonna have to say I think it really is just a marketing gimmick, a way to jack up the price of the card knowing full well the extra VRAM will never make a difference to the audience.

discuss...
 
Feb 19, 2009
10,457
10
76
A weaker GPU with more VRAM than it can handle has always been a gimmick to charge more. Apparently there are users who see bigger numbers and assume it's better.
 

Beavermatic

Senior member
Oct 24, 2006
374
8
81
A weaker GPU with more VRAM than it can handle has always been a gimmick to charge more. Apparently there are users who see bigger numbers and assume it's better.

I guess I just don't understand why they even do it to begin with. Sure, you're going to catch sales from the handful of people who don't know any better and think power is in the numbers, but you could arguably sell a whole lot more by setting the VRAM at a more realistic amount and charging less.

Those 780Ms are $800 a card. The 880Ms are about $1100.

The only (literal) difference between the two is the VRAM, so that alone tells me you're paying almost an extra $300 for the additional VRAM, when 4GB is already overkill for the 780M.

Not to mention... for their price, they perform like a $300 desktop card as far as cores and shaders go. I understand it takes extra effort to design cards at a smaller scale and you can't pack as much performance into them, but couldn't they cut out the unnecessary RAM chips and spend that board space on more cores and shaders instead?
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
I can't understand the 780 Ti's 3GB of VRAM... sure, it's plenty, but why they didn't bother with the extra 1GB to bring it to 4GB just as breathing room for Ultra HD resolutions is beyond me

They probably don't want to go asymmetrical on a top-tier part like that. Because the memory bus is 384-bit, it's either 3GB or 6GB.
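Roughly, the arithmetic works out like this (a sketch with assumed chip densities; the 32-bit-channel, one-chip-per-channel layout and 2 Gbit / 4 Gbit GDDR5 densities are my assumptions about typical configurations of that era, not something stated in the thread):

```python
# Sketch: why a 384-bit GDDR5 bus lands on 3 GB or 6 GB.
# Assumptions (mine, not from the thread): 32-bit channels, one chip per
# channel, and 2 Gbit (256 MB) or 4 Gbit (512 MB) GDDR5 chip densities.

bus_width_bits = 384
channel_bits = 32
channels = bus_width_bits // channel_bits        # 12 channels

for chip_mb in (256, 512):                       # 2 Gbit vs 4 Gbit chips
    print(f"{channels} x {chip_mb} MB = {channels * chip_mb // 1024} GB")

# 12 x 256 MB = 3 GB
# 12 x 512 MB = 6 GB
# Hitting 4 GB on this bus would need an uneven (asymmetric) mix of chips.
```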
 

Beavermatic

Senior member
Oct 24, 2006
374
8
81
They probably don't want to go asymmetrical on a top-tier part like that. Because the memory bus is 384-bit, it's either 3GB or 6GB.

alright that makes a bit more sense, didn't think of it like that before.
 

Tristor

Senior member
Jul 25, 2007
314
0
71
They probably don't want to go asymmetrical on a top-tier part like that. Because the memory bus is 384-bit, it's either 3GB or 6GB.

I'm more curious why we have 384-bit buses rather than 512-bit buses on nVidia cards. AMD has 512-bit buses on their top-end cards. Perhaps the GPU just isn't capable of saturating it, but somehow I don't think that's the case.
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
I'm more curious why we have 384-bit buses rather than 512-bit buses on nVidia cards. AMD has 512-bit buses on their top-end cards. Perhaps the GPU just isn't capable of saturating it, but somehow I don't think that's the case.

AMD just went to 512-bit on their newest release. That doesn't preclude Nvidia from doing it in the future. The important number (other than the amount of VRAM) is the bandwidth; it doesn't matter if you get there with a wide bus or fast GDDR.

Nvidia's memory clocks are much higher than Hawaii's...
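A rough illustration of that point, using commonly quoted reference specs for Hawaii and the 780 Ti (my figures, not taken from the post):

```python
# Peak bandwidth = (bus width in bytes) * (effective data rate per pin).
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

# Reference-spec figures (assumed typical values):
print(bandwidth_gbs(512, 5.0))   # Hawaii / R9 290X: 320.0 GB/s (wide bus, slower GDDR5)
print(bandwidth_gbs(384, 7.0))   # GTX 780 Ti:       336.0 GB/s (narrower bus, faster GDDR5)
```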
 

24601

Golden Member
Jun 10, 2007
1,683
40
86
I'm more curious why we have 384-bit buses rather than 512-bit buses on nVidia cards. AMD has 512-bit buses on their top-end cards. Perhaps the GPU just isn't capable of saturating it, but somehow I don't think that's the case.

AMD/ATi went 512-bit because they fail at making IMCs all around.

The only real limit to the GK110 is its lack of sufficient ROPs.
 

DominionSeraph

Diamond Member
Jul 22, 2009
8,386
32
91
I'm more curious why we have 384-bit buses rather than 512-bit buses on nVidia cards. AMD has 512-bit buses on their top-end cards. Perhaps the GPU just isn't capable of saturating it, but somehow I don't think that's the case.

Cost.
Nvidia has done 512-bit before when memory speeds lagged behind (GTX 280/285), but a higher bus width means more silicon on the die, a bigger package, more traces, and more memory chips.

And no, they're not particularly memory bandwidth constrained. Do you think they're just leaving 33% performance on the table?

GTX 680 had a 256-bit bus with 6GHz RAM, giving 192GB/s.
GTX 780 Ti is 384-bit at 7GHz: 336GB/s.

Is the 780 Ti 75% faster than the GTX 680? No.
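Just to make the arithmetic explicit (same figures as quoted above):

```python
# Bus width (bytes) * effective data rate; matches the figures quoted above.
gtx_680   = 256 / 8 * 6.0    # 192.0 GB/s
gtx_780ti = 384 / 8 * 7.0    # 336.0 GB/s
print(gtx_780ti / gtx_680)   # 1.75 -> 75% more bandwidth, far more than the
                             # real-world performance gap between the two cards
```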
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
I'm not sure about the 780M, but in the past, most mobile parts with the same number were not based on the same desktop part. The 780M is more likely closely related to the GTX 770 than to the GTX 780. That is probably why it has 4GB instead of 3GB, as the 770 comes with either 2GB or 4GB, and they chose to use the 4GB version.
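The same chips-per-channel arithmetic as in the earlier sketch applies here, assuming the 780M really is a GK104-class, 256-bit part as this post suggests (channel layout and chip densities are again my assumptions):

```python
# GK104-class parts (GTX 770 / 780M, 256-bit bus) -> 8 channels of 32 bits.
channels = 256 // 32
print(channels * 256 // 1024, "GB")   # 2 GB with 2 Gbit chips
print(channels * 512 // 1024, "GB")   # 4 GB with 4 Gbit chips
```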
 

RaulF

Senior member
Jan 18, 2008
844
1
81
AMD/ATi went 512-bit because they fail at making IMCs all around.

The only real limit to the GK110 is its lack of sufficient ROPs.

So AMD put a 512 bus on new cards and was able to buy cheaper memory but provide more bandwidth!

But AMD can't make IMC!

Just want to make clear this is your opinion and not a fact.
 

R0H1T

Platinum Member
Jan 12, 2013
2,582
163
106
So AMD put a 512 bus on new cards and was able to buy cheaper memory but provide more bandwidth!

But AMD can't make IMC!

Just want to make clear this is your opinion and not a fact.
I think what he meant was that since AMD's CPU IMCs are so bad, he's extrapolating that their GPU memory controllers are just as bad, though I don't recall ATI being bad in that regard. Anyway, the point is that they supposedly make up for a poor IMC with a wider bus compared to Nvidia's top GPU, an opinion I don't agree with at all.
 

nenforcer

Golden Member
Aug 26, 2008
1,774
14
81
There are also things to consider such as the manufacturing and availability of GDDR5 memory chips, which are not cheap to produce and are in ever-decreasing supply due to the NAND / mobile push.

I can't even buy 2GB DDR3 chips of the DRAM brand I prefer since they have moved production entirely to 4GB and 8GB chips.

I'm guessing that when nVidia designed this thing as a "next" generation part, even though it's basically a rebrand, they needed to sell it as such and placed orders for the larger memory chips to last them until 2015 or whenever the Maxwell GTX 980M parts come out.

4GB of VRAM is enough for 4K resolution, but these things don't have the horsepower for gaming at that resolution. Just web surfing and watching 4K movies.
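For context, the raw display buffers at 4K are only a small fraction of that 4GB; it's textures and other game assets that actually eat VRAM. A rough sketch (the triple-buffered 32-bit color plus 32-bit depth layout here is purely an illustrative assumption):

```python
# Rough 4K framebuffer footprint (illustrative assumptions, not from the thread).
width, height       = 3840, 2160
bytes_per_pixel     = 4      # 32-bit color
color_buffers       = 3      # triple buffering
depth_stencil_bytes = 4      # 32-bit depth/stencil

total_mb = width * height * (bytes_per_pixel * color_buffers
                             + depth_stencil_bytes) / 2**20
print(f"{total_mb:.0f} MB")  # ~127 MB -- a tiny slice of 4 GB
```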
 

Pottuvoi

Senior member
Apr 16, 2012
416
2
81
I'm more curious why we have 384-bit buses rather than 512-bit buses on nVidia cards. AMD has 512-bit buses on their top-end cards. Perhaps the GPU just isn't capable of saturating it, but somehow I don't think that's the case.
Most GPUs are capable of saturating VRAM bandwidth when using simple shaders on a transparent surface, especially with a 64-bit framebuffer. (32 ROPs @ 1000MHz should be enough to saturate ~500GB/s of bandwidth, not counting possible compression.)
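Spelling out that estimate (a sketch; the read-plus-write-per-blended-pixel accounting is my interpretation of what is being counted):

```python
# 32 ROPs blending into a 64-bit (8-byte) framebuffer: each pixel costs a
# read plus a write, i.e. 16 bytes of memory traffic per pixel.
rops            = 32
rop_clock_hz    = 1.0e9     # ~1000 MHz
bytes_per_pixel = 16        # 8 B read + 8 B write for the blend

print(rops * rop_clock_hz * bytes_per_pixel / 1e9)   # 512.0 GB/s, roughly the ~500 GB/s quoted
```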

Volta, with its stacked RAM cubes sitting close to the GPU for very high bandwidth and lower latency, should give some nice advantages.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
I don't really understand AMD's VRAM logic either. According to gamegpu.ru's constant measurements, AMD cards seem to use less VRAM at the same settings. Yet alongside that, AMD seems to equip its cards with quite a bit more RAM than Nvidia does, and has done so for a few generations. Most of the data shows it sits idle doing nothing, as does much of what Nvidia equips their cards with. When both of these cards were designed there wasn't really any idea of when 4K would be ready, and it's not even universally true that 2GB of VRAM is too little; it falls short in a couple of very VRAM-heavy games.

One of the things that has puzzled me for a couple of years is how a 365mm² 7970 seems to lose to the 680, which is just 294mm². The smaller die with a narrower memory bus ultimately ends up beating it fairly consistently, and does so with less power consumption. I guess AMD's extra instructions that make mining faster might account for a small amount, as would the better 64-bit support, but is that enough to make up such a large difference?

There is a big difference in how these two companies approach the market. I don't think AMD's VRAM strategy is very balanced; they seem to be putting more onto their cards than they technically need to, and it's costing them power/heat and silicon space, both of which cost performance. So I guess I don't understand AMD's VRAM strategy either. Why the heck Nvidia puts so much on a laptop card is beyond me as well.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I guess I just don't understand why they even do it to begin with. Sure, you're going to catch sales from the handful of people who don't know any better and think power is in the numbers, but you could arguably sell a whole lot more by setting the VRAM at a more realistic amount and charging less.

Those 780Ms are $800 a card. The 880Ms are about $1100.

The only (literal) difference between the two is the VRAM, so that alone tells me you're paying almost an extra $300 for the additional VRAM, when 4GB is already overkill for the 780M.

Not to mention... for their price, they perform like a $300 desktop card as far as cores and shaders go. I understand it takes extra effort to design cards at a smaller scale and you can't pack as much performance into them, but couldn't they cut out the unnecessary RAM chips and spend that board space on more cores and shaders instead?

They did it specifically to be able to charge $1100. The extra 4GB of RAM doesn't cost them anywhere near as much as they are charging for it. It's purely to add perceived value for the masses that don't know better.

I'm more curious why we have 384-bit buses rather than 512-bit buses on nVidia cards. AMD has 512-bit buses on their top-end cards. Perhaps the GPU just isn't capable of saturating it, but somehow I don't think that's the case.

In AMD's case, the 512-bit bus (more accurately, two Pitcairn-style 256-bit buses) was smaller and cheaper than the 384-bit bus on Tahiti. Increased bandwidth, less die space, and cheaper RAM for the same or better performance: what's not to like?

AMD/ATi went 512-bit because they fail at making IMCs all around.

The only real limit to the GK110 is its lack of sufficient ROPs.

You obviously aren't familiar with the benefits of AMD's 512-bit bus. See my comments above.
 

Beavermatic

Senior member
Oct 24, 2006
374
8
81
They did it specifically to be able to charge $1100. The extra 4GB of RAM doesn't cost them anywhere near as much as they are charging for it. It's purely to add perceived value for the masses that don't know better.

That's what I'm having a hard time understanding. The people who buy such gaming laptops generally are not mainstream consumers who know little about the VRAM-to-performance ratio, especially at $3000+ price tags.

Let's be honest: at almost 14 lbs, over 2 inches deep, and over 18 inches wide, the Alienware 18 isn't really a laptop at that point, and battery life is secondary to the power adapter given the hardware's bloodthirsty appetite for juice. Nobody who doesn't know what they're doing or what it's used for would buy such a thing. Well, no normal person anyway.

I would think the majority of those who buy such portable gaming machines are enthusiasts on the go (college, work, engineers never in one place long enough to keep a gaming desktop) who need something close to desktop performance for gaming or design/media/compute work while mobile. Heck, what am I saying... even the Dell Precision mobile workstations for engineering and development are nothing in size compared to an Alienware M18/18.

But back to my point... most who would buy these (I would think) know better. You *could* say the people who buy them are richies who have no idea how hardware really works and just have too much money to care about investing the time and effort into building their own... but I think that applies more to desktops when it comes to custom gaming rigs. And it's not really a "status symbol" fad, like, say, a MacBook Pro would be. (I've actually had people with MacBook Pros ask me "what the 'eff is that?" when I have my Alienware out, and then try to argue that their MacBook Pros are infinitely better and more powerful because they're "not a PC"... so I know it's definitely not a fad-ster type of machine, and yes, I know they have no idea what they are talking about.) And like most, I would assume, I build my own desktop gaming machines... but I'm just too lazy to invest the time into building my mobile gaming rig, because it is a little more time consuming and I hate working with tiny laptop parts (sorry, Clevo fans).


So, again, who are they trying to fool? Sure, there are gonna be SOME who buy it off the sheer numbers alone, but that can't be the vast majority of their sales. I'm wondering if it's not the consumer they are trying to fool... as much as it is the manufacturers. I'm sure they are under contractual obligation to supply new cards every year or so to some of these PC manufacturers, and this is their way to justify a rebrand: dump more VRAM into it that won't make a difference either way, clock it a hair faster than the prior card, slap a new sticker on it, and say "here ya go, new card, see ya next year."

Of course, now that has me thinking about MacBook Pros, lol... which is the bigger gimmick: 4K screens on laptops, or 8GB of VRAM on the GPU in one? I'd sure love to see a MacBook Pro run Crysis 3 all maxed out at those Retina resolutions on its measly mobile GPU. Which, of course, is why we don't see such silly resolutions on actual gaming/engineering laptops... even SLI'd 880Ms would melt trying to push them in most games.
 

Gunbuster

Diamond Member
Oct 9, 1999
6,852
23
81
For laptops they are constrained by the thermal envelope and card packaging dimensions.

MOAR RAM is far easier than pushing through a new mobile video module standard and cooling.
 

TemjinGold

Diamond Member
Dec 16, 2006
3,050
65
91
OP: You may be surprised at how many tech illiterate people blow that kind of cash. Let's put it this way: If they weren't fooling anyone, they would've stopped doing it.
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
I really think the reason is that there are zero options for video card configuration in the mobile space. You either get the 780M/880M or you don't. The price of the VRAM is passed along and they make their margin. On the desktop side, you can easily shop around between both Nvidia and AMD and have a ton of options and configurations at your disposal. The only upside is that you get more than enough VRAM, which is a better spot to be in than being limited.
 

tareqjj

Member
Apr 25, 2011
88
0
61
A weaker GPU with more VRAM than it can handle has always been a gimmick to charge more. Apparently there are users who see bigger numbers and assume it's better.


Right on the money. I always thought exaggerated VRAM sizes were only there to inflate the prices of older or lower-end hardware.