Am I looking at the trends of video cards correctly?

kyrax12

Platinum Member
May 21, 2010
2,416
2
81
One of the best video cards in 2010 was the 5970, if I recall correctly, and it was priced around what the GTX 780 goes for this gen. Now a $250 GTX 760 wrecks it.

So going by that, in three years' time a $250 video card will be on par with, if not better than, the GTX Titan of today.

Right?
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Depends on the market at the time. While it's possible, and even likely, there's always the chance that we'll be dealing with inflated prices. We've already had $1000 single-GPU cards (and they are threatening to release another one) and inflated prices due to mining on 28nm.

Right now prices on the high end are still very inflated, unfortunately. That $500 780 is about the equivalent of the 560 Ti 448 within the current product stack, and that was a $230 card when it launched.
 
Last edited:

kyrax12

Platinum Member
May 21, 2010
2,416
2
81
Depends on the market at the time. While it's possible, and even likely, there's always the chance that we'll be dealing with inflated prices. We've already had $1000 single-GPU cards (and they are threatening to release another one) and inflated prices due to mining on 28nm.

Right now prices on the high end are still very inflated, unfortunately. That $500 780 is about the equivalent of the 560 Ti 448 within the current product stack, and that was a $230 card when it launched.
mining?
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106

Using the compute abilities of your GPU to process information and obtain cryptocoins (the most popular being bitcoins, but other "altcoins" are used as well). These coins are treated as a sort of virtual currency. For the past few years AMD's graphics cards have been significantly better at it than nVidia's, which pushes up their price, though the best way to mine cryptocoins is a dedicated ASIC box.
 

lamedude

Golden Member
Jan 14, 2011
1,222
45
91
The process used to take only a year, but that was when we got new cards every six months. Now they just rebrand them.
 

wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
A $400 MSRP 290 is pretty dang close to the overpriced $1K Titan.

The Titan was/is an overpriced joke. The 780 Ti is overpriced, as is the 290X, but they are getting away with it since the AMD cards sell out due to mining.

(91% vs. 94% in TechPowerUp's relative performance chart at 2560x1600)

http://www.techpowerup.com/reviews/Sapphire/R9_290X_Tri-X_OC/24.html

The overall point is true: the cards will be matched by mid-range alternatives in upcoming generations. Progress is slowing, imo, though, and they are jacking up prices significantly. Without mining I don't think the pricing would be sustainable.

The $100-and-below, perhaps $150-and-below, segments are really lacking in progress though. The mid-range cards are making steady progress.
http://www.thetechbuyersguru.com/VideoCardRankings.php
 

RaulF

Senior member
Jan 18, 2008
844
1
81
A $400 MSRP 290 is pretty dang close to the overpriced $1K Titan.

The Titan was/is an overpriced joke. The 780 Ti is overpriced, as is the 290X, but they are getting away with it since the AMD cards sell out due to mining.

(91% vs. 94% in TechPowerUp's relative performance chart at 2560x1600)

http://www.techpowerup.com/reviews/Sapphire/R9_290X_Tri-X_OC/24.html

The overall point is true: the cards will be matched by mid-range alternatives in upcoming generations. Progress is slowing, imo, though, and they are jacking up prices significantly. Without mining I don't think the pricing would be sustainable.

The $100-and-below, perhaps $150-and-below, segments are really lacking in progress though. The mid-range cards are making steady progress.
http://www.thetechbuyersguru.com/VideoCardRankings.php

At the prices they are charging for the 290/X cards nowadays, the Ti, I think, is the best bang for the buck.

Still king of the hill.
 

wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
At the prices they are charging for the 290/X cards nowadays, the Ti, I think, is the best bang for the buck.

Still king of the hill.

You can pick up 290/X cards for ~$450 (290) to $600 (290X) at Microcenter (if you watch for them), and at retail in most of the world.

So, if your only choice is gougeegg and you are spending $700ish, then of course the Ti is a good option vs. the gouged 290Xs on the egg. The statement you are making is not universally true though.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Yes. The problem currently in NA is that demand is so great they are selling out before they come in at MSRP. Go with a pre-order.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Graphics card performance jumps every time they put new designs onto a new silicon process. The process change allows the GPU to nearly double its transistor count, which in turn increases its performance. New process technology comes along about every 18-24 months, and has historically done so for about 40 years. The observation is that the number of transistors on a given area of silicon doubles every 18 to 24 months, which in GPU terms means significant climbs in performance with each jump the silicon process makes.

Right now the industry is moving to 20nm, away from the 28nm that all the cards on sale use. TSMC has already started producing chips on 20nm at volume, so now it's simply a matter of months before we see products based on it.

The other observation about the GPU market is that in between these two years of process technology they also produce another card, which we call the refresh or the rebrand depending on what it is. Typically these are a slight adjustment to the architecture to make the card a little faster based on what they learnt the first time around. This time we got significantly bigger chips with more features and a moderate amount more performance, but the 780 and 290 are just refreshes, small increments of change on the same process (28nm). The biggest jumps in performance come from the process change.
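As a rough sanity check of that cadence against the OP's three-year question, here's a minimal back-of-the-envelope sketch in Python; the 18-24 month doubling cadence (and the assumption that performance tracks transistor count) comes from the post above and is illustrative, not a guarantee:

```python
# Back-of-the-envelope: performance multiplier after a few years,
# assuming performance roughly doubles with each process generation.

def projected_multiplier(years: float, months_per_doubling: float) -> float:
    """Multiplier after `years`, given one doubling every `months_per_doubling` months."""
    doublings = years * 12 / months_per_doubling
    return 2 ** doublings

for cadence in (18, 24):
    print(f"3 years at a {cadence}-month cadence: ~{projected_multiplier(3, cadence):.1f}x")
# 3 years at an 18-month cadence: ~4.0x
# 3 years at a 24-month cadence: ~2.8x
```

That 2.8-4x range over three years is the kind of jump the OP's extrapolation leans on.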
 

Will Robinson

Golden Member
Dec 19, 2009
1,408
0
0
At the prices they are charging for the 290/X cards nowadays, the Ti, I think, is the best bang for the buck.

Still king of the hill.

Sorry... did you just name a 780 Ti as "best bang for the buck"? :eek:

That's a "no" from me.

How people keep comparing a card that's $150 more expensive than the R9 290X and calling it "a winner" confounds me.
If AMD had put another $150 worth of hardware into the 290X... think the 780 Ti would still be "King of the Hill"?

That's also a "no".
 

KingFatty

Diamond Member
Dec 29, 2010
3,034
1
81
The observation is that the number of transistors on a given area of silicon doubles every 18 to 24 months, which in GPU terms means significant climbs in performance with each jump the silicon process makes.

And (correct me if I'm wrong) the GPU is massively parallel, so its performance can be scaled up by throwing more transistors at it. So unlike CPUs (where you have to be clever, and there seems to be a slowdown in improvements despite more transistors), a GPU can just get more and more massively powerful as more transistors are added.

So it's very likely that video card performance will continue to get more impressive and scale with transistor increases, unlike CPUs, where performance seems to have plateaued a bit.
 

Mand

Senior member
Jan 13, 2014
664
0
0
Sorry... did you just name a 780 Ti as "best bang for the buck"? :eek:

That's a "no" from me.

How people keep comparing a card that's $150 more expensive than the R9 290X and calling it "a winner" confounds me.
If AMD had put another $150 worth of hardware into the 290X... think the 780 Ti would still be "King of the Hill"?

That's also a "no".

R9 290X: http://www.newegg.com/Product/Produc...CH&isdeptsrh=1

The only ones in stock are $750 or higher. Lowest is $700, but out of stock.

780 Ti: http://www.newegg.com/Product/Produc...=-1&isNodeId=1

Prices range from $700-$730.

So no, 780 Ti is not $150 more expensive than R9 290X. It may have started out that way, but it isn't anymore, and you can't ignore that.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
And (correct me if I'm wrong) the GPU is massively parallel, so its performance can be scaled up by throwing more transistors at it. So unlike CPUs (where you have to be clever, and there seems to be a slowdown in improvements despite more transistors), a GPU can just get more and more massively powerful as more transistors are added.

So it's very likely that video card performance will continue to get more impressive and scale with transistor increases, unlike CPUs, where performance seems to have plateaued a bit.

The limit of parallelism is somewhere around when the number of "cores" a GPU has equals the total pixels on screen. Today's processing model is parallel across pixels but runs a series of serial programs, producing lighting and HDR and finishing with anti-aliasing.

The limit of scaling with today's model is somewhere around the pixel count of the screen: once we have a GPU processor per pixel, compute performance can no longer grow without a change in the software interface and more parallelism in the core programming model. This is the same limit CPUs hit, and software still hasn't solved the issue; with GPUs it will likely be the same thing.

That means from today's 3000-core GPUs we can expect another 100x worth of cores to be added before scaling stops via this mechanism of just adding more simple cores. That is unfortunately just 7 doublings away, or some 14 years' worth of progress. They do have other avenues open to improve performance, like increasing the complexity of the cores to increase per-core performance, i.e. the same way Intel currently does with its CPUs.

So it's not infinite, but for now it's a decent enough way to get scaling, even if the fixed-function part of the core doesn't also increase due to memory bandwidth limits.
 

Mand

Senior member
Jan 13, 2014
664
0
0
The limit of parallelism is somewhere around when the number of "cores" a GPU has equals the total pixels on screen. Today's processing model is parallel across pixels but runs a series of serial programs, producing lighting and HDR and finishing with anti-aliasing.

The limit of scaling with today's model is somewhere around the pixel count of the screen: once we have a GPU processor per pixel, compute performance can no longer grow without a change in the software interface and more parallelism in the core programming model. This is the same limit CPUs hit, and software still hasn't solved the issue; with GPUs it will likely be the same thing.

That means from today's 3000-core GPUs we can expect another 100x worth of cores to be added before scaling stops via this mechanism of just adding more simple cores. That is unfortunately just 7 doublings away, or some 14 years' worth of progress. They do have other avenues open to improve performance, like increasing the complexity of the cores to increase per-core performance, i.e. the same way Intel currently does with its CPUs.

So it's not infinite, but for now it's a decent enough way to get scaling, even if the fixed-function part of the core doesn't also increase due to memory bandwidth limits.

Er

There are about two million pixels on 1080p, three and a half million on 1440p, and eight million on 4K. Getting as many cores as pixels is a factor of over a thousand for 1440p, which is ten doublings, with another doubling to go for 4K. Quite a far cry from 100x.

Given what's happened in the last twenty years ("4 MB VRAM, we're rockin' now!"), I don't think we can reasonably make predictions about the state of graphics processing twenty years from now.
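The arithmetic behind those doubling counts, as a minimal Python sketch; the ~3000-core figure is the one quoted from the earlier post:

```python
# How many core-count doublings until cores == pixels on screen,
# assuming ~3000 shader "cores" today (figure from the post above).
from math import log2

cores_today = 3000
resolutions = {"1080p": 1920 * 1080, "1440p": 2560 * 1440, "4K": 3840 * 2160}

for name, pixels in resolutions.items():
    ratio = pixels / cores_today
    print(f"{name}: {pixels:,} pixels -> ~{ratio:,.0f}x more cores, ~{log2(ratio):.1f} doublings")
# 1080p: 2,073,600 pixels -> ~691x more cores, ~9.4 doublings
# 1440p: 3,686,400 pixels -> ~1,229x more cores, ~10.3 doublings
# 4K: 8,294,400 pixels -> ~2,765x more cores, ~11.4 doublings
```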
 
Last edited:

R0H1T

Platinum Member
Jan 12, 2013
2,582
163
106
Er

There are about two million pixels on 1080p, three and a half million on 1440p, and eight million on 4K. Getting as many cores as pixels is a factor of over a thousand for 1440p, which is ten doublings, with another doubling to go for 4K. Quite a far cry from 100x.

Given what's happened in the last twenty years ("4 MB VRAM, we're rockin' now!"), I don't think we can reasonably make predictions about the state of graphics processing twenty years from now.
I take it you're referring to the 4TB-equipped, 10-petaflop quantum (or graphene-based) GPUs of the "not so distant" future :awe:

The thing is, physics (& the economics of node shrinks) is the single biggest hurdle to general-purpose computing that we face today, & eventually it'll probably force us to look for alternatives like C instead of the Si that we've been using for over half a century now!
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
You're absolutely right, it's 1000x (3000 x 1000 = 3 million or so, the right order even if the multiple differs) of what it is today; a miscalculation on my part. The principle remains the same: core count on the GPU = pixels on screen is the limit.

Incidentally, that only gives us 3 more doublings, or another 6 years (20 years total). Any amount of serial behaviour whatsoever will always bring parallelism to a stop, and this is one of those cases.

We could argue that in 20 years screen resolution will increase beyond 4K; I think that is very likely. However, it's estimated that we can cover the entire visual field with fidelity better than the eye can resolve at just 12-million-pixel screens. Maybe the real number is a little higher or lower, but screen pixel scaling won't bring benefits forever, and it's not even an extra order of magnitude; it's a mere 1-2 extra cycles of GPU progress consumed and nothing more.

There's definitely a finite limit to the current GPU parallelism trend, although it's a long way off yet.
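A quick check of that last claim, assuming the ~12-million-pixel visual-fidelity figure from the post above:

```python
# Extra core-count doublings bought by screens growing past 4K,
# if ~12M pixels (figure quoted above) saturates the visual field.
from math import log2

print(f"4K -> 12M pixels: ~{log2(12e6 / 8.3e6):.1f} extra doublings")    # ~0.5
print(f"1440p -> 12M pixels: ~{log2(12e6 / 3.7e6):.1f} extra doublings")  # ~1.7
```

So resolution growth beyond today's screens buys roughly one to two more doublings, matching the "1-2 extra cycles" above.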
 

gammaray

Senior member
Jul 30, 2006
859
17
81
You can pick up 290/X cards for ~$450 (290) to $600 (290X) at Microcenter (if you watch for them), and at retail in most of the world.

So, if your only choice is gougeegg and you are spending $700ish, then of course the Ti is a good option vs. the gouged 290Xs on the egg. The statement you are making is not universally true though.

Interesting. Can you show me a place in the world where an online shop beats NA prices on video cards?

thx.
 

Mand

Senior member
Jan 13, 2014
664
0
0
The thing is, physics (& the economics of node shrinks) is the single biggest hurdle to general-purpose computing that we face today, & eventually it'll probably force us to look for alternatives like C instead of the Si that we've been using for over half a century now!

Yep.

Eventually, you get to a point where electrons are no longer constrained by the metal wires and start to tunnel from one to the other, effectively sending current through insulating layers to places we would rather it not go. That's the brick wall in front of Moore's Law right now. And we're not far off: we're talking about things like 14nm and 10nm right now, and the spacing between atoms in most materials is around 0.4nm. There's not a whole lot more room to go down. Several years ago people demonstrated the ability to draw a metal wire atom by atom, and while that sounds great for shrinking circuits, things at that scale have much, much bigger quantum mechanical issues to deal with.

I feel confident saying that I don't think we have 10 doublings' worth of margin left with silicon transistors. We'll have to switch to a fundamentally different technology to keep Moore's Law going. Whether that's graphene circuits or a switch to optical computing (my personal bet), that day will come, and sooner than people probably think.
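For a sense of scale, a small sketch of the atoms-per-feature arithmetic, using the ~0.4nm atomic spacing quoted above; node names are marketing labels rather than exact physical dimensions, so treat this as order-of-magnitude only:

```python
# Roughly how many atoms span a feature at each process node,
# assuming ~0.4 nm atomic spacing (figure from the post above).
atomic_spacing_nm = 0.4

for node_nm in (28, 20, 14, 10):
    print(f"{node_nm} nm node: ~{node_nm / atomic_spacing_nm:.0f} atoms across")
# 28 nm node: ~70 atoms across
# 20 nm node: ~50 atoms across
# 14 nm node: ~35 atoms across
# 10 nm node: ~25 atoms across
```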
 
Last edited:

Yuriman

Diamond Member
Jun 25, 2004
5,530
141
106
The $100-and-below, perhaps $150-and-below, segments are really lacking in progress though. The mid-range cards are making steady progress.
http://www.thetechbuyersguru.com/VideoCardRankings.php

I was giving that some thought earlier today.

I want to say I picked up an HD4870 in (early?) 2009 for about $150, and it performs generally between an HD7750 and an HD7770 today. Five years hasn't done a whole lot for performance in that price bracket.