Wait for Pascal or upgrade to 980ti


MrTeal

Diamond Member
Dec 7, 2003
3,569
1,699
136
I am confused. 55nm to 28nm is a half-node + full-node (1.5 full node steps):

55nm -> 40nm (half node)
40nm -> 28nm (full node)

28nm -> 20/22nm (half node)
28nm -> 14/16nm (full node)
http://www.decryptedtech.com/leaks-...f-node-and-move-straight-to-16nm-for-new-gpus

How are you getting 2 steps? It's the same as moving from 40nm to 28nm (Fermi to Kepler), or 1 full node. Is it not?

TSMC's description also coincides with just a 1 full node shrink:

"TSMC's 16FF+ (FinFET Plus) technology can provide above 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology."

A 2-node step would imply roughly 4X the density, more than a 70% reduction in power usage, and more than a 65% increase in transistor switching speed. TSMC's ~2X density figure instead coincides with rumours that next-gen flagship chips will have 16-18B transistors, which is about double the existing GM200/Fiji chips.

1.5 node steps would be 28nm -> 10nm (14/16nm is full node + 1/2 node to 10nm), while 2 full node steps is 28nm -> 7nm.

---

GTX285 (55nm) had 1.4B transistors.
GTX780Ti (1st gen 28nm) had 7.1B (5X 285's)
GTX980Ti (28nm) had 8B (5.7X).

Granted, there will be other benefits such as HBM2/GDDR5X (higher memory bandwidth) and possible improvements in IPC/compute/DX12 capability via a newer architecture. I don't think NV will launch a consumer $699 card 80-100% faster than a 980Ti in 2016. Why would they when they can just split the generation into parts or first release a Titan X successor for $1K+?
TSMC got onto an alternate schedule when they went from 65nm to 55nm, but since then they've been keeping to a full-node cadence.

55nm to 40nm was a full node. 55/40 = 1.375 ≈ √2
40nm to 28nm was also a full node. 40/28 = 1.429 ≈ √2

28nm to 20nm was the next full node, but that didn't work out. 14nm would be the next full node after 20nm, so 28nm to 14nm would be two full nodes.

The difficulty arises when your full nodes no longer double your density, as we've seen with TSMC's "16nm" node. 28nm to 16/14nm should ideally give 4x the density, but we're not seeing that at all.
Edit: 22nm for Intel and 16nm for TSMC also introduced FinFETs, which kind of breaks traditional scaling.
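
If you want to play with those ratios, here's a rough back-of-the-envelope sketch (assuming idealized classical scaling, where every full node is a ~0.7x linear shrink that doubles density; as noted above, the "16nm" node doesn't actually deliver that):

```python
import math

def full_node_steps(old_nm, new_nm):
    """Ideal number of full-node steps implied by two node names."""
    # Classical scaling: each full node shrinks linear dimensions by 1/sqrt(2),
    # i.e. doubles transistor density.
    return math.log(old_nm / new_nm, math.sqrt(2))

for old, new in [(55, 40), (40, 28), (28, 20), (28, 16), (28, 14)]:
    steps = full_node_steps(old, new)
    print(f"{old}nm -> {new}nm: {old / new:.3f}x linear shrink, "
          f"~{steps:.2f} full nodes, ideal density gain {2 ** steps:.2f}x")
```

By that idealized math, 28nm -> 14nm is nominally two full nodes (4x density), while 28nm -> 16nm is only about 1.6; the real-world numbers fall short of both, which is the whole argument below.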
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
TSMC got onto an alternate schedule when they went from 65nm to 55nm, but since then they've been keeping to a full-node cadence.

55nm to 40nm was a full node. 55/40 = 1.375 ≈ √2
40nm to 28nm was also a full node. 40/28 = 1.429 ≈ √2

28nm to 20nm was the next full node, but that didn't work out. 14nm would be the next full node after 20nm, so 28nm to 14nm would be two full nodes.

But you forgot that TSMC's 16nm FinFET node is mostly marketing in name. It's not a true 14/16nm node, as it shares most of its characteristics with a true 22/20nm node. Therefore, I do not agree that moving from 28nm Maxwell to 16nm Pascal counts as 2 full nodes. 2 full nodes would mean 4X the area density of 28nm, but 16nm FinFET is nowhere near that -- rumors have it that next-gen GPUs will have 16-18B transistors, which is 'only' double today's flagship chips.

[Image: 14nm-2.png]


If you read the expected area scaling and power consumption improvements, they also point to an improvement roughly equivalent to a full node, not 2 full nodes. Just look at it from a transistor point of view. You yourself said that Fermi to Kepler (40nm to 28nm) = full node. Look at the transistor counts and die sizes for the GTX580 vs. GTX780Ti (2X the performance difference). Now you are saying the move from 28nm to 16nm FF is 2 full nodes, yet performance is expected to increase 2X, transistor density goes up 2X, and perf/watt goes up 2X. This isn't logical.

We can set aside node definitions for a minute because the node names no longer mean what they used to mean years ago. With the transition to the 45/40nm process node, some fabs opted to drop the "0.9x scaling" feature altogether and focus on optimizing a single lithographic design rule set. For example, TSMC's roadmap has followed the 40nm -> 28nm -> 20nm -> 14nm progression, a 0.7x full-node-like roadmap, starting with the old 40nm half-node. Perhaps the key measure is the contacted device and metal layer "pitches" chosen for the circuit libraries, as opposed to the minimum drawn gate length. Someone like IDontcare, who is an expert on this, would be very helpful here. As you mentioned, 16nm/14nm should provide 4X the density, but it won't, since the first wave of 16nm/14nm seems to be 16nm/14nm in marketing name only for TSMC/GloFo.

Look at the big picture. The move from Maxwell to Pascal should more closely mimic the move from Fermi (480/580) to Kepler (780Ti), or about 2X the performance from Titan X to its successor.
 
Last edited:
Mar 10, 2006
11,715
2,012
126
But you forgot that TSMC's 16nm FinFET node is mostly marketing in name. It's not a true 14/16nm node, as it shares most of its characteristics with 20nm. Therefore, I do not agree that moving from 28nm Maxwell to 16nm Pascal counts as 2 full nodes. 2 full nodes would mean 4X the area density of 28nm, but 16nm FinFET is nowhere near that -- rumors have it that next-gen GPUs will have 16-18B transistors, which is 'only' double today's flagship chips.

[Image: 14nm-2.png]

Russian, think of 16nm as the "true" full node jump from 28nm. 20nm brought 2x density but it wasn't that big of an improvement in xtor perf; 16nm doesn't bring a real density improvement over 20nm, but it brings the FinFET xtor goodness.

So yeah, it's more like a one-node jump in going from 28nm to 16FF+.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Russian, think of 16nm as the "true" full node jump from 28nm. 20nm brought 2x density but it wasn't that big of an improvement in xtor perf; 16nm doesn't bring a real density improvement over 20nm, but it brings the FinFET xtor goodness.

So yeah, it's more like a one-node jump in going from 28nm to 16FF+.

As long as we get 80-100% more performance per watt and more absolute performance, I would be more than satisfied. Some people here think we will only get 30-50% more performance, while the remaining 30-50% will only come with Volta in 2018. I think we'll get 80-100% with Pascal and another 50-70% with Volta in 2018-2019.

Based on my amateur tracking of rough GPU performance increases since September 2009, GPU performance roughly doubles every 3 years. R9 290X and 780Ti are at about 59-61% on this chart:

[Image: perfrel_2560_1440.png]


I am expecting the fastest consumer GPU of 2016 (not including the Titan X successor) to be roughly double the 290X/780Ti by November-December 2016. That would give us 118-122% on the chart vs. the 82% where the 980Ti sits.

Look at this too -- the $399 GTX770 2GB came out May 30, 2013. The R9 390/980 cost about $350-400 and they are 2X faster. The main reason it took so long was that AMD/NV were stuck on 28nm. With 16nm and HBM2, I expect GPU performance to increase exponentially over the next 3-4 years. By summer 2017, I expect a GPU 80-100% faster than the 980Ti.
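
As a rough sanity check on that doubling rate, here's a quick sketch just plugging the chart percentages quoted above into the "doubles every ~3 years" assumption (the numbers are the TPU relative-performance figures from this post, nothing more):

```python
# Rough sketch of the "performance doubles every ~3 years" extrapolation above.
DOUBLING_PERIOD_YEARS = 3.0

def projected(relative_perf, years_ahead):
    """Project a relative-performance figure forward assuming steady doubling."""
    return relative_perf * 2 ** (years_ahead / DOUBLING_PERIOD_YEARS)

# 290X / 780 Ti (late 2013) sit around 60% on the chart.
print(f"Late 2016 flagship: ~{projected(60, 3.0):.0f}%")  # ~120%, vs. 82% for the 980 Ti
# An 80-100% jump over the 980 Ti would land at:
print(f"80-100% over 980 Ti: {82 * 1.8:.0f}-{82 * 2.0:.0f}%")
```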
 

Shlong

Diamond Member
Mar 14, 2002
3,129
55
91
That system is still really good, except for the GPU. The system is wasting away while you keep that 570 and wait, IMO.

I was in a similar situation this summer (running a GTX 670) and got the cheapest GTX 970 I could find. That way I can enjoy games like Witcher 3, Fallout 4, Anno 2205 etc. today without feeling too bad if Pascal ends up beating everything else and costing $250 (unlikely).

However I play at 1080p. At 1440p, I'm not sure the GTX 970 would be enough. Maybe look into a cheap GTX 980, a heavily factory OC'd GTX 970 or something on the AMD side.


I have an i7 2600K @ 4.6GHz, 32GB DDR3 RAM, a 512GB Samsung 850 Pro, and a GTX 960 4GB. Newer games run fine at 1440p on medium settings. Older games like Counter-Strike: GO, StarCraft 2, Skyrim, League of Legends, and DOTA 2 can be run at max settings at 1440p.
 

Subwayeatbig

Member
Jan 4, 2006
112
0
0
Hi Guys,

Update! I ended up getting a Gigabyte GTX 980 Ti G1 Gaming and I have no regrets whatsoever. Everything looks so much nicer and smoother when playing some Battlefield and Battlefront. Not having to turn down settings is great. I do not have buyer's remorse. Thank you guys for the help!

Though I might have had buyer's remorse on the ASUS PG279Q, as many people were complaining about backlight bleed (BLB) issues. Not sure if I should try to get an exchange, or if my panel is not as bad as those of the others who have complained.

https://www.youtube.com/watch?v=pd7btnQH1IU
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Hi Guys,

Update! I ended up getting a Gigabyte GTX 980 Ti G1 Gaming and I have no regrets whatsoever. Everything looks so much nicer and smoother when playing some Battlefield and Battlefront. Not having to turn down settings is great. I do not have buyer's remorse. Thank you guys for the help!

Though I might have had buyer's remorse on the ASUS PG279Q, as many people were complaining about backlight bleed (BLB) issues. Not sure if I should try to get an exchange, or if my panel is not as bad as those of the others who have complained.

https://www.youtube.com/watch?v=pd7btnQH1IU

That isn't bad in my opinion. I've owned two Auria (who?) IPS panels so far. My first one had so much bleed-through I could walk around my room in the dark with it on haha. My second panel (the first one randomly died, MC warranty to the rescue) probably has only a little more bleed than yours.

There are some I've seen vids of where each corner is lit like 20% or so, like my first panel. EDIT: Same panel as yours; this one is almost as bad as my first panel haha.
https://www.youtube.com/watch?v=lg0DlTbB0Vw

Grats on the upgrades!
 

MrTeal

Diamond Member
Dec 7, 2003
3,569
1,699
136
But you forgot that TSMC's 16nm FinFET node is mostly marketing in name. It's not a true 14/16nm node, as it shares most of its characteristics with a true 22/20nm node. Therefore, I do not agree that moving from 28nm Maxwell to 16nm Pascal counts as 2 full nodes. 2 full nodes would mean 4X the area density of 28nm, but 16nm FinFET is nowhere near that -- rumors have it that next-gen GPUs will have 16-18B transistors, which is 'only' double today's flagship chips.

If you read the expected area scaling and power consumption improvements, they also point to an improvement roughly equivalent to a full node, not 2 full nodes. Just look at it from a transistor point of view. You yourself said that Fermi to Kepler (40nm to 28nm) = full node. Look at the transistor counts and die sizes for the GTX580 vs. GTX780Ti (2X the performance difference). Now you are saying the move from 28nm to 16nm FF is 2 full nodes, yet performance is expected to increase 2X, transistor density goes up 2X, and perf/watt goes up 2X. This isn't logical.

We can set aside node definitions for a minute because the node names no longer mean what they used to mean years ago. With the transition to the 45/40nm process node, some fabs opted to drop the "0.9x scaling" feature altogether and focus on optimizing a single lithographic design rule set. For example, TSMC's roadmap has followed the 40nm -> 28nm -> 20nm -> 14nm progression, a 0.7x full-node-like roadmap, starting with the old 40nm half-node. Perhaps the key measure is the contacted device and metal layer "pitches" chosen for the circuit libraries, as opposed to the minimum drawn gate length. Someone like IDontcare, who is an expert on this, would be very helpful here. As you mentioned, 16nm/14nm should provide 4X the density, but it won't, since the first wave of 16nm/14nm seems to be 16nm/14nm in marketing name only for TSMC/GloFo.

Look at the big picture. The move from Maxwell to Pascal should more closely mimic the move from Fermi (480/580) to Kepler (780Ti), or about 2X the performance from Titan X to its successor.

Sorry I missed this earlier. I didn't forget that 16nm isn't a true node jump in density; I mentioned it in the very next sentence that you didn't quote. I don't really think that's that controversial.

What I was pointing out (and probably should have made more explicit) is that 55->40nm was a full node, not a half node as you indicated. Tesla @ 55nm vs Kepler/Maxwell @ 28nm was a two node jump as Fallen Krell indicated, though he'll probably be disappointed by transistor scaling going from 28nm to 14nm. You can see it in the transistor scaling as well, where GT200 density was 3M/mm^2, and GK104 was 12M/mm^2.

It's going to be really interesting to see what kind of density increase we get between a very mature (geriatric?) 28nm process (13.3M/mm^2 in GM200) and Pascal when it launches.
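
For reference, those density figures fall out of the usual transistor counts divided by die area. A quick sketch (the die areas are the commonly quoted approximate figures, so treat the results as rough):

```python
# Back-of-the-envelope check of the density figures above: transistors / die area.
# Transistor counts are the commonly quoted ones; die areas are approximate.
chips = {
    "GT200-B (55nm)": (1.4e9, 470),   # ~1.4B transistors, ~470 mm^2
    "GK104 (28nm)":   (3.54e9, 294),  # ~3.54B transistors, ~294 mm^2
    "GM200 (28nm)":   (8.0e9, 601),   # ~8B transistors, ~601 mm^2
}

for name, (transistors, area_mm2) in chips.items():
    density = transistors / 1e6 / area_mm2
    print(f"{name}: {density:.1f} M transistors per mm^2")
```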
 

MrTeal

Diamond Member
Dec 7, 2003
3,569
1,699
136
As long as we get 80-100% more performance per watt and more absolute performance, I would be more than satisfied. Some people here think we will only get 30-50% more performance, while the remaining 30-50% will only come with Volta in 2018. I think we'll get 80-100% with Pascal and another 50-70% with Volta in 2018-2019.

Based on my amateur tracking of rough GPU performance increases since September 2009, GPU performance roughly doubles every 3 years. R9 290X and 780Ti are at about 59-61% on this chart:

I am expecting the fastest consumer GPU of 2016 (not including the Titan X successor) to be roughly double the 290X/780Ti by November-December 2016. That would give us 118-122% on the chart vs. the 82% where the 980Ti sits.

Look at this too -- the $399 GTX770 2GB came out May 30, 2013. The R9 390/980 cost about $350-400 and they are 2X faster. The main reason it took so long was that AMD/NV were stuck on 28nm. With 16nm and HBM2, I expect GPU performance to increase exponentially over the next 3-4 years. By summer 2017, I expect a GPU 80-100% faster than the 980Ti.

Really, the GTX770 (a rebadged GTX680) came out in March of 2012 and cost $500. :p That the 980 is so much faster is pretty amazing, and shows just how good Maxwell 2 is. The 980 has 47% more transistors than GK104 and 33% more cores, but performs much better than that.

Pure speculation and setting aside power: if we get a 300mm^2 die like the first 28nm product and a straight doubling of density (to 26.6M/mm^2), that would give a 7,980M transistor GP104. That's essentially the same number of transistors as GM200. Based on your TPU graph above, Maxwell 2 got a performance boost per transistor over Kepler. Even if Pascal is the same kind of leap over GM2 that GM2 was over GK, you'd be looking at GP104 being 30% faster than a 980Ti.

How fast everything scales up probably depends a lot on how the node develops and when nVidia launches GP100, but I'm not so confident we'll see 50% over a 980Ti from GP104. You'd need another great architectural leap as well as a pretty hefty die for a first crack at 16nm to get there. For reference, and ignoring clocks, GK104 (in the 680) had a 5% performance increase per transistor over GF110 (in the 580), while GF110 (in the 580) actually only had 73% of the performance per transistor of GT200-B3 (in the 285). This is based on TPU launch summaries of the new cards at 19x12 res. Another 30% increase per transistor for Pascal over Maxwell would be amazing, but that hasn't been the norm for the last decade.
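
Spelling out the arithmetic behind that speculation (all inputs are the assumptions from the post above, not known Pascal specs):

```python
# The speculative GP104 sizing arithmetic from the post above, spelled out.
die_area_mm2 = 300        # "a 300mm^2 die like the first 28nm product" (GK104-ish)
density_m_per_mm2 = 26.6  # straight doubling of GM200's ~13.3 M/mm^2

transistors_m = die_area_mm2 * density_m_per_mm2
print(f"Hypothetical GP104: {transistors_m:.0f}M transistors "
      f"(~{transistors_m / 1000:.1f}B, roughly GM200-sized)")
```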
 

Ares202

Senior member
Jun 3, 2007
331
0
71
I don't think NV will launch a consumer $699 card 80-100% faster than a 980Ti in 2016. Why would they when they can just split the generation into parts or first release a Titan X successor for $1K+?

You're right. I can't remember a time when any next-generation card was 100% faster; maybe after 2+ generations of 16nm FF.

*edit* In fact I can, if we look into the history books, but I'd bet my house we will get 50% or less this time.

http://www.anandtech.com/show/1314
 

Actaeon

Diamond Member
Dec 28, 2000
8,657
20
76
I posted this in another thread discussing Pascal and I thought I'd share it here since it is relevant to the thread. When using historical comparisons of 780 Ti/GK110 vs. 970/980/GM200 performance to estimate Pascal's performance, a couple of thoughts came to mind.

I wonder how much of that has to do with 3GB vs. 3.5GB/4GB of VRAM. They are similar in performance, but I would think the 780 Ti has to do more swapping, hurting potential performance. The raw silicon power of GK110 helps make up for it, so they kind of even out in the end. VRAM usage has gone up a bunch since the last generation due to the consoles, but it has leveled out a bit, so barring another console or similar disruption I don't think we'll see the same memory shortage going into the next generation.

One thing I haven't seen mentioned is that Maxwell ditched a lot of the compute hardware leaving the silicon specifically for gaming instead of being a dual purpose architecture like Kepler.

For GM200 NVIDIA’s path of choice has been to divorce graphics from high performance FP64 compute. Big Kepler was a graphics powerhouse in its own right, but it also spent quite a bit of die area on FP64 CUDA cores and some other compute-centric functionality. This allowed NVIDIA to use a single GPU across the entire spectrum – GeForce, Quadro, and Tesla – but it also meant that GK110 was a bit jack-of-all-trades. Consequently when faced with another round of 28nm chips and intent on spending their Maxwell power savings on more graphics resources (ala GM204), NVIDIA built a big graphics GPU. Big Maxwell is not the successor to Big Kepler, but rather it’s a really (really) big version of GM204.

GM200 is 601mm2 of graphics, and this is what makes it remarkable. There are no special compute features here that only Tesla and Quadro users will tap into (save perhaps ECC), rather it really is GM204 with 50% more GPU. This means we’re looking at the same SMMs as on GM204, featuring 128 FP32 CUDA cores per SMM, a 512Kbit register file, and just 4 FP64 ALUs per SMM, leading to a puny native FP64 rate of just 1/32. As a result, all of that space in GK110 occupied by FP64 ALUs and other compute hardware – and NVIDIA won’t reveal quite how much space that was – has been reinvested in FP32 ALUs and other graphics-centric hardware.

It’s this graphics “purification” that has enabled NVIDIA to improve their performance over GK110 by 50% without increasing power consumption and with only a moderate 50mm2 (9%) increase in die size. In fact in putting together GM200, NVIDIA has done something they haven’t done for years. The last flagship GPU from the company to dedicate this little space to FP64 was G80 – heart of the GeForce 8800GTX – which in fact didn’t have any FP64 hardware at all. In other words this is the “purest” flagship graphics GPU in 9 years.

http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/2

In other words, the smaller 970/980 was competitive with the big 780 Ti because NVIDIA was able to focus Maxwell's hardware on gaming instead of making it dual-purpose like Kepler. Nvidia has already shown Pascal to have a strong focus on compute (the 10x performance quote going around). So if they go back to having a dual-purpose gaming/compute card like Kepler, we may not see the performance increase people are hoping for. If they split the product lines between gaming and compute on the Pascal architecture then we could be in for a treat, but I haven't heard any hints that that would be happening.
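
For anyone curious, the per-SM FP64:FP32 ratios behind that "graphics purification" argument are easy to check. The GM200 figures are from the quoted article; GK110's 64 FP64 ALUs per 192-core SMX is the commonly cited Big Kepler figure, so treat that side as an assumption:

```python
# Per-SM FP64:FP32 ALU ratios for Big Maxwell vs. Big Kepler.
from fractions import Fraction

def native_fp64_rate(fp64_alus, fp32_alus):
    return Fraction(fp64_alus, fp32_alus)

print("GM200 SMM:", native_fp64_rate(4, 128))    # 1/32 native FP64 rate
print("GK110 SMX:", native_fp64_rate(64, 192))   # 1/3 native FP64 rate
```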

With all this said, I do expect the Pascal GPUs to be very competitive with Maxwell across the entire product line, including the 980 Ti. The performance we see in today's products should move down a segment.

980 Ti -> '1080'
980/970 -> '1060'
etc.
 

Pinstripe

Member
Jun 17, 2014
197
12
81
GTX 1080 would be a terrible naming scheme. Everybody would think "Lol 1080p where's my 4K GPU?".
 

moonbogg

Lifer
Jan 8, 2011
10,635
3,095
136
Hi Guys,

Update! I ended up getting a Gigabyte GTX 980 Ti G1 Gaming and I have no regrets whatsoever. Everything looks so much nicer and smoother when playing some Battlefield and Battlefront. Not having to turn down settings is great. I do not have buyer's remorse. Thank you guys for the help!

Though I might have had buyer's remorse on the ASUS PG279Q, as many people were complaining about backlight bleed (BLB) issues. Not sure if I should try to get an exchange, or if my panel is not as bad as those of the others who have complained.

https://www.youtube.com/watch?v=pd7btnQH1IU

Excellent choice! Now you're cooking with grease! Nice card and beautiful monitor. That level of glow and bleed is perfectly normal for a gaming IPS of this type. They all seem to have it and yours seems minimal compared to many others. Mine is about the same as yours. I can notice the glow in the lower right especially. Congrats.
 

Elcs

Diamond Member
Apr 27, 2002
6,278
6
81
I'm on a similar fence and tempted to take that leap now rather than later down the line... but I'm playing fewer FPS games due to increasingly bad motion sickness, and most games play pretty alright on my 3-4 year old GPU at 1440p DSR/VSR (preferable to 1080p on my 42" screen).

For me, logic is telling me not to buy that 980 (Ti) or Fury (X), that I can wait since my GPU is doing okay and there might be a good reason to this time next year... but I'm struggling against that seemingly near-irresistible allure of upgrading.

There never seems to be a good time to buy a CPU or GPU. Only bad times and really bad times :) No matter what hype the companies put out, no matter what information we're fed or find out about, we're never really going to know what the price is going to be or how fast it's going to be.

The 1080 TI, or whatever it'll be called, won't come out until 6 months after Pascal has dropped. That means it'll be here in Q1 of 2017.

For some amusing reason I just thought of the 1080Pi as being the next top Nvidia card :)
 

CakeMonster

Golden Member
Nov 22, 2012
1,392
501
136
I'm very happy that I got my 980 at the time I did. I played several large titles right after its release, and now there's nothing demanding I need to play, so there's no itch to scratch with the price-hiked 980Ti that I don't have to buy. I'm quite happy that I don't need to get a power-hungry, expensive card at this point. Yeah, it boils down to what you plan to play and when you plan to play it.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
GTX 1080 would be a terrible naming scheme. Everybody would think "Lol 1080p where's my 4K GPU?".

Why don't they just make it so you can always tell what generation/architecture the card is:

Kepler (K)
GT610/620/630/640/GTX650/650Ti/660/660Ti/670/680 would become:

K10/20/30/40/50/50Ti/60/60Ti/70/80

Then for refreshes, they could have called GTX770/780/780Ti as K75/K85/K80Ti or K90.

....

Maxwell (M) 950/960/970/980/980Ti would then be M50/60/70/80/80Ti or M90.

If they wanted to designate 1st vs. 2nd gen of an architecture, such as differentiating between the GTX750/750Ti (1st gen) and GTX950/960 (2nd gen), you'd just call the GTX750/750Ti M150/150Ti and the GTX950/960 M250/260.

The letter would always tell you what architecture you have, the 1st number would tell you the generation of that architecture, and the last 2 digits the ranking in the product stack, with Ti designating the flagship chip of that particular series.

So for Pascal you'd have:

GTX950 -> P50
GTX960 -> P60
GTX960Ti -> P60Ti (or just P65)
GTX970 -> P70
GTX980 -> P80
GTX980Ti -> P80Ti or P90

This way, no matter the generation, you'd always be able to tell at a glance: oh, OK, this is a Pascal-level chip, series 60, so about mid-way in the stack. In essence, the architecture letter designates the family of products, and the series is the standing within that family.
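
To spell the scheme out, a toy formatter might look like this (purely illustrative; the letters, optional generation digit, and tiers are just the ones proposed above):

```python
# Toy formatter for the proposed naming scheme.
def gpu_name(arch_letter, tier, arch_gen=None, ti=False):
    """arch_letter: K/M/P..., tier: 10-90 position in the stack,
    arch_gen: optional 1st/2nd-gen digit, ti: flagship cut of that chip."""
    gen = "" if arch_gen is None else str(arch_gen)
    return f"{arch_letter}{gen}{tier}{'Ti' if ti else ''}"

print(gpu_name("P", 80, ti=True))      # P80Ti  (Pascal flagship)
print(gpu_name("M", 50, arch_gen=2))   # M250   (2nd-gen Maxwell, 50-class)
print(gpu_name("K", 60, ti=True))      # K60Ti
```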
 
Last edited:

MrTeal

Diamond Member
Dec 7, 2003
3,569
1,699
136
That would never fly. The numbers always have to get bigger. 980 is better than 780, that's obvious. Is the M80 in this box better than my T80 or the K80 in that one over there? Who knows.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
That would never fly. The numbers always have to get bigger. 980 is better than 780, that's obvious. Is the M80 in this box better than my T80 or the K80 in that one over there? Who knows.

So

4600 > 480/580
6800 > 660/680
8800 > 780/780Ti/980/980Ti

The numbers don't mean anything and were always confusing. What's faster, a GTX580 or a GeForce 5800 Ultra? The only reason I know is because I remember what came first. To someone who doesn't know videocards, you can't say that a 580 beating a 5800 Ultra is a logical naming convention.

Worse yet, at least at first the generations made sense. GeForce 2-8 are all real new generations. GeForce 9 is fake because it's only an 8-series refresh/rebadge. GeForce 200 makes no sense since they skipped GeForce 10/100. GeForce 400 makes no sense since they skipped GTX300. GeForce 500 is a made-up generation since it's just a 400-series refresh. GeForce 700 is another fake generation because they split GeForce 600 into 2 parts (670/680 -> 780/780Ti are actually just GTX660/660Ti/670/680).

In other words, after GeForce 8 nothing they did with names makes any logical sense. They went full-blown marketing starting with GF9. My solution proposes an end to the marketing BS by designating each family with an architecture code name, followed by its series within that stack. Then all the consumer needs to do is look up what's newer: Maxwell, Kepler, or Pascal. So easy to understand.
 
Last edited:

Seba

Golden Member
Sep 17, 2000
1,485
139
106
One problem with that naming convention is that you cannot tell whether card J70 is newer or older than card L70, or how many series were in between, unless you already know that the J-series was released after the L-series and that there were also a Q-series and a T-series after the L-series and before the J-series.

Even if they used alphabetical order when picking the series names, it would still be easier with the current numbering system to get an idea of the release order and of how close or far apart two cards named 760 and 460 are, for instance.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
One problem with that naming convention is that you cannot tell whether card J70 is newer or older than card L70, or how many series were in between, unless you already know that the J-series was released after the L-series and that there were also a Q-series and a T-series after the L-series and before the J-series.

Even if they used alphabetical order when picking the series names, it would still be easier with the current numbering system to get an idea of the release order and of how close or far apart two cards named 760 and 460 are, for instance.

It's just as confusing to someone who doesn't know GPUs which is newer, a GeForce 8800GTX or a GTX780.

If they go up to GTX1080, then GTX2080, etc., they will eventually hit 4080. All of a sudden we have GeForce 4200/4400/4600 going up against GTX4060/4070/4080. It only makes sense to us because we follow GPUs. For anyone else, it makes no sense which is newer.

Also, what's newer, GeForce MX440 or GeForce GT430? GeForce 7600GT or GeForce GTX760?

Their current naming convention is ridiculous since the 750/750Ti are Maxwell, the 770 is Kepler, while the 950 is Maxwell, implying it's 2 full generations ahead of the 750/750Ti when it's just 2nd-gen Maxwell.

What was 2 full generations in the past? That was like going from GeForce 2 to GeForce 4 or GeForce 4 to GeForce 6. Today, it's made up marketing with 750/750Ti --> 950. That's logical?

Is 770 a new generation compared to the 680? No, it's just a 685. What's 580? That's just a 485. So if we are honest with each other, their names haven't made sense since GeForce 9.
 
Last edited: